Sample records for mixed-effects model analysis

  1. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

Structural identifiability is a concept that considers whether the structure of a model, together with a set of input-output relations, uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare, and contrast their application, the three methods are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper.
Although mixed-effects models are widely used, the development of structural identifiability techniques for them has received very little attention; the methods presented in this paper therefore provide a way of handling structural identifiability in mixed-effects models that was not previously possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
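As background for the identifiability question these records address, here is a minimal sketch of the classical (non-mixed-effects) Taylor-series approach the later record builds on, applied to a hypothetical one-compartment bolus model y(t) = (D/V)·exp(-k·t); the model and symbols are illustrative, not taken from the paper.

```python
# Classical Taylor-series identifiability check for a one-compartment
# bolus model y(t) = (D/V) * exp(-k*t): successive output derivatives
# at t = 0 form an exhaustive summary in the unknowns (V, k).
import sympy as sp

t, D, V, k = sp.symbols('t D V k', positive=True)
y = (D / V) * sp.exp(-k * t)

# Exhaustive summary: y(0) and y'(0) as functions of the parameters.
c0 = y.subs(t, 0)                      # D/V
c1 = sp.diff(y, t).subs(t, 0)          # -k*D/V

# Equate the summary for a second parameter set (D, the dose, is known);
# a unique solution V2 = V, k2 = k means global structural identifiability.
V2, k2 = sp.symbols('V2 k2', positive=True)
sol = sp.solve([sp.Eq(c0, c0.subs({V: V2, k: k2})),
                sp.Eq(c1, c1.subs({V: V2, k: k2}))], [V2, k2], dict=True)
print(sol)
```

The mixed-effects extensions in these papers ask the analogous question for the distribution of the outputs rather than a single trajectory.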

  2. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
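A minimal two-stage sketch in the GRAMMAR spirit, on simulated data (not the workshop pipeline or its kinship matrix): stage 1 absorbs relatedness with a family random intercept, stage 2 tests the variant against the decorrelated residuals. Family structure, effect sizes, and variable names are all illustrative assumptions.

```python
# GRAMMAR-style two-stage sketch: (1) null mixed model with a family
# random intercept to decorrelate the phenotype, (2) fast per-variant
# regression on the residuals. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_fam, fam_size = 60, 5
fam = np.repeat(np.arange(n_fam), fam_size)
snp = rng.binomial(2, 0.3, n_fam * fam_size)          # genotype 0/1/2
fam_eff = rng.normal(0, 1.0, n_fam)[fam]              # shared family effect
bp = 120 + 2.0 * snp + fam_eff + rng.normal(0, 1.0, fam.size)
df = pd.DataFrame({'bp': bp, 'snp': snp, 'fam': fam})

# Stage 1: null mixed model (no SNP) with family random intercept.
null_fit = smf.mixedlm('bp ~ 1', df, groups=df['fam']).fit()
df['resid'] = null_fit.resid

# Stage 2: simple regression of the residuals on the variant.
stage2 = smf.ols('resid ~ snp', df).fit()
print(stage2.params['snp'], stage2.pvalues['snp'])
```

As in GRAMMAR proper, the stage-2 effect estimate is somewhat attenuated relative to a joint fit; the gain is that stage 2 is cheap enough to repeat across many variants.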

  3. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    PubMed

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
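The structural idea in this record, an ODE dx/dt = A(t)x + B(t) with a flexible B(t), can be sketched numerically; the decay rate, the spline knots, and the "infusion" shape below are illustrative assumptions, and the penalized-spline estimation step is omitted.

```python
# Sketch of the semiparametric ODE idea: a linear state equation
# dx/dt = a*x + B(t), with the nonparametric term B(t) represented by
# a spline and the system integrated numerically.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

# Nonparametric "input" B(t): a cubic spline through a few points
# (hypothetical shape standing in for the estimated function).
knots = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
vals = np.array([0.0, 2.0, 1.5, 0.5, 0.0])
B = CubicSpline(knots, vals)

a = -0.7                                   # illustrative elimination rate

def rhs(t, x):
    return a * x + B(t)

sol = solve_ivp(rhs, (0.0, 8.0), [0.0], dense_output=True, rtol=1e-8)
conc = sol.sol(np.linspace(0.0, 8.0, 50))[0]
print(conc.max())
```

In the paper's setting the spline coefficients of B(t) are estimated jointly with the mixed-effects parameters; here they are fixed to make the state equation itself concrete.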

  4. MIXOR: a computer program for mixed-effects ordinal regression analysis.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-03-01

    MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
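The cumulative (proportional-odds) logit model that MIXOR estimates can be made concrete with a hand-rolled likelihood; for brevity this sketch omits the random effects and quadrature, fitting only the fixed-effects ordinal model on simulated data with one covariate. Cutpoints, slope, and sample size are illustrative.

```python
# Hand-rolled proportional-odds (cumulative logit) log-likelihood of
# the kind MIXOR maximizes, without random effects for brevity.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
# Latent model: eta = 1.2*x + logistic error; cutpoints -1 and 1
# partition eta into 3 ordinal categories 0/1/2.
eta = 1.2 * x + rng.logistic(size=n)
y = np.digitize(eta, [-1.0, 1.0])

def negloglik(theta):
    c1, dc, beta = theta                     # cutpoints c1 < c1 + exp(dc)
    cuts = np.array([c1, c1 + np.exp(dc)])
    # Cumulative probabilities P(Y <= k | x) under the logit link.
    cum = expit(cuts[None, :] - beta * x[:, None])
    p = np.column_stack([cum[:, 0], cum[:, 1] - cum[:, 0], 1.0 - cum[:, 1]])
    return -np.sum(np.log(p[np.arange(n), y] + 1e-12))

fit = minimize(negloglik, x0=np.zeros(3), method='BFGS')
c1, dc, beta = fit.x
print(c1, c1 + np.exp(dc), beta)
```

Adding a cluster-level random intercept inside the linear predictor, integrated out numerically, is exactly the step MIXOR's Fisher-scoring solution handles.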

  5. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.

    PubMed

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
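The random-effects pooling used in meta-analyses like this one can be sketched with the standard DerSimonian-Laird estimator; the per-study effect sizes and variances below are made up for illustration and are not the 63 studies analyzed here.

```python
# Standard DerSimonian-Laird random-effects pooling, on made-up data.
import numpy as np

d = np.array([0.9, 0.5, 1.1, 0.7, 0.4])        # per-study standardized d
v = np.array([0.05, 0.08, 0.04, 0.06, 0.09])   # per-study variances

w = 1.0 / v                                     # fixed-effect weights
d_fe = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fe) ** 2)                 # heterogeneity statistic
dfree = len(d) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - dfree) / c)                # between-study variance

w_re = 1.0 / (v + tau2)                         # random-effects weights
d_re = np.sum(w_re * d) / np.sum(w_re)          # pooled effect size
se_re = np.sqrt(1.0 / np.sum(w_re))
print(round(d_re, 3), round(se_re, 3), round(tau2, 3))
```

The moderator analyses in the record correspond to fitting this model separately by subgroup (or via meta-regression) and comparing the pooled estimates.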

  6. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    PubMed

    Janssen, Dirk P

    2012-03-01

Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed-model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
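The F(1)/F(2) workaround this article criticizes is easy to state in code: aggregate once over items (by-participant analysis) and once over participants (by-item analysis), then run two separate tests. The simulated design below (20 participants, 16 items, 2 conditions, crossed random effects) is illustrative.

```python
# The classical F1/F2 two-analysis workaround, on simulated data with
# crossed participant and item random effects.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
subs, items = 20, 16
sub_eff = rng.normal(0, 30, subs)                # participant effects
item_eff = rng.normal(0, 20, items)              # item effects
rows = [{'sub': s, 'item': i, 'cond': i % 2,
         'rt': 500 + 30 * (i % 2) + sub_eff[s] + item_eff[i]
               + rng.normal(0, 40)}
        for s in range(subs) for i in range(items)]
df = pd.DataFrame(rows)

# F1: by-participant means per condition (paired across conditions).
f1 = df.groupby(['sub', 'cond'])['rt'].mean().unstack()
t1, p1 = stats.ttest_rel(f1[1], f1[0])

# F2: by-item means (each item sits in one condition, so unpaired).
f2 = df.groupby(['item', 'cond'])['rt'].mean().reset_index()
t2, p2 = stats.ttest_ind(f2.loc[f2.cond == 1, 'rt'],
                         f2.loc[f2.cond == 0, 'rt'])
print(p1, p2)
```

A single mixed model with crossed random effects for `sub` and `item` (the article's recommendation) replaces both aggregations and uses all trials at once.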

  7. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Then, correlation structures including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe the heteroscedasticity while building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function significantly improved the fitting effect of the branch angle mixed model, whereas the CF2 function significantly improved the fitting effect of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect model could improve the precision of prediction, as compared with the traditional regression model, for the branch size prediction of Pinus koraiensis plantation.
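The two within-tree correlation structures compared in this record are easy to write down explicitly; the dimensions and correlation values below are illustrative.

```python
# The AR(1) and compound-symmetry (CS) correlation structures compared
# in the record: AR(1) decays with lag, CS is constant off the diagonal.
import numpy as np

def ar1_corr(n, rho):
    # corr(e_i, e_j) = rho**|i-j|
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def cs_corr(n, rho):
    # constant correlation rho between any two residuals in a tree
    return np.full((n, n), rho) + (1.0 - rho) * np.eye(n)

R = ar1_corr(4, 0.5)
S = cs_corr(4, 0.3)
print(R)
print(S)
```

Software such as SAS PROC MIXED (used in the record) plugs one of these matrices into the residual covariance, scaled by a variance, and compares fits by information criteria.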

9. Valid statistical approaches for analyzing Sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
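The record's core claim, that OLS on clustered data understates standard errors while a mixed model does not, can be demonstrated on simulated data in the same spirit (several "neurons" per "animal", a per-animal treatment); the group sizes and variances are illustrative.

```python
# Simulated clustered data echoing the Sholl comparison: OLS ignores
# the intra-class correlation and reports an optimistic standard error;
# a random-intercept mixed model does not.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
animals, cells = 10, 8
animal = np.repeat(np.arange(animals), cells)
group = (animal < animals // 2).astype(int)        # treatment per animal
animal_eff = rng.normal(0, 2.0, animals)[animal]   # intra-class correlation
y = 10 + 1.0 * group + animal_eff + rng.normal(0, 1.0, animal.size)
df = pd.DataFrame({'y': y, 'group': group, 'animal': animal})

ols = smf.ols('y ~ group', df).fit()
mix = smf.mixedlm('y ~ group', df, groups=df['animal']).fit()
print(ols.bse['group'], mix.bse['group'])
```

Because the treatment varies only between animals, the effective sample size is the number of animals, not the number of neurons, which is exactly what the mixed-model standard error reflects.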

  10. Mixed models and reduced/selective integration displacement models for nonlinear analysis of curved beams

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1981-01-01

    Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.

  11. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    PubMed Central

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805

  12. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data, and it makes the argument for moving to vine copula random-effects models, especially because of their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite being three-dimensional.

  13. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models. The discussion also highlights the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
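The centering, scaling, and full-rank coding advice in this record can be shown in a few lines; the variable names and toy values are illustrative.

```python
# The paper's preprocessing advice made concrete: standardize continuous
# predictors (better-conditioned X'X, faster convergence) and use
# full-rank reference-cell coding for factors (k-1 dummies per factor).
import pandas as pd

df = pd.DataFrame({'age': [11.0, 12.5, 14.0, 12.0, 13.5],
                   'arm': ['ctrl', 'treat', 'treat', 'ctrl', 'treat']})

# Center and scale to mean 0, SD 1.
df['age_z'] = (df['age'] - df['age'].mean()) / df['age'].std()

# Full-rank dummy coding: drop the reference level.
X = pd.get_dummies(df, columns=['arm'], drop_first=True)
print(X)
```

The same design matrix can then be handed to any mixed-model fitter; the point of the recommendation is numerical, not statistical, so the fitted means are unchanged.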

  14. Finite element modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Andersen, C. M.

    1983-01-01

    Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Reduction methods for large-scale nonlinear analysis, with particular emphasis on treatment of combined loads, displacement-dependent and nonconservative loadings; development of simple and efficient mixed finite element models for shell analysis, identification of equivalent mixed and purely displacement models, and determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems, based on a total Lagrangian description of the deformation are included.

  15. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and the random effects turn into free fixed effects; the model can be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
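A hedged plug-in sketch for the random-intercept special case: take the proportion of conditional variance attributable to the random effect as sigma_b^2 / (sigma_b^2 + sigma_e^2), computed from fitted variance components. Treat this reduction as an assumption of the sketch, not the paper's general formula, and the simulated data as illustrative.

```python
# Plug-in R_r^2 for a random-intercept model from fitted variance
# components (sketch assumption: sigma_b^2 / (sigma_b^2 + sigma_e^2)).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
groups = np.repeat(np.arange(30), 6)
b = rng.normal(0, 1.5, 30)[groups]               # random intercepts
y = 5 + b + rng.normal(0, 1.0, groups.size)
df = pd.DataFrame({'y': y, 'g': groups})

fit = smf.mixedlm('y ~ 1', df, groups=df['g']).fit()
sigma_b2 = float(fit.cov_re.iloc[0, 0])          # random-intercept variance
sigma_e2 = float(fit.scale)                      # residual variance
r2_r = sigma_b2 / (sigma_b2 + sigma_e2)
print(round(r2_r, 2))
```

With the simulation's true variances (2.25 and 1.0) the ratio should land near 0.7: far from 0, so the random effects clearly earn their place in the model.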

  16. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
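A minimal version of the constrained-spline idea in this record: a truncated-power design matrix for a quadratic spline with one knot, continuous with a continuous first derivative at the knot by construction. The knot location, degree, and coefficients are illustrative, and the random-effects side of the model is omitted.

```python
# Truncated-power basis for a quadratic regression spline with one knot;
# the (t - knot)^2_+ column adds curvature after the knot without
# breaking C^1 continuity there.
import numpy as np

def quad_spline_design(t, knot):
    # Columns: 1, t, t^2, (t - knot)^2_+
    tp = np.clip(t - knot, 0.0, None) ** 2
    return np.column_stack([np.ones_like(t), t, t ** 2, tp])

t = np.linspace(0.0, 10.0, 101)
X = quad_spline_design(t, knot=4.0)
beta = np.array([1.0, 0.5, -0.05, 0.08])     # illustrative coefficients
f = X @ beta
print(f[:3])
```

The paper's reparameterization serves exactly this purpose for mixed models: it converts side conditions at the knots into a basis (like the one above) in which the constraints hold automatically, so standard mixed-model software can fit the spline.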

  17. Multilevel mixed effects parametric survival models using adaptive Gauss-Hermite quadrature with application to recurrent events and individual participant data meta-analysis.

    PubMed

    Crowther, Michael J; Look, Maxime P; Riley, Richard D

    2014-09-28

    Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
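The Gauss-Hermite quadrature at the heart of this record integrates a function of a normal random effect against its density; a short sketch with a change of variables, checked against the closed form E[exp(b)] = exp(sigma^2/2) for b ~ N(0, sigma^2). The node count and sigma are illustrative, and the adaptive recentering step is omitted.

```python
# Gauss-Hermite approximation of E[f(b)] for b ~ N(0, sigma^2), via the
# substitution b = sqrt(2)*sigma*x against the Hermite weight exp(-x^2).
import numpy as np

def gh_expectation(f, sigma, n=20):
    x, w = np.polynomial.hermite.hermgauss(n)    # nodes/weights for e^{-x^2}
    return np.sum(w * f(np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

sigma = 0.8
approx = gh_expectation(np.exp, sigma)
exact = np.exp(sigma ** 2 / 2)
print(approx, exact)
```

In the survival setting, f is the conditional likelihood contribution of a cluster given its frailty; the adaptive variant recenters and rescales the nodes around the conditional mode, which is why it needs far fewer nodes.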

  18. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. The value of [Formula: see text] apart from 0 indicates the evidence of the variance reduction in support of the mixed model. If random effects coefficient of determination is close to 1 the variance of random effects is very large and random effects turn into free fixed effects-the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.

  19. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

    This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involves time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
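For orientation on the equation class this record treats: a plain explicit finite-difference sketch of the hyperbolic (Cattaneo) heat equation tau*u_tt + u_t = alpha*u_xx, which propagates disturbances at the finite speed sqrt(alpha/tau). The parameters and this simple scheme are illustrative assumptions, not the paper's mixed implicit-explicit alpha method.

```python
# Explicit scheme for tau*u_tt + u_t = alpha*u_xx with a CFL-limited
# time step; central differences in time and space.
import numpy as np

tau, alpha = 1.0, 1.0
nx, dx = 201, 0.01
c = np.sqrt(alpha / tau)                 # finite propagation speed
dt = 0.5 * dx / c                        # CFL-limited time step

u = np.zeros(nx)
u[nx // 2] = 1.0                         # initial thermal pulse
u_prev = u.copy()                        # start from rest

for _ in range(200):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    # tau*(u_next - 2u + u_prev)/dt^2 + (u_next - u_prev)/(2dt) = alpha*lap
    a = tau / dt ** 2 + 1.0 / (2.0 * dt)
    rhs = alpha * lap + tau * (2 * u - u_prev) / dt ** 2 + u_prev / (2.0 * dt)
    u_prev, u = u, rhs / a
print(np.abs(u).max())
```

The oscillations such simple schemes produce near sharp wavefronts are precisely the behavior the paper's alpha-method damping is designed to control.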

  20. Mixing with applications to inertial-confinement-fusion implosions

    NASA Astrophysics Data System (ADS)

    Rana, V.; Lim, H.; Melvin, J.; Glimm, J.; Cheng, B.; Sharp, D. H.

    2017-01-01

    Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.
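A generic buoyancy-drag mixing-zone model of the type referenced can be integrated directly; the particular form dh/dt = v, dv/dt = A*g - Cd*v|v|/h, and all coefficient values (Atwood number A, drag coefficient Cd, acceleration g) are illustrative assumptions, not the paper's calibrated model.

```python
# Generic 1D buoyancy-drag mixing-zone edge model, integrated with
# scipy: h is the edge position, v its velocity.
import numpy as np
from scipy.integrate import solve_ivp

A, g, Cd = 0.5, 1.0, 3.0                 # illustrative coefficients

def rhs(t, y):
    h, v = y
    return [v, A * g - Cd * v * abs(v) / h]

sol = solve_ivp(rhs, (0.0, 10.0), [1e-3, 1e-3], rtol=1e-8, atol=1e-10)
h, v = sol.y
print(h[-1], v[-1])
```

At late times the solution approaches the familiar self-similar growth h ~ t^2, with the growth constant set by the phenomenological drag coefficient, the quantity the record says the plasma transport parameters feed into.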

  1. Mixing with applications to inertial-confinement-fusion implosions.

    PubMed

    Rana, V; Lim, H; Melvin, J; Glimm, J; Cheng, B; Sharp, D H

    2017-01-01

    Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.

  2. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    PubMed

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
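
    A small simulation (hypothetical data, not the study's) illustrates why modelling inter-eye correlation sharpens the between-eye comparison: when both eyes share a person-level component, the paired (correlation-aware) standard error of the mean difference is much smaller than the naive unpaired one.

```python
# Hypothetical paired-eye data: a shared per-person component induces
# inter-eye correlation; effect size and variances are made up.
import random
import statistics as st

random.seed(0)
n = 200
effect = 0.15  # assumed between-eye difference, in dioptres
pairs = []
for _ in range(n):
    person = random.gauss(0, 1.0)            # shared between-person component
    fellow = person + random.gauss(0, 0.3)   # fellow eye
    cnv = person + effect + random.gauss(0, 0.3)  # CNV eye
    pairs.append((cnv, fellow))

diffs = [c - f for c, f in pairs]
# Paired analysis (accounts for inter-eye correlation):
se_paired = st.stdev(diffs) / n ** 0.5
# Naive unpaired analysis (treats 2n eyes as independent):
cnv_vals = [c for c, _ in pairs]
fel_vals = [f for _, f in pairs]
se_unpaired = (st.variance(cnv_vals) / n + st.variance(fel_vals) / n) ** 0.5
```

    The paired standard error drops because the between-person variance cancels in the within-person difference, mirroring the narrower CI reported for the mixed-effects and marginal models.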

  3. Effects of imperfect mixing on low-density polyethylene reactor dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, C.M.; Dihora, J.O.; Ray, W.H.

    1998-07-01

    Earlier work considered the effect of feed conditions and controller configuration on the runaway behavior of LDPE autoclave reactors assuming a perfectly mixed reactor. This study provides additional insight on the dynamics of such reactors by using an imperfectly mixed reactor model and bifurcation analysis to show the changes in the stability region when there is imperfect macroscale mixing. The presence of imperfect mixing substantially increases the range of stable operation of the reactor and makes the process much easier to control than for a perfectly mixed reactor. The results of model analysis and simulations are used to identify some of the conditions that lead to unstable reactor behavior and to suggest ways to avoid reactor runaway or reactor extinction during grade transitions and other process operation disturbances.

  4. The use of mixed effects ANCOVA to characterize vehicle emission profiles

    DOT National Transportation Integrated Search

    2000-09-01

    A mixed effects analysis of covariance model to characterize mileage dependent emissions profiles for any given group of vehicles having a common model design is used in this paper. These types of evaluations are used by the U.S. Environmental Protec...

  5. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, Ye

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze, and subsequently model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  6. MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-05-01

    MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
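
    The AR(1) error structure MIXREG supports can be sketched as follows: residuals follow e_t = φ·e_{t-1} + w_t, so adjacent within-subject errors are correlated. The coefficient value and series length below are arbitrary choices for illustration, not MIXREG defaults.

```python
# Simulate AR(1) errors e_t = phi*e_{t-1} + w_t and recover the lag-1
# autocorrelation empirically; phi = 0.6 is an arbitrary illustration.
import random

random.seed(1)
phi, n = 0.6, 20000
e = [random.gauss(0, 1)]
for _ in range(n - 1):
    e.append(phi * e[-1] + random.gauss(0, 1))

mean = sum(e) / n
lag1 = sum((e[t] - mean) * (e[t - 1] - mean) for t in range(1, n)) \
    / sum((x - mean) ** 2 for x in e)   # sample lag-1 autocorrelation
```

    The estimated lag-1 autocorrelation lands close to φ; in an MRM fit, this dependence is estimated jointly with the regression and random-effect parameters rather than computed directly from raw residuals.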

  7. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  8. Using Mixed-Effects Structural Equation Models to Study Student Academic Development.

    ERIC Educational Resources Information Center

    Pike, Gary R.

    1992-01-01

    A study at the University of Tennessee Knoxville used mixed-effect structural equation models incorporating latent variables as an alternative to conventional methods of analyzing college students' (n=722) first-year-to-senior academic gains. Results indicate, contrary to previous analysis, that coursework and student characteristics interact to…

  9. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  10. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data

    PubMed Central

    Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-01-01

    Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741

  11. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  12. A mixed-effects model approach for the statistical analysis of vocal fold viscoelastic shear properties.

    PubMed

    Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei

    2017-11-01

    A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies in the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
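
    For contrast with the likelihood-based mixed-effects approaches compared above, the moment-based inverse-variance weighted fixed-effect estimator reduces to a few lines. The study effect sizes and variances below are made-up illustrations.

```python
# Inverse-variance weighted fixed-effect meta-analysis; the study-level
# effect sizes (e.g. log odds ratios) and variances are hypothetical.
effects = [0.20, 0.35, 0.10, 0.28]
variances = [0.04, 0.09, 0.02, 0.06]

weights = [1.0 / v for v in variances]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
se_pooled = (1.0 / sum(weights)) ** 0.5   # SE of the pooled estimate
```

    With rare binary outcomes, the study-level variances feeding these weights are themselves poorly estimated, which is one source of the accuracy problems the article describes.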

  14. Decision-case mix model for analyzing variation in cesarean rates.

    PubMed

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  15. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    PubMed

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
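
    The Box-Cox transformation underlying this model, and the back-transformation that yields a model median on the original scale (the transform is monotone, so medians map through it), can be sketched as:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive observation y (lam = 0 gives log)."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_box_cox(z, lam):
    """Back-transform. Because the transform is monotone, applying this to
    the model median (= mean under normality) on the transformed scale
    gives the model median on the original scale."""
    return math.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)
```

    This monotonicity is what makes the model-median difference between treatment groups an interpretable effect measure on the original scale.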

  16. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    NASA Astrophysics Data System (ADS)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  17. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui).

    PubMed

    Magezi, David A

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).

  18. Correcting for population structure and kinship using the linear mixed model: theory and extensions.

    PubMed

    Hoffman, Gabriel E

    2013-01-01

    Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.

  19. Analysis and Modeling of a Two-Phase Jet Pump of a Thermal Management System for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Sherif, S.A.; Hunt, P. L.; Holladay, J. B.; Lear, W. E.; Steadham, J. M.

    1998-01-01

    Jet pumps are devices capable of pumping fluids to a higher pressure by inducing the motion of a secondary fluid employing a high speed primary fluid. The main components of a jet pump are a primary nozzle, secondary fluid injectors, a mixing chamber, a throat, and a diffuser. The work described in this paper models the flow of a two-phase primary fluid inducing a secondary liquid (saturated or subcooled) injected into the jet pump mixing chamber. The model is capable of accounting for phase transformations due to compression, expansion, and mixing. The model is also capable of incorporating the effects of the temperature and pressure dependency in the analysis. The approach adopted utilizes an isentropic constant pressure mixing in the mixing chamber and at times employs iterative techniques to determine the flow conditions in the different parts of the jet pump.

  20. Mixed Models and Reduction Techniques for Large-Rotation, Nonlinear Analysis of Shells of Revolution with Application to Tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Andersen, C. M.; Tanner, J. A.

    1984-01-01

    An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.

  1. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.

  2. Physiological effects of diet mixing on consumer fitness: a meta-analysis.

    PubMed

    Lefcheck, Jonathan S; Whalen, Matthew A; Davenport, Theresa M; Stone, Joshua P; Duffy, J Emmett

    2013-03-01

    The degree of dietary generalism among consumers has important consequences for population, community, and ecosystem processes, yet the effects on consumer fitness of mixing food types have not been examined comprehensively. We conducted a meta-analysis of 161 peer-reviewed studies reporting 493 experimental manipulations of prey diversity to test whether diet mixing enhances consumer fitness based on the intrinsic nutritional quality of foods and consumer physiology. Averaged across studies, mixed diets conferred significantly higher fitness than the average of single-species diets, but not the best single prey species. More than half of individual experiments, however, showed maximal growth and reproduction on mixed diets, consistent with the predicted benefits of a balanced diet. Mixed diets including chemically defended prey were no better than the average prey type, opposing the prediction that a diverse diet dilutes toxins. Finally, mixed-model analysis showed that the effect of diet mixing was stronger for herbivores than for higher trophic levels. The generally weak evidence for the nutritional benefits of diet mixing in these primarily laboratory experiments suggests that diet generalism is not strongly favored by the inherent physiological benefits of mixing food types, but is more likely driven by ecological and environmental influences on consumer foraging.
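
    Meta-analyses of diet mixing commonly quantify each experiment's effect as a log response ratio of consumer performance on the mixed versus a comparison diet; whether this is the exact metric used in this study is an assumption here.

```python
import math

def log_response_ratio(mean_mixed, mean_comparison):
    # lnRR > 0 indicates higher performance (e.g. growth) on the mixed
    # diet than on the comparison diet (an average or best single diet).
    return math.log(mean_mixed / mean_comparison)
```

    Comparing against the average single-species diet versus the best single diet gives the two contrasts discussed above, which can point in different directions for the same data.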

  3. MANOVA vs nonlinear mixed effects modeling: The comparison of growth patterns of female and male quail

    NASA Astrophysics Data System (ADS)

    Gürcan, Eser Kemal

    2017-04-01

    The most commonly used methods for analyzing time-dependent data are multivariate analysis of variance (MANOVA) and nonlinear regression models. The aim of this study was to compare some MANOVA techniques with a nonlinear mixed modeling approach for investigating growth differentiation in female and male Japanese quail. Weekly individual body weight data of 352 male and 335 female quail from hatch to 8 weeks of age were used to perform the analyses. When all the analyses are evaluated together, nonlinear mixed modeling is superior to the other techniques because it also reveals the individual variation. In addition, the profile analysis also provides important information.
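
    Growth curves such as the Gompertz function are typical choices for nonlinear (mixed) modeling of quail body weight; the function and parameter values below are illustrative assumptions, not the study's fitted model.

```python
import math

def gompertz(t, A, b, c):
    # A: asymptotic weight; b: integration constant; c: maturation rate.
    return A * math.exp(-b * math.exp(-c * t))

# In a nonlinear mixed model, parameters vary by bird; for example a
# bird-specific random deviation u_i on the asymptote:
def gompertz_individual(t, A, b, c, u_i):
    return gompertz(t, A + u_i, b, c)
```

    Fitting the random deviations jointly with the population parameters is what lets the mixed-model approach expose the individual variation that MANOVA summaries average away.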

  4. Modeling condensation with a noncondensable gas for mixed convection flow

    NASA Astrophysics Data System (ADS)

    Liao, Yehong

    2007-05-01

    This research theoretically developed a novel mixed convection model for condensation with a noncondensable gas. The model developed herein comprises three components: a convection regime map; a mixed convection correlation; and a generalized diffusion layer model. These components were developed to be consistent with the three-level methodology in MELCOR. The overall mixed convection model was implemented into MELCOR and satisfactorily validated with data covering a wide variety of test conditions. In the development of the convection regime map, two analyses with approximations of the local similarity method were performed to solve the multi-component two-phase boundary layer equations. The first analysis studied effects of the bulk velocity on a basic natural convection condensation process and set up conditions to distinguish natural convection from mixed convection. It was found that the superimposed velocity increases condensation heat transfer by sweeping away the noncondensable gas accumulated at the condensation boundary. The second analysis studied effects of the buoyancy force on a basic forced convection condensation process and set up conditions to distinguish forced convection from mixed convection. It was found that the superimposed buoyancy force increases condensation heat transfer by thinning the liquid film thickness and creating a steeper noncondensable gas concentration profile near the condensation interface. In the development of the mixed convection correlation accounting for suction effects, numerical data were obtained from boundary layer analysis for the three convection regimes and used to fit a curve for the Nusselt number of the mixed convection regime as a function of the Nusselt numbers of the natural and forced convection regimes. In the development of the generalized diffusion layer model, the driving potential for mass transfer was expressed as the temperature difference between the bulk and the liquid-gas interface using the Clausius-Clapeyron equation. The model was developed on a mass basis instead of a molar basis to be consistent with general conservation equations. It was found that vapor diffusion is driven not only by a gradient of the molar fraction but also by a gradient of the mixture molecular weight at the diffusion layer.
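
    The study's fitted mixed-convection correlation is not reproduced in the abstract; as an assumption, a commonly used blending rule of this general shape (Churchill-style, combining the natural- and forced-convection Nusselt numbers) is sketched below.

```python
def nu_mixed(nu_forced, nu_natural, n=3.0):
    # Churchill-style blending for assisting flow; n = 3 is a conventional
    # choice in the heat-transfer literature, not the exponent fitted here.
    return (nu_forced ** n + nu_natural ** n) ** (1.0 / n)
```

    The blend recovers each single-regime limit when the other contribution vanishes, and exceeds both limits when they are comparable, which is the qualitative behavior a regime map must capture at the natural/mixed/forced boundaries.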

  5. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
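
    Scheme (i), effective-sample-size weighting of Z-scores, can be sketched as follows; the conventional effective sample size of a case-control study is 4/(1/Ncases + 1/Ncontrols), and the study inputs below are hypothetical.

```python
import math

def n_eff(n_cases, n_controls):
    # Effective sample size of a case-control study; equals the total
    # sample size when the design is perfectly balanced.
    return 4.0 / (1.0 / n_cases + 1.0 / n_controls)

def meta_z(z_scores, effective_ns):
    # Sample-size-weighted meta-analysis of signed per-study Z-scores,
    # with weights proportional to sqrt(effective sample size).
    w = [math.sqrt(n) for n in effective_ns]
    return sum(wi * zi for wi, zi in zip(w, z_scores)) \
        / math.sqrt(sum(wi * wi for wi in w))
```

    Because the weights depend on effective rather than raw sample size, a severely imbalanced study is appropriately down-weighted, which is central to the robustness reported above.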

  6. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models

    PubMed Central

    Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong

    2016-01-01

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
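
    GMMAT fits the logistic mixed null model once; the per-variant work is a score test. A stripped-down sketch of the score-test arithmetic with the random effects omitted entirely (intercept-only logistic null model, illustrative data); this is not the GMMAT implementation:

```python
def score_test(y, g):
    """Score test for association between a binary phenotype y and genotype g
    under an intercept-only logistic null model (random effects omitted)."""
    n = len(y)
    mu0 = sum(y) / n                      # fitted null probability
    gbar = sum(g) / n
    u = sum(gi * (yi - mu0) for gi, yi in zip(g, y))         # score
    v = mu0 * (1 - mu0) * sum((gi - gbar) ** 2 for gi in g)  # its variance
    return u * u / v                      # ~ chi-square(1) under H0

# Illustrative data: genotype dosages 0/1/2, binary phenotype
y = [1, 1, 0, 0, 1, 0, 1, 0]
g = [2, 1, 0, 0, 2, 1, 1, 0]
stat = score_test(y, g)
```

    The design point GMMAT exploits is visible even here: the null model is fitted once, and each variant only requires the cheap score statistic, never a per-variant model refit.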

  7. Bayesian inference on risk differences: an application to multivariate meta-analysis of adverse events in clinical trials.

    PubMed

    Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng

    2013-05-01

    Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis in which the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including a simple likelihood function, no need to specify a link function, and a closed-form expression for the distribution functions of study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.

  8. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intra-spot correlation. A new separate-channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate-channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
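
    The M-value/A-value transform referred to above is elementary: M is the log-ratio of the two channels, A their average log-intensity. A minimal sketch for two illustrative spots:

```python
import math

def ma_transform(red, green):
    """Per-spot M (log2-ratio) and A (average log2-intensity) from the
    red and green channel intensities of a two-channel microarray."""
    m = [math.log2(r) - math.log2(g) for r, g in zip(red, green)]
    a = [0.5 * (math.log2(r) + math.log2(g)) for r, g in zip(red, green)]
    return m, a

# Two illustrative spots: one 2-fold up-regulated, one unchanged
m, a = ma_transform(red=[200.0, 100.0], green=[100.0, 100.0])
```

    A log-ratio-only analysis keeps the m list and discards the a list; the point of the article is that the discarded A-values still carry usable information.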

  9. A mixed-effects regression model for longitudinal multivariate ordinal data.

    PubMed

    Liu, Li C; Hedeker, Donald

    2006-03-01

    A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
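
    Marginalizing the random effects by Gauss-Hermite quadrature, as in the estimation scheme above, replaces an integral against a normal density with a weighted sum over a few nodes. A minimal one-dimensional sketch with the two-point rule (nodes ±1/√2, weights √π/2), which happens to be exact for the quadratic test function used here:

```python
import math

# Two-point Gauss-Hermite rule: nodes +-1/sqrt(2), weights sqrt(pi)/2
NODES = [-1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]
WEIGHTS = [math.sqrt(math.pi) / 2.0] * 2

def expect_under_normal(f, sigma):
    """E[f(b)] for a random effect b ~ N(0, sigma^2) via Gauss-Hermite
    quadrature: E[f(b)] = (1/sqrt(pi)) * sum_i w_i f(sqrt(2)*sigma*x_i)."""
    return sum(w * f(math.sqrt(2.0) * sigma * x)
               for w, x in zip(WEIGHTS, NODES)) / math.sqrt(math.pi)

# The two-point rule recovers the variance exactly: E[b^2] = sigma^2
var_est = expect_under_normal(lambda b: b * b, sigma=1.5)
```

    In the multidimensional case used by the model above, the same weighted sum is taken over a grid of nodes, one axis per random effect.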

  10. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  11. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.

  12. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models.

    PubMed

    Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong

    2016-04-07

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  13. Estimation of the linear mixed integrated Ornstein–Uhlenbeck model

    PubMed Central

    Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate

    2017-01-01

    The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
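
    The covariance function of the IOU process is what gives the model its serial correlation and derivative-tracking behaviour. A sketch under the commonly used parameterization, in which alpha governs the degree of derivative tracking; this standard form is an assumption here, and the paper's own parameterizations may differ:

```python
import math

def iou_cov(s, t, alpha, sigma2):
    """Covariance of an integrated Ornstein-Uhlenbeck process under a
    common parameterization: alpha controls derivative tracking, sigma2
    the scale (an assumed, standard form)."""
    return (sigma2 / (2.0 * alpha ** 3)) * (
        2.0 * alpha * min(s, t)
        + math.exp(-alpha * s) + math.exp(-alpha * t)
        - 1.0 - math.exp(-alpha * abs(t - s))
    )

# Covariance between measurements at times 1 and 2:
c = iou_cov(1.0, 2.0, alpha=0.5, sigma2=1.0)
```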

  14. Pharmacoeconomics of parenteral nutrition in surgical and critically ill patients receiving structured triglycerides in China.

    PubMed

    Wu, Guo Hao; Ehm, Alexandra; Bellone, Marco; Pradelli, Lorenzo

    2017-01-01

    A prior meta-analysis showed favorable metabolic effects of structured triglyceride (STG) lipid emulsions in surgical and critically ill patients compared with mixed medium-chain/long-chain triglyceride (MCT/LCT) emulsions. Limited data on clinical outcomes precluded pharmacoeconomic analysis. We performed an updated meta-analysis and developed a cost model to compare overall costs for STGs vs MCT/LCTs in Chinese hospitals. We searched Medline, Embase, Wanfang Data, the China Hospital Knowledge Database, and Google Scholar for clinical trials comparing STGs to mixed MCT/LCTs in surgical or critically ill adults published between October 10, 2013 and September 19, 2015. Newly identified studies were pooled with the prior studies and an updated meta-analysis was performed. A deterministic simulation model was used to compare the effects of STGs and mixed MCT/LCTs on Chinese hospital costs. The literature search identified six new trials, resulting in a total of 27 studies in the updated meta-analysis. Statistically significant differences favoring STGs were observed for cumulative nitrogen balance, prealbumin and albumin concentrations, plasma triglycerides, and liver enzymes. STGs were also associated with a significant reduction in the length of hospital stay (mean difference, -1.45 days; 95% confidence interval, -2.48 to -0.43; p=0.005) versus mixed MCT/LCTs. Cost analysis demonstrated a net cost benefit of ¥675 compared with mixed MCT/LCTs. STGs are associated with improvements in metabolic function and reduced length of hospitalization in surgical and critically ill patients compared with mixed MCT/LCT emulsions. Cost analysis using data from Chinese hospitals showed a corresponding cost benefit.

  15. Extension of the Haseman-Elston regression model to longitudinal data.

    PubMed

    Won, Sungho; Elston, Robert C; Park, Taesung

    2006-01-01

    We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model having several random effects. As response variable, we investigate the sibship sample mean corrected cross-product (smHE) and the BLUP-mean corrected cross product (pmHE), comparing them with the original squared difference (oHE), the overall mean corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. Also, the model can test for gene x time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.

  16. Hierarchical Bayes approach for subgroup analysis.

    PubMed

    Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C

    2017-01-01

    In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.
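
    The conjugate normal-normal update sits at the core of such hierarchical Bayes computations. A minimal sketch of a single-subgroup update with known sampling variance (illustrative numbers; not the authors' full hierarchical model with variance-component priors):

```python
def normal_posterior(prior_mean, prior_var, ybar, sigma2, n):
    """Posterior mean and variance of a treatment effect under a normal
    prior and a normal likelihood with known variance (conjugate update)."""
    prec = 1.0 / prior_var + n / sigma2        # posterior precision
    mean = (prior_mean / prior_var + n * ybar / sigma2) / prec
    return mean, 1.0 / prec

# A vague prior pulled almost entirely to the observed subgroup mean:
mean, var = normal_posterior(0.0, 100.0, ybar=1.2, sigma2=4.0, n=50)
```

    In the full hierarchical model the subgroup effects share a common prior whose variance is itself given a prior, which is what induces shrinkage of subgroup estimates toward the overall treatment effect.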

  17. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    PubMed

    Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

    2009-04-01

    Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.

  18. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    PubMed

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
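
    The contrast drawn above is with the plain Pearson statistic, which treats every observation as independent. A minimal sketch of that statistic, with illustrative data, to make the comparison concrete:

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation: assumes independent observations, so it
    cannot account for repeated measures within the same participant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative paired measures (e.g., a neural index vs a behavioral score):
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 8.0])
```

    When the x values are repeated measures from the same listeners across conditions, this independence assumption fails, which is precisely the situation where the LME models above are the appropriate tool.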

  19. The Relaxation Matrix for Symmetric Tops with Inversion Symmetry. II; Line Mixing Effects in the V1 Band of NH3

    NASA Technical Reports Server (NTRS)

    Boulet, C.; Ma, Q.

    2016-01-01

    Line mixing effects have been calculated in the ν1 parallel band of self-broadened NH3. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first order mixing parameters are discussed as well as some extensions of both theory and experiment.

  20. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models, implemented in the R language. This semiparametric model is flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
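
    The Cox model's suitability for duration data comes from its partial likelihood, which conditions the baseline hazard out of the estimation. A minimal sketch evaluating the log partial likelihood for a single covariate, assuming no tied event times (illustrative data; the paper's models additionally include random effects):

```python
import math

def cox_log_partial_likelihood(times, events, x, beta):
    """Cox partial log-likelihood for a single covariate, no tied event
    times: sum over events of beta*x_i - log(sum over the risk set of
    exp(beta*x_j))."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for k, i in enumerate(order):
        if events[i]:                      # censored durations add nothing
            risk = order[k:]               # subjects still at risk at times[i]
            denom = sum(math.exp(beta * x[j]) for j in risk)
            ll += beta * x[i] - math.log(denom)
    return ll

# Illustrative durations with a binary group covariate (0 = censored):
times = [2.0, 3.0, 5.0, 7.0]
events = [1, 1, 0, 1]
x = [1, 0, 1, 0]
ll0 = cox_log_partial_likelihood(times, events, x, beta=0.0)
```

    Maximizing this function over beta gives the Cox estimate; note that no distribution for the durations themselves is ever specified, which is the semiparametric flexibility the authors rely on.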

  1. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  2. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction.

    PubMed

    Shi, J Q; Wang, B; Will, E J; West, R M

    2012-11-20

    We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Multivariate statistical approach to estimate mixing proportions for unknown end members

    USGS Publications Warehouse

    Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.

    2012-01-01

    A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
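
    With two end members and one conservative tracer, the mixing model reduces to a single linear equation: c_mix = f*c_end1 + (1-f)*c_end2. A minimal sketch with illustrative concentrations; the paper's PCA-based optimization handles many tracers and unknown end-member compositions:

```python
def mixing_fraction(c_mix, c_end1, c_end2):
    """Fraction of end member 1 in a two-end-member mixture, from a single
    conservative tracer: c_mix = f*c_end1 + (1-f)*c_end2."""
    return (c_mix - c_end2) / (c_end1 - c_end2)

# Chloride-like tracer (illustrative concentrations, mg/L):
f = mixing_fraction(c_mix=30.0, c_end1=10.0, c_end2=50.0)
```

    With several tracers and end members, the same balance becomes an overdetermined linear system solved by constrained least squares, with the proportions constrained to be nonnegative and sum to one.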

  4. Effects of Crimped Fiber Paths on Mixed Mode Delamination Behaviors in Woven Fabric Composites

    DTIC Science & Technology

    2016-09-01

    Three variations of a plain-woven fabric architecture, each of which had different crimped fiber paths, were considered using continuum finite-element models. Keywords: finite-element analysis, fracture mechanics, fracture toughness, mixed modes, strain energy release rate.

  5. Three-dimensional turbulent-mixing-length modeling for discrete-hole coolant injection into a crossflow

    NASA Technical Reports Server (NTRS)

    Wang, C. R.; Papell, S. S.

    1983-01-01

    Three dimensional mixing length models of a flow field immediately downstream of coolant injection through a discrete circular hole at a 30 deg angle into a crossflow were derived from the measurements of turbulence intensity. To verify their effectiveness, the models were used to estimate the anisotropic turbulent effects in a simplified theoretical and numerical analysis to compute the velocity and temperature fields. With small coolant injection mass flow rate and constant surface temperature, numerical results of the local crossflow streamwise velocity component and surface heat transfer rate are consistent with the velocity measurement and the surface film cooling effectiveness distributions reported in previous studies.

  6. Three-dimensional turbulent-mixing-length modeling for discrete-hole coolant injection into a crossflow

    NASA Astrophysics Data System (ADS)

    Wang, C. R.; Papell, S. S.

    1983-09-01

    Three dimensional mixing length models of a flow field immediately downstream of coolant injection through a discrete circular hole at a 30 deg angle into a crossflow were derived from the measurements of turbulence intensity. To verify their effectiveness, the models were used to estimate the anisotropic turbulent effects in a simplified theoretical and numerical analysis to compute the velocity and temperature fields. With small coolant injection mass flow rate and constant surface temperature, numerical results of the local crossflow streamwise velocity component and surface heat transfer rate are consistent with the velocity measurement and the surface film cooling effectiveness distributions reported in previous studies.

  7. Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis

    PubMed Central

    Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas

    2016-01-01

    The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246

  8. Functional mixed effects spectral analysis

    PubMed Central

    KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG

    2011-01-01

    In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
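
    The raw ingredient of such an analysis is the log-periodogram of each time series segment, which the functional mixed-effects model then smooths across frequency. A minimal sketch via a direct DFT, fine for short illustrative series:

```python
import math

def log_periodogram(x):
    """Log-periodogram of a series: log of |DFT|^2 / n at the Fourier
    frequencies k/n, the raw (noisy) estimate of the log-spectrum."""
    n = len(x)
    out = []
    for k in range(1, n // 2 + 1):
        re = sum(xt * math.cos(2 * math.pi * k * t / n) for t, xt in enumerate(x))
        im = sum(xt * math.sin(2 * math.pi * k * t / n) for t, xt in enumerate(x))
        power = (re * re + im * im) / n
        out.append(math.log(max(power, 1e-300)))  # floor avoids log(0)
    return out

# A pure cosine at Fourier frequency 2/16 peaks at the second ordinate:
series = [math.cos(2 * math.pi * 2 * t / 16) for t in range(16)]
lp = log_periodogram(series)
```

    Each segment contributes one such curve over frequency; the functional mixed-effects model treats the collection of curves as fixed-effect functions plus unit-specific random-effect functions.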

  9. Coding response to a case-mix measurement system based on multiple diagnoses.

    PubMed

    Preyra, Colin

    2004-08-01

    To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.

  10. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  11. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies

    PubMed Central

    Koerner, Tess K.; Zhang, Yang

    2017-01-01

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate both the advantages and the necessity of applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
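
The contrast between the two approaches can be sketched on synthetic repeated-measures data. Everything below (the variables `subj`, `cond`, `neural`, `behavior`, the effect sizes, and the noise levels) is invented for illustration and does not come from the studies.

```python
# Sketch: Pearson correlation vs. a linear mixed-effects (LME) model on
# simulated repeated-measures data with a per-subject random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subj, n_cond = 20, 4
subj = np.repeat(np.arange(n_subj), n_cond)
cond = np.tile(np.arange(n_cond), n_subj)       # listening condition (assumed)
baseline = rng.normal(0, 2.0, n_subj)[subj]     # between-subject offsets
neural = rng.normal(0, 1.0, subj.size)          # neural measure
behavior = 0.8 * neural + 0.5 * cond + baseline + rng.normal(0, 0.5, subj.size)
df = pd.DataFrame({"subj": subj, "cond": cond,
                   "neural": neural, "behavior": behavior})

# Pearson treats every row as independent, ignoring the repeated measures.
r, _ = pearsonr(df["neural"], df["behavior"])

# The LME model adds a random intercept per subject and a fixed effect for
# condition, so the neural slope is estimated within subjects.
fit = smf.mixedlm("behavior ~ neural + cond", df, groups=df["subj"]).fit()
print(round(r, 3), round(fit.params["neural"], 3))
```

With large between-subject offsets, the raw correlation is diluted while the mixed model recovers the within-subject slope, which is the point the comparison papers make.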

  12. Role of crystal field in mixed alkali metal effect: electron paramagnetic resonance study of mixed alkali metal oxyfluoro vanadate glasses.

    PubMed

    Honnavar, Gajanan V; Ramesh, K P; Bhat, S V

    2014-01-23

    The mixed alkali metal effect is a long-standing problem in glasses. Electron paramagnetic resonance (EPR) is used by several researchers to study the mixed alkali metal effect, but a detailed analysis of the nearest neighbor environment of the glass former using spin-Hamiltonian parameters was elusive. In this study we have prepared a series of vanadate glasses having general formula (mol %) 40 V2O5-30BaF2-(30 - x)LiF-xRbF with x = 5, 10, 15, 20, 25, and 30. Spin-Hamiltonian parameters of V(4+) ions were extracted by simulating and fitting to the experimental spectra using EasySpin. From the analysis of these parameters it is observed that the replacement of lithium ions by rubidium ions follows a "preferential substitution model". Using this proposed model, we were able to account for the observed variation in the ratio of the g parameter, which goes through a maximum. This reflects an asymmetric to symmetric changeover of the alkali metal ion environment around the vanadium site. Further, this model also accounts for the variation in oxidation state of vanadium ion, which was confirmed from the variation in signal intensity of EPR spectra.

  13. The value of a statistical life: a meta-analysis with a mixed effects regression model.

    PubMed

    Bellavance, François; Dionne, Georges; Lebeau, Martin

    2009-03-01

    The value of a statistical life (VSL) is a very controversial topic, but one which is essential to the optimization of governmental decisions. We see a great variability in the values obtained from different studies. The source of this variability needs to be understood, in order to offer public decision-makers better guidance in choosing a value and to set clearer guidelines for future research on the topic. This article presents a meta-analysis based on 39 observations obtained from 37 studies (from nine different countries) which all use a hedonic wage method to calculate the VSL. Our meta-analysis is innovative in that it is the first to use the mixed effects regression model [Raudenbush, S.W., 1994. Random effects models. In: Cooper, H., Hedges, L.V. (Eds.), The Handbook of Research Synthesis. Russel Sage Foundation, New York] to analyze studies on the value of a statistical life. We conclude that the variability found in the values studied stems in large part from differences in methodologies.
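
A minimal sketch of the random-effects pooling idea behind such a meta-analysis is given below, using the standard DerSimonian-Laird estimator rather than the hierarchical model of Raudenbush cited in the abstract. The effect estimates and within-study variances are made-up numbers, not VSL data.

```python
# Sketch: DerSimonian-Laird random-effects pooling of study estimates.
# y and v below are hypothetical illustrative values.
import numpy as np

def random_effects_pool(y, v):
    """Pool study effects y with within-study variances v (DL estimator)."""
    w = 1.0 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    return np.sum(w_re * y) / np.sum(w_re), tau2

y = np.array([5.1, 7.4, 3.2, 9.0, 6.3])  # hypothetical study estimates
v = np.array([1.0, 2.5, 0.8, 3.0, 1.2])  # hypothetical within-study variances
pooled, tau2 = random_effects_pool(y, v)
```

The estimated tau2 quantifies exactly the between-study variability whose sources (methodological differences) the meta-analysis sets out to explain with study-level covariates.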

  14. A new unsteady mixing model to predict NO(x) production during rapid mixing in a dual-stage combustor

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1992-01-01

    An advanced gas turbine engine to power supersonic transport aircraft is currently under study. In addition to high combustion efficiency requirements, environmental concerns have placed stringent restrictions on the pollutant emissions from these engines. A combustor design with the potential for minimizing pollutants such as NO(x) emissions is undergoing experimental evaluation. A major technical issue in the design of this combustor is how to rapidly mix the hot, fuel-rich primary zone product with the secondary diluent air to obtain a fuel-lean mixture for combustion in the second stage. Numerical predictions using steady-state methods cannot account for the unsteady phenomena in the mixing region. Therefore, to evaluate the effect of unsteady mixing and combustion processes, a novel unsteady mixing model is demonstrated here. This model has been used to study multispecies mixing as well as propane-air and hydrogen-air nonpremixed jet flames, and has been used to predict NO(x) production in the mixing region. Comparisons with available experimental data show good agreement, thereby validating the mixing model. With this demonstration, the mixing model is ready to be implemented in conjunction with steady-state prediction methods to provide an improved engineering design analysis tool.

  15. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which the pre (baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with generalized linear mixed-effects models are that (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing-at-random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. The proposed designs and sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
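
The claimed sample-size reduction from additional repeated measures can be illustrated with the classical compound-symmetry variance of a subject's mean response. This is a simplified stand-in for the paper's generalized linear mixed-effects calculations, and all parameter values (effect size, correlation, power) are assumptions.

```python
# Sketch: subjects per group when averaging t post-randomization measures
# under a compound-symmetry correlation rho. A simplified normal-theory
# formula, not the paper's GLMM-based calculation.
from scipy.stats import norm

def n_per_group(delta, sigma, rho, t, alpha=0.05, power=0.9):
    """Subjects per group to detect a mean difference delta using the
    subject-level mean of t repeated measures (compound symmetry)."""
    var_mean = sigma**2 * (1 + (t - 1) * rho) / t  # Var of a subject's mean
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z**2 * var_mean / delta**2

# More repeated measures shrink the variance of each subject's mean
# (for rho < 1), hence the smaller required n.
print(n_per_group(0.5, 1.0, 0.6, 1), n_per_group(0.5, 1.0, 0.6, 4))
```

In practice the result would be rounded up to the next integer; the point here is only the direction of the effect, which matches the abstract's claim (2).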

  16. Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.

    PubMed

    Covarrubias-Pazaran, Giovanny

    2016-01-01

    Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has gained attention as next-generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects infeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes, using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: Average Information (AI), Expectation-Maximization (EM) and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to those of other software, but the analysis was faster than Bayesian counterparts by a magnitude of hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms to fit models in a user-friendly environment such as R.
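
sommer itself is an R package; as a language-neutral sketch of the simplest special case it covers, the snippet below fits the ridge-regression (RR-BLUP) view of genomic prediction with a fixed shrinkage parameter instead of REML-estimated variance components. Genotypes, marker effects, and the value of `lam` are all simulated assumptions.

```python
# Sketch: RR-BLUP-style genomic prediction as ridge regression on markers.
# lam plays the role of sigma_e^2 / sigma_u^2, here fixed rather than
# estimated by REML as sommer would do.
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 500                                   # individuals, markers
M = rng.integers(0, 3, (n, p)).astype(float)      # 0/1/2 genotype codes
beta = rng.normal(0, 0.1, p)                      # simulated marker effects
y = M @ beta + rng.normal(0, 1.0, n)              # simulated phenotypes

lam = 10.0                                        # assumed shrinkage parameter
# Ridge / RR-BLUP solution: (M'M + lam I)^{-1} M'y
b_hat = np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ y)
y_hat = M @ b_hat
corr = np.corrcoef(y, y_hat)[0, 1]                # in-sample fit
```

Estimating `lam` (i.e., the variance components) by AI, EM, or EMMA, and extending to dominance and epistatic kernels, is exactly the machinery the package provides beyond this sketch.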

  17. The analysis and modelling of dilatational terms in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.

    1991-01-01

    It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.

  18. The analysis and modeling of dilatational terms in compressible turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.

    1989-01-01

    It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.

  19. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    Summary We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  20. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
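
The spectral shortcut can be sketched as follows: once the relevant quadratic form has been diagonalized, its null distribution is a weighted sum of independent chi-square(1) variables, so critical values can be simulated directly from the eigenvalues instead of refitting the model in every bootstrap replicate. The eigenvalues below are hypothetical placeholders, not values from the paper.

```python
# Sketch: simulating a weighted chi-square(1) mixture from eigenvalues,
# the cheap step that replaces a model-refitting bootstrap.
import numpy as np

rng = np.random.default_rng(2)
eigvals = np.array([3.0, 1.5, 0.8, 0.3, 0.1])  # hypothetical eigenvalues

def simulate_null(eigvals, n_sim=100_000, rng=rng):
    """Draw from sum_j lambda_j * chi2_1 by simulating independent
    chi-square(1) variables and weighting them by the eigenvalues."""
    z2 = rng.chisquare(1, size=(n_sim, eigvals.size))
    return z2 @ eigvals

null = simulate_null(eigvals)
crit = np.quantile(null, 0.95)  # e.g. a 5%-level critical value
```

Each draw costs a handful of random numbers rather than a model fit, which is why this scales to genome-wide critical values where a naive bootstrap does not.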

  1. Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

    PubMed Central

    Preyra, Colin

    2004-01-01

    Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940

  2. Application of linear mixed-effects model with LASSO to identify metal components associated with cardiac autonomic responses among welders: a repeated measures study

    PubMed Central

    Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C

    2017-01-01

    Background Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods Urinary concentrations of 16 types of metals were examined and ‘acceleration capacity’ (AC) and ‘deceleration capacity’ (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with ‘selected’ metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, −0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
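
A simplified sketch of the selection step is given below: LASSO with a BIC criterion on simulated exposure data, omitting the random-effects part of the authors' model. The number of components, the two "truly active" ones, and all effect sizes are assumptions of the simulation.

```python
# Sketch: LASSO variable selection with a BIC criterion, as a simplified
# stand-in for the LASSO-penalized linear mixed-effects model (the
# repeated-measures random effects are not modeled here).
import numpy as np
from sklearn.linear_model import LassoLarsIC

rng = np.random.default_rng(3)
n, p = 200, 16                          # observations, metal components
X = rng.normal(size=(n, p))             # simulated (standardized) exposures
coef = np.zeros(p)
coef[[0, 3]] = [1.0, -0.7]              # two truly active components (assumed)
y = X @ coef + rng.normal(0, 0.5, n)    # simulated outcome

model = LassoLarsIC(criterion="bic").fit(X, y)
selected = np.flatnonzero(model.coef_)  # indices of retained components
```

As in the study, one would then refit an unpenalized (mixed-effects) model on the selected components to obtain interpretable effect estimates and confidence intervals.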

  3. The isotropic spectrum of the CO2 Raman 2ν3 overtone: a line-mixing band shape analysis at pressures up to several tens of atmospheres.

    PubMed

    Verzhbitskiy, I A; Kouzov, A P; Rachet, F; Chrysos, M

    2011-06-14

    A line-mixing shape analysis of the isotropic remnant Raman spectrum of the 2ν(3) overtone of CO(2) is reported at room temperature and for densities, ρ, rising up to tens of amagats. The analysis, experimental and theoretical, employs tools of non-resonant light scattering spectroscopy and uses the extended strong collision model (ESCM) to simulate the strong line mixing effects and to evidence motional narrowing. Excellent agreement at any pressure is observed between the calculated spectra and our experiment, which, along with the easy numerical implementation of the ESCM, makes this model stand out clearly above other semiempirical models for band shape calculations. The hitherto undefined, explicit ρ-dependence of the vibrational relaxation rate is given. Our study intends to improve the understanding of pressure-induced phenomena in a gas that is still in the forefront of the news.

  4. Scale model performance test investigation of exhaust system mixers for an Energy Efficient Engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1980-01-01

    A scale model performance test was conducted as part of the NASA Energy Efficient Engine (E3) Program, to investigate the geometric variables that influence the aerodynamic design of exhaust system mixers for high-bypass, mixed-flow engines. Mixer configuration variables included lobe number, penetration and perimeter, as well as several cutback mixer geometries. Mixing effectiveness and mixer pressure loss were determined using measured thrust and nozzle exit total pressure and temperature surveys. Results provide a data base to aid the analysis and design development of the E3 mixed-flow exhaust system.

  5. Mixing {Xi}--{Xi}' Effects and Static Properties of Heavy {Xi}'s

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aliev, T. M.; Ozpineci, A.; Zamiralov, V. S.

    It is shown that the mixing of the heavy baryons {Xi}--{Xi}' with the new quantum numbers is important for the analysis of their characteristics. The quark model of Ono is used as an example. Masses of the new baryons as well as the mixing angles of the {Xi}--{Xi}' states are obtained. The same reasoning is shown to be valid for the interpolating currents of these baryons in the framework of the QCD sum rules.

  6. Optimization of the time series NDVI-rainfall relationship using linear mixed-effects modeling for the anti-desertification area in the Beijing and Tianjin sandstorm source region

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie

    2018-05-01

    Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling and single-level modeling of soil units and sample points, respectively. Additionally, three variance functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heteroscedasticity, and three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity (with CPP) and spatiotemporal correlation (with ARMA(1,1)) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R^2 = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
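
The nested two-level random-intercept structure (sample points within soil units) can be sketched with statsmodels. The variance functions (e.g. CPP) and the ARMA(1,1) correlation used in the paper are omitted here, and all counts, effect sizes, and data are simulated assumptions.

```python
# Sketch: NDVI ~ rainfall with random intercepts for soil units and for
# sample points nested within soil units, via MixedLM variance components.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_soil, n_pts, n_yr = 5, 8, 12                  # assumed study dimensions
rows = []
for s in range(n_soil):
    u_s = rng.normal(0, 0.3)                    # soil-unit random intercept
    for p in range(n_pts):
        u_p = rng.normal(0, 0.2)                # sample-point random intercept
        rain = rng.normal(400, 80, n_yr)        # simulated annual rainfall
        ndvi = 0.1 + 0.001 * rain + u_s + u_p + rng.normal(0, 0.05, n_yr)
        for r, v in zip(rain, ndvi):
            rows.append({"soil": s, "point": f"{s}_{p}", "rain": r, "ndvi": v})
df = pd.DataFrame(rows)

# groups = soil unit; vc_formula adds a nested sample-point intercept.
fit = smf.mixedlm("ndvi ~ rain", df, groups=df["soil"],
                  re_formula="1", vc_formula={"point": "0 + C(point)"}).fit()
slope = fit.params["rain"]
```

Adding the heteroscedasticity and ARMA(1,1) residual structures of the paper would require nlme-style machinery (e.g. R's `lme`); this sketch only reproduces the nesting.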

  7. Stellar evolution with turbulent diffusion. I. A new formalism of mixing.

    NASA Astrophysics Data System (ADS)

    Deng, L.; Bressan, A.; Chiosi, C.

    1996-09-01

    In this paper we present a new formulation of diffusive mixing in stellar interiors aimed at casting light on the kind of mixing that should take place in the so-called overshoot regions surrounding fully convective zones. Key points of the analysis are the inclusion of the concept of a scale length most effective for mixing, by means of which the diffusion coefficient is formulated, and the inclusion of intermittency and stirring, two properties of turbulence known from laboratory fluid dynamics. The formalism is applied to follow the evolution of a 20 Msun star with composition Z=0.008 and Y=0.25. Depending on the value of the diffusion coefficient holding in the overshoot region, the evolutionary behaviour of the test stars goes from the case of virtually no mixing (semiconvective-like structures) to that of full mixing (standard overshoot models). Indeed, the efficiency of mixing in this region drives the extension of the intermediate fully convective shell developing at the onset of shell H-burning, and in turn the path in the HR diagram (HRD). Models with low mixing efficiency burn helium in the core at high effective temperatures, models with intermediate efficiency perform extended loops in the HRD, and models with high efficiency spend the whole core He-burning phase at low effective temperatures. In order to cast light on this important point of stellar structure, we test whether or not a convective layer can develop in the regions of the H-burning shell. More precisely, we examine whether the Schwarzschild or the Ledoux criterion ought to be adopted in this region. Furthermore, we test the response of stellar models to the kind of mixing supposed to occur in the H-burning shell regions. Finally, comparing the time scale of thermal dissipation to the evolutionary time scale, we conclude that no mixing should occur in this region.
The models with intermediate efficiency of mixing and no mixing at all in the shell H-burning regions are of particular interest as they possess at the same time evolutionary characteristics that are separately typical of models calculated with different schemes of mixing. In other words, the new models share the same properties of models with standard overshoot, namely a wider main sequence band, higher luminosity, and longer lifetimes than classical models, but they also possess extended loops that are the main signature of the classical (semiconvective) description of convection at the border of the core.

  8. Analyzing Association Mapping in Pedigree-Based GWAS Using a Penalized Multitrait Mixed Model

    PubMed Central

    Liu, Jin; Yang, Can; Shi, Xingjie; Li, Cong; Huang, Jian; Zhao, Hongyu; Ma, Shuangge

    2017-01-01

    Genome-wide association studies (GWAS) have led to the identification of many genetic variants associated with complex diseases in the past 10 years. Penalization methods, with significant numerical and statistical advantages, have been extensively adopted in analyzing GWAS. This study has been partly motivated by the analysis of Genetic Analysis Workshop (GAW) 18 data, which have two notable characteristics. First, the subjects are from a small number of pedigrees and hence related. Second, for each subject, multiple correlated traits have been measured. Most of the existing penalization methods assume independence between subjects and traits and can be suboptimal. There are a few methods in the literature based on mixed modeling that can accommodate correlations. However, they cannot fully accommodate the two types of correlations while conducting effective marker selection. In this study, we develop a penalized multitrait mixed modeling approach. It accommodates the two different types of correlations and includes several existing methods as special cases. Effective penalization is adopted for marker selection. Simulation demonstrates its satisfactory performance. The GAW 18 data are analyzed using the proposed method. PMID:27247027
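
As a loose analogue of penalized selection across correlated traits, the sketch below uses scikit-learn's MultiTaskLasso, which shares one selection pattern across all traits but, unlike the proposed method, does not model pedigree relatedness. All dimensions, coefficients, and the penalty value are assumptions of the simulation.

```python
# Sketch: joint (multitask) penalized marker selection across several
# correlated traits; a simplified stand-in for the penalized multitrait
# mixed model (no kinship/pedigree structure is modeled here).
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(5)
n, p, k = 150, 50, 3                  # subjects, markers, traits (assumed)
X = rng.normal(size=(n, p))           # simulated genotypes
B = np.zeros((p, k))
B[[2, 7], :] = rng.normal(1.0, 0.1, (2, k))   # two markers shared by traits
Y = X @ B + rng.normal(0, 0.5, (n, k))        # simulated correlated traits

model = MultiTaskLasso(alpha=0.1).fit(X, Y)
# A marker is "selected" if its coefficient vector across traits is nonzero.
active = np.flatnonzero(np.linalg.norm(model.coef_, axis=0))
```

The group penalty is what lets information be borrowed across traits; the mixed-model part of the proposed method additionally absorbs the pedigree-induced correlation between subjects.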

  9. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    PubMed Central

    Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs

    2018-01-01

    Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. In particular, we consider the influence of the group-level model chosen at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which uses only peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme to a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344
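
The random-effects meta-analysis that performed best here pools study effect sizes while allowing for between-study variance. A minimal sketch using the classic DerSimonian-Laird estimator (illustrative only; the paper's exact estimator and weighting may differ):

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.
    Returns the pooled effect, its standard error, and the between-study
    variance estimate tau^2."""
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                        # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

When the studies are perfectly homogeneous, Q falls below its degrees of freedom, tau^2 is truncated to zero, and the estimator reduces to the fixed-effects pooled mean.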

  10. A Unified Development of Basis Reduction Methods for Rotor Blade Analysis

    NASA Technical Reports Server (NTRS)

    Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)

    2001-01-01

    The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades. These methods are the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they cannot accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means for accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.

  11. The impact of composite AUC estimates on the prediction of systemic exposure in toxicology experiments.

    PubMed

    Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar

    2015-06-01

    Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling to the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Simulation scenarios were evaluated that mimic toxicology protocols in rodents. To ensure that differences in pharmacokinetic properties were accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentration (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
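
The non-compartmental exposure measures compared in this study (AUC, Cmax, and time above threshold, TAT) can be computed directly from a sampled concentration-time profile. A minimal sketch using the linear trapezoidal rule (function and key names are invented for illustration):

```python
def nca_summaries(times, conc, threshold):
    """Non-compartmental exposure summaries from a concentration-time profile:
    linear trapezoidal AUC, peak concentration, and time above a threshold."""
    auc = 0.0
    tat = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times, conc), zip(times[1:], conc[1:])):
        auc += 0.5 * (c0 + c1) * (t1 - t0)          # trapezoid area
        if c0 >= threshold and c1 >= threshold:
            tat += t1 - t0                          # whole interval above threshold
        elif c0 != c1 and min(c0, c1) < threshold < max(c0, c1):
            # linear interpolation for the fraction of the interval above threshold
            frac = abs((max(c0, c1) - threshold) / (c1 - c0))
            tat += frac * (t1 - t0)
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    return {"AUC": auc, "Cmax": cmax, "Tmax": tmax, "TAT": tat}
```

The model-based alternative advocated in the abstract would instead estimate these quantities from a fitted population pharmacokinetic model, yielding uncertainty and between-subject variability alongside the point estimates.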

  12. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications

    PubMed Central

    Austin, Peter C.

    2017-01-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954

  13. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications.

    PubMed

    Austin, Peter C

    2017-08-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata).
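
The equivalence noted in this tutorial, that a piecewise exponential model is a Poisson regression with a log person-time offset, rests on the fact that the maximum-likelihood hazard within an interval is simply events divided by person-time. A minimal sketch, assuming right-censored data and no covariates (function names are invented):

```python
def piecewise_hazards(follow_up, event, cuts):
    """Tabulate events and person-time per follow-up interval and return the
    constant-hazard MLE in each interval (events / person-time), which is what
    a Poisson GLM with a log person-time offset would estimate."""
    edges = [0.0] + list(cuts) + [float("inf")]
    stats = []
    for lo, hi in zip(edges, edges[1:]):
        d = 0
        pt = 0.0
        for t, e in zip(follow_up, event):
            if t <= lo:
                continue                      # subject left risk set earlier
            pt += min(t, hi) - lo             # exposure time inside this interval
            if e and lo < t <= hi:
                d += 1                        # event occurred in this interval
        stats.append({"interval": (lo, hi), "events": d, "person_time": pt,
                      "hazard": d / pt if pt > 0 else float("nan")})
    return stats
```

Adding cluster-specific random intercepts on the log-hazard scale, as the tutorial describes, turns this Poisson formulation into a generalised linear mixed model.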

  14. Cost-effectiveness of rivaroxaban for stroke prevention in atrial fibrillation in the Portuguese setting.

    PubMed

    Morais, João; Aguiar, Carlos; McLeod, Euan; Chatzitheofilou, Ismini; Fonseca Santos, Isabel; Pereira, Sónia

    2014-09-01

    To project the long-term cost-effectiveness of treating non-valvular atrial fibrillation (AF) patients for stroke prevention with rivaroxaban compared to warfarin in Portugal. A Markov model was used that included health and treatment states describing the management and consequences of AF and its treatment. The model's time horizon was set at a patient's lifetime and each cycle at three months. The analysis was conducted from a societal perspective and a 5% discount rate was applied to both costs and outcomes. Treatment effect data were obtained from the pivotal phase III ROCKET AF trial. The model was also populated with utility values obtained from the literature and with cost data derived from official Portuguese sources. The outcomes of the model included life-years, quality-adjusted life-years (QALYs), incremental costs, and associated incremental cost-effectiveness ratios (ICERs). Extensive sensitivity analyses were undertaken to further assess the findings of the model. As there is evidence indicating underuse and underprescription of warfarin in Portugal, an additional analysis was performed using a mixed comparator composed of no treatment, aspirin, and warfarin, which better reflects real-world prescribing in Portugal. This cost-effectiveness analysis produced an ICER of €3895/QALY for the base-case analysis (vs. warfarin) and of €6697/QALY for the real-world prescribing analysis (vs. mixed comparator). The findings were robust when tested in sensitivity analyses. The results showed that rivaroxaban may be a cost-effective alternative compared with warfarin or real-world prescribing in Portugal. Copyright © 2014 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.
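
The incremental cost-effectiveness ratio reported above is incremental cost divided by incremental QALYs, with future costs and outcomes discounted per cycle. A minimal sketch with 3-month cycles and an annual discount rate, as in the model described (function names are invented, and the per-cycle inputs are placeholders, not trial values):

```python
def discounted_totals(cost_per_cycle, qaly_per_cycle, n_cycles,
                      annual_rate, cycles_per_year=4):
    """Sum per-cycle costs and QALYs with compound discounting.
    The annual rate is converted to an equivalent per-cycle rate."""
    rate = (1.0 + annual_rate) ** (1.0 / cycles_per_year) - 1.0
    cost = sum(cost_per_cycle / (1.0 + rate) ** k for k in range(n_cycles))
    qaly = sum(qaly_per_cycle / (1.0 + rate) ** k for k in range(n_cycles))
    return cost, qaly

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)
```

For example, the paper's incremental values of roughly EUR 458 and 0.036 QALYs give an ICER near EUR 12,700 per QALY, consistent in magnitude with the reported EUR 12,593 (which reflects unrounded model outputs).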

  15. Dialectical Behavior Therapy for Borderline Personality Disorder: A Meta-Analysis Using Mixed-Effects Modeling

    ERIC Educational Resources Information Center

    Kliem, Soren; Kroger, Christoph; Kosfelder, Joachim

    2010-01-01

    Objective: At present, the most frequently investigated psychosocial intervention for borderline personality disorder (BPD) is dialectical behavior therapy (DBT). We conducted a meta-analysis to examine the efficacy and long-term effectiveness of DBT. Method: Systematic bibliographic research was undertaken to find relevant literature from online…

  16. Experiments in dilution jet mixing effects of multiple rows and non-circular orifices

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.; Srinivasan, R.; Coleman, E. B.; Meyers, G. D.; White, C. D.

    1985-01-01

    Experimental and empirical model results are presented that extend previous studies of the mixing of single-sided and opposed rows of jets in a confined duct flow to include effects of non-circular orifices and double rows of jets. Analysis of the mean temperature data obtained in this investigation showed that the effects of orifice shape and double rows are significant only in the region close to the injection plane, provided that the orifices are symmetric with respect to the main flow direction. The penetration and mixing of jets from 45-degree slanted slots is slightly less than that from equivalent-area symmetric orifices. The penetration from 2-dimensional slots is similar to that from equivalent-area closely-spaced rows of holes, but the mixing is slower for the 2-D slots. Calculated mean temperature profiles downstream of jets from non-circular and double rows of orifices, made using an extension developed for a previous empirical model, are shown to be in good agreement with the measured distributions.

  17. Experiments in dilution jet mixing - Effects of multiple rows and non-circular orifices

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.; Srinivasan, R.; Coleman, E. B.; Meyers, G. D.; White, C. D.

    1985-01-01

    Experimental and empirical model results are presented that extend previous studies of the mixing of single-sided and opposed rows of jets in a confined duct flow to include effects of non-circular orifices and double rows of jets. Analysis of the mean temperature data obtained in this investigation showed that the effects of orifice shape and double rows are significant only in the region close to the injection plane, provided that the orifices are symmetric with respect to the main flow direction. The penetration and mixing of jets from 45-degree slanted slots is slightly less than that from equivalent-area symmetric orifices. The penetration from two-dimensional slots is similar to that from equivalent-area closely-spaced rows of holes, but the mixing is slower for the 2-D slots. Calculated mean temperature profiles downstream of jets from non-circular and double rows of orifices, made using an extension developed for a previous empirical model, are shown to be in good agreement with the measured distributions.

  18. A comparison of methods for estimating the random effects distribution of a linear mixed model.

    PubMed

    Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert

    2010-12-01

    This article reviews various recently suggested approaches to estimating the random effects distribution in a linear mixed model: (1) the smoothing-by-roughening approach of Shen and Louis (1); (2) the semi-nonparametric approach of Zhang and Davidian (2); (3) the heterogeneity model of Verbeke and Lesaffre (3); and (4) the flexible approach of Ghidey et al. (4). These four approaches are compared via an extensive simulation study. We conclude that, for the cases considered, the approach of Ghidey et al. (4) often has the smallest integrated mean squared error for estimating the random effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.
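
For intuition about what a linear mixed model does with its random effects, consider the simplest one-way random-intercept case, where the predicted random effect is the group-mean deviation shrunk toward zero by a reliability weight. A minimal sketch assuming known variance components and a normal random-effects distribution (the approaches compared in this article relax exactly that normality assumption; names are invented):

```python
def blup_intercepts(groups, sigma2_e, sigma2_u):
    """Empirical-Bayes (BLUP) shrinkage of group means toward the grand mean
    in a one-way random-intercept model y_ij = mu + u_i + e_ij, with known
    residual variance sigma2_e and random-intercept variance sigma2_u."""
    all_y = [y for ys in groups.values() for y in ys]
    mu = sum(all_y) / len(all_y)                       # grand mean
    blups = {}
    for g, ys in groups.items():
        n = len(ys)
        ybar = sum(ys) / n
        shrink = sigma2_u / (sigma2_u + sigma2_e / n)  # reliability weight in [0, 1)
        blups[g] = shrink * (ybar - mu)                # predicted random effect u_i
    return mu, blups
```

Groups with more observations get shrinkage weights closer to 1, so their predicted effects stay near the raw group-mean deviations, while small groups are pulled toward zero.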

  19. The Box-Cox power transformation on nursing sensitive indicators: Does it matter if structural effects are omitted during the estimation of the transformation parameter?

    PubMed Central

    2011-01-01

    Background Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Methods Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Results Linear model ANOVA with Monte Carlo simulation, and mixed models with correlated error terms using the NDNQI examples, showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. Conclusions The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects. PMID:21854614

  20. The Box-Cox power transformation on nursing sensitive indicators: does it matter if structural effects are omitted during the estimation of the transformation parameter?

    PubMed

    Hou, Qingjiang; Mahnken, Jonathan D; Gajewski, Byron J; Dunton, Nancy

    2011-08-19

    Many nursing and health related research studies have continuous outcome measures that are inherently non-normal in distribution. The Box-Cox transformation provides a powerful tool for developing a parsimonious model for data representation and interpretation when the distribution of the dependent variable, or outcome measure, of interest deviates from the normal distribution. The objective of this study was to contrast the effect of obtaining the Box-Cox power transformation parameter and subsequent analysis of variance with or without a priori knowledge of predictor variables under the classic linear or linear mixed model settings. Simulation data from a 3 × 4 factorial treatment design, along with the Patient Falls and Patient Injury Falls from the National Database of Nursing Quality Indicators (NDNQI®) for the 3rd quarter of 2007 from a convenience sample of over one thousand US hospitals, were analyzed. The effect of the nonlinear monotonic transformation was contrasted in two ways: a) estimating the transformation parameter along with factors with potential structural effects, and b) estimating the transformation parameter first and then conducting analysis of variance for the structural effect. Linear model ANOVA with Monte Carlo simulation, and mixed models with correlated error terms using the NDNQI examples, showed no substantial differences on statistical tests for structural effects if the factors with structural effects were omitted during the estimation of the transformation parameter. The Box-Cox power transformation can still be an effective tool for validating statistical inferences with large observational, cross-sectional, and hierarchical or repeated measure studies under the linear or the mixed model settings without prior knowledge of all the factors with potential structural effects.
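
The Box-Cox transformation y^(lambda) = (y^lambda - 1)/lambda (log y at lambda = 0) discussed in these two records estimates lambda by maximizing the profile log-likelihood. A minimal sketch for the intercept-only case using a grid search (the papers' setting with structural effects would replace the overall mean with model fitted values; function names are invented):

```python
import math

def boxcox(y, lam):
    """Box-Cox transform of a list of positive values; log transform at lam = 0."""
    if abs(lam) > 1e-12:
        return [(v ** lam - 1.0) / lam for v in y]
    return [math.log(v) for v in y]

def boxcox_loglik(y, lam):
    """Profile log-likelihood of lambda for an intercept-only normal model,
    including the Jacobian term (lam - 1) * sum(log y)."""
    z = boxcox(y, lam)
    n = len(y)
    mean = sum(z) / n
    ss = sum((v - mean) ** 2 for v in z) / n      # MLE of the variance
    return -0.5 * n * math.log(ss) + (lam - 1.0) * sum(math.log(v) for v in y)

def estimate_lambda(y, grid=None):
    """Grid-search maximizer of the profile log-likelihood."""
    grid = grid if grid is not None else [i / 100.0 for i in range(-200, 201)]
    return max(grid, key=lambda lam: boxcox_loglik(y, lam))
```

For data that are symmetric on the log scale, the estimated lambda lands near zero, recovering the log transform.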

  1. Analysis of Composite Skin-Stiffener Debond Specimens Using Volume Elements and a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that in spite of peak total energy release rates at the free edge the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherents. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents.

  2. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    PubMed

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain, and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.

  3. A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.

    PubMed

    Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin

    2017-02-01

    The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.
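
At the core of any multinomial logit, standard or mixed, is the softmax mapping from severity-specific utilities (linear or, as in this paper, nonlinear predictors) to outcome probabilities. A minimal, numerically stable sketch (the mixed model would additionally average these probabilities over random draws of the coefficients):

```python
import math

def mnl_probs(utilities):
    """Multinomial-logit probabilities: softmax over alternative-specific
    utilities, e.g. one utility per crash-severity level."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]   # subtract max for stability
    s = sum(exps)
    return [e / s for e in exps]
```

Because probabilities depend only on utility differences, subtracting the maximum utility changes nothing mathematically while preventing overflow for large predictor values.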

  4. Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage

    ERIC Educational Resources Information Center

    Galyardt, April

    2012-01-01

    This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…

  5. Mathematical Modelling and Analysis of the Tumor Treatment Regimens with Pulsed Immunotherapy and Chemotherapy

    PubMed Central

    Pang, Liuyong; Shen, Lin; Zhao, Zhong

    2016-01-01

    In this paper, single immunotherapy, single chemotherapy, and mixed treatment are first discussed, and sufficient conditions under which tumor cells will ultimately be eliminated are obtained. We analyze the impact of the least effective concentration and the half-life of the drug on therapeutic results, and find that increasing the least effective concentration or extending the half-life of the drug achieves better therapeutic effects. In addition, since most types of tumors are resistant to common chemotherapy drugs, we consider the impact of drug resistance on therapeutic results and propose a new mathematical model to explain the failure of single-drug chemotherapy. Building on this, we explore the therapeutic effects of two-drug combination chemotherapy, as well as immunotherapy mixed with combination chemotherapy. Numerical simulations indicate that combination chemotherapy is very effective in controlling tumor growth; immunotherapy mixed with combination chemotherapy achieves an even better treatment effect. PMID:26997972

  6. Mathematical Modelling and Analysis of the Tumor Treatment Regimens with Pulsed Immunotherapy and Chemotherapy.

    PubMed

    Pang, Liuyong; Shen, Lin; Zhao, Zhong

    2016-01-01

    In this paper, single immunotherapy, single chemotherapy, and mixed treatment are first discussed, and sufficient conditions under which tumor cells will ultimately be eliminated are obtained. We analyze the impact of the least effective concentration and the half-life of the drug on therapeutic results, and find that increasing the least effective concentration or extending the half-life of the drug achieves better therapeutic effects. In addition, since most types of tumors are resistant to common chemotherapy drugs, we consider the impact of drug resistance on therapeutic results and propose a new mathematical model to explain the failure of single-drug chemotherapy. Building on this, we explore the therapeutic effects of two-drug combination chemotherapy, as well as immunotherapy mixed with combination chemotherapy. Numerical simulations indicate that combination chemotherapy is very effective in controlling tumor growth; immunotherapy mixed with combination chemotherapy achieves an even better treatment effect.

  7. Modeling and Analysis of Mixed Synchronous/Asynchronous Systems

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan

    2012-01-01

    Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.

  8. Genetic mixed linear models for twin survival data.

    PubMed

    Ha, Il Do; Lee, Youngjo; Pawitan, Yudi

    2007-07-01

    Twin studies are useful for separating the genetic or heritable component from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by the survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.

  9. The Robustness of LISREL Estimates in Structural Equation Models with Categorical Variables.

    ERIC Educational Resources Information Center

    Ethington, Corinna A.

    1987-01-01

    This study examined the effect of type of correlation matrix on the robustness of LISREL maximum likelihood and unweighted least squares structural parameter estimates for models with categorical variables. The analysis of mixed matrices produced estimates that closely approximated the model parameters except where dichotomous variables were…

  10. Solutions of the chemical kinetic equations for initially inhomogeneous mixtures.

    NASA Technical Reports Server (NTRS)

    Hilst, G. R.

    1973-01-01

    Following the recent discussions by O'Brien (1971) and Donaldson and Hilst (1972) of the effects of inhomogeneous mixing and turbulent diffusion on simple chemical reaction rates, the present report provides a more extensive analysis of when inhomogeneous mixing has a significant effect on chemical reaction rates. The analysis is then extended to the development of an approximate chemical sub-model which provides much improved predictions of chemical reaction rates over a wide range of inhomogeneities and pathological distributions of the concentrations of the reacting chemical species. In particular, the development of an approximate representation of the third-order correlations of the joint concentration fluctuations permits closure of the chemical sub-model at the level of the second-order moments of these fluctuations and the mean concentrations.
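
The covariance effect at the heart of this record, that the mean rate of a second-order reaction depends on the correlation of the reactant concentration fluctuations, can be shown in a minimal numerical sketch (the concentration fields below are invented toy distributions, not the report's sub-model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
k = 1.0  # reaction rate constant (arbitrary units)

# Well-mixed case: A and B fluctuate independently about the same means.
a = rng.uniform(0.5, 1.5, n)
b = rng.uniform(0.5, 1.5, n)

# Poorly mixed case: A-rich pockets are B-poor, so the concentration
# fluctuations are negatively correlated (marginal means are unchanged).
s = rng.uniform(0.0, 1.0, n)
a_seg = 0.5 + s
b_seg = 1.5 - s

def mean_rate(k, a, b):
    # <r> = k<ab> = k(<a><b> + <a'b'>); the covariance <a'b'> is the
    # second-order moment that the closure discussed above retains.
    return k * np.mean(a * b)

r_mixed = mean_rate(k, a, b)
r_segregated = mean_rate(k, a_seg, b_seg)
print(r_mixed, r_segregated)  # segregation lowers the effective mean rate
```

With these toy fields the covariance is -1/12, so the segregated mean rate falls about 8% below the well-mixed value despite identical mean concentrations.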

  11. Marketing Mix Formulation for Higher Education: An Integrated Analysis Employing Analytic Hierarchy Process, Cluster Analysis and Correspondence Analysis

    ERIC Educational Resources Information Center

    Ho, Hsuan-Fu; Hung, Chia-Chi

    2008-01-01

    Purpose: The purpose of this paper is to examine how a graduate institute at National Chiayi University (NCYU), by using a model that integrates analytic hierarchy process, cluster analysis and correspondence analysis, can develop effective marketing strategies. Design/methodology/approach: This is primarily a quantitative study aimed at…

  12. Health economic comparison of SLIT allergen and SCIT allergoid immunotherapy in patients with seasonal grass-allergic rhinoconjunctivitis in Germany.

    PubMed

    Verheggen, Bram G; Westerhout, Kirsten Y; Schreder, Carl H; Augustin, Matthias

    2015-01-01

    Allergoids are chemically modified allergen extracts administered to reduce allergenicity while maintaining immunogenicity. Oralair® (the 5-grass tablet) is a sublingual native grass allergen tablet for pre- and co-seasonal treatment. Based on a literature review, a meta-analysis, and a cost-effectiveness analysis, the relative effects and costs of the 5-grass tablet versus a mix of subcutaneous allergoid compounds for grass pollen allergic rhinoconjunctivitis were assessed. A Markov model with a time horizon of nine years was used to assess the costs and effects of three-year immunotherapy treatment. Relative efficacy, expressed as standardized mean differences, was estimated using an indirect comparison of symptom scores extracted from available clinical trials. The Rhinitis Symptom Utility Index (RSUI) was applied as a proxy to estimate utility values for symptom scores. Drug acquisition and other medical costs, as well as estimates for resource use, immunotherapy persistence, and the occurrence of asthma, were derived from published sources. The analysis was executed from the German payer's perspective, which includes payments by the Statutory Health Insurance (SHI) and additional payments by the insured. Comprehensive deterministic and probabilistic sensitivity analyses and different scenarios were performed to test the uncertainty in the incremental model outcomes. The model predicted a cost-utility ratio for the 5-grass tablet versus a market mix of injectable allergoid products of € 12,593 per QALY in the base case analysis. Predicted incremental costs and QALYs were € 458 (95% confidence interval, CI: € 220; € 739) and 0.036 (95% CI: 0.002; 0.078), respectively. Compared to the allergoid mix, the probability of the 5-grass tablet being the most cost-effective treatment option was predicted to be 76% at a willingness-to-pay threshold of € 20,000.
The results were most sensitive to changes in efficacy estimates, duration of the pollen season, and immunotherapy persistence rates. This analysis suggests the sublingual native 5-grass tablet to be cost-effective relative to a mix of subcutaneous allergoid compounds. The robustness of these statements has been confirmed in extensive sensitivity and scenario analyses.
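
The incremental cost-utility arithmetic reported above can be sketched as follows; note that the published € 12,593/QALY figure comes from the full Markov model, so dividing the two rounded increments only approximates it:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# Point estimates echoing the base case above (rounded increments,
# so the ratio is only close to the model-based EUR 12,593/QALY).
ratio = icer(458.0, 0.036)
print(round(ratio))     # roughly EUR 12,700 per QALY
print(ratio < 20_000)   # below the EUR 20,000 willingness-to-pay threshold
```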

  13. A tutorial on Bayesian bivariate meta-analysis of mixed binary-continuous outcomes with missing treatment effects.

    PubMed

    Gajic-Veljanoski, Olga; Cheung, Angela M; Bayoumi, Ahmed M; Tomlinson, George

    2016-05-30

    Bivariate random-effects meta-analysis (BVMA) is a method of data synthesis that accounts for treatment effects measured on two outcomes. BVMA gives more precise estimates of the population mean and predicted values than two univariate random-effects meta-analyses (UVMAs). BVMA also addresses bias from incomplete reporting of outcomes. A few tutorials have covered technical details of BVMA of categorical or continuous outcomes. Limited guidance is available on how to analyze datasets that include trials with mixed continuous-binary outcomes where treatment effects on one outcome or the other are not reported. Given the advantages of Bayesian BVMA for handling missing outcomes, we present a tutorial for Bayesian BVMA of incompletely reported treatment effects on mixed bivariate outcomes. This step-by-step approach can serve as a model for our intended audience, the methodologist familiar with Bayesian meta-analysis, looking for practical advice on fitting bivariate models. To facilitate application of the proposed methods, we include our WinBUGS code. As an example, we use aggregate-level data from published trials to demonstrate the estimation of the effects of vitamin K and bisphosphonates on two correlated bone outcomes, fracture and bone mineral density. We present datasets where reporting of the pairs of treatment effects on both outcomes was 'partially' complete (i.e., pairs completely reported in some trials), and we outline steps for modeling the incompletely reported data. To assess what is gained from the additional work required by BVMA, we compare the resulting estimates to those from separate UVMAs. We discuss methodological findings and make four recommendations. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Logit-normal mixed model for Indian monsoon precipitation

    NASA Astrophysics Data System (ADS)

    Dietz, L. R.; Chatterjee, S.

    2014-09-01

    Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described, and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions; we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
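
A minimal simulation of the logit-normal mixed model structure described above (the station count, covariate, and effect sizes are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Logit-normal mixed model for rainfall occurrence:
#   logit(p_ij) = beta0 + beta1 * x_ij + u_i,   u_i ~ N(0, sigma_u^2)
n_stations, n_days = 20, 200
beta0, beta1, sigma_u = -0.5, 0.8, 1.0

u = rng.normal(0.0, sigma_u, n_stations)        # station random effects
x = rng.normal(0.0, 1.0, (n_stations, n_days))  # standardized covariate
logit_p = beta0 + beta1 * x + u[:, None]
p = 1.0 / (1.0 + np.exp(-logit_p))              # logit-normal probabilities
rain = rng.binomial(1, p)                       # observed wet/dry indicators
print(rain.shape, p.min(), p.max())
```

The random intercepts u_i induce station-to-station variation in wet-day probability on the logit scale, which is what the logit-normal model captures.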

  15. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data, featuring an excessive proportion of zeros and right-skewed continuous positive values, arise frequently in practice. One example is substance abuse/dependence symptoms data, for which a substantial proportion of subjects may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
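
The two-part structure can be sketched with a small simulation; the log-normal error below is a simple stand-in for the paper's skew-t/skew-normal errors, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Correlated subject-level random effects (b1, b2) link the two parts.
n_subj, n_obs = 300, 4
cov = [[1.0, 0.5],
       [0.5, 1.0]]
b = rng.multivariate_normal([0.0, 0.0], cov, n_subj)

# Part I: occurrence of a positive value (logistic with random intercept b1).
logit_p = -0.3 + b[:, [0]] + np.zeros((n_subj, n_obs))
occur = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Part II: intensity of positive values; log-normal errors stand in here
# for the skewed error distributions used in the paper.
log_y = 1.0 + b[:, [1]] + rng.normal(0.0, 0.8, (n_subj, n_obs))
y = occur * np.exp(log_y)   # exact zeros where nothing was reported

zero_frac = np.mean(y == 0)
print(zero_frac)            # the excess-zero proportion Part I governs
```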

  16. Performance of nonlinear mixed effects models in the presence of informative dropout.

    PubMed

    Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H

    2015-01-01

    Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
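
The mechanism behind this bias can be shown with a toy simulation (all numbers invented): when the dropout probability depends on the efficacy value itself, a completers-only summary is biased:

```python
import numpy as np

rng = np.random.default_rng(1)

# Subjects with low efficacy are more likely to drop out, so an analysis
# restricted to completers overestimates the mean unless dropout is modelled.
n = 5000
efficacy = rng.normal(50.0, 10.0, n)            # true end-of-study values

# Dropout probability decreases with efficacy (logistic link).
p_drop = 1.0 / (1.0 + np.exp((efficacy - 45.0) / 5.0))
completed = rng.random(n) > p_drop

true_mean = efficacy.mean()
naive_mean = efficacy[completed].mean()         # ignores why data are missing
print(true_mean, naive_mean)  # the completers-only mean is biased upward
```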

  17. Intuitive Logic Revisited: New Data and a Bayesian Mixed Model Meta-Analysis

    PubMed Central

    Singmann, Henrik; Klauer, Karl Christoph; Kellen, David

    2014-01-01

    Recent research on syllogistic reasoning suggests that the logical status (valid vs. invalid) of even difficult syllogisms can be intuitively detected via differences in conceptual fluency between logically valid and invalid syllogisms when participants are asked to rate how much they like a conclusion following from a syllogism (Morsanyi & Handley, 2012). These claims of an intuitive logic are at odds with most theories of syllogistic reasoning, which posit that detecting the logical status of difficult syllogisms requires effortful and deliberate cognitive processes. We present new data replicating the effects reported by Morsanyi and Handley, but show that this effect is eliminated when controlling for a possible confound in terms of conclusion content. Additionally, we reanalyze three studies without this confound with a Bayesian mixed model meta-analysis (i.e., controlling for participant and item effects), which provides evidence for the null hypothesis and against Morsanyi and Handley's claim. PMID:24755777

  18. A mathematical model for mixed convective flow of chemically reactive Oldroyd-B fluid between isothermal stretching disks

    NASA Astrophysics Data System (ADS)

    Hashmi, M. S.; Khan, N.; Ullah Khan, Sami; Rashidi, M. M.

    In this study, we have constructed a mathematical model to investigate the heat source/sink effects in mixed convection axisymmetric flow of an incompressible, electrically conducting Oldroyd-B fluid between two infinite isothermal stretching disks. The effects of viscous dissipation and Joule heating are also considered in the heat equation. The governing partial differential equations are converted into ordinary differential equations by using appropriate similarity variables. The series solution of these dimensionless equations is constructed by using the homotopy analysis method. The convergence of the obtained solution is carefully examined. The effects of various involved parameters on pressure, velocity and temperature profiles are comprehensively studied. A graphical analysis has been presented for various values of the problem parameters. The numerical values of wall shear stress and Nusselt number are computed at both the upper and lower disks. Moreover, a graphical and tabular account of the critical values of the Frank-Kamenetskii parameter with respect to the other flow parameters is provided.

  19. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts and may not optimize power. All existing methods require O(MN²) time (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
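
The non-infinitesimal architecture targeted by the Bayesian mixture prior can be sketched as follows (the mixture weight and variances are illustrative choices, not BOLT-LMM's fitted values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-component Gaussian mixture on SNP effect sizes: most SNPs have
# near-zero effects, a small fraction have large ones.
m = 100_000                  # number of SNPs
p_large = 0.01               # fraction of SNPs with large effects
sd_large, sd_small = 0.05, 0.001

is_large = rng.random(m) < p_large
beta = np.where(is_large,
                rng.normal(0.0, sd_large, m),
                rng.normal(0.0, sd_small, m))

# Under this prior a tiny fraction of markers carries almost all of the
# effect-size variance, unlike the infinitesimal (single-Gaussian) model.
share = np.sum(beta[is_large] ** 2) / np.sum(beta ** 2)
print(is_large.mean(), share)
```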

  20. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    PubMed Central

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
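
A toy version of this design, with subject-level random effects for mean RT and the spatial (cue-validity) effect built to correlate positively (all magnitudes, in ms, are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

n_subj, n_trials = 60, 200
sd_int, sd_slope, r = 40.0, 15.0, 0.7
cov = np.array([[sd_int**2,             r * sd_int * sd_slope],
                [r * sd_int * sd_slope, sd_slope**2]])
re = rng.multivariate_normal([0.0, 0.0], cov, n_subj)  # (intercept, slope)

x = rng.integers(0, 2, (n_subj, n_trials))             # 1 = invalid-cue trial
rt = (400.0 + re[:, [0]]) + (30.0 + re[:, [1]]) * x \
     + rng.normal(0.0, 50.0, (n_subj, n_trials))

# Per-subject summaries recover the built-in correlation between overall
# speed and the size of the spatial effect (slow subjects, larger effect).
mean_rt = rt.mean(axis=1)
effect = np.array([rt[i, x[i] == 1].mean() - rt[i, x[i] == 0].mean()
                   for i in range(n_subj)])
corr = np.corrcoef(mean_rt, effect)[0, 1]
print(corr)  # positive, mirroring the pattern reported above
```

An LMM fitted to such data would estimate the same variance/covariance components jointly with the fixed effects rather than in a two-stage summary as here.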

  1. Large eddy simulation model for wind-driven sea circulation in coastal areas

    NASA Astrophysics Data System (ADS)

    Petronio, A.; Roman, F.; Nasello, C.; Armenio, V.

    2013-12-01

    In the present paper a state-of-the-art large eddy simulation model (LES-COAST), suited for the analysis of water circulation and mixing in closed or semi-closed areas, is presented and applied to the study of the hydrodynamic characteristics of the Muggia bay, the industrial harbor of the city of Trieste, Italy. The model solves the non-hydrostatic, unsteady Navier-Stokes equations, under the Boussinesq approximation for temperature and salinity buoyancy effects, using a novel, two-eddy viscosity Smagorinsky model for the closure of the subgrid-scale momentum fluxes. The model employs: a simple and effective technique to take into account wind-stress inhomogeneity related to the blocking effect of emerged structures, which, in turn, can drive local-scale, short-term pollutant dispersion; a new nesting procedure to reconstruct instantaneous, turbulent velocity components, temperature and salinity at the open boundaries of the domain using data coming from large-scale circulation models (LCM). Validation tests have shown that the model reproduces field measurement satisfactorily. The analysis of water circulation and mixing in the Muggia bay has been carried out under three typical breeze conditions. Water circulation has been shown to behave as in typical semi-closed basins, with an upper layer moving along the wind direction (apart from the anti-cyclonic veering associated with the Coriolis force) and a bottom layer, thicker and slower than the upper one, moving along the opposite direction. The study has shown that water vertical mixing in the bay is inhibited by a large level of stable stratification, mainly associated with vertical variation in salinity and, to a minor extent, with temperature variation along the water column. More intense mixing, quantified by sub-critical values of the gradient Richardson number, is present in near-coastal regions where upwelling/downwelling phenomena occur. 
The analysis of instantaneous fields has detected the presence of large cross-sectional eddies spanning the whole water column and contributing to vertical mixing, associated with the presence of sub-surface horizontal turbulent structures. Analysis of water renewal within the bay shows that, under the typical breeze regimes considered in the study, the residence time of water in the bay is of the order of a few days. Finally, vertical eddy viscosity has been calculated and shown to vary by a couple of orders of magnitude along the water column, with larger values near the bottom surface where density stratification is smaller.
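
The gradient Richardson number criterion used above to quantify mixing can be illustrated with idealized profiles (the profile shapes and values are invented):

```python
import numpy as np

# Gradient Richardson number Ri = N^2 / S^2, with buoyancy frequency
# N^2 = -(g/rho0) d(rho)/dz and shear squared S^2 = (du/dz)^2.
# Sub-critical values (Ri < 0.25) mark regions of active mixing.
g, rho0 = 9.81, 1025.0                  # m/s^2, reference density kg/m^3

z = np.linspace(0.0, -20.0, 21)         # depth coordinate, m (0 = surface)
rho = 1025.0 - 0.05 * z                 # density increasing downward
u = 0.3 * np.exp(z / 5.0)               # wind-driven current decaying with depth

N2 = -(g / rho0) * np.gradient(rho, z)  # buoyancy frequency squared
S2 = np.gradient(u, z) ** 2             # shear squared
Ri = N2 / S2

print(Ri.min(), Ri.max())  # sub-critical only in the sheared surface layer
```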

  2. Estimating the numerical diapycnal mixing in the GO5.0 ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex; Nurser, George

    2014-05-01

    Constant-depth (or "z-coordinate") ocean models such as MOM and NEMO have become the de facto workhorses of climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes (e.g. Hofmann and Maqueda, 2006), and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al., 2013). It uses version 3.4 of the NEMO model on the ORCA025 global tripolar grid. Two approaches to quantifying the numerical diapycnal mixing in this model are described: the first is based on the isopycnal watermass analysis of Lee et al. (2002), while the second uses a passive tracer to diagnose mixing across density surfaces. Results from these two methods will be compared and contrasted. Hofmann, M. and Maqueda, M. A. M., 2006. Performance of a second-order moments advection scheme in an ocean general circulation model. JGR-Oceans, 111(C5). Lee, M.-M., Coward, A. C., and Nurser, A. G., 2002. Spurious diapycnal mixing of deep waters in an eddy-permitting global ocean model. JPO 32, 1522-1535. Megann, A., Storkey, D., Aksenov, Y., Alderson, S., Calvert, D., Graham, T., Hyder, P., Siddorn, J., and Sinha, B., 2013. GO5.0: The joint NERC-Met Office NEMO global ocean model for use in coupled and forced applications. Geosci. Model Dev. Discuss., 6, 5747-5799.

  3. A numerical study of automotive turbocharger mixed flow turbine inlet geometry for off design performance

    NASA Astrophysics Data System (ADS)

    Leonard, T.; Spence, S.; Early, J.; Filsinger, D.

    2013-12-01

    Mixed flow turbines represent a potential solution to the increasing requirement for high pressure, low velocity ratio operation in turbocharger applications. While literature exists on the use of these turbines at such operating conditions, there is a lack of detailed design guidance for defining the basic geometry of the turbine, in particular the cone angle: the angle at which the inlet of the mixed flow turbine is inclined to the axis. This study investigates the effect and interaction of such mixed flow turbine design parameters. Computational fluid dynamics (CFD) was initially used to investigate the performance of a modern radial turbine, creating a baseline for subsequent mixed flow designs. Existing experimental data were used to validate this model. Using the CFD model, a number of mixed flow turbine designs were investigated, including studies varying the cone angle and the associated inlet blade angle. The results of this analysis provide insight into the performance of a mixed flow turbine with respect to cone and inlet blade angle.

  4. FORMAL UNCERTAINTY ANALYSIS OF A LAGRANGIAN PHOTOCHEMICAL AIR POLLUTION MODEL. (R824792)

    EPA Science Inventory

    This study applied Monte Carlo analysis with Latin hypercube sampling to evaluate the effects of uncertainty in air parcel trajectory paths, emissions, rate constants, deposition affinities, mixing heights, and atmospheric stability on predictions from a vertically...

  5. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    PubMed

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX (Laplace) and SuperMix (Gaussian quadrature), perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  6. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    PubMed Central

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  7. Effects of preheat and mix on the fuel adiabat of an imploding capsule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, B.; Kwan, T. J. T.; Wang, Y. M.

    We demonstrate the effect of preheat, hydrodynamic mix and vorticity on the adiabat of the deuterium-tritium (DT) fuel in fusion capsule experiments. We show that the adiabat of the DT fuel increases as a result of hydrodynamic mixing, due to the entropy of mixture. An upper limit on mix, M_clean/M_DT ≥ 0.98, is found to be necessary to keep the DT fuel on a low adiabat. We demonstrate in this study that the use of a high adiabat for the DT fuel in theoretical analysis, with the aid of 1D code simulations, could explain some aspects of 3D effects and mix in capsule implosion. Furthermore, we can infer from our physics model and the observed neutron images the adiabat of the DT fuel in the capsule and the amount of mix produced in the hot spot.

  8. Effects of preheat and mix on the fuel adiabat of an imploding capsule

    DOE PAGES

    Cheng, B.; Kwan, T. J. T.; Wang, Y. M.; ...

    2016-12-01

    We demonstrate the effect of preheat, hydrodynamic mix and vorticity on the adiabat of the deuterium-tritium (DT) fuel in fusion capsule experiments. We show that the adiabat of the DT fuel increases as a result of hydrodynamic mixing, due to the entropy of mixture. An upper limit on mix, M_clean/M_DT ≥ 0.98, is found to be necessary to keep the DT fuel on a low adiabat. We demonstrate in this study that the use of a high adiabat for the DT fuel in theoretical analysis, with the aid of 1D code simulations, could explain some aspects of 3D effects and mix in capsule implosion. Furthermore, we can infer from our physics model and the observed neutron images the adiabat of the DT fuel in the capsule and the amount of mix produced in the hot spot.

  9. Modelling Kepler red giants in eclipsing binaries: calibrating the mixing-length parameter with asteroseismology

    NASA Astrophysics Data System (ADS)

    Li, Tanda; Bedding, Timothy R.; Huber, Daniel; Ball, Warrick H.; Stello, Dennis; Murphy, Simon J.; Bland-Hawthorn, Joss

    2018-03-01

    Stellar models rely on a number of free parameters. High-quality observations of eclipsing binary stars observed by Kepler offer a great opportunity to calibrate model parameters for evolved stars. Our study focuses on six Kepler red giants with the goal of calibrating the mixing-length parameter of convection as well as the asteroseismic surface term in models. We introduce a new method to improve the identification of oscillation modes that exploits theoretical frequencies to guide the mode identification ('peak-bagging') stage of the data analysis. Our results indicate that the convective mixing-length parameter (α) is ≈14 per cent larger for red giants than for the Sun, in agreement with recent results from modelling the APOGEE stars. We found that the asteroseismic surface term (i.e. the frequency offset between the observed and predicted modes) correlates with stellar parameters (Teff, log g) and the mixing-length parameter. This frequency offset generally decreases as giants evolve. The two coefficients a_{-1} and a_3 for the inverse and cubic terms that have been used to describe the surface-term correction are found to correlate linearly. The effect of the surface term is also seen in the p-g mixed modes; however, established methods for correcting the effect are not able to properly correct the g-dominated modes in late evolved stars.
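
The two-term surface correction referred to above can be written down explicitly; this follows the commonly used Ball & Gizon (2014) parameterization, and the coefficient values, acoustic cutoff frequency and mode inertia below are invented for illustration:

```python
import numpy as np

# delta_nu = (a_m1 * (nu/nu_ac)**-1 + a3 * (nu/nu_ac)**3) / inertia,
# combining an inverse and a cubic term in scaled frequency.
def surface_correction(nu, nu_ac, a_m1, a3, inertia):
    x = nu / nu_ac
    return (a_m1 / x + a3 * x**3) / inertia

nu = np.linspace(30.0, 150.0, 5)  # mode frequencies, muHz (red-giant range)
delta = surface_correction(nu, nu_ac=250.0, a_m1=-0.02, a3=-3.0, inertia=1.0)
print(delta)  # negative offsets: observed frequencies below model frequencies
```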

  10. Formation of parametric images using mixed-effects models: a feasibility study.

    PubMed

    Huang, Husan-Ming; Shih, Yi-Yu; Lin, Chieh

    2016-03-01

    Mixed-effects models have been widely used in the analysis of longitudinal data. By presenting the parameters as a combination of fixed effects and random effects, mixed-effects models incorporating both within- and between-subject variations are capable of improving parameter estimation. In this work, we demonstrate the feasibility of using a non-linear mixed-effects (NLME) approach for generating parametric images from medical imaging data of a single study. By assuming that all voxels in the image are independent, we used simulation and animal data to evaluate whether NLME can improve the voxel-wise parameter estimation. For testing purposes, intravoxel incoherent motion (IVIM) diffusion parameters including perfusion fraction, pseudo-diffusion coefficient and true diffusion coefficient were estimated using diffusion-weighted MR images and NLME through fitting the IVIM model. The conventional method of non-linear least squares (NLLS) was used as the standard approach for comparison of the resulting parametric images. In the simulated data, NLME provides more accurate and precise estimates of diffusion parameters compared with NLLS. Similarly, we found that NLME has the ability to improve the signal-to-noise ratio of parametric images obtained from rat brain data. These data have shown that it is feasible to apply NLME in parametric image generation, and the parametric image quality can be accordingly improved with the use of NLME. With the flexibility to be adapted to other models or modalities, NLME may become a useful tool to improve the parametric image quality in the future. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.

    PubMed

    Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng

    2014-06-01

    Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.
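
For contrast with the incomplete-mixing behavior reported above, the complete-mixing baseline commonly assumed in network water-quality models is a simple flux-weighted average:

```python
def complete_mixing_outlet(conc_in, flow_in):
    """Flux-weighted outlet concentration under the complete-mixing
    assumption at a junction (the baseline the study above shows to be
    inaccurate for crosses and double-Tee connections)."""
    total_flux = sum(c * q for c, q in zip(conc_in, flow_in))
    return total_flux / sum(flow_in)

# Cross junction with a clean inlet (0.0 mg/L at 2 L/s) and a
# contaminated inlet (1.0 mg/L at 1 L/s): every outlet is assigned
# the same averaged concentration, regardless of flow momentum ratio.
c_out = complete_mixing_outlet([0.0, 1.0], [2.0, 1.0])
print(c_out)  # 1/3 mg/L in both outlets under complete mixing
```

The study's point is precisely that real crosses and double-Tees distribute solute unevenly between outlets, so this single averaged value can misrepresent downstream water quality.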

  12. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with Student-type t-tests, regression, or standard AN(C)OVA, in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling continuous explanatory variables (covariates) in the presence of a within-subject (repeated-measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects.
Simulations of one prototypical scenario indicate that LME modeling maintains a balance between control of false positives and sensitivity of activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789

  13. Linear mixed-effects modeling approach to FMRI group analysis.

    PubMed

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-tests, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling continuous explanatory variables (covariates) in the presence of a within-subject (repeated-measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects.
Simulations of one prototypical scenario indicate that LME modeling maintains a balance between control of false positives and sensitivity of activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. Published by Elsevier Inc.
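    As a companion sketch to the ICC remark in the abstract above: the classical one-way random-effects ANOVA estimator of ICC(1) on simulated repeated-measures data. This is a minimal pure-Python illustration, not the LME implementation the paper describes; the subject counts and variance components are assumptions chosen so the true ICC is 0.5.

    ```python
    import random

    random.seed(0)

    # Simulate n subjects with k repeated measures each:
    # y_ij = mu + u_i + e_ij, with Var(u) = Var(e) = 1, so the true ICC is 0.5.
    n_subj, k = 40, 10
    data = []
    for _ in range(n_subj):
        u = random.gauss(0, 1)                 # subject-level random effect
        data.append([10.0 + u + random.gauss(0, 1) for _ in range(k)])

    grand = sum(sum(row) for row in data) / (n_subj * k)
    means = [sum(row) / k for row in data]

    # One-way random-effects ANOVA mean squares (between and within subjects).
    msb = k * sum((m - grand) ** 2 for m in means) / (n_subj - 1)
    msw = sum((y - m) ** 2 for row, m in zip(data, means) for y in row) / (n_subj * (k - 1))

    # ICC(1): proportion of total variance attributable to the subject random effect.
    icc = (msb - msw) / (msb + (k - 1) * msw)
    print(f"ICC estimate: {icc:.3f} (true 0.5)")
    ```

    An LME fit with a subject random intercept would return the same two variance components directly, which is the route the abstract's crossed-random-effects models take.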

  14. Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal

    ERIC Educational Resources Information Center

    Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.

    2013-01-01

    Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…

  15. Mixing in the shear superposition micromixer: three-dimensional analysis.

    PubMed

    Bottausci, Frederic; Mezić, Igor; Meinhart, Carl D; Cardonne, Caroline

    2004-05-15

    In this paper, we analyse mixing in an active chaotic advection micromixer. The micromixer consists of a main rectangular channel and three cross-stream secondary channels that provide the ability for time-dependent actuation of the flow stream in the direction orthogonal to the main stream. Three-dimensional motion in the mixer is studied. Numerical simulations and modelling of the flow are pursued in order to understand the experiments. It is shown that for some parameter values a simple model can be derived that clearly represents the nature of the flow. Particle image velocimetry measurements of the flow are compared with numerical simulations and the analytical model. A measure for mixing, the mixing variance coefficient (MVC), is analysed. It is shown that mixing is substantially improved with multiple side channels carrying oscillatory flows whose frequencies increase downstream. The optimization of MVC for single side-channel mixing is presented. It is shown that the dependence of MVC on frequency is not monotone, and a local minimum is found. Residence time distributions derived from the analytical model are analysed. It is shown that, while the average Lagrangian velocity profile is flattened relative to the steady flow, Taylor-dispersion effects are still present for the current micromixer configuration.

  16. Mixed conditional logistic regression for habitat selection studies.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Courbin, Nicolas

    2010-05-01

    1. Resource selection functions (RSFs) are becoming a dominant tool in habitat selection studies. RSF coefficients can be estimated with unconditional (standard) and conditional logistic regressions. While the advantage of mixed-effects models is recognized for standard logistic regression, mixed conditional logistic regression remains largely overlooked in ecological studies. 2. We demonstrate the significance of mixed conditional logistic regression for habitat selection studies. First, we use spatially explicit models to illustrate how mixed-effects RSFs can be useful in the presence of inter-individual heterogeneity in selection and when the assumption of independence from irrelevant alternatives (IIA) is violated. The IIA hypothesis states that the strength of preference for habitat type A over habitat type B does not depend on the other habitat types also available. Secondly, we demonstrate the significance of mixed-effects models to evaluate habitat selection of free-ranging bison Bison bison. 3. When movement rules were homogeneous among individuals and the IIA assumption was respected, fixed-effects RSFs adequately described habitat selection by simulated animals. In situations violating the inter-individual homogeneity and IIA assumptions, however, RSFs were best estimated with mixed-effects regressions, and fixed-effects models could even provide faulty conclusions. 4. Mixed-effects models indicate that bison did not select farmlands, but exhibited strong inter-individual variations in their response to farmlands. Less than half of the bison preferred farmlands over forests. Conversely, the fixed-effect model simply suggested an overall selection for farmlands. 5. Conditional logistic regression is recognized as a powerful approach to evaluate habitat selection when resource availability changes. This regression is increasingly used in ecological studies, but almost exclusively in the context of fixed-effects models. 
Fitness maximization can imply differences in trade-offs among individuals, which can yield inter-individual differences in selection and lead to departure from IIA. These situations are best modelled with mixed-effects models. Mixed-effects conditional logistic regression should become a valuable tool for ecological research.
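    The IIA hypothesis described above can be made concrete with a small sketch. The following conditional-logit calculation (the coefficient and habitat covariate values are illustrative, not from the paper) shows why a fixed-effects model forces the odds of habitat A over habitat B to be unchanged when a third habitat enters the choice set, which is exactly the assumption that mixed conditional logistic regression relaxes.

    ```python
    import math

    def choice_probs(beta, x):
        """Conditional logit: P(choose j) = exp(beta*x_j) / sum_k exp(beta*x_k)."""
        scores = [math.exp(beta * xi) for xi in x]
        total = sum(scores)
        return [s / total for s in scores]

    beta = 0.8                      # illustrative selection coefficient
    x_ab = [2.0, 1.0]               # resource covariate for habitats A and B only
    x_abc = [2.0, 1.0, 3.0]         # habitat C added to the choice set

    p_ab = choice_probs(beta, x_ab)
    p_abc = choice_probs(beta, x_abc)

    # IIA: the odds ratio P(A)/P(B) = exp(beta*(x_A - x_B)) is the same
    # whether or not habitat C is available.
    print(p_ab[0] / p_ab[1], p_abc[0] / p_abc[1])
    ```

    Letting beta vary across individuals, as a random coefficient, makes the population-level odds ratio depend on the full choice set, which is how the mixed model accommodates departures from IIA.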

  17. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    PubMed

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  18. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
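    A minimal numerical reading of the diagnostics above, with made-up values: aging by mixing is the difference between AoA and RCTT, and the mixing efficiency is expressed here as the relative increase in AoA by mixing. The ratio form below is an assumption for illustration only, not necessarily the paper's exact diagnostic.

    ```python
    def aging_by_mixing(aoa, rctt):
        """Additional age contributed by two-way mixing: AoA minus RCTT."""
        return aoa - rctt

    def mixing_efficiency(aoa, rctt):
        """Relative increase in AoA by mixing; this simple ratio form is an
        illustrative assumption, not necessarily the paper's definition."""
        return (aoa - rctt) / rctt

    # Illustrative (made-up) stratospheric values in years.
    aoa, rctt = 4.2, 2.8
    print(aging_by_mixing(aoa, rctt))      # ~1.4 years of aging by mixing
    print(mixing_efficiency(aoa, rctt))    # ~0.5
    ```

    On this reading, the reported spread of 0.24 to 1.02 across models means that mixing adds anywhere from about a quarter to slightly more than the residual-circulation transit time itself.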

  19. Applications of MIDAS regression in analysing trends in water quality

    NASA Astrophysics Data System (ADS)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets with different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality using a flexible model called Mixed Data Sampling (MIDAS) regression. This model arises because of the mixed frequencies at which the data are collected: typically, water quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of MIDAS regression lies in its flexible and parsimonious modelling of the influence of rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-frequency nature of the data.
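    The MIDAS idea above can be sketched briefly: high-frequency daily rainfall is collapsed into a single fortnightly regressor through a parsimonious lag polynomial, so only two weight parameters need estimating rather than fourteen lag coefficients. The exponential Almon weighting below is a standard MIDAS choice; the parameter values and rainfall data are illustrative, not the Shoalhaven analysis itself.

    ```python
    import math

    def exp_almon_weights(theta1, theta2, n_lags):
        """Exponential Almon lag weights used in MIDAS regression:
        w_j = exp(theta1*j + theta2*j^2) / sum_k exp(theta1*k + theta2*k^2)."""
        raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(n_lags)]
        total = sum(raw)
        return [r / total for r in raw]

    # Aggregate 14 daily rainfall readings into one regressor for a fortnightly
    # water-quality observation (theta values and data are illustrative).
    weights = exp_almon_weights(theta1=0.1, theta2=-0.05, n_lags=14)
    daily_rain = [0, 2, 5, 0, 0, 1, 0, 0, 3, 8, 0, 0, 0, 1]   # mm/day, most recent first
    midas_regressor = sum(w * r for w, r in zip(weights, daily_rain))

    print(f"weights sum to {sum(weights):.6f}")
    print(f"aggregated rainfall regressor: {midas_regressor:.3f}")
    ```

    The aggregated regressor then enters an ordinary regression for the fortnightly water-quality response, with the theta parameters estimated jointly with the regression coefficients.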

  20. Examination of turbulent entrainment-mixing mechanisms using a combined approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, C.; Liu, Y.; Niu, S.

    2011-10-01

    Turbulent entrainment-mixing mechanisms are investigated by applying a combined approach to aircraft measurements of three drizzling and two nondrizzling stratocumulus clouds collected over the U.S. Department of Energy's Atmospheric Radiation Measurement Southern Great Plains site during the March 2000 cloud Intensive Observation Period. Microphysical analysis shows that the inhomogeneous entrainment-mixing process occurs much more frequently than its homogeneous counterpart, and most cases of the inhomogeneous process are close to the extreme scenario, having drastically varying cloud droplet concentration but roughly constant volume-mean radius. It is also found that the inhomogeneous entrainment-mixing process can occur both near the cloud top and in the middle level of a cloud, and in both the nondrizzling clouds and the nondrizzling legs of the drizzling clouds. A new dimensionless number, the scale number, is introduced as a dynamical measure of the different entrainment-mixing processes, with a larger scale number corresponding to a higher degree of homogeneous entrainment mixing. Further empirical analysis shows that the scale number separating the homogeneous from the inhomogeneous entrainment-mixing process is around 50, and most legs have smaller scale numbers. Thermodynamic analysis shows that sampling averages of filament structures finer than the instrumental spatial resolution also contribute to the dominance of the inhomogeneous entrainment-mixing mechanism. The combined microphysical-dynamical-thermodynamic analysis sheds new light on developing parameterizations of entrainment-mixing processes and their microphysical and radiative effects in large-scale models.

  1. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

    Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the Strengths and Difficulties Questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.
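    As a minimal sketch of the quantile-modelling idea above (M-quantile regression generalizes it via an influence function, which is not shown here): minimizing the asymmetric "check" loss over a location parameter recovers an empirical percentile, so different percentiles respond very differently to a skewed tail. The data and search grid are illustrative.

    ```python
    def check_loss(residual, tau):
        """Quantile 'check' (pinball) loss: tau*r for r >= 0, (tau-1)*r otherwise."""
        return tau * residual if residual >= 0 else (tau - 1) * residual

    def fit_quantile(sample, tau, grid):
        """Location parameter minimizing the summed check loss over a grid."""
        return min(grid, key=lambda q: sum(check_loss(y - q, tau) for y in sample))

    sample = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]      # skewed: mean 14.5, median 5.5
    grid = [x / 10 for x in range(0, 1200)]
    q50 = fit_quantile(sample, 0.50, grid)
    q90 = fit_quantile(sample, 0.90, grid)
    print(q50, q90)   # the median ignores the outlier; the 90th percentile does not
    ```

    Modelling several such location parameters, with random effects for the multilevel structure, is what lets the approach describe how risk factors act differently across the outcome distribution.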

  2. Bus accident analysis of routes with/without bus priority.

    PubMed

    Goh, Kelvin Chun Keong; Currie, Graham; Sarvi, Majid; Logan, David

    2014-04-01

    This paper summarises findings on road safety performance and bus-involved accidents in Melbourne along roads where bus priority measures had been applied. Results from an empirical analysis of the accident types revealed a significant reduction in the proportion of accidents involving buses hitting stationary objects and vehicles, which suggests that bus priority helps address manoeuvrability issues for buses. Mixed-effects negative binomial (MENB) regression and back-propagation neural network (BPNN) modelling of bus accidents, considering wider influences on accident rates at the route-section level, also revealed significant safety benefits when bus priority is provided. Sensitivity analyses on the BPNN model showed general agreement in the predicted accident frequency between both models. The slightly better performance of the MENB model suggests merit in adopting a mixed-effects modelling approach for accident count prediction in practice, given its capability to account for unobserved location- and time-specific factors. A major implication of this research is that bus priority in Melbourne's context acts to improve road safety and should be a major consideration for road management agencies when implementing bus priority and road schemes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable, in the sense that the parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. Finally, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.

  4. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    ERIC Educational Resources Information Center

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  5. Data on copula modeling of mixed discrete and continuous neural time series.

    PubMed

    Hu, Meng; Li, Mingyao; Li, Wu; Liang, Hualou

    2016-06-01

    Copula is an important tool for modeling neural dependence. Recent work on copula has been expanded to jointly model mixed time series in neuroscience ("Hu et al., 2016, Joint Analysis of Spikes and Local Field Potentials using Copula" [1]). Here we present further data for joint analysis of spike and local field potential (LFP) with copula modeling. In particular, the details of different model orders and the influence of possible spike contamination in LFP data from the same and different electrode recordings are presented. To further facilitate the use of our copula model for the analysis of mixed data, we provide the Matlab codes, together with example data.
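    The copula construction described above can be sketched in a few lines: a Gaussian copula couples a continuous LFP-like marginal with a discrete spike-like marginal through a single latent correlation. Everything here (the correlation, threshold, and marginals) is an illustrative assumption, not the authors' model, code, or data.

    ```python
    import math
    import random

    def std_normal_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    random.seed(1)
    rho = 0.7                     # latent (copula) correlation, illustrative

    pairs = []
    for _ in range(5000):
        # Correlated standard normals, then map to the copula (uniform) scale.
        z1 = random.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
        u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)
        lfp = 2 * u1 - 1                      # continuous marginal (LFP-like)
        spike = 1 if u2 > 0.8 else 0          # discrete marginal (spike-like)
        pairs.append((lfp, spike))

    # Spikes should co-occur with large LFP values because of the latent correlation.
    mean_lfp_spike = sum(l for l, s in pairs if s) / sum(s for _, s in pairs)
    mean_lfp_no = sum(l for l, s in pairs if not s) / sum(1 for _, s in pairs if not s)
    print(mean_lfp_spike > mean_lfp_no)
    ```

    The point of the copula is exactly this separation: the latent correlation captures spike-LFP dependence while each variable keeps its own, possibly discrete, marginal distribution.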

  6. Probability of atrial fibrillation after ablation: Using a parametric nonlinear temporal decomposition mixed effects model.

    PubMed

    Rajeswaran, Jeevanantham; Blackstone, Eugene H; Ehrlinger, John; Li, Liang; Ishwaran, Hemant; Parides, Michael K

    2018-01-01

    Atrial fibrillation is an arrhythmic disorder where the electrical signals of the heart become irregular. The probability of atrial fibrillation (binary response) is often time varying in a structured fashion, as is the influence of associated risk factors. A generalized nonlinear mixed effects model is presented to estimate the time-related probability of atrial fibrillation using a temporal decomposition approach to reveal the pattern of the probability of atrial fibrillation and their determinants. This methodology generalizes to patient-specific analysis of longitudinal binary data with possibly time-varying effects of covariates and with different patient-specific random effects influencing different temporal phases. The motivation and application of this model is illustrated using longitudinally measured atrial fibrillation data obtained through weekly trans-telephonic monitoring from an NIH sponsored clinical trial being conducted by the Cardiothoracic Surgery Clinical Trials Network.

  7. Impact of Grain Shape and Multiple Black Carbon Internal Mixing on Snow Albedo: Parameterization and Radiative Effect Analysis

    NASA Astrophysics Data System (ADS)

    He, Cenlin; Liou, Kuo-Nan; Takano, Yoshi; Yang, Ping; Qi, Ling; Chen, Fei

    2018-01-01

    We quantify the effects of grain shape and multiple black carbon (BC)-snow internal mixing on snow albedo by explicitly resolving shape and mixing structures. Nonspherical snow grains tend to have higher albedos than spheres with the same effective sizes, while the albedo difference due to shape effects increases with grain size, with up to 0.013 and 0.055 for effective radii of 1,000 μm at visible and near-infrared bands, respectively. BC-snow internal mixing reduces snow albedo at wavelengths < 1.5 μm, with negligible effects at longer wavelengths. Nonspherical snow grains show less BC-induced albedo reductions than spheres with the same effective sizes by up to 0.06 at ultraviolet and visible bands. Compared with external mixing, internal mixing enhances snow albedo reduction by a factor of 1.2-2.0 at visible wavelengths depending on BC concentration and snow shape. The opposite effects on albedo reductions due to snow grain nonsphericity and BC-snow internal mixing point toward a careful investigation of these two factors simultaneously in climate modeling. We further develop parameterizations for snow albedo and its reduction by accounting for grain shape and BC-snow internal/external mixing. Combining the parameterizations with BC-in-snow measurements in China, North America, and the Arctic, we estimate that nonspherical snow grains reduce BC-induced albedo radiative effects by up to 50% compared with spherical grains. Moreover, BC-snow internal mixing enhances the albedo effects by up to 30% (130%) for spherical (nonspherical) grains relative to external mixing. The overall uncertainty induced by snow shape and BC-snow mixing state is about 21-32%.

  8. Development and Validation of a 3-Dimensional CFB Furnace Model

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out in parallel with the development of three-dimensional process modeling, providing a chain of knowledge that is fed back into phenomenon research. Knowledge gathered in model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling the combustion and the formation of char and volatiles for various fuel types in CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and biofuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and are used together with lateral temperature profiles in the bed and upper parts of the furnace to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed by the lower-furnace design and by the mixing characteristics of fuel and combustion air, which shape the NO furnace profile through reduction and volatile-nitrogen reactions.
This paper presents a CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.

  9. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective-scale ocean disturbances induced by atmospheric precipitation in ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing, in terms of vertical velocity derived from TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between the coupled experiments with 1-D and 2-D ocean models can be as large as 0.3 PSU and 0.4°C, respectively. Without freshwater effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so the nocturnal mixed-layer temperatures tend to be horizontally uniform. The freshwater flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so the mixed-layer temperatures have large horizontal fluctuations. Furthermore, the freshwater flux exhibits larger spatial fluctuations than the surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield a high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  10. A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.

    PubMed

    Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf

    2018-01-01

    Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.

  11. A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models

    PubMed Central

    Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S.; Wu, Xiaowei; Müller, Rolf

    2017-01-01

    Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design. PMID:29749977

  12. CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS

    EPA Science Inventory

    Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...

  13. Mixed-effects varying-coefficient model with skewed distribution coupled with cause-specific varying-coefficient hazard model with random-effects for longitudinal-competing risks data analysis.

    PubMed

    Lu, Tao; Wang, Min; Liu, Guangying; Dong, Guang-Hui; Qian, Feng

    2016-01-01

    It is well known that there is a strong relationship between HIV viral load and CD4 cell counts in AIDS studies. However, the relationship between them changes during the course of treatment and may vary among individuals. During treatments, some individuals may experience terminal events such as death. Because the terminal event may be related to the individual's viral load measurements, the terminal mechanism is non-ignorable. Furthermore, there exist competing risks from multiple types of events, such as AIDS-related death and death from other causes. Most joint models for the analysis of longitudinal-survival data developed in the literature have focused on constant coefficients and assume a symmetric distribution for the endpoints, which does not meet the need to investigate the varying relationship between HIV viral load and CD4 cell counts in practice. We develop a mixed-effects varying-coefficient model with skewed distribution coupled with a cause-specific varying-coefficient hazard model with random-effects to deal with the varying relationship between the two endpoints for longitudinal-competing risks survival data. A fully Bayesian inference procedure is established to estimate parameters in the joint model. The proposed method is applied to a multicenter AIDS cohort study. Various scenario-based potential models that account for partial data features are compared. Some interesting findings are presented.

  14. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  15. Parametrics on 2D Navier-Stokes analysis of a Mach 2.68 bifurcated rectangular mixed-compression inlet

    NASA Technical Reports Server (NTRS)

    Mizukami, M.; Saunders, J. D.

    1995-01-01

    The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a two-dimensional (2D) Navier-Stokes flow solver. Parametric studies were performed on turbulence models, computational grids and bleed models. The computed flowfield was substantially different from the original inviscid design, due to interactions of shocks, boundary layers, and bleed. Good agreement with experimental data was obtained in many aspects. Many of the discrepancies were thought to originate primarily from 3D effects. Therefore, a balance should be struck between expending resources on a high fidelity 2D simulation, and the inherent limitations of 2D analysis. The solutions were fairly insensitive to turbulence models, grids and bleed models. Overall, the k-e turbulence model, and the bleed models based on unchoked bleed hole discharge coefficients or uniform velocity are recommended. The 2D Navier-Stokes methods appear to be a useful tool for the design and analysis of supersonic inlets, by providing a higher fidelity simulation of the inlet flowfield than inviscid methods, in a reasonable turnaround time.

  16. Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.

    PubMed

    Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier

    2011-11-01

    One of the procedures used most recently with longitudinal data is linear mixed models. In the context of health research the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) unconditional linear growth curve model; (b) growth model with subject-associated variables; and (c) individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
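    As a rough illustration of the growth-curve models named in this abstract, the sketch below simulates subject-specific trajectories of an awareness score and recovers the average growth rate with a simple two-stage approximation (per-subject OLS slopes, then their mean). All parameter values are invented; a full linear mixed-model fit would use dedicated statistical software rather than this shortcut.

```python
# Hedged sketch: a two-stage approximation to an unconditional linear
# growth-curve model with random intercepts and slopes. Parameter values
# are illustrative only, not taken from the study.
import random

random.seed(0)

def simulate_subject(n_waves=4, mean_slope=0.5):
    """Simulate one subject's awareness scores over n_waves assessments."""
    intercept = random.gauss(3.0, 0.5)      # subject-specific baseline
    slope = random.gauss(mean_slope, 0.1)   # subject-specific growth rate
    return [(t, intercept + slope * t + random.gauss(0, 0.2))
            for t in range(n_waves)]

def ols_slope(points):
    """Ordinary least-squares slope for one subject's (time, score) pairs."""
    n = len(points)
    mx = sum(t for t, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((t - mx) * (y - my) for t, y in points)
    sxx = sum((t - mx) ** 2 for t, _ in points)
    return sxy / sxx

# Stage 1: per-subject slopes; stage 2: their mean estimates the
# fixed-effect growth rate (awareness increasing over time).
slopes = [ols_slope(simulate_subject()) for _ in range(200)]
fixed_slope = sum(slopes) / len(slopes)   # close to the simulated 0.5
```

    A proper mixed-model fit additionally pools information across subjects and partitions between- and within-subject variance, which is what lets covariates such as schooling years explain part of the between-subjects variation.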

  17. Linear Mixed Models: GUM and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Expression of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues, and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
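    The random-effects ANOVA setting the abstract refers to can be sketched numerically: repeated measurements grouped by day yield a within-day (repeatability) variance and a between-day variance component, which together feed the uncertainty budget. The data values below are invented for illustration.

```python
# Illustrative one-way random-effects ANOVA variance-component estimate,
# the GUM Annex H.5 setting. All measurement values are made up.
data = [
    [10.02, 10.05, 10.03],   # day 1
    [10.10, 10.08, 10.11],   # day 2
    [10.00, 9.99, 10.02],    # day 3
    [10.06, 10.07, 10.05],   # day 4
]
J, K = len(data), len(data[0])                 # J groups of K repeats
group_means = [sum(g) / K for g in data]
grand_mean = sum(group_means) / J

# Mean squares between and within days
msb = K * sum((m - grand_mean) ** 2 for m in group_means) / (J - 1)
msw = sum((x - m) ** 2
          for g, m in zip(data, group_means) for x in g) / (J * (K - 1))

s2_within = msw                          # repeatability variance
s2_between = max((msb - msw) / K, 0.0)   # day-to-day variance component

# Standard uncertainty of the grand mean combines both components
u_mean = (s2_between / J + s2_within / (J * K)) ** 0.5
```

    Linear mixed models generalize this layout to unbalanced designs and additional random factors, which is the extension the paper advocates.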

  18. Stand level height-diameter mixed effects models: parameters fitted using loblolly pine but calibrated for sweetgum

    Treesearch

    Curtis L. Vanderschaaf

    2008-01-01

    Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...

  19. Characterisation and modelling of mixing processes in groundwaters of a potential geological repository for nuclear wastes in crystalline rocks of Sweden.

    PubMed

    Gómez, Javier B; Gimeno, María J; Auqué, Luis F; Acero, Patricia

    2014-01-15

    This paper presents the mixing modelling results for the hydrogeochemical characterisation of groundwaters in the Laxemar area (Sweden). This area is one of the two sites that have been investigated, under the financial patronage of the Swedish Nuclear Waste and Management Co. (SKB), as possible candidates for hosting the proposed repository for the long-term storage of spent nuclear fuel. The classical geochemical modelling, interpreted in the light of the palaeohydrogeological history of the system, has shown that the driving process in the geochemical evolution of this groundwater system is the mixing between four end-member waters: a deep and old saline water, a glacial meltwater, an old marine water, and a meteoric water. In this paper we put the focus on mixing and its effects on the final chemical composition of the groundwaters using a comprehensive methodology that combines principal component analysis with mass balance calculations. This methodology allows us to test several combinations of end member waters and several combinations of compositional variables in order to find optimal solutions in terms of mixing proportions. We have applied this methodology to a dataset of 287 groundwater samples from the Laxemar area collected and analysed by SKB. The best model found uses four conservative elements (Cl, Br, oxygen-18 and deuterium), and computes mixing proportions with respect to three end member waters (saline, glacial and meteoric). Once the first order effect of mixing has been taken into account, water-rock interaction can be used to explain the remaining variability. In this way, the chemistry of each water sample can be obtained by using the mixing proportions for the conservative elements, only affected by mixing, or combining the mixing proportions and the chemical reactions for the non-conservative elements in the system, establishing the basis for predictive calculations. © 2013 Elsevier B.V. All rights reserved.
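    The mass-balance step of the methodology above can be sketched as a small linear system: given conservative-tracer concentrations for the end members, the mixing proportions of a sample follow from the tracer balances plus a sum-to-one constraint. All numbers below are invented for illustration; the actual SKB dataset uses Cl, Br, oxygen-18 and deuterium for 287 samples.

```python
# Hedged sketch of mixing-proportion recovery by mass balance.
# End-member compositions are invented placeholders.
# Columns per end member: (Cl mg/L, d18O permil)
end_members = {
    "saline":   (45000.0, -9.0),
    "glacial":  (1.0, -21.0),
    "meteoric": (100.0, -11.0),
}

def mixing_proportions(sample_cl, sample_d18o):
    """Solve the 3x3 system: two tracer balances plus sum-to-one."""
    names = list(end_members)
    A = [[end_members[n][0] for n in names],   # Cl balance
         [end_members[n][1] for n in names],   # d18O balance
         [1.0, 1.0, 1.0]]                      # proportions sum to 1
    b = [sample_cl, sample_d18o, 1.0]
    # Gaussian elimination with partial pivoting (small fixed-size system)
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return dict(zip(names, x))

# A sample mixed from 20% saline, 30% glacial, 50% meteoric water:
cl = 0.2 * 45000.0 + 0.3 * 1.0 + 0.5 * 100.0
d18o = 0.2 * -9.0 + 0.3 * -21.0 + 0.5 * -11.0
props = mixing_proportions(cl, d18o)
```

    With more tracers than end members the system becomes overdetermined, which is where the paper's combination of principal component analysis and least-squares mass balance comes in; residuals after mixing are then attributed to water-rock interaction.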

  20. Inverse Demographic Analysis of Compensatory Responses to Resource Limitation in the Mysid Crustacean Americamysis bahia

    EPA Science Inventory

    Most observations of stressor effects on marine crustaceans are made on individuals or even-aged cohorts. Results of these studies are difficult to translate into ecological predictions, either because life cycle models are incomplete, or because stressor effects on mixed age po...

  1. Functional linear models for association analysis of quantitative traits.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. 
© 2013 WILEY PERIODICALS, INC.

  2. Visualized analysis of mixed numeric and categorical data via extended self-organizing map.

    PubMed

    Hsu, Chung-Chian; Lin, Shu-Han

    2012-01-01

    Many real-world datasets are of mixed types, having numeric and categorical attributes. Even though difficult, analyzing mixed-type datasets is important. In this paper, we propose an extended self-organizing map (SOM), called MixSOM, which utilizes a distance-hierarchy data structure to facilitate the handling of numeric and categorical values in a direct, unified manner. Moreover, the extended model regularizes the prototype distance between neighboring neurons in proportion to their map distance so that structures of the clusters can be portrayed better on the map. Extensive experiments on several synthetic and real-world datasets are conducted to demonstrate the capability of the model and to compare MixSOM with several existing models including Kohonen's SOM, the generalized SOM and visualization-induced SOM. The results show that MixSOM is superior to the other models in reflecting the structure of the mixed-type data and facilitates further analysis of the data such as exploration at various levels of granularity.
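    A minimal sketch of the distance-hierarchy idea, under loose assumptions rather than the MixSOM implementation: each categorical value sits at a leaf of a weighted tree, the distance between two values is the weighted path length between their leaves, and this combines with range-normalized numeric gaps into one mixed distance. The hierarchy and weights below are invented.

```python
# Hypothetical two-level distance hierarchy: leaf -> (parent, link weight).
hierarchy = {
    "coke":       ("soft_drink", 0.3),
    "pepsi":      ("soft_drink", 0.3),
    "beer":       ("alcohol", 0.3),
    "wine":       ("alcohol", 0.3),
    "soft_drink": ("drink", 0.7),
    "alcohol":    ("drink", 0.7),
}

def depth_weights(value):
    """Map each ancestor of `value` (including itself) to its path weight."""
    out, w = {value: 0.0}, 0.0
    while value in hierarchy:
        parent, link = hierarchy[value]
        w += link
        out[parent] = w
        value = parent
    return out

def cat_distance(a, b):
    """Tree distance: path weights from each leaf to their common ancestor."""
    ua, ub = depth_weights(a), depth_weights(b)
    return min(ua[n] + ub[n] for n in ua if n in ub)

def mixed_distance(x, y, num_ranges):
    """Combine range-normalized numeric gaps with hierarchy distances."""
    d = sum(((xi - yi) / r) ** 2
            for xi, yi, r in zip(x["num"], y["num"], num_ranges))
    d += sum(cat_distance(xc, yc) ** 2
             for xc, yc in zip(x["cat"], y["cat"]))
    return d ** 0.5

# Similar categories (same parent) end up closer than dissimilar ones.
d = mixed_distance({"num": [1.0], "cat": ["coke"]},
                   {"num": [2.0], "cat": ["pepsi"]}, num_ranges=[10.0])
```

    The payoff over one-hot encoding is that "coke vs. pepsi" scores as more similar than "coke vs. beer", which is what lets a SOM prototype interpolate meaningfully between categorical values.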

  3. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  4. Linear Instability Analysis of non-uniform Bubbly Mixing layer with Two-Fluid model

    NASA Astrophysics Data System (ADS)

    Sharma, Subash; Chetty, Krishna; Lopez de Bertodano, Martin

    We examine the inviscid instability of a non-uniform adiabatic bubbly shear layer with a Two-Fluid model. The Two-Fluid model is made well-posed with closure relations for the interfacial forces. First, a characteristic analysis is carried out to study the well-posedness of the model over a range of void fractions, with interfacial forces for virtual mass, interfacial drag and interfacial pressure. A dispersion analysis then allows us to obtain growth rates and wavelengths. Then, the well-posed two-fluid model is solved using CFD to validate the results obtained with the linear stability analysis. The effect of the void fraction and the distribution profile on stability is analyzed.

  5. Determining vehicle operating speed and lateral position along horizontal curves using linear mixed-effects models.

    PubMed

    Fitzsimmons, Eric J; Kvam, Vanessa; Souleyrette, Reginald R; Nambisan, Shashi S; Bonett, Douglas G

    2013-01-01

    Despite recent improvements in highway safety in the United States, serious crashes on curves remain a significant problem. To assist in better understanding causal factors leading to this problem, this article presents and demonstrates a methodology for collection and analysis of vehicle trajectory and speed data for rural and urban curves using Z-configured road tubes. For a large number of vehicle observations at 2 horizontal curves located in Dexter and Ames, Iowa, the article develops vehicle speed and lateral position prediction models for multiple points along these curves. Linear mixed-effects models were used to predict vehicle lateral position and speed along the curves as explained by operational, vehicle, and environmental variables. Behavior was visually represented for an identified subset of "risky" drivers. Linear mixed-effect regression models provided the means to predict vehicle speed and lateral position while taking into account repeated observations of the same vehicle along horizontal curves. Speed and lateral position at point of entry were observed to influence trajectory and speed profiles. Rural horizontal curve site models are presented that indicate that the following variables were significant and influenced both vehicle speed and lateral position: time of day, direction of travel (inside or outside lane), and type of vehicle.

  6. Scale model performance test investigation of mixed flow exhaust systems for an energy efficient engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1983-01-01

    As part of the NASA Energy Efficient Engine program, scale-model performance tests of a mixed flow exhaust system were conducted. The tests were used to evaluate the performance of exhaust system mixers for high-bypass, mixed-flow turbofan engines. The tests indicated that: (1) mixer penetration has the most significant effect on both mixing effectiveness and mixer pressure loss; (2) mixing/tailpipe length improves mixing effectiveness; (3) reducing the gap between the mixer and centerbody increases mixing effectiveness; (4) mixer cross-sectional shape influences mixing effectiveness; (5) lobe number affects the degree of mixing; and (6) mixer aerodynamic pressure losses are a function of secondary flows inherent to the lobed mixer concept.

  7. The modelling of dispersion in 2-D tidal flow over an uneven bed

    NASA Astrophysics Data System (ADS)

    Kalkwijk, Jan P. Th.

    This paper deals with the effective mixing by topographically induced velocity variations in 2-D tidal flow. This type of mixing is characterized by tidally-averaged dispersion coefficients, which depend on the magnitude of the depth variations with respect to a mean depth, the velocity variations and the basic dispersion coefficients. The analysis is principally based on a Taylor-type approximation (large clouds, small concentration variations) of the 2-D advection-diffusion equation and a 2-D velocity field that behaves harmonically both in time and in space. Neglecting transient phenomena and applying time and space averaging, the effective dispersion coefficients can be derived. Under certain circumstances it is possible to relate the velocity variations to the depth variations, so that finally effective dispersion coefficients can be determined using the power spectrum of the depth variations. A separate section addresses the modelling of sub-grid mixing in the case of numerical integration of the advection-diffusion equation. It appears that the dispersion coefficients accounting for the sub-grid mixing are determined not only by the velocity variations within a certain grid cell, but also by the velocity variations at a larger scale.

  8. Modeling reactive transport with particle tracking and kernel estimators

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-04-01

    Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth Systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
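    The kernel idea above can be sketched in one dimension (illustrative, not the authors' code): instead of binning particle masses into cells, each particle spreads its mass over a Gaussian kernel of bandwidth h, so a modest number of particles already approaches the smooth, well-mixed concentration field.

```python
# Hedged 1-D sketch of kernel density estimation of a concentration field
# from particle positions. Plume parameters are invented for illustration.
import math
import random

random.seed(1)
positions = [random.gauss(0.0, 1.0) for _ in range(500)]  # particle plume
mass = 1.0 / len(positions)                               # mass per particle

def kde_concentration(x, particles, h=0.3):
    """Gaussian-kernel estimate of the concentration at point x."""
    norm = 1.0 / (h * math.sqrt(2.0 * math.pi))
    return sum(mass * norm * math.exp(-0.5 * ((x - p) / h) ** 2)
               for p in particles)

# At the plume centre the estimate should sit near the true Gaussian peak,
# which is 1/sqrt(2*pi) for a unit-variance plume of unit mass.
true_peak = 1.0 / math.sqrt(2.0 * math.pi)
estimate = kde_concentration(0.0, positions)
```

    The bandwidth h plays the role the paper highlights: too small and the field looks spuriously "incompletely mixed"; wide enough and reaction rates computed from the estimated concentrations recover the well-mixed limit.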

  9. Modeling optimal treatment strategies in a heterogeneous mixing model.

    PubMed

    Choe, Seoyun; Lee, Sunmi

    2015-11-25

    Many mathematical models assume random or homogeneous mixing for various infectious diseases. Homogeneous mixing can be generalized to mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of the heterogeneously mixing populations. Contact or mixing patterns are difficult to measure in many infectious diseases including influenza. Mixing patterns are considered to be one of the critical factors for infectious disease modeling. A two-group influenza model is considered to evaluate the impact of heterogeneous mixing on the influenza transmission dynamics. Heterogeneous mixing between two groups with two different activity levels includes proportionate mixing, preferred mixing and like-with-like mixing. Furthermore, the optimal control problem is formulated in this two-group influenza model to identify the group-specific optimal treatment strategies at a minimal cost. We investigate group-specific optimal treatment strategies under various mixing scenarios. The characteristics of the two-group influenza dynamics have been investigated in terms of the basic reproduction number and the final epidemic size under various mixing scenarios. As the mixing patterns become proportionate mixing, the basic reproduction number becomes smaller; however, the final epidemic size becomes larger. This is due to the fact that the number of infected people increases only slightly in the higher activity level group, while the number of infected people increases more significantly in the lower activity level group. Our results indicate that more intensive treatment of both groups at the early stage is the most effective treatment regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of different group activity levels and group population sizes. Mixing patterns can play a critical role in the effectiveness of optimal treatments. 
As the mixing becomes more like-with-like mixing, treating the higher activity group in the population is almost as effective as treating the entire population since it reduces the number of disease cases effectively but only requires similar treatments. The gain becomes more pronounced as the basic reproduction number increases. This can be a critical issue which must be considered for future pandemic influenza interventions, especially when there are limited resources available.
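    The effect of the mixing pattern on the basic reproduction number can be sketched with a small next-generation computation. Under common "preferred mixing", a fraction eps of each group's contacts is reserved for its own group (eps = 1 is like-with-like, eps = 0 is proportionate mixing). All parameter values below are invented, not taken from the paper.

```python
# Hedged sketch: R0 of a two-group SIR model under preferred mixing,
# computed as the spectral radius of the 2x2 next-generation matrix.
# beta, gamma, activity levels and group sizes are assumed values.
beta, gamma = 0.3, 0.2        # transmission and recovery rates
a = [10.0, 2.0]               # contact activity level of each group
N = [2000.0, 8000.0]          # group sizes

def r0(eps):
    """Basic reproduction number for self-preference eps in [0, 1]."""
    C = sum(a[k] * N[k] for k in range(2))
    def p(i, j):  # fraction of group i's contacts made with group j
        return (eps if i == j else 0.0) + (1.0 - eps) * a[j] * N[j] / C
    # Next-generation matrix: new infections in group i per infective in j
    K = [[beta / gamma * a[i] * N[i] * p(i, j) / N[j] for j in range(2)]
         for i in range(2)]
    tr = K[0][0] + K[1][1]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return (tr + (tr * tr - 4.0 * det) ** 0.5) / 2.0

r0_like_with_like = r0(1.0)   # beta/gamma * max activity level
r0_proportionate = r0(0.0)    # activity-weighted average, hence smaller
```

    This reproduces the abstract's qualitative finding: moving from like-with-like toward proportionate mixing lowers the basic reproduction number, because transmission is no longer confined to the high-activity group.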

  10. Simultaneous injection-effective mixing analysis of palladium.

    PubMed

    Teshima, Norio; Noguchi, Daisuke; Joichi, Yasutaka; Lenghor, Narong; Ohno, Noriko; Sakai, Tadao; Motomizu, Shoji

    2010-01-01

    A novel concept of simultaneous injection-effective mixing analysis (SIEMA) is proposed, and a SIEMA method applied to the spectrophotometric determination of palladium using a water-soluble chromogenic reagent has been demonstrated. The flow configuration of SIEMA is a hybrid format of flow injection analysis (FIA), sequential injection analysis (SIA) and multicommutation in flow-based analysis. Sample and reagent solutions are aspirated into each holding coil through each solenoid valve by a syringe pump, and then the zones are simultaneously dispensed (injected) into a mixing coil by reversed flow toward a detector through a confluence point. This results in effective mixing and rapid detection with low reagent consumption.

  11. Simulation analysis of rectifying microfluidic mixing with field-effect-tunable electrothermal induced flow.

    PubMed

    Liu, Weiyu; Ren, Yukun; Tao, Ye; Yao, Bobin; Li, You

    2018-03-01

    We report herein field-effect control on in-phase electrothermal streaming from a theoretical point of view, a phenomenon termed "alternating-current electrothermal-flow field effect transistor" (ACET-FFET), in the context of a new technology for handling analytes in microfluidics. Field-effect control through a gate terminal endows ACET-FFET the ability to generate arbitrary symmetry breaking in the transverse vortex flow pattern, which makes it attractive for mixing microfluidic samples. A computational model is developed to study the feasibility of this new microfluidic device design for micromixing. The influence of various parameters on developing an efficient mixer is investigated, and an integrated layout of discrete electrode array is suggested for achieving high-throughput mixing. Our physical demonstration with field-effect electrothermal flow control using a simple electrode structure proves invaluable for designing active micromixers for modern micro total analysis systems. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Improving Mixed-phase Cloud Parameterization in Climate Model with the ACRF Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhien

    Mixed-phase cloud microphysical and dynamical processes are still poorly understood, and their representation in GCMs is a major source of uncertainties in overall cloud feedback in GCMs. Thus improving mixed-phase cloud parameterizations in climate models is critical to reducing the climate forecast uncertainties. This study aims at providing improved knowledge of mixed-phase cloud properties from the long-term ACRF observations and improving mixed-phase clouds simulations in the NCAR Community Atmosphere Model version 5 (CAM5). The key accomplishments are: 1) An improved retrieval algorithm was developed to provide liquid droplet concentration for drizzling or mixed-phase stratiform clouds. 2) A new ice concentration retrieval algorithm for stratiform mixed-phase clouds was developed. 3) A strong seasonal aerosol impact on ice generation in Arctic mixed-phase clouds was identified, which is mainly attributed to the high dust occurrence during the spring season. 4) A suite of multi-senor algorithms was applied to long-term ARM observations at the Barrow site to provide a complete dataset (LWC and effective radius profile for liquid phase, and IWC, Dge profiles and ice concentration for ice phase) to characterize Arctic stratiform mixed-phase clouds. This multi-year stratiform mixed-phase cloud dataset provides necessary information to study related processes, evaluate model stratiform mixed-phase cloud simulations, and improve model stratiform mixed-phase cloud parameterization. 5) A new in situ data analysis method was developed to quantify liquid mass partition in convective mixed-phase clouds. For the first time, we reliably compared liquid mass partitions in stratiform and convective mixed-phase clouds. Due to the different dynamics in stratiform and convective mixed-phase clouds, the temperature dependencies of liquid mass partitions are significantly different due to much higher ice concentrations in convective mixed phase clouds. 
6) Systematic evaluations of mixed-phase cloud simulations by CAM5 were performed. Measurement results indicate that ice concentrations control stratiform mixed-phase cloud properties. The improvement of ice concentration parameterization in the CAM5 was done in close collaboration with Dr. Xiaohong Liu, PNNL (now at University of Wyoming).

  13. PDF turbulence modeling and DNS

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.

    1992-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in probability density function (pdf) methods. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models. The effect of Coriolis forces on compressible homogeneous turbulence is studied using direct numerical simulation (DNS). The numerical method used in this study is an eighth-order compact difference scheme. Contrary to the conclusions reached by previous DNS studies on incompressible isotropic turbulence, the present results show that the Coriolis force increases the dissipation rate of turbulent kinetic energy, and that anisotropy develops as the Coriolis force increases. The Taylor-Proudman theory does apply since the derivatives in the direction of the rotation axis vanish rapidly. A closer analysis reveals that the dissipation rate of the incompressible component of the turbulent kinetic energy indeed decreases with a higher rotation rate, consistent with incompressible flow simulations (Bardina), while the dissipation rate of the compressible part increases; the net gain is positive.
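    The C/D step the abstract builds on can be illustrated with the classic Curl model (not the paper's continuous variant): randomly chosen particle pairs are replaced by their mean, so the scalar mean is conserved while the scalar variance, and with it the pdf's departure from the mixed state, decays. The setup below is invented for illustration.

```python
# Illustrative sketch of the classic coalescence/dispersion (Curl) mixing
# step: pairs mix to their mean; mean conserved, variance decays.
import random

random.seed(2)
phi = [0.0] * 500 + [1.0] * 500     # bimodal initial scalar field
random.shuffle(phi)

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

v0 = variance(phi)                  # 0.25 for the 0/1 bimodal field
for _ in range(2000):               # each event mixes one random pair
    i = random.randrange(len(phi))
    j = random.randrange(len(phi))
    phi[i] = phi[j] = 0.5 * (phi[i] + phi[j])

mean_after = sum(phi) / len(phi)    # conserved at 0.5
v_after = variance(phi)             # strictly smaller than v0
```

    The "jump condition" the paper targets is visible here: each event moves two samples discontinuously, whereas the continuous model relaxes them smoothly in time so that reaction can be coupled to mixing within a step.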

  14. The effects of mixed layer dynamics on ice growth in the central Arctic

    NASA Astrophysics Data System (ADS)

    Kitchen, Bruce R.

    1992-09-01

    The thermodynamic model of Thorndike (1992) is coupled to a one dimensional, two layer ocean entrainment model to study the effect of mixed layer dynamics on ice growth and the variation in the ocean heat flux into the ice due to mixed layer entrainment. Model simulations show the existence of a negative feedback between the ice growth and the mixed layer entrainment, and that the underlying ocean salinity has a greater effect on the ocean heat flux than do variations in the underlying ocean temperature. Model simulations for a variety of surface forcings and initial conditions demonstrate the need to include mixed layer dynamics for realistic ice prediction in the Arctic.

  15. Modelling of upper ocean mixing by wave-induced turbulence

    NASA Astrophysics Data System (ADS)

    Ghantous, Malek; Babanin, Alexander

    2013-04-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into models, and real gains have been made in terms of increased fidelity to observational data. However our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing models need refinement and propose an alternative model. We use two of the models to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.

  16. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
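
    At each crack-front node pair, the virtual crack closure technique reduces to a product of the nodal force just ahead of the front and the relative displacement just behind it. A minimal sketch with invented numbers (the per-mode formula G = F·Δu / (2·Δa·b) is standard VCCT; the loads are not from the panel analysis):

```python
def vcct_g(force, delta_u, da, width):
    """Energy release rate for one mode at one node pair:
    nodal force * relative displacement / (2 * element length * width)."""
    return force * delta_u / (2.0 * da * width)

da, b = 0.5, 1.0                        # element length along front, width
G_I = vcct_g(force=120.0, delta_u=0.004, da=da, width=b)   # opening mode
G_II = vcct_g(force=300.0, delta_u=0.002, da=da, width=b)  # shearing mode
G_total = G_I + G_II
mode_mixity = G_II / G_total            # input to the mixed-mode criterion
print(G_I, G_II, round(mode_mixity, 3))
```

    A failure index of the kind computed in the paper then compares G_total against the critical energy release rate of the mixed-mode failure criterion evaluated at this mode mixity.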

  17. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  18. Creep analysis of silicone for podiatry applications.

    PubMed

    Janeiro-Arocas, Julia; Tarrío-Saavedra, Javier; López-Beceiro, Jorge; Naya, Salvador; López-Canosa, Adrián; Heredia-García, Nicolás; Artiaga, Ramón

    2016-10-01

    This work shows an effective methodology to characterize the creep-recovery behavior of silicones before their application in podiatry. The aim is to characterize, model and compare the creep-recovery properties of different types of silicone used in podiatry orthotics. Creep-recovery phenomena of silicones used in podiatry orthotics are characterized by dynamic mechanical analysis (DMA). Silicones provided by Herbitas are compared by observing their viscoelastic properties by Functional Data Analysis (FDA) and nonlinear regression. The relationship between strain and time is modeled by fixed and mixed effects nonlinear regression to compare podiatry silicones easily and intuitively. Functional ANOVA and the Kohlrausch-Williams-Watts (KWW) model with fixed and mixed effects allow us to compare different silicones by observing the values of the fitting parameters and their physical meaning. The differences between silicones are related to variations in the breadth of the creep-recovery time distribution and in instantaneous deformation-permanent strain. Nevertheless, the mean creep-relaxation time is the same for all the studied silicones. Silicones used in palliative orthoses have higher instantaneous deformation-permanent strain and a narrower creep-recovery distribution. The proposed methodology based on DMA, FDA and nonlinear regression is a useful tool to characterize and choose the proper silicone for each podiatry application according to its viscoelastic properties. Copyright © 2016 Elsevier Ltd. All rights reserved.
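
    The KWW (stretched exponential) creep curve mentioned above can be fitted by nonlinear regression along these lines; the functional form strain(t) = e0 + ec·(1 − exp(−(t/τ)^β)) and all parameter values below are an illustrative sketch, not the study's silicone data:

```python
import numpy as np
from scipy.optimize import curve_fit

def kww(t, e0, ec, tau, beta):
    # e0: instantaneous strain; ec: creep amplitude; tau: mean creep time;
    # beta < 1 controls the breadth of the creep-time distribution
    return e0 + ec * (1.0 - np.exp(-(t / tau) ** beta))

rng = np.random.default_rng(0)
t = np.linspace(0.1, 600.0, 120)             # time (s)
true = (0.02, 0.10, 80.0, 0.6)               # e0, ec, tau, beta (made up)
y = kww(t, *true) + rng.normal(0, 1e-4, t.size)  # synthetic DMA-like data

popt, _ = curve_fit(kww, t, y, p0=(0.01, 0.05, 50.0, 0.5))
print(np.round(popt, 3))
```

    Comparing fitted beta and tau across materials is what lets the authors relate silicones through the breadth of the creep-recovery time distribution and the mean creep-relaxation time.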

  19. Uncertainty in mixing models: a blessing in disguise?

    NASA Astrophysics Data System (ADS)

    Delsman, J. R.; Oude Essink, G. H. P.

    2012-04-01

    Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few studies have addressed the uncertainty associated with these studies in much detail. This uncertainty stems from analytical error, from spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing model analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km² agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice aims to improve water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for more sustainable water management and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. The use of a GLUE-like framework for the end-member mixing analysis not only quantified the uncertainty associated with the analysis; analysis of the posterior parameter set also identified catchment processes that would otherwise have been overlooked.
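
    The GLUE-style acceptance step can be sketched as follows: sample candidate mixing fractions, keep only the "behavioural" sets that reproduce the observed tracer concentrations within a tolerance, and read the uncertainty off the retained set. The end-member and sample concentrations below are invented for the sketch, not the polder data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Cl and EC concentrations of three hypothetical end-members
# (precipitation, fresh groundwater, brackish seepage)
end_members = np.array([[5.0, 50.0],
                        [30.0, 400.0],
                        [800.0, 3000.0]])
observed = np.array([250.0, 1100.0])    # a stream sample

n = 100_000
fractions = rng.dirichlet(np.ones(3), size=n)       # candidate mixes
predicted = fractions @ end_members
rel_err = np.abs(predicted - observed) / observed
behavioural = fractions[(rel_err < 0.05).all(axis=1)]  # GLUE acceptance

# posterior spread of each end-member's contribution
lo, hi = np.percentile(behavioural, [5, 95], axis=0)
print(behavioural.shape[0], np.round(lo, 2), np.round(hi, 2))
```

    The width of the retained fraction ranges is the uncertainty estimate; inspecting where the behavioural sets cluster is what can reveal overlooked processes.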

  20. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has long been important in sedimentological research. For instance, inverse analyses have been used to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses require forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that inverse analysis using the mixed grain-size model recovers the known initial conditions of the reference data even when the optimization starts far from the true solution, whereas inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution. The uniform grain-size model often reaches a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
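
    The inverse step, simplex (Nelder-Mead) optimization of (h, U, C) against a known reference run with multi-point starts, can be sketched with a cheap stand-in forward model. The forward relationship below is invented for the sketch (two synthetic deposit observables whose decay lengths separate h and U); it is not the authors' shallow-water model:

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 10.0, 50)    # distance downstream (arbitrary units)

def forward(h, U, C):
    # toy stand-in: two observables with distinct dependence on h and U,
    # so the inverse problem is identifiable
    thickness = C * h * np.exp(-x / U)
    grain = C * U * np.exp(-x / h)
    return np.concatenate([thickness, grain])

truth = (2.0, 5.0, 0.01)          # h = 2.0 m, U = 5.0 m/s, C = 0.01%
data = forward(*truth)            # reference data, as in the study

def misfit(p):
    h, U, C = p
    if h <= 0 or U <= 0 or C <= 0:
        return 1e9                # reject unphysical states
    return np.sum((forward(h, U, C) - data) ** 2)

best = None
for start in [(1.0, 2.0, 0.05), (4.0, 8.0, 0.001),
              (1.5, 6.0, 0.02), (2.5, 4.0, 0.02)]:   # multi-point start
    res = minimize(misfit, start, method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-20, "maxiter": 5000})
    if best is None or res.fun < best.fun:
        best = res
print(np.round(best.x, 4))
```

    The paper's point carries over even to this toy: whether the simplex search finds the known condition from distant starting points depends on how well the forward model constrains the parameters.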

  1. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. 
Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
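
    The marginal likelihood that the Laplace and adaptive Gaussian quadrature methods approximate can be sketched directly for the two-observations-per-subject design. The snippet below uses plain (non-adaptive) Gauss-Hermite quadrature, the building block of the adaptive scheme, on simulated data with a random intercept; the parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
n_subj, n_obs = 500, 2
eta_true, sigma_true = 0.5, 1.0
b = rng.normal(0, sigma_true, n_subj)                  # random intercepts
y = (rng.random((n_subj, n_obs)) < expit(eta_true + b[:, None])).astype(float)

nodes, weights = np.polynomial.hermite.hermgauss(20)   # GH rule, 20 nodes

def neg_loglik(params):
    eta, log_sigma = params
    sigma = np.exp(log_sigma)                          # keep sigma > 0
    # quadrature over the random intercept: b = sqrt(2) * sigma * node
    p = expit(eta + np.sqrt(2.0) * sigma * nodes[None, :])   # (1, K)
    k = y.sum(axis=1, keepdims=True)                   # successes per subject
    lik = p ** k * (1 - p) ** (n_obs - k)              # (n_subj, K)
    marg = lik @ weights / np.sqrt(np.pi)              # marginal likelihoods
    return -np.sum(np.log(marg))

res = minimize(neg_loglik, x0=(0.0, 0.0), method="Nelder-Mead")
eta_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(round(eta_hat, 2), round(sigma_hat, 2))
```

    Because the integrand is evaluated at many nodes rather than expanded around a single mode, quadrature of this kind avoids the extreme likelihood overestimates that the paper traces to the Laplace approximation in sparse designs.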

  2. Detecting a periodic signal in the terrestrial cratering record

    NASA Technical Reports Server (NTRS)

    Grieve, Richard A. F.; Rupert, James D.; Goodacre, Alan K.; Sharpton, Virgil L.

    1988-01-01

    A time-series analysis of model periodic data, where the period and phase are known, has been performed in order to investigate whether a significant period can be detected consistently from a mix of random and periodic impacts. Special attention is given to the effect of age uncertainties and random ages in the detection of a periodic signal. An equivalent analysis is performed with observed data on crater ages and compared with the model data, and the effects of the temporal distribution of crater ages on the results from the time-series analysis are studied. Evidence for a consistent 30-m.y. period is found to be weak.
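
    One simple period-detection statistic of the kind used in such studies folds the ages at each trial period and measures how clustered the phases are (a Rayleigh test). The mix of periodic and random ages, the age uncertainty, and the trial-period grid below are invented to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(3)
true_period = 30.0                                   # Myr
periodic_ages = np.arange(1, 21) * true_period + rng.normal(0, 2.0, 20)
random_ages = rng.uniform(0.0, 600.0, 10)            # background impacts
ages = np.concatenate([periodic_ages, random_ages])

trial = np.linspace(20.0, 45.0, 251)                 # trial periods (Myr)
phases = 2.0 * np.pi * ages[:, None] / trial[None, :]
R = np.abs(np.exp(1j * phases).mean(axis=0))         # Rayleigh statistic
best = trial[np.argmax(R)]
print(round(best, 1), round(R.max(), 2))
```

    With larger age uncertainties or a smaller periodic fraction, the peak at the true period sinks toward the noise floor, which is the regime in which the paper finds the evidence for a 30 Myr period to be weak.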

  3. The Moderating Effects of Cluster B Personality Traits on Violence Reduction Training: A Mixed-Model Analysis

    ERIC Educational Resources Information Center

    Gerhart, James I.; Ronan, George F.; Russ, Eric; Seymour, Bailey

    2013-01-01

    Cognitive behavioral therapies have positive effects on anger and aggression; however, individuals differ in their response to treatment. The authors previously found that dynamic factors, such as increases in readiness to change, are associated with enhanced outcomes for violence reduction training. This study investigated how less dynamic…

  4. Ascorbyl palmitate/d-α-tocopheryl polyethylene glycol 1000 succinate monoester mixed micelles for prolonged circulation and targeted delivery of compound K for antilung cancer therapy in vitro and in vivo

    PubMed Central

    Zhang, Youwen; Tong, Deyin; Che, Daobiao; Pei, Bing; Xia, Xiaodong; Yuan, Gaofeng; Jin, Xin

    2017-01-01

    The roles of ginsenoside compound K (CK) in inhibiting tumor have been widely recognized in recent years. However, low water solubility and significant P-gp efflux have restricted its application. In this study, CK ascorbyl palmitate (AP)/d-α-tocopheryl polyethylene glycol 1000 succinate monoester (TPGS) mixed micelles were prepared as a delivery system to increase the absorption and targeted antitumor effect of CK. Consequently, the solubility of CK increased from 35.2±4.3 to 1,463.2±153.3 μg/mL. Furthermore, in an in vitro A549 cell model, CK AP/TPGS mixed micelles significantly inhibited cell growth, induced G0/G1 phase cell cycle arrest, induced cell apoptosis, and inhibited cell migration compared to free CK, all indicating that the developed micellar delivery system could increase the antitumor effect of CK in vitro. Both in vitro cellular fluorescence uptake and in vivo near-infrared imaging studies indicated that AP/TPGS mixed micelles can promote cellular uptake and enhance tumor targeting. Moreover, studies in the A549 lung cancer xenograft mouse model showed that CK AP/TPGS mixed micelles are an efficient tumor-targeted drug delivery system with an effective antitumor effect. Western blot analysis further confirmed that the marked antitumor effect in vivo could likely be due to apoptosis promotion and P-gp efflux inhibition. Therefore, these findings suggest that the AP/TPGS mixed micellar delivery system could be an efficient delivery strategy for enhanced tumor targeting and antitumor effects. PMID:28144142

  5. Regulation mechanisms in mixed and pure culture microbial fermentation.

    PubMed

    Hoelzle, Robert D; Virdis, Bernardino; Batstone, Damien J

    2014-11-01

    Mixed-culture fermentation is a central process for enabling next-generation biofuel and biocommodity production, owing to economic and process advantages over pure cultures. However, a key limitation to the application of mixed-culture fermentation is predicting the culture's product response, which is governed by metabolic regulation mechanisms. This is also a limitation in pure-culture bacterial fermentation. This review evaluates recent literature in both pure and mixed culture studies with a focus on understanding how regulation and signaling mechanisms interact with metabolic routes and activity. In particular, we focus on how microorganisms balance electron sinking while maximizing catabolic energy generation. Analysis of these mechanisms and their effect on metabolism dynamics is absent in current models of mixed-culture fermentation. This limits process prediction and control, which in turn limits industrial application of mixed-culture fermentation. A key mechanism appears to be the role of internal electron-mediating cofactors and the related regulatory signaling, which may determine whether electrons are directed towards hydrogen or towards reduced organics as end-products and may form the basis for future mechanistic models. © 2014 Wiley Periodicals, Inc.

  6. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the approaches that help protect people from infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
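
    The zero-inflated Poisson likelihood underlying the models above mixes a point mass at zero with a Poisson count: P(0) = π + (1 − π)e^{−λ} and P(k) = (1 − π)e^{−λ}λ^k/k! for k > 0. A minimal fit of that likelihood (without the random effects, for brevity) on simulated data with made-up π and λ:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(4)
n, pi_true, lam_true = 2000, 0.3, 2.5
zero_mask = rng.random(n) < pi_true               # structural zeros
y = np.where(zero_mask, 0, rng.poisson(lam_true, n))

def neg_loglik(params):
    pi = expit(params[0])                         # keep pi in (0, 1)
    lam = np.exp(params[1])                       # keep lambda positive
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),  # both zero sources
                  np.log1p(-pi) + log_pois)              # Poisson part only
    return -ll.sum()

res = minimize(neg_loglik, x0=(0.0, 0.0), method="Nelder-Mead")
pi_hat, lam_hat = expit(res.x[0]), np.exp(res.x[1])
print(round(pi_hat, 2), round(lam_hat, 2))
```

    The mixed version in the paper adds subject-level random effects to this likelihood so that serial dependence and extra overdispersion in the longitudinal counts are captured as well.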

  7. Stochastic parameterization for light absorption by internally mixed BC/dust in snow grains for application to climate models

    NASA Astrophysics Data System (ADS)

    Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.

    2014-06-01

    A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.

  8. Stochastic Parameterization for Light Absorption by Internally Mixed BC/dust in Snow Grains for Application to Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liou, K. N.; Takano, Y.; He, Cenlin

    2014-06-27

    A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo is reduced more by multiple inclusions of BC/dust than by an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.

  9. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
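
    Since the statistic is a one-to-one function of the F test of the fixed effects, it can be written, with q numerator and ν denominator degrees of freedom, as R² = (q/ν)F / (1 + (q/ν)F). A tiny helper with illustrative numbers (the F value below is made up, not the paper's BP example):

```python
def r2_beta(F, q, nu):
    """Model R^2 for the linear mixed model from an F test of the
    fixed effects: q numerator df, nu denominator df."""
    ratio = (q / nu) * F
    return ratio / (1.0 + ratio)

# a 'significant' F can still correspond to a tiny R^2 when nu is large,
# mirroring the paper's contrast between p-value and effect size
print(round(r2_beta(F=4.0, q=1, nu=400), 4))  # → 0.0099
```

    The same function with the reduced-model F gives the partial R² for a subset of the fixed effects.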

  10. Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets

    NASA Technical Reports Server (NTRS)

    Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.

    1978-01-01

    A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to automatically adjust the key parameters of the model so as to minimize the response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.

  11. The use of simple reparameterizations to improve the efficiency of Markov chain Monte Carlo estimation for multilevel models with applications to discrete time survival models.

    PubMed

    Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J

    2009-06-01

    We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.

  12. Mixing state of regionally transported soot particles and the coating effect on their size and shape at a mountain site in Japan

    NASA Astrophysics Data System (ADS)

    Adachi, Kouji; Zaizen, Yuji; Kajino, Mizuo; Igarashi, Yasuhito

    2014-05-01

    Soot particles influence the global climate through interactions with sunlight. A coating on soot particles increases their light absorption by increasing their absorption cross section and cloud condensation nuclei activity when mixed with other hygroscopic aerosol components. Therefore, it is important to understand how soot internally mixes with other materials to accurately simulate its effects in climate models. In this study, we used a transmission electron microscope (TEM) with an auto particle analysis system, which enables more particles to be analyzed than a conventional TEM. Using the TEM, soot particle size and shape (shape factor) were determined with and without coating from samples collected at a remote mountain site in Japan. The results indicate that ~10% of aerosol particles between 60 and 350 nm in aerodynamic diameters contain or consist of soot particles and ~75% of soot particles were internally mixed with nonvolatile ammonium sulfate or other materials. In contrast to an assumption that coatings change soot shape, both internally and externally mixed soot particles had similar shape and size distributions. Larger aerosol particles had higher soot mixing ratios, i.e., more than 40% of aerosol particles with diameters >1 µm had soot inclusions, whereas <20% of aerosol particles with diameters <1 µm included soot. Our results suggest that climate models may use the same size distributions and shapes for both internally and externally mixed soot; however, changing the soot mixing ratios in the different aerosol size bins is necessary.

  13. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  14. Application of mixing-controlled combustion models to gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1990-01-01

    Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of the primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.

  15. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
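
    The "explode" step referred to above splits each subject's (time, event) record into one row per piece of the piecewise-constant baseline hazard, with a 0/1 response and an offset log(exposure), so the frailty model can be fitted as a Poisson generalized linear mixed model. A minimal sketch with illustrative cut points (not the %PCFrailty macro itself):

```python
import math

cuts = [0.0, 1.0, 2.0, 4.0]          # piece boundaries; last piece is open

def explode(subject_id, time, event):
    """One row per piece the subject was at risk in: a 0/1 Poisson
    response and an offset of log(exposure time in that piece)."""
    rows = []
    for j, start in enumerate(cuts):
        end = cuts[j + 1] if j + 1 < len(cuts) else float("inf")
        if time <= start:
            break                     # subject left risk set earlier
        exposure = min(time, end) - start
        died_here = 1 if (event == 1 and time <= end) else 0
        rows.append({"id": subject_id, "piece": j,
                     "y": died_here, "offset": math.log(exposure)})
    return rows

rows = explode("A", 2.5, 1)           # event at t = 2.5
for r in rows:
    print(r)
```

    A Poisson GLMM on the exploded rows, with piece indicators as fixed effects, the offset, and a subject- or cluster-level random intercept, then reproduces the log-normal frailty model.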

  16. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    PubMed

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images against plaster models for mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant results were obtained on comparing data between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster models was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
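
    For reference, the Tanaka-Johnston analysis used in the study predicts the combined mesiodistal width of the unerupted canine and premolars in one quadrant from the sum of the widths of the four mandibular incisors; the incisor sum below is an example value:

```python
def tanaka_johnston(lower_incisor_sum_mm):
    """Predicted canine + premolar width per quadrant (mm):
    half the sum of the four mandibular incisors, plus a constant."""
    half = lower_incisor_sum_mm / 2.0
    return {"mandibular": half + 10.5,
            "maxillary": half + 11.0}

pred = tanaka_johnston(23.0)   # incisors summing to 23 mm
print(pred)                    # mandibular 22.0 mm, maxillary 22.5 mm
```

    Values in this range line up with the ~21-22.5 mm quadrant means reported in the abstract, which is why sub-millimetre measurement error between CBCT images and plaster models matters for the analysis.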

  17. Statistical inference methods for sparse biological time series data.

    PubMed

    Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita

    2011-04-25

    Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed-effects regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and of associating significance levels with these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with the highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been, or had not been, preconditioned by heat during growth. Furthermore, pairwise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed-effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles.
The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
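
    The two ingredients named above, the three-parameter logistic curve and a likelihood-ratio comparison, can be sketched as follows (a generic illustration with one degree of freedom and illustrative parameter names, not the paper's fitted model):

```python
import math

def logistic3(t, a, b, c):
    """Three-parameter logistic: asymptote a, rate b, inflection time c."""
    return a / (1.0 + math.exp(-b * (t - c)))

def lr_test_df1(loglik_null, loglik_alt):
    """Likelihood-ratio test with 1 df.

    The statistic 2 * (ll_alt - ll_null) is compared with a chi-square(1)
    distribution, whose survival function is erfc(sqrt(x / 2)).
    """
    stat = 2.0 * (loglik_alt - loglik_null)
    p = math.erfc(math.sqrt(stat / 2.0))  # P(chi2_1 > stat)
    return stat, p
```

    At the inflection time the logistic curve equals half its asymptote, and a log-likelihood improvement of 2 units gives a statistic of 4 and a p-value just under 0.05.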

  18. A longitudinal analysis of the influence of the neighborhood built environment on walking for transportation: the RESIDE study.

    PubMed

    Knuiman, Matthew W; Christian, Hayley E; Divitini, Mark L; Foster, Sarah A; Bull, Fiona C; Badland, Hannah M; Giles-Corti, Billie

    2014-09-01

    The purpose of the present analysis was to use longitudinal data collected over 7 years (from 4 surveys) in the Residential Environments (RESIDE) Study (Perth, Australia, 2003-2012) to more carefully examine the relationship of neighborhood walkability and destination accessibility with walking for transportation that has been seen in many cross-sectional studies. We compared effect estimates from 3 types of logistic regression models: 2 that utilize all available data (a population marginal model and a subject-level mixed model) and a third subject-level conditional model that exclusively uses within-person longitudinal evidence. The results support the evidence that neighborhood walkability (especially land-use mix and street connectivity), local access to public transit stops, and variety in the types of local destinations are important determinants of walking for transportation. The similarity of subject-level effect estimates from logistic mixed models and those from conditional logistic models indicates that there is little or no bias from uncontrolled time-constant residential preference (self-selection) factors; however, confounding by uncontrolled time-varying factors, such as health status, remains a possibility. These findings provide policy makers and urban planners with further evidence that certain features of the built environment may be important in the design of neighborhoods to increase walking for transportation and meet the health needs of residents. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials

    PubMed Central

    Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-01-01

    Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group. PMID:27177885
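
    Cluster mean imputation, as evaluated above, replaces each missing outcome with the mean of the observed outcomes in the same cluster; a minimal sketch (using None for missing values, with a nested-list layout chosen purely for illustration):

```python
def cluster_mean_impute(clusters):
    """Replace each missing outcome (None) with its own cluster's observed mean."""
    out = []
    for cluster in clusters:
        observed = [y for y in cluster if y is not None]
        mean = sum(observed) / len(observed)  # assumes >= 1 observed outcome
        out.append([mean if y is None else y for y in cluster])
    return out
```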

  20. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.

  1. Black carbon mixing state impacts on cloud microphysical properties: effects of aerosol plume and environmental conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, Ping Pui; Riemer, Nicole; West, Matthew

    2016-05-27

    Black carbon (BC) is usually mixed with other aerosol species within individual aerosol particles. This mixture, along with the particles' size and morphology, determines the particles' optical and cloud condensation nuclei properties, and hence black carbon's climate impacts. In this study the particle-resolved aerosol model PartMC-MOSAIC was used to quantify the importance of black carbon mixing state for predicting cloud microphysical quantities. Based on a set of about 100 cloud parcel simulations, a process-level analysis framework was developed to attribute the response in cloud microphysical properties to changes in the underlying aerosol population ("plume effect") and the cloud parcel cooling rate ("parcel effect"). The analysis shows that the response of cloud droplet number concentration to changes in BC emissions depends on the BC mixing state. When the aerosol population contains mainly aged BC particles, an increase in BC emission results in increasing cloud droplet number concentrations ("additive effect"). In contrast, when the aerosol population contains mainly fresh BC particles, they act as sinks for condensable gaseous species, resulting in a decrease in cloud droplet number concentration as BC emissions are increased ("competition effect"). Additionally, we quantified the error in cloud microphysical quantities when neglecting the information on BC mixing state, which is often done in aerosol models. The errors ranged from -12% to +45% for the cloud droplet number fraction, from 0% to +1022% for the nucleation-scavenged BC mass fraction, from -12% to +4% for the effective radius, and from -30% to +60% for the relative dispersion.

  2. Towards understanding turbulent scalar mixing

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1992-01-01

    In an effort towards understanding turbulent scalar mixing, we study the effect of molecular mixing, first in isolation and then by accounting for the effects of the velocity field. The chief motivation for this approach stems from the strong resemblance between the scalar probability density function (PDF) obtained from a scalar field evolving according to the heat conduction equation and that of a scalar field evolving in a turbulent velocity field. However, the evolution of the scalar dissipation is different for the two cases. We attempt to account for these differences, which are due to the velocity field, using a Lagrangian frame analysis. After establishing the usefulness of this approach, we use the heat-conduction simulations (HCS), in lieu of the more expensive direct numerical simulations (DNS), to study many of the less understood aspects of turbulent mixing. Comparisons between the HCS data and available models are made whenever possible. It is established that the beta PDF characterizes the evolution of the scalar PDF during mixing from all types of non-premixed initial conditions.
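
    The beta-PDF description mentioned above is conventionally parameterised by matching the first two moments of the scalar; a sketch of that standard moment inversion (a textbook result, not specific to the HCS data):

```python
def beta_parameters(mean, variance):
    """Map scalar mean and variance to beta-PDF shape parameters.

    Requires 0 < mean < 1 and 0 < variance < mean * (1 - mean).
    """
    factor = mean * (1.0 - mean) / variance - 1.0
    alpha = mean * factor
    beta = (1.0 - mean) * factor
    return alpha, beta
```

    A symmetric case (mean 0.5, variance 0.05) recovers alpha = beta = 2, whose beta distribution indeed has variance 0.05.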

  3. Analyzing Health-Related Quality of Life Data to Estimate Parameters for Cost-Effectiveness Models: An Example Using Longitudinal EQ-5D Data from the SHIFT Randomized Controlled Trial.

    PubMed

    Griffiths, Alison; Paracha, Noman; Davies, Andrew; Branscombe, Neil; Cowie, Martin R; Sculpher, Mark

    2017-03-01

    The aim of this article is to discuss methods used to analyze health-related quality of life (HRQoL) data from randomized controlled trials (RCTs) for decision analytic models. The analysis presented in this paper was used to provide HRQoL data for the ivabradine health technology assessment (HTA) submission in chronic heart failure. We have used a large, longitudinal EuroQol five-dimension questionnaire (EQ-5D) dataset from the Systolic Heart Failure Treatment with the I(f) Inhibitor Ivabradine Trial (SHIFT) (clinicaltrials.gov: NCT02441218) to illustrate issues and methods. HRQoL weights (utility values) were estimated from a mixed regression model developed using SHIFT EQ-5D data (n = 5313 patients). The regression model was used to predict HRQoL outcomes according to treatment, patient characteristics, and key clinical outcomes for patients with a heart rate ≥75 bpm. Ivabradine was associated with an HRQoL weight gain of 0.01. HRQoL weights differed according to New York Heart Association (NYHA) class (NYHA I-IV, no hospitalization: standard care 0.82-0.46; ivabradine 0.84-0.47). A reduction in HRQoL weight was associated with hospitalizations within 30 days of an HRQoL assessment visit, with this reduction varying by NYHA class [-0.07 (NYHA I) to -0.21 (NYHA IV)]. The mixed model explained variation in EQ-5D data according to key clinical outcomes and patient characteristics, providing essential information for long-term predictions of patient HRQoL in the cost-effectiveness model. This model was also used to estimate the loss in HRQoL associated with hospitalizations. In SHIFT many hospitalizations did not occur close to EQ-5D visits; hence, any temporary changes in HRQoL associated with such events would not be captured fully in observed RCT evidence, but could be predicted in our cost-effectiveness analysis using the mixed model. Given the large reduction in hospitalizations associated with ivabradine, this was an important feature of the analysis.
The Servier Research Group.

  4. Investigation of micromixing by acoustically oscillated sharp-edges

    PubMed Central

    Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco

    2016-01-01

    Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel. PMID:27158292

  5. Investigation of micromixing by acoustically oscillated sharp-edges.

    PubMed

    Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco

    2016-03-01

    Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel.

  6. Modeled Health Economic Impact of a Hypothetical Certolizumab Pegol Risk-Sharing Scheme for Patients with Moderate-to-Severe Rheumatoid Arthritis in Finland.

    PubMed

    Soini, Erkki; Asseburg, Christian; Taiha, Maarit; Puolakka, Kari; Purcaru, Oana; Luosujärvi, Riitta

    2017-10-01

    To model the American College of Rheumatology (ACR) outcomes, cost-effectiveness, and budget impact of certolizumab pegol (CZP) (with and without a hypothetical risk-sharing scheme at treatment initiation for biologic-naïve patients) versus the current mix of reimbursed biologics for treatment of moderate-to-severe rheumatoid arthritis (RA) in Finland. A probabilistic model with 12-week cycles and a societal approach was developed for the years 2015-2019, accounting for differences in ACR responses (meta-analysis), mortality, and persistence. The risk-sharing scheme included a treatment switch and refund of the costs associated with CZP acquisition if patients failed to achieve ACR20 response at week 12. For the current treatment mix, ACR20 at week 24 determined treatment continuation. Quality-adjusted life years were derived on the basis of the Health Utilities Index. In the Finnish target population, CZP treatment with a risk-sharing scheme led to an estimated annual net expenditure decrease ranging from 1.7% in 2015 to 5.6% in 2019 compared with the current treatment mix. Per patient over the 5 years, CZP risk sharing was estimated to decrease the time without ACR response by 5 percentage points, decrease work absenteeism by 24 days, and increase the time with ACR20, ACR50, and ACR70 responses by 5, 6, and 1 percentage points, respectively, with a gain of 0.03 quality-adjusted life years. The modeled risk-sharing scheme showed reduced costs of €7866 per patient, with a more than 95% probability of cost-effectiveness when compared with the current treatment mix. The present analysis estimated that CZP, with or without the risk-sharing scheme, is a cost-effective alternative treatment for RA patients in Finland. The surplus provided by the CZP risk-sharing scheme could fund treatment for 6% more Finnish RA patients. UCB Pharma.

  7. Constrained inference in mixed-effects models for longitudinal data with application to hearing loss.

    PubMed

    Davidov, Ori; Rosen, Sophia

    2011-04-01

    In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds, which reflect hearing acuity, will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either (ECME) algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean squared error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypothesis testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.
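
    As one concrete instance of the order-restricted testing discussed, a one-sided Wald test of an increasing time effect (hearing thresholds rising over time) can be sketched with a normal approximation; this is a generic illustration, not the paper's constrained ECME procedure:

```python
import math

def wald_one_sided(beta_hat, se):
    """One-sided Wald test of H0: beta <= 0 vs H1: beta > 0.

    Returns the z statistic and the upper-tail normal p-value, computed via
    the complementary error function.
    """
    z = beta_hat / se
    p = 0.5 * math.erfc(z / math.sqrt(2.0))  # P(Z > z) for standard normal Z
    return z, p
```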

  8. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…

  9. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

    Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
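
    The total-requirements calculation underlying such input-output models is x = (I - A)^(-1) d, where A is the direct requirements matrix and d is final demand. A two-sector sketch with hypothetical coefficients (the lead and cadmium models above are 13-sector), solved by Cramer's rule to stay self-contained:

```python
def leontief_2x2(A, d):
    """Solve (I - A) x = d for a two-sector economy by Cramer's rule."""
    m = [[1 - A[0][0], -A[0][1]],
         [-A[1][0], 1 - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    x0 = (d[0] * m[1][1] - m[0][1] * d[1]) / det
    x1 = (m[0][0] * d[1] - d[0] * m[1][0]) / det
    return [x0, x1]
```

    With no intermediate requirements (A = 0), total output equals final demand; a sector that consumes half its own output must produce twice its final demand.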

  10. Mapping nighttime PM2.5 from VIIRS DNB using a linear mixed-effect model

    NASA Astrophysics Data System (ADS)

    Fu, D.; Xia, X.; Duan, M.; Zhang, X.; Li, X.; Wang, J.; Liu, J.

    2018-04-01

    Estimation of particulate matter with aerodynamic diameter less than 2.5 μm (PM2.5) from daytime satellite aerosol products is widely reported in the literature; however, remote sensing of nighttime surface PM2.5 from space is very limited. PM2.5 shows a distinct diurnal cycle, and PM2.5 concentration at 1:00 local standard time (LST) has a linear correlation coefficient (R) of 0.80 with daily-mean PM2.5. Therefore, estimation of nighttime PM2.5 is required for an improved understanding of the temporal variation of PM2.5 and its effects on air quality. Using data from the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) and hourly PM2.5 data at 35 stations in Beijing, a mixed-effect model is developed here to estimate nighttime PM2.5 from nighttime light radiance measurements, based on the assumption that the DNB-PM2.5 relationship is constant spatially but varies temporally. Cross-validation showed that the model developed using all stations predicts daily PM2.5 with mean determination coefficients (R2) of 0.87 ± 0.12, 0.83 ± 0.10, 0.87 ± 0.09, and 0.83 ± 0.10 in spring, summer, autumn, and winter, respectively. Further analysis showed that the best model performance was achieved at urban stations, with an average cross-validation R2 of 0.92. At rural stations, the DNB light signal is weak and was likely smeared by lunar illuminance, which resulted in relatively poor estimation of PM2.5. The fixed and random parameters of the mixed-effect model at urban stations differed from those at suburban stations, which indicates that the assumption of the mixed-effect model should be carefully evaluated when used at a regional scale.

  11. Fuzzy interval Finite Element/Statistical Energy Analysis for mid-frequency analysis of built-up systems with mixed fuzzy and interval parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2016-10-01

    This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, yielding an uncertain ensemble that combines non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) method and a second-order fuzzy interval perturbation FE/SEA (SFIPFE/SEA) method are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining second-order terms of the Taylor series, in which all the mixed second-order terms are neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
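
    The first-order perturbation idea behind FFIPFE/SEA can be sketched generically: interval half-widths of the inputs are propagated through a first-order Taylor expansion of a response function. This is a generic interval-arithmetic illustration under that assumption, not the FE/SEA equations themselves:

```python
def first_order_interval(f0, grads, half_widths):
    """First-order Taylor bound: f0 +/- sum_i |df/dx_i| * half_width_i.

    f0 is the response at the interval midpoints, grads the partial
    derivatives there, and half_widths the input interval half-widths.
    """
    radius = sum(abs(g) * w for g, w in zip(grads, half_widths))
    return (f0 - radius, f0 + radius)
```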

  12. Type I Error Inflation in the Traditional By-Participant Analysis to Metamemory Accuracy: A Generalized Mixed-Effects Model Perspective

    ERIC Educational Resources Information Center

    Murayama, Kou; Sakaki, Michiko; Yan, Veronica X.; Smith, Garry M.

    2014-01-01

    In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are…
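
    The resolution measure named above, the Goodman-Kruskal gamma coefficient, is computed per participant from concordant and discordant judgment-performance pairs; a minimal sketch (assuming at least one untied pair):

```python
from itertools import combinations

def gamma_coefficient(judgments, performance):
    """Goodman-Kruskal gamma: (C - D) / (C + D) over item pairs.

    A pair is concordant when judgment and performance order the two items
    the same way, discordant when they disagree; ties are dropped.
    """
    concordant = discordant = 0
    for (j1, p1), (j2, p2) in combinations(zip(judgments, performance), 2):
        s = (j1 - j2) * (p1 - p2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)
```

    Perfect metacognitive resolution gives gamma = 1; perfectly inverted judgments give gamma = -1.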

  13. Effects of friction reduction of micro-patterned array of rough slider bearing

    NASA Astrophysics Data System (ADS)

    Kim, M.; Lee, D. W.; Jeong, J. H.; Chung, W. S.; Park, J. K.

    2017-08-01

    Complex micro-scale patterns have attracted interest because of the functionality that can be created using this type of patterning. This study evaluates the friction-reduction effects of various micro patterns on a slider bearing surface operating under mixed lubrication. Due to the rapid growth of contact area under mixed lubrication, it has become important to study the phenomenon of asperity contact in bearings under heavy load. A new analysis using the modified Reynolds equation, with both the average flow model and the contact model of asperities, is conducted for the rough slider bearing. A numerical analysis is performed to determine the effects of surface roughness on a lubricated bearing. Several dented patterns, such as dot and dashed-line patterns, are used to evaluate friction-reduction effects. To verify the analytical results, friction tests on the micro-patterned samples are performed. Comparison of the friction-reduction effects of the patterned arrays shows that their design can control the frictional loss of bearings. Our results showed that the design of the pattern array on the bearing surface was important to the friction reduction of bearings. To reduce frictional loss, orienting the patterns in the longitudinal direction was better than in the transverse direction.

  14. Early treatment of posterior crossbite - a randomised clinical trial

    PubMed Central

    2013-01-01

    Background The aim of this randomised clinical trial was to assess the effect of early orthodontic treatment in contrast to normal growth effects for functional unilateral posterior crossbite in the late deciduous and early mixed dentition by means of three-dimensional digital model analysis. Methods This randomised clinical trial was assessed to analyse the orthodontic treatment effects for patients with functional unilateral posterior crossbite in the late deciduous and early mixed dentition using a two-step procedure: initial maxillary expansion followed by a U-bow activator therapy. In the treatment group 31 patients and in the control group 35 patients with a mean age of 7.3 years (SD 2.1) were monitored. The time between the initial assessment (T1) and the follow-up (T2) was one year. The orthodontic analysis was done by a three-dimensional digital model analysis. Using the ‘Digimodel’ software, the orthodontic measurements in the maxilla and mandible and for the midline deviation, the overjet and overbite were recorded. Results Significant differences between the control and the therapy group at T2 were detected for the anterior, median and posterior transversal dimensions of the maxilla, the palatal depth, the palatal base arch length, the maxillary arch length and inclination, the midline deviation, the overjet and the overbite. Conclusions Orthodontic treatment of a functional unilateral posterior crossbite with a bonded maxillary expansion device followed by U-bow activator therapy in the late deciduous and early mixed dentition is an effective therapeutic method, as evidenced by the results of this RCT. It leads to three-dimensional therapeutically induced maxillary growth effects. Dental occlusion is significantly improved, and the prognosis for normal craniofacial growth is enhanced. Trial registration Registration trial DRKS00003497 on DRKS PMID:23339736

  15. Analysis of crash proportion by vehicle type at traffic analysis zone level: A mixed fractional split multinomial logit modeling approach with spatial effects.

    PubMed

    Lee, Jaeyoung; Yasmin, Shamsunnahar; Eluru, Naveen; Abdel-Aty, Mohamed; Cai, Qing

    2018-02-01

    In the traffic safety literature, crash frequency variables are analyzed using univariate count models or multivariate count models. In this study, we propose an alternative approach to modeling multiple crash frequency dependent variables. Instead of modeling the frequency of crashes, we propose to analyze the proportion of crashes by vehicle type. A flexible mixed multinomial logit fractional split model is employed for analyzing the proportions of crashes by vehicle type at the macro-level. In this model, the proportion allocated to an alternative is probabilistically determined based on the alternative's propensity as well as the propensity of all other alternatives. Thus, exogenous variables directly affect all alternatives. The approach is well suited to accommodate a large number of alternatives without a sizable increase in computational burden. The model was estimated using crash data at the Traffic Analysis Zone (TAZ) level from Florida. The modeling results clearly illustrate the applicability of the proposed framework for crash proportion analysis. Further, the Excess Predicted Proportion (EPP), a screening performance measure analogous to the Highway Safety Manual (HSM) Excess Predicted Average Crash Frequency, is proposed for hot zone identification. Using EPP, a statewide screening exercise by the various vehicle types considered in our analysis was undertaken. The screening results revealed that the spatial pattern of hot zones is substantially different across the various vehicle types considered. Copyright © 2017 Elsevier Ltd. All rights reserved.
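
    The core allocation mechanism described above, where the proportion assigned to an alternative depends on its propensity relative to all alternatives, is the multinomial-logit (softmax) form; a sketch with illustrative propensity values:

```python
import math

def fractional_split(utilities):
    """Multinomial-logit allocation: proportions from alternative propensities."""
    m = max(utilities)
    exp_u = [math.exp(u - m) for u in utilities]  # shift by max for stability
    total = sum(exp_u)
    return [e / total for e in exp_u]
```

    Because every propensity enters the denominator, any exogenous variable that raises one alternative's propensity necessarily lowers all other proportions.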

  16. Multivariate Longitudinal Analysis with Bivariate Correlation Test.

    PubMed

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions for the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated.

  17. Decoupling or nondecoupling: Is that the R_b question?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comelli, D.; Silva, J.P.

    1996-07-01

    The top quark is well known for the nondecoupling effects it implies in ρ and R_b. The recent experimental R_b data exhibit a disagreement with the SM prediction at more than the 3σ level. It is tempting to explore whether this might be due to nondecoupling new physics effects, opposite to those of the top quark. We investigate this issue in the context of models with an extra family of right- or left-handed singlet or doublet quarks. It is shown that, contrary to what one might naively expect, the nondecoupling properties of a mirror t′ do not have an impact on R_b, due to a conspiracy of the mixing angles, imposed by the requirement that there be no b-b′ mixing. Our analysis agrees with an analysis performed independently, which includes this model as a particular case. © 1996 The American Physical Society.

  18. Mixed-Effects Models for Count Data with Applications to Educational Research

    ERIC Educational Resources Information Center

    Shin, Jihyung

    2012-01-01

    This research is motivated by an analysis of reading research data. We are interested in modeling a test outcome, the ability to fluently recode letters into sounds, for kindergarten children aged between 5 and 7. The data showed excessive zero scores (more than 30% of children) on the test. In this dissertation, we carefully examine the models…
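
    The abstract is truncated, but the excess-zero pattern it describes (more than 30% zero scores) is commonly handled with zero-inflated count models. A minimal illustrative sketch of the zero-inflated Poisson probability mass function, with hypothetical parameter names and no claim to the dissertation's actual specification:

```python
import math

def zip_pmf(y, lam, pi_zero):
    """Zero-inflated Poisson: with probability pi_zero the test yields a
    structural zero; otherwise the score follows a Poisson(lam) count."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    if y == 0:
        return pi_zero + (1.0 - pi_zero) * poisson
    return (1.0 - pi_zero) * poisson
```

    Mixing this likelihood with random effects for children or classrooms is what makes such models "mixed-effects models for count data".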

  19. Effect of monospecific and mixed sea-buckthorn (Hippophae rhamnoides) plantations on the structure and activity of soil microbial communities.

    PubMed

    Yu, Xuan; Liu, Xu; Zhao, Zhong; Liu, Jinliang; Zhang, Shunxiang

    2015-01-01

    This study aims to evaluate the effect of different afforestation models on soil microbial composition in the Loess Plateau in China. In particular, we determined soil physicochemical properties, enzyme activities, and microbial community structures in the top 0 cm to 10 cm of soil underneath a pure Hippophae rhamnoides (SS) stand and three mixed stands, namely, H. rhamnoides and Robinia pseucdoacacia (SC), H. rhamnoides and Pinus tabulaeformis (SY), and H. rhamnoides and Platycladus orientalis (SB). Results showed that total organic carbon (TOC), total nitrogen, and ammonium (NH4(+)) contents were higher in SY and SB than in SS. The total microbial biomass, bacterial biomass, and Gram+ biomass of the three mixed stands were significantly higher than those of the pure stand. However, no significant difference was found in fungal biomass. Correlation analysis suggested that soil microbial communities are significantly and positively correlated with some chemical parameters of soil, such as TOC, total phosphorus, total potassium, available phosphorus, NH4(+) content, nitrate (NO3(-)) content, and the enzyme activities of urease, peroxidase, and phosphatase. Principal component analysis showed that the microbial community structures of SB and SS could each be clearly discriminated from the others, whereas SY and SC were similar. In conclusion, tree species indirectly but significantly affect soil microbial communities and enzyme activities through soil physicochemical properties. In addition, mixing P. tabulaeformis or P. orientalis into H. rhamnoides plantations is a suitable afforestation model in the Loess Plateau, because of significant positive effects on soil nutrient conditions, microbial community, and enzyme activities over pure plantations.

  20. Analysis of rocket engine injection combustion processes

    NASA Technical Reports Server (NTRS)

    Salmon, J. W.; Saltzman, D. H.

    1977-01-01

    Mixing methodology improvements for the JANNAF DER and CICM injection/combustion analysis computer programs were accomplished. An improved ZOM plane prediction model was developed for installation into the new standardized DER computer program. An approach for developing an intra-element mixing model for gas/liquid coaxial injection elements was recommended for possible future incorporation into the CICM computer program.

  1. A new method for fingerprinting sediments source contributions using distances from discriminant function analysis

    USDA-ARS?s Scientific Manuscript database

    Mixing models have been used to predict sediment source contributions. The inherent problem of the mixing models limited the number of sediment sources. The objective of this study is to develop and evaluate a new method using Discriminant Function Analysis (DFA) to fingerprint sediment source contr...

  2. Flood analysis in mixed-urban areas reflecting interactions with the complete water cycle through coupled hydrologic-hydraulic modelling.

    PubMed

    Sto Domingo, N D; Refsgaard, A; Mark, O; Paludan, B

    2010-01-01

    The potential devastating effects of urban flooding have given high importance to thorough understanding and management of water movement within catchments, and computer modelling tools have found widespread use for this purpose. The state-of-the-art in urban flood modelling is the use of a coupled 1D pipe and 2D overland flow model to simultaneously represent pipe and surface flows. This method has been found to be accurate for highly paved areas, but inappropriate when land hydrology is important. The objectives of this study are to introduce a new urban flood modelling procedure that is able to reflect system interactions with hydrology, verify that the new procedure operates well, and underline the importance of considering the complete water cycle in urban flood analysis. A physically-based and distributed hydrological model was linked to a drainage network model for urban flood analysis, and the essential components and concepts used were described in this study. The procedure was then applied to a catchment previously modelled with the traditional 1D-2D procedure to determine if the new method performs similarly well. Then, results from applying the new method in a mixed-urban area were analyzed to determine how important hydrologic contributions are to flooding in the area.

  3. Analysis of the Dielectric constant of saline-alkali soils and the effect on radar backscattering coefficient: a case study of soda alkaline saline soils in Western Jilin Province using RADARSAT-2 data.

    PubMed

    Li, Yang-yang; Zhao, Kai; Ren, Jian-hua; Ding, Yan-ling; Wu, Li-li

    2014-01-01

    Soil salinity is a global problem, especially in developing countries, which affects the environment and productivity of agricultural areas. Salt has a significant effect on the complex dielectric constant of wet soil. However, there is no suitable model to describe the variation in the backscattering coefficient due to changes in soil salinity content. The purpose of this paper is to use backscattering models to understand the behavior of the backscattering coefficient in saline soils based on an analysis of their dielectric constant. The effects of moisture and salinity on the dielectric constant are analyzed by combining the Dobson mixing model and a seawater dielectric constant model, and the backscattering coefficient is then simulated using the AIEM. Simultaneously, laboratory measurements were performed on ground samples. The frequency effect in the laboratory results was not the same as in the simulated results; the frequency dependence of the ionic conductivity of an electrolyte solution is influenced by its ionic components. Finally, the backscattering coefficients simulated with the AIEM from the measured dielectric constants were compared against the backscattering coefficients extracted from the RADARSAT-2 image. The results show that RADARSAT-2 is potentially able to measure soil salinity; however, the mixed pixel problem needs to be more thoroughly considered.

  4. Effect of electrode positions on the mixing characteristics of an electroosmotic micromixer.

    PubMed

    Seo, H S; Kim, Y J

    2014-08-01

    In this study, an electrokinetic microchannel with a ring-type mixing chamber is introduced for fast mixing. The modeled micromixer that is used for the study of the electroosmotic effect takes two fluids from different inlets and combines them in a ring-type mixing chamber and, then, they are mixed by the electric fields at the electrodes. In order to compare the mixing performance in the modeled micromixer, we numerically investigated the flow characteristics with different positions of the electrodes in the mixing chamber using the commercial code, COMSOL. In addition, we discussed the concentration distributions of the dissolved substances in the flow fields and compared the mixing efficiency in the modeled micromixer with different electrode positions and operating conditions, such as the frequencies and electric potentials at the electrodes.

  5. One-dimensional modelling of upper ocean mixing by turbulence due to wave orbital motion

    NASA Astrophysics Data System (ADS)

    Ghantous, M.; Babanin, A. V.

    2014-02-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into ocean mixing models, and real gains have been made in terms of increased fidelity to observational data. However, our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing parameterisations need refinement and propose an alternative one. We use two of the parameterisations to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.

  6. On the Choice of Variable for Atmospheric Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; DaSilva, Arlindo M.; Atlas, Robert (Technical Monitor)

    2002-01-01

    The implications of using different control variables for the analysis of moisture observations in a global atmospheric data assimilation system are investigated. A moisture analysis based on either mixing ratio or specific humidity is prone to large extrapolation errors, due to the high variability in space and time of these parameters and to the difficulties in modeling their error covariances. Using the logarithm of specific humidity does not alleviate these problems, and has the further disadvantage that very dry background estimates cannot be effectively corrected by observations. Relative humidity is a better choice from a statistical point of view, because this field is spatially and temporally more coherent and error statistics are therefore easier to obtain. If, however, the analysis is designed to preserve relative humidity in the absence of moisture observations, then the analyzed specific humidity field depends entirely on analyzed temperature changes. If the model has a cool bias in the stratosphere this will lead to an unstable accumulation of excess moisture there. A pseudo-relative humidity can be defined by scaling the mixing ratio by the background saturation mixing ratio. A univariate pseudo-relative humidity analysis will preserve the specific humidity field in the absence of moisture observations. A pseudo-relative humidity analysis is shown to be equivalent to a mixing ratio analysis with flow-dependent covariances. In the presence of multivariate (temperature-moisture) observations it produces analyzed relative humidity values that are nearly identical to those produced by a relative humidity analysis. Based on a time series analysis of radiosonde observed-minus-background differences it appears to be more justifiable to neglect specific humidity-temperature correlations (in a univariate pseudo-relative humidity analysis) than to neglect relative humidity-temperature correlations (in a univariate relative humidity analysis). A pseudo-relative humidity analysis is easily implemented in an existing moisture analysis system, by simply scaling observed-minus-background moisture residuals prior to solving the analysis equation, and rescaling the analyzed increments afterward.
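
    The implementation recipe in the last sentence, scaling residuals by the background saturation value before the analysis and rescaling the analyzed increments afterward, can be sketched as follows (function and variable names are illustrative, not from the assimilation system described):

```python
def to_pseudo_rh(q_residual, qsat_background):
    """Convert an observed-minus-background specific-humidity residual to
    a pseudo-relative-humidity residual by scaling with the background
    saturation value."""
    return q_residual / qsat_background

def from_pseudo_rh(prh_increment, qsat_background):
    """Rescale an analyzed pseudo-relative-humidity increment back to a
    specific-humidity increment."""
    return prh_increment * qsat_background

# Round trip: a residual of 0.002 kg/kg with a saturation value of 0.01 kg/kg.
q_back = from_pseudo_rh(to_pseudo_rh(0.002, 0.01), 0.01)
```

    Because the scaling uses the background (not analyzed) saturation value, the analysis equation itself is unchanged; only the residuals and increments are transformed.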

  7. Determining the impact of cell mixing on signaling during development.

    PubMed

    Uriu, Koichiro; Morelli, Luis G

    2017-06-01

    Cell movement and intercellular signaling occur simultaneously to organize morphogenesis during embryonic development. Cell movement can cause relative positional changes between neighboring cells. When intercellular signals are local such cell mixing may affect signaling, changing the flow of information in developing tissues. Little is known about the effect of cell mixing on intercellular signaling in collective cellular behaviors and methods to quantify its impact are lacking. Here we discuss how to determine the impact of cell mixing on cell signaling drawing an example from vertebrate embryogenesis: the segmentation clock, a collective rhythm of interacting genetic oscillators. We argue that comparing cell mixing and signaling timescales is key to determining the influence of mixing. A signaling timescale can be estimated by combining theoretical models with cell signaling perturbation experiments. A mixing timescale can be obtained by analysis of cell trajectories from live imaging. After comparing cell movement analyses in different experimental settings, we highlight challenges in quantifying cell mixing from embryonic timelapse experiments, especially a reference frame problem due to embryonic motions and shape changes. We propose statistical observables characterizing cell mixing that do not depend on the choice of reference frames. Finally, we consider situations in which both cell mixing and signaling involve multiple timescales, precluding a direct comparison between single characteristic timescales. In such situations, physical models based on observables of cell mixing and signaling can simulate the flow of information in tissues and reveal the impact of observed cell mixing on signaling. © 2017 Japanese Society of Developmental Biologists.

  8. Experimental study of stratified jet by simultaneous measurements of velocity and density fields

    NASA Astrophysics Data System (ADS)

    Xu, Duo; Chen, Jun

    2012-07-01

    Stratified flows with small density difference commonly exist in geophysical and engineering applications, which often involve interaction of turbulence and buoyancy effects. A combined particle image velocimetry (PIV) and planar laser-induced fluorescence (PLIF) system is developed to measure the velocity and density fields in a dense jet discharged horizontally into a tank filled with light fluid. The illumination of PIV particles and excitation of PLIF dye are achieved by a dual-head pulsed Nd:YAG laser and two CCD cameras with a set of optical filters. The procedure for matching the refractive indexes of the two fluids and calibrating the combined system is presented, as well as a quantitative analysis of the measurement uncertainties. The flow structures and mixing dynamics within the central vertical plane are studied by examining the averaged parameters, the turbulent kinetic energy budget, and modeling of the momentum and buoyancy fluxes. Downstream, profiles of velocity and density display strong asymmetry with respect to the jet center. This is attributed to the fact that stable stratification reduces mixing and unstable stratification enhances mixing. In the stable stratification region, most of the turbulence production is consumed by mean-flow convection, whereas in the unstable stratification region, turbulence production is nearly balanced by viscous dissipation. Experimental data also indicate that at downstream locations the mixing length model performs better in the mixing zone of the stable stratification region, whereas in other regions eddy viscosity/diffusivity models with static model coefficients represent the momentum and buoyancy flux terms effectively. The measured turbulent Prandtl number displays strong spatial variation in the stratified jet.

  9. Primary Student-Teachers' Conceptual Understanding of the Greenhouse Effect: A mixed method study

    NASA Astrophysics Data System (ADS)

    Ratinen, Ilkka Johannes

    2013-04-01

    The greenhouse effect is a reasonably complex scientific phenomenon which can be used as a model to examine students' conceptual understanding in science. Primary student-teachers' understanding of global environmental problems, such as climate change and ozone depletion, indicates that they have many misconceptions. The present mixed method study examines Finnish primary student-teachers' understanding of the greenhouse effect based on the results obtained via open-ended and closed-form questionnaires. The open-ended questionnaire considers primary student-teachers' spontaneous ideas about the greenhouse effect depicted by concept maps. The present study also uses statistical analysis to reveal respondents' conceptualization of the greenhouse effect. The concept maps and statistical analysis reveal that the primary student-teachers' factual knowledge and their conceptual understanding of the greenhouse effect are incomplete and even misleading. In the light of the results of the present study, proposals for modifying the instruction of climate change in science, especially in geography, are presented.

  10. Upper Ocean Response to Hurricanes Katrina and Rita (2005) from Multi-sensor Satellites

    NASA Astrophysics Data System (ADS)

    Gierach, M. M.; Bulusu, S.

    2006-12-01

    Analysis of satellite observations and model simulations of the mixed layer provided an opportunity to assess the biological and physical effects of hurricanes Katrina and Rita (2005) in the Gulf of Mexico. Oceanic cyclonic circulation was intensified by the hurricanes' wind field, maximizing upwelling, surface cooling, and deepening the mixed layer. Two areas of maximum surface chlorophyll-a concentration and sea surface cooling were detected with peak intensities ranging from 2-3 mg m-3 and 4-6°C, along the tracks of Katrina and Rita. The temperature of the mixed layer cooled approximately 2°C and the depth of the mixed layer deepened by approximately 33-52 m. The forced deepening of the mixed layer injected nutrients into the euphotic zone, generating phytoplankton blooms 3-5 days after the passage of Katrina and Rita (2005).

  11. Efficiency of circulant diallels via mixed models in the selection of papaya genotypes resistant to foliar fungal diseases.

    PubMed

    Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G

    2014-07-02

    Diallel crossing methods provide information regarding the performance of genitors between themselves and their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limiting. One option for managing the number of parents involved is the adoption of circulant diallels. However, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of the linear mixed model method to estimate, for resistance to foliar fungal diseases, components of general and specific combining ability in circulant diallels with different s values. Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in classifying genitors by their combining abilities relative to the complete diallels. The number of crosses (s) in which each genitor participates in the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.

  12. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
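
    For context on the model-selection step: the standard AIC rule, which the simulations above found unreliable in the mixed ICA/PCA setting and which cross-validation replaces, simply penalizes the log-likelihood by the parameter count and takes the minimum over candidates. A generic sketch with hypothetical candidate values, not the authors' code:

```python
def aic(loglik, n_params):
    """Akaike Information Criterion: 2k - 2 ln L; lower values are preferred."""
    return 2.0 * n_params - 2.0 * loglik

def select_by_aic(candidates):
    """candidates: iterable of (label, loglik, n_params) tuples.
    Return the label of the candidate with the smallest AIC."""
    return min(candidates, key=lambda c: aic(c[1], c[2]))[0]
```

    The paper's point is that the asymptotic assumptions behind this penalty fail here even for large samples, which is why cross-validated likelihood is used for choosing the number of Gaussian components instead.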

  13. Forecasting carbon dioxide emissions based on a hybrid of mixed data sampling regression model and back propagation neural network in the USA.

    PubMed

    Zhao, Xin; Han, Meng; Ding, Lili; Calin, Adrian Cantemir

    2018-01-01

    The accurate forecast of carbon dioxide emissions is critical for policy makers to take proper measures to establish a low carbon society. This paper discusses a hybrid of the mixed data sampling (MIDAS) regression model and BP (back propagation) neural network (MIDAS-BP model) to forecast carbon dioxide emissions. Such analysis uses mixed frequency data to study the effects of quarterly economic growth on annual carbon dioxide emissions. The forecasting ability of MIDAS-BP is remarkably better than MIDAS, ordinary least square (OLS), polynomial distributed lags (PDL), autoregressive distributed lags (ADL), and auto-regressive moving average (ARMA) models. The MIDAS-BP model is suitable for forecasting carbon dioxide emissions for both the short and longer term. This research is expected to influence the methodology for forecasting carbon dioxide emissions by improving the forecast accuracy. Empirical results show that economic growth has both negative and positive effects on carbon dioxide emissions that last 15 quarters. Carbon dioxide emissions are also affected by their own change within 3 years. Therefore, there is a need for policy makers to explore an alternative way to develop the economy, especially applying new energy policies to establish a low carbon society.
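
    MIDAS regressions such as the one described relate a low-frequency dependent variable (annual emissions) to lags of a high-frequency regressor (quarterly growth) through a parsimonious weighting polynomial. One common choice, shown purely as an illustration since the abstract does not state the weighting scheme used, is the exponential Almon lag:

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon weights for a MIDAS regression: a two-parameter
    scheme assigning normalized, positive weights to n_lags
    high-frequency lags of the regressor."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(n_lags)]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical parameters; 15 quarterly lags echoes the reported effect horizon.
weights = exp_almon_weights(15, 0.05, -0.03)
```

    With only two parameters governing an arbitrary number of lags, the mixed-frequency equation stays estimable even when many quarterly lags enter the annual model.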

  14. Development of a Reduced-Order Three-Dimensional Flow Model for Thermal Mixing and Stratification Simulation during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    2017-09-03

    Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles in the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.

  15. A review of some problems in global-local stress analysis

    NASA Technical Reports Server (NTRS)

    Nelson, Richard B.

    1989-01-01

    The various types of local-global finite-element problems point out the need to develop a new generation of software. First, this new software needs to have a complete analysis capability, encompassing linear and nonlinear analysis of 1-, 2-, and 3-dimensional finite-element models, as well as mixed dimensional models. The software must be capable of treating static and dynamic (vibration and transient response) problems, including the stability effects of initial stress, and the software should be able to treat both elastic and elasto-plastic materials. The software should carry a set of optional diagnostics to assist the program user during model generation in order to help avoid obvious structural modeling errors. In addition, the program software should be well documented so the user has a complete technical reference for each type of element contained in the program library, including information on such topics as the type of numerical integration, use of underintegration, and inclusion of incompatible modes, etc. Some packaged information should also be available to assist the user in building mixed-dimensional models. An important advancement in finite-element software should be in the development of program modularity, so that the user can select from a menu various basic operations in matrix structural analysis.

  16. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model

    PubMed Central

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) derived images against plaster models for mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Results: Statistically significant results were obtained on comparing data between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch was 21.2 mm and 21.1 mm for CBCT and 22.5 mm and 22.5 mm for the plaster model, respectively. Conclusion: CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis. PMID:28852639
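
    The Tanaka-Johnston analysis applied to both the CBCT images and the plaster models is a regression-free prediction: half the summed mesiodistal widths of the four mandibular incisors plus a constant. A minimal sketch (the function name is ours, and the standard constants are used):

```python
def tanaka_johnston(sum_mand_incisors_mm, arch="mandibular"):
    """Predict the combined mesiodistal width (mm) of the unerupted canine
    and two premolars in one quadrant: half the summed widths of the four
    mandibular incisors, plus 10.5 mm for a mandibular quadrant or
    11.0 mm for a maxillary quadrant."""
    constant = 10.5 if arch == "mandibular" else 11.0
    return sum_mand_incisors_mm / 2.0 + constant
```

    Because the prediction is driven entirely by the incisor width measurements, any systematic measurement difference between CBCT images and plaster models propagates directly into the space estimate, which is what the study's comparison probes.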

  17. Evaluating significance in linear mixed-effects models in R.

    PubMed

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
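
    One of the significance methods evaluated above, the likelihood ratio test, compares twice the log-likelihood difference between the full and reduced models to a chi-square distribution. A sketch for a single dropped fixed effect (df = 1), where the chi-square survival function reduces to a complementary error function; the log-likelihood values are hypothetical, not from the paper's simulations:

```python
import math

def lrt_pvalue_df1(loglik_full, loglik_reduced):
    """Likelihood ratio test for one dropped fixed effect: refer the
    statistic 2*(llf - llr) to a chi-square with 1 degree of freedom,
    whose survival function is erfc(sqrt(stat / 2))."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return math.erfc(math.sqrt(stat / 2.0))

# Hypothetical fits: the statistic is 3.84, the 5% critical value for 1 df.
p = lrt_pvalue_df1(-100.0, -101.92)
```

    The paper's simulations indicate that p-values obtained this way (with models fitted by maximum likelihood) run somewhat anti-conservative in small samples, which is why the Kenward-Roger and Satterthwaite degree-of-freedom approximations under REML are recommended instead.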

  18. Effect of Secondary Jet-flow Angle on Performance of Turbine Inter-guide-vane Burner Based on Jet-vortex Flow

    NASA Astrophysics Data System (ADS)

    Zheng, Haifei; Tang, Hao; Xu, Xingya; Li, Ming

    2014-08-01

    Four different secondary airflow angles for turbine inter-guide-vane burners with a trapped vortex cavity were designed. A comparative analysis of the combustion performance under the different secondary airflow angles was carried out using numerical simulation. The turbulence was modeled using the Scale-Adaptive Simulation (SAS) turbulence model. Four cases with different secondary jet-flow angles (-45°, 0°, 30°, 60°) were studied. It was observed that, although the flow field distributions inside both the cavity and the main flow passage are very similar for the four models, the case with secondary jet-flows directed upwards at a 60° angle (1) has the best mixing effect, and (2) achieves complete combustion with a symmetric, uniform temperature distribution on the exit section of the guide vane (X = 70 mm), smaller temperature gradients, and reduced local high-temperature regions in the notch located on the guide vane.

  19. Shear-flexible finite-element models of laminated composite plates and shells

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Mathers, M. D.

    1975-01-01

    Several finite-element models are applied to the linear static, stability, and vibration analysis of laminated composite plates and shells. The study is based on linear shallow-shell theory, with the effects of shear deformation, anisotropic material behavior, and bending-extensional coupling included. Both stiffness (displacement) and mixed finite-element models are considered. Discussion is focused on the effects of shear deformation and anisotropic material behavior on the accuracy and convergence of different finite-element models. Numerical studies are presented which show the effects of increasing the order of the approximating polynomials, adding internal degrees of freedom, and using derivatives of generalized displacements as nodal parameters.

  20. Growth and inactivation of Salmonella at low refrigerated storage temperatures and thermal inactivation on raw chicken meat and laboratory media: mixed effect meta-analysis.

    PubMed

    Smadi, Hanan; Sargeant, Jan M; Shannon, Harry S; Raina, Parminder

    2012-12-01

    Growth and inactivation regression equations were developed to describe the effects of temperature on Salmonella concentration on chicken meat for refrigerated temperatures (⩽10°C) and for thermal treatment temperatures (55-70°C). The main objectives were: (i) to compare Salmonella growth/inactivation in chicken meat versus laboratory media; (ii) to create regression equations to estimate Salmonella growth in chicken meat that can be used in quantitative risk assessment (QRA) modeling; and (iii) to create regression equations to estimate the D-values needed to inactivate Salmonella in chicken meat. A systematic approach was used to identify the articles, critically appraise them, and pool outcomes across studies. Growth, represented as density (log10 CFU/g), and D-values (min) as a function of temperature were modeled using hierarchical mixed effects regression models. The meta-analysis found a significant difference (P⩽0.05) between the two matrices - chicken meat and laboratory media - for both growth at refrigerated temperatures and inactivation by thermal treatment. Growth and inactivation were significantly influenced by temperature after controlling for other variables; however, no consistent pattern in growth was found. Validation of the growth and inactivation equations against data not used in their development is needed. Copyright © 2012 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.
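    Thermal D-value regressions of the kind described here typically take the classical log-linear form log10 D = a + bT, with the z-value given by z = -1/b. A minimal sketch with invented, illustrative data (not the study's estimates):

```python
import numpy as np

# Hypothetical D-values (min) at thermal treatment temperatures (°C);
# the classical model is log10(D) = a + b*T.
T = np.array([55.0, 58.0, 61.0, 64.0, 67.0, 70.0])
D = np.array([30.0, 12.0, 4.5, 1.8, 0.7, 0.3])

b, a = np.polyfit(T, np.log10(D), 1)   # slope, intercept
z_value = -1.0 / b                     # °C rise for a 10-fold drop in D
D_pred_60 = 10 ** (a + b * 60.0)       # predicted D-value at 60 °C
```

    The hierarchical mixed effects models in the paper extend this by adding study-level random effects, so that between-study heterogeneity is pooled rather than ignored.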

  1. Model free simulations of a high speed reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Steinberger, Craig J.

    1992-01-01

    The effects of compressibility, chemical reaction exothermicity, and non-equilibrium chemical modeling in a combusting plane mixing layer were investigated by means of two-dimensional model-free numerical simulations. It was shown that increased compressibility generally had a stabilizing effect, resulting in reduced mixing and a reduced chemical reaction conversion rate. The appearance of 'eddy shocklets' in the flow was observed at high convective Mach numbers. Reaction exothermicity was found to enhance mixing at the initial stages of the layer's growth, but had a stabilizing effect at later times. Calculations were performed for a constant-rate chemical kinetics model and an Arrhenius-type kinetics model. The Arrhenius model was found to cause a greater temperature increase due to reaction than the constant-rate kinetics model; this had the same stabilizing effect as increasing the exothermicity of the reaction. Localized flame quenching was also observed when the Zeldovich number was relatively large.

  2. Proceedings of the U.S. Air Force and The Federal Republic of Germany Data Exchange Agreement Meeting (9th), Viscous and Interacting Flow Field Effects Held at Silver Spring, Maryland on 9-10 May 1984,

    DTIC Science & Technology

    1984-08-01

    found in References 1-3. 2. Modeling of Roughness Effects on Turbulent Flow. In turbulent flow analysis, use of time-averaged equations leads to the ... eddy viscosity and the mixing length, which are important parameters used in current algebraic modeling of the turbulence shear term. Two different ... surfaces with three-dimensional (distributed) roughness elements. Calculations using the present model have been compared with experimental data from

  3. Fluid dynamic analysis of a continuous stirred tank reactor for technical optimization of wastewater digestion.

    PubMed

    Hurtado, F J; Kaiser, A S; Zamora, B

    2015-03-15

    Continuous stirred tank reactors (CSTR) are widely used in wastewater treatment plants to reduce the organic matter and microorganisms present in sludge by anaerobic digestion. The present study carries out a numerical analysis of the fluid dynamic behaviour of a CSTR in order to optimize the process energetically. The characterization of the sludge flow inside the digester tank, the residence time distribution, and the active volume of the reactor under different criteria are determined. The effects of the design and power of the mixing system on the active volume of the CSTR are analyzed. The numerical model is solved under non-steady conditions by examining the evolution of the flow during the stop and restart of the mixing system. An intermittent regime of the mixing system, which kept the active volume between 94% and 99%, is achieved. The results obtained can lead to the eventual energy optimization of the mixing system of the CSTR. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because engine failure modes are diverse, a single Weibull distribution model carries a large error; by contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimate more accurate, greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
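    A mixed Weibull model combines per-failure-mode Weibull components with mixing weights. A small sketch with hypothetical shapes, scales, and weight (not the paper's fitted values), using scipy's weibull_min:

```python
from scipy.stats import weibull_min

# Illustrative two-component mixed Weibull model: each failure mode has
# its own (shape c, scale), combined with weights w and 1 - w.
w = 0.6
mode1 = weibull_min(c=1.5, scale=800.0)    # hypothetical early failure mode
mode2 = weibull_min(c=3.0, scale=2000.0)   # hypothetical wear-out mode

def reliability(t):
    """Mixture survival function R(t) = w*R1(t) + (1-w)*R2(t)."""
    return w * mode1.sf(t) + (1 - w) * mode2.sf(t)

R_1000 = reliability(1000.0)   # reliability at 1000 operating hours
```

    Fitting the weights and per-component parameters to failure data (the paper's dynamic weight coefficients and correlation-coefficient optimization) would sit on top of this representation.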

  5. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  6. Scaling laws and reduced-order models for mixing and reactive-transport in heterogeneous anisotropic porous media

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Karra, S.; Nakshatrala, K. B.

    2016-12-01

    Fundamental to the enhancement and control of the macroscopic spreading, mixing, and dilution of solute plumes in porous media are the topology of the flow field and the heterogeneity and anisotropy contrast of the porous medium. Traditionally, the literature has focused mainly on the shearing effects of the flow field (i.e., flow with zero helicity density, meaning that the flow is always perpendicular to the vorticity vector) on scalar mixing [2]. However, the combined effect of the anisotropy of the porous medium and the helical (or chaotic) structure of the flow field on species reactive-transport and mixing has rarely been studied. Recently, experiments have provided compelling evidence that chaotic advection and helical flows are inherent in porous media flows [1,2]. In this poster presentation, we present a non-intrusive physics-based model-order reduction framework to quantify the effects of species mixing in terms of reduced-order models (ROMs) and scaling laws. The ROM framework builds on recent advances in non-negative formulations for reactive-transport in heterogeneous anisotropic porous media [3] and non-intrusive ROM methods [4]. The objective is to generate computationally efficient and accurate ROMs for species mixing across different values of the input data and reactive-transport model parameters. This is achieved by using multiple ROMs, which also provides a way to assess the robustness of the proposed framework. Sensitivity analysis is performed to identify the important parameters. Representative numerical examples from reactive-transport are presented to illustrate the ability of the proposed ROMs to accurately describe mixing processes in porous media. [1] Lester, Metcalfe, and Trefry, "Is chaotic advection inherent to porous media flow?," PRL, 2013. [2] Ye, Chiogna, Cirpka, Grathwohl, and Rolle, "Experimental evidence of helical flow in porous media," PRL, 2015.
[3] Mudunuru, and Nakshatrala, "On enforcing maximum principles and achieving element-wise species balance for advection-diffusion-reaction equations under the finite element method," JCP, 2016. [4] Quarteroni, Manzoni, and Negri. "Reduced Basis Methods for Partial Differential Equations: An Introduction," Springer, 2016.

  7. Solvency supervision based on a total balance sheet approach

    NASA Astrophysics Data System (ADS)

    Pitselis, Georgios

    2009-11-01

    In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here; the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered as an internal industry equation (norm), which explains a systematic relation between a dependent variable (such as own funds) with independent variables (e.g. financial characteristics, such as assets, provisions, etc.). The above two methods are implemented with two data sets.

  8. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    NASA Astrophysics Data System (ADS)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6) . Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
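    The distinction between the two mixing rules: Dalton sums the species pressures evaluated at the full mixture volume, while Amagat sums the species volumes evaluated at the common mixture pressure. For ideal gases the two coincide, so any illustration needs a non-ideal equation of state. Below is a toy van der Waals comparison for a 1:1 He/SF6 mixture (approximate textbook vdW constants; the temperature and volume are arbitrary choices, and this is unrelated to the CTH shock calculations in the paper):

```python
from scipy.optimize import brentq

R = 0.08314  # gas constant, L·bar/(mol·K)

def vdw_pressure(n, V, T, a, b):
    """van der Waals pressure (bar) of n moles in volume V (L) at T (K)."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

def vdw_volume(n, P, T, a, b):
    """Invert the vdW equation for V at a given P (valid for T > Tc)."""
    return brentq(lambda V: vdw_pressure(n, V, T, a, b) - P,
                  n * b * 1.0001, 1000.0)

# Approximate textbook vdW constants: a (L^2·bar/mol^2), b (L/mol)
gases = {"He": (0.0346, 0.0238), "SF6": (7.857, 0.0879)}
moles = {"He": 1.0, "SF6": 1.0}      # 1:1 molar mixture
V_total, T = 2.0, 350.0              # arbitrary mixture volume (L), T (K)

# Dalton: each species fills the whole volume; the pressures add.
P_dalton = sum(vdw_pressure(moles[g], V_total, T, *gases[g]) for g in gases)

# Amagat: find the pressure at which the species volumes add to V_total.
P_amagat = brentq(
    lambda P: sum(vdw_volume(moles[g], P, T, *gases[g]) for g in gases)
    - V_total,
    1.0, 200.0)
```

    Even in this mild regime the two rules give visibly different pressures; under the strong shocks studied in the paper, the divergence in predicted shock speed, temperature, and pressure grows correspondingly.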

  9. Convective Overshoot in Stellar Interior

    NASA Astrophysics Data System (ADS)

    Zhang, Q. S.

    2015-07-01

    In stellar interiors, turbulent thermal convection transports matter and energy, and dominates the structure and evolution of stars. Convective overshoot, which results from non-local convective transport from the convection zone into the radiative zone, is one of the most uncertain and difficult factors in stellar physics at present. The classical method for studying convective overshoot is the non-local mixing-length theory (NMLT). However, the NMLT is based on phenomenological assumptions and leads to contradictions, and it has therefore been criticized in the literature. Helioseismic studies have shown that the NMLT cannot satisfy the helioseismic requirements, and have pointed out that only turbulent convection models (TCMs) can be accepted. In the first part of this thesis, the models and derivations of both the NMLT and the TCM are introduced. In the second part, the studies on the TCM (theoretical analysis and applications) and the development of a new model of convective overshoot mixing are described in detail. In the theoretical analysis of the TCM, approximate and asymptotic solutions were obtained under certain assumptions, and the structure of the overshoot region was discussed. Over a large space of the free parameters, the approximate/asymptotic solutions are in good agreement with the numerical results. An important result is that the extent of the overshoot region in which thermal energy transport is effective is 1 HK (where HK is the scale height of the turbulent kinetic energy), independent of the free parameters of the TCM. We applied the TCM and a simple overshoot mixing model in three cases. In the solar case, the temperature gradient in the overshoot region was found to agree with the helioseismic requirements, and the profiles of the lithium abundance, sound speed, and density of the solar models are also improved.
    For low-mass stars in the open clusters Hyades, Praesepe, NGC 6633, NGC 752, NGC 3680, and M67, treating convective-envelope overshoot mixing with the same model and parameter as in the solar case, the surface lithium abundances of the stellar models were consistent with the observations. In the case of the binary HY Vir, the same model and parameter also bring the radii and effective temperatures of the HY Vir stars, which have convective cores, into agreement with the observations. These results imply that the simple overshoot mixing model may need to be improved significantly. Motivated by this, we established a new model of overshoot mixing based on the fluid dynamic equations and worked out the diffusion coefficient of convective mixing. The diffusion coefficient behaves differently in the convection zone and in the overshoot region. In the overshoot region, buoyancy does negative work on the flow, so the fluid oscillates around its equilibrium location, which leads to a small scale and low efficiency of overshoot mixing. These physical properties differ significantly from those of the classical NMLT and are consistent with helioseismic studies and numerical simulations. The new model was tested in stellar evolution calculations, and its parameter was calibrated.

  10. Application of Mixed Effects Limits of Agreement in the Presence of Multiple Sources of Variability: Exemplar from the Comparison of Several Devices to Measure Respiratory Rate in COPD Patients

    PubMed Central

    Weir, Christopher J.; Rubio, Noah; Rabinovich, Roberto; Pinnock, Hilary; Hanley, Janet; McCloughan, Lucy; Drost, Ellen M.; Mantoani, Leandro C.; MacNee, William; McKinstry, Brian

    2016-01-01

    Introduction The Bland-Altman limits of agreement method is widely used to assess how well the measurements produced by two raters, devices or systems agree with each other. However, mixed effects versions of the method which take into account multiple sources of variability are less well described in the literature. We address the practical challenges of applying mixed effects limits of agreement to the comparison of several devices to measure respiratory rate in patients with chronic obstructive pulmonary disease (COPD). Methods Respiratory rate was measured in 21 people with a range of severity of COPD. Participants were asked to perform eleven different activities representative of daily life during a laboratory-based standardised protocol of 57 minutes. A mixed effects limits of agreement method was used to assess the agreement of five commercially available monitors (Camera, Photoplethysmography (PPG), Impedance, Accelerometer, and Chest-band) with the current gold standard device for measuring respiratory rate. Results Results produced using mixed effects limits of agreement were compared to results from a fixed effects method based on analysis of variance (ANOVA) and were found to be similar. The Accelerometer and Chest-band devices produced the narrowest limits of agreement (-8.63 to 4.27 and -9.99 to 6.80 respectively) with mean bias -2.18 and -1.60 breaths per minute. These devices also had the lowest within-participant and overall standard deviations (3.23 and 3.29 for Accelerometer and 4.17 and 4.28 for Chest-band respectively). Conclusions The mixed effects limits of agreement analysis enabled us to answer the question of which devices showed the strongest agreement with the gold standard device with respect to measuring respiratory rates. 
In particular, the estimated within-participant and overall standard deviations of the differences, which are easily obtainable from the mixed effects model results, gave a clear indication that the Accelerometer and Chest-band devices performed best. PMID:27973556
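    For reference, the classical single-level Bland-Altman computation that the mixed-effects version generalizes looks like this; the data here are simulated, not the study's, and a full mixed-effects version would estimate the between- and within-participant variance components with a random-effects fit rather than pooling all differences:

```python
import numpy as np

# Simulated paired differences (device minus gold standard), repeated
# within participants -- hypothetical data, not the study's measurements.
rng = np.random.default_rng(1)
n_subj, n_rep = 21, 10
bias_subj = rng.normal(scale=1.0, size=n_subj)            # per-participant bias
diffs = (-2.0 + bias_subj[:, None]
         + rng.normal(scale=3.0, size=(n_subj, n_rep)))

# Classical Bland-Altman limits of agreement on the pooled differences.
mean_bias = diffs.mean()
sd = diffs.std(ddof=1)
loa = (mean_bias - 1.96 * sd, mean_bias + 1.96 * sd)

# Within-participant spread (the quantity the mixed-effects analysis
# separates out from the between-participant component).
within_sd = diffs.std(axis=1, ddof=1).mean()
```

    Pooling like this ignores the clustering of repeated measurements within participants; the mixed-effects limits of agreement in the paper handle that by modelling participant as a random effect.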

  11. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 1: Analysis and results

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A model for predicting the distribution of liquid fuel droplets and fuel vapor in premixing-prevaporizing fuel-air mixing passages of the direct injection type is reported. This model consists of three computer programs: a calculation of the two-dimensional or axisymmetric air flow field, neglecting the effects of fuel; a calculation of the three-dimensional fuel droplet trajectories and evaporation rates in a known, moving air flow; and a calculation of fuel vapor diffusing into a moving three-dimensional air flow, with source terms dependent on the droplet evaporation rates. The fuel droplets are treated as individual particle classes, each satisfying Newton's law, a heat-transfer equation, and a mass-transfer equation. This fuel droplet model treats multicomponent fuels and incorporates the physics required for the treatment of elastic droplet collisions, droplet shattering, droplet coalescence, and droplet-wall interactions. The vapor diffusion calculation treats three-dimensional, gas-phase, turbulent diffusion processes. The analysis includes a model for the autoignition of the fuel-air mixture based upon the rate of formation of an important intermediate chemical species during the preignition period.

  12. On the validity of effective formulations for transport through heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    de Dreuzy, Jean-Raynald; Carrera, Jesus

    2016-04-01

    Geological heterogeneity enhances spreading of solutes and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through heterogeneous porous media (HPM), it should honor mean advection, mixing, and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the multi-rate mass transfer (MRMT) model to reproduce mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not reproduce the local concentration structures, it induces smaller non-dispersive mixing than HPM. However, long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, in general non-dispersive mixing cannot.

  13. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
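    Independent of the SVR meta-model proposed in the paper, first-order Sobol indices can be estimated directly with the standard pick-freeze Monte Carlo scheme. A sketch on a toy additive model whose indices are known analytically (for Y = X1 + 2*X2 with independent standard-normal inputs, S1 = 0.2 and S2 = 0.8):

```python
import numpy as np

# Pick-freeze Monte Carlo estimator of first-order Sobol indices.
rng = np.random.default_rng(42)
N = 200_000

def model(x):
    return x[:, 0] + 2.0 * x[:, 1]

A = rng.normal(size=(N, 2))   # two independent input sample matrices
B = rng.normal(size=(N, 2))
yA = model(A)
yB = model(B)
var_y = yA.var()

S = []
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]       # freeze X_i, resample all other inputs
    S.append(np.mean(yA * (model(ABi) - yB)) / var_y)
```

    Meta-model approaches such as the mixed-kernel SVR (or PCE) exist precisely to avoid the large number of model evaluations this direct scheme requires.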

  14. Analyzing Mixed-Dyadic Data Using Structural Equation Models

    ERIC Educational Resources Information Center

    Peugh, James L.; DiLillo, David; Panuzio, Jillian

    2013-01-01

    Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…

  15. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  16. A Growth Curve Analysis of the Course of Dysthymic Disorder: The Effects of Chronic Stress and Moderation by Adverse Parent-Child Relationships and Family History

    ERIC Educational Resources Information Center

    Dougherty, Lea R.; Klein, Daniel N.; Davila, Joanne

    2004-01-01

    Using mixed effects models, the authors examined the effects of chronic stress, adverse parent-child relationships, and family history on the 7.5-year course of dysthymic disorder. Participants included 97 outpatients with early-onset dysthymia who were assessed with semistructured interviews at baseline and 3 additional times at 30-month…

  17. Robust and Sensitive Analysis of Mouse Knockout Phenotypes

    PubMed Central

    Karp, Natasha A.; Melvin, David; Mott, Richard F.

    2012-01-01

    A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high throughput phenotyping programs. In addition, in a high throughput environment, operational issues lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-test or a 2-way ANOVA, in these situations gives flawed results and should not be used. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-Ray Absorptiometry data for the analysis of mouse knockout data, and compare to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is itself a common phenotype; adjusting for it will enhance the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models with in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically: as a method suitable for small batches, it reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained. PMID:23300663

  18. Evidence of a major gene from Bayesian segregation analyses of liability to osteochondral diseases in pigs.

    PubMed

    Kadarmideen, Haja N; Janss, Luc L G

    2005-11-01

    Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC records, and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of the MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384-37.81) compared to the polygenic variance (σu²). Consequently, heritabilities under mixed inheritance (range 0.65-0.90) were much higher than the heritabilities from the polygenes alone. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The MITM variant with an informative prior on σu² showed significant improvement in the marginal distributions and the accuracy of the parameters. The MITM with a "reduced polygenic model" for parameterization of the polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for the fixed effects avoided unidentifiability of these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; the results may also form a basis for understanding the genetic inheritance of this disease in other animals as well as in humans.

  19. Mixed Single/Double Precision in OpenIFS: A Detailed Study of Energy Savings, Scaling Effects, Architectural Effects, and Compilation Effects

    NASA Astrophysics Data System (ADS)

    Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy

    2017-04-01

    It has been shown that a mixed precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality of model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed precision variation of the OpenIFS model, analogous to the IFS variation used in Vana et al. We (1) present results of energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single-core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.
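    The underlying precision trade-off can be illustrated in a few lines (a generic toy accumulation, not OpenIFS): naive single-precision accumulation of many small increments drifts visibly, while double precision stays essentially exact.

```python
import numpy as np

# Sum 100,000 increments of 0.01 (exact answer: 1000.0) in each precision.
n, inc = 100_000, 0.01
s32 = np.float32(0.0)
s64 = 0.0
inc32 = np.float32(inc)
for _ in range(n):
    s32 += inc32          # float32: rounding error accumulates
    s64 += inc            # float64: stays near-exact
err32 = abs(float(s32) - 1000.0)
err64 = abs(s64 - 1000.0)
```

    In a weather model the picture is subtler: well-conditioned operations tolerate single precision (hence the speed and energy savings), while sensitive accumulations must be kept in double precision or compensated, which is why the mixed approach replaces double precision only judiciously.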

  20. Verification of Orthogrid Finite Element Modeling Techniques

    NASA Technical Reports Server (NTRS)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process while still adequately capturing the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam-and-shell element model. Results show that the shell element model performs the best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  1. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    PubMed

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.

    PubMed

    Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano

    2017-11-08

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors 0270-6474/17/3711021-16$15.00/0.

  3. The Promotion Strategy of Green Construction Materials: A Path Analysis Approach.

    PubMed

    Huang, Chung-Fah; Chen, Jung-Lu

    2015-10-14

    As one of the major materials used in construction, cement can be very resource-consuming and polluting to produce and use. Compared with traditional cement processing methods, dry-mix mortar is more environmentally friendly, reducing waste production and carbon emissions. Despite the continuous development and promotion of green construction materials, only a few of them are accepted or widely used in the market. In addition, the majority of existing research on green construction materials focuses more on their physical or chemical characteristics than on their promotion. Without effective promotion, their benefits cannot be fully appreciated and realized. Therefore, this study is conducted to explore the promotion of dry-mix mortar, one such green material. This study uses both qualitative and quantitative methods. First, through a case study, the potential for reducing carbon emissions is verified. Then a path analysis is conducted to verify the validity and predictability of the samples based on the technology acceptance model (TAM). According to the findings of this research, to ensure better promotion results and wider application of dry-mix mortar, it is suggested that more systematic efforts be invested in promoting the usefulness and benefits of dry-mix mortar. The model developed in this study can provide helpful references for future research and promotion of other green materials.

  4. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  5. A brief introduction to mixed effects modelling and multi-model inference in ecology

    PubMed Central

    Donaldson, Lynda; Correa-Cano, Maria Eugenia; Goodwin, Cecily E.D.

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions. PMID:29844961

  6. A brief introduction to mixed effects modelling and multi-model inference in ecology.

    PubMed

    Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
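    The core idea behind an LMM's random-intercept structure can be illustrated with the classical one-way variance-components estimator: within-group scatter estimates residual variance, while excess between-group scatter estimates the random-intercept variance. The sketch below uses only the standard library, and the balanced toy data are invented for illustration (in practice one would use a dedicated fitting routine such as lme4's lmer or statsmodels' MixedLM).

```python
# Method-of-moments variance components for a balanced one-way design:
# a minimal stand-in for the random-intercept part of an LMM.
from statistics import mean

# y[i][j]: measurement j for group (e.g. site or individual) i; hypothetical data
y = [[4.1, 3.9, 4.3], [5.0, 5.2, 4.8], [3.2, 3.0, 3.4]]

n = len(y[0])                       # replicates per group (balanced design)
k = len(y)                          # number of groups
grand = mean(v for g in y for v in g)
group_means = [mean(g) for g in y]

# Within-group mean square: estimate of the residual variance
ms_within = sum((v - m) ** 2 for g, m in zip(y, group_means) for v in g) / (k * (n - 1))
# Between-group mean square
ms_between = n * sum((m - grand) ** 2 for m in group_means) / (k - 1)
# Random-intercept (between-group) variance, truncated at zero
sigma2_group = max(0.0, (ms_between - ms_within) / n)

print(round(ms_within, 3), round(sigma2_group, 3))  # → 0.04 0.797
```

    Here most of the variation sits between groups, which is exactly the situation in which ignoring the grouping structure would badly misstate uncertainty.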

  7. Disturbance, A Mechanism for Increased Microbial Diversity in a Yellowstone National Park Hot Spring Mixing Zone

    NASA Astrophysics Data System (ADS)

    Howells, A. E.; Oiler, J.; Fecteau, K.; Boyd, E. S.; Shock, E.

    2014-12-01

    The parameters influencing species diversity in natural ecosystems are difficult to assess due to the long and experimentally prohibitive timescales needed to develop causative relationships among measurements. Ecological diversity-disturbance models suggest that disturbance is a mechanism for increased species diversity, allowing for coexistence of species at an intermediate level of disturbance. Observing this mechanism often requires long timescales, such as the succession of a forest after a fire. In this study we evaluated the effect of mixing of two end-member hydrothermal fluids on the diversity and structure of a microbial community where disturbance occurs on small temporal and spatial scales. Outflow channels from two hot springs of differing geochemical composition in Yellowstone National Park, one at pH 3.3 and 36 °C and the other at pH 7.6 and 61 °C, flow together to create a mixing zone on the order of a few meters. Geochemical measurements were made at both in-coming streams and at a site of complete mixing downstream of the mixing zone, at pH 6.5 and 46 °C. Compositions were estimated across the mixing zone at 1 cm intervals using microsensor temperature and conductivity measurements and a mixing model. Qualitatively, there are four distinct ecotones existing over ranges in temperature and pH across the mixing zone. Community analysis of the 16S rRNA genes of these ecotones shows a peak in diversity at maximal mixing. Principal component analysis of community 16S rRNA genes reflects coexistence of species, with communities at maximal mixing plotting intermediate to communities at distal ends of the mixing zone. These spatial biological and geochemical observations suggest that the mixing zone is a dynamic ecosystem where geochemistry and biological diversity are governed by changes in the flow rate and geochemical composition of the two hot spring sources. 
In ecology, understanding how environmental disruption increases species diversity is a foundation for ecosystem conservation. By studying a hot spring environment where detailed measurements of geochemical variation and community diversity can be made at small spatial scales, the mechanisms by which maximal diversity is achieved can be tested and may assist in applications of diversity-disturbance models for larger ecosystems.
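    A two-end-member mixing model of the kind described above reduces to simple linear conservative mixing: given the conductivities of the two source fluids and a microsensor reading in the mixing zone, the mixing fraction follows directly, and any conservative property (temperature, approximately) mixes linearly. The conductivity values below are hypothetical placeholders, not measurements from the study; the end-member temperatures are those quoted in the abstract.

```python
# Conservative two-end-member mixing: fraction of end member A, then a
# linearly mixed property. Conductivities (µS/cm) are hypothetical.
def mixing_fraction(c_mix, c_a, c_b):
    """Fraction of end member A in the mixture, assuming conservative mixing."""
    return (c_mix - c_b) / (c_a - c_b)

def mixed_property(f_a, p_a, p_b):
    """Linearly mixed conservative property (e.g. temperature)."""
    return f_a * p_a + (1.0 - f_a) * p_b

c_acid, c_alk = 1200.0, 400.0       # hypothetical end-member conductivities
f = mixing_fraction(800.0, c_acid, c_alk)
t = mixed_property(f, 36.0, 61.0)   # end-member temperatures from the abstract
print(f, t)  # → 0.5 48.5
```

    Note that pH does not mix linearly (hydrogen-ion activity does, approximately), which is why conductivity and temperature, rather than pH itself, make natural conservative tracers here.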

  8. Simulating the Cyclone Induced Turbulent Mixing in the Bay of Bengal using COAWST Model

    NASA Astrophysics Data System (ADS)

    Prakash, K. R.; Nigam, T.; Pant, V.

    2017-12-01

    Mixing in the upper oceanic layers (up to a few tens of meters from the surface) is an important process for understanding the evolution of sea surface properties. Enhanced mixing due to strong wind forcing at the surface deepens the mixed layer, which affects the air-sea exchange of heat and momentum fluxes and modulates sea surface temperature (SST). In the present study, we used the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model to demonstrate and quantify the enhanced cyclone-induced turbulent mixing in the case of a severe cyclonic storm. The COAWST model was configured over the Bay of Bengal (BoB) and used to simulate the atmospheric and oceanic conditions prevailing during tropical cyclone (TC) Phailin, which occurred over the BoB during 10-15 October 2013. The model-simulated cyclone track was validated against the IMD best track, and model SST against daily AVHRR SST data. Validation shows that the simulated track and intensity, SST, and salinity were in good agreement with observations, and the cyclone-induced cooling of the sea surface was well captured by the model. Model simulations show a considerable deepening (by 10-15 m) of the mixed layer and shoaling of the thermocline during TC Phailin. A power spectrum analysis was performed on the zonal and meridional baroclinic current components, which shows the strongest energy at 14 m depth. Model results were analyzed to investigate the non-uniform energy distribution in the water column from the surface up to the thermocline depth. The rotary spectra analysis highlights the downward direction of turbulent mixing during the TC Phailin period. Model simulations were used to quantify and interpret the near-inertial mixing generated by cyclone-induced strong wind stress and the near-inertial energy. These near-inertial oscillations are responsible for the enhancement of mixing operating on the strong post-monsoon (October-November) stratification in the BoB.
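    The near-inertial oscillations discussed above have a period set by the local Coriolis frequency, f = 2·Ω·sin(latitude). A quick stdlib check for a representative Bay of Bengal latitude (the 15° N value is an illustrative choice, not taken from the paper):

```python
# Near-inertial period from the Coriolis frequency f = 2*Omega*sin(lat).
import math

OMEGA = 7.2921e-5          # Earth's rotation rate, rad/s

def inertial_period_hours(lat_deg):
    """Near-inertial period T = 2*pi / f, in hours."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return 2.0 * math.pi / f / 3600.0

print(round(inertial_period_hours(15.0), 1))  # → 46.2
```

    A spectral peak near this period in the baroclinic currents is the usual signature of wind-generated near-inertial energy propagating downward after a storm.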

  9. Pharmaceutical Price Controls and Minimum Efficacy Regulation: Evidence from the United States and Italy

    PubMed Central

    Atella, Vincenzo; Bhattacharya, Jay; Carbonari, Lorenzo

    2012-01-01

    Objective: This article examines the relationship between drug price and drug quality and how it varies across two of the most common regulatory regimes in the pharmaceutical market: minimum efficacy standards (MES) and a mix of MES and price control mechanisms (MES + PC). Data Sources: Our primary data source is the Tufts-New England Medical Center Cost-Effectiveness Analysis Registry, which has been merged with price data taken from MEPS (for the United States) and AIFA (for Italy). Study Design: Through a simple model of adverse selection we model the interaction between firms, heterogeneous buyers, and the regulator. Principal Findings: The theoretical analysis provides two results. First, an MES regime provides greater incentives to produce high-quality drugs. Second, an MES + PC mix reduces the difference in price between the highest and lowest quality drugs on the market. Conclusion: The empirical analysis based on United States and Italian data corroborates these results. PMID:22091623

  10. Heavy quarkonium hybrids: Spectrum, decay, and mixing

    NASA Astrophysics Data System (ADS)

    Oncala, Ruben; Soto, Joan

    2017-07-01

    We present a largely model-independent analysis of the lighter heavy quarkonium hybrids based on the strong coupling regime of potential nonrelativistic QCD. We calculate the spectrum at leading order, including the mixing of static hybrid states. We use potentials that fulfill the required short and long distance theoretical constraints and fit well the available lattice data. We argue that the decay width to the lower lying heavy quarkonia can be reliably estimated in some cases and provide results for a selected set of decays. We also consider the mixing with heavy quarkonium states. We establish the form of the mixing potential at O(1/mQ), mQ being the mass of the heavy quarks, and work out its short and long distance constraints. The weak coupling regime of potential nonrelativistic QCD and the effective string theory of QCD are used for that goal. We show that the mixing effects may indeed be important and produce large spin symmetry violations. Most of the isospin zero XYZ states fit well in our spectrum, either as a hybrid or standard quarkonium candidate.

  11. Real medical benefit assessed by indirect comparison.

    PubMed

    Falissard, Bruno; Zylberman, Myriam; Cucherat, Michel; Izard, Valérie; Meyer, François

    2009-01-01

    Frequently, in data packages submitted for Marketing Approval to the CHMP, there is a lack of relevant head-to-head comparisons of medicinal products that could enable national authorities responsible for the approval of reimbursement to assess the Added Therapeutic Value (ASMR) of new clinical entities or line extensions of existing therapies. Indirect or mixed treatment comparisons (MTC) are methods stemming from the field of meta-analysis that have been designed to tackle this problem. Adjusted indirect comparisons, meta-regressions, mixed models, and Bayesian network analyses pool results of randomised controlled trials (RCTs), enabling a quantitative synthesis. The REAL procedure, recently developed by the HAS (French National Authority for Health), is a mixture of an MTC and an effect model based on expert opinions. It is intended to translate the efficacy observed in the trials into the effectiveness expected in day-to-day clinical practice in France.

  12. Transport theory and the WKB approximation for interplanetary MHD fluctuations

    NASA Technical Reports Server (NTRS)

    Matthaeus, William H.; Zhou, YE; Zank, G. P.; Oughton, S.

    1994-01-01

    An alternative approach, based on a multiple scale analysis, is presented in order to reconcile the traditional Wentzel-Kramers-Brillouin (WKB) approach to the modeling of interplanetary fluctuations in a mildly inhomogeneous large-scale flow with a more recently developed transport theory. This enables us to compare directly, at a formal level, the inherent structure of the two models. In the case of noninteracting, incompressible (Alfvén) waves, the principal difference between the two models is the presence of leading-order couplings (called 'mixing effects') in the non-WKB turbulence model which are absent in a WKB development. Within the context of linearized MHD, two cases have been identified for which the leading-order non-WKB 'mixing term' does not vanish at zero wavelength. For these cases the WKB expansion is divergent, whereas the multiple-scale theory is well behaved. We have thus established that the WKB results are contained within the multiple-scale theory, but leading-order mixing effects, which are likely to have important observational consequences, can never be recovered in the WKB-style expansion. Properties of the higher-order terms in each expansion are also discussed, leading to the conclusion that the non-WKB hierarchy may be applicable even when the scale separation parameter is not small.

  13. Mixed Convection Flow of Nanofluid in Presence of an Inclined Magnetic Field

    PubMed Central

    Noreen, Saima; Ahmed, Bashir; Hayat, Tasawar

    2013-01-01

    This research is concerned with the mixed convection peristaltic flow of nanofluid in an inclined asymmetric channel. The fluid is conducting in the presence of inclined magnetic field. The governing equations are modelled. Mathematical formulation is completed through long wavelength and low Reynolds number approach. Numerical solution to the nonlinear analysis is made by shooting technique. Attention is mainly focused to the effects of Brownian motion and thermophoretic diffusion of nanoparticle. Results for velocity, temperature, concentration, pumping and trapping are obtained and analyzed in detail. PMID:24086276

  14. Multivariate Longitudinal Analysis with Bivariate Correlation Test

    PubMed Central

    Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory

    2016-01-01

    In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model’s parameters estimators. These estimators can be used in the framework of the multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
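    The likelihood ratio test used here to decide whether two outcomes should be modelled jointly compares the maximized log-likelihoods of the nested (uncorrelated random effects) and full models. A minimal stdlib sketch follows; the log-likelihood values are hypothetical, and for a single restricted correlation parameter the null distribution is chi-squared with 1 df, whose tail probability has a closed form via erfc (boundary corrections for variance parameters are ignored in this sketch).

```python
# Likelihood-ratio test for one restricted parameter (chi-squared, 1 df).
import math

def lrt_pvalue_df1(loglik_null, loglik_full):
    """Statistic 2*(llf - ll0) and its p-value against chi2(1):
    P(chi2_1 > x) = erfc(sqrt(x/2))."""
    stat = 2.0 * (loglik_full - loglik_null)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical fitted log-likelihoods for the reduced and full models
stat, p = lrt_pvalue_df1(-1052.3, -1048.1)
print(round(stat, 2), p)
```

    A small p-value favours retaining the cross-correlation, i.e. modelling the two dependent variables jointly rather than separately.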

  15. A hybrid predictive model for acoustic noise in urban areas based on time series analysis and artificial neural network

    NASA Astrophysics Data System (ADS)

    Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine

    2017-06-01

    The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are largely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on the mixing of two different approaches: Time Series Analysis (TSA) and Artificial Neural Networks (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists in the evaluation of noise levels by means of TSA and, once the differences (residuals) between TSA estimations and observed data have been calculated, in the training of an ANN on the residuals. This hybrid model exploits the strengths of both techniques, with performance varying significantly with the number of steps forward in the prediction. It will be shown that the best results, in terms of prediction, are achieved predicting one step ahead in the future. A 7-day prediction can also be performed, with a slightly greater error, but offering a larger prediction horizon than the single-day-ahead model.
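    The hybrid decomposition can be sketched in a few lines: a seasonal-mean TSA component captures the weekly cycle, and a second model is trained on what the TSA part leaves over. To keep the example dependency-free, a least-squares AR(1) fit stands in here for the paper's neural network, and the daily noise levels are synthetic, not the authors' measurements.

```python
# TSA + residual-model hybrid, stdlib only. Synthetic daily Leq values (dB).
from statistics import mean

levels = [68, 65, 70, 72, 71, 74, 66,   # two synthetic weeks
          69, 64, 71, 73, 70, 75, 67]
period = 7

# TSA part: seasonal means over the weekly cycle
seasonal = [mean(levels[d::period]) for d in range(period)]
tsa_fit = [seasonal[i % period] for i in range(len(levels))]
residuals = [y - f for y, f in zip(levels, tsa_fit)]

# Stand-in for the ANN: least-squares AR(1) coefficient on the residuals
num = sum(residuals[i - 1] * residuals[i] for i in range(1, len(residuals)))
den = sum(r * r for r in residuals[:-1])
phi = num / den if den else 0.0

# One-step-ahead forecast = seasonal component + residual-model correction
next_day = len(levels) % period
forecast = seasonal[next_day] + phi * residuals[-1]
print(round(forecast, 2))  # → 68.31
```

    The same structure generalizes to the paper's setup: replace the AR(1) line with any learner trained on the residual series, and iterate the correction for multi-step (e.g. 7-day) forecasts at the cost of growing error.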

  16. Investigation of Turbulent Entrainment-Mixing Processes With a New Particle-Resolved Direct Numerical Simulation Model

    DOE PAGES

    Gao, Zheng; Liu, Yangang; Li, Xiaolin; ...

    2018-02-19

    Here, a new particle-resolved three-dimensional direct numerical simulation (DNS) model is developed that combines Lagrangian droplet tracking with the Eulerian field representation of turbulence near the Kolmogorov microscale. Six numerical experiments are performed to investigate the processes of entrainment of clear air and subsequent mixing with cloudy air and their interactions with cloud microphysics. The experiments are designed to represent different combinations of three configurations of initial cloudy area and two turbulence modes (decaying and forced turbulence). Five existing measures of microphysical homogeneous mixing degree are examined, modified, and compared in terms of their ability as a unifying measure to represent the effect of various entrainment-mixing mechanisms on cloud microphysics. Also examined and compared are the conventional Damköhler number and transition scale number as a dynamical measure of different mixing mechanisms. Relationships between the various microphysical measures and dynamical measures are investigated in search for a unified parameterization of entrainment-mixing processes. The results show that even with the same cloud water fraction, the thermodynamic and microphysical properties are different, especially for the decaying cases. Further analysis confirms that despite the detailed differences in cloud properties among the six simulation scenarios, the variety of turbulent entrainment-mixing mechanisms can be reasonably represented with power-law relationships between the microphysical homogeneous mixing degrees and the dynamical measures.
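    The Damköhler number referred to above compares a turbulent mixing timescale with a microphysical (droplet evaporation) timescale; Da >> 1 points toward inhomogeneous mixing, Da << 1 toward homogeneous mixing. The sketch below uses the textbook definition with the mixing timescale from eddy size and dissipation rate; the input values are order-of-magnitude illustrations, not the paper's simulation parameters.

```python
# Da = tau_mix / tau_evap, with tau_mix = (L^2 / epsilon)^(1/3).
def damkohler(length_m, dissipation_m2s3, tau_evap_s):
    """Damkoehler number for an eddy of size L in turbulence with
    dissipation rate epsilon, against an evaporation timescale."""
    tau_mix = (length_m ** 2 / dissipation_m2s3) ** (1.0 / 3.0)
    return tau_mix / tau_evap_s

# Hypothetical cloud-edge values: 10 m eddy, weak turbulence, ~3 s evaporation
da = damkohler(length_m=10.0, dissipation_m2s3=1e-3, tau_evap_s=3.0)
print(round(da, 1))  # → 15.5
```

    Because Da depends on the eddy size chosen, a single flow spans a range of Damköhler numbers, which is one motivation for the transition scale number also examined in the paper.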

  17. Investigation of Turbulent Entrainment-Mixing Processes With a New Particle-Resolved Direct Numerical Simulation Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zheng; Liu, Yangang; Li, Xiaolin

    Here, a new particle-resolved three-dimensional direct numerical simulation (DNS) model is developed that combines Lagrangian droplet tracking with the Eulerian field representation of turbulence near the Kolmogorov microscale. Six numerical experiments are performed to investigate the processes of entrainment of clear air and subsequent mixing with cloudy air and their interactions with cloud microphysics. The experiments are designed to represent different combinations of three configurations of initial cloudy area and two turbulence modes (decaying and forced turbulence). Five existing measures of microphysical homogeneous mixing degree are examined, modified, and compared in terms of their ability as a unifying measure to represent the effect of various entrainment-mixing mechanisms on cloud microphysics. Also examined and compared are the conventional Damköhler number and transition scale number as a dynamical measure of different mixing mechanisms. Relationships between the various microphysical measures and dynamical measures are investigated in search for a unified parameterization of entrainment-mixing processes. The results show that even with the same cloud water fraction, the thermodynamic and microphysical properties are different, especially for the decaying cases. Further analysis confirms that despite the detailed differences in cloud properties among the six simulation scenarios, the variety of turbulent entrainment-mixing mechanisms can be reasonably represented with power-law relationships between the microphysical homogeneous mixing degrees and the dynamical measures.

  18. An Analysis of Results of a High-Resolution World Ocean Circulation Model.

    DTIC Science & Technology

    1988-03-01

    [Abstract not available; the extracted fragment is a table-of-contents listing of experiments: baseline (Laplacian mixing) integration; isopycnal mixing integration; one-half degree, twenty-level experiments; baseline (three-year interior restoring) integration.]

  19. THE ROLE OF THE MAGNETOROTATIONAL INSTABILITY IN MASSIVE STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, J. Craig; Kagan, Daniel; Chatzopoulos, Emmanouil, E-mail: wheel@astro.as.utexas.edu

    2015-01-20

    The magnetorotational instability (MRI) is key to physics in accretion disks and is widely considered to play some role in massive star core collapse. Models of rotating massive stars naturally develop very strong shear at composition boundaries, a necessary condition for MRI instability, and the MRI is subject to triply diffusive destabilizing effects in radiative regions. We have used the MESA stellar evolution code to compute magnetic effects due to the Spruit-Tayler (ST) mechanism and the MRI, separately and together, in a sample of massive star models. We find that the MRI can be active in the later stages of massive star evolution, leading to mixing effects that are not captured in models that neglect the MRI. The MRI and related magnetorotational effects can move models of given zero-age main sequence mass across "boundaries" from degenerate CO cores to degenerate O/Ne/Mg cores and from degenerate O/Ne/Mg cores to iron cores, thus affecting the final evolution and the physics of core collapse. The MRI acting alone can slow the rotation of the inner core in general agreement with the observed "initial" rotation rates of pulsars. The MRI analysis suggests that localized fields ∼10^12 G may exist at the boundary of the iron core. With both the ST and MRI mechanisms active in the 20 M☉ model, we find that the helium shell mixes entirely out into the envelope. Enhanced mixing could yield a population of yellow or even blue supergiant supernova progenitors that would not be standard SN IIP.

  20. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  1. Generalized Functional Linear Models for Gene-based Case-Control Association Studies

    PubMed Central

    Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao

    2014-01-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  2. Effect of Monospecific and Mixed Sea-Buckthorn (Hippophae rhamnoides) Plantations on the Structure and Activity of Soil Microbial Communities

    PubMed Central

    Yu, Xuan; Liu, Xu; Zhao, Zhong; Liu, Jinliang; Zhang, Shunxiang

    2015-01-01

    This study aims to evaluate the effect of different afforestation models on soil microbial composition in the Loess Plateau in China. In particular, we determined soil physicochemical properties, enzyme activities, and microbial community structures in the top 0 cm to 10 cm soil underneath a pure Hippophae rhamnoides (SS) stand and three mixed stands, namely, H. rhamnoides and Robinia pseucdoacacia (SC), H. rhamnoides and Pinus tabulaeformis (SY), and H. rhamnoides and Platycladus orientalis (SB). Results showed that total organic carbon (TOC), total nitrogen, and ammonium (NH4+) contents were higher in SY and SB than in SS. The total microbial biomass, bacterial biomass, and Gram+ biomass of the three mixed stands were significantly higher than those of the pure stand. However, no significant difference was found in fungal biomass. Correlation analysis suggested that soil microbial communities are significantly and positively correlated with some chemical parameters of soil, such as TOC, total phosphorus, total potassium, available phosphorus, NH4+ content, nitrate (NO3−) content, and the enzyme activities of urease, peroxidase, and phosphatase. Principal component analysis showed that the microbial community structures of SB and SS could clearly be discriminated from each other and from the others, whereas SY and SC were similar. In conclusion, tree species indirectly but significantly affect soil microbial communities and enzyme activities through soil physicochemical properties. In addition, mixing P. tabulaeformis or P. orientalis in H. rhamnoides plantations is a suitable afforestation model in the Loess Plateau, because of significant positive effects on soil nutrient conditions, microbial community, and enzyme activities over pure plantations. PMID:25658843

  3. Global analysis of fermion mixing with exotics

    NASA Technical Reports Server (NTRS)

    Nardi, Enrico; Roulet, Esteban; Tommasini, Daniele

    1991-01-01

    The limits are analyzed on deviation of the lepton and quark weak-couplings from their standard model values in a general class of models where the known fermions are allowed to mix with new heavy particles with exotic SU(2) x U(1) quantum number assignments (left-handed singlets or right-handed doublets). These mixings appear in many extensions of the electroweak theory such as models with mirror fermions, E(sub 6) models, etc. The results update previous analyses and improve considerably the existing bounds.

  4. Formulation of Water Quality Models for Streams, Lakes and Reservoirs: Modeler’s Perspective

    DTIC Science & Technology

    1989-07-01

    dilution of effluent plumes. These mixing models also address the question of whether a pollutant has been sufficiently diluted to meet discharge... PS releases, e.g. DISPER or TADPOL (Almquist et al. 1977) for passive mixing in the far field, and various jet and plume mixing models in uniform or... Experiment Station, Vicksburg, MS. Harleman, D. R. F. 1982 (Mar). "Hydrothermal Analysis of Lakes and Reservoirs," Journal of Hydraulics Division

  5. Numerical Study of Buoyancy and Different Diffusion Effects on the Structure and Dynamics of Triple Flames

    NASA Technical Reports Server (NTRS)

    Chen, Jyh-Yuan; Echekki, Tarek

    2001-01-01

    Numerical simulations of 2-D triple flames under gravity force have been implemented to identify the effects of gravity on triple flame structure and propagation properties and to understand the mechanisms of instabilities resulting from both heat release and buoyancy effects. A wide range of gravity conditions, heat release, and mixing widths for a scalar mixing layer are computed for downward-propagating (in the same direction as the gravity vector) and upward-propagating (in the opposite direction of the gravity vector) triple flames. Results of numerical simulations show that gravity strongly affects the triple flame speed through its contribution to the overall flow field. A simple analytical model for the triple flame speed, which accounts for both buoyancy and heat release, is developed. Comparisons of the proposed model with the numerical results for a wide range of gravity, heat release, and mixing width conditions yield very good agreement. The analysis shows that under neutral diffusion, downward propagation reduces the triple flame speed, while upward propagation enhances it. For the former condition, a critical Froude number may be evaluated, which corresponds to a vanishing triple flame speed. Downward-propagating triple flames at relatively strong gravity effects have exhibited instabilities. These instabilities are generated without any artificial forcing of the flow. Instead, disturbances are initiated by minute round-off errors in the numerical simulations, and subsequently amplified by instabilities. A linear stability analysis on mean profiles of stable triple flame configurations has been performed to identify the most amplified frequency in spatially developed flows. The eigenfunction equations obtained from the linearized disturbance equations are solved using the shooting method. The linear stability analysis yields reasonably good agreement with the observed frequencies of the unstable triple flames. 
The frequencies and amplitudes of disturbances increase with the magnitude of the gravity vector. Moreover, disturbances appear to be most amplified just downstream of the premixed branches. The effects of mixing width and differential diffusion are investigated and their roles on the flame stability are studied.

  6. Transient Ejector Analysis (TEA) code user's guide

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1993-01-01

    A FORTRAN computer program for the semi-analytic prediction of unsteady thrust-augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in the modeling of the physics for each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA as a research code (not as a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.

  7. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analysis of a single subject-level summary measure that indexes change for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-effects regression models (with both random and fixed effects) for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
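    The advocated mixed (random and fixed effects) approach can be sketched with a small simulation; a minimal example using Python's statsmodels, where all variable names and data-generating values are illustrative rather than taken from the article:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate 30 subjects with 5 visits each: the outcome declines over time,
# with a random intercept and random slope per subject.
n_subj, n_visits = 30, 5
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
b0 = rng.normal(0.0, 1.0, n_subj)   # random intercepts
b1 = rng.normal(0.0, 0.2, n_subj)   # random slopes
y = 10.0 + b0[subj] + (-0.5 + b1[subj]) * time + rng.normal(0.0, 0.5, subj.size)

df = pd.DataFrame({"y": y, "time": time, "subject": subj})

# Mixed model: fixed effect of time, random intercept and slope by subject.
model = smf.mixedlm("y ~ time", df, groups=df["subject"], re_formula="~time")
fit = model.fit()
print(fit.fe_params["time"])  # fixed-effect slope estimate, near the true -0.5
```

    The fixed effect of time estimates the population-average trajectory, while the per-subject random intercepts and slopes absorb between-subject variability, which is the key advantage over fitting each subject separately or ignoring clustering.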

  8. Effective Stochastic Model for Reactive Transport

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A. M.; Zheng, B.; Barajas-Solano, D. A.

    2017-12-01

    We propose an effective stochastic advection-diffusion-reaction (SADR) model. Unlike traditional advection-dispersion-reaction models, the SADR model describes mechanical and diffusive mixing as two separate processes. In the SADR model, mechanical mixing is driven by a random advective velocity with variance given by the coefficient of mechanical dispersion. Diffusive mixing is modeled as Fickian diffusion with an effective diffusion coefficient. Both coefficients are given in terms of the Peclet number (Pe) and the coefficient of molecular diffusion. We use experimental results to demonstrate that for transport and bimolecular reactions in porous media the SADR model is significantly more accurate than the traditional dispersion model, which overestimates the mass of the reaction product by as much as 25%.
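    The separation of mechanical and diffusive mixing can be illustrated with a toy 1-D particle-tracking sketch; the parameter values are arbitrary and the implementation is not the authors' SADR model, only the idea of two distinct random processes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mechanical mixing: a random advective velocity redrawn each step, scaled
# so its fluctuations reproduce a mechanical-dispersion coefficient D_mech.
# Diffusive mixing: ordinary Fickian noise with coefficient D_diff.
n, dt, steps = 5000, 0.1, 100
u_mean, D_mech, D_diff = 1.0, 0.5, 0.05

x = np.zeros(n)
for _ in range(steps):
    u = rng.normal(u_mean, np.sqrt(2.0 * D_mech / dt), n)         # random advection
    x += u * dt + rng.normal(0.0, np.sqrt(2.0 * D_diff * dt), n)  # Fickian diffusion

T = steps * dt
print(x.mean())  # ~ u_mean * T
print(x.var())   # ~ 2 * (D_mech + D_diff) * T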

  9. Strategic Analysis of Terrorism

    NASA Astrophysics Data System (ADS)

    Arce, Daniel G.; Sandler, Todd

    Two areas that are increasingly studied in the game-theoretic literature on terrorism and counterterrorism are collective action and asymmetric information. One contribution of this chapter is a survey and extension of continuous policy models with differentiable payoff functions. In this way, policies can be characterized as strategic substitutes (e.g., proactive measures) or strategic complements (e.g., defensive measures). Mixed substitute-complement models are also introduced. We show that the efficiency of counterterror policy depends upon (i) the strategic substitutes-complements characterization, and (ii) who initiates the action. Surprisingly, in mixed models the dichotomy between individual and collective action may disappear. A second contribution is the consideration of a signaling model where indiscriminate spectacular terrorist attacks may erode terrorists’ support among their constituency, and proactive government responses can create a backlash effect in favor of terrorists. A novel equilibrium of this model reflects the well-documented ineffectiveness of terrorism in achieving its stated goals.

  10. Fully-coupled analysis of jet mixing problems. Part 1. Shock-capturing model, SCIPVIS

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Wolf, D. E.

    1984-01-01

    A computational model, SCIPVIS, is described which predicts the multiple cell shock structure in imperfectly expanded, turbulent, axisymmetric jets. The model spatially integrates the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic flow regions and a pressure-split approximation in subsonic flow regions. The regions are coupled using a viscous-characteristic procedure. Turbulence processes are represented via the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive analysis of the wake-like mixing process occurring behind Mach discs are handled in a rigorous manner. Calculations are presented exhibiting the fundamental interactive processes occurring in supersonic jets, and the model is assessed via comparisons with detailed laboratory data for a variety of under- and overexpanded jets.

  11. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    PubMed

    Diaz, Francisco J

    2016-10-15

    We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Analysis and testing of high entrainment single nozzle jet pumps with variable mixing tubes

    NASA Technical Reports Server (NTRS)

    Hickman, K. E.; Hill, P. G.; Gilbert, G. B.

    1972-01-01

    An analytical model was developed to predict the performance characteristics of axisymmetric single-nozzle jet pumps with variable area mixing tubes. The primary flow may be subsonic or supersonic. The computer program uses integral techniques to calculate the velocity profiles and the wall static pressures that result from the mixing of the supersonic primary jet and the subsonic secondary flow. An experimental program was conducted to measure mixing tube wall static pressure variations, velocity profiles, and temperature profiles in a variable area mixing tube with a supersonic primary jet. Static pressure variations were measured at four different secondary flow rates. These test results were used to evaluate the analytical model. The analytical results compared well to the experimental data. Therefore, the analysis is believed to be ready for use to relate jet pump performance characteristics to mixing tube design.

  13. The influence of a wind tunnel on helicopter rotational noise: Formulation of analysis

    NASA Technical Reports Server (NTRS)

    Mosher, M.

    1984-01-01

    An analytical model is discussed that can be used to examine the effects of wind tunnel walls on helicopter rotational noise. A complete physical model of an acoustic source in a wind tunnel is described and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. The simplified physical model is then modeled as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. Details of generating a suitable Green's function and integral equation are included and the equation is discussed and also given for a two-dimensional case.

  14. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding straightforward pharmacokinetic study design, which also considers the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect, in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and of dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
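    The Monte Carlo idea behind such power calculations can be sketched in a few lines. This toy version uses a simple linear model and a likelihood-ratio test rather than a full NONMEM population model, and every name and value is illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power(n_per_arm, effect, n_sim=500):
    """Monte Carlo power for detecting a binary covariate effect on a
    normally distributed (log-scale) PK parameter via a likelihood-ratio test."""
    hits = 0
    x = np.repeat([0.0, 1.0], n_per_arm)
    for _ in range(n_sim):
        y = 1.0 + effect * x + rng.normal(0.0, 0.3, x.size)  # e.g. log(CL/F)
        # Full vs reduced model: ML residual sums of squares give the LRT.
        rss1 = np.sum((y - np.poly1d(np.polyfit(x, y, 1))(x)) ** 2)
        rss0 = np.sum((y - y.mean()) ** 2)
        lrt = x.size * np.log(rss0 / rss1)
        hits += stats.chi2.sf(lrt, df=1) < 0.05
    return hits / n_sim

print(power(20, 0.3))  # fraction of simulated trials detecting the covariate
```

    Sweeping `n_per_arm` until the estimated power crosses 80% reproduces the basic sample-size workflow; the full method additionally maps simulated likelihood-ratio statistics across designs to avoid refitting every scenario.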

  15. Perspectives On Dilution Jet Mixing

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.; Srinivasan, R.

    1990-01-01

    NASA recently completed program of measurements and modeling of mixing of transverse jets with ducted crossflow, motivated by need to design or tailor temperature pattern at combustor exit in gas turbine engines. Objectives of program to identify dominant physical mechanisms governing mixing, extend empirical models to provide near-term predictive capability, and compare numerical code calculations with data to guide future analysis improvement efforts.

  16. A microphysical pathway analysis to investigate aerosol effects on convective clouds

    NASA Astrophysics Data System (ADS)

    Heikenfeld, Max; White, Bethan; Labbouz, Laurent; Stier, Philip

    2017-04-01

    The impact of aerosols on ice- and mixed-phase processes in convective clouds remains highly uncertain, which has strong implications for estimates of the role of aerosol-cloud interactions in the climate system. The wide range of interacting microphysical processes are still poorly understood and generally not resolved in global climate models. To understand and visualise these processes and to conduct a detailed pathway analysis, we have added diagnostic output of all individual process rates for number and mass mixing ratios to two commonly-used cloud microphysics schemes (Thompson and Morrison) in WRF. This allows us to investigate the response of individual processes to changes in aerosol conditions and the propagation of perturbations throughout the development of convective clouds. Aerosol effects on cloud microphysics could strongly depend on the representation of these interactions in the model. We use different model complexities with regard to aerosol-cloud interactions ranging from simulations with different levels of fixed cloud droplet number concentration (CDNC) as a proxy for aerosol, to prognostic CDNC with fixed modal aerosol distributions. Furthermore, we have implemented the HAM aerosol model in WRF-chem to also perform simulations with a fully interactive aerosol scheme. We employ a hierarchy of simulation types to understand the evolution of cloud microphysical perturbations in atmospheric convection. Idealised supercell simulations are chosen to present and test the analysis methods for a strongly confined and well-studied case. We then extend the analysis to large case study simulations of tropical convection over the Amazon rainforest. For both cases we apply our analyses to individually tracked convective cells. Our results show the impact of model uncertainties on the understanding of aerosol-convection interactions and have implications for improving process representation in models.

  17. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared with other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
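    The over-dispersion that motivates the random-effects term, and the latent-variable (Poisson-log-normal) representation, can be seen in a toy simulation; the values are illustrative and this is not the MACAU inference algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent-variable view of an over-dispersed count model: a Gaussian random
# effect on the log-mean underneath a Poisson gives variance > mean,
# unlike a plain Poisson (variance = mean).
n = 200_000
mu = 2.0        # baseline log-mean
sigma = 0.5     # sd of the Gaussian random effect
eta = rng.normal(mu, sigma, n)
counts = rng.poisson(np.exp(eta))

# Plain Poisson with the same marginal mean, for comparison.
plain = rng.poisson(np.exp(mu + sigma**2 / 2), n)

print(counts.mean(), counts.var())  # variance well above the mean
print(plain.mean(), plain.var())    # variance approximately equal to the mean
```

    Adding a second, correlated random effect on `eta` (e.g. drawn with a kinship-structured covariance) is what lets the full model absorb sample non-independence as well.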

  18. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    PubMed

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
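    A minimal Euler-Maruyama sketch of the kind of model described: a one-compartment system with first-order input, inter-individual variability on the elimination rate, system noise on the state, and measurement error on top. Parameter values and names are invented for illustration; first-order elimination is used here for brevity where the paper's simulated model is nonlinear, and the paper's actual FOCE/extended-Kalman-filter estimation is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)

n_id, steps, dt = 12, 400, 0.05
ka, dose = 1.2, 100.0            # absorption rate constant, dose (illustrative)
sigma_sde, sigma_obs = 0.3, 0.5  # system noise vs measurement noise

# Source 1 of variability: inter-individual variability (random effect on ke).
ke = np.exp(rng.normal(np.log(0.3), 0.2, n_id))

a_cent = np.zeros((n_id, steps))  # amount in the central compartment
a_gut = np.full(n_id, dose)
for k in range(1, steps):
    absorbed = ka * a_gut * dt
    a_gut = a_gut - absorbed
    drift = absorbed - ke * a_cent[:, k - 1] * dt
    # Source 2: uncertainty in the dynamics themselves (the SDE term).
    noise = sigma_sde * np.sqrt(dt) * rng.normal(size=n_id)
    a_cent[:, k] = np.maximum(a_cent[:, k - 1] + drift + noise, 0.0)

# Source 3: measurement error, added on top of the state trajectories.
obs = a_cent + rng.normal(0.0, sigma_obs, a_cent.shape)
```

    Fitting an ODE-only mixed model to `obs` would lump the SDE term into the residual, which is exactly the bias mechanism (overestimated measurement-error variance) the abstract describes.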

  19. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    PubMed

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test (LRT) statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was selected as a better covariate than weight as often as 1 in 5 times, even though weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can thus be chosen as a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why, for the same drug, different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
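    The selection artifact described can be reproduced with a toy simulation: the outcome is generated from weight only, yet a noisy, highly correlated surrogate (standing in for body surface area) is sometimes selected as the better univariate predictor. All names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def pick_winner(n=50):
    wt = rng.normal(70.0, 12.0, n)
    bsa = 0.02 * wt + rng.normal(0.0, 0.06, n)  # surrogate, highly correlated with weight
    y = 0.02 * wt + rng.normal(0.0, 0.15, n)    # only weight is causal
    rss = []
    for cov in (wt, bsa):
        resid = y - np.poly1d(np.polyfit(cov, y, 1))(cov)
        rss.append(np.sum(resid ** 2))
    return "wt" if rss[0] < rss[1] else "bsa"

frac = sum(pick_winner() == "bsa" for _ in range(1000)) / 1000
print(frac)  # fraction of runs in which the non-causal surrogate is selected
```

    Because both univariate models have one slope parameter, comparing residual sums of squares here is equivalent to comparing LRT statistics or AIC, which is how chance alignment of the surrogate's noise with the outcome's noise lets it win.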

  20. Analysis of longitudinal diffusion-weighted images in healthy and pathological aging: An ADNI study.

    PubMed

    Kruggel, Frithjof; Masaki, Fumitaro; Solodkin, Ana

    2017-02-15

    The widely used framework of voxel-based morphometry for analyzing neuroimages is extended here to model longitudinal imaging data by exchanging the linear model with a linear mixed-effects model. The new approach is employed for analyzing a large longitudinal sample of 756 diffusion-weighted images acquired in 177 subjects of the Alzheimer's Disease Neuroimaging Initiative (ADNI). While sample- and group-level results from both approaches are equivalent, the mixed-effects model additionally yields information at the single-subject level. Interestingly, the relevant parameters at the individual level capture neurobiologically specific differences associated with aging. In addition, our approach highlights white matter areas that reliably discriminate between patients with Alzheimer's disease and healthy controls with a predictive power of 0.99; these include the hippocampal alveus, the para-hippocampal white matter, the white matter of the posterior cingulate, and the optic tracts. Notably, the classifier assigns a sub-population of patients with mild cognitive impairment to the pathological domain. Our classifier offers promising features for an accessible biomarker that predicts the risk of conversion to Alzheimer's disease. Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how to apply/ADNI Acknowledgement List.pdf. Significance statement: This study assesses neuro-degenerative processes in the brain's white matter as revealed by diffusion-weighted imaging, in order to discriminate healthy from pathological aging in a large sample of elderly subjects. 
The analysis of time-series examinations in a linear mixed effects model allowed the discrimination of population-based aging processes from individual determinants. We demonstrate that a simple classifier based on white matter imaging data is able to predict the conversion to Alzheimer's disease with a high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. A comprehensive guide to fuel management practices for dry mixed conifer forests in the northwestern United States: Inventory and model-based economic analysis of mechanical fuel treatments

    Treesearch

    Theresa B. Jain; Mike A. Battaglia; Han-Sup Han; Russell T. Graham; Christopher R. Keyes; Jeremy S. Fried; Jonathan E. Sandquist

    2014-01-01

    Implementing fuel treatments in every place where it could be beneficial to do so is impractical and not cost effective under any plausible specification of objectives. Only some of the many possible kinds of treatments will be effective in any particular stand and there are some stands that seem to defy effective treatment. In many more, effective treatment costs far...

  2. Water sources and mixing in riparian wetlands revealed by tracers and geospatial analysis.

    PubMed

    Lessels, Jason S; Tetzlaff, Doerthe; Birkel, Christian; Dick, Jonathan; Soulsby, Chris

    2016-01-01

    Mixing of waters within riparian zones has been identified as an important influence on runoff generation and water quality. Improved understanding of the controls on the spatial and temporal variability of water sources and how they mix in riparian zones is therefore of both fundamental and applied interest. In this study, we have combined topographic indices derived from a high-resolution Digital Elevation Model (DEM) with repeated, spatially high-resolution synoptic sampling of multiple tracers to investigate such dynamics of source water mixing. We use geostatistics to estimate concentrations of three different tracers (deuterium, alkalinity, and dissolved organic carbon) across an extended riparian zone in a headwater catchment in NE Scotland, to identify spatial and temporal influences on the mixing of source waters. The various biogeochemical tracers and stable isotopes helped constrain the sources of runoff and their temporal dynamics. Results show that spatial variability in all three tracers was evident in all sampling campaigns, but more pronounced in warmer, drier periods. The extent of mixing areas within the riparian area reflected strong hydroclimatic controls and showed large degrees of expansion and contraction that were not strongly related to topographic indices. The integrated approach of using multiple tracers, geospatial statistics, and topographic analysis allowed us to classify three main riparian source areas and mixing zones. This study underlines the importance of riparian zones for mixing soil water and groundwater and introduces a novel approach by which this mixing can be quantified and its effect on downstream chemistry assessed.

  3. VISUALIZATION-BASED ANALYSIS FOR A MIXED-INHIBITION BINARY PBPK MODEL: DETERMINATION OF INHIBITION MECHANISM

    EPA Science Inventory

    A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of the metabolic interactions occurring during simultaneous inhalation exposures to the organic solvents chloroform and trichloroethylene (TCE).

  5. Comparison of CTT and Rasch-based approaches for the analysis of longitudinal Patient Reported Outcomes.

    PubMed

    Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique

    2011-04-15

    Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most adequate strategy to analyze longitudinal latent variables, which can be either based on CTT or IRT models, remains to be identified. This strategy must take into account the latent characteristic of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches to analyze longitudinal PRO data regarding type I error, power, and time effect estimation bias. Four methods were compared: the Score and Mixed models (SM) method based on the CTT approach, the Rasch and Mixed models (RM), the Plausible Values (PV), and the Longitudinal Rasch model (LRM) methods all based on the Rasch model. All methods have shown comparable results in terms of type I error, all close to 5 per cent. LRM and SM methods presented comparable power and unbiased time effect estimations, whereas RM and PV methods showed low power and biased time effect estimations. This suggests that RM and PV methods should be avoided to analyze longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.
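    For readers unfamiliar with the Rasch model underlying three of the four compared methods: it gives the probability that person i endorses item j as logistic(theta_i - b_j), where theta is the latent trait and b_j the item difficulty. A toy simulation (illustrative values only, not the paper's simulation design) showing how the observed sum score tracks the latent variable:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate dichotomous responses under the Rasch model:
# P(X_ij = 1) = 1 / (1 + exp(-(theta_i - b_j)))
n_persons, n_items = 500, 10
theta = rng.normal(0.0, 1.0, n_persons)  # latent trait (e.g. quality of life)
b = np.linspace(-1.5, 1.5, n_items)      # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
resp = (rng.uniform(size=p.shape) < p).astype(int)

score = resp.sum(axis=1)  # the sufficient statistic for theta under Rasch
print(np.corrcoef(score, theta)[0, 1])  # sum score tracks the latent trait
```

    The sufficiency of the sum score is what makes CTT-style score analyses and Rasch-based analyses comparable at all; the methods differ in how they propagate the measurement uncertainty into the longitudinal model.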

  6. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. 
Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and goodness-of-fit (GOF) values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end-member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
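As a rough illustration of the unmixing step such fingerprinting models perform, the sketch below recovers source proportions from tracer concentrations by constrained least squares, with proportions bounded to [0, 1] and summing to one. The tracer values are invented for illustration; this is not the study's model or data.

```python
import numpy as np
from scipy.optimize import minimize

# Rows: tracer properties; columns: source soils (mean concentrations).
# Values are hypothetical stand-ins, not measured geochemistry.
sources = np.array([
    [120.0, 80.0, 45.0],   # e.g. Sr
    [60.0,  95.0, 30.0],   # e.g. Rb
    [3.1,   1.8,  4.2],    # e.g. Fe
])
mixture = sources @ np.array([0.5, 0.3, 0.2])  # synthetic laboratory mixture

def unmix(sources, mixture):
    """Estimate source proportions by constrained least squares."""
    n = sources.shape[1]
    objective = lambda p: np.sum((sources @ p - mixture) ** 2)
    constraints = {"type": "eq", "fun": lambda p: p.sum() - 1.0}
    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return res.x

print(unmix(sources, mixture))  # recovers approximately [0.5, 0.3, 0.2]
```

With well-discriminating tracers the known mixing proportions are recovered almost exactly, which is the kind of check the laboratory mixtures above provide.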

  7. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  8. Multiple component end-member mixing model of dilution: hydrochemical effects of construction water at Yucca Mountain, Nevada, USA

    NASA Astrophysics Data System (ADS)

    Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2008-12-01

The standard dual-component, two-end-member linear mixing model is often used to quantify mixing of waters from different sources. However, it is no longer applicable when the actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which are therefore diluted in the leachates. A multicomponent, two-end-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on the fracture-matrix interaction. The methodology and formulations described here are applicable to many sorts of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.
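A toy version of a two-end-member mixing model with an unknown dilution factor can make the idea concrete. The multiplicative form d * (f*c1 + (1-f)*c2) and the end-member concentrations below are illustrative assumptions, not the paper's formulation: the mixing fraction f and dilution factor d are fit jointly by least squares over several chemical components.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical end-member concentrations for three chemical components.
c1 = np.array([100.0, 20.0, 5.0])   # end member 1 (e.g. construction water)
c2 = np.array([10.0,  60.0, 2.0])   # end member 2 (e.g. pore water)
observed = 0.25 * (0.7 * c1 + 0.3 * c2)  # synthetic leachate: f=0.7, d=0.25

def residuals(params):
    f, d = params
    # Observed leachate assumed to be a diluted two-end-member mixture.
    return d * (f * c1 + (1 - f) * c2) - observed

fit = least_squares(residuals, x0=[0.5, 0.5], bounds=([0, 0], [1, np.inf]))
f_hat, d_hat = fit.x  # approximately f_hat = 0.7, d_hat = 0.25
```

With two unknowns and three or more linearly independent components, both the mixing fraction and the dilution factor are identifiable, which is why a multicomponent formulation succeeds where the single-tracer two-member model fails.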

  9. Effects of plasticization and shear stress on phase structure development and properties of soy protein blends.

    PubMed

    Chen, Feng; Zhang, Jinwen

    2010-11-01

In this study, soy protein concentrate (SPC) was used as a plastic component to blend with poly(butylene adipate-co-terephthalate) (PBAT). Effects of SPC plasticization and blend composition on its deformation during mixing were studied in detail. Influence of using water as the major plasticizer and glycerol as the co-plasticizer on the deformation of the SPC phase during mixing was explored. The effect of shear stress, as affected by SPC loading level, on the phase structure of SPC in the blends was also investigated. Quantitative analysis of the aspect ratio of SPC particles was conducted by using ImageJ software, and an empirical model predicting the formation of percolated structure was applied. The experimental results and the model prediction showed a fairly good agreement. The experimental results and statistical analysis suggest that both SPC loading level and its water content prior to compounding had significant influences on development of the SPC phase structure and were correlated in determining the morphological structures of the resulting blends. Consequently, physical and mechanical properties of the blends greatly depended on the phase morphology and PBAT/SPC ratio of the blends.

  10. Mixing and non-equilibrium chemical reaction in a compressible mixing layer. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Steinberger, Craig J.

    1991-01-01

The effects of compressibility, chemical reaction exothermicity, and non-equilibrium chemical modeling in a reacting plane mixing layer were investigated by means of two-dimensional direct numerical simulations. The chemical reaction was irreversible and second order, of the type A + B yields Products + Heat. The general governing fluid equations of a compressible reacting flow field were solved by means of high-order finite difference methods. Physical effects were then determined by examining the response of the mixing layer to variation of the relevant non-dimensionalized parameters. The simulations show that increased compressibility generally results in suppressed mixing, and consequently a reduced chemical reaction conversion rate. Reaction heat release was found to enhance mixing at the initial stages of the layer growth, but had a stabilizing effect at later times. The increased stability manifested itself in the suppression or delay of the formation of large coherent structures within the flow. Calculations were performed for a constant-rate chemical kinetics model and an Arrhenius-type kinetic prototype. The choice of the model was shown to have an effect on the development of the flow. The Arrhenius model caused a greater temperature increase due to reaction than the constant-rate kinetic model. This had the same effect as increasing the exothermicity of the reaction. Localized flame quenching was also observed when the Zeldovich number was relatively large.

  11. A mixed methods approach to assess animal vaccination programmes: The case of rabies control in Bamako, Mali.

    PubMed

    Mosimann, Laura; Traoré, Abdallah; Mauti, Stephanie; Léchenne, Monique; Obrist, Brigit; Véron, René; Hattendorf, Jan; Zinsstag, Jakob

    2017-01-01

In the framework of the research network on integrated control of zoonoses in Africa (ICONZ) a dog rabies mass vaccination campaign was carried out in two communes of Bamako (Mali) in September 2014. A mixed method approach, combining quantitative and qualitative tools, was developed to evaluate the effectiveness of the intervention towards optimization for future scale-up. Actions to control rabies occur on one level in households when individuals take the decision to vaccinate their dogs. However, control also depends on provision of vaccination services and community participation at the intermediate level of social resilience. Mixed methods seem necessary as the problem-driven transdisciplinary project includes epidemiological components in addition to social dynamics and cultural, political and institutional issues. Adapting earlier effectiveness models for health intervention to rabies control, we propose a mixed method assessment of individual effectiveness parameters such as availability, affordability, accessibility, adequacy and acceptability. Triangulation of quantitative methods (household survey, empirical coverage estimation and spatial analysis) with qualitative findings (participant observation, focus group discussions) facilitates a better understanding of the weight of each effectiveness determinant, and the underlying reasons embedded in the local understandings, cultural practices, and social and political realities of the setting. Using this method, a final effectiveness of 33% for commune Five and 28% for commune Six was estimated, with vaccination coverage of 27% and 20%, respectively. Availability was identified as the most sensitive effectiveness parameter, attributed to lack of information about the campaign. We propose a mixed methods approach to optimize intervention design, using an "intervention effectiveness optimization cycle" with the aim of maximizing effectiveness. 
Empirical vaccination coverage estimation is compared to the effectiveness model with its determinants. In addition, qualitative data provide an explanatory framework for deeper insight, validation and interpretation of results which should improve the intervention design while involving all stakeholders and increasing community participation. This work contributes vital information for the optimization and scale-up of future vaccination campaigns in Bamako, Mali. The proposed mixed method, although incompletely applied in this case study, should be applicable to similar rabies interventions targeting elimination in other settings. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Effect of Blockage and Location on Mixing of Swirling Coaxial Jets in a Non-expanding Circular Confinement

    NASA Astrophysics Data System (ADS)

    Patel, V. K.; Singh, S. N.; Seshadri, V.

    2013-06-01

A study is conducted to evolve an effective design concept to improve mixing in a combustor chamber to reduce the amount of intake air. The geometry used is that of a gas turbine combustor model. For simplicity, both jets have been considered as air jets, and the effects of heat release and chemical reaction have not been modeled. Various contraction shapes and blockages have been investigated by placing them downstream at different locations with respect to the inlet to obtain better mixing. A commercial CFD code, `Fluent 6.3', which is based on the finite volume method, has been used to solve the flow in the combustor model. Validation is done against the experimental data available in the literature using the standard k-ω turbulence model. The study has shown that contraction and blockage at the optimum location enhance the mixing process. Further, the effect of swirl in the jets has also been investigated.

  13. Genomic selection for slaughter age in pigs using the Cox frailty model.

    PubMed

    Santos, V S; Martins Filho, S; Resende, M D V; Azevedo, C F; Lopes, P S; Guimarães, S E F; Glória, L S; Silva, F F

    2015-10-19

    The aim of this study was to compare genomic selection methodologies using a linear mixed model and the Cox survival model. We used data from an F2 population of pigs, in which the response variable was the time in days from birth to the culling of the animal and the covariates were 238 markers [237 single nucleotide polymorphism (SNP) plus the halothane gene]. The data were corrected for fixed effects, and the accuracy of the method was determined based on the correlation of the ranks of predicted genomic breeding values (GBVs) in both models with the corrected phenotypic values. The analysis was repeated with a subset of SNP markers with largest absolute effects. The results were in agreement with the GBV prediction and the estimation of marker effects for both models for uncensored data and for normality. However, when considering censored data, the Cox model with a normal random effect (S1) was more appropriate. Since there was no agreement between the linear mixed model and the imputed data (L2) for the prediction of genomic values and the estimation of marker effects, the model S1 was considered superior as it took into account the latent variable and the censored data. Marker selection increased correlations between the ranks of predicted GBVs by the linear and Cox frailty models and the corrected phenotypic values, and 120 markers were required to increase the predictive ability for the characteristic analyzed.

  14. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.

    This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first one is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second approach is based on three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.

  15. Should measures of patient experience in primary care be adjusted for case mix? Evidence from the English General Practice Patient Survey

    PubMed Central

    Paddison, Charlotte; Elliott, Marc; Parker, Richard; Staetsky, Laura; Lyratzopoulos, Georgios; Campbell, John L

    2012-01-01

    Objectives Uncertainties exist about when and how best to adjust performance measures for case mix. Our aims are to quantify the impact of case-mix adjustment on practice-level scores in a national survey of patient experience, to identify why and when it may be useful to adjust for case mix, and to discuss unresolved policy issues regarding the use of case-mix adjustment in performance measurement in health care. Design/setting Secondary analysis of the 2009 English General Practice Patient Survey. Responses from 2 163 456 patients registered with 8267 primary care practices. Linear mixed effects models were used with practice included as a random effect and five case-mix variables (gender, age, race/ethnicity, deprivation, and self-reported health) as fixed effects. Main outcome measures Primary outcome was the impact of case-mix adjustment on practice-level means (adjusted minus unadjusted) and changes in practice percentile ranks for questions measuring patient experience in three domains of primary care: access; interpersonal care; anticipatory care planning, and overall satisfaction with primary care services. Results Depending on the survey measure selected, case-mix adjustment changed the rank of between 0.4% and 29.8% of practices by more than 10 percentile points. Adjusting for case-mix resulted in large increases in score for a small number of practices and small decreases in score for a larger number of practices. Practices with younger patients, more ethnic minority patients and patients living in more socio-economically deprived areas were more likely to gain from case-mix adjustment. Age and race/ethnicity were the most influential adjustors. 
Conclusions While its effect is modest for most practices, case-mix adjustment corrects significant underestimation of scores for a small proportion of practices serving vulnerable patients and may reduce the risk that providers would ‘cream-skim’ by not enrolling patients from vulnerable socio-demographic groups. PMID:22626735

  16. Should measures of patient experience in primary care be adjusted for case mix? Evidence from the English General Practice Patient Survey.

    PubMed

    Paddison, Charlotte; Elliott, Marc; Parker, Richard; Staetsky, Laura; Lyratzopoulos, Georgios; Campbell, John L; Roland, Martin

    2012-08-01

    Uncertainties exist about when and how best to adjust performance measures for case mix. Our aims are to quantify the impact of case-mix adjustment on practice-level scores in a national survey of patient experience, to identify why and when it may be useful to adjust for case mix, and to discuss unresolved policy issues regarding the use of case-mix adjustment in performance measurement in health care. Secondary analysis of the 2009 English General Practice Patient Survey. Responses from 2 163 456 patients registered with 8267 primary care practices. Linear mixed effects models were used with practice included as a random effect and five case-mix variables (gender, age, race/ethnicity, deprivation, and self-reported health) as fixed effects. Primary outcome was the impact of case-mix adjustment on practice-level means (adjusted minus unadjusted) and changes in practice percentile ranks for questions measuring patient experience in three domains of primary care: access; interpersonal care; anticipatory care planning, and overall satisfaction with primary care services. Depending on the survey measure selected, case-mix adjustment changed the rank of between 0.4% and 29.8% of practices by more than 10 percentile points. Adjusting for case-mix resulted in large increases in score for a small number of practices and small decreases in score for a larger number of practices. Practices with younger patients, more ethnic minority patients and patients living in more socio-economically deprived areas were more likely to gain from case-mix adjustment. Age and race/ethnicity were the most influential adjustors. While its effect is modest for most practices, case-mix adjustment corrects significant underestimation of scores for a small proportion of practices serving vulnerable patients and may reduce the risk that providers would 'cream-skim' by not enrolling patients from vulnerable socio-demographic groups.
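The adjustment strategy described in this record can be sketched with statsmodels' MixedLM: patient-level scores regressed on case-mix variables as fixed effects, with a random intercept per practice. The variable names, effect sizes, and simulated data below are assumptions for illustration, not the GPPS data or the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "practice": rng.integers(0, 50, n),       # 50 hypothetical practices
    "age": rng.integers(18, 90, n),
    "deprivation": rng.normal(0, 1, n),       # stand-in case-mix variable
})
practice_effect = rng.normal(0, 2, 50)        # true practice-level quality
df["score"] = (70 + 0.1 * df["age"] - 1.5 * df["deprivation"]
               + practice_effect[df["practice"].to_numpy()]
               + rng.normal(0, 5, n))

# Random intercept per practice; case-mix variables as fixed effects.
model = smf.mixedlm("score ~ age + deprivation", df, groups=df["practice"])
result = model.fit()

# Case-mix-adjusted practice effects: BLUPs of the practice random
# intercepts, i.e. practice means after removing the fixed case-mix part.
adjusted = {g: float(re.iloc[0]) for g, re in result.random_effects.items()}
```

Comparing these adjusted effects with unadjusted raw practice means shows the pattern the paper reports: practices whose patients are younger or more deprived gain under adjustment, because part of their low raw score is attributed to case mix rather than to the practice.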

  17. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

    PubMed Central

    Lindsay, Grace W.

    2017-01-01

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463

  18. On the validity of effective formulations for transport through heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    de Dreuzy, J.-R.; Carrera, J.

    2015-11-01

Geological heterogeneity enhances spreading of solutes, and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through Heterogeneous Porous Media (HPM), it should honor mean advection, mixing and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the Multi-Rate Mass Transfer (MRMT) to reproduce mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not reproduce the local concentration structures, it induces smaller non-dispersive mixing than HPM. However, long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, non-dispersive mixing cannot.

  19. Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.

    PubMed

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel

    2016-02-20

Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too low Hb value leads to a deferral for donation. Because of the recovery process after each donation as well as state dependence and unobserved heterogeneity, longitudinal data of Hb values of blood donors provide unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects model; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data of new entrant donors from the Donor InSight study. Informative priors were used for parameters of the recovery process that were not identified using the observed data, based on results from the clinical literature. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Modeling of molecular diffusion and thermal conduction with multi-particle interaction in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Tai, Y.; Watanabe, T.; Nagata, K.

    2018-03-01

    A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated for passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale because the mixing time scale at small scales is easily affected by different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in the modeling of the thermal conduction when the modeling of the dissipation rate of temperature fluctuations is difficult.

  1. A 1H NMR-based metabolomics approach to evaluate the geographical authenticity of herbal medicine and its application in building a model effectively assessing the mixing proportion of intentional admixtures: A case study of Panax ginseng: Metabolomics for the authenticity of herbal medicine.

    PubMed

    Nguyen, Huy Truong; Lee, Dong-Kyu; Choi, Young-Geun; Min, Jung-Eun; Yoon, Sang Jun; Yu, Yun-Hyun; Lim, Johan; Lee, Jeongmi; Kwon, Sung Won; Park, Jeong Hill

    2016-05-30

Ginseng, the root of Panax ginseng, has long been the subject of adulteration, especially regarding its origins. Here, 60 ginseng samples from Korea and China initially displayed similar genetic makeup when investigated by a DNA-based technique with 23 chloroplast intergenic space regions. Hence, ¹H NMR-based metabolomics with orthogonal projections to latent structures discriminant analysis (OPLS-DA) was applied and successfully distinguished between samples from the two countries using seven primary metabolites as discrimination markers. Furthermore, to recreate adulteration in reality, 21 mixed samples of numerous Korea/China ratios were tested with the newly built OPLS-DA model. The results showed satisfactory separation according to the proportion of mixing. Finally, a procedure for assessing the mixing proportion of intentionally blended samples that achieved good predictability (adjusted R² = 0.8343) was constructed, thus verifying its promising application to quality control of herbal foods by pointing out the possible mixing ratio of falsified samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Analysis of lithology/vegetation mixes in multispectral images

    NASA Technical Reports Server (NTRS)

    Adams, J. B.; Smith, M.; Adams, J. D.

    1982-01-01

    Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
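The simple linear ("checkerboard") mix mentioned above can be written in a few lines: the spectrum of a mixed pixel is the area-weighted sum of the end-member spectra, and inverting that relation recovers the vegetation fraction so its contribution can be removed. The end-member reflectance values here are invented for illustration.

```python
import numpy as np

# Hypothetical end-member reflectance spectra (one value per band).
rock = np.array([0.30, 0.35, 0.40, 0.45])
vegetation = np.array([0.05, 0.08, 0.45, 0.50])

def linear_mix(fractions, endmembers):
    """Predicted pixel spectrum for areal fractions of each end member."""
    return np.asarray(fractions) @ np.asarray(endmembers)

pixel = linear_mix([0.6, 0.4], [rock, vegetation])

# Invert the linear model by least squares to recover the vegetation
# fraction, enabling removal of the vegetation component before
# rock/soil identification.
A = np.column_stack([rock, vegetation])
f_rock, f_veg = np.linalg.lstsq(A, pixel, rcond=None)[0]
```

Granular mixing and semi-transparent coatings are nonlinear, which is why the paper refines such linear predictions against laboratory measurements of real mixtures.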

  3. Software engineering the mixed model for genome-wide association studies on large samples.

    PubMed

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
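The core mixed-model association test that such packages implement can be caricatured in a few lines of linear algebra: y = X*beta + u + e with u ~ N(0, sigma_g^2 * K) for a kinship matrix K. The sketch below treats the variance components as known and uses an identity placeholder for K; real packages estimate the components (e.g. by REML) from genuine kinship estimates, and all data here are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
marker = rng.integers(0, 3, n).astype(float)   # genotypes coded 0/1/2
K = np.eye(n)                                  # placeholder kinship matrix
V = 1.0 * K + 1.0 * np.eye(n)                  # sigma_g^2 * K + sigma_e^2 * I
y = 0.5 * marker + rng.multivariate_normal(np.zeros(n), V)

# Generalized least squares for the marker effect, accounting for the
# covariance structure V induced by relatedness.
X = np.column_stack([np.ones(n), marker])
Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
se = np.sqrt(np.linalg.inv(X.T @ Vinv @ X)[1, 1])
z = beta[1] / se                               # Wald statistic for the marker
```

The computational burden the abstract refers to comes largely from the O(n^3) operations on V for every variance-component update, which is what efficient implementations work hard to avoid.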

  4. Regression analysis using dependent Polya trees.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J

    2013-11-30

    Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows cast by buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, which results in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on the shadow-removed images to map vegetation carbon density. The methodology was examined in Shenzhen City, Southeast China. A data set of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from the LSUA was then added to the models as an important independent variable. The resulting estimates were evaluated by cross-validation. Our results showed that the integrated models achieved higher accuracies than traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
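
    The spectral unmixing step can be illustrated with a toy example: fully constrained linear unmixing solves for per-pixel endmember fractions that are consistent with the observed spectrum and sum to one. All spectra and fractions below are invented for illustration; they are not the paper's Landsat 8 endmembers.

```python
import numpy as np

# Toy endmember spectra: rows are bands, columns are endmembers
# (e.g., vegetation, soil, shadow -- hypothetical values).
E = np.array([[0.10, 0.60, 0.30],
              [0.20, 0.55, 0.25],
              [0.60, 0.20, 0.40],
              [0.70, 0.15, 0.50],
              [0.30, 0.10, 0.80]])
f_true = np.array([0.5, 0.3, 0.2])          # true endmember fractions
pixel = E @ f_true + 0.005 * np.random.default_rng(1).normal(size=5)

# Sum-to-one constrained least squares via a heavily weighted extra row.
w = 1e3
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w * 1.0)
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f_hat)                                 # estimated fractions, ~f_true
```

The recovered vegetation fraction is what the paper feeds into its regression and kNN models as an additional predictor.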

  6. The Mixed Effects Trend Vector Model

    ERIC Educational Resources Information Center

    de Rooij, Mark; Schouteden, Martijn

    2012-01-01

    Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…

  7. Development of stable isotope mixing models in ecology - Dublin

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  8. Historical development of stable isotope mixing models in ecology

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  9. Development of stable isotope mixing models in ecology - Perth

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  10. Development of stable isotope mixing models in ecology - Fremantle

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  11. Development of stable isotope mixing models in ecology - Sydney

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  12. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
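
    The two-step ("inference functions for margins") idea behind such copula fitting can be sketched with a Gaussian copula and normal marginals: fit the marginals first, transform the data to uniforms, and then estimate the copula dependence from the normal scores. The diameter/height numbers below are simulated, not the Lithuanian Scots pine data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, rho_true = 2000, 0.7

# Simulate correlated (diameter, height)-like data with normal marginals.
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], n)
d = 30 + 8 * z[:, 0]                 # "diameter" marginal (cm, invented)
h = 22 + 5 * z[:, 1]                 # "height" marginal (m, invented)

# Step 1: fit the marginals. Step 2: estimate the copula correlation
# from the normal scores of the probability-integral transforms.
u = stats.norm.cdf(d, d.mean(), d.std())
v = stats.norm.cdf(h, h.mean(), h.std())
zu, zv = stats.norm.ppf(u), stats.norm.ppf(v)
rho_hat = np.corrcoef(zu, zv)[0, 1]
print(rho_hat)                       # recovers the dependence parameter
```

In the paper the marginals come from the mixed-effects SDE transition densities rather than plain normals, but the two-step structure is the same.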

  13. Analysis of categorical moderators in mixed-effects meta-analysis: Consequences of using pooled versus separate estimates of the residual between-studies variances.

    PubMed

    Rubio-Aparicio, María; Sánchez-Meca, Julio; López-López, José Antonio; Botella, Juan; Marín-Martínez, Fulgencio

    2017-11-01

    Subgroup analyses allow us to examine the influence of a categorical moderator on the effect size in meta-analysis. We conducted a simulation study using a dichotomous moderator, and compared the impact of pooled versus separate estimates of the residual between-studies variance on the statistical performance of the Q_B(P) and Q_B(S) tests for subgroup analyses assuming a mixed-effects model. Our results suggested that similar performance can be expected as long as there are at least 20 studies and these are approximately balanced across categories. Conversely, when subgroups were unbalanced, the practical consequences of having heterogeneous residual between-studies variances were more evident, with both tests leading to the wrong statistical conclusion more often than in the conditions with balanced subgroups. A pooled estimate should be preferred for most scenarios, unless the residual between-studies variances are clearly different and there are enough studies in each category to obtain precise separate estimates. © 2017 The British Psychological Society.
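
    The quantity under study can be sketched directly: under a mixed-effects subgroup analysis, effect sizes are weighted by 1/(v_i + τ²) and the between-subgroups statistic Q_B compares the weighted subgroup means with the overall mean against a χ² reference. The sketch below uses a simplified pooled τ² (the average of the two subgroup DerSimonian-Laird estimates) and invented parameter values; it is a schematic of one simulation replicate, not the paper's full design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# One simulated meta-analysis: 30 studies, dichotomous moderator with no
# true subgroup difference, residual between-studies variance tau2 = 0.05.
k, tau2 = 30, 0.05
group = np.repeat([0, 1], k // 2)
v = rng.uniform(0.01, 0.1, k)                    # within-study variances
y = 0.3 + rng.normal(0, np.sqrt(tau2 + v))       # observed effect sizes

def dl_tau2(yj, vj):
    """DerSimonian-Laird estimate of the between-studies variance."""
    w = 1 / vj
    q = np.sum(w * (yj - np.sum(w * yj) / w.sum()) ** 2)
    c = w.sum() - np.sum(w ** 2) / w.sum()
    return max(0.0, (q - (len(yj) - 1)) / c)

# Simplified "pooled" residual variance: average of the subgroup estimates.
tau2_pooled = np.mean([dl_tau2(y[group == gr], v[group == gr]) for gr in (0, 1)])

w = 1 / (v + tau2_pooled)
mu_g = np.array([np.sum(w[group == gr] * y[group == gr]) / w[group == gr].sum()
                 for gr in (0, 1)])
mu = np.sum(w * y) / w.sum()
QB = sum(w[group == gr].sum() * (mu_g[gr] - mu) ** 2 for gr in (0, 1))
p = stats.chi2.sf(QB, df=1)                      # df = number of subgroups - 1
print(QB, p)
```

Repeating this over many replicates, and swapping the pooled τ² for separate subgroup estimates, is what generates the Type I error comparisons the abstract describes.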

  14. Discovering human germ cell mutagens with whole genome sequencing: Insights from power calculations reveal the importance of controlling for between-family variability.

    PubMed

    Webster, R J; Williams, A; Marchetti, F; Yauk, C L

    2018-07-01

    Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and a mixed-effects model sampling between two and four siblings per family). Assumptions were based on parameters from the existing literature, such as the mutation-by-paternal-age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal-age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4-28 four-sibling families per treatment group, when the increase in mutations ranges from 40% down to 10%, respectively. Modeling family variability using mixed-effects models provided a reduction in sample size compared to a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.

  15. Accuracy Enhancement of Raman Spectroscopy Using Complementary Laser-Induced Breakdown Spectroscopy (LIBS) with Geologically Mixed Samples.

    PubMed

    Choi, Soojin; Kim, Dongyoung; Yang, Junho; Yoh, Jack J

    2017-04-01

    Quantitative Raman analysis was carried out with geologically mixed samples that have various matrices. In order to compensate for the matrix effect in the Raman shift, laser-induced breakdown spectroscopy (LIBS) analysis was performed. Raman spectroscopy revealed the geological materials contained in the mixed samples. However, the analysis of a mixture containing different matrices was inaccurate due to the weak signal of the Raman shift, interference, and the strong matrix effect. On the other hand, the LIBS quantitative analysis of atomic carbon and calcium in mixed samples showed high accuracy. In the case of the calcite and gypsum mixture, the coefficient of determination for atomic carbon using LIBS was 0.99, while that obtained using Raman was less than 0.9. Therefore, the geological composition of the mixed samples is first obtained using Raman, and the LIBS-based quantitative analysis is then applied to the Raman outcome in order to construct highly accurate univariate calibration curves. The study also focuses on a method to overcome matrix effects through the two complementary spectroscopic techniques of Raman spectroscopy and LIBS.

  16. Minimum number of clusters and comparison of analysis methods for cross sectional stepped wedge cluster randomised trials with binary outcomes: A simulation study.

    PubMed

    Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick

    2017-03-09

    Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment for a time trend is made, which by design may be confounding the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
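
    The design being simulated can be made concrete: in a cross-sectional stepped wedge trial every cluster starts in the control condition and crosses over to the intervention at its assigned step, so the cluster-by-period treatment indicator has a staircase shape. A minimal sketch (the helper name is ours), using the paper's recommendation of at least two clusters randomized at each step in a three-step design:

```python
import numpy as np

def stepped_wedge(clusters_per_step, steps):
    """Cluster-by-period treatment indicator for a cross-sectional
    stepped wedge design: rows are clusters, columns are measurement
    periods (one baseline period plus one period per step)."""
    periods = steps + 1
    rows = []
    for step in range(steps):
        for _ in range(clusters_per_step):
            row = np.zeros(periods, dtype=int)
            row[step + 1:] = 1      # cluster crosses over after its step
            rows.append(row)
    return np.array(rows)

X = stepped_wedge(clusters_per_step=2, steps=3)
print(X)
# Every cluster is untreated in the first period and treated in the last,
# which is why a period (time) effect must be adjusted for in the model.
```

Simulated binary outcomes would then be generated per cluster-period cell, with a cluster random effect and a time trend, and analysed with a generalised linear mixed model as the paper recommends.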

  17. Mixed Beam Murine Harderian Gland Tumorigenesis: Predicted Dose-Effect Relationships if neither Synergism nor Antagonism Occurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranart, Nopphon; Blakely, Eleanor A.; Cheng, Alden

    Complex mixed radiation fields exist in interplanetary space, and not much is known about their latent effects on space travelers. In silico synergy analysis default predictions are useful when planning relevant mixed-ion-beam experiments and interpreting their results. These predictions are based on individual dose-effect relationships (IDER) for each component of the mixed-ion beam, assuming no synergy or antagonism. For example, a default hypothesis of simple effect additivity has often been used throughout biology. However, for more than a century pharmacologists interested in mixtures of therapeutic drugs have analyzed conceptual, mathematical and practical questions similar to those that arise when analyzing mixed radiation fields, and have shown that simple effect additivity often gives unreasonable predictions when the IDER are curvilinear. Various alternatives to simple effect additivity proposed in radiobiology, pharmacometrics, toxicology and other fields are also known to have important limitations. In this work, we analyze upcoming murine Harderian gland (HG) tumor prevalence mixed-beam experiments, using customized open-source software and published IDER from past single-ion experiments. The upcoming experiments will use acute irradiation and the mixed beam will include components of high atomic number and energy (HZE). We introduce a new alternative to simple effect additivity, "incremental effect additivity", which is more suitable for the HG analysis and perhaps for other end points. We use incremental effect additivity to calculate default predictions for mixture dose-effect relationships, including 95% confidence intervals. We have drawn three main conclusions from this work. 1. It is important to supplement mixed-beam experiments with single-ion experiments, with matching end point(s), shielding and dose timing. 2. For HG tumorigenesis due to a mixed beam, simple effect additivity and incremental effect additivity sometimes give default predictions that are numerically close. However, if nontargeted effects are important and the mixed beam includes a number of different HZE components, simple effect additivity becomes unusable and another method is needed, such as incremental effect additivity. 3. Eventually, synergy analysis default predictions of the effects of mixed radiation fields will be replaced by more mechanistic, biophysically based predictions. However, optimizing synergy analyses is an important first step. If mixed-beam experiments indicate little synergy or antagonism, plans by NASA for further experiments and possible missions beyond low earth orbit will be substantially simplified.
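
    The idea behind incremental effect additivity can be sketched numerically. For component IDERs E_j(d), the mixture effect E(d) solves dE/dd = Σ_j r_j E_j'(E_j⁻¹(E)), where r_j is the fraction of the total dose delivered by component j: each increment of mixture dose raises the effect at the slope each component would have at the dose where it alone produces the current effect. The linear-quadratic IDERs and parameter values below are hypothetical, chosen only to show that this default prediction differs from simple effect additivity when the IDER are curvilinear.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linear-quadratic IDERs for two beam components:
#   E_j(d) = alpha_j * d + beta_j * d**2   (illustrative parameters only)
alpha = np.array([0.8, 0.3])
beta = np.array([0.5, 1.2])
r = np.array([0.6, 0.4])          # fraction of total dose from each ion

def slope_at_effect(j, E):
    """E_j'(E_j^{-1}(E)): slope of component j at the dose where it
    alone would produce effect E; for LQ curves this is sqrt(a^2+4bE)."""
    d = (-alpha[j] + np.sqrt(alpha[j] ** 2 + 4 * beta[j] * E)) / (2 * beta[j])
    return alpha[j] + 2 * beta[j] * d

def rhs(d, E):
    # incremental effect additivity ODE: dE/dd = sum_j r_j E_j'(E_j^-1(E))
    return [sum(r[j] * slope_at_effect(j, E[0]) for j in range(2))]

d_grid = np.linspace(0, 2, 21)
sol = solve_ivp(rhs, [0, 2], [0.0], t_eval=d_grid, rtol=1e-8)
E_mix = sol.y[0]

# Simple effect additivity for comparison: sum of the effects each
# component produces at its own share r_j * d of the total dose.
simple = sum(alpha[j] * (r[j] * d_grid) + beta[j] * (r[j] * d_grid) ** 2
             for j in range(2))
print(E_mix[-1], simple[-1])      # the two default predictions differ
```

For these convex IDERs the incremental-additivity curve lies above the simple-additivity curve at nonzero dose, which is the kind of divergence the abstract says makes simple effect additivity unusable for multi-component HZE beams.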

  18. Mixed-treatment comparison of anabolic (teriparatide and PTH 1-84) therapies in women with severe osteoporosis.

    PubMed

    Migliore, A; Broccoli, S; Massafra, U; Bizzi, E; Frediani, B

    2012-03-01

    The recent development of compounds with anabolic action on bone has increased the range of therapeutic options for the treatment of osteoporosis and the prevention of fractures. Two major PTH analogs, the synthetic full-length 1-84 PTH molecule and the recombinant 1-34 N-terminal fragment (teriparatide), are available for the treatment of osteoporosis in many countries. There have been no comparative trials on the bone anabolic effects of these compounds. In this study we applied a mixed treatment comparison (MTC) to compare the efficacy of teriparatide versus PTH 1-84 for the prevention of vertebral and non-vertebral fractures in women with severe osteoporosis. With this approach the relative treatment effect of one intervention over another can be obtained in the absence of head-to-head comparison. Among the candidate papers selected for analysis, two randomized controlled trials investigating the effects of teriparatide and PTH 1-84 met the selection criteria and underwent MTC analysis. Based on a fixed-effect MTC model analysis of data from two RCTs, teriparatide (20 µg/day) showed a 70% and 94% probability of being the best treatment for the prevention of vertebral and non-vertebral fractures, respectively. Together with a lack of statistical significance, this study has additional limitations. Some differences in trial procedures and populations exist; another limitation concerns the impossibility of carrying out a random-effects model MTC, due to the small number of trials. Furthermore, in order to account for unknown or unmeasured differences in covariates across trials, a random-effects approach would be preferred in order to assess the presence of heterogeneity across comparisons. In contrast, our analysis used a fixed-effect MTC model only. Teriparatide is expected to provide greater efficacy than PTH 1-84 for both vertebral and non-vertebral fracture prevention in postmenopausal women with severe osteoporosis.

  19. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of sample surface roughness on the mathematical model of NIR quantitative analysis of wood density was studied. The experiments showed that when the roughness of the prediction samples was consistent with that of the calibration samples, the results were good; otherwise, the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness. The prediction ability of the roughness-mixed model was much better than that of the single-roughness model.

  20. Numerical analysis of mixing by sharp-edge-based acoustofluidic micromixer

    NASA Astrophysics Data System (ADS)

    Nama, Nitesh; Huang, Po-Hsun; Jun Huang, Tony; Costanzo, Francesco

    2015-11-01

    Recently, acoustically oscillated sharp-edges have been employed to realize rapid and homogeneous mixing at microscales (Huang, Lab on a Chip, 13, 2013). Here, we present a numerical model, qualitatively validated by experimental results, to analyze the acoustic mixing inside a sharp-edge-based micromixer. We extend our previous numerical model (Nama, Lab on a Chip, 14, 2014) to combine the Generalized Lagrangian Mean (GLM) theory with the convection-diffusion equation, while also allowing for the presence of a background flow as observed in a typical sharp-edge-based micromixer. We employ a perturbation approach to divide the flow variables into zeroth-, first- and second-order fields which are successively solved to obtain the Lagrangian mean velocity. The Lagrangian mean velocity and the background flow velocity are further employed with the convection-diffusion equation to obtain the concentration profile. We characterize the effects of various operational and geometrical parameters to suggest potential design changes for improving the mixing performance of the sharp-edge-based micromixer. Lastly, we investigate the possibility of generation of a spatio-temporally controllable concentration gradient by placing sharp-edge structures inside the microchannel.

  1. Heterogeneous reactions in aircraft gas turbine engines

    NASA Astrophysics Data System (ADS)

    Brown, R. C.; Miake-Lye, R. C.; Lukachko, S. P.; Waitz, I. A.

    2002-05-01

    One-dimensional flow models and unity probability heterogeneous rate parameters are used to estimate the maximum effect of heterogeneous reactions on trace species evolution in aircraft gas turbines. The analysis includes reactions on soot particulates and turbine/nozzle material surfaces. Results for a representative advanced subsonic engine indicate the net change in reactant mixing ratios due to heterogeneous reactions is <10^-6 for O2, CO2, and H2O, and <10^-10 for minor combustion products such as SO2 and NO2. The change in the mixing ratios relative to the initial values is <0.01%. Since these estimates are based on heterogeneous reaction probabilities of unity, the actual changes will be even lower. Thus, heterogeneous chemistry within the engine cannot explain the high conversion of SO2 to SO3 which some wake models require to explain the observed levels of volatile aerosols. Furthermore, turbine heterogeneous processes will not affect exhaust NOx or NOy levels.

  2. Analysis of cigarette purchase task instrument data with a left-censored mixed effects model.

    PubMed

    Liao, Wenjie; Luo, Xianghua; Le, Chap T; Chu, Haitao; Epstein, Leonard H; Yu, Jihnhee; Ahluwalia, Jasjit S; Thomas, Janet L

    2013-04-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. Although a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, conventional statistical methods that simply ignore the extra zeros or missing values in the data, or replace them with arbitrarily small consumption values (for example, 0.001), may not be adequate. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed.
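
    The left-censoring idea can be sketched as follows. This toy version fits a Tobit-type likelihood by maximum likelihood: observed responses contribute a normal density term, while values at or below the censoring point contribute a normal CDF term instead of being replaced by an arbitrary small value. For brevity it omits the random effects that the paper's mixed model adds, and all data and parameter values are simulated.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(11)
n = 500

# Latent "log consumption" declines with a price-like covariate; values at
# or below the detection point c are only known to be censored.
x = rng.uniform(-2, 2, n)
y_star = 1.0 - 1.5 * x + rng.normal(0, 1.0, n)
c = 0.0
y = np.maximum(y_star, c)
cens = y_star <= c

def negloglik(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)                       # keep sigma positive
    mu = b0 + b1 * x
    ll_obs = stats.norm.logpdf(y[~cens], mu[~cens], s)    # observed part
    ll_cen = stats.norm.logcdf((c - mu[cens]) / s)        # censored part
    return -(ll_obs.sum() + ll_cen.sum())

res = optimize.minimize(negloglik, x0=[0.0, 0.0, 0.0],
                        method="Nelder-Mead", options={"maxiter": 2000})
b0_hat, b1_hat, s_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(b0_hat, b1_hat, s_hat)                # recovers (1.0, -1.5, 1.0)
```

Adding a subject-level random intercept to `mu`, and integrating it out of the likelihood, yields the left-censored mixed effects model the abstract describes.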

  3. Analysis of Cigarette Purchase Task Instrument Data with a Left-Censored Mixed Effects Model

    PubMed Central

    Liao, Wenjie; Luo, Xianghua; Le, Chap; Chu, Haitao; Epstein, Leonard H.; Yu, Jihnhee; Ahluwalia, Jasjit S.; Thomas, Janet L.

    2015-01-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. While a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug’s RRE, conventional statistical methods that simply ignore the extra zeros or missing values in the data, or replace them with arbitrarily small consumption values (e.g. 0.001), may not be adequate. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method and future directions of research are also discussed. PMID:23356731

  4. A Priori Analysis of Subgrid-Scale Models for Large Eddy Simulations of Supercritical Binary-Species Mixing Layers

    NASA Technical Reports Server (NTRS)

    Okong'o, Nora; Bellan, Josette

    2005-01-01

    Models for large eddy simulation (LES) are assessed on a database obtained from direct numerical simulations (DNS) of supercritical binary-species temporal mixing layers. The analysis is performed at the DNS transitional states for heptane/nitrogen, oxygen/hydrogen and oxygen/helium mixing layers. The incorporation of simplifying assumptions that are validated on the DNS database leads to a set of LES equations that requires only models for the subgrid scale (SGS) fluxes, which arise from filtering the convective terms in the DNS equations. Constant-coefficient versions of three different models for the SGS fluxes are assessed and calibrated. The Smagorinsky SGS-flux model shows poor correlations with the SGS fluxes, while the Gradient and Similarity models have high correlations, as well as good quantitative agreement with the SGS fluxes when the calibrated coefficients are used.
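
    The a priori testing procedure can be illustrated in one dimension: filter a resolved field, compute the exact subgrid-scale (SGS) stress from its definition, evaluate a candidate model from the filtered field alone, and correlate the two. The sketch below uses a synthetic smooth periodic field and the 1D top-hat form of the Gradient model, τ ≈ (Δ²/12)(dū/dx)²; it is a schematic of the methodology, not the paper's supercritical mixing-layer DNS database.

```python
import numpy as np

# 1D stand-in for a DNS field: a smooth superposition of low modes.
N = 1024
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
rng = np.random.default_rng(5)
u = sum(rng.normal() * np.sin(k * x + rng.uniform(0, 2 * np.pi))
        for k in range(1, 9))

def box_filter(f, w):
    """Top-hat filter of width w grid points (periodic, via FFT)."""
    kern = np.ones(w) / w
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kern, N)))

w = 16
dx = x[1] - x[0]
delta = w * dx
u_bar = box_filter(u, w)
tau_exact = box_filter(u * u, w) - u_bar ** 2      # exact SGS "flux"
dudx_bar = np.gradient(u_bar, dx)
tau_grad = (delta ** 2 / 12) * dudx_bar ** 2       # Gradient model

corr = np.corrcoef(tau_exact, tau_grad)[0, 1]
print(corr)          # high a priori correlation for this smooth field
```

The paper's finding is this comparison carried out in 3D on filtered DNS data: the Gradient and Similarity models correlate well with the exact SGS fluxes, while the Smagorinsky model does not.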

  5. Effects of mixing states on the multiple-scattering properties of soot aerosols.

    PubMed

    Cheng, Tianhai; Wu, Yu; Gu, Xingfa; Chen, Hao

    2015-04-20

    The radiative properties of soot aerosols are highly sensitive to the mixing states of black carbon particles and other aerosol components. Light absorption properties are enhanced by the mixing state of soot aerosols. The quantification of mixing-state effects on the scattering properties of soot aerosols is still not completely resolved, especially for multiple-scattering properties. This study focuses on the effects of the mixing state on the multiple scattering of soot aerosols using a vector radiative transfer model. Two mixing states are studied: externally mixed and internally mixed soot aerosols. Upward radiance/polarization and hemispheric flux are studied for variable soot aerosol loadings in clear and haze scenarios. Our study showed dramatic changes in upward radiance/polarization due to the effects of the mixing state on the multiple scattering of soot aerosols. The relative difference in upward radiance due to the different mixing states can reach 16%, whereas the relative difference in upward polarization can reach 200%. The effects of the mixing state on the multiple-scattering properties of soot aerosols increase with increasing soot aerosol loading. The effects of the soot aerosol mixing state on upwelling hemispheric flux are much smaller than on upward radiance/polarization, and increase with increasing solar zenith angle. The relative difference in upwelling hemispheric flux due to the different soot aerosol mixing states can reach 18% when the solar zenith angle is 75°. The findings should improve our understanding of the effects of mixing states on the optical properties of soot aerosols and their effects on climate. The mixing mechanism of soot aerosols is of critical importance in evaluating the climate effects of soot aerosols, and should be explicitly included in radiative forcing models and aerosol remote sensing.

  6. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  7. Hawaii Ocean Mixing Experiment: Program Summary

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    It is becoming apparent that insufficient mixing occurs in the pelagic ocean to maintain the large scale thermohaline circulation. Observed mixing rates fall a factor of ten short of classical indices such as Munk's "Abyssal Recipe." The growing suspicion is that most of the mixing in the sea occurs near topography. Exciting recent observations by Polzin et al., among others, fuel this speculation. If topographic mixing is indeed important, it must be acknowledged that its geographic distribution, both laterally and vertically, is presently unknown. The vertical distribution of mixing plays a critical role in the Stommel Arons model of the ocean interior circulation. In recent numerical studies, Samelson demonstrates the extreme sensitivity of flow in the abyssal ocean to the spatial distribution of mixing. We propose to study the topographic mixing problem through an integrated program of modeling and observation. We focus on tidally forced mixing as the global energetics of this process have received (and are receiving) considerable study. Also, the well defined frequency of the forcing and the unique geometry of tidal scattering serve to focus the experiment design. The Hawaiian Ridge is selected as a study site. Strong interaction between the barotropic tide and the Ridge is known to take place. The goals of the Hawaiian Ocean Mixing Experiment (HOME) are to quantify the rate of tidal energy loss to mixing at the Ridge and to identify the mechanisms by which energy is lost and mixing generated. We are challenged to develop a sufficiently comprehensive picture that results can be generalized from Hawaii to the global ocean. To achieve these goals, investigators from five institutions have designed HOME, a program of historic data analysis, modeling and field observation. The Analysis and Modeling efforts support the design of the field experiments. 
As the program progresses, a global model of the barotropic (depth-independent) tide and two models of the baroclinic (depth-varying) tide, all validated with near-Ridge data, will be applied to reveal the mechanisms of tidal energy conversion along the Ridge and to allow spatial and temporal integration of the rate of conversion. Field experiments include a survey to identify "hot spots" of enhanced mixing and barotropic-to-baroclinic conversion, a Nearfield study identifying the dominant mechanisms responsible for topographic mixing, and a Farfield program which quantifies the barotropic energy flux convergence at the Ridge and the flux divergence associated with low-mode baroclinic wave radiation. The difference is a measure of the tidal power available for mixing at the Ridge. Field work is planned from years 2000 through 2002, with analysis and modeling efforts extending through early 2006. If successful, HOME will yield an understanding of the dominant topographic mixing processes applicable throughout the global ocean. It will advance understanding of two central problems in ocean science, the maintenance of the abyssal stratification, and the dissipation of the tides. HOME data will be used to improve the parameterization of dissipation in models which presently assimilate TOPEX-POSEIDON observations. The improved understanding of the dynamics and spatial distribution of mixing processes will benefit future long-term programs such as CLIVAR.

  8. CFD simulation of gas and non-Newtonian fluid two-phase flow in anaerobic digesters.

    PubMed

    Wu, Binxin

    2010-07-01

    This paper presents an Eulerian multiphase flow model that characterizes gas mixing in anaerobic digesters. In the model development, liquid manure is assumed to be water or a non-Newtonian fluid whose properties depend on total solids (TS) concentration. To establish the appropriate models for different TS levels, twelve turbulence models are evaluated by comparing the frictional pressure drops of gas and non-Newtonian fluid two-phase flow in a horizontal pipe obtained from computational fluid dynamics (CFD) with those from a correlation analysis. The commercial CFD software, Fluent 12.0, is employed to simulate the multiphase flow in the digesters. The simulation results in a small-sized digester are validated against experimental data from the literature. Comparison of two gas mixing designs in a medium-sized digester demonstrates that mixing intensity is insensitive to TS in confined gas mixing, whereas it decreases significantly with increasing TS in unconfined gas mixing. Moreover, comparison of three mixing methods indicates that gas mixing is more efficient than mixing by pumped circulation, while it is less efficient than mechanical mixing.

  9. Non-linear mixing effects on mass-47 CO2 clumped isotope thermometry: Patterns and implications.

    PubMed

    Defliese, William F; Lohmann, Kyger C

    2015-05-15

Mass-47 CO(2) clumped isotope thermometry requires relatively large (~20 mg) samples of carbonate minerals due to detection limits and shot noise in gas source isotope ratio mass spectrometry (IRMS). However, it is unreasonable to assume that natural geologic materials are homogeneous on the scale required for sampling. We show that sample heterogeneities can cause offsets from equilibrium Δ(47) values that are controlled solely by end member mixing and are independent of equilibrium temperatures. A numerical model was built to simulate and quantify the effects of end member mixing on Δ(47). The model was run in multiple possible configurations to produce a dataset of mixing effects. We verified that the model accurately simulated real phenomena by comparing two artificial laboratory mixtures measured using IRMS to model output. Mixing effects were found to be dependent on end member isotopic composition in δ(13)C and δ(18)O values, and independent of end member Δ(47) values. Both positive and negative offsets from equilibrium Δ(47) can occur, and the sign is dependent on the interaction between end member isotopic compositions. The overall magnitude of mixing offsets is controlled by the amount of variability within a sample; the larger the disparity between end member compositions, the larger the mixing offset. Samples varying by less than 2 ‰ in both δ(13)C and δ(18)O values have mixing offsets below current IRMS detection limits. We recommend the use of isotopic subsampling for δ(13)C and δ(18)O values to determine sample heterogeneity, and to evaluate any potential mixing effects in samples suspected of being heterogeneous. Copyright © 2015 John Wiley & Sons, Ltd.
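The nonlinearity described above can be reproduced with a small self-contained sketch — a toy version of this kind of mixing calculation, not the authors' published model. The reference-standard constants are common literature values, the mass-44 abundances of the end members are assumed equal, and Δ45 = Δ46 = 0 is assumed; all of these are simplifications.

```python
# Toy demonstration of nonlinear Delta47 mixing (assumed constants, not
# the authors' model). delta values in permil; Delta47 in permil.
R13_VPDB = 0.011180      # 13C/12C of VPDB (common literature value)
R18_VSMOW = 0.0020052    # 18O/16O of VSMOW
R17_VSMOW = 0.00038475   # 17O/16O of VSMOW
LAM = 0.528              # mass-dependent triple-oxygen exponent

def ratios(d13C, d18O):
    """Bulk isotope ratios from delta values (permil)."""
    R13 = R13_VPDB * (1 + d13C / 1000.0)
    R18 = R18_VSMOW * (1 + d18O / 1000.0)
    R17 = R17_VSMOW * (R18 / R18_VSMOW) ** LAM
    return R13, R17, R18

def stochastic(R13, R17, R18):
    """Stochastic (random-distribution) mass-45/46/47 ratios of CO2."""
    R45 = R13 + 2 * R17
    R46 = 2 * R18 + 2 * R13 * R17 + R17 ** 2
    R47 = 2 * R13 * R18 + 2 * R17 * R18 + R13 * R17 ** 2
    return R45, R46, R47

def measured(d13C, d18O, D47):
    """Actual R45..R47 of a component, assuming Delta45 = Delta46 = 0."""
    R45s, R46s, R47s = stochastic(*ratios(d13C, d18O))
    return R45s, R46s, R47s * (1 + D47 / 1000.0)

def invert(R45, R46, n_iter=50):
    """Recover bulk R13, R17, R18 from R45, R46 by fixed-point iteration."""
    K = R17_VSMOW / R18_VSMOW ** LAM
    R18 = R46 / 2.0
    for _ in range(n_iter):
        R17 = K * R18 ** LAM
        R13 = R45 - 2.0 * R17
        R18 = (R46 - 2.0 * R13 * R17 - R17 ** 2) / 2.0
    R17 = K * R18 ** LAM
    return R45 - 2.0 * R17, R17, R18

def mix_D47(f, em1, em2):
    """Delta47 of a two-end-member mixture; em = (d13C, d18O, D47).
    Equal mass-44 abundance assumed, so R45/R46/R47 mix linearly."""
    m = [f * a + (1.0 - f) * b for a, b in zip(measured(*em1), measured(*em2))]
    R13, R17, R18 = invert(m[0], m[1])
    return (m[2] / stochastic(R13, R17, R18)[2] - 1.0) * 1000.0
```

Mixing two end members with the same Δ47 but δ13C and δ18O each differing by 20‰ yields a mixture Δ47 offset from the end-member value, while identical end members mix with no offset, consistent with the abstract's findings.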

  10. The salinity effect in a mixed layer ocean model

    NASA Technical Reports Server (NTRS)

    Miller, J. R.

    1976-01-01

    A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.

  11. Three Approaches to Modeling Gene-Environment Interactions in Longitudinal Family Data: Gene-Smoking Interactions in Blood Pressure.

    PubMed

    Basson, Jacob; Sung, Yun Ju; de Las Fuentes, Lisa; Schwander, Karen L; Vazquez, Ana; Rao, Dabeeru C

    2016-01-01

    Blood pressure (BP) has been shown to be substantially heritable, yet identified genetic variants explain only a small fraction of the heritability. Gene-smoking interactions have detected novel BP loci in cross-sectional family data. Longitudinal family data are available and have additional promise to identify BP loci. However, this type of data presents unique analysis challenges. Although several methods for analyzing longitudinal family data are available, which method is the most appropriate and under what conditions has not been fully studied. Using data from three clinic visits from the Framingham Heart Study, we performed association analysis accounting for gene-smoking interactions in BP at 31,203 markers on chromosome 22. We evaluated three different modeling frameworks: generalized estimating equations (GEE), hierarchical linear modeling, and pedigree-based mixed modeling. The three models performed somewhat comparably, with multiple overlaps in the most strongly associated loci from each model. Loci with the greatest significance were more strongly supported in the longitudinal analyses than in any of the component single-visit analyses. The pedigree-based mixed model was more conservative, with less inflation in the variant main effect and greater deflation in the gene-smoking interactions. The GEE, but not the other two models, resulted in substantial inflation in the tail of the distribution when variants with minor allele frequency <1% were included in the analysis. The choice of analysis method should depend on the model and the structure and complexity of the familial and longitudinal data. © 2015 WILEY PERIODICALS, INC.

  12. Exclusive vector meson production with leading neutrons in a saturation model for the dipole amplitude in mixed space

    NASA Astrophysics Data System (ADS)

    Amaral, J. T.; Becker, V. M.

    2018-05-01

    We investigate ρ vector meson production in e p collisions at HERA with leading neutrons in the dipole formalism. The interaction of the dipole and the pion is described in a mixed-space approach, in which the dipole-pion scattering amplitude is given by the Marquet-Peschanski-Soyez saturation model, which is based on the traveling wave solutions of the nonlinear Balitsky-Kovchegov equation. We estimate the magnitude of the absorption effects and compare our results with a previous analysis of the same process in full coordinate space. In contrast with this approach, the present study leads to absorption K factors in the range of those predicted by previous theoretical studies on semi-inclusive processes.

  13. Effective Jet Properties for the Prediction of Turbulent Mixing Noise Reduction by Water Injection

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Lonergan, Michael J.

    2007-01-01

    A one-dimensional control volume formulation is developed for the determination of jet mixing noise reduction due to water injection. The analysis starts from the conservation of mass, momentum and energy for the control volume, and introduces the concept of effective jet parameters (jet temperature, jet velocity and jet Mach number). It is shown that the water to jet mass flow rate ratio is an important parameter characterizing the jet noise reduction on account of gas-to-droplet momentum and heat transfer. Two independent dimensionless invariant groups are postulated, and provide the necessary relations for the droplet size and droplet Reynolds number. Results are presented illustrating the effect of mass flow rate ratio on the jet mixing noise reduction for a range of jet Mach number and jet Reynolds number. Predictions from the model show satisfactory comparison with available test data on supersonic jets. The results suggest that significant noise reductions can be achieved at increased flow rate ratios.
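The control-volume idea can be sketched in a few lines. This is a hedged simplification, not the authors' formulation: vaporization latent heat is neglected, mixing is assumed complete, and the specific heats are nominal values.

```python
def effective_jet(m_j, u_j, T_j, m_w, u_w=0.0, T_w=300.0,
                  cp_gas=1005.0, cp_w=4186.0):
    """Effective (mixed-out) jet state from 1-D mass/momentum/energy
    balances over a control volume. Simplifications: no phase change,
    perfect mixing, nominal specific heats (J/kg-K)."""
    mu = m_w / m_j  # water-to-jet mass flow rate ratio (key parameter)
    # momentum balance: mixed-out velocity
    u_e = (m_j * u_j + m_w * u_w) / (m_j + m_w)
    # energy balance with a mass-weighted mixture specific heat
    cp_e = (m_j * cp_gas + m_w * cp_w) / (m_j + m_w)
    T_e = (m_j * cp_gas * T_j + m_w * cp_w * T_w) / ((m_j + m_w) * cp_e)
    return mu, u_e, T_e
```

Because jet mixing noise scales steeply with jet velocity (Lighthill's U^8 law for subsonic jets), even a modest mass flow rate ratio that lowers the effective velocity can produce a substantial noise reduction, which is the mechanism the abstract identifies.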

  14. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    NASA Astrophysics Data System (ADS)

    Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph; Johnson, Jennifer A.

    2017-02-01

    Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]-[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]-[Fe/H] unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]-[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.
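The role of the two critical parameters (SFE and the outflow mass-loading factor) can be illustrated with a toy one-zone model — a heavily simplified sketch in the spirit of such codes, not flexCE itself; the parameter values and instantaneous-recycling assumption are hypothetical.

```python
def one_zone(sfe=1e-9, eta=2.5, r=0.4, inflow=1.0, yld=0.015,
             t_end=12e9, dt=1e6):
    """Toy one-zone chemical evolution (time in yr, masses in Msun).
    SFR = sfe * M_gas; outflow = eta * SFR; a fraction r of stellar mass
    is promptly recycled; inflowing gas is pristine (Z = 0)."""
    m_gas, m_z, t = 0.0, 0.0, 0.0
    while t < t_end:
        sfr = sfe * m_gas
        z = m_z / m_gas if m_gas > 0 else 0.0
        m_gas += (inflow - (1 + eta - r) * sfr) * dt      # gas mass balance
        m_z += (yld * sfr - z * (1 + eta - r) * sfr) * dt  # metal mass balance
        t += dt
    return m_gas, m_z / m_gas

# Analytic equilibria: M_gas -> inflow / (sfe * (1 + eta - r))
#                      Z     -> yld / (1 + eta - r)
```

The closed-form equilibria show the behavior the abstract describes: the equilibrium abundance Z is set by the net yield divided by (1 + eta - r), so a larger outflow mass-loading parameter eta lowers the abundance the system asymptotically approaches, while SFE sets how quickly it gets there.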

  15. Inflow, Outflow, Yields, and Stellar Population Mixing in Chemical Evolution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Brett H.; Weinberg, David H.; Schönrich, Ralph

Chemical evolution models are powerful tools for interpreting stellar abundance surveys and understanding galaxy evolution. However, their predictions depend heavily on the treatment of inflow, outflow, star formation efficiency (SFE), the stellar initial mass function, the SN Ia delay time distribution, stellar yields, and stellar population mixing. Using flexCE, a flexible one-zone chemical evolution code, we investigate the effects of and trade-offs between parameters. Two critical parameters are SFE and the outflow mass-loading parameter, which shift the knee in [O/Fe]–[Fe/H] and the equilibrium abundances that the simulations asymptotically approach, respectively. One-zone models with simple star formation histories follow narrow tracks in [O/Fe]–[Fe/H] unlike the observed bimodality (separate high-α and low-α sequences) in this plane. A mix of one-zone models with inflow timescale and outflow mass-loading parameter variations, motivated by the inside-out galaxy formation scenario with radial mixing, reproduces the two sequences better than a one-zone model with two infall epochs. We present [X/Fe]–[Fe/H] tracks for 20 elements assuming three different supernova yield models and find some significant discrepancies with solar neighborhood observations, especially for elements with strongly metallicity-dependent yields. We apply principal component abundance analysis to the simulations and existing data to reveal the main correlations among abundances and quantify their contributions to variation in abundance space. For the stellar population mixing scenario, the abundances of α-elements and elements with metallicity-dependent yields dominate the first and second principal components, respectively, and collectively explain 99% of the variance in the model. flexCE is a python package available at https://github.com/bretthandrews/flexCE.

  16. Impacts of different characterizations of large-scale background on simulated regional-scale ozone over the continental United States

    NASA Astrophysics Data System (ADS)

    Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.

    2018-03-01

This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields from the global models along the CMAQ boundaries. Using boundary conditions from AM3 yielded higher springtime ozone column burdens in the middle and lower troposphere compared to boundary conditions from the other models. For surface ozone, the differences between the AM3-driven CMAQ simulations and the CMAQ simulations driven by other large-scale models are especially pronounced during spring and winter where they can reach more than 10 ppb for seasonal mean ozone mixing ratios and as much as 15 ppb for domain-averaged daily maximum 8 h average ozone on individual days. In contrast, the differences between the C-IFS-, GEOS-Chem-, and H-CMAQ-driven regional-scale CMAQ simulations are typically smaller. Comparing simulated surface ozone mixing ratios to observations and computing seasonal and regional model performance statistics revealed that boundary conditions can have a substantial impact on model performance. Further analysis showed that boundary conditions can affect model performance across the entire range of the observed distribution, although the impacts tend to be lower during summer and for the very highest observed percentiles. The results are discussed in the context of future model development and analysis opportunities.

  17. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    EPA Science Inventory

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
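The "too many sources" problem this record alludes to is visible in the linear algebra of a mixing model: n isotope tracers plus the mass-balance constraint determine at most n + 1 source fractions uniquely. A minimal sketch of the determined case, with hypothetical tracer values:

```python
import numpy as np

# Two tracers (e.g. d13C, d15N) and three sources: a square system with a
# unique solution. Source and mixture signatures below are hypothetical.
sources = np.array([[-28.0, 2.0],    # source A
                    [-20.0, 8.0],    # source B
                    [-12.0, 5.0]])   # source C
mixture = np.array([-20.0, 5.0])

# One mass-balance row per tracer, plus the constraint sum(f) = 1
A = np.vstack([sources.T, np.ones(3)])
b = np.append(mixture, 1.0)
f = np.linalg.solve(A, b)  # source contribution fractions
```

With more sources than tracers + 1 the system becomes underdetermined and a single solve no longer works; methods in the IsoSource/Bayesian family instead characterize the whole feasible set of source contributions, which is the kind of alternative this record discusses.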

  18. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
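The advocated approach — a mixed model with random subject-level effects — can be sketched on synthetic longitudinal data (illustrative parameter values, not from any study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal data: 100 subjects, 5 visits, hypothetical
# true fixed intercept 50 and slope 1.5 per visit.
rng = np.random.default_rng(42)
n_subj, n_visits = 100, 5
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits, dtype=float), n_subj)
b0 = rng.normal(0.0, 2.0, n_subj)[subj]  # random intercepts
b1 = rng.normal(0.0, 0.5, n_subj)[subj]  # random slopes
y = 50 + 1.5 * time + b0 + b1 * time + rng.normal(0.0, 1.0, subj.size)
df = pd.DataFrame({"y": y, "time": time, "subj": subj})

# Mixed-effects regression: fixed time effect, random intercept and
# random slope for time within subject
m = smf.mixedlm("y ~ time", df, groups=df["subj"],
                re_formula="~time").fit()
```

Unlike the fixed-subject-effect or summary-statistic approaches the article reserves for preliminary analyses, this model uses all observations, tolerates unbalanced visit counts, and separates between-subject from within-subject variability.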

Sensitivity analysis of horizontal heat and vapor transfer coefficients for a cloud-topped marine boundary layer during cold-air outbreaks. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chang, Y. V.

    1986-01-01

The effects of external parameters on the surface heat and vapor fluxes into the marine atmospheric boundary layer (MABL) during cold-air outbreaks are investigated using the numerical model of Stage and Businger (1981a). These fluxes are nondimensionalized using the horizontal heat (g1) and vapor (g2) transfer coefficient method first suggested by Chou and Atlas (1982) and further formulated by Stage (1983a). In order to simplify the problem, the boundary layer is assumed to be well mixed and horizontally homogeneous, and to have linear shoreline soundings of equivalent potential temperature and mixing ratio. Modifications of initial surface flux estimates, time step limitation, and termination conditions are made to the MABL model to obtain accurate computations. The dependence of g1 and g2 in the cloud-topped boundary layer on the external parameters (wind speed, divergence, sea surface temperature, radiative sky temperature, cloud top radiation cooling, and initial shoreline soundings of temperature and mixing ratio) is studied by a sensitivity analysis, which shows that the uncertainties of horizontal transfer coefficients caused by changes in the parameters are reasonably small.

  20. APPLICATION OF STABLE ISOTOPE TECHNIQUES TO AIR POLLUTION RESEARCH

    EPA Science Inventory

    Stable isotope techniques provide a robust, yet under-utilized tool for examining pollutant effects on plant growth and ecosystem function. Here, we survey a range of mixing model, physiological and system level applications for documenting pollutant effects. Mixing model examp...

  1. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to impose linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has wider confidence intervals when the heterogeneity is larger in the pairwise comparison. The UWLS model with a unique variance adjustment factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis is still to be explored.
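The core of a fixed-effect network meta-analysis for an A-B-C network can be written as a small weighted least squares problem (the estimation principle underlying the WLS-style approaches discussed above). The study contrasts and variances below are hypothetical, not the cirrhosis dataset:

```python
import numpy as np

# Basic parameters: d_AB, d_AC; consistency implies d_BC = d_AC - d_AB.
# Each study contributes one contrast (e.g. a log odds ratio) with a
# known variance. All numbers are made up for illustration.
rows = {"AB": [1.0, 0.0], "AC": [0.0, 1.0], "BC": [-1.0, 1.0]}
studies = [("AB", 0.45, 0.04), ("AB", 0.60, 0.09),
           ("AC", 0.80, 0.05), ("AC", 0.70, 0.08),
           ("BC", 0.25, 0.06)]

X = np.array([rows[c] for c, _, _ in studies])
y = np.array([d for _, d, _ in studies])
W = np.diag([1.0 / v for _, _, v in studies])

# Weighted least squares: beta = (X'WX)^-1 X'Wy
XtWX = X.T @ W @ X
beta = np.linalg.solve(XtWX, X.T @ W @ y)
cov = np.linalg.inv(XtWX)  # covariance of the estimates
d_AB, d_AC = beta
d_BC = d_AC - d_AB         # indirect estimate from consistency
```

The BC studies inform the A-B and A-C comparisons through the consistency equation — the "incorporating all available evidence" property the abstract describes; an SEM formulation reaches the same estimates by modeling the random effect as a latent variable.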

  2. A compressibility correction of the pressure strain correlation model in turbulent flow

    NASA Astrophysics Data System (ADS)

Khlifi, Hechmi; Lili, Taieb

    2013-07-01

This paper is devoted to second-order closure for compressible turbulent flows, with special attention paid to modeling the pressure-strain correlation appearing in the Reynolds stress equation. This term appears as the main one responsible for the changes of the turbulence structures that arise from structural compressibility effects. From the analysis and DNS results of Simone et al. and Sarkar, the compressibility effects on homogeneous turbulent shear flow are parameterized by the gradient Mach number. Several experimental and DNS results suggest that the convective Mach number is appropriate to study the compressibility effects on mixing layers. The extension of the LRR model recently proposed by Marzougui, Khlifi and Lili for the pressure-strain correlation gives results that are in disagreement with the DNS results of Sarkar for high-speed shear flows. This extension is revised to derive a turbulence model for the pressure-strain correlation in which the compressibility is included through the turbulent Mach number, the gradient Mach number and the convective Mach number. The behavior of the proposed model is compared to the compressible model of Adumitroaie et al. for the pressure-strain correlation in two turbulent compressible flows: homogeneous shear flow and mixing layers. In compressible homogeneous shear flows, the predicted results are compared with the DNS data of Simone et al. and those of Sarkar. For low compressibility, the two compressible models are similar, but they become substantially different at high compressibilities. The proposed model shows good agreement with all cases of DNS results, whereas that of Adumitroaie et al. does not reflect any effect of a change in the initial value of the gradient Mach number on the Reynolds stress anisotropy. The models are also used to simulate compressible mixing layers. Comparison of our predictions with those of Adumitroaie et al. and with the experimental results of Goebel et al. shows good qualitative agreement.

  3. Effects of Visual Complexity and Sublexical Information in the Occipitotemporal Cortex in the Reading of Chinese Phonograms: A Single-Trial Analysis with MEG

    ERIC Educational Resources Information Center

    Hsu, Chun-Hsien; Lee, Chia-Ying; Marantz, Alec

    2011-01-01

    We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170…

  4. The impact of loss to follow-up on hypothesis tests of the treatment effect for several statistical methods in substance abuse clinical trials.

    PubMed

    Hedden, Sarra L; Woolson, Robert F; Carter, Rickey E; Palesch, Yuko; Upadhyaya, Himanshu P; Malcolm, Robert J

    2009-07-01

    "Loss to follow-up" can be substantial in substance abuse clinical trials. When extensive losses to follow-up occur, one must cautiously analyze and interpret the findings of a research study. Aims of this project were to introduce the types of missing data mechanisms and describe several methods for analyzing data with loss to follow-up. Furthermore, a simulation study compared Type I error and power of several methods when missing data amount and mechanism varies. Methods compared were the following: Last observation carried forward (LOCF), multiple imputation (MI), modified stratified summary statistics (SSS), and mixed effects models. Results demonstrated nominal Type I error for all methods; power was high for all methods except LOCF. Mixed effect model, modified SSS, and MI are generally recommended for use; however, many methods require that the data are missing at random or missing completely at random (i.e., "ignorable"). If the missing data are presumed to be nonignorable, a sensitivity analysis is recommended.

  5. Nurse staffing patterns and hospital efficiency in the United States.

    PubMed

    Bloom, J R; Alexander, J A; Nuchols, B A

    1997-01-01

    The objective of this exploratory study was to assess the effects of four nurse staffing patterns on the efficiency of patient care delivery in the hospital: registered nurses (RNs) from temporary agencies; part-time career RNs; RN rich skill mix; and organizationally experienced RNs. Using Transaction Cost Analysis, four regression models were specified to consider the effect of these staffing plans on personnel and benefit costs and on non-personnel operating costs. A number of additional variables were also included in the models to control for the effect of other organization and environmental determinants of hospital costs. Use of career part-time RNs and experienced staff reduced both personnel and benefit costs, as well as total non-personnel operating costs, while the use of temporary agencies for RNs increased non-personnel operating costs. An RN rich skill mix was not related to either measure of hospital costs. These findings provide partial support of the theory. Implications of our findings for future research on hospital management are discussed.

  6. Joint physical and numerical modeling of water distribution networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.

    2009-01-01

This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network, conducted during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High resolution analysis of turbulent mixing is carried out via high speed photography as well as 3D finite-volume based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. Preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.
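The baseline the report's measurements are compared against is the perfect-mixing junction assumption used in standard EPANET, which can be stated in a few lines (a sketch of the assumption, not of the EPANET-BAM correction itself):

```python
def junction_outlet_conc(inflows, n_outlets):
    """Perfect-mixing junction model (standard EPANET assumption):
    every outlet pipe carries the flow-weighted average of the inlet
    concentrations. inflows: list of (flow, concentration) pairs."""
    q_tot = sum(q for q, _ in inflows)
    c_mix = sum(q * c for q, c in inflows) / q_tot  # flow-weighted average
    return [c_mix] * n_outlets
```

The experiments and LES described above show that real cross junctions mix incompletely — tracer tends to stay on its inlet side — so outlet concentrations deviate from this flow-weighted average, which is the behavior a bulk-mixing correction is meant to capture.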

  7. The operating room case-mix problem under uncertainty and nurses capacity constraints.

    PubMed

    Yahia, Zakaria; Eltawil, Amr B; Harraz, Nermine A

    2016-12-01

Surgery is one of the key functions in hospitals; it generates significant revenue and admissions. In this paper we address the decision of choosing a case-mix for a surgery department. The objective of this study is to generate an optimal case-mix plan of surgery patients under uncertainty in surgery durations, length of stay, surgery demand and the availability of nurses. In order to obtain an optimal case-mix plan, a stochastic optimization model is proposed and the sample average approximation method is applied. The proposed model is used to determine the number of surgery cases to be served weekly, the amount of operating room time dedicated to each specialty and the number of ward beds dedicated to each specialty. The optimal case-mix selection criterion is based upon a weighted score taking into account both the waiting list and the historical demand of each patient category. The score aims to maximize the service level of the operating rooms by increasing the total number of surgery cases that can be served. A computational experiment is presented to demonstrate the performance of the proposed method. The results show that the stochastic model solution outperforms the expected value problem solution. Additional analysis is conducted to study the effect of varying the number of ORs and nurse capacity on overall OR performance.
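A heavily simplified sketch of the planning problem is shown below. It is not the authors' formulation: the full sample average approximation keeps per-scenario constraints and recourse, whereas this toy merely averages the sampled coefficients into a deterministic LP; all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n_spec = 3
score = np.array([5.0, 3.0, 2.0])     # waiting-list/demand weights (made up)
mean_dur = np.array([2.0, 3.0, 1.5])  # mean surgery hours per case
mean_los = np.array([2.0, 4.0, 1.0])  # mean length of stay (bed-days)
or_hours, bed_days_cap = 160.0, 120.0

# Sample uncertain durations/stays and average them (SAA-flavored, but a
# simplification of the true scenario-based program)
n_scen = 500
dur = rng.lognormal(np.log(mean_dur), 0.3, (n_scen, n_spec)).mean(axis=0)
los = rng.lognormal(np.log(mean_los), 0.3, (n_scen, n_spec)).mean(axis=0)

# Maximize weighted case count subject to expected OR-time and bed capacity
res = linprog(c=-score,
              A_ub=np.vstack([dur, los]),
              b_ub=[or_hours, bed_days_cap],
              bounds=[(0, None)] * n_spec)
cases = res.x  # weekly cases per specialty (continuous relaxation)
```

The continuous relaxation and the averaged constraints are the main departures from the paper's model, which uses sampled scenarios directly and integer case counts; nurse availability is also omitted here.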

  8. Modeling of thermo-mechanical and irradiation behavior of mixed oxide fuel for sodium fast reactors

    NASA Astrophysics Data System (ADS)

    Karahan, Aydın; Buongiorno, Jacopo

    2010-01-01

An engineering code to model the irradiation behavior of UO2-PuO2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named fuel engineering and structural analysis tool (FEAST-OXIDE). FEAST-OXIDE has several modules working in coupled form with an explicit numerical algorithm. These modules describe: (1) fission gas release and swelling, (2) fuel chemistry and restructuring, (3) temperature distribution, (4) fuel-clad chemical interaction and (5) fuel-clad mechanical analysis. Given the fuel pin geometry, composition and irradiation history, FEAST-OXIDE can analyze fuel and cladding thermo-mechanical behavior in both steady-state and design-basis transient scenarios. The code was written in the FORTRAN-90 programming language. The mechanical analysis module implements the LIFE algorithm. Fission gas release and swelling behavior is described by the OGRES and NEFIG models. However, the original OGRES model has been extended to include the effects of joint oxide gain (JOG) formation on fission gas release and swelling. A detailed fuel chemistry model has been included to describe the cesium radial migration and JOG formation, oxygen and plutonium radial distribution and the axial migration of cesium. The fuel restructuring model includes the effects of as-fabricated porosity migration, irradiation-induced fuel densification, grain growth, hot pressing and fuel cracking and relocation. Finally, a kinetics model is included to predict the clad wastage formation. FEAST-OXIDE predictions have been compared to the available FFTF, EBR-II and JOYO databases, as well as the LIFE-4 code predictions. The agreement was found to be satisfactory for steady-state and slow-ramp over-power accidents.

  9. The Promotion Strategy of Green Construction Materials: A Path Analysis Approach

    PubMed Central

    Huang, Chung-Fah; Chen, Jung-Lu

    2015-01-01

    As one of the major materials used in construction, cement can be very resource-consuming and polluting to produce and use. Compared with traditional cement processing methods, dry-mix mortar is more environmentally friendly by reducing waste production or carbon emissions. Despite the continuous development and promotion of green construction materials, only a few of them are accepted or widely used in the market. In addition, the majority of existing research on green construction materials focuses more on their physical or chemical characteristics than on their promotion. Without effective promotion, their benefits cannot be fully appreciated and realized. Therefore, this study is conducted to explore the promotion of dry-mix mortars, one of the green materials. This study uses both qualitative and quantitative methods. First, through a case study, the potential of reducing carbon emission is verified. Then a path analysis is conducted to verify the validity and predictability of the samples based on the technology acceptance model (TAM) in this study. According to the findings of this research, to ensure better promotion results and wider application of dry-mix mortar, it is suggested that more systematic efforts be invested in promoting the usefulness and benefits of dry-mix mortar. The model developed in this study can provide helpful references for future research and promotion of other green materials. PMID:28793613

  10. Numerical Investigation Into Effect of Fuel Injection Timing on CAI/HCCI Combustion in a Four-Stroke GDI Engine

    NASA Astrophysics Data System (ADS)

    Cao, Li; Zhao, Hua; Jiang, Xi; Kalian, Navin

    2006-02-01

    Controlled Auto-Ignition (CAI) combustion, also known as Homogeneous Charge Compression Ignition (HCCI), was achieved by trapping residuals with early exhaust valve closure in conjunction with direct injection. Multi-cycle 3D engine simulations were carried out as a parametric study of four different injection timings in order to better understand the effects of injection timing on in-cylinder mixing and CAI combustion. The full engine cycle simulation, including the complete gas exchange and combustion processes, was run over several cycles in order to obtain a stable cycle for analysis. The combustion models used in the present study are the Shell auto-ignition model and the characteristic-time combustion model, which were modified to take the high level of EGR into consideration. A liquid sheet breakup spray model was used for the droplet breakup processes. The analyses show that injection timing plays an important role in the in-cylinder air/fuel mixing and mixture temperature, which in turn affect CAI combustion and engine performance.

  11. A big data approach to the development of mixed-effects models for seizure count data.

    PubMed

    Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M

    2017-05-01

    Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
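    The BIC-based selection between count models described above can be sketched on simulated data. This is an illustration of the criterion, not the authors' code or data; the simulation parameters and the profile-likelihood shortcut for the dispersion are arbitrary choices.

    ```python
    # Compare Poisson and negative binomial fits to overdispersed simulated
    # daily seizure counts by BIC (lower is better).
    import numpy as np
    from scipy import stats
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    # simulate overdispersed counts: NB with mean 2 and dispersion r = 1
    counts = rng.negative_binomial(n=1, p=1.0 / 3.0, size=500)
    n = counts.size

    # Poisson: the MLE of the rate is the sample mean (1 free parameter)
    lam = counts.mean()
    bic_pois = -2.0 * stats.poisson.logpmf(counts, lam).sum() + 1.0 * np.log(n)

    # Negative binomial: profile the dispersion r with the mean fixed at the
    # sample mean (2 free parameters; a simplification of full joint MLE)
    def nb_negll(r):
        p = r / (r + counts.mean())
        return -stats.nbinom.logpmf(counts, r, p).sum()

    r_hat = minimize_scalar(nb_negll, bounds=(0.01, 100.0), method="bounded").x
    bic_nb = 2.0 * nb_negll(r_hat) + 2.0 * np.log(n)
    ```

    For overdispersed data of this kind the negative binomial attains the lower BIC despite its extra parameter, mirroring the paper's finding that a negative binomial model was optimal for seizure counts.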

  12. Effective temperatures of red giants in the APOKASC catalogue and the mixing length calibration in stellar models

    NASA Astrophysics Data System (ADS)

    Salaris, M.; Cassisi, S.; Schiavon, R. P.; Pietrinferni, A.

    2018-04-01

    Red giants in the updated APOGEE-Kepler catalogue, with estimates of mass, chemical composition, surface gravity and effective temperature, have recently challenged stellar models computed under the standard assumption of a solar calibrated mixing length. In this work, we critically reanalyse this sample of red giants, adopting our own stellar model calculations. Contrary to previous results, we find that the disagreement between the Teff scale of red giants and models with solar calibrated mixing length disappears when considering our models and the APOGEE-Kepler stars with a scaled solar metal distribution. However, a discrepancy shows up when α-enhanced stars are included in the sample. We find that, assuming the mass, chemical composition and effective temperature scales of the APOGEE-Kepler catalogue, stellar models generally underpredict the change in temperature of red giants caused by α-element enhancement at fixed [Fe/H]. A second important conclusion is that the choice of the outer boundary conditions employed in the model calculations is critical. Metallicity-dependent effective temperature differences between models with solar calibrated mixing length and observations appear for some choices of the boundary conditions, but this is not a general result.

  13. An independent component analysis confounding factor correction framework for identifying broad impact expression quantitative trait loci

    PubMed Central

    Ju, Jin Hyun; Crystal, Ronald G.

    2017-01-01

    Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. 
In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL. PMID:28505156
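    The core CONFETI idea described above, building the random-effect covariance only from components that do not look genetic, can be sketched as follows. This is a conceptual illustration, not the published implementation: SVD stands in for ICA to keep the sketch dependency-free, and the dimensions, threshold and data are arbitrary.

    ```python
    # Estimate hidden expression factors, flag components whose sample scores
    # correlate with a genotype (candidate broad-impact eQTL signal), and
    # build the mixed-model random-effect kernel from the remaining,
    # putatively non-genetic components only.
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_genes = 60, 200
    Y = rng.normal(size=(n_samples, n_genes))            # toy expression matrix
    genotype = rng.integers(0, 3, n_samples).astype(float)

    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)

    # correlation of each component's sample scores with the genotype
    corr = np.array([abs(np.corrcoef(U[:, k], genotype)[0, 1])
                     for k in range(s.size)])
    keep = corr < 0.3                  # drop genetic-looking components

    reconstructed = (U[:, keep] * s[keep]) @ Vt[keep]
    K = reconstructed @ reconstructed.T / n_genes        # random-effect kernel
    ```

    Excluding genotype-correlated components is what keeps a confounder correction from absorbing the very broad-impact eQTL signal one is trying to detect, which is the tension the abstract describes.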

  15. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    USDA-ARS?s Scientific Manuscript database

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...
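    The repeated-measures idea behind such a longitudinal mixed linear model can be sketched minimally with a random-intercept decomposition. The method-of-moments fit below on simulated data is illustrative only and is not the manuscript's analysis; the plot counts and variances are arbitrary.

    ```python
    # Random-intercept model y_ij = mu + b_i + e_ij for repeated measurements
    # on the same experimental units, with variance components recovered from
    # the classical one-way ANOVA decomposition.
    import numpy as np

    rng = np.random.default_rng(2)
    n_plots, n_times = 30, 5
    intercepts = rng.normal(0.0, 2.0, n_plots)        # true between-plot sd 2
    y = 10.0 + intercepts[:, None] + rng.normal(0.0, 1.0, (n_plots, n_times))

    plot_means = y.mean(axis=1)
    grand = y.mean()
    ms_between = n_times * ((plot_means - grand) ** 2).sum() / (n_plots - 1)
    ms_within = ((y - plot_means[:, None]) ** 2).sum() / (n_plots * (n_times - 1))

    sigma2_resid = ms_within                          # residual variance (true 1)
    sigma2_plot = (ms_between - ms_within) / n_times  # intercept variance (true 4)
    ```

    Full longitudinal MLMs generalize this by adding fixed covariates and richer within-unit correlation structures, which is what makes them "information-rich" for disease-progress data.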

  16. RO1 Funding for Mixed Methods Research: Lessons learned from the Mixed-Method Analysis of Japanese Depression Project

    PubMed Central

    Arnault, Denise Saint; Fetters, Michael D.

    2013-01-01

    Mixed methods research has made significant inroads in the effort to examine complex health-related phenomena. However, little has been published on the funding of mixed methods research projects. This paper addresses that gap by presenting an example of an NIMH-funded project using a mixed methods QUAL-QUAN triangulation design entitled "The Mixed-Method Analysis of Japanese Depression." We present the Cultural Determinants of Health Seeking model that framed the study, the specific aims, the quantitative and qualitative data sources informing the study, and an overview of the mixing of the two studies. Finally, we examine reviewers' comments and our insights related to writing a mixed methods proposal successful in achieving RO1-level funding. PMID:25419196

  17. Converting isotope ratios to diet composition - the use of mixing models - June 2010

    EPA Science Inventory

    One application of stable isotope analysis is to reconstruct diet composition based on isotopic mass balance. The isotopic value of a consumer’s tissue reflects the isotopic values of its food sources proportional to their dietary contributions. Isotopic mixing models are used ...
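    The isotopic mass balance described above reduces, in the simplest two-source, one-isotope case, to d_mix = f·d_A + (1 − f)·d_B, which can be rearranged for the dietary fraction f. The delta values below are illustrative, not from any EPA dataset.

    ```python
    # Two-source, one-isotope linear mixing model: solve the mass balance
    # for the fraction of source A in the consumer's diet.
    def source_fraction(d_mix, d_a, d_b):
        """Dietary fraction of source A from isotope values (per mil)."""
        return (d_mix - d_b) / (d_a - d_b)

    # a consumer tissue at -20 per mil with sources at -26 and -12 per mil
    f_a = source_fraction(-20.0, -26.0, -12.0)
    ```

    With more sources than isotope systems the mass balance is underdetermined, which is why practical mixing models turn to probabilistic or Bayesian formulations rather than this direct inversion.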

  18. Simulating mixed-phase Arctic stratus clouds: sensitivity to ice initiation mechanisms

    NASA Astrophysics Data System (ADS)

    Sednev, I.; Menon, S.; McFarquhar, G.

    2008-06-01

    The importance of Arctic mixed-phase clouds for radiation and the Arctic climate is well known. However, the development of mixed-phase cloud parameterizations for use in large-scale models is limited by a lack of both related observations and numerical studies using multidimensional models with advanced microphysics that provide the basis for understanding the relative importance of the different microphysical processes that take place in mixed-phase clouds. To improve the representation of mixed-phase cloud processes in the GISS GCM we use the GISS single-column model coupled to a bin-resolved microphysics (BRM) scheme that was specially designed to simulate mixed-phase clouds and aerosol-cloud interactions. Using this model with the microphysical measurements obtained from the DOE ARM Mixed-Phase Arctic Cloud Experiment (MPACE) campaign in October 2004 at the North Slope of Alaska, we investigate the effect of ice initiation processes and the Bergeron-Findeisen process (BFP) on the glaciation time and longevity of single-layer stratiform mixed-phase clouds. We focus on observations taken during 9-10 October, which indicated the presence of single-layer mixed-phase clouds. We performed several sets of 12-h simulations to examine model sensitivity to different ice initiation mechanisms and evaluate model output (hydrometeor concentrations, contents, effective radii, precipitation fluxes, and radar reflectivity) against measurements from the MPACE Intensive Observing Period. Overall, the model qualitatively simulates ice crystal concentration and hydrometeor content, but it fails to predict quantitatively the effective radii of ice particles and their vertical profiles. In particular, the ice effective radii are overestimated by at least 50%. However, using the same definition as used for the observations, the simulated and observed effective radii were more comparable. 
We find that for the simulated single-layer stratiform mixed-phase clouds, ice-phase initiation due to freezing of supercooled water in both saturated and undersaturated (with respect to water) environments is as important as primary ice crystal origination from water vapor. We also find that the BFP is the process mainly responsible for the glaciation rates of the simulated clouds. These glaciation rates cannot be adequately represented by a water-ice saturation adjustment scheme that depends only on temperature and on the liquid and solid hydrometeor contents, as is widely used in bulk microphysics schemes; they are better represented by processes that also account for supersaturation changes as the hydrometeors grow.

  20. Experimental Testing and Modeling Analysis of Solute Mixing at Water Distribution Pipe Junctions

    EPA Science Inventory

    Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. Here we have categorized pipe junctions into five hydraulic types, for which flow distribution factors and analytical equations for describing the solute mixing ...

  1. Mixed Effects Modeling of Morris Water Maze Data: Advantages and Cautionary Notes

    ERIC Educational Resources Information Center

    Young, Michael E.; Clark, M. H.; Goffus, Andrea; Hoane, Michael R.

    2009-01-01

    Morris water maze data are most commonly analyzed using repeated measures analysis of variance in which daily test sessions are analyzed as an unordered categorical variable. This approach, however, may lack power, relies heavily on post hoc tests of daily performance that can complicate interpretation, and does not target the nonlinear trends…

  2. A MIXED MODEL ANALYSIS OF SOIL CO2 EFFLUX AND NIGHT-TIME RESPIRATION RESPONSES TO ELEVATED CO2 AND TEMPERATURE

    EPA Science Inventory

    Abstract: We investigated the effects of elevated soil temperature and atmospheric CO2 on soil CO2 efflux and system respiration responses. The study was conducted in sun-lit controlled-environment chambers using two-year-old Douglas-fir seedlings grown in reconstructed litter-so...

  3. Disaggregating the Distal, Proximal, and Time-Varying Effects of Parent Alcoholism on Children's Internalizing Symptoms

    ERIC Educational Resources Information Center

    Hussong, A. M.; Cai, L.; Curran, P. J.; Flora, D. B.; Chassin, L. A.; Zucker, R. A.

    2008-01-01

    We tested whether children show greater internalizing symptoms when their parents are actively abusing alcohol. In an integrative data analysis, we combined observations over ages 2 through 17 from two longitudinal studies of children of alcoholic parents and matched controls recruited from the community. Using a mixed modeling approach, we tested…

  4. Pharmacoepidemiologic investigation of clonazepam relative clearance by mixed-effect modeling using routine clinical pharmacokinetic data in Japanese patients.

    PubMed

    Yukawa, Eiji; Satou, Masayasu; Nonaka, Toshiharu; Yukawa, Miho; Ohdo, Shigehiro; Higuchi, Shun; Kuroda, Takeshi; Goto, Yoshinobu

    2002-01-01

    The effects of drug-drug interactions on clonazepam clearance were examined through a retrospective analysis of serum concentration data from pediatric and adult epileptic patients. Patients received clonazepam as monotherapy or in combination with other antiepileptic drugs. A total of 259 serum clonazepam concentrations gathered from 137 patients were used in a population analysis of drug-drug interactions on clonazepam clearance. Data were analyzed using a nonlinear mixed-effects modeling (NONMEM) technique. The final model describing clonazepam clearance was CL = 152 × TBW^(-0.181) × DIF, where CL is clearance (ml/kg/h), TBW is total body weight (kg), and DIF (drug interaction factor) is a scaling factor for concomitant medication with a value of 1 for patients on clonazepam monotherapy, 1.18 for patients receiving concomitant administration of clonazepam and one antiepileptic drug (carbamazepine or valproic acid), and 2.12 × TBW^(-0.119) for patients receiving concomitant administration of clonazepam and two or more antiepileptic drugs. Clonazepam clearance decreased in a weight-related fashion in children, with minimal changes observed in adults. Concomitant administration of clonazepam and carbamazepine resulted in a 22% increase in clonazepam clearance. Concomitant administration of clonazepam and valproic acid resulted in a 12% increase in clonazepam clearance. Concomitant administration of clonazepam with two or more antiepileptic drugs resulted in a 23% to 75% increase in clonazepam clearance.
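    The reported final model is simple enough to transcribe directly. The function below follows the abstract's equation and DIF rules; the mapping of co-medication counts to DIF branches is as summarized above, and the example weights are arbitrary.

    ```python
    # Reported population model: CL = 152 * TBW**-0.181 * DIF (ml/kg/h),
    # with DIF = 1 (monotherapy), 1.18 (one co-administered antiepileptic),
    # or 2.12 * TBW**-0.119 (two or more).
    def clonazepam_clearance(tbw_kg, n_comeds):
        if n_comeds == 0:
            dif = 1.0                          # monotherapy
        elif n_comeds == 1:
            dif = 1.18                         # + carbamazepine or valproate
        else:
            dif = 2.12 * tbw_kg ** -0.119      # two or more co-medications
        return 152.0 * tbw_kg ** -0.181 * dif

    cl_adult_mono = clonazepam_clearance(70.0, 0)
    cl_child_mono = clonazepam_clearance(20.0, 0)
    ```

    The negative power of TBW reproduces the abstract's observation that per-kilogram clearance is higher in lighter (pediatric) patients and flattens out in adults.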

  5. Trends in stratospheric ozone profiles using functional mixed models

    NASA Astrophysics Data System (ADS)

    Park, A. Y.; Guillas, S.; Petropavlovskikh, I.

    2013-05-01

    This paper is devoted to the modeling of altitude-dependent patterns of ozone variations over time. Umkehr ozone profiles (quarter of Umkehr layer) from 1978 to 2011 are investigated at two locations: Boulder (USA) and Arosa (Switzerland). The study consists of two statistical stages. First we approximate the ozone profiles using an appropriate basis. To capture the primary modes of ozone variation without losing essential information, a functional principal component analysis is performed, as it penalizes roughness of the function and smooths excessive variations in the shape of the ozone profiles. As a result, data-driven basis functions are obtained. Second, we estimate the effects of covariates - month, year (trend), the quasi-biennial oscillation, the solar cycle, the Arctic Oscillation and the El Niño/Southern Oscillation cycle - on the principal component scores of the ozone profiles over time using generalized additive models. The effects are smooth functions of the covariates, represented by knot-based cubic regression splines. Finally we employ generalized additive mixed-effects models incorporating a more complex error structure that reflects the observed seasonality in the data. The analysis provides more accurate estimates of influences and trends, together with enhanced uncertainty quantification. We are able to capture fine variations in the time evolution of the profiles, such as the semi-annual oscillation. We conclude by showing the trends by altitude over Boulder. The strongly declining trends over 2003-2011 for the 32-64 hPa levels show that stratospheric ozone is not yet fully recovering.
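    The first stage described above, extracting dominant modes of profile variation and their scores, can be sketched on synthetic profiles. Plain PCA via SVD is used here as a stand-in; the paper's functional PCA additionally penalizes roughness, which this sketch does not reproduce, and all dimensions and noise levels are arbitrary.

    ```python
    # Decompose a stack of vertical profiles into a mean profile plus
    # principal modes; the per-profile scores of the leading mode are what
    # would feed the second-stage (GAM) regression on covariates.
    import numpy as np

    rng = np.random.default_rng(3)
    n_profiles, n_levels = 120, 16
    mean_profile = np.linspace(20.0, 5.0, n_levels)       # synthetic mean
    mode = np.sin(np.linspace(0.0, np.pi, n_levels))      # one smooth mode
    scores_true = rng.normal(0.0, 3.0, n_profiles)
    profiles = (mean_profile
                + np.outer(scores_true, mode)
                + rng.normal(0.0, 0.2, (n_profiles, n_levels)))

    X = profiles - profiles.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pc1_scores = U[:, 0] * s[0]            # inputs to the regression stage
    var_explained = s[0] ** 2 / (s ** 2).sum()
    ```

    Regressing these scores on month, trend and climate indices, rather than regressing every altitude level separately, is what makes the two-stage functional approach statistically efficient.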

  6. A meta-analysis of Th2 pathway genetic variants and risk for allergic rhinitis.

    PubMed

    Bunyavanich, Supinda; Shargorodsky, Josef; Celedón, Juan C

    2011-06-01

    There is a significant genetic contribution to allergic rhinitis (AR). Genetic association studies for AR have been performed, but varying results make it challenging to decipher the overall potential effect of specific variants. The Th2 pathway plays an important role in the immunological development of AR. We performed meta-analyses of genetic association studies of variants in Th2 pathway genes and AR. PubMed and Phenopedia were searched by double extraction for original studies on Th2 pathway-related genetic polymorphisms and their associations with AR. A meta-analysis was conducted on each genetic polymorphism with data meeting our predetermined selection criteria. Analyses were performed using both fixed and random effects models, with stratification by age group, ethnicity, and AR definition where appropriate. Heterogeneity and publication bias were assessed. Six independent studies analyzing three candidate polymorphisms and involving a total of 1596 cases and 2892 controls met our inclusion criteria. Overall, the A allele of IL13 single nucleotide polymorphism (SNP) rs20541 was associated with increased odds of AR (estimated OR = 1.2; 95% CI 1.1-1.3, p = 0.004 in the fixed effects model; 95% CI 1.0-1.5, p = 0.056 in the random effects model). The A allele of rs20541 was associated with increased odds of AR in mixed age groups using both fixed effects and random effects modeling. IL13 SNP rs1800925 and IL4R SNP rs1801275 did not demonstrate overall associations with AR. We conclude that there is evidence for an overall association between IL13 SNP rs20541 and increased risk of AR, especially in mixed-age populations. © 2011 John Wiley & Sons A/S.
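    The fixed-effects pooling behind ORs like those reported above is inverse-variance weighting on the log odds ratio scale. The study values below are made up for illustration and are not the rs20541 data.

    ```python
    # Generic fixed-effects, inverse-variance meta-analysis of odds ratios.
    import math

    def pool_fixed(log_ors, ses):
        weights = [1.0 / se ** 2 for se in ses]
        pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        return pooled, se_pooled

    log_ors = [math.log(1.30), math.log(1.10), math.log(1.25)]
    ses = [0.10, 0.08, 0.12]
    pooled, se = pool_fixed(log_ors, ses)
    or_pooled = math.exp(pooled)
    ci95 = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    ```

    A random-effects model adds an estimated between-study variance to each weight (e.g. DerSimonian-Laird), which widens the interval; that is consistent with the wider, borderline random-effects CI the abstract reports for rs20541.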

  7. Influence of non-homogeneous mixing on final epidemic size in a meta-population model.

    PubMed

    Cui, Jingan; Zhang, Yanan; Feng, Zhilan

    2018-06-18

    In meta-population models for infectious diseases, the basic reproduction number R0 can be as much as 70% larger in the case of preferential mixing than in homogeneous mixing [J.W. Glasser, Z. Feng, S.B. Omer, P.J. Smith, and L.E. Rodewald, The effect of heterogeneity in uptake of the measles, mumps, and rubella vaccine on the potential for outbreaks of measles: A modelling study, Lancet ID 16 (2016), pp. 599-605. doi: 10.1016/S1473-3099(16)00004-9]. This suggests that realistic mixing can be an important factor to consider for models to provide a reliable assessment of intervention strategies. The influence of mixing is more significant when the population is highly heterogeneous. In this paper, another quantity, the final epidemic size of an outbreak, is considered to examine the influence of mixing and population heterogeneity. A final size relation is derived for a meta-population model accounting for general mixing. The results show that the final epidemic size can be influenced by the pattern of mixing in a significant way. Another interesting finding is that heterogeneity in various sub-population characteristics may have opposite effects on R0 and the final epidemic size.
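    The meta-population final size relation derived in the paper generalizes the classical single-population relation 1 − F = exp(−R0·F), where F is the fraction ultimately infected. That baseline case can be solved by fixed-point iteration:

    ```python
    # Solve the classical SIR final size relation 1 - F = exp(-R0 * F)
    # by fixed-point iteration (converges to the positive root for R0 > 1).
    import math

    def final_size(r0, tol=1e-12, max_iter=10000):
        f = 0.5
        for _ in range(max_iter):
            f_new = 1.0 - math.exp(-r0 * f)
            if abs(f_new - f) < tol:
                return f_new
            f = f_new
        return f
    ```

    For R0 ≤ 1 the iteration collapses toward zero (no major outbreak), while for R0 > 1 the attack rate F grows monotonically with R0; the paper's point is that mixing patterns can move F and R0 in opposite directions, so neither quantity alone characterizes outbreak severity.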

  8. Numerical simulation of Forchheimer flow to a partially penetrating well with a mixed-type boundary condition

    NASA Astrophysics Data System (ADS)

    Mathias, Simon A.; Wen, Zhang

    2015-05-01

    This article presents a numerical study investigating the combined role of partial well penetration (PWP) and non-Darcy effects in the performance of groundwater production wells. A finite difference model is developed in MATLAB to solve the two-dimensional mixed-type boundary value problem associated with flow to a partially penetrating well within a cylindrical confined aquifer. Non-Darcy effects are incorporated using the Forchheimer equation. The model is verified by comparison with results from existing semi-analytical solutions for the same problem under Darcy's law. A sensitivity analysis is presented to explore the problem of concern. For constant-pressure production, non-Darcy effects lead to a reduction in production rate compared to an equivalent problem solved using Darcy's law. For fully penetrating wells, this reduction in production rate becomes less significant with time. However, for partially penetrating wells, the reduction in production rate persists to much larger times. For constant-production-rate scenarios, the combined effect of PWP and non-Darcy flow takes the form of a constant additional drawdown term. An approximate solution for this loss term is obtained by performing linear regression on the modeling results.
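    The Forchheimer equation used above appends a quadratic inertial term to Darcy's law, −dp/dr = (μ/k)·q + β·ρ·q², so the extra drawdown grows with flow rate. The sketch below uses arbitrary illustrative parameter magnitudes, not values from the paper.

    ```python
    # Pressure-gradient magnitude under Darcy vs Forchheimer flow for a
    # given specific discharge q (m/s). Parameter values are arbitrary.
    MU = 1e-3      # fluid viscosity, Pa s
    K = 1e-11      # permeability, m^2
    BETA = 1e8     # Forchheimer coefficient, 1/m (illustrative)
    RHO = 1000.0   # fluid density, kg/m^3

    def pressure_gradient(q, darcy_only=False):
        darcy = (MU / K) * q
        if darcy_only:
            return darcy
        return darcy + BETA * RHO * q ** 2
    ```

    At low q the two laws coincide, while at high q the quadratic term dominates; near a partially penetrating well, where flow converges and q is locally large, this is why the non-Darcy production penalty persists longer than for a fully penetrating well.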

  9. Structure Elucidation of Mixed-Linker Zeolitic Imidazolate Frameworks by Solid-State (1)H CRAMPS NMR Spectroscopy and Computational Modeling.

    PubMed

    Jayachandrababu, Krishna C; Verploegh, Ross J; Leisen, Johannes; Nieuwendaal, Ryan C; Sholl, David S; Nair, Sankar

    2016-06-15

    Mixed-linker zeolitic imidazolate frameworks (ZIFs) are nanoporous materials that exhibit continuous and controllable tunability of properties like effective pore size, hydrophobicity, and organophilicity. The structure of mixed-linker ZIFs has been studied on macroscopic scales using gravimetric and spectroscopic techniques. However, it has so far not been possible to obtain information on unit-cell-level linker distribution, an understanding of which is key to predicting and controlling their adsorption and diffusion properties. We demonstrate the use of (1)H combined rotation and multiple pulse spectroscopy (CRAMPS) NMR spin exchange measurements in combination with computational modeling to elucidate potential structures of mixed-linker ZIFs, particularly the ZIF-8-90 series. All of the compositions studied have structures in which the linkers are mixed at a unit-cell level, as opposed to separated or highly clustered phases within the same crystal. Direct experimental observations of linker mixing were accomplished by measuring the proton spin exchange behavior between functional groups on the linkers. The data were then fitted to a kinetic spin exchange model using proton positions from candidate mixed-linker ZIF structures that were generated computationally, using the short-range order (SRO) parameter as a measure of the ordering, clustering, or randomization of the linkers. The present method offers the advantages of sensitivity without requiring isotope enrichment, a straightforward NMR pulse sequence, and an analysis framework that allows one to relate spin diffusion behavior to proposed atomic positions. We find that structures close to an equimolar composition of the two linkers show a greater tendency for linker clustering than would be predicted from random models. Using computational modeling we have also shown how the window-type distribution in experimentally synthesized mixed-linker ZIF-8-90 materials varies as a function of their composition. 
The structural information thus obtained can be further used for predicting, screening, or understanding the tunable adsorption and diffusion behavior of mixed-linker ZIFs, for which the knowledge of linker distributions in the framework is expected to be important.
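    A short-range order statistic of the kind used above can be illustrated on a toy lattice. The Warren-Cowley-style definition below, computed on a 1-D ring of two linker types, is only an illustration of the statistic; the real ZIF lattice is 3-D and the paper's exact SRO definition may differ.

    ```python
    # SRO on a binary ring sequence: alpha = 1 - P(neighbour of an A site
    # is B) / x_B. Near 0: random mixing; > 0: clustering; < 0: alternation.
    import numpy as np

    def sro(linkers):
        linkers = np.asarray(linkers)
        n = linkers.size
        x_b = linkers.mean()
        a_sites = np.where(linkers == 0)[0]
        nbr_b = [(linkers[(i - 1) % n] + linkers[(i + 1) % n]) / 2.0
                 for i in a_sites]
        return 1.0 - float(np.mean(nbr_b)) / x_b

    alternating = [0, 1] * 10          # perfectly mixed linkers
    clustered = [0] * 10 + [1] * 10    # phase-separated blocks
    ```

    Sweeping candidate structures over this parameter is what lets the spin-exchange fit distinguish clustered from randomly mixed linker arrangements.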

  10. Numerical simulation of wave-current interaction under strong wind conditions

    NASA Astrophysics Data System (ADS)

    Larrañaga, Marco; Osuna, Pedro; Ocampo-Torres, Francisco Javier

    2017-04-01

    Although ocean surface waves are known to play an important role in momentum and other scalar transfer between the atmosphere and the ocean, most operational numerical models do not explicitly include wave-current interaction terms. In this work, a numerical analysis of the relative importance of the processes associated with wave-current interaction under strong offshore wind conditions in the Gulf of Tehuantepec (the southern Mexican Pacific) was carried out. The numerical system includes the spectral wave model WAM and the 3D hydrodynamic model POLCOMS, with vertical turbulent mixing parameterized by the k-epsilon closure model. The coupling methodology is based on the vortex-force formalism. The hydrodynamic model was forced at the open boundaries using the HYCOM database, and the wave model was forced at the open boundaries by remote waves from the southern Pacific. The atmospheric forcing for both models was provided by a local implementation of the WRF model, forced at the open boundaries using the CFSR database. A preliminary analysis of the model results indicates an effect of currents on the propagation of swell throughout the study area. The Stokes-Coriolis term has an impact on the transient Ekman transport by modifying the Ekman spiral, while the Stokes drift affects the momentum advection and the production of TKE, with the latter inducing a deepening of the mixing layer. This study is carried out in the framework of the project CONACYT CB-2015-01 255377 and the RugDiSMar Project (CONACYT 155793).

  11. Remedying excessive numerical diapycnal mixing in a global 0.25° NEMO configuration

    NASA Astrophysics Data System (ADS)

    Megann, Alex; Nurser, George; Storkey, Dave

    2016-04-01

If numerical ocean models are to simulate faithfully the upwelling branches of the global overturning circulation, they need a good representation of the diapycnal mixing processes which convert the bottom and deep waters produced at high latitudes into less dense watermasses. Depth-coordinate ocean models such as NEMO and MOM5, as used in many state-of-the-art coupled climate models and Earth System Models, are known to have excessive numerical diapycnal mixing, resulting from irreversible advection across coordinate surfaces. The GO5.0 configuration of the NEMO ocean model, on an "eddy-permitting" 0.25° global grid, is used in the current UK GC1 and GC2 coupled models. Megann and Nurser (2016) have shown, using the isopycnal watermass analysis of Lee et al. (2002), that spurious numerical mixing is substantially larger than the explicit mixing prescribed by the model's mixing scheme. It will be shown that increasing the biharmonic viscosity by a factor of three tends to suppress small-scale noise in the vertical velocity in the model. This significantly reduces the numerical mixing in GO5.0, and we shall show that it also leads to large-scale improvements in model biases.

  12. Statistical models of global Langmuir mixing

    NASA Astrophysics Data System (ADS)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

The effects of Langmuir mixing on the surface ocean may be parameterized by applying an enhancement factor, which depends on wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but at significant computational and code-development expense. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
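
The enhancement-factor idea above can be illustrated with a minimal sketch. The turbulent Langmuir number La_t = sqrt(u*/u_s) is standard (McWilliams et al. 1997), but the functional form and coefficients below are illustrative placeholders from the same family of published parameterizations, not values taken from this paper:

```python
import math

def langmuir_number(u_star, u_stokes):
    """Turbulent Langmuir number La_t = sqrt(u*/u_s)."""
    return math.sqrt(u_star / u_stokes)

def enhancement_factor(la, a=1.5, b=5.4):
    """Illustrative enhancement factor for the KPP turbulent velocity
    scale, of the form sqrt(1 + (a*La)^-2 + (b*La)^-4).  The coefficients
    a and b are placeholders, not authoritative values."""
    return math.sqrt(1.0 + (a * la) ** -2 + (b * la) ** -4)

# Typical open-ocean values: friction velocity ~0.01 m/s, Stokes drift ~0.1 m/s
la = langmuir_number(0.01, 0.1)   # wave-dominated regime, La_t ~ 0.32
f = enhancement_factor(la)        # factor > 1: Langmuir mixing enhances KPP
```

A globally constant Langmuir number, criticized above, would amount to calling `enhancement_factor` with a single fixed `la` everywhere regardless of wind and wave state.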

  13. Evaluation and linking of effective parameters in particle-based models and continuum models for mixing-limited bimolecular reactions

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo

    2013-08-01

Particle-based models and continuum models have been developed over decades to quantify mixing-limited bimolecular reactions. Effective model parameters control reaction kinetics, but the relationship between the particle-based model parameter (the interaction radius R) and the continuum model parameter (the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. A qualitative link between R and Kf can then be built for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t^(-1/2) (for example, to account for reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, revisiting laboratory experiments further shows that the best-fit factor in R and Kf is of the same order, and both models can efficiently describe the chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics may therefore be linked directly, although the exact linkage may depend on the chemical and physical properties of the system.
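
A minimal sketch of the continuum side of this comparison, assuming the simplest well-mixed second-order rate law dA/dt = -Kf·A·B (not the authors' full transport model). With equal initial concentrations A0 = B0, the exact solution A(t) = 1/(1/A0 + Kf·t) is available to check a naive integration:

```python
# Continuum sketch of the irreversible bimolecular reaction A + B -> C
# with effective rate coefficient Kf, integrated by forward Euler.
A0, Kf, dt, nsteps = 1.0, 0.5, 1e-4, 20000
A = A0
for _ in range(nsteps):
    A += -Kf * A * A * dt        # B == A when initial concentrations are equal
t = nsteps * dt
exact = 1.0 / (1.0 / A0 + Kf * t)  # closed-form solution for equal A0 = B0
```

A time-dependent coefficient such as the t^(-1/2) decline discussed above would simply replace the constant `Kf` with a function of the current time inside the loop.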

  14. Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects

    NASA Astrophysics Data System (ADS)

    Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.

    2008-10-01

    A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.

  15. Multifractal Analysis of Seismically Induced Soft-Sediment Deformation Structures Imaged by X-Ray Computed Tomography

    NASA Astrophysics Data System (ADS)

    Nakashima, Yoshito; Komatsubara, Junko

    Unconsolidated soft sediments deform and mix complexly by seismically induced fluidization. Such geological soft-sediment deformation structures (SSDSs) recorded in boring cores were imaged by X-ray computed tomography (CT), which enables visualization of the inhomogeneous spatial distribution of iron-bearing mineral grains as strong X-ray absorbers in the deformed strata. Multifractal analysis was applied to the two-dimensional (2D) CT images with various degrees of deformation and mixing. The results show that the distribution of the iron-bearing mineral grains is multifractal for less deformed/mixed strata and almost monofractal for fully mixed (i.e. almost homogenized) strata. Computer simulations of deformation of real and synthetic digital images were performed using the egg-beater flow model. The simulations successfully reproduced the transformation from the multifractal spectra into almost monofractal spectra (i.e. almost convergence on a single point) with an increase in deformation/mixing intensity. The present study demonstrates that multifractal analysis coupled with X-ray CT and the mixing flow model is useful to quantify the complexity of seismically induced SSDSs, standing as a novel method for the evaluation of cores for seismic risk assessment.
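
A generic box-counting estimate of the generalized (Rényi) dimensions D_q underlies this kind of multifractal analysis of 2D images; the sketch below is a textbook version, not the authors' implementation. In the fully mixed limit described above, a homogeneous image is monofractal with D_q ≈ 2 for every q:

```python
import numpy as np

def generalized_dimension(img, q, box_sizes=(2, 4, 8, 16)):
    """Estimate the generalized (Renyi) dimension D_q of a 2D mass
    distribution by box counting: for q != 1, D_q is 1/(q-1) times the
    slope of log(sum_i p_i^q) versus log(box size / image size)."""
    L = img.shape[0]                      # assumes a square image, L x L
    total = img.sum()
    xs, ys = [], []
    for s in box_sizes:                   # each s must divide L
        # sum the image over non-overlapping s-by-s boxes
        boxes = img.reshape(L // s, s, L // s, s).sum(axis=(1, 3))
        p = boxes[boxes > 0] / total      # box probabilities p_i
        xs.append(np.log(s / L))
        ys.append(np.log((p ** q).sum()))
    slope = np.polyfit(xs, ys, 1)[0]
    return slope / (q - 1)

# A homogeneous (fully mixed) image is monofractal: D_q ~ 2 for all q.
uniform = np.ones((64, 64))
d2 = generalized_dimension(uniform, q=2.0)
```

For a less mixed (inhomogeneous) image, D_q varies with q and the multifractal spectrum spreads, which is the behaviour the study uses to grade deformation intensity.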

  16. Statistical modelling of growth using a mixed model with orthogonal polynomials.

    PubMed

    Suchocki, T; Szyda, J

    2011-02-01

In statistical modelling, the effects of single-nucleotide polymorphisms (SNPs) are often regarded as time-independent. However, for traits recorded repeatedly, it is of great interest to investigate the behaviour of gene effects over time. In the analysis, simulated data from the 13th QTL-MAS Workshop (Wageningen, The Netherlands, April 2009) were used, and the major goal was to model genetic effects as time-dependent. For this purpose, a mixed model is fitted in which each effect is described by third-order Legendre orthogonal polynomials, in order to account for the correlation between consecutive measurements. In this model, SNPs are modelled as fixed effects, while the environment is modelled as a random effect. The maximum likelihood estimates of model parameters are obtained by the expectation-maximisation (EM) algorithm, and the significance of the additive SNP effects is based on the likelihood ratio test, with p-values corrected for multiple testing. For each significant SNP, the percentage of the total variance contributed by that SNP is calculated. Moreover, using a model which simultaneously incorporates the effects of all SNPs, future yields are predicted. As a result, 179 of the total 453 SNPs, covering 16 out of 18 true quantitative trait loci (QTL), were selected. The correlation between predicted and true breeding values was 0.73 for the data set with all SNPs and 0.84 for the data set with selected SNPs. In conclusion, we showed that a longitudinal approach allows the changes in variance contributed by each SNP over time to be estimated, and demonstrated that pre-selection of SNPs plays an important role in prediction.
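
The Legendre-basis idea can be sketched in simplified form (fixed effects only, simulated data; the paper's actual model also includes random environmental effects and EM estimation): rescale time to [-1, 1], build the polynomial design matrix, and recover a time-varying effect by least squares.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(t, t_min, t_max, order=3):
    """Design matrix of Legendre polynomials P_0..P_order evaluated on
    time rescaled to [-1, 1], as used in random-regression models."""
    x = 2.0 * (t - t_min) / (t_max - t_min) - 1.0
    # np.eye(...)[k] selects the coefficient vector for P_k in legval
    return np.column_stack([legendre.legval(x, np.eye(order + 1)[k])
                            for k in range(order + 1)])

# Simulate a time-dependent effect, then recover its Legendre coefficients.
rng = np.random.default_rng(0)
t = np.linspace(1, 10, 200)
Phi = legendre_basis(t, 1, 10)              # 200 x 4 design matrix
beta_true = np.array([1.0, 0.5, -0.3, 0.1])  # hypothetical coefficients
y = Phi @ beta_true + rng.normal(0, 0.01, t.size)
beta_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
```

The orthogonality of the Legendre basis on [-1, 1] is what keeps the coefficients of different orders nearly uncorrelated, which is why this basis is popular for modelling trajectories of genetic effects.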

  17. Some effects of swirl on turbulent mixing and combustion

    NASA Technical Reports Server (NTRS)

    Rubel, A.

    1972-01-01

    A general formulation of some effects of swirl on turbulent mixing is given. The basis for the analysis is that momentum transport is enhanced by turbulence resulting from rotational instability of the fluid field. An appropriate form for the turbulent eddy viscosity is obtained by mixing length type arguments. The result takes the form of a corrective factor that is a function of the swirl and acts to increase the eddy viscosity. The factor is based upon the initial mixing conditions implying that the rotational turbulence decays in a manner similar to that of free shear turbulence. Existing experimental data for free jet combustion are adequately matched by using the modifying factor to relate the effects of swirl on eddy viscosity. The model is extended and applied to the supersonic combustion of a ring jet of hydrogen injected into a constant area annular air stream. The computations demonstrate that swirling the flow could: (1) reduce the burning length by one half, (2) result in more uniform burning across the annulus width, and (3) open the possibility of optimization of the combustion characteristics by locating the fuel jet between the inner wall and center of the annulus width.

  18. Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D

    NASA Technical Reports Server (NTRS)

    Wolf, D. E.; Sinha, N.; Dash, S. M.

    1988-01-01

Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped Cartesian or cylindrical coordinates, employing the explicit MacCormack algorithm. A pressure-split variant of this algorithm is employed in subsonic regions, with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the kε and kW turbulence models, and employs a two-component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid Cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to a circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting the overall capabilities of SCIP3D.

  19. Multivariate Models for Normal and Binary Responses in Intervention Studies

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen

    2016-01-01

    Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…

  20. The Influence of Atomic Diffusion on Stellar Ages and Chemical Tagging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dotter, Aaron; Conroy, Charlie; Cargile, Phillip

    2017-05-10

In the era of large stellar spectroscopic surveys, there is an emphasis on deriving not only stellar abundances but also the ages for millions of stars. In the context of Galactic archeology, stellar ages provide a direct probe of the formation history of the Galaxy. We use the stellar evolution code MESA to compute models with atomic diffusion (with and without radiative acceleration) and extra mixing in the surface layers. The extra mixing consists of both density-dependent turbulent mixing and envelope overshoot mixing. Based on these models we argue that it is important to distinguish between initial, bulk abundances (parameters) and current, surface abundances (variables) in the analysis of individual stellar ages. In stars that maintain radiative regions on evolutionary timescales, atomic diffusion modifies the surface abundances. We show that when initial, bulk metallicity is equated with current, surface metallicity in isochrone age analysis, the resulting stellar ages can be systematically overestimated by up to 20%. The change of surface abundances with evolutionary phase also complicates chemical tagging, which is the concept that dispersed star clusters can be identified through unique, high-dimensional chemical signatures. Stars from the same cluster, but in different evolutionary phases, will show different surface abundances. We speculate that calibration of stellar models may allow us to estimate not only stellar ages but also initial abundances for individual stars. In the meantime, analyzing the chemical properties of stars in similar evolutionary phases is essential to minimize the effects of atomic diffusion in the context of chemical tagging.

  1. Quantitative Analysis of the Efficiency of OLEDs.

    PubMed

    Sim, Bomi; Moon, Chang-Ki; Kim, Kwon-Hyeon; Kim, Jang-Joo

    2016-12-07

    We present a comprehensive model for the quantitative analysis of factors influencing the efficiency of organic light-emitting diodes (OLEDs) as a function of the current density. The model takes into account the contribution made by the charge carrier imbalance, quenching processes, and optical design loss of the device arising from various optical effects including the cavity structure, location and profile of the excitons, effective radiative quantum efficiency, and out-coupling efficiency. Quantitative analysis of the efficiency can be performed with an optical simulation using material parameters and experimental measurements of the exciton profile in the emission layer and the lifetime of the exciton as a function of the current density. This method was applied to three phosphorescent OLEDs based on a single host, mixed host, and exciplex-forming cohost. The three factors (charge carrier imbalance, quenching processes, and optical design loss) were influential in different ways, depending on the device. The proposed model can potentially be used to optimize OLED configurations on the basis of an analysis of the underlying physical processes.

  2. Multivariate meta-analysis using individual participant data

    PubMed Central

    Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.

    2016-01-01

    When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
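
The bootstrap route to within-study correlations can be sketched for a single hypothetical two-arm trial with two correlated continuous outcomes: resample participants with replacement, re-estimate both treatment effects on each resample, and correlate the resampled estimates. All data and effect sizes below are simulated, not from the hypertension trials analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical IPD for one two-arm trial with two correlated continuous
# outcomes (residual correlation 0.7) and modest treatment effects.
n = 200
treat = rng.integers(0, 2, n)
noise = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], n)
y1 = -0.5 * treat + noise[:, 0]
y2 = -0.3 * treat + noise[:, 1]

def effects(idx):
    """Mean-difference treatment effect for each outcome on a resample."""
    tr, ct = treat[idx] == 1, treat[idx] == 0
    return (y1[idx][tr].mean() - y1[idx][ct].mean(),
            y2[idx][tr].mean() - y2[idx][ct].mean())

# Bootstrap the pair of effect estimates; their correlation across
# resamples estimates the within-study correlation.
boot = np.array([effects(rng.integers(0, n, n)) for _ in range(500)])
within_corr = np.corrcoef(boot[:, 0], boot[:, 1])[0, 1]
```

The estimated `within_corr` (close to the residual correlation of 0.7 here) is exactly the quantity a multivariate meta-analysis needs for each study but which published reports rarely provide.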

  3. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
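
A hedged sketch of pressure-based burn rate normalization, assuming Saint-Robert's (Vieille's) law r = a·P^n; the abstract does not state the law or exponent actually used, so `a` and the exponent here are illustrative placeholders:

```python
def normalize_burn_rate(r, p, p_ref, n=0.35):
    """Normalize a burn rate measured at chamber pressure p to a common
    reference pressure, assuming Saint-Robert's law r = a * P**n.
    The exponent n = 0.35 is an illustrative placeholder, not a value
    from the study."""
    return r * (p_ref / p) ** n

# Rates generated to be exactly consistent with the assumed law collapse
# to the same value at the reference pressure:
a = 0.05
r_low = a * 800.0 ** 0.35     # hypothetical rate measured at 800 units
r_high = a * 1100.0 ** 0.35   # hypothetical rate measured at 1100 units
n1 = normalize_burn_rate(r_low, p=800.0, p_ref=1000.0)
n2 = normalize_burn_rate(r_high, p=1100.0, p_ref=1000.0)
```

Normalizing to a common pressure in this way is what makes within-mix burn rates from motors at different chamber pressures directly comparable.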

  4. Toward Describing the Effects of Ozone Depletion on Marine Primary Productivity and Carbon Cycling

    NASA Technical Reports Server (NTRS)

    Cullen, John J.

    1995-01-01

This project was aimed at improved predictions of the effects of UVB and ozone depletion on marine primary productivity and carbon flux. A principal objective was to incorporate a new analytical description of photosynthesis as a function of UV and photosynthetically available radiation (Cullen et al., Science 258:646) into a general oceanographic model. We made significant progress: new insights into the kinetics of photoinhibition were used in the analysis of experiments on Antarctic phytoplankton to generate a general model of UV-induced photoinhibition under the influence of ozone depletion and vertical mixing. The way has been paved for general models on a global scale.

  5. Extraction and identification of mixed pesticides’ Raman signal and establishment of their prediction models

    USDA-ARS?s Scientific Manuscript database

    A nondestructive and sensitive method was developed to detect the presence of mixed pesticides of acetamiprid, chlorpyrifos and carbendazim on apples by surface-enhanced Raman spectroscopy (SERS). Self-modeling mixture analysis (SMA) was used to extract and identify the Raman spectra of individual p...

  6. Analysis of air quality with numerical simulation (CMAQ), and observations of trace gases

    NASA Astrophysics Data System (ADS)

    Castellanos, Patricia

Ozone, a secondary pollutant, is a strong oxidant that can pose a risk to human health. It is formed from a complex set of photochemical reactions involving nitrogen oxides (NOx) and volatile organic compounds (VOCs). Ambient measurements and air quality modeling of ozone and its precursors are important tools for supporting regulatory decisions and analyzing atmospheric chemical and physical processes. I worked on three methods to improve our understanding of photochemical ozone production in the Eastern U.S.: a new detector for NO2, a numerical experiment to test the sensitivity of ozone to the timing of emissions, and a comparison of modeled and observed vertical profiles of CO and ozone. A small, commercially available cavity ring-down spectroscopy (CRDS) NO2 detector suitable for surface and aircraft monitoring was modified and characterized. The CRDS detector was run in parallel to an ozone chemiluminescence device with photolytic conversion of NO2 to NO. The two instruments measured ambient air in suburban Maryland. A linear least-squares fit to a direct comparison of the data resulted in a slope of 0.960+/-0.002 and R of 0.995, showing agreement between the two measurement techniques within experimental uncertainty. The sensitivity of the Community Multiscale Air Quality (CMAQ) model to the temporal variation of four emissions sectors was investigated to understand the effect of emissions' daily variability on modeled ozone. Decreasing the variability of mobile source emissions changed the 8-hour maximum ozone concentration by +/-7 parts per billion by volume (ppbv). Increasing the variability of point source emissions affected ozone concentrations by +/-6 ppbv, but only in areas close to the source. CO is an ideal tracer for analyzing pollutant transport in air quality models (AQMs) because its atmospheric lifetime is longer than the timescale of boundary layer mixing. CO can be used as a tracer if model performance for CO is well understood.
An evaluation of CO model performance in CMAQ was carried out using aircraft observations taken for the Regional Atmospheric Measurement, Modeling and Prediction Program (RAMMPP) in the summer of 2002. Comparisons of modeled and observed CO total columns were generally in agreement within 5-10%. There is little evidence that the CO emissions inventory is grossly overestimated. CMAQ predicts the same vertical profile shape for all of the observations, i.e. CO is well mixed throughout the boundary layer. However, the majority of observations have poorly mixed air below 500 m and well mixed air above. CMAQ appears to be transporting CO away from the surface more quickly than what is observed. Turbulent mixing in the model is represented with K-theory. A minimum Kz that scales with fractional urban land use is imposed in order to account for subgrid-scale obstacles in urban areas and the urban heat island effect. Micrometeorological observations suggest that this minimum Kz is somewhat high. A sensitivity case in which the minimum Kz was reduced from 0.5 m²/s to 0.1 m²/s was carried out. Model performance against surface ozone observations at night improved significantly. The model better captures the observed ozone minimum with slower mixing, and increases ozone concentrations in the residual layer. Model performance for CO and ozone morning vertical profiles improves, but the effect is not large enough to bring the model and measurements into agreement. Comparison of modeled CO and O3 vertical profiles shows that turbulent mixing (as represented by eddy diffusivity) appears to be too fast, while convective mixing may be too slow.

  7. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  8. lme4qtl: linear mixed models with flexible covariance structure for genetic studies of related individuals.

    PubMed

    Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel

    2018-02-27

Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices, and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by most genome-wide association study (GWAS) software. To address these limitations, we developed a new R package, lme4qtl, as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data from the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl.
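
lme4qtl itself is an R package; as a language-neutral sketch of the underlying idea (a random effect defined by a custom covariance matrix K, such as a kinship matrix), the fixed-effect generalized least squares estimator under V = σg²K + σe²I can be written in a few lines of NumPy, with variance components taken as known for brevity (in practice they are estimated by ML/REML):

```python
import numpy as np

def gls_beta(y, X, K, sigma_g2, sigma_e2):
    """GLS fixed-effect estimate under the LMM y = X*beta + g + e,
    with g ~ N(0, sigma_g2*K) and e ~ N(0, sigma_e2*I) -- the
    random-effect structure defined via a custom covariance matrix K.
    Variance components are assumed known here for illustration."""
    V = sigma_g2 * K + sigma_e2 * np.eye(len(y))
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

# Sanity check: with K = I (unrelated individuals) GLS reduces to OLS.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
b_gls = gls_beta(y, X, np.eye(50), 1.0, 1.0)
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

With a non-identity K encoding family relatedness, the same estimator downweights correlated relatives, which is precisely what guards against false association signals.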

  9. An improved NSGA - II algorithm for mixed model assembly line balancing

    NASA Astrophysics Data System (ADS)

    Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong

    2018-05-01

To address the problems of assembly line balancing and path optimization for material vehicles in mixed-model manufacturing systems, a multi-objective mixed-model assembly line (MMAL) model is established based on the optimization objectives, influencing factors, and constraints. For this setting, an improved NSGA-II algorithm based on an ecological evolution strategy is designed, which incorporates an environment self-detecting operator used to detect whether the environment has changed. Finally, the effectiveness of the proposed model and algorithm is verified with examples from a concrete mixing system.
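
The core ranking step shared by NSGA-II and its variants is fast non-dominated sorting; the following is a minimal (non-optimized) sketch for minimization problems, not the authors' improved algorithm. The objective pairs are hypothetical, e.g. (cycle time, workload imbalance):

```python
def non_dominated_sort(points):
    """Partition objective vectors (to be minimized) into Pareto fronts,
    the ranking step at the heart of NSGA-II."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# Hypothetical solutions as (cycle time, workload imbalance) pairs:
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
fronts = non_dominated_sort(pts)
```

The first front contains the mutually non-dominated trade-off solutions; selection in NSGA-II then prefers lower-ranked fronts, with crowding distance breaking ties within a front.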

  10. Computational models for the viscous/inviscid analysis of jet aircraft exhaust plumes

    NASA Astrophysics Data System (ADS)

    Dash, S. M.; Pergament, H. S.; Thorpe, R. D.

    1980-05-01

Computational models which analyze viscous/inviscid flow processes in jet aircraft exhaust plumes are discussed. These models are component parts of a NASA-LaRC method for the prediction of nozzle afterbody drag. Inviscid/shock processes are analyzed by the SCIPAC code, which is a compact version of a generalized shock-capturing, inviscid plume code (SCIPPY). The SCIPAC code analyzes underexpanded jet exhaust gas mixtures with a self-contained thermodynamic package for hydrocarbon exhaust products and air. A detailed and automated treatment of the embedded subsonic zones behind Mach discs is provided in this analysis. Mixing processes along the plume interface are analyzed by two upgraded versions of an overlaid, turbulent mixing code (BOAT) developed previously for calculating nearfield jet entrainment. The BOATAC program is a frozen-chemistry version of BOAT containing the same thermodynamic package as SCIPAC; BOATAB is an afterburning version with a self-contained aircraft (hydrocarbon/air) finite-rate chemistry package. The coupling of viscous and inviscid flow processes is achieved by an overlaid procedure, with interactive effects accounted for by a displacement-thickness-type correction to the inviscid plume interface.

  11. Computational models for the viscous/inviscid analysis of jet aircraft exhaust plumes. [predicting afterbody drag

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.; Thorpe, R. D.

    1980-01-01

Computational models which analyze viscous/inviscid flow processes in jet aircraft exhaust plumes are discussed. These models are component parts of a NASA-LaRC method for the prediction of nozzle afterbody drag. Inviscid/shock processes are analyzed by the SCIPAC code, which is a compact version of a generalized shock-capturing, inviscid plume code (SCIPPY). The SCIPAC code analyzes underexpanded jet exhaust gas mixtures with a self-contained thermodynamic package for hydrocarbon exhaust products and air. A detailed and automated treatment of the embedded subsonic zones behind Mach discs is provided in this analysis. Mixing processes along the plume interface are analyzed by two upgraded versions of an overlaid, turbulent mixing code (BOAT) developed previously for calculating nearfield jet entrainment. The BOATAC program is a frozen-chemistry version of BOAT containing the same thermodynamic package as SCIPAC; BOATAB is an afterburning version with a self-contained aircraft (hydrocarbon/air) finite-rate chemistry package. The coupling of viscous and inviscid flow processes is achieved by an overlaid procedure, with interactive effects accounted for by a displacement-thickness-type correction to the inviscid plume interface.

  12. Application of the balanced scorecard to an academic medical center in Taiwan: the effect of warning systems on improvement of hospital performance.

    PubMed

    Chen, Hsueh-Fen; Hou, Ying-Hui; Chang, Ray-E

    2012-10-01

The balanced scorecard (BSC) is considered to be a useful management tool in a variety of business environments. The purpose of this article is to utilize the data produced by the incorporation and implementation of the BSC in hospitals and to investigate the effects of the BSC red-light tracking warning system on performance improvement. This research was designed as a retrospective follow-up study. The data used were secondary data collected by repeated measurements taken between 2004 and 2010 from 67 first-line medical departments of a public academic medical center in Taipei, Taiwan. A linear mixed model was applied for multilevel analysis, correcting for the correlated errors. Improvements were observed with various time lags, from the subsequent month to three months after a red-light warning. During follow-up, the red-light warning system more effectively improved controllable costs, infection rates, and the medical records completion rate. This further suggests that follow-up management has an enhancing and supportive effect on the red-light warning. The red-light follow-up management of the BSC is an effective and efficient tool whose improvement depends on ongoing and consistent attention in a continuing effort to better administer medical care and control costs. Copyright © 2012. Published by Elsevier B.V.

  13. Statistical strategies for averaging EC50 from multiple dose-response experiments.

    PubMed

    Jiang, Xiaoqi; Kopp-Schneider, Annette

    2015-11-01

    In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging of EC50 estimates from a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common settings of multiple dose-response experiments, in which (a) complete and explicit dose-response relationships are observed in all experiments and (b) complete relationships are observed only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We also provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
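    The meta-analysis strategy for case (a) can be sketched as follows (a minimal illustration, not the authors' web application; the four-parameter logistic curve, doses, and noise level below are all hypothetical): fit each experiment separately, then pool the EC50 estimates by inverse-variance weighting on the log scale.

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical four-parameter logistic (4PL) dose-response curve
def fourpl(dose, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (dose / ec50) ** hill)

doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
rng = np.random.default_rng(7)

log_ec50s, variances = [], []
for _ in range(4):  # four replicate experiments with complete dose-response curves
    resp = fourpl(doses, 0.0, 100.0, 0.5, 1.2) + rng.normal(0.0, 3.0, doses.size)
    popt, pcov = curve_fit(fourpl, doses, resp, p0=(0.0, 100.0, 1.0, 1.0),
                           bounds=([-20, 50, 1e-3, 0.1], [20, 150, 100.0, 5.0]))
    log_ec50s.append(np.log(popt[2]))
    # delta-method variance of log EC50 from the per-experiment fit
    variances.append(pcov[2, 2] / popt[2] ** 2)

# fixed-effect meta-analysis: inverse-variance weighted mean of log EC50
w = 1.0 / np.array(variances)
pooled_ec50 = np.exp(np.sum(w * np.array(log_ec50s)) / np.sum(w))
```

    Working on the log scale keeps the pooled estimate positive and makes the per-experiment estimates closer to normally distributed, which is why the weighting is done there.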

  14. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.

    PubMed

    Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng

    2016-01-01

    Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes in which we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and illustrate them with a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.

  15. Mixed-effects models for estimating stand volume by means of small footprint airborne laser scanner data.

    Treesearch

    J. Breidenbach; E. Kublin; R. McGaughey; H.-E. Andersen; S. Reutebuch

    2008-01-01

    For this study, hierarchical data sets--in which several sample plots are located within a stand--were analyzed for study sites in the USA and Germany. The German data had an additional hierarchy, as the stands are located within four distinct public forests. Fixed-effects models and mixed-effects models with a random intercept on the stand level were fit to each data...

  16. Effects of a DVD-delivered exercise program on patterns of sedentary behavior in older adults: a randomized controlled trial.

    PubMed

    Fanning, J; Porter, G; Awick, E A; Wójcicki, T R; Gothe, N P; Roberts, S A; Ehlers, D K; Motl, R W; McAuley, E

    2016-06-01

    In the present study, we examined the influence of a home-based, DVD-delivered exercise intervention on daily sedentary time and breaks in sedentary time in older adults. Between 2010 and 2012, older adults (i.e., aged 65 or older) residing in Illinois (N = 307) were randomized into a 6-month home-based, DVD-delivered exercise program (i.e., FlexToBa; FTB) or a waitlist control. Participants completed measurements prior to the first week (baseline), following the intervention period (month 6), and after a 6 month no-contact follow-up (month 12). Sedentary behavior was measured objectively using accelerometers for 7 consecutive days at each time point. Differences in daily sedentary time and breaks between groups and across the three time points were examined using mixed-factor analysis of variance (mixed ANOVA) and analysis of covariance (ANCOVA). Mixed ANOVA models revealed that daily minutes of sedentary time did not differ by group or time. The FTB condition, however, demonstrated a greater number of daily breaks in sedentary time relative to the control condition (p = .02). ANCOVA models revealed a non-significant effect favoring FTB at month 6, and a significant difference between groups at month 12 (p = .02). While overall sedentary time did not differ between groups, the DVD-delivered exercise intervention was effective for maintaining a greater number of breaks when compared with the control condition. Given the accumulating evidence emphasizing the importance of breaking up sedentary time, these findings have important implications for the design of future health behavior interventions.

  17. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
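    The over-dispersion that motivates the model's first random-effects term can be reproduced with a minimal Poisson-lognormal simulation (a sketch of the general mechanism, not MACAU's algorithm; all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu, sigma = 20000, 5.0, 0.6  # samples, mean count, sd of log-scale random effect

# each sample gets a lognormal multiplicative random effect on its Poisson rate;
# mean=-sigma^2/2 keeps the marginal mean of the rates equal to mu
rate = mu * rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=n)
counts = rng.poisson(rate)

# a plain Poisson model would force variance == mean; the mixture is over-dispersed,
# with Var = mu + mu^2 * (exp(sigma^2) - 1), well above the mean here
```

    The extra random effect inflates the variance beyond the Poisson mean, which is exactly the behavior a simple Poisson likelihood cannot capture.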

  18. A mixed model for the relationship between climate and human cranial form.

    PubMed

    Katz, David C; Grote, Mark N; Weaver, Timothy D

    2016-08-01

    We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.

  19. Growth rate characteristics of acidophilic heterotrophic organisms from mine waste rock piles

    NASA Astrophysics Data System (ADS)

    Yacob, T. W.; Silverstein, J.; Jenkins, J.; Andre, B. J.; Rajaram, H.

    2010-12-01

    Autotrophic iron-oxidizing bacteria (IOB) play a key role in pyrite oxidation and the generation of acid mine drainage (AMD). Scarcity of organic substrates in many disturbed sites ensures that IOB have sufficient oxygen and other nutrients for growth. It is proposed that addition of organic carbon substrate to waste rock piles will result in enrichment of heterotrophic microorganisms, limiting the role of IOB in AMD generation. Previous researchers have used the acidophilic heterotroph Acidiphilium cryptum as a model to study the effects of organic substrate addition on the pyrite oxidation/AMD cycle. In order to develop a quantitative model of effects such as competition for oxygen, it is necessary to use growth and substrate consumption rate expressions, and one approach is to choose a model strain such as A. cryptum for kinetic studies. However, we have found that the growth rate characteristics of A. cryptum may not provide an accurate model of the remediation effects of organic addition to subsurface mined sites. Fluorescent in-situ hybridization (FISH) assays of extracts of mine waste rock enriched with glucose and yeast extract did not produce countable numbers of cells in the Acidiphilium genus, with a detection limit of 3 × 10⁴ cells/gram rock, despite evidence of the presence of well-established heterotrophic organisms. However, an MPN enrichment produced heterotrophic population estimates of 1 × 10⁷ and 1 × 10⁹ cells/gram rock. Growth rate studies of A. cryptum showed that cultures took 120 hours to degrade 50% of an initial glucose concentration of 2,000 mg/L, whereas a mixed culture enriched from mine waste rock consumed 100% of the same amount of glucose in 24 hours. Substrate consumption data for the mixed culture were fit to a Monod growth model:

    dS/dt = −μ_max S (X₀/Y + S₀ − S) / (K_s + S)

    Kinetic parameters were estimated utilizing a non-linear regression method coupled with an ODE solver.
    The maximum specific growth rate μ_max of the mixed population was calculated to be 0.13 hr⁻¹, with a yield of 0.52 g cells/g glucose and a K_s of 0.2 g/L glucose. The effect of pH on growth was compared for A. cryptum and the mixed population. It was found that the mixed culture had a higher tolerance for extremely low pH conditions, with no growth at pH = 1, whereas no growth of A. cryptum was observed at pH = 1.5. Both A. cryptum and the mixed cultures grew within a pH range of 2.5-6. A phospholipid fatty acid analysis (PLFA) of the mixed culture indicated that both eukaryotic and prokaryotic organisms are present at a ratio of approximately 1:1, indicating that organisms such as fungi may be important in carbon cycling in these acidic subsurface formations. The results from this research show that utilization of mixed wild cultures for environmental modeling may yield better results than selection of a single strain to represent populations in a quantitative model.
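    The fitting procedure described (non-linear regression coupled with an ODE solver) can be sketched as follows. The "measurements" here are synthetic, generated from the reported parameter values (μ_max = 0.13 hr⁻¹, Y = 0.52 g/g, K_s = 0.2 g/L); the initial biomass X₀ and sampling times are hypothetical choices for the illustration:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

S0, X0 = 2.0, 0.05  # initial glucose and biomass (g/L); X0 is hypothetical

# Monod substrate depletion with biomass eliminated via X = X0 + Y*(S0 - S):
# dS/dt = -mu_max * S * (X0/Y + S0 - S) / (Ks + S)
def dSdt(S, t, mu_max, Y, Ks):
    return -mu_max * S * (X0 / Y + S0 - S) / (Ks + S)

def S_model(t, mu_max, Y, Ks):
    return odeint(dSdt, S0, t, args=(mu_max, Y, Ks)).ravel()

t_obs = np.linspace(0.0, 24.0, 9)           # sampling times over 24 hours
S_obs = S_model(t_obs, 0.13, 0.52, 0.2)     # synthetic substrate "measurements"

# non-linear regression wrapped around the ODE solver recovers the parameters
popt, _ = curve_fit(S_model, t_obs, S_obs, p0=(0.1, 0.4, 0.1))
mu_max_hat, Y_hat, Ks_hat = popt
```

    Wrapping the ODE integration inside the model function is what lets a standard least-squares routine estimate kinetic parameters that have no closed-form substrate curve.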

  20. Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.

    PubMed

    Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari

    2018-04-05

    The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD) based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrated moving average (ARIMA) modeling in representative patients, and (B) sequential linear mixed effects (LME) models with various embedded ARIMA error structures for PRx for the entire population. Finally, we performed sequential LME models with embedded PRx ARIMA modeling to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. Through using linear mixed effects modeling and accounting for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood volume. Further work is required to validate these models.

  1. Photonic states mixing beyond the plasmon hybridization model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryadharma, Radius N. S.; Iskandar, Alexander A., E-mail: iskandar@fi.itb.ac.id; Tjia, May-On

    2016-07-28

    A study is performed on photonic-state mixing patterns in an insulator-metal-insulator cylindrical silver nanoshell and their rich variations induced by changes in the geometry and dielectric media of the system, representing the combined influences of plasmon coupling strength and cavity effects. This study is performed in terms of the photonic local density of states (LDOS) calculated using the Green tensor method, in order to elucidate those combined effects. The energy profiles of LDOS inside the dielectric core are shown to exhibit a consistently growing number of redshifted photonic states due to enhanced plasmon-coupling-induced state mixing arising from decreased shell thickness, increased cavity size effect, and a larger symmetry-breaking effect induced by increased permittivity difference between the core and the background media. Further, an increase in cavity size leads to additional peaks that spread out toward the lower energy regime. A systematic analysis of those variations for a silver nanoshell with a fixed inner radius in a vacuum background reveals a certain pattern in the growing number of redshifted states, with an analytic expression for the corresponding energy downshifts, signifying a photonic-state mixing scheme beyond the commonly adopted plasmon hybridization scheme. Finally, a remarkable correlation is demonstrated between the LDOS energy profiles outside the shell and the corresponding scattering efficiencies.

  2. Effect of winds and waves on salt intrusion in the Pearl River estuary

    NASA Astrophysics Data System (ADS)

    Gong, Wenping; Lin, Zhongyuan; Chen, Yunzhen; Chen, Zhaoyun; Zhang, Heng

    2018-02-01

    Salt intrusion in the Pearl River estuary (PRE) is a dynamic process that is influenced by a range of factors, and to date, few studies have examined the effects of winds and waves on salt intrusion in the PRE. We investigate these effects using the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system applied to the PRE. After careful validation, the model is used for a series of diagnostic simulations. It is revealed that the local wind considerably strengthens the salt intrusion by lowering the water level in the eastern part of the estuary and increasing the bottom landward flow. The remote wind increases the water mixing on the continental shelf, elevates the water level on the shelf and in the PRE, and pumps saltier shelf water into the estuary by Ekman transport. Enhancement of the salt intrusion is comparable between the remote and local winds. Waves decrease the salt intrusion by increasing the water mixing. Sensitivity analysis shows that the axial down-estuary wind is most efficient in driving increases in salt intrusion, via the wind straining effect.

  3. IMPACT: Investigating the impact of Models of Practice for Allied health Care in subacuTe settings. A protocol for a quasi-experimental mixed methods study of cost effectiveness and outcomes for patients exposed to different models of allied health care.

    PubMed

    Coker, Freya; Williams, Cylie M; Taylor, Nicholas F; Caspers, Kirsten; McAlinden, Fiona; Wilton, Anita; Shields, Nora; Haines, Terry P

    2018-05-10

    This protocol considers three allied health staffing models across public health subacute hospitals. This quasi-experimental mixed-methods study, including qualitative process evaluation, aims to evaluate the impact of additional allied health services in subacute care, in rehabilitation and geriatric evaluation management settings, on patient, health service and societal outcomes. This health services research will analyse outcomes of patients exposed to different allied health models of care at three health services. Each health service will have a control ward (routine care) and an intervention ward (additional allied health). This project has two parts. Part 1: a whole-of-site data extraction for included wards. Outcome measures will include: length of stay, rate of readmissions, discharge destinations, community referrals, patient feedback and staff perspectives. Part 2: Functional Independence Measure scores will be collected every 2-3 days for the duration of 60 patient admissions. Data from part 1 will be analysed by linear regression analysis for continuous outcomes using patient-level data and logistic regression analysis for binary outcomes. Qualitative data will be analysed using a deductive thematic approach. For part 2, a linear mixed model analysis will be conducted using therapy service delivery and days since admission to subacute care as fixed factors in the model and individual participant as a random factor. Graphical analysis will be used to examine the growth curve of the model and possible transformations. The days-since-admission factor will be used to examine non-linear growth trajectories to determine if they lead to better model fit. Findings will be disseminated through local reports and to the Department of Health and Human Services Victoria. Results will be presented at conferences and submitted to peer-reviewed journals.
The Monash Health Human Research Ethics committee approved this multisite research (HREC/17/MonH/144 and HREC/17/MonH/547). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
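    A linear mixed model with a participant-level random intercept, of the general kind planned for part 2, can be sketched from first principles by maximizing the marginal likelihood directly (a minimal illustration rather than a dedicated mixed-model package; the data, group sizes, and parameter values below are all hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_groups, n_per = 30, 8                        # e.g. participants x repeated measures
x = rng.normal(size=(n_groups, n_per))         # fixed-effect covariate (e.g. days)
u = rng.normal(scale=1.0, size=(n_groups, 1))  # random intercept per participant
y = 2.0 + 0.5 * x + u + rng.normal(scale=0.5, size=(n_groups, n_per))

def nll(params):
    """Negative log marginal likelihood of the random-intercept model."""
    b0, b1, log_su, log_se = params
    su2, se2 = np.exp(2 * log_su), np.exp(2 * log_se)
    # within-group covariance: compound symmetry from the shared intercept
    V = su2 * np.ones((n_per, n_per)) + se2 * np.eye(n_per)
    Vinv = np.linalg.inv(V)
    _, logdet = np.linalg.slogdet(V)
    r = y - b0 - b1 * x                        # residuals, one row per group
    quad = np.einsum('gi,ij,gj->', r, Vinv, r)
    return 0.5 * (n_groups * logdet + quad)

fit = minimize(nll, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
b0_hat, b1_hat = fit.x[:2]
```

    The shared random intercept induces the compound-symmetry covariance inside each group; ignoring it (plain OLS) would understate the uncertainty of the intercept in exactly the way mixed models are designed to avoid.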

  4. The effects of choir spacing and choir formation on the tuning accuracy and intonation tendencies of a mixed choir

    NASA Astrophysics Data System (ADS)

    Daugherty, James F.

    2005-09-01

    The tuning accuracy and intonation tendencies of a high school mixed choir (N=46) were measured from digital recordings obtained as the ensemble performed an a cappella motet under concert conditions in three singer-spacing configurations (close, lateral, circumambient) and two choir formations (sectional and mixed). Methods of analysis were modeled on Howard's (2004) pitch-based measurements of the tuning accuracy of crowds of football fans. Results were discussed in terms of (a) previous studies on choir spacing (Daugherty, 1999, 2003) and self-to-other singer ratios (Ternström, 1995, 1999); (b) contributions of choir spacing to vocal/choral pedagogy; and (c) potential ramifications for the design and use of auditoria and portable standing risers for choral performances.

  5. Bayesian mixture analysis for metagenomic community profiling.

    PubMed

    Morfopoulou, Sofia; Plagnol, Vincent

    2015-09-15

    Deep sequencing of clinical samples is now an established tool for the detection of infectious pathogens, with direct medical applications. The large amount of data generated produces an opportunity to detect species even at very low levels, provided that computational tools can effectively profile the relevant metagenomic communities. Data interpretation is complicated by the fact that short sequencing reads can match multiple organisms and by the lack of completeness of existing databases, in particular for viral pathogens. Here we present metaMix, a Bayesian mixture model framework for resolving complex metagenomic mixtures. We show that the use of parallel Markov chain Monte Carlo (MCMC) chains for the exploration of the species space enables the identification of the set of species most likely to contribute to the mixture. We demonstrate the greater accuracy of metaMix compared with relevant methods, particularly for profiling complex communities consisting of several related species. We designed metaMix specifically for the analysis of deep transcriptome sequencing datasets, with a focus on viral pathogen detection; however, the principles are generally applicable to all types of metagenomic mixtures. metaMix is implemented as a user-friendly R package, freely available on CRAN: http://cran.r-project.org/web/packages/metaMix. Contact: sofia.morfopoulou.10@ucl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
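    The core problem, resolving a mixture when individual reads match multiple organisms, can be illustrated with a small EM sketch for the species mixture weights (maximum-likelihood EM rather than metaMix's Bayesian MCMC; the read-vs-species likelihood matrix below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
true_w = np.array([0.70, 0.25, 0.05])  # hypothetical species proportions
n_reads, n_species = 5000, 3

# synthetic likelihood matrix L[i, j] = P(read i | species j): reads score
# highest under their true species but also match the others weakly
z = rng.choice(n_species, size=n_reads, p=true_w)
L = rng.uniform(0.01, 0.05, size=(n_reads, n_species))
L[np.arange(n_reads), z] = rng.uniform(0.5, 1.0, size=n_reads)

w = np.full(n_species, 1.0 / n_species)   # start from uniform mixture weights
for _ in range(200):
    resp = L * w                          # E-step: read-to-species responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)                 # M-step: update mixture weights
```

    Ambiguous reads are shared fractionally across species via the responsibilities instead of being assigned hard to one organism, which is what lets low-abundance species be recovered.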

  6. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  7. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases.
Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.

  8. Extracting a mix parameter from 2D radiography of variable density flow

    NASA Astrophysics Data System (ADS)

    Kurien, Susan; Doss, Forrest; Livescu, Daniel

    2017-11-01

    A methodology is presented for extracting quantities related to the statistical description of the mixing state from the 2D radiographic image of a flow. X-ray attenuation through a target flow is given by the Beer-Lambert law which exponentially damps the incident beam intensity by a factor proportional to the density, opacity and thickness of the target. By making reasonable assumptions for the mean density, opacity and effective thickness of the target flow, we estimate the contribution of density fluctuations to the attenuation. The fluctuations thus inferred may be used to form the correlation of density and specific-volume, averaged across the thickness of the flow in the direction of the beam. This correlation function, denoted by b in RANS modeling, quantifies turbulent mixing in variable density flows. The scheme is tested using DNS data computed for variable-density buoyancy-driven mixing. We quantify the deficits in the extracted value of b due to target thickness, Atwood number, and modeled noise in the incident beam. This analysis corroborates the proposed scheme to infer the mix parameter from thin targets at moderate to low Atwood numbers. The scheme is then applied to an image of counter-shear flow obtained from experiments at the National Ignition Facility. US Department of Energy.
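    The density-specific-volume correlation b referred to above has a simple definition, b = -⟨ρ′v′⟩ with v = 1/ρ, which a short sketch makes concrete (the density samples here are synthetic, standing in for densities inferred from radiographic attenuation):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic fluctuating density field (stand-in for radiographically inferred density)
rho = 1.0 + 0.3 * rng.standard_normal(100_000)
rho = np.clip(rho, 0.2, None)   # keep densities physical (positive)
v = 1.0 / rho                   # specific volume

# b = -<rho' v'>; since rho and 1/rho anti-correlate, b >= 0,
# with b = 0 only for a perfectly mixed (uniform-density) fluid
b = -np.mean((rho - rho.mean()) * (v - v.mean()))

# a uniform field has no fluctuations, so its b vanishes
rho_u = np.ones(10)
b_uniform = -np.mean((rho_u - rho_u.mean()) * (1.0 / rho_u - (1.0 / rho_u).mean()))
```

    Because b grows with the intensity of density fluctuations and vanishes for a fully mixed fluid, it is a natural single-number mix metric to extract from attenuation-inferred density fields.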

  9. Fermion masses and mixings and dark matter constraints in a model with radiative seesaw mechanism

    NASA Astrophysics Data System (ADS)

    Bernal, Nicolás; Cárcamo Hernández, A. E.; de Medeiros Varzielas, Ivo; Kovalenko, Sergey

    2018-05-01

    We formulate a predictive model of fermion masses and mixings based on a Δ(27) family symmetry. In the quark sector the model leads to the viable mixing inspired texture where the Cabibbo angle comes from the down quark sector and the other angles come from both up and down quark sectors. In the lepton sector the model generates a predictive structure for charged leptons and, after radiative seesaw, an effective neutrino mass matrix with only one real and one complex parameter. We carry out a detailed analysis of the predictions in the lepton sector, where the model is only viable for inverted neutrino mass hierarchy, predicting a strict correlation between θ23 and θ13. We show a benchmark point that leads to the best-fit values of θ12, θ13, predicting a specific sin²θ23 ≃ 0.51 (within the 3σ range), a leptonic CP-violating Dirac phase δ ≃ 281.6° and for neutrinoless double-beta decay mee ≃ 41.3 meV. We turn then to an analysis of the dark matter candidates in the model, which are stabilized by an unbroken ℤ2 symmetry. We discuss the possibility of scalar dark matter, which can generate the observed abundance through the Higgs portal by the standard WIMP mechanism. An interesting possibility arises if the lightest heavy Majorana neutrino is the lightest ℤ2-odd particle. The model can produce a viable fermionic dark matter candidate, but only as a feebly interacting massive particle (FIMP), with the smallness of the coupling to the visible sector protected by a symmetry and directly related to the smallness of the light neutrino masses.

  10. An IR Sounding-Based Analysis of the Saharan Air Layer in North Africa

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2018-01-01

    Intense daytime surface heating over barren-to-sparsely vegetated surfaces results in dry convective mixing. In the absence of external forcing such as mountain waves, the dry convection can produce a deep, well-mixed, nearly isentropic boundary layer that becomes a well-mixed residual layer in the evening. These well-mixed layers (WMLs) retain their unique mid-tropospheric thermal and humidity structure for several days. To detect the Saharan air layer (SAL) and characterize its properties, AIRS Level 2 Ver. 6 temperature and humidity products (2003-present) are evaluated against rawinsondes and compared to model analysis at each of the 55 rawinsonde stations in northern Africa. To distinguish generic WMLs from Saharan air layers (WMLs of Saharan origin), the detection involved a two-step process: 1) algorithm-based detection of WMLs in dry environments (mixing ratio less than 7 g per kilogram); 2) identification of Saharan air layers by applying Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) back trajectories to determine the history of each WML. WML occurrence rates from AIRS closely resemble those from rawinsondes, yet rates from model analysis were up to 30% higher than observations in the Sahara due to model errors. Despite the overly frequent occurrence of WMLs in model analysis, HYSPLIT trajectory analysis showed that SAL occurrence rates (given a WML exists) from rawinsondes, AIRS, and model analysis were nearly identical. Although the number of WMLs varied among the data sources, the proportion of WMLs classified as SALs was nearly the same. The analysis of SAL bulk properties showed that AIRS and model analysis exhibited a slight warm and moist bias relative to rawinsondes in non-Saharan locations, but model analysis was notably warmer than rawinsondes and AIRS within the Sahara. The latter result is likely associated with the dearth of available data assimilated by model analysis in the Sahara.
The variability of SAL thicknesses was reasonably captured by both AIRS and model analysis, but the former favors layers that are thinner than observed. Finally, further analysis of HYSPLIT trajectories revealed that, on average, fewer than 10% and 33% of all SAL back trajectories passed through regions with notable precipitation (>100 mm accumulated along the trajectory path) or notable Aerosol Optical Depth (AOD greater than 0.4, the 75th percentile of AOD), respectively. Trajectory analysis indicated that only 57% of Saharan and 24% of non-Saharan WMLs are definitively of Saharan origin (Saharan requirement: two consecutive days in the Sahara and 24 or more of those hours within 72 hours of detection). Non-SAL WMLs either originate from local-to-regionally generated residual layers or from mid-latitude air streams that do not linger over the Sahara for a sufficient time period. Initial analysis shows these non-SAL WMLs tend to be both notably cooler and slightly moister than their SAL counterparts. Continuing analysis will address what role the characteristics of Saharan and non-Saharan air masses may play in local and regional environmental conditions.
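The two-step detection logic described above can be sketched as a pair of predicates. This is a hypothetical illustration: the function names and inputs are invented, and only the thresholds (7 g/kg mixing ratio, two consecutive Saharan days, 24 of those hours within 72 hours of detection) come from the abstract.

```python
def is_dry_wml_candidate(mixing_ratio_g_per_kg):
    # Step 1: algorithm-based WML detection, restricted to dry
    # environments (water vapour mixing ratio below 7 g/kg)
    return mixing_ratio_g_per_kg < 7.0

def is_saharan_origin(consecutive_days_in_sahara, sahara_hours_within_72h):
    # Step 2: Saharan-origin test applied to a HYSPLIT back trajectory:
    # at least two consecutive days over the Sahara, with 24 or more of
    # those hours falling within 72 hours of WML detection
    return consecutive_days_in_sahara >= 2 and sahara_hours_within_72h >= 24
```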

  11. Adaptation of non-linear mixed amount with zero amount response surface model for analysis of concentration-dependent synergism and safety with midazolam, alfentanil, and propofol sedation.

    PubMed

    Liou, J-Y; Ting, C-K; Teng, W-N; Mandell, M S; Tsou, M-Y

    2018-06-01

The non-linear mixed amount with zero amounts response surface model can be used to describe drug interactions and predict loss of response to noxious stimuli and respiratory depression. We aimed to determine whether this response surface model could be used to model sedation with the triple drug combination of midazolam, alfentanil and propofol. Sedation was monitored in 56 patients undergoing gastrointestinal endoscopy (modelling group) using modified alertness/sedation scores. A total of 227 combinations of effect-site concentrations were derived from pharmacokinetic models. Accuracy and the area under the receiver operating characteristic curve were calculated. Accuracy was defined as an absolute difference <0.5 between the binary patient responses and the predicted probability of loss of responsiveness. Validation was performed with a separate group (validation group) of 47 patients. Effect-site concentration ranged from 0 to 108 ng ml⁻¹ for midazolam, 0-156 ng ml⁻¹ for alfentanil, and 0-2.6 μg ml⁻¹ for propofol in both groups. Synergy was strongest with midazolam and alfentanil (24.3% decrease in U50, the concentration for half-maximal drug effect). Adding propofol, a third drug, offered little additional synergy (25.8% decrease in U50). Two patients (3%) experienced respiratory depression. Model accuracy was 83% and 76%, and area under the curve was 0.87 and 0.80, for the modelling and validation groups, respectively. The non-linear mixed amount with zero amounts triple interaction response surface model predicts patient sedation responses during endoscopy with combinations of midazolam, alfentanil, or propofol that fall within clinical use. Our model also suggests a safety margin of alfentanil fraction <0.12 that avoids respiratory depression after loss of responsiveness. Copyright © 2018 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.

  12. Modeling Power Plant Cooling Water Requirements: A Regional Analysis of the Energy-Water Nexus Considering Renewable Sources within the Power Generation Mix

    NASA Astrophysics Data System (ADS)

    Peck, Jaron Joshua

    Water is used in power generation for cooling processes in thermoelectric power plants, and power generation currently withdraws more water than any other sector in the U.S. Reducing water use from power generation will help to alleviate water stress in at-risk areas, where droughts have the potential to strain water resources. The amount of water used for power varies depending on many climatic aspects as well as plant operation factors. This work presents a model that quantifies the water use for power generation for two regions representing different generation fuel portfolios, California and Utah. The analysis of the California Independent System Operator introduces the methods of water-energy modeling by creating an overall water use factor, in volume of water per unit of energy produced, based on the fuel generation mix of the area. The idea of water monitoring based on energy used by a building or region is explored using live fuel mix data, for the purposes of increasing public awareness of the water associated with personal energy use and helping to promote greater energy efficiency. The Utah case study explores the effects more renewable, and less water-intensive, forms of energy will have on the overall water use from power generation for the state. Using a similar model to that of the California case study, total water savings are quantified based on power reduction scenarios involving increased use of renewable energy. The plausibility of implementing more renewable energy into Utah’s power grid is also discussed. Data resolution, as well as dispatch methods, economics, and solar variability, introduces some uncertainty into the analysis.

  13. A two-dimensional cascade solution using minimized surface singularity density distributions - with application to film cooled turbine blades

    NASA Technical Reports Server (NTRS)

    Mcfarland, E.; Tabakoff, W.; Hamed, A.

    1977-01-01

    An investigation of the effects of coolant injection on the aerodynamic performance of cooled turbine blades is presented. The coolant injection is modeled in the inviscid irrotational adiabatic flow analysis through the cascade using the distributed singularities approach. The resulting integral equations are solved using a minimized surface singularity density criteria. The aerodynamic performance was evaluated using this solution in conjunction with an existing mixing theory analysis. The results of the present analysis are compared with experimental measurements in cold flow tests.

  14. Deliberate practice predicts performance over time in adolescent chess players and drop-outs: a linear mixed models analysis.

    PubMed

    de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G

    2008-11-01

    In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of, the Dutch national chess training, were analysed from the time they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with critique raised against the deliberate practice theory that the factors deliberate practice and talent could be confounded.
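A linear mixed-models analysis of this kind, with repeated ratings nested within players and a fixed effect of practice, might be sketched on synthetic data as follows. The column names, effect sizes, and the use of statsmodels are illustrative assumptions, not the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal data: repeated ratings nested within players,
# with a player-specific baseline (random intercept) and a true fixed
# effect of 6 rating points per weekly practice hour (all values invented)
rng = np.random.default_rng(0)
rows = []
for player in range(40):
    baseline = 1500.0 + rng.normal(0.0, 100.0)
    for _ in range(8):
        practice = rng.uniform(0.0, 20.0)
        rating = baseline + 6.0 * practice + rng.normal(0.0, 30.0)
        rows.append({"player": player, "practice": practice, "rating": rating})
df = pd.DataFrame(rows)

# Linear mixed model: fixed effect of practice, random intercept per player
fit = smf.mixedlm("rating ~ practice", df, groups=df["player"]).fit()
print(fit.params["practice"])  # estimated slope, close to the true value of 6
```

The random intercept absorbs stable between-player differences, so the practice slope is estimated from within-player variation pooled across players.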

  15. Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis.

    PubMed

    Yu-Kang, Tu

    2016-12-01

    Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  16. Babies and math: A meta-analysis of infants' simple arithmetic competence.

    PubMed

    Christodoulou, Joan; Lac, Andrew; Moore, David S

    2017-08-01

    Wynn's (1992) seminal research reported that infants looked longer at stimuli representing "incorrect" versus "correct" solutions of basic addition and subtraction problems and concluded that infants have innate arithmetical abilities. Since then, infancy researchers have attempted to replicate this effect, yielding mixed findings. The present meta-analysis aimed to systematically compile and synthesize all of the primary replications and extensions of Wynn (1992) that have been conducted to date. The synthesis included 12 studies consisting of 26 independent samples and 550 unique infants. The summary effect, computed using a random-effects model, was statistically significant, d = +0.34, p < .001, suggesting that the phenomenon Wynn originally reported is reliable. Five different tests of publication bias yielded mixed results, suggesting that while a moderate level of publication bias is probable, the summary effect would be positive even after accounting for this issue. Out of the 10 metamoderators tested, none were found to be significant, but most of the moderator subgroups were significantly different from a null effect. Although this meta-analysis provides support for Wynn's original findings, further research is warranted to understand the underlying mechanisms responsible for infants' visual preferences for "mathematically incorrect" test stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
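A random-effects summary effect of the kind reported here (d = +0.34) is commonly computed with the DerSimonian-Laird estimator; a minimal sketch on hypothetical study-level effect sizes and variances (the estimator choice is an assumption, the abstract does not name one):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects summary of study effect sizes."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                           # inverse-variance weights
    theta_fe = np.sum(w * effects) / np.sum(w)    # fixed-effect pooled estimate
    q = np.sum(w * (effects - theta_fe) ** 2)     # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1.0 / (variances + tau2)               # random-effects weights
    theta_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se, tau2
```

When the studies disagree more than sampling error predicts, tau2 > 0 and the weights flatten, so small studies count relatively more than under a fixed-effect model.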

  17. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    NASA Astrophysics Data System (ADS)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local meshing resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the inherent reasons for the mixing results are clarified in terms of the characteristics of bottom-blowing plumes, the interaction between plumes and top-blowing jets, and the change of bath flow structure.

  18. Coastal Ocean Variability Off the Coast of Taiwan in Response toTyphoon Morakot: River Forcing, Atmospheric Forcing, and Cold Dome Dynamics

    DTIC Science & Technology

    2014-09-01

    …very short time period, and in this research we model and study the effects of this rainfall on Taiwan's coastal oceans as a result of river discharge. We do this through the use of a river discharge model. The analysis also examines the effects of footprint shape and of the horizontal extent of the bulk mixing model.

  19. Hierarchical fermions and detectable Z' from effective two-Higgs-triplet 3-3-1 model

    NASA Astrophysics Data System (ADS)

    Barreto, E. R.; Dias, A. G.; Leite, J.; Nishi, C. C.; Oliveira, R. L. N.; Vieira, W. C.

    2018-03-01

    We develop a SU(3)C ⊗ SU(3)L ⊗ U(1)X model where the number of fermion generations is fixed by cancellation of gauge anomalies, being a type of 3-3-1 model with new charged leptons. Similarly to the economical 3-3-1 models, symmetry breaking is achieved effectively with two scalar triplets so that the spectrum of scalar particles at the TeV scale contains just two CP-even scalars, one of which is the recently discovered Higgs boson, plus a charged scalar. Such a scalar sector is simpler than the one in the Two Higgs Doublet Model, hence more attractive for phenomenological studies, and has no flavor changing neutral currents (FCNC) mediated by scalars except for the ones induced by the mixing of Standard Model (SM) fermions with heavy fermions. We identify a global residual symmetry of the model which guarantees mass degeneracies and some massless fermions whose masses need to be generated by the introduction of effective operators. The fermion masses so generated require less fine-tuning for most of the SM fermions, and FCNC are naturally suppressed by the small mixing between the third family of quarks and the rest. The effective setting is justified by an ultraviolet completion of the model from which the effective operators emerge naturally. A detailed particle mass spectrum is presented, and an analysis of the Z' production at the LHC run II is performed to show that it could be easily detected by considering the invariant mass and transverse momentum distributions in the dimuon channel.

  20. Budget model can aid group practice planning.

    PubMed

    Bender, A D

    1991-12-01

    A medical practice can enhance its planning by developing a budgetary model to test effects of planning assumptions on its profitability and cash requirements. A model focusing on patient visits, payment mix, patient mix, and fee and payment schedules can help assess effects of proposed decisions. A planning model is not a substitute for planning but should complement a plan that includes mission, goals, values, strategic issues, and different outcomes.

  1. Modeling Individual Differences in Within-Person Variation of Negative and Positive Affect in a Mixed Effects Location Scale Model Using BUGS/JAGS

    ERIC Educational Resources Information Center

    Rast, Philippe; Hofer, Scott M.; Sparks, Catharine

    2012-01-01

    A mixed effects location scale model was used to model and explain individual differences in within-person variability of negative and positive affect across 7 days (N=178) within a measurement burst design. The data come from undergraduate university students and are pooled from a study that was repeated at two consecutive years. Individual…

  2. Influence diagnostics for count data under AB-BA crossover trials.

    PubMed

    Hao, Chengcheng; von Rosen, Dietrich; von Rosen, Tatjana

    2017-12-01

    This paper aims to develop diagnostic measures to assess the influence of data perturbations on estimates in AB-BA crossover studies with a Poisson distributed response. Generalised mixed linear models with normally distributed random effects are utilised. We show that in this special case, the model can be decomposed into two independent sub-models, which allows closed-form expressions to be derived for evaluating the changes in the maximum likelihood estimates under several perturbation schemes. The performance of the new influence measures is illustrated by simulation studies and the analysis of a real dataset.

  3. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
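The composite-likelihood idea behind the marginal model can be illustrated in a few lines: each margin's counts get their own beta-binomial likelihood, and the two log-likelihoods are simply summed, with no joint distribution specified. The parameterization and function names below are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import betabinom

def composite_loglik(params, y1, n1, y2, n2):
    """Composite log-likelihood with independent beta-binomial margins.

    params = (a1, b1, a2, b2): beta shape parameters for each outcome margin.
    y1/n1 and y2/n2: per-study event counts and sample sizes for the two
    binary outcomes (e.g. the diseased and non-diseased groups)."""
    a1, b1, a2, b2 = params
    ll1 = betabinom.logpmf(y1, n1, a1, b1).sum()  # margin 1
    ll2 = betabinom.logpmf(y2, n2, a2, b2).sum()  # margin 2
    return ll1 + ll2  # maximize over params; no joint distribution needed
```

Maximizing this objective (e.g. with scipy.optimize) yields marginal estimates that remain valid even when the joint distribution of the study-specific probabilities is misspecified, which is the robustness property the abstract emphasizes.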

  4. Using the Mixed Rasch Model to analyze data from the beliefs and attitudes about memory survey.

    PubMed

    Smith, Everett V; Ying, Yuping; Brown, Scott W

    2012-01-01

    In this study, we used the Mixed Rasch Model (MRM) to analyze data from the Beliefs and Attitudes About Memory Survey (BAMS; Brown, Garry, Silver, and Loftus, 1997). We used the original 5-point BAMS data to investigate the functioning of the "Neutral" category via threshold analysis under a 2-class MRM solution. The "Neutral" category was identified as not eliciting the model expected responses and observations in the "Neutral" category were subsequently treated as missing data. For the BAMS data without the "Neutral" category, exploratory MRM analyses specifying up to 5 latent classes were conducted to evaluate data-model fit using the consistent Akaike information criterion (CAIC). For each of three BAMS subscales, a two latent class solution was identified as fitting the mixed Rasch rating scale model the best. Results regarding threshold analysis, person parameters, and item fit based on the final models are presented and discussed as well as the implications of this study.

  5. Assessing total fungal concentrations on commercial passenger aircraft using mixed-effects modeling.

    PubMed

    McKernan, Lauralynn Taylor; Hein, Misty J; Wallingford, Kenneth M; Burge, Harriet; Herrick, Robert

    2008-01-01

    The primary objective of this study was to compare airborne fungal concentrations onboard commercial passenger aircraft at various in-flight times with concentrations measured inside and outside airport terminals. A secondary objective was to investigate the use of mixed-effects modeling of repeat measures from multiple sampling intervals and locations. Sequential triplicate culturable and total spore samples were collected on wide-body commercial passenger aircraft (n = 12) in the front and rear of coach class during six sampling intervals: boarding, midclimb, early cruise, midcruise, late cruise, and deplaning. Comparison samples were collected inside and outside airport terminals at the origin and destination cities. The MIXED procedure in SAS was used to model the mean and the covariance matrix of the natural log transformed fungal concentrations. Five covariance structures were tested to determine the appropriate models for analysis. Fixed effects considered included the sampling interval and, for samples obtained onboard the aircraft, location (front/rear of coach section), occupancy rate, and carbon dioxide concentrations. Overall, both total culturable and total spore fungal concentrations were low while the aircraft were in flight. No statistical difference was observed between measurements made in the front and rear sections of the coach cabin for either culturable or total spore concentrations. Both culturable and total spore concentrations were significantly higher outside the airport terminal compared with inside the airport terminal (p-value < 0.0001) and inside the aircraft (p-value < 0.0001). On the aircraft, the majority of total fungal exposure occurred during the boarding and deplaning processes, when the aircraft utilized ancillary ventilation and passenger activity was at its peak.

  6. Stepped wedge designs: insights from a design of experiments perspective.

    PubMed

    Matthews, J N S; Forbes, A B

    2017-10-30

    Stepped wedge designs (SWDs) have received considerable attention recently, as they are potentially a useful way to assess new treatments in areas such as health services implementation. Because allocation is usually by cluster, SWDs are often viewed as a form of cluster-randomized trial. However, since the treatment within a cluster changes during the course of the study, they can also be viewed as a form of crossover design. This article explores SWDs from the perspective of crossover trials and designed experiments more generally. We show that the treatment effect estimator in a linear mixed effects model can be decomposed into a weighted mean of the estimators obtained from (1) regarding an SWD as a conventional row-column design and (2) a so-called vertical analysis, which is a row-column design with row effects omitted. This provides a precise representation of "horizontal" and "vertical" comparisons, respectively, which to date have appeared without formal description in the literature. This decomposition displays a sometimes surprising way the analysis corrects for the partial confounding between time and treatment effects. The approach also permits the quantification of the loss of efficiency caused by mis-specifying the correlation parameter in the mixed-effects model. Optimal extensions of the vertical analysis are obtained, and these are shown to be highly inefficient for values of the within-cluster dependence that are likely to be encountered in practice. Some recently described extensions to the classic SWD incorporating multiple treatments are also compared using the experimental design framework. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.

  8. Theoretical and experimental investigation of turbulent mixing on ejector configuration and performance in a solar-driven organic-vapor ejector cycle chiller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucha, E.I.

    1984-01-01

    A general method was developed to calculate two dimensional (axisymmetric) mixing of a compressible jet in a variable cross-sectional area mixing channel of the ejector. The analysis considers mixing of the primary and secondary fluids at constant pressure and incorporates finite difference approximations to the conservation equations. The flow model is based on the mixing length approximations. A detailed study and modeling of the flow phenomenon determines the best (optimum) mixing channel geometry of the ejector. The detailed ejector performance characteristics are predicted by incorporating the flow model into a solar-powered ejector cycle cooling system computer model. Freon-11 is used as both the primary and secondary fluids. Performance evaluation of the cooling system is examined for its coefficient of performance (COP) under a variety of operating conditions. A study is also conducted on a modified ejector cycle in which a secondary pump is introduced at the exit of the evaporator. Results show a significant improvement in the overall performance over that of the conventional ejector cycle (without a secondary pump). Comparison between one and two-dimensional analyses indicates that the two-dimensional ejector fluid flow analysis predicts a better overall system performance. This is true for both the conventional and modified ejector cycles.

  9. Effects of an Inverted Instructional Delivery Model on Achievement of Ninth-Grade Physical Science Honors Students

    ERIC Educational Resources Information Center

    Howell, Donna

    2013-01-01

    This mixed-methods action research study was designed to assess the achievement of ninth-grade Physical Science Honors students by analysis of pre and posttest data. In addition, perceptual data from students, parents, and the researcher were collected to form a complete picture of the flipped lecture format versus the traditional lecture format.…

  10. Height-growth response to climatic changes differs among populations of Douglas-fir: A novel analysis of historic data

    Treesearch

    Laura P. Leites; Andrew P. Robinson; Gerald E. Rehfeldt; John D. Marshall; Nicholas L. Crookston

    2012-01-01

    Projected climate change will affect existing forests, as substantial changes are predicted to occur during their life spans. Species that have ample intraspecific genetic differentiation, such as Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco), are expected to display population-specific growth responses to climate change. Using a mixed-effects modeling approach,...

  11. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of response being awake or asleep over the night, and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). 
Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
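The weighting-and-summing of the two Markov-component FIMs and the D-optimality criterion might be sketched as follows. This is a toy illustration under an assumed two-parameter model, not the authors' implementation.

```python
import numpy as np

def weighted_total_fim(fim_awake, fim_asleep, p_awake):
    # Weight each Markov-component FIM by the average probability of the
    # previous state being awake vs. asleep, then sum (one of the two
    # weighting schemes described; equal weighting uses p_awake = 0.5)
    return p_awake * fim_awake + (1.0 - p_awake) * fim_asleep

def d_criterion(fim):
    # D-optimality: maximize the log-determinant of the total FIM,
    # i.e. minimize the volume of the parameter confidence ellipsoid
    return np.linalg.slogdet(fim)[1]
```

A design optimizer would evaluate d_criterion(weighted_total_fim(...)) over candidate doses and group sizes and keep the design with the largest value.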

  12. Investigation of factors influencing radioiodine (131I) biokinetics in patients with benign thyroid disease using nonlinear mixed effects approach.

    PubMed

    Topić Vučenović, Valentina; Rajkovača, Zvezdana; Jelić, Dijana; Stanimirović, Dragi; Vuleta, Goran; Miljković, Branislava; Vučićević, Katarina

    2018-05-13

    Radioiodine (¹³¹I) therapy is the common treatment option for benign thyroid diseases. The objective of this study was to characterize ¹³¹I biokinetics in patients with benign thyroid disease and to investigate and quantify the influence of patients' demographic and clinical characteristics on intra-thyroidal ¹³¹I kinetics by developing a population model. Population pharmacokinetic analysis was performed using a nonlinear mixed effects approach. Data sets of 345 adult patients with benign thyroid disease, retrospectively collected from patients' medical records, were evaluated in the analysis. The two-compartment model of ¹³¹I biokinetics representing the blood compartment and thyroid gland was used as the structural model. Results of the study indicate that the rate constant of the uptake of ¹³¹I into the thyroid (ktu) is significantly influenced by clinical diagnosis, age, functional thyroid volume, free thyroxine in plasma (fT4), use of anti-thyroid drugs, and time of discontinuation of therapy before administration of the radioiodine (THDT), while the effective half-life of ¹³¹I is affected by the age of the patients. Inclusion of the covariates in the base model resulted in a decrease of the between-subject variability for ktu from 91 (3.9) to 53.9 (4.5)%. This is the first population model that accounts for the influence of fT4 and THDT on radioiodine kinetics. The model could be used for further investigations into the correlation between thyroidal exposure to ¹³¹I and the outcome of radioiodine therapy of benign thyroid disease as well as the development of dosing recommendations.

  13. Comparing colon cancer outcomes: The impact of low hospital case volume and case-mix adjustment.

    PubMed

    Fischer, C; Lingsma, H F; van Leersum, N; Tollenaar, R A E M; Wouters, M W; Steyerberg, E W

    2015-08-01

    When comparing performance across hospitals it is essential to consider the noise caused by low hospital case volume and to perform adequate case-mix adjustment. We aimed to quantify the role of noise and case-mix adjustment on standardized postoperative mortality and anastomotic leakage (AL) rates. We studied 13,120 patients who underwent colon cancer resection in 85 Dutch hospitals. We addressed differences between hospitals in postoperative mortality and AL, using fixed (ignoring noise) and random effects (incorporating noise) logistic regression models with general and additional, disease specific, case-mix adjustment. Adding disease specific variables improved the performance of the case-mix adjustment models for postoperative mortality (c-statistic increased from 0.77 to 0.81). The overall variation in standardized mortality ratios was similar, but some individual hospitals changed considerably. For the standardized AL rates the performance of the adjustment models was poor (c-statistic 0.59 and 0.60) and overall variation was small. Most of the observed variation between hospitals was actually noise. Noise had a larger effect on hospital performance than extended case-mix adjustment, although some individual hospital outcome rates were affected by more detailed case-mix adjustment. To compare outcomes between hospitals it is crucial to consider noise due to low hospital case volume with a random effects model. Copyright © 2015 Elsevier Ltd. All rights reserved.
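
    The central point, that observed rates at low-volume hospitals are dominated by noise, can be illustrated with a toy shrinkage calculation. This is not the authors' random-effects logistic model; it is a simple beta-binomial-style sketch on synthetic counts, with a hypothetical prior strength.

    ```python
    # Illustrative sketch: why low-volume hospitals need a random-effects
    # treatment. Observed mortality at small hospitals is noisy; shrinkage
    # pulls estimates toward the pooled rate in proportion to that noise.
    import numpy as np

    rng = np.random.default_rng(0)
    true_rate = 0.04                          # common underlying mortality rate
    volumes = np.array([20, 50, 200, 1000])   # hospital case volumes, hypothetical
    deaths = rng.binomial(volumes, true_rate)
    observed = deaths / volumes

    # Shrink toward the pooled rate; m acts like m "pseudo-cases" of prior weight.
    pooled = deaths.sum() / volumes.sum()
    m = 100.0                                 # prior strength, hypothetical
    shrunk = (deaths + m * pooled) / (volumes + m)

    # Small hospitals are shrunk hardest: the move toward the pooled rate
    # scales with m / (volume + m).
    shift = np.abs(observed - shrunk)
    ```

    A random-effects model estimates the equivalent of `m` from the between-hospital variance instead of fixing it by hand, which is the "incorporating noise" step the abstract contrasts with fixed effects.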

  14. A computer model of long-term salinity in San Francisco Bay: Sensitivity to mixing and inflows

    USGS Publications Warehouse

    Uncles, R.J.; Peterson, D.H.

    1995-01-01

    A two-level model of the residual circulation and tidally-averaged salinity in San Francisco Bay has been developed in order to interpret long-term (days to decades) salinity variability in the Bay. Applications of the model to biogeochemical studies are also envisaged. The model has been used to simulate daily-averaged salinity in the upper and lower levels of a 51-segment discretization of the Bay over the 22-y period 1967–1988. Observed, monthly-averaged surface salinity data and monthly averages of the daily-simulated salinity are in reasonable agreement, both near the Golden Gate and in the upper reaches, close to the delta. Agreement is less satisfactory in the central reaches of North Bay, in the vicinity of Carquinez Strait. Comparison of daily-averaged data at Station 5 (Pittsburg, in the upper North Bay) with modeled data indicates close agreement with a correlation coefficient of 0.97 for the 4110 daily values. The model successfully simulates the marked seasonal variability in salinity as well as the effects of rapidly changing freshwater inflows. Salinity variability is driven primarily by freshwater inflow. The sensitivity of the modeled salinity to variations in the longitudinal mixing coefficients is investigated. The modeled salinity is relatively insensitive to the calibration factor for vertical mixing and relatively sensitive to the calibration factor for longitudinal mixing. The optimum value of the longitudinal calibration factor is 1.1, compared with the physically-based value of 1.0. Linear time-series analysis indicates that the observed and dynamically-modeled salinity-inflow responses are in good agreement in the lower reaches of the Bay.

  15. Characterization of Viscoelastic Materials Through an Active Mixer by Direct-Ink Writing

    NASA Astrophysics Data System (ADS)

    Drake, Eric

    The goal of this thesis is two-fold: first, to determine the mixing effectiveness of an active mixer attachment for a three-dimensional (3D) printer by characterizing actively-mixed, three-dimensionally printed silicone elastomers; second, to understand the mechanical properties of a printed lattice structure with varying geometry and composition. Ober et al. define mixing effectiveness as a measurable quantity characterized by two key variables: (i) a dimensionless impeller parameter (O) that depends on mixer geometry as well as the Peclet number (Pe) and (ii) a coefficient of variation (COV) that describes mixer effectiveness based upon image intensity. The first objective utilizes tungsten tracer particles distributed throughout a batch of Dow Corning SE1700 (two-part silicone), ink "A". Ink "B" is made from pure SE1700. Using the in-situ active mixer, inks "A" and "B" coalesce to form a hybrid ink just before extrusion. Two samples of varying mixer speeds and composition ratios are printed and analyzed by microcomputed tomography (MicroCT). A continuous stirred tank reactor (CSTR) model is applied to better understand mixing behavior. Results are then compared with computer models to verify the hypothesis. The data suggest good mixing for the sample with the higher impeller speed. A Radial Distribution Function (RDF) macro is used to provide further qualitative analysis of mixing efficiency. The second objective of this thesis utilized three-dimensionally printed samples of varying geometry and composition to ascertain mechanical properties. Samples were printed using SE1700 provided by Lawrence Livermore National Laboratory with a face-centered tetragonal (FCT) structure. Hardness testing is conducted using a Shore OO durometer guided by a computer-controlled, three-axis translation stage to provide precise movements. Data are collected across an 'x-y' plane of the specimen.
To explain the data, a simply supported beam model is applied to a single unit cell, which yields the basic structural behavior per cell. Characterizing the sample as a whole requires a more rigorous approach, since non-trivial complexities arise from the varying geometries and compositions. The data demonstrate a uniform change in hardness as a function of position, and additionally indicate periodicities in the lattice structure.
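
    The intensity-based mixing index described above can be sketched directly: the coefficient of variation (COV) of voxel intensity is lower when the tracer is well dispersed. The arrays below are synthetic stand-ins for MicroCT slices, not the thesis data.

    ```python
    # Sketch of a COV mixing index: lower COV of image intensity = better mixing.
    # Synthetic images stand in for MicroCT slices of the tracer-loaded ink.
    import numpy as np

    rng = np.random.default_rng(1)
    # Well mixed: tracer intensity fluctuates mildly around a uniform mean.
    well_mixed = rng.normal(loc=100.0, scale=5.0, size=(64, 64))
    # Poorly mixed: unblended striations of tracer-rich and tracer-poor ink.
    poorly_mixed = np.where(rng.random((64, 64)) < 0.5, 150.0, 50.0)

    def cov(image):
        """Coefficient of variation of image intensity."""
        return image.std() / image.mean()
    ```

    On these synthetic slices the well-mixed image has a COV near 0.05 and the segregated one near 0.5, mirroring how the thesis compares samples printed at different impeller speeds.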

  16. Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes

    PubMed Central

    Nag, Ambarish; St. John, Peter C.; Crowley, Michael F.

    2018-01-01

    Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass—namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-Acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetyl kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production. PMID:29381705

  17. Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes.

    PubMed

    Nag, Ambarish; St John, Peter C; Crowley, Michael F; Bomble, Yannick J

    2018-01-01

    Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass-namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-Acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetyl kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production.
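
    The knockout-scan logic described above (delete a reaction, re-solve the flux balance, check product yield and viability) can be sketched with a toy three-reaction network and a linear program. This is purely illustrative: it is not the paper's stoichiometric model, and the yields are invented so that the acetate branch is preferred until it is knocked out.

    ```python
    # Toy flux-balance knockout scan (not the paper's genome-scale model).
    # Fluxes v = [uptake, acetate branch, succinate branch]; the single
    # internal metabolite must be at steady state (S v = 0).
    import numpy as np
    from scipy.optimize import linprog

    S = np.array([[1.0, -1.0, -1.0]])       # uptake - acetate - succinate = 0
    bounds = [(0, 10), (0, 10), (0, 10)]
    atp_yield = np.array([0.0, 2.0, 1.0])   # acetate pathway yields more ATP

    def optimise(ko=None):
        b = list(bounds)
        if ko is not None:
            b[ko] = (0.0, 0.0)              # knockout: force that flux to zero
        res = linprog(-atp_yield, A_eq=S, b_eq=[0.0], bounds=b)  # maximize ATP
        return res.x

    wild = optimise()
    ko_ace = optimise(ko=1)                 # knock out the acetate branch

    succ_wild, succ_ko = wild[2], ko_ace[2]
    atp_ko = atp_yield @ ko_ace             # check the knockout stays "viable"
    ```

    In the wild type all flux goes to acetate (higher ATP yield) and succinate is zero; deleting the acetate branch reroutes everything to succinate while ATP production remains positive, the same maximize-product-subject-to-viability pattern the authors apply to PTA/ACK and PEPCK knockouts.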

  18. Effect of shroud geometry on the effectiveness of a short mixing stack gas eductor model

    NASA Astrophysics Data System (ADS)

    Kavalis, A. E.

    1983-06-01

    An existing apparatus for testing models of gas eductor systems using high-temperature primary flow was modified to provide improved control and performance over a wide range of gas temperatures and flow rates. Secondary flow pumping, temperature, and pressure data were recorded for two gas eductor system models. The first, previously tested under hot flow conditions, consists of a primary plate with four tilted-angled nozzles and a slotted, shrouded mixing stack with two diffuser rings (overall L/D = 1.5). A portable pyrometer with a surface probe was used for the second model in order to identify any hot spots on the external surface of the mixing stack, shroud, and diffuser rings. The second model is shown to have almost the same mixing and pumping performance as the first, but to exhibit much lower shroud and diffuser surface temperatures.

  19. Water mass mixing: The dominant control on the zinc distribution in the North Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Roshan, Saeed; Wu, Jingfeng

    2015-07-01

    Dissolved zinc (dZn) concentration was determined in the North Atlantic during the U.S. GEOTRACES 2010 and 2011 cruises (GEOTRACES GA03). A relatively poor linear correlation (R2 = 0.756) was observed between dZn and silicic acid (Si), with a slope of 0.0577 nM/µmol/kg. We attribute the relatively poor dZn-Si correlation to the following processes: (a) differential regeneration of zinc relative to silicic acid, (b) mixing of multiple water masses that have different Zn/Si, and (c) zinc sources such as sedimentary or hydrothermal inputs. To quantitatively distinguish these possibilities, we use the results of the Optimum Multi-Parameter Water Mass Analysis by Jenkins et al. (2015) to model the zinc distribution below 500 m. We considered two scenarios: conservative mixing and regenerative mixing. The first (conservative) scenario yielded modeled values that correlate with observations at R2 = 0.846. In the second scenario, we took Si-related regeneration into account, which modeled the observations with R2 = 0.867. Through this regenerative mixing scenario, we estimated Zn/Si = 0.0548 nM/µmol/kg, which may be more realistic than the linear regression slope because it accounts for process (b). However, this did not improve the model substantially (R2 = 0.867 versus 0.846), which may indicate an insignificant effect of remineralization on the zinc distribution in this region. The relative weakness of the model-observation correlation (R2 ~ 0.85 for both scenarios) implies that processes (a) and (c) may be plausible. Furthermore, dZn in the upper 500 m exhibited a very poor correlation with apparent oxygen utilization, suggesting a minimal role for the organic matter-associated remineralization process.

  20. A joint modeling and estimation method for multivariate longitudinal data with mixed types of responses to analyze physical activity data generated by accelerometers.

    PubMed

    Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E

    2017-11-10

    A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Modeling vehicle operating speed on urban roads in Montreal: a panel mixed ordered probit fractional split model.

    PubMed

    Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F

    2013-10-01

    Vehicle operating speed measured on roadways is a critical component for a host of analysis in the transportation field including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes to the literature on examining vehicle speed on urban roads methodologically and substantively. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split model in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle route on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. 
The results indicate that the proposed panel mixed ordered probit fractional split model offers promise for modeling such proportional ordinal variables. Copyright © 2013 Elsevier Ltd. All rights reserved.
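
    The core of the ordered-probit fractional split formulation above is that each road segment gets a vector of proportions over ordered speed intervals, P(j) = Φ(τⱼ − xβ) − Φ(τⱼ₋₁ − xβ), which are non-negative and sum to one by construction. A minimal sketch with hypothetical thresholds and coefficients:

    ```python
    # Sketch of ordered-probit interval proportions for one road segment.
    # Thresholds (tau) and coefficients (beta) are hypothetical, not the
    # paper's Montreal estimates.
    import numpy as np
    from scipy.stats import norm

    tau = np.array([-np.inf, -1.0, 0.0, 1.2, np.inf])  # cut points, 4 speed bins
    beta = np.array([0.8, -0.5])    # e.g. effects of lane count, parking presence
    x = np.array([1.0, 1.0])        # covariates for one hypothetical segment

    eta = x @ beta                  # linear predictor
    # Proportion of vehicles falling in each ordered speed interval.
    props = norm.cdf(tau[1:] - eta) - norm.cdf(tau[:-1] - eta)
    ```

    The panel mixed version in the paper additionally lets elements of `beta` vary randomly across segments to capture site-specific unobserved effects.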

  2. Access disparities to Magnet hospitals for patients undergoing neurosurgical operations

    PubMed Central

    Missios, Symeon; Bekelis, Kimon

    2017-01-01

    Background Centers of excellence focusing on quality improvement have demonstrated superior outcomes for a variety of surgical interventions. We investigated the presence of access disparities to hospitals recognized by the Magnet Recognition Program of the American Nurses Credentialing Center (ANCC) for patients undergoing neurosurgical operations. Methods We performed a cohort study of all neurosurgery patients who were registered in the New York Statewide Planning and Research Cooperative System (SPARCS) database from 2009–2013. We examined the association of African-American race and lack of insurance with Magnet status hospitalization for neurosurgical procedures. A mixed effects propensity adjusted multivariable regression analysis was used to control for confounding. Results During the study period, 190,535 neurosurgical patients met the inclusion criteria. Using a multivariable logistic regression, we demonstrate that African-Americans had lower admission rates to Magnet institutions (OR 0.62; 95% CI, 0.58–0.67). This persisted in a mixed effects logistic regression model (OR 0.77; 95% CI, 0.70–0.83) to adjust for clustering at the patient county level, and a propensity score adjusted logistic regression model (OR 0.75; 95% CI, 0.69–0.82). Additionally, lack of insurance was associated with lower admission rates to Magnet institutions (OR 0.71; 95% CI, 0.68–0.73), in a multivariable logistic regression model. This persisted in a mixed effects logistic regression model (OR 0.72; 95% CI, 0.69–0.74), and a propensity score adjusted logistic regression model (OR 0.72; 95% CI, 0.69–0.75). Conclusions Using a comprehensive all-payer cohort of neurosurgery patients in New York State we identified an association of African-American race and lack of insurance with lower rates of admission to Magnet hospitals. PMID:28684152

  3. A Laboratory Study of River Discharges into Shallow Seas

    NASA Astrophysics Data System (ADS)

    Crawford, T. J.; Linden, P. F.

    2016-02-01

    We present an experimental study that aims to simulate the buoyancy driven coastal currents produced by estuarine freshwater discharges into the ocean. The currents are generated inside a rotating tank filled with saltwater by the continuous release of buoyant freshwater from a source structure located at the fluid surface. The freshwater is discharged horizontally from a finite-depth source, giving rise to significant momentum-flux effects and a non-zero potential vorticity. We perform a parametric study in which we vary the rotation rate, freshwater discharge magnitude, the density difference and the source cross-sectional area. The parameter values are chosen to match the regimes appropriate to the River Rhine and River Elbe when entering the North Sea. Persistent features of an anticyclonic outflow vortex and a propagating boundary current were identified and their properties quantified. We also present a finite potential vorticity, geostrophic model that provides theoretical predictions for the current height, width and velocity as functions of the experimental parameters. The experiments and model are compared with each other in terms of a set of non-dimensional parameters identified in the theoretical analysis of the problem. Good agreement between the model and the experimental data is found. The effect of mixing in the turbulent ocean is also addressed with the addition of an oscillating grid to the experimental setup. The grid generates turbulence in the saltwater ambient that is designed to represent the mixing effects of the wind, tides and bathymetry in a shallow shelf sea. The impact of the addition of turbulence is discussed in terms of the experimental data and through modifications to the theoretical model to include mixing. Once again, good agreement is seen between the experiments and the model.

  4. Competitive adsorption from mixed hen egg-white lysozyme/surfactant solutions at the air-water interface studied by tensiometry, ellipsometry, and surface dilational rheology.

    PubMed

    Alahverdjieva, V S; Grigoriev, D O; Fainerman, V B; Aksenenko, E V; Miller, R; Möhwald, H

    2008-02-21

    The competitive adsorption at the air-water interface from mixed adsorption layers of hen egg-white lysozyme with a non-ionic surfactant (C10DMPO) was studied and compared to the mixture with an ionic surfactant (SDS) using bubble and drop shape analysis tensiometry, ellipsometry, and surface dilational rheology. The set of equilibrium and kinetic data of the mixed solutions is described by a thermodynamic model developed recently. The theoretical description of the mixed system is based on the model parameters for the individual components.

  5. Correlations and risk contagion between mixed assets and mixed-asset portfolio VaR measurements in a dynamic view: An application based on time varying copula models

    NASA Astrophysics Data System (ADS)

    Han, Yingying; Gong, Pu; Zhou, Xiang

    2016-02-01

    In this paper, we first apply time-varying Gaussian and SJC copula models to study the correlations and risk contagion between mixed assets in China: financial (stock), real estate, and commodity (gold) assets. We then study dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed by the time-varying copulas. This dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time-varying estimations fit much better than the static models, both for the correlations and risk contagion based on time-varying copulas and for the VaR-copula measurement. The time-varying VaR-SJC copula models are more accurate than the VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold play a role in portfolio risk diversification, and that risk contagion and flight to quality between mixed assets occur in extreme cases; if mixed-asset portfolio strategies are adapted as time and market conditions vary, portfolio risk can be reduced.
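
    The VaR-copula measurement can be sketched in its static form: draw correlated uniforms through a Gaussian copula, push them through each asset's marginal, and read the portfolio loss quantile. The marginals, weights, and correlation below are hypothetical; the paper's time-varying versions re-estimate the dependence parameter each period.

    ```python
    # Static Gaussian-copula VaR sketch for a two-asset (stock/gold) portfolio.
    # All distributional choices and parameters are hypothetical.
    import numpy as np
    from scipy.stats import norm, t as student_t

    rng = np.random.default_rng(2)
    rho, n = 0.6, 200_000

    # Gaussian copula: correlated normals -> uniforms -> marginal quantiles.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = norm.cdf(z)
    r_stock = student_t.ppf(u[:, 0], df=5) * 0.01   # fat-tailed stock returns
    r_gold = norm.ppf(u[:, 1]) * 0.005              # thinner-tailed gold returns

    port = 0.5 * r_stock + 0.5 * r_gold
    var_99 = -np.quantile(port, 0.01)   # 99% Value-at-Risk (loss reported positive)
    ```

    Swapping the Gaussian copula for an SJC copula changes only the dependence step, which is exactly where the paper finds the tail-sensitive models more accurate at high confidence levels.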

  6. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing data model. The joint models couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study and report some interesting findings. We also conduct simulation studies to validate the proposed method.

  7. Impact of Antarctic mixed-phase clouds on climate.

    PubMed

    Lawson, R Paul; Gettelman, Andrew

    2014-12-23

    Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 Wm(-2), and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than -20 °C.

  8. Impact of Antarctic mixed-phase clouds on climate

    PubMed Central

    Lawson, R. Paul; Gettelman, Andrew

    2014-01-01

    Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 Wm−2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than −20 °C. PMID:25489069

  9. Multilevel modeling and panel data analysis in educational research (Case study: National examination data senior high school in West Java)

    NASA Astrophysics Data System (ADS)

    Zulvia, Pepi; Kurnia, Anang; Soleh, Agus M.

    2017-03-01

    Individuals and their environments form a hierarchical structure consisting of units grouped at different levels. Hierarchical data are analyzed across these levels, with the lowest level nested within the highest; such modeling is commonly called multilevel modeling. Multilevel modeling is widely used in educational research, for example on average National Examination (UN) scores. In Indonesia, the UN for high school students is divided into natural science and social science streams. The purpose of this research is to develop multilevel and panel data modeling using a linear mixed model on educational data. The first step is data exploration and identification of relationships between independent and dependent variables by checking correlation coefficients and variance inflation factors (VIF). Furthermore, we use a simple model approach in which the highest level of the hierarchy (level-2) is the regency/city, while the school is the lowest level (level-1). The best model was determined by comparing goodness-of-fit and checking assumptions from residual plots and predictions for each model. Our finding is that, for both natural science and social science, the regression with random effects of regency/city and fixed effects of time, i.e., the multilevel model, performs better than the linear mixed model in explaining the variability of the dependent variable, the average UN score.
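
    The two-level structure described above (schools nested in regencies/cities) can be sketched with a variance-components calculation: simulate random regency intercepts, then estimate how much score variance sits at the regency level via the intraclass correlation (ICC). All numbers are synthetic, not the UN data.

    ```python
    # Sketch of a two-level hierarchy: schools (level 1) nested in regencies
    # (level 2), with a one-way ANOVA estimator of the variance components.
    # The score scale and SDs are hypothetical.
    import numpy as np

    rng = np.random.default_rng(3)
    n_regency, n_school = 30, 40
    sigma_regency, sigma_school = 4.0, 8.0   # hypothetical SDs of UN scores

    regency_effect = rng.normal(0.0, sigma_regency, n_regency)
    scores = (70.0 + regency_effect[:, None]
              + rng.normal(0.0, sigma_school, (n_regency, n_school)))

    # One-way ANOVA variance components (balanced design).
    group_means = scores.mean(axis=1)
    msb = n_school * group_means.var(ddof=1)                 # between-regency MS
    msw = ((scores - group_means[:, None]) ** 2).sum() / (n_regency * (n_school - 1))
    var_between = (msb - msw) / n_school
    icc = var_between / (var_between + msw)   # share of variance at regency level
    ```

    A non-trivial ICC (here the true value is 4² / (4² + 8²) = 0.2) is the signal that regency/city random effects are needed, the situation in which the multilevel specification outperformed the simpler one in this study.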

  10. Footedness Is Associated with Self-reported Sporting Performance and Motor Abilities in the General Population

    PubMed Central

    Tran, Ulrich S.; Voracek, Martin

    2016-01-01

    Left-handers may have strategic advantages over right-handers in interactive sports and innate superior abilities that are beneficial for sports. Previous studies relied on differing criteria for handedness classification and mostly did not investigate mixed preferences and footedness. Footedness appears to be less influenced by external and societal factors than handedness. Utilizing latent class analysis and structural equation modeling, we investigated in a series of studies (total N > 15300) associations of handedness and footedness with self-reported sporting performance and motor abilities in the general population. Using a discovery and a replication sample (ns = 7658 and 5062), Study 1 revealed replicable beneficial effects of mixed-footedness and left-footedness in team sports, martial arts and fencing, dancing, skiing, and swimming. Study 2 (n = 2592) showed that footedness for unskilled bipedal movement tasks, but not for skilled unipedal tasks, was beneficial for sporting performance. Mixed- and left-footedness had effects on motor abilities that were consistent with published results on better brain interhemispheric communication, but also akin to testosterone-induced effects regarding flexibility, strength, and endurance. Laterality effects were only small. Possible neural and hormonal bases of observed effects need to be examined in future studies. PMID:27559326

  11. Constraints on the Z-Z′ mixing angle from data measured for the process e+e- → W+W- at the LEP2 collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreev, Vas. V., E-mail: quarks@gsu.by; Pankov, A. A., E-mail: pankov@ictp.it

    2012-01-15

    An analysis of effects induced by new neutral gauge Z′ bosons was performed on the basis of data from the OPAL, DELPHI, ALEPH, and L3 experiments devoted to measuring differential cross sections for the annihilation production of pairs of charged gauge W± bosons at the LEP2 collider. By using these experimental data, constraints on the Z′-boson mass and on the Z-Z′ mixing angle were obtained for a number of extended gauge models.

  12. A mixing-model approach to quantifying sources of organic matter to salt marsh sediments

    NASA Astrophysics Data System (ADS)

    Bowles, K. M.; Meile, C. D.

    2010-12-01

    Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production of vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contributions of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantifying organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data utilized in such mixing models, the uncertainties in endmember compositions, and the temporal dynamics of non-conservative entities can have varying effects on the results. Making use of a comprehensive data set that encompasses several endmember characteristics, including a yearlong degradation experiment, we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e., endmember characteristics such as δ13C of organic carbon, C/N ratios, or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations and accounting for the uncertainty in endmember characteristics.
Finally, as biogeochemical processes can alter endmember characteristics over time, we investigate the effect of early diagenesis on chosen parameters, an analysis that entails an assessment of the organic matter age distribution. Thus, estimates of the relative contributions of phytoplankton, C3 and C4 plants to bulk sediment organic matter depend not only on environmental characteristics that impact reactivity, but also on sediment mixing processes.
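    The constrained least-squares formulation described above can be sketched as follows. All endmember signatures, tracer choices, and uncertainty magnitudes below are hypothetical illustrations, not values from the study: with two tracers plus the mass-balance (sum-to-one) row, the three-endmember problem becomes an exactly determined linear system, and Monte Carlo perturbation of the endmember matrix propagates endmember uncertainty into the estimated fractions.

```python
import numpy as np

# Hypothetical endmember signatures (order: phytoplankton, C3, C4).
# Illustrative values only, not measurements from the study.
d13C = np.array([-21.0, -28.0, -13.0])   # per mil
CN   = np.array([  7.0,  30.0,  40.0])   # molar C/N

# Observed mixture, here constructed from known fractions for the demo.
f_true = np.array([0.3, 0.5, 0.2])
b = np.array([d13C @ f_true, CN @ f_true, 1.0])  # last entry: mass balance

# Two tracers + sum-to-one constraint -> exactly determined 3x3 system.
A = np.vstack([d13C, CN, np.ones(3)])
f_hat = np.linalg.solve(A, b)

# Monte Carlo propagation of endmember uncertainty (assumed sd values).
rng = np.random.default_rng(0)
draws = []
for _ in range(2000):
    Ap = np.vstack([d13C + rng.normal(0.0, 1.0, 3),   # 1 per-mil sd
                    CN   + rng.normal(0.0, 2.0, 3),   # 2-unit sd
                    np.ones(3)])
    draws.append(np.linalg.solve(Ap, b))
f_sd = np.array(draws).std(axis=0)
print(f_hat, f_sd)
```

With more tracers than endmembers the system becomes overdetermined and a nonnegativity-constrained least-squares solver would replace the exact solve.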

  13. Relationship between attributional style, perceived control, self-esteem, and depressive mood in a nonclinical sample: a structural equation-modelling approach.

    PubMed

    Ledrich, Julie; Gana, Kamel

    2013-12-01

    The aim of this study was to examine the intricate relationship between some personality traits (i.e., attributional style, perceived control over consequences, self-esteem) and depressive mood in a nonclinical sample (N = 334). Structural equation modelling was used to estimate five competing models: two vulnerability models describing the effects of personality traits on depressive mood, one scar model describing the effects of depression on personality traits, a mixed model describing the effects of attributional style and perceived control over consequences on depressive mood, which in turn affects self-esteem, and a reciprocal model, a non-recursive version of the mixed model that specifies bidirectional effects between depressive mood and self-esteem. The best-fitting model was the mixed model. Moreover, we observed a significant negative effect of depression on self-esteem, but no effect in the opposite direction. These findings provide supporting arguments against the continuum model of the relationship between self-esteem and depression, and lend substantial support to the scar model, which claims that depressive mood damages and erodes self-esteem. In addition, the 'depressogenic' nature of the pessimistic attributional style and the 'antidepressant' nature of perceived control over consequences plead in favour of the vulnerability model. © 2012 The British Psychological Society.

  14. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number, and the discounting of grain when it is downgraded in class, are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions to the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
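    Broad-sense heritability of the kind reported above is computed from the variance components that such a mixed-model analysis estimates. A minimal sketch on an entry-mean basis, with hypothetical variance components rather than those estimated in the study:

```python
def broad_sense_h2(v_g, v_ge, v_e, n_env, n_rep):
    """Entry-mean broad-sense heritability H^2 = V_G / V_P, where the
    phenotypic variance of a genotype mean over n_env environments with
    n_rep replicates each is V_G + V_GE/n_env + V_e/(n_env*n_rep)."""
    return v_g / (v_g + v_ge / n_env + v_e / (n_env * n_rep))

# Hypothetical genotype, genotype-by-environment and residual variances,
# evaluated over 13 environments with 2 replicates.
h2 = broad_sense_h2(v_g=1.0, v_ge=2.0, v_e=3.0, n_env=13, n_rep=2)
print(round(h2, 2))
```

More environments and replicates shrink the V_GE and V_e terms, which is why heritability on an entry-mean basis rises with the size of the trial network.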

  15. [Review on HSPF model for simulation of hydrology and water quality processes].

    PubMed

    Li, Zhao-fu; Liu, Hong-Yu; Li, Yan

    2012-07-01

    Hydrological Simulation Program-FORTRAN (HSPF), written in FORTRAN, is one of the best semi-distributed hydrology and water quality models; it was first developed from the Stanford Watershed Model. Many studies on HSPF model application have been conducted. The model can represent the contributions of sediment, nutrients, pesticides, conservatives and fecal coliforms from agricultural areas, continuously simulate water quantity and quality processes, and capture the effects of climate change and land use change on water quantity and quality. HSPF consists of three basic application components: PERLND (Pervious Land Segment), IMPLND (Impervious Land Segment), and RCHRES (free-flowing reaches or mixed reservoirs). In general, HSPF has been applied extensively to the modeling of hydrology and water quality processes and to the analysis of climate change and land use change, but it has seen limited use in China. The main problems with HSPF include: (1) some algorithms and procedures still need to be revised; (2) because of the high standard required of input data, the accuracy of the model is limited by the available spatial and attribute data; (3) the model is only applicable to the simulation of well-mixed rivers, reservoirs and one-dimensional water bodies, so it must be integrated with other models to solve more complex problems. At present, work on HSPF model development is ongoing, including revision of the model platform, extension of model functions, method development for model calibration, and analysis of parameter sensitivity. With the accumulation of basic data and improvement of data sharing, the HSPF model will be applied more extensively in China.

  16. Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.

    In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing to intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.

  17. Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction

    DOE PAGES

    Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.; ...

    2016-08-01

    In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing to intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.

  18. Development of a Mixed Methods Investigation of Process and Outcomes of Community-Based Participatory Research.

    PubMed

    Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R

    2018-01-01

    This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided.

  19. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions are different from choice predictions yet reflect second-order perspective taking.

  20. Experimental and mathematical model of the interactions in the mixed culture of links in the "producer-consumer" cycle

    NASA Astrophysics Data System (ADS)

    Pisman, T. I.; Galayda, Ya. V.

    The paper presents an experimental and mathematical model of the interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) and algae (Chlorella vulgaris and Scenedesmus quadricauda) in the producer-consumer aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the consumer component feeding on the mixed algal culture of the producer component. It has been found that metabolites of the alga Scenedesmus produce an adverse effect on the reproduction of the ciliate P. caudatum. Taking this effect into account, the results of the investigation of the mathematical model were in qualitative agreement with the experimental results. In the producer-consumer biotic cycle, it was shown that coexistence is impossible in the mixed algal culture of the producer component and in the mixed culture of invertebrates of the consumer component: the ciliates P. caudatum are driven out by the rotifers Brachionus plicatilis.

  1. Modelling individual tree height to crown base of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.)

    PubMed Central

    Jansa, Václav

    2017-01-01

    Height to crown base (HCB) of a tree is an important variable often included as a predictor in various forest models that serve as fundamental tools for decision-making in forestry. We developed spatially explicit and spatially inexplicit mixed-effects HCB models using measurements from a total of 19,404 trees of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.) on permanent sample plots located across the Czech Republic. Variables describing site quality, stand density or competition, and species mixing effects were included in the HCB model through dominant height (HDOM), basal area of trees larger in diameter than a subject tree (BAL, a spatially inexplicit measure) or Hegyi's competition index (HCI, a spatially explicit measure), and basal area proportion of the species of interest (BAPOR), respectively. Parameters describing sample plot-level random effects were included in the HCB model by applying the mixed-effects modelling approach. Among several functional forms evaluated, the logistic function was found best suited to our data. The HCB model for Norway spruce was tested against data originating from different inventory designs, while the model for European beech was tested using a partitioned dataset (a part of the main dataset). The variance heteroscedasticity in the residuals was substantially reduced through inclusion of a power variance function in the HCB model. The results showed that the spatially explicit model described a significantly larger part of the HCB variation [R2adj = 0.86 (spruce), 0.85 (beech)] than its spatially inexplicit counterpart [R2adj = 0.84 (spruce), 0.83 (beech)]. The HCB increased with increasing competitive interactions described by the tree-centered competition measures BAL or HCI, and with species mixing effects described by BAPOR. 
    A test of the mixed-effects HCB model with the random effects estimated using at least four trees per sample plot in the validation data confirmed that the model was precise enough for the prediction of HCB across a range of site quality, tree size, stand density, and stand structure. We therefore recommend measuring HCB on four randomly selected trees of the species of interest on each sample plot for localizing the mixed-effects model and predicting HCB of the remaining trees on the plot. Using the HCB models, growth simulations can be made from data that lack values for either crown ratio or HCB. PMID:29049391
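    The shape of such a logistic HCB model, and the localization step using a few measured trees, can be sketched as follows. The functional form and every coefficient below are hypothetical illustrations, not the paper's fitted model; only the signs (HCB rising with BAL and BAPOR) follow the reported direction of effects.

```python
import math

# Hypothetical coefficients for illustration (not the fitted values).
B0, B1, B2, B3 = -1.0, 0.02, 0.05, 0.5

def linear_predictor(hdom, bal, bapor, u=0.0):
    # B2, B3 > 0: HCB rises with competition (BAL) and mixing (BAPOR)
    return B0 + u + B1 * hdom + B2 * bal + B3 * bapor

def hcb(h, hdom, bal, bapor, u=0.0):
    """Height to crown base as a logistic share of total tree height h,
    so predictions stay between 0 and h."""
    return h / (1.0 + math.exp(-linear_predictor(hdom, bal, bapor, u)))

def localize_u(measured, hdom, bal, bapor):
    """Estimate the plot-level random effect u from a few measured trees
    (pairs of total height h and observed HCB) by averaging residuals on
    the logit scale, mimicking the recommended four-tree localization."""
    lp_fix = linear_predictor(hdom, bal, bapor)
    return sum(math.log(obs / (h - obs)) - lp_fix
               for h, obs in measured) / len(measured)
```

In the actual model the random effect would be a best linear unbiased predictor rather than a plain residual average, but the averaging step conveys the idea of calibrating the plot level from four sample trees.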

  2. Analysis of data collected from right and left limbs: Accounting for dependence and improving statistical efficiency in musculoskeletal research.

    PubMed

    Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C

    2018-01-01

    Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analysis methods were applied to an existing data set containing plantar pressure data, which was collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed-effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended for generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
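    The efficiency gain can be illustrated analytically under a simple nested variance-components model (hypothetical variances, not estimates from the study): the variance of a subject-level mean shrinks as more feet and trials contribute, so group-difference confidence intervals narrow.

```python
def var_subject_mean(v_subj, v_foot, v_trial, n_feet, n_trials):
    """Variance of a subject-level mean under a nested random-effects
    model, y = subject + foot(subject) + trial(foot) error:
    V = v_subj + v_foot/n_feet + v_trial/(n_feet*n_trials)."""
    return v_subj + v_foot / n_feet + v_trial / (n_feet * n_trials)

# Hypothetical subject, foot-within-subject and trial variances.
v_s, v_f, v_t = 1.0, 0.5, 0.9

m1 = var_subject_mean(v_s, v_f, v_t, n_feet=1, n_trials=3)  # Method 1: right foot only
m3 = var_subject_mean(v_s, v_f, v_t, n_feet=2, n_trials=3)  # Method 3: both feet averaged
# Method 4 uses the same data as Method 3 but models the limb/region/trial
# correlation explicitly instead of averaging it away, so its interval
# can only be as wide as or narrower than the averaging approaches.
print(m1, m3)
```

The group-difference standard error scales as sqrt(2*V/n) for n subjects per group, so any reduction in V translates directly into narrower confidence intervals.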

  3. Prediction of Turbulent Jet Mixing Noise Reduction by Water Injection

    NASA Technical Reports Server (NTRS)

    Kandula, Max

    2008-01-01

    A one-dimensional control volume formulation is developed for the determination of jet mixing noise reduction due to water injection. The analysis starts from the conservation of mass, momentum and energy for the control volume, and introduces the concept of effective jet parameters (jet temperature, jet velocity and jet Mach number). It is shown that the water-to-jet mass flow rate ratio is an important parameter characterizing the jet noise reduction on account of gas-to-droplet momentum and heat transfer. Two independent dimensionless invariant groups are postulated, and provide the necessary relations for the droplet size and droplet Reynolds number. Results are presented illustrating the effect of mass flow rate ratio on the jet mixing noise reduction for a range of jet Mach numbers and jet Reynolds numbers. Predictions from the model show satisfactory comparison with available test data on perfectly expanded hot supersonic jets. The results suggest that significant noise reductions can be achieved at increased flow rate ratios.
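    A back-of-the-envelope sketch of the control-volume idea, not the paper's actual formulation: if injected water, initially at rest, is accelerated to the mixed jet velocity, momentum conservation gives an effective velocity V_j/(1 + mu) for mass flow ratio mu = m_w/m_j, and the classical Lighthill eighth-power scaling then bounds the mixing-noise reduction.

```python
import math

def effective_velocity(v_jet, mu):
    """Momentum balance m_j*v_jet = (m_j + m_w)*v_eff, assuming the
    injected water (mass flow ratio mu = m_w/m_j) starts at rest and
    is fully accelerated to the mixed velocity."""
    return v_jet / (1.0 + mu)

def noise_reduction_db(mu):
    """Lighthill eighth-power scaling of jet mixing noise intensity:
    delta_dB = 80*log10(v_jet/v_eff) = 80*log10(1 + mu).
    An idealized upper bound: droplet drag and heat-transfer losses,
    which the full model accounts for, are ignored here."""
    return 80.0 * math.log10(1.0 + mu)

for mu in (0.5, 1.0, 2.0):
    print(mu, round(noise_reduction_db(mu), 1))
```

Measured reductions are far smaller than this idealized bound because only part of the momentum exchange is completed within the noise-producing region, which is what the dimensionless droplet groups in the paper quantify.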

  4. Therapy preferences of patients with lung and colon cancer: a discrete choice experiment.

    PubMed

    Schmidt, Katharina; Damm, Kathrin; Vogel, Arndt; Golpon, Heiko; Manns, Michael P; Welte, Tobias; Graf von der Schulenburg, J-Matthias

    2017-01-01

    There is increasing interest in studies that examine patient preferences to measure health-related outcomes. Understanding patients' preferences can improve the treatment process and is particularly relevant for oncology. In this study, we aimed to identify the subgroup-specific treatment preferences of German patients with lung cancer (LC) or colorectal cancer (CRC). Six discrete choice experiment (DCE) attributes were established on the basis of a systematic literature review and qualitative interviews. The DCE analyses comprised a generalized linear mixed-effects model and a latent class mixed logit model. The study cohort comprised 310 patients (194 with LC, 108 with CRC, 8 with both types of cancer) with a median age of 63 (SD = 10.66) years. The generalized linear mixed-effects model showed a significant (P < 0.05) degree of association for all of the tested attributes. "Strongly increased life expectancy" was the attribute given the greatest weight by all patient groups. Using latent class mixed logit model analysis, we identified three classes of patients. Patients who were better informed tended to prefer a more balanced relationship between length and health-related quality of life (HRQoL) than those who were less informed. Class 2 (LC patients with low HRQoL who had undergone surgery) gave a very strong weighting to increased length of life. We deduced from Class 3 patients that those with a relatively good life expectancy (CRC compared with LC) gave a greater weight to moderate effects on HRQoL than to a longer life. Overall survival was the most important attribute of therapy for patients with LC or CRC. Differences in treatment preferences between subgroups should be considered in regard to treatment and development of guidelines. Patients' preferences were not affected by sex or age, but were affected by the cancer type, HRQoL, surgery status, and the main source of information on the disease.

  5. Multilevel nonlinear mixed-effects models for the modeling of earlywood and latewood microfibril angle

    Treesearch

    Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He

    2005-01-01

    Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...

  6. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    ERIC Educational Resources Information Center

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  7. Improvement on vibration measurement performance of laser self-mixing interference by using a pre-feedback mirror

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Chen, Qianghua; Wang, Yanghong; Luo, Huifu; Wu, Huan; Ma, Binwu

    2018-06-01

    In laser self-mixing interference vibration measurement systems, the self-mixing interference signal is usually so weak that it can hardly be distinguished from environmental noise. To solve this problem, we present a self-mixing interference optical path with a pre-feedback mirror: a mirror is added between the object and the collimator lens, the corresponding feedback light enters the inner cavity of the laser, and interference involving the pre-feedback mirror occurs, establishing the pre-feedback system. The theoretical model of self-mixing interference with pre-feedback, based on the F-P model, is derived. The theoretical analysis shows that the amplitude of the intensity of the interference signal can be improved by 2-4 times. The influencing factors of the system are also discussed. The experimental results show that the amplitude of the signal is greatly improved, which agrees with the theoretical analysis.

  8. Using mixed methods research in medical education: basic guidelines for researchers.

    PubMed

    Schifferdecker, Karen E; Reed, Virginia A

    2009-07-01

    Mixed methods research involves the collection, analysis and integration of both qualitative and quantitative data in a single study. The benefits of a mixed methods approach are particularly evident when studying new questions or complex initiatives and interactions, which is often the case in medical education research. Basic guidelines for when to use mixed methods research and how to design a mixed methods study in medical education research are not readily available. The purpose of this paper is to remedy that situation by providing an overview of mixed methods research, research design models relevant for medical education research, examples of each research design model in medical education research, and basic guidelines for medical education researchers interested in mixed methods research. Mixed methods may prove superior in increasing the integrity and applicability of findings when studying new or complex initiatives and interactions in medical education research. They deserve an increased presence and recognition in medical education research.

  9. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded from a given pixel by a low-resolution hyperspectral remote sensor, to say nothing of the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier research area in remote sensing. Geometry-based unmixing algorithms have become popular since hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most of these algorithms rest on the pure-pixel assumption, and since the nonlinear mixing model is complex, it is hard to obtain optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method can obtain the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
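    The minimum-volume criterion at the heart of such algorithms can be made concrete: the volume of the simplex spanned by p endmembers in (p-1)-dimensional reduced spectral space is proportional to a determinant, and MVSA-type methods shrink this volume while an abundance constraint keeps the data enclosed. A minimal sketch of the volume computation only, on hypothetical data, not the proposed algorithm itself:

```python
import math
import numpy as np

def simplex_volume(E):
    """Volume of the simplex whose vertices are the columns of E
    (shape (p-1, p): p endmembers in (p-1)-dim reduced space),
    V = |det([E; 1^T])| / (p-1)!"""
    p = E.shape[1]
    M = np.vstack([E, np.ones(p)])
    return abs(np.linalg.det(M)) / math.factorial(p - 1)

# Unit triangle in 2-D: vertices (0,0), (1,0), (0,1) -> area 0.5
E = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(simplex_volume(E))
```

Minimum-volume unmixing minimizes exactly this quantity over candidate endmember sets, subject to the (soft) constraint that the estimated abundances of all pixels stay near the nonnegative, sum-to-one simplex.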

  10. Lithium ion dynamics in Li2S+GeS2+GeO2 glasses studied using (7)Li NMR field-cycling relaxometry and line-shape analysis.

    PubMed

    Gabriel, Jan; Petrov, Oleg V; Kim, Youngsik; Martin, Steve W; Vogel, Michael

    2015-09-01

    We use (7)Li NMR to study the ionic jump motion in ternary 0.5Li2S+0.5[(1-x)GeS2+xGeO2] glassy lithium ion conductors. Exploration of the "mixed glass former effect" in this system led to the assumption of a homogeneous and random variation of diffusion barriers. We exploit the fact that, by combining traditional line-shape analysis with novel field-cycling relaxometry, it is possible to measure the spectral density of the ionic jump motion in broad frequency and temperature ranges and, thus, to determine the distribution of activation energies. Two models are employed to parameterize the (7)Li NMR data, namely, the multi-exponential autocorrelation function model and the power-law waiting times model. Careful evaluation of both models indicates a broadly inhomogeneous energy landscape for both the single (x=0.0) and the mixed (x=0.1) network former glasses. The multi-exponential autocorrelation function model can be well described by a Gaussian distribution of activation barriers. The applicability of the methods used and their sensitivity to microscopic details of ionic motion are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
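    The multi-exponential autocorrelation picture can be sketched numerically: each barrier E contributes an exponential decay with Arrhenius time tau(E) = tau0*exp(E/kT), weighted by a Gaussian distribution of barriers g(E). All parameter values below are illustrative, not fitted values from the study:

```python
import math

def corr(t, E_mean=0.5, E_sd=0.1, tau0=1e-12, kT=0.025):
    """Multi-exponential autocorrelation function: a superposition of
    Arrhenius processes tau(E) = tau0*exp(E/kT) weighted by a Gaussian
    distribution of activation barriers (energies in eV, t in seconds).
    Discretized on a +/- 3 sd grid of barrier heights."""
    n = 201
    Es = [E_mean + E_sd * (6.0 * i / (n - 1) - 3.0) for i in range(n)]
    w = [math.exp(-(E - E_mean) ** 2 / (2.0 * E_sd ** 2)) for E in Es]
    Z = sum(w)
    return sum(wi * math.exp(-t / (tau0 * math.exp(E / kT)))
               for wi, E in zip(w, Es)) / Z
```

Because the superposed relaxation times span many decades, the resulting decay is strongly stretched compared with a single exponential, which is the signature of the broadly inhomogeneous energy landscape the abstract describes.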

  11. A mixed-effects height-diameter model for cottonwood in the Mississippi Delta

    Treesearch

    Curtis L. VanderSchaaf; H. Christoph Stuhlinger

    2012-01-01

    Eastern cottonwood (Populus deltoides Bartr. ex Marsh.) has been artificially regenerated throughout the Mississippi Delta region because of its fast growth and is being considered for biofuel production. This paper presents a mixed-effects height-diameter model for cottonwood in the Mississippi Delta region. After obtaining height-diameter...

  12. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  13. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  14. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015
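The down-weighting mechanism this record describes (a heavy-tailed scale-mixture likelihood that discounts outlying curves) can be sketched with a toy iteratively reweighted estimate of a mean function. This is an illustrative analogue on made-up data, not the paper's wavelet-based Bayesian sampler:

```python
import numpy as np

def robust_mean_curve(Y, nu=4.0, n_iter=50):
    """Toy analogue of R-FMM's down-weighting: estimate a mean function from
    curves Y (n_curves x n_points) with t-distribution (scale-mixture) weights,
    so outlying curves contribute little. Illustration only; the paper's method
    is a fully Bayesian wavelet-space sampler."""
    mu = np.median(Y, axis=0)                 # robust starting value
    w = np.ones(Y.shape[0])
    for _ in range(n_iter):
        resid = Y - mu
        s2 = np.median(resid ** 2) + 1e-12    # crude residual scale
        d2 = (resid ** 2).mean(axis=1) / s2   # per-curve scaled discrepancy
        w = (nu + 1.0) / (nu + d2)            # heavy-tail weights: big d2 -> small w
        mu = (w[:, None] * Y).sum(axis=0) / w.sum()
    return mu, w

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
true = np.sin(2 * np.pi * t)
Y = true + 0.1 * rng.standard_normal((20, 100))
Y[0] += 5.0                                   # one grossly outlying curve
mu, w = robust_mean_curve(Y)                  # w[0] ends up near zero
```

The weight vector `w` doubles as an outlier statistic, mirroring the paper's point that the same machinery that robustifies estimation also flags outlying curves.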

  15. Consequences of an Abelian family symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramond, P.

    1996-01-01

    The addition of an Abelian family symmetry to the Minimal Supersymmetric Standard Model reproduces the observed hierarchies of quark and lepton masses and quark mixing angles, only if it is anomalous. Green-Schwarz compensation of its anomalies requires the electroweak mixing angle to be sin²θ_W = 3/8 at the string scale, without any assumed GUT structure, suggesting a superstring origin for the standard model. The analysis is extended to neutrino masses and the lepton mixing matrix.

  16. Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins

    PubMed Central

    Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.

    2011-01-01

    Stomach content analysis (SCA) and more recently stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may result in difficulties in quantifying inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase accuracy of diet composition estimates using SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source, isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models and 3) refining predicted contributions of isotopically similar prey in multi-source models. PMID:22053199
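The two-source mixing model this record relies on is simple linear apportioning between two isotopic end-members. A minimal sketch, where the δ15N values and trophic discrimination factor are hypothetical numbers chosen for illustration, not the paper's data:

```python
def two_source_mixing(d_mix, d_a, d_b, tdf=0.0):
    """Two-source, one-isotope mixing model: fraction of source A in the diet.
    d_mix is the consumer tissue value, corrected by the trophic discrimination
    factor tdf before apportioning between sources A and B."""
    p_a = (d_mix - tdf - d_b) / (d_a - d_b)
    return min(max(p_a, 0.0), 1.0)        # clamp to a valid proportion

# hypothetical delta15N values (per mil) for illustration only:
# fish ~ +3.0, krill ~ -3.2, penguin tissue ~ +1.0, discrimination ~ +2.4
p_fish = two_source_mixing(1.0, 3.0, -3.2, tdf=2.4)   # ~0.29, remainder krill
```

With more than two sources the system is underdetermined from one isotope, which is why the record describes using SCA-derived otolith data to constrain multi-source models.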

  17. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.
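The "model-free" branch of this comparison rests on numerical deconvolution of the tissue curve by the local arterial input function. A generic truncated-SVD sketch on noise-free synthetic curves (the AIF shape, residue function, and threshold below are assumptions for illustration, not the QUASAR processing chain):

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, thresh=0.2):
    """Model-free perfusion estimate: solve tissue = A @ (CBF*R), where A is the
    AIF convolution matrix, via a truncated-SVD pseudoinverse. Generic sketch of
    SVD deconvolution, not the exact QUASAR implementation."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)  # drop small singular values
    r = Vt.T @ (s_inv * (U.T @ tissue))   # estimated CBF * R(t)
    return r.max()                        # since R(0) = 1, the peak estimates CBF

dt = 0.1
t = np.arange(0.0, 5.0, dt)
aif = np.exp(-t / 0.5)                    # toy mono-exponential bolus
R = np.exp(-t / 1.5)                      # toy residue function, R(0) = 1
cbf_true = 0.6
tissue = cbf_true * dt * np.convolve(aif, R)[: len(t)]
# noise-free toy, so nearly all singular values can be kept; with noisy data
# a threshold around 0.1-0.2 is the usual regularization
cbf_est = svd_deconvolve(aif, tissue, dt, thresh=1e-6)
```

The threshold is the free parameter that a model-based fit avoids, which connects to the paper's finding that dispersion of the labeled bolus is a major source of discrepancy between the two analyses.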

  18. Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.

    PubMed

    Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J

    2015-02-01

    The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. 
Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association between the underlying error-free measurement process and the hazard for survival is of scientific interest. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
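The underestimation the authors report (a Cox model treating error-prone eGFR as a time-varying covariate) is the classical measurement-error attenuation effect. A toy linear analogue makes the mechanism concrete; the variances below are arbitrary, and this is not a survival-model or joint-model fit:

```python
import numpy as np

# An outcome driven by a TRUE biomarker is regressed on an error-prone
# measurement of it; the naive slope is biased toward zero.
rng = np.random.default_rng(1)
n = 200_000
x = rng.standard_normal(n)                    # latent error-free biomarker
w = x + rng.standard_normal(n)                # observed value, measurement error var 1
y = 1.0 * x + 0.5 * rng.standard_normal(n)    # outcome depends on the TRUE value

def slope(pred, out):
    """Ordinary least-squares slope of out on pred."""
    pred = pred - pred.mean()
    return float((pred * (out - out.mean())).sum() / (pred * pred).sum())

b_true = slope(x, y)    # ~1.0: association with the error-free biomarker
b_naive = slope(w, y)   # ~0.5: attenuated by var(x)/(var(x)+var(error)) = 1/2
```

A joint model avoids this by letting the longitudinal submodel estimate the underlying error-free trajectory that enters the hazard, rather than plugging in noisy observations.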

  19. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    PubMed Central

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-01-01

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate potential factors associated with CRC screening use across urban-rural groups. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to account for the hierarchical structure of the data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalences in the four residence groups (urban, second city, suburban, and town/rural) were 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect (p<0.0001) and that residence groups had significant interactions with gender, age group, education level, and employment status (p<0.05). Multiple logistic regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. 
Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening and the associations were affected by living areas such as urban and rural regions. PMID:28952708
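The fixed-effects core of a weighted analysis like WGLIMM is survey-weighted logistic regression. A minimal numpy IRLS sketch on simulated data (the random effects for survey clusters are omitted, and all coefficients and weights below are made up for illustration):

```python
import numpy as np

def weighted_logit(X, y, w, n_iter=25):
    """Survey-weighted logistic regression via iteratively reweighted least
    squares: the fixed-effects core of a weighted analysis. A full WGLIMM would
    additionally carry random effects for the survey clusters (omitted here)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        v = np.clip(p * (1.0 - p), 1e-9, None)
        z = X @ beta + (y - p) / v            # IRLS working response
        W = w * v                             # survey weight times IRLS weight
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# simulated data with made-up coefficients and survey weights
rng = np.random.default_rng(2)
n = 50_000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
true_beta = np.array([-0.5, 0.8])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)
w = rng.uniform(0.5, 2.0, n)                  # hypothetical survey weights
beta_hat = weighted_logit(X, y, w)            # recovers true_beta closely
```

Exponentiating the slope coefficients gives the odds ratios that the abstract reports; the design-based standard errors that CHIS requires need replicate weights or a sandwich estimator on top of this sketch.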

  20. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    PubMed

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-09-27

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate potential factors associated with CRC screening use across urban-rural groups. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to account for the hierarchical structure of the data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalences in the four residence groups (urban, second city, suburban, and town/rural) were 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect (p<0.0001) and that residence groups had significant interactions with gender, age group, education level, and employment status (p<0.05). Multiple logistic regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. 
Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening and the associations were affected by living areas such as urban and rural regions. Creative Commons Attribution License
