Sample records for mixed-effects model analyses

  1. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
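
    A fixed-knot regression spline can be placed in both the fixed and random effects of a linear mixed model with standard software, as the abstract notes for SAS and S-plus. The sketch below is a generic R analogue using a B-spline basis (which imposes the continuity/smoothness constraints implicitly), not the authors' explicit reparameterization; the data frame hiv (columns id, time, logvl) and the knot locations are assumptions.

    ```r
    # Minimal sketch: fixed-knot quadratic regression spline for log viral load,
    # entered in the fixed effects and in the subject-level random effects.
    library(lme4)
    library(splines)

    knots <- c(14, 56)  # assumed interior knots (days); choose from the study design

    fit <- lmer(logvl ~ bs(time, knots = knots, degree = 2) +
                  (bs(time, knots = knots, degree = 2) | id),
                data = hiv)
    summary(fit)
    # The random-effects spline may need simplifying (e.g. random intercept and
    # slope only) when subjects contribute few observations.
    ```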

  2. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
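
    A minimal sketch of the comparison described above, assuming a hypothetical data frame sholl with columns intersections, radius, sex and animal (multiple neurons measured per animal):

    ```r
    library(lme4)

    # Naive analysis: treats every neuron as independent, ignoring clustering
    fit_lm  <- lm(intersections ~ radius * sex, data = sholl)

    # Mixed-effects analysis: a random intercept per animal absorbs the
    # intra-class correlation, giving honest standard errors for the sex effect
    fit_lmm <- lmer(intersections ~ radius * sex + (1 | animal), data = sholl)

    summary(fit_lm)$coefficients   # standard errors typically biased downwards
    summary(fit_lmm)               # standard errors reflect animal-level clustering
    ```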

  3. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. In contrast, proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, shows convergence problems. The random-effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification, together with convergence robustness, should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.
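
    A minimal sketch of the two R implementations named above: reitsma() from mada for the general bivariate model and glmer() from lme4 for the generalized bivariate model. The data frame dta, with one row per primary study and columns TP, FN, FP and TN, is hypothetical.

    ```r
    library(mada)
    library(lme4)

    # General bivariate mixed model on transformed proportions
    fit_reitsma <- reitsma(dta)
    summary(fit_reitsma)

    # Generalized bivariate mixed model: stack sensitivity and specificity as
    # two binomial outcomes per study, each with its own random effect
    long <- with(dta, data.frame(
      study = rep(seq_len(nrow(dta)), each = 2),
      sens  = rep(c(1, 0), nrow(dta)),
      spec  = rep(c(0, 1), nrow(dta)),
      pos   = as.vector(rbind(TP, TN)),
      n     = as.vector(rbind(TP + FN, TN + FP))
    ))
    fit_glmer <- glmer(cbind(pos, n - pos) ~ 0 + sens + spec +
                         (0 + sens + spec | study),
                       family = binomial, data = long)
    summary(fit_glmer)
    ```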

  4. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Modelling ventricular fibrillation coarseness during cardiopulmonary resuscitation by mixed effects stochastic differential equations.

    PubMed

    Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo

    2015-10-15

    For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.

  6. MANOVA vs nonlinear mixed effects modeling: The comparison of growth patterns of female and male quail

    NASA Astrophysics Data System (ADS)

    Gürcan, Eser Kemal

    2017-04-01

    The most commonly used methods for analyzing time-dependent data are multivariate analysis of variance (MANOVA) and nonlinear regression models. The aim of this study was to compare some MANOVA techniques with a nonlinear mixed modeling approach for investigation of growth differentiation in female and male Japanese quail. Weekly individual body weight data of 352 male and 335 female quail from hatch to 8 weeks of age were used to perform the analyses. When all the analyses are evaluated together, nonlinear mixed modeling is superior to the other techniques because it also reveals the individual variation. In addition, the profile analysis also provides important information.
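
    A minimal sketch of the nonlinear mixed modeling alternative: a logistic growth curve with sex-specific fixed effects and a bird-specific random asymptote. The data frame quail (columns weight, week, sex, bird) and the starting values are assumptions.

    ```r
    library(nlme)

    fit <- nlme(weight ~ SSlogis(week, Asym, xmid, scal),
                data   = quail,
                fixed  = Asym + xmid + scal ~ sex,   # sex contrast for each growth parameter
                random = Asym ~ 1 | bird,            # individual variation in mature weight
                start  = c(250, -30, 3, 0, 1.5, 0))  # rough values, ordered as in 'fixed'

    summary(fit)   # sex differences in asymptote, inflection point and scale
    ranef(fit)     # bird-level deviations that MANOVA-type analyses do not expose
    ```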

  7. Applications of MIDAS regression in analysing trends in water quality

    NASA Astrophysics Data System (ADS)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological variables. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of using MIDAS regression lies in the flexible and parsimonious modelling of the influence of rainfall and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.

  8. Mixing in the shear superposition micromixer: three-dimensional analysis.

    PubMed

    Bottausci, Frederic; Mezić, Igor; Meinhart, Carl D; Cardonne, Caroline

    2004-05-15

    In this paper, we analyse mixing in an active chaotic advection micromixer. The micromixer consists of a main rectangular channel and three cross-stream secondary channels that provide ability for time-dependent actuation of the flow stream in the direction orthogonal to the main stream. Three-dimensional motion in the mixer is studied. Numerical simulations and modelling of the flow are pursued in order to understand the experiments. It is shown that for some values of parameters a simple model can be derived that clearly represents the flow nature. Particle image velocimetry measurements of the flow are compared with numerical simulations and the analytical model. A measure for mixing, the mixing variance coefficient (MVC), is analysed. It is shown that mixing is substantially improved with multiple side channels with oscillatory flows, whose frequencies are increasing downstream. The optimization of MVC results for single side-channel mixing is presented. It is shown that dependence of MVC on frequency is not monotone, and a local minimum is found. Residence time distributions derived from the analytical model are analysed. It is shown that, while the average Lagrangian velocity profile is flattened over the steady flow, Taylor-dispersion effects are still present for the current micromixer configuration.

  9. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to the data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three-dimensionality.

  10. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common-to-all or varied-between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
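
    A minimal sketch of the kind of simulation and model comparison described above; all parameter values are illustrative assumptions, not those of the paper.

    ```r
    library(lme4)

    simulate_swt <- function(n_clusters = 9, n_per = 30) {
      cl <- data.frame(cluster = 1:n_clusters,
                       group   = rep(1:3, each = n_clusters / 3),
                       u       = rnorm(n_clusters, 0, 0.5),    # cluster intercepts
                       theta   = rnorm(n_clusters, 0.5, 0.4))  # cluster-specific intervention effects
      dat <- merge(expand.grid(cluster = 1:n_clusters, period = 1:2, subj = 1:n_per), cl)
      # group 1 is treated in both periods; groups 2 and 3 only in period 2
      dat$treat <- with(dat, as.integer(period == 2 | group == 1))
      dat$y <- with(dat, u + 0.3 * (period == 2) + theta * treat + rnorm(nrow(dat)))
      dat
    }

    dat <- simulate_swt()
    standard  <- lmer(y ~ treat + factor(period) + (1 | cluster), data = dat)          # "standard" model
    random_tx <- lmer(y ~ treat + factor(period) + (1 + treat | cluster), data = dat)  # random intervention effect
    c(fixef(standard)["treat"], fixef(random_tx)["treat"])
    ```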

  11. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
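
    A minimal sketch of the common-correlation separate-channel workflow in the limma package, which implements this methodology. The MAList MA, the targets frame (with Cy3/Cy5 columns), the group labels, and the exact name of the consensus-correlation component are assumptions to be checked against the limma documentation.

    ```r
    library(limma)

    targets2 <- targetsA2C(targets)              # one row per channel instead of per array
    Group <- factor(targets2$Target)
    design <- model.matrix(~ 0 + Group)
    colnames(design) <- levels(Group)            # assumed levels: "mutant", "wildtype"

    corfit <- intraspotCorrelation(MA, design)   # common intra-spot correlation across genes
    fit <- lmscFit(MA, design, correlation = corfit$consensus)
    fit2 <- contrasts.fit(fit, makeContrasts(mutant - wildtype, levels = design))
    fit2 <- eBayes(fit2)                         # empirical Bayes moderation borrows strength between genes
    topTable(fit2)
    ```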

  12. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on the linear mixing assumption for the CSSI signatures of sources to the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land use sources. Soil samples from the land use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions based on aggregated FA concentrations of the sources biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable even after equilibrium. Therefore, inclusion of the FA concentrations of the sources in the IMM formulation should be standard procedure for accurate estimation of source contributions. The post-model correction approach that dominates CSSI fingerprinting causes bias, especially if the FA concentrations of the sources differ substantially.
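
    The concentration-dependent mixing idea can be stated compactly: each source's isotopic signature is weighted by its fractional contribution and by its FA concentration. A minimal sketch of this forward model (not the MixSIAR implementation itself), with made-up numbers:

    ```r
    mix_d13C <- function(f, d13C, conc = rep(1, length(f))) {
      stopifnot(abs(sum(f) - 1) < 1e-8)        # proportions must sum to one
      sum(f * conc * d13C) / sum(f * conc)     # concentration-weighted mixture signature
    }

    f    <- c(forest = 0.2, arable = 0.5, grass = 0.3)  # known mixture proportions
    d13C <- c(-36.1, -30.4, -33.2)                      # source FA d13C (per mil), made up
    conc <- c(4.0, 1.2, 2.5)                            # source FA concentrations (mg/g), made up

    mix_d13C(f, d13C)          # concentration-independent prediction
    mix_d13C(f, d13C, conc)    # concentration-dependent prediction
    # The gap between the two shows how unequal source FA concentrations bias
    # apportionment when concentration is ignored.
    ```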

  13. Evidence of a major gene from Bayesian segregation analyses of liability to osteochondral diseases in pigs.

    PubMed

    Kadarmideen, Haja N; Janss, Luc L G

    2005-11-01

    Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384-37.81) compared to the polygenic variance (σu²). Consequently, heritabilities for a mixed inheritance (range 0.65-0.90) were much higher than the heritabilities from the polygenes. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The MITM variant with an informative prior on σu² showed significant improvement in the marginal distributions and the accuracy of parameters. MITM with a "reduced polygenic model" for parameterization of polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for fixed effects avoided unidentifiability for these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans.

  14. Preliminary Empirical Model of Crucial Determinants of Best Practice for Peer Tutoring on Academic Achievement

    ERIC Educational Resources Information Center

    Leung, Kim Chau

    2015-01-01

    Previous meta-analyses of the effects of peer tutoring on academic achievement have been plagued with theoretical and methodological flaws. Specifically, these studies have not adopted both fixed and mixed effects models for analyzing the effect size; they have not evaluated the moderating effect of some commonly used parameters, such as comparing…

  15. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care.

    PubMed

    Damman, Olga C; Stubbe, Janine H; Hendriks, Michelle; Arah, Onyebuchi A; Spreeuwenberg, Peter; Delnoij, Diana M J; Groenewegen, Peter P

    2009-04-01

    Ratings on the quality of healthcare from the consumer's perspective need to be adjusted for consumer characteristics to ensure fair and accurate comparisons between healthcare providers or health plans. Although multilevel analysis is already considered an appropriate method for analyzing healthcare performance data, it has rarely been used to assess case-mix adjustment of such data. The purpose of this article is to investigate whether multilevel regression analysis is a useful tool to detect case-mix adjusters in consumer assessment of healthcare. We used data on 11,539 consumers from 27 Dutch health plans, which were collected using the Dutch Consumer Quality Index health plan instrument. We conducted multilevel regression analyses of consumers' responses nested within health plans to assess the effects of consumer characteristics on consumer experience. We compared our findings to the results of another methodology: the impact factor approach, which combines the predictive effect of each case-mix variable with its heterogeneity across health plans. Both multilevel regression and impact factor analyses showed that age and education were the most important case-mix adjusters for consumer experience and ratings of health plans. With the exception of age, case-mix adjustment had little impact on the ranking of health plans. On both theoretical and practical grounds, multilevel modeling is useful for adequate case-mix adjustment and analysis of performance ratings.
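
    A minimal sketch of the multilevel screening of case-mix adjusters, with consumers nested within health plans; the data frame cqi (columns rating, age, education, health, plan) is hypothetical.

    ```r
    library(lme4)

    null_fit <- lmer(rating ~ 1 + (1 | plan), data = cqi)
    adj_fit  <- lmer(rating ~ age + education + health + (1 | plan), data = cqi)

    anova(null_fit, adj_fit)   # do the candidate case-mix adjusters matter at all?
    fixef(adj_fit)             # size of each case-mix effect
    ranef(adj_fit)$plan        # plan-level deviations after adjustment (basis for ranking)
    ```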

  16. Multivariate statistical approach to estimate mixing proportions for unknown end members

    USGS Publications Warehouse

    Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.

    2012-01-01

    A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.

  17. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
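
    A minimal sketch of one such model: a smooth session trend per phase, a phase (treatment) effect, a case-level random intercept, and a quasibinomial family for overdispersed counts of successes out of trials. The data frame scd (columns successes, trials, session, phase, case) is hypothetical, and mgcv::gamm stands in for the paper's annotated R syntax.

    ```r
    library(mgcv)

    # phase should be a factor (e.g. baseline vs treatment)
    fit <- gamm(cbind(successes, trials - successes) ~ phase + s(session, by = phase),
                random = list(case = ~ 1),
                family = quasibinomial, data = scd)

    summary(fit$gam)   # phase effect and smooth trend terms
    plot(fit$gam)      # fitted trends by phase
    summary(fit$lme)   # variance of the case-level random intercept
    ```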

  18. Multilevel mixed effects parametric survival models using adaptive Gauss-Hermite quadrature with application to recurrent events and individual participant data meta-analysis.

    PubMed

    Crowther, Michael J; Look, Maxime P; Riley, Richard D

    2014-09-28

    Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
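
    A minimal sketch of the estimation idea: a Weibull proportional hazards model with a normally distributed random intercept per cluster, where each cluster's marginal likelihood integrates out the random effect numerically (the paper uses adaptive Gauss-Hermite quadrature; base-R integrate() stands in here). The data frame surv_dat with columns time, status, x and cluster is hypothetical.

    ```r
    weibull_frailty_nll <- function(par, data) {
      lambda <- exp(par[1]); gamma <- exp(par[2])   # Weibull scale and shape
      beta   <- par[3];      sigma <- exp(par[4])   # covariate effect, random-effect SD
      nll <- 0
      for (d in split(data, data$cluster)) {
        h0 <- lambda * gamma * d$time^(gamma - 1)   # baseline hazard
        H0 <- lambda * d$time^gamma                 # cumulative baseline hazard
        lik <- integrate(function(b) sapply(b, function(bi) {
          eta <- beta * d$x + bi
          prod((h0 * exp(eta))^d$status * exp(-H0 * exp(eta))) * dnorm(bi, 0, sigma)
        }), lower = -Inf, upper = Inf)$value        # marginal likelihood of the cluster
        nll <- nll - log(lik)
      }
      nll
    }

    fit <- optim(c(0, 0, 0, log(0.5)), weibull_frailty_nll, data = surv_dat, hessian = TRUE)
    c(scale = exp(fit$par[1]), shape = exp(fit$par[2]),
      beta = fit$par[3], re_sd = exp(fit$par[4]))
    ```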

  19. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
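
    A minimal sketch of the second-stage association of empirical Bayes predictors taken from separate mixed models for two differently distributed outcomes; the data frame dat (columns id, time, y_binary, y_count) is hypothetical.

    ```r
    library(lme4)

    fit1 <- glmer(y_binary ~ time + (1 + time | id), family = binomial, data = dat)
    fit2 <- glmer(y_count  ~ time + (1 + time | id), family = poisson,  data = dat)

    eb1 <- ranef(fit1)$id   # empirical Bayes predictors, one row per subject
    eb2 <- ranef(fit2)$id

    # Second-stage association between subject-specific slopes of the two outcomes;
    # as the abstract notes, a joint MGLMM is preferable whenever it is computable.
    cor.test(eb1[["time"]], eb2[["time"]])
    plot(eb1[["time"]], eb2[["time"]],
         xlab = "EB slope, binary outcome", ylab = "EB slope, count outcome")
    ```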

  20. Population pharmacokinetics of caffeine in healthy male adults using mixed-effects models.

    PubMed

    Seng, K-Y; Fun, C-Y; Law, Y-L; Lim, W-M; Fan, W; Lim, C-L

    2009-02-01

    Caffeine has been shown to maintain or improve the performance of individuals, but its pharmacokinetic profile for Asians has not been well characterized. In this study, a population pharmacokinetic model for describing the pharmacokinetics of caffeine in Singapore males was developed. The data were also analysed using non-compartmental models. Data gathered from 59 male volunteers, who each ingested a single caffeine capsule in two clinical trials (3 or 5 mg/kg), were analysed via non-linear mixed-effects modelling. The participants' covariates, including age, body weight, and regularity of caffeinated-beverage consumption or smoking, were analysed in a stepwise fashion to identify their potential influence on caffeine pharmacokinetics. The final pharmacostatistical model was then subjected to stochastic simulation to predict the plasma concentrations of caffeine after oral (204, 340 and 476 mg) dosing regimens (repeated dosing every 6, 8 or 12 h) over a hypothetical 3-day period. The data were best described by a one-compartmental model with first-order absorption and first-order elimination. Smoking status was an influential covariate for clearance: clearance (mL/min) = 110*SMOKE + 114, where SMOKE was 0 and 1 for the non-smoker and the smoker, respectively. Interoccasion variability was smaller compared to interindividual variability in clearance, volume and absorption rate (27% vs. 33%, 10% vs. 15% and 23% vs. 51%, respectively). The extrapolated elimination half-lives of caffeine in the non-smokers and the smokers were 4.3 ± 1.5 and 3.0 ± 0.7 h, respectively. Dosing simulations indicated that dosing regimens of 340 mg (repeated every 8 h) and 476 mg (repeated every 6 h) should achieve population-averaged caffeine concentrations within the reported beneficial range (4.5-9 µg/mL) in the non-smokers and the smokers, respectively, over 72 h. The population pharmacokinetic model satisfactorily described the disposition and variability of caffeine in the data. Mixed-effects modelling showed that the dose of caffeine depended on cigarette smoking status.
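
    A minimal sketch of a one-compartment, first-order absorption population model with smoking as a covariate on clearance, written for nlme; the data frame caff (columns conc, time, dose, smoker, id) and the starting values are assumptions.

    ```r
    library(nlme)

    one_cmt <- function(dose, time, lCl, lV, lka) {
      Cl <- exp(lCl); V <- exp(lV); ka <- exp(lka); ke <- Cl / V
      dose * ka / (V * (ka - ke)) * (exp(-ke * time) - exp(-ka * time))
    }

    fit <- nlme(conc ~ one_cmt(dose, time, lCl, lV, lka),
                data   = caff,
                fixed  = list(lCl ~ smoker, lV ~ 1, lka ~ 1),  # smoking shifts log-clearance
                random = lCl + lV ~ 1 | id,                    # between-subject variability
                start  = c(log(7), 0.7, log(36), log(1.5)))    # rough guesses on the log scale

    summary(fit)   # fixed effects: clearance (and smoker shift), volume, absorption rate
    ```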

  21. A Poisson approach to the validation of failure time surrogate endpoints in individual patient data meta-analyses.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan

    2017-01-01

    Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).

  22. Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.

    PubMed

    Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J

    2015-02-01

    The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association between the underlying error-free measurement process and the hazard for survival is of scientific interest. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
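
    A minimal sketch of a joint model for repeated eGFR measurements and time to RRT, using the JM package as one widely used R implementation (the tutorial supplies its own open-source code, which may differ); the data frames crisis (long format) and crisis_id (one row per patient) are hypothetical.

    ```r
    library(nlme)
    library(survival)
    library(JM)

    lme_fit <- lme(egfr ~ years, random = ~ years | id, data = crisis)
    cox_fit <- coxph(Surv(time_rrt, rrt) ~ 1, data = crisis_id, x = TRUE)

    joint_fit <- jointModel(lme_fit, cox_fit, timeVar = "years")
    summary(joint_fit)   # association between the underlying eGFR trajectory and the hazard of RRT
    ```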

  23. Hydrothermal contamination of public supply wells in Napa and Sonoma Valleys, California

    USGS Publications Warehouse

    Forrest, Matthew J.; Kulongoski, Justin T.; Edwards, Matthew S.; Farrar, Christopher D.; Belitz, Kenneth; Norris, Richard D.

    2013-01-01

    Groundwater chemistry and isotope data from 44 public supply wells in the Napa and Sonoma Valleys, California were determined to investigate mixing of relatively shallow groundwater with deeper hydrothermal fluids. Multivariate analyses including Cluster Analyses, Multidimensional Scaling (MDS), Principal Components Analyses (PCA), Analysis of Similarities (ANOSIM), and Similarity Percentage Analyses (SIMPER) were used to elucidate constituent distribution patterns, determine which constituents are significantly associated with these hydrothermal systems, and investigate hydrothermal contamination of local groundwater used for drinking water. Multivariate statistical analyses were essential to this study because traditional methods, such as mixing tests involving single species (e.g. Cl or SiO2) were incapable of quantifying component proportions due to mixing of multiple water types. Based on these analyses, water samples collected from the wells were broadly classified as fresh groundwater, saline waters, hydrothermal fluids, or mixed hydrothermal fluids/meteoric water wells. The Multivariate Mixing and Mass-balance (M3) model was applied in order to determine the proportion of hydrothermal fluids, saline water, and fresh groundwater in each sample. Major ions, isotopes, and physical parameters of the waters were used to characterize the hydrothermal fluids as Na–Cl type, with significant enrichment in the trace elements As, B, F and Li. Five of the wells from this study were classified as hydrothermal, 28 as fresh groundwater, two as saline water, and nine as mixed hydrothermal fluids/meteoric water wells. The M3 mixing-model results indicated that the nine mixed wells contained between 14% and 30% hydrothermal fluids. Further, the chemical analyses show that several of these mixed-water wells have concentrations of As, F and B that exceed drinking-water standards or notification levels due to contamination by hydrothermal fluids.

  24. Axisymmetric magnetic modes of neutron stars having mixed poloidal and toroidal magnetic fields

    NASA Astrophysics Data System (ADS)

    Lee, Umin

    2018-05-01

    We calculate axisymmetric magnetic modes of a neutron star possessing a mixed poloidal and toroidal magnetic field, where the toroidal field is assumed to be proportional to a dimensionless parameter ζ0. Here, we assume an isentropic structure for the neutron star and consider no effects of rotation. Ignoring the equilibrium deformation due to the magnetic field, we employ a polytrope of the index n = 1 as the background model for our modal analyses. For the mixed poloidal and toroidal magnetic field with ζ0 …

  25. A Two-Step Approach for Analysis of Nonignorable Missing Outcomes in Longitudinal Regression: an Application to Upstate KIDS Study.

    PubMed

    Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari

    2017-09-01

    Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages has limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations to the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.
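
    A minimal sketch of the two-step idea: (1) model the missingness indicator with a GLMM and predict the subject-level random effects; (2) refit the outcome model on the observed data with those predictions as a covariate. The data frame kids (columns id, visit, x, a binary outcome y, and miss = as.integer(is.na(y))) is hypothetical.

    ```r
    library(lme4)

    step1 <- glmer(miss ~ visit + x + (1 | id), family = binomial, data = kids)
    re <- ranef(step1)$id
    kids$re_miss <- setNames(re[["(Intercept)"]], rownames(re))[as.character(kids$id)]

    step2 <- glmer(y ~ visit + x + re_miss + (1 | id),
                   family = binomial, data = kids, subset = miss == 0)
    summary(step2)   # outcome model adjusted for the missingness propensity
    ```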

  26. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges.

    PubMed

    Phillips, Charles D

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges.

  27. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges

    PubMed Central

    Phillips, Charles D.

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges. PMID:26740744

  28. Qualitative and numerical analyses of the effects of river inflow variations on mixing diagrams in estuaries

    USGS Publications Warehouse

    Cifuentes, L.A.; Schemel, L.E.; Sharp, J.H.

    1990-01-01

    The effects of river inflow variations on alkalinity/salinity distributions in San Francisco Bay and nitrate/salinity distributions in Delaware Bay are described. One-dimensional, advective-dispersion equations for salinity and the dissolved constituents are solved numerically and are used to simulate mixing in the estuaries. These simulations account for time-varying river inflow, variations in estuarine cross-sectional area, and longitudinally varying dispersion coefficients. The model simulates field observations better than models that use constant hydrodynamic coefficients and uniform estuarine geometry. Furthermore, field observations and model simulations are consistent with theoretical 'predictions' that the curvature of property-salinity distributions depends on the relation between the estuarine residence time and the period of river concentration variation. © 1990.
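
    A minimal sketch of an explicit finite-difference solution of the one-dimensional advection-dispersion equation, dC/dt = -u dC/dx + D d2C/dx2, with a time-varying river boundary concentration; all numbers are illustrative, and constant u and D simplify the variable geometry and dispersion coefficients used in the paper.

    ```r
    advect_disperse <- function(nt = 2000, nx = 100, dx = 1000, dt = 60, u = 0.05, D = 50,
                                C_river = function(t) 1 + 0.5 * sin(t / 1e5)) {
      C <- rep(0, nx)                            # initial concentration along the estuary
      for (k in 1:nt) {
        Cup   <- c(C_river(k * dt), C[-nx])      # upstream (river) boundary, backward shift
        Cdown <- c(C[-1], C[nx])                 # downstream zero-gradient boundary
        C <- C + dt * (-u * (C - Cup) / dx +                   # upwind advection
                         D * (Cdown - 2 * C + Cup) / dx^2)     # dispersion
      }
      C
    }

    plot(advect_disperse(), type = "l",
         xlab = "grid cell (upstream to downstream)", ylab = "concentration")
    ```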

  29. Case study of flexure and shear strengthening of RC beams by CFRP using FEA

    NASA Astrophysics Data System (ADS)

    Jankowiak, Iwona

    2018-01-01

    In this paper, preliminary results of a study on strengthening RC beams by means of CFRP materials under mixed shear-flexural loading conditions are presented. The Finite Element Method analyses were performed using numerical models proposed and verified earlier against the results of laboratory tests [4, 5] for estimating the effectiveness of CFRP strengthening of RC beams under flexure. The currently conducted analyses deal with 3D models of RC beams under mixed shear-flexural loading conditions. The symmetry of the analyzed beams was taken into account (in both directions). The application of the Concrete Damage Plasticity (CDP) model for the RC beam allowed the layout and propagation of cracks leading to failure to be predicted. Different cases of strengthening were analyzed: with the use of a CFRP strip or CFRP closed hoops, as well as with a combination of the above. The preliminary study was carried out and the first results were presented.

  30. A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design

    ERIC Educational Resources Information Center

    Palladino, John M.

    2009-01-01

    Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…

  31. Environmental and international tariffs in a mixed duopoly

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.; Ferreira, Flávio

    2013-10-01

    In this paper, we study the effects of environmental and trade policies in an international mixed duopoly serving two markets, in which the public firm maximizes the sum of consumer surplus and its own profit. We also analyse the effects of privatization. The model has two stages. In the first stage, the governments choose environmental taxes and import tariffs simultaneously. Then, the firms engage in Cournot competition, choosing output levels for the domestic market and for export. We compare the results obtained under the three different orders of moves in the firms' decision making.

  32. Updated constraints on self-interacting dark matter from Supernova 1987A

    NASA Astrophysics Data System (ADS)

    Mahoney, Cameron; Leibovich, Adam K.; Zentner, Andrew R.

    2017-08-01

    We revisit SN1987A constraints on light, hidden sector gauge bosons ("dark photons") that are coupled to the standard model through kinetic mixing with the photon. These constraints are realized because excessive bremsstrahlung radiation of the dark photon can lead to rapid cooling of the SN1987A progenitor core, in contradiction to the observed neutrinos from that event. The models we consider are of interest as phenomenological models of strongly self-interacting dark matter. We clarify several possible ambiguities in the literature and identify errors in prior analyses. We find constraints on the dark photon mixing parameter that are in rough agreement with the early estimates of Dent et al. [arXiv:1201.2683.], but only because significant errors in their analyses fortuitously canceled. Our constraints are in good agreement with subsequent analyses by Rrapaj & Reddy [Phys. Rev. C 94, 045805 (2016)., 10.1103/PhysRevC.94.045805] and Hardy & Lasenby [J. High Energy Phys. 02 (2017) 33., 10.1007/JHEP02(2017)033]. We estimate the dark photon bremsstrahlung rate using one-pion exchange (OPE), while Rrapaj & Reddy use a soft radiation approximation (SRA) to exploit measured nuclear scattering cross sections. We find that the differences between mixing parameter constraints obtained through the OPE approximation or the SRA approximation are roughly a factor of ~2-3. Hardy & Lasenby [J. High Energy Phys. 02 (2017) 33., 10.1007/JHEP02(2017)033] include plasma effects in their calculations, finding significantly weaker constraints on dark photon mixing for dark photon masses below ~10 MeV. We do not consider plasma effects. Lastly, we point out that the properties of the SN1987A progenitor core remain somewhat uncertain and that this uncertainty alone causes uncertainty of at least a factor of ~2-3 in the excluded values of the dark photon mixing parameter. Further refinement of these estimates is unwarranted until either the interior of the SN1987A progenitor is better understood or additional, large, and heretofore neglected effects, such as the plasma interactions studied by Hardy & Lasenby [J. High Energy Phys. 02 (2017) 33., 10.1007/JHEP02(2017)033], are identified.

  33. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  34. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263

  35. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    PubMed Central

    Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs

    2018-01-01

    Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima with possibly the associated effect sizes to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE) that only uses peak locations, fixed effects, and random effects meta-analysis that take into account both peak location and height] and the amount of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combine these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344

  36. Environmental effects of interstate power trading on electricity consumption mixes.

    PubMed

    Marriott, Joe; Matthews, H Scott

    2005-11-15

    Although many studies of electricity generation use national or state average generation mix assumptions, in reality a great deal of electricity is transferred between states with very different mixes of fossil and renewable fuels, and using the average numbers could result in incorrect conclusions in these studies. We create electricity consumption profiles for each state and for key industry sectors in the U.S. based on existing state generation profiles, net state power imports, industry presence by state, and an optimization model to estimate interstate electricity trading. Using these "consumption mixes" can provide a more accurate assessment of electricity use in life-cycle analyses. We conclude that the published generation mixes for states that import power are misleading, since the power consumed in-state has a different makeup than the power that was generated. And, while most industry sectors have consumption mixes similar to the U.S. average, some of the most critical sectors of the economy--such as resource extraction and material processing sectors--are very different. This result does validate the average mix assumption made in many environmental assessments, but it is important to accurately quantify the generation methods for electricity used when doing life-cycle analyses.

  17. Impact of tree priors in species delimitation and phylogenetics of the genus Oligoryzomys (Rodentia: Cricetidae).

    PubMed

    da Cruz, Marcos de O R; Weksler, Marcelo

    2018-02-01

    The use of genetic data and tree-based algorithms to delimit evolutionary lineages is becoming an important practice in taxonomic identification, especially in morphologically cryptic groups. The effects of different phylogenetic and/or coalescent models on species delimitation analyses, however, are not clear. In this paper, we assess the impact of different evolutionary priors on phylogenetic estimation, species delimitation, and molecular dating of the genus Oligoryzomys (Mammalia: Rodentia), a group with complex taxonomy and morphologically cryptic species. Phylogenetic and coalescent analyses included 20 of the 24 recognized species of the genus, comprising 416 Cytochrome b sequences, 26 Cytochrome c oxidase I sequences, and 27 Beta-Fibrinogen Intron 7 sequences. For species delimitation, we employed the General Mixed Yule Coalescent (GMYC) and Bayesian Poisson tree processes (bPTP) analyses, and contrasted 4 genealogical and phylogenetic models: Pure-birth (Yule), Constant Population Size Coalescent, Multiple Species Coalescent, and a mixed Yule-Coalescent model. GMYC analyses of trees from different genealogical models resulted in similar species delimitations and phylogenetic relationships, with incongruence restricted to areas of poor nodal support. bPTP results, however, differed significantly from GMYC for 5 taxa. The early diversification of Oligoryzomys was estimated to have occurred in the Early Pleistocene, between 0.7 and 2.6 MYA. The mixed Yule-Coalescent model, however, recovered younger dating estimates for Oligoryzomys diversification and for the threshold of the speciation-coalescent horizon in GMYC. Eight of the 20 included Oligoryzomys species were identified as comprising two or more independent evolutionary units, indicating that the current taxonomy of Oligoryzomys is still unsettled. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Discovering human germ cell mutagens with whole genome sequencing: Insights from power calculations reveal the importance of controlling for between-family variability.

    PubMed

    Webster, R J; Williams, A; Marchetti, F; Yauk, C L

    2018-07-01

    Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and mixed-effects models sampling two to four siblings per family). Assumptions were made based on parameters from the existing literature, such as the mutation-by-paternal age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4-28 four-sibling families per treatment group when the increase in mutations ranges from 40% down to 10%, respectively. Modeling family variability using mixed-effects models provided a reduction in sample size compared to a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
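
    A minimal simulation sketch of the kind of power calculation described above, under loosely inspired but hypothetical parameter values (four-sibling families, a family-specific paternal-age slope, a constant exposure effect on mutation counts), using lme4/lmerTest in R:

        library(lme4)
        library(lmerTest)  # Satterthwaite p-values for fixed effects

        sim_once <- function(n_fam = 20, effect = 3) {
          fam_id <- rep(seq_len(2 * n_fam), each = 4)      # 4 siblings per family
          treat  <- rep(c(0, 1), each = 4 * n_fam)         # exposure group (by family)
          age_c  <- rnorm(length(fam_id), 0, 5)            # centred paternal age
          b0     <- rnorm(2 * n_fam, 0, 2)[fam_id]         # family intercepts
          b1     <- rnorm(2 * n_fam, 1.5, 0.5)[fam_id]     # family-specific age slopes
          y      <- 60 + b0 + b1 * age_c + effect * treat + rnorm(length(fam_id), sd = 8)
          fam    <- factor(fam_id)
          fit <- lmer(y ~ age_c + treat + (age_c | fam))
          summary(fit)$coefficients["treat", "Pr(>|t|)"]
        }

        set.seed(1)
        pvals <- replicate(100, sim_once())
        mean(pvals < 0.05)  # empirical power for the constant exposure effect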

  19. Performance of nonlinear mixed effects models in the presence of informative dropout.

    PubMed

    Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H

    2015-01-01

    Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
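
    The mechanism can be illustrated outside NONMEM with a small R sketch (hypothetical values, not the study's simulation setup): when dropout depends on a subject's unobserved random slope, a naive mixed-model fit of the observed data overestimates the mean slope, while a fit to the complete data does not.

        library(lme4)
        set.seed(42)
        n  <- 200; times <- 0:6
        id <- rep(seq_len(n), each = length(times)); t <- rep(times, n)
        slope_i <- rnorm(n, mean = 1.0, sd = 0.5)       # subject-specific true slopes
        y  <- 10 + slope_i[id] * t + rnorm(length(t), sd = 1)
        dat <- data.frame(id = factor(id), t = t, y = y)

        # Informative dropout: subjects with shallow slopes leave after visit 2
        observed <- t <= ifelse(slope_i[id] < 0.8, 2, 6)

        full  <- lmer(y ~ t + (t | id), data = dat)             # complete data
        naive <- lmer(y ~ t + (t | id), data = dat[observed, ]) # dropout ignored
        c(full = fixef(full)["t"], naive = fixef(naive)["t"])   # naive slope biased upward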

  20. Modelling exhaust plume mixing in the near field of an aircraft

    NASA Astrophysics Data System (ADS)

    Garnier, F.; Brunet, S.; Jacquin, L.

    1997-11-01

    A simplified approach has been applied to analyse the mixing and entrainment processes of the engine exhaust through their interaction with the vortex wake of an aircraft. Our investigation is focused on the near field, extending from the exit nozzle to about 30 s after the wake is generated, in the vortex phase. This study was performed by using an integral model and a numerical simulation for two large civil aircraft: a two-engine Airbus 330 and a four-engine Boeing 747. The influence of the wing-tip vortices on the dilution ratio (defined as a tracer concentration) is shown. The mixing process is also affected by the buoyancy effect, but only after the jet regime, when the trapping in the vortex core has occurred. In the early wake, the engine jet location (i.e. inboard or outboard engine jet) has an important influence on the mixing rate. The plume streamlines inside the vortices are subject to distortion and stretching, and the role of the descent of the vortices on the maximum tracer concentration is discussed. Qualitative comparison with a contrail photograph shows similar features. Finally, tracer concentrations along the inboard engine centreline of the B-747 are compared with other theoretical analyses and measured data.

  1. The effects of green areas on air surface temperature of the Kuala Lumpur city using WRF-ARW modelling and Remote Sensing technique

    NASA Astrophysics Data System (ADS)

    Isa, N. A.; Mohd, W. M. N. Wan; Salleh, S. A.; Ooi, M. C. G.

    2018-02-01

    Mature trees contain a high concentration of chlorophyll that encourages photosynthesis. This process produces oxygen as a by-product, releases it into the atmosphere and helps lower the ambient temperature. This study attempts to analyse the effect of green areas on the air surface temperature of Kuala Lumpur city. The air surface temperatures for two dates, March 2006 and March 2016, were simulated using the Weather Research and Forecasting (WRF) model. The green area in the city was extracted using the Normalized Difference Vegetation Index (NDVI) from two Landsat satellite images. The relationship between air surface temperature and green area was analysed using linear regression models. The study found that green areas significantly affect the distribution of air temperature within the city. A strong negative correlation was identified, indicating that areas with higher NDVI values tend to have lower air surface temperatures within the focus study area. It was also found that different urban settings in mixed built-up and vegetated areas resulted in different distributions of air surface temperature. Future studies should focus on analysing the air surface temperature within mixed built-up and vegetated areas.
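
    A minimal sketch of the kind of regression reported above, with hypothetical NDVI and temperature values standing in for the Landsat and WRF outputs:

        ndvi <- runif(500, 0.1, 0.8)                  # hypothetical NDVI per grid cell
        temp <- 33 - 4 * ndvi + rnorm(500, sd = 0.8)  # hypothetical surface air temperature (deg C)
        fit  <- lm(temp ~ ndvi)
        summary(fit)$coefficients                     # negative NDVI slope
        cor(ndvi, temp)                               # strong negative correlation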

  2. Influence assessment in censored mixed-effects models using the multivariate Student’s-t distribution

    PubMed Central

    Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.

    2015-01-01

    In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and heavy tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871

  3. Analyzing Mixed-Dyadic Data Using Structural Equation Models

    ERIC Educational Resources Information Center

    Peugh, James L.; DiLillo, David; Panuzio, Jillian

    2013-01-01

    Mixed-dyadic data, collected from distinguishable (nonexchangeable) or indistinguishable (exchangeable) dyads, require statistical analysis techniques that model the variation within dyads and between dyads appropriately. The purpose of this article is to provide a tutorial for performing structural equation modeling analyses of cross-sectional…
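
    One model family such a tutorial typically covers is the actor-partner interdependence model for distinguishable dyads. A minimal lavaan sketch, with hypothetical variable names (x_1, y_1 for partner 1; x_2, y_2 for partner 2) and a hypothetical wide-format data frame 'dyads':

        library(lavaan)
        apim <- '
          y_1 ~ a1 * x_1 + p12 * x_2   # actor and partner effects on partner 1
          y_2 ~ a2 * x_2 + p21 * x_1   # actor and partner effects on partner 2
          x_1 ~~ x_2                   # predictors may covary within a dyad
          y_1 ~~ y_2                   # residual covariance between outcomes
        '
        # fit <- sem(apim, data = dyads)     # one row per dyad
        # summary(fit, standardized = TRUE)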

  4. Analytical solution for reactive solute transport considering incomplete mixing within a reference elementary volume

    NASA Astrophysics Data System (ADS)

    Chiogna, Gabriele; Bellin, Alberto

    2013-05-01

    The laboratory experiments of Gramling et al. (2002) showed that incomplete mixing at the pore scale exerts a significant impact on transport of reactive solutes and that assuming complete mixing leads to overestimation of product concentration in bimolecular reactions. Subsequently, several attempts have been made to model this experiment, either considering spatial segregation of the reactants, non-Fickian transport using a Continuous Time Random Walk (CTRW), or an effective upscaled time-dependent kinetic reaction term. Previous analyses of these experimental results showed that, at the Darcy scale, conservative solute transport is well described by a standard advection dispersion equation, which assumes complete mixing at the pore scale. However, reactive transport is significantly affected by incomplete mixing at smaller scales, i.e., within a reference elementary volume (REV). We consider here the family of equilibrium reactions for which the concentration of the reactants and the product can be expressed as a function of the mixing ratio, the concentration of a fictitious non-reactive solute. For this type of reaction we propose, in agreement with previous studies, to model the effect of incomplete mixing at scales smaller than the Darcy scale by assuming that the mixing ratio is distributed within an REV according to a Beta distribution. We compute the parameters of the Beta model by imposing that the mean concentration is equal to the value that the concentration assumes at the continuum Darcy scale, while the variance decays with time as a power law. We show that our model reproduces the concentration profiles of the reaction product measured in the Gramling et al. (2002) experiments using the transport parameters obtained from conservative experiments and instantaneous reaction kinetics. The results are obtained applying analytical solutions both for conservative and for reactive solute transport, thereby providing a method to handle the effect of incomplete mixing on multispecies reactive solute transport that is simpler than other previously developed methods.
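
    A short numerical sketch of the Beta closure described above (hypothetical parameter values): the Beta shape parameters follow from the Darcy-scale mean and a power-law variance, and the expected product of an instantaneous 1:1 reaction is the Beta-weighted average of min(X, 1 - X) times the initial concentration.

        beta_params <- function(m, v) {      # mean m, variance v < m * (1 - m)
          k <- m * (1 - m) / v - 1
          c(m * k, (1 - m) * k)              # shape1, shape2
        }
        m  <- 0.5; C0 <- 1.0                 # hypothetical mixing-ratio mean and initial concentration
        t  <- c(1, 10, 100)
        v  <- 0.08 * t^(-0.5)                # hypothetical power-law variance decay
        product <- sapply(v, function(vi) {
          p <- beta_params(m, vi)
          integrate(function(x) pmin(x, 1 - x) * C0 * dbeta(x, p[1], p[2]), 0, 1)$value
        })
        rbind(time = t, expected_product = product)  # approaches the fully mixed value 0.5 * C0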

  5. Characteristics of Aspergillus fumigatus in Association with Stenotrophomonas maltophilia in an In Vitro Model of Mixed Biofilm

    PubMed Central

    Melloul, Elise; Luiggi, Stéphanie; Anaïs, Leslie; Arné, Pascal; Costa, Jean-Marc; Fihman, Vincent; Briard, Benoit; Dannaoui, Eric; Guillot, Jacques; Decousser, Jean-Winoc; Beauvais, Anne; Botterel, Françoise

    2016-01-01

    Background: Biofilms are communal structures of microorganisms that have long been associated with a variety of persistent infections responding poorly to conventional antibiotic or antifungal therapy. The fungus Aspergillus fumigatus and the bacterium Stenotrophomonas maltophilia are examples of microorganisms that can coexist to form a biofilm, especially in the respiratory tract of immunocompromised or cystic fibrosis patients. The aim of the present study was to develop and assess an in vitro model of a mixed biofilm associating S. maltophilia and A. fumigatus by using analytical and quantitative approaches. Materials and Methods: An A. fumigatus strain (ATCC 13073) expressing a Green Fluorescent Protein (GFP) and an S. maltophilia strain (ATCC 13637) were used. Fungal and bacterial inocula (10^5 conidia/mL and 10^6 cells/mL, respectively) were simultaneously deposited to initiate the development of an in vitro mixed biofilm on polystyrene supports at 37°C for 24 h. The structure of the biofilm was analysed via qualitative microscopic techniques, including scanning electron, transmission electron, and fluorescence microscopy, and by quantitative techniques including qPCR and crystal violet staining. Results: Analytical methods revealed typical biofilm structures, with production of an extracellular matrix (ECM) enclosing fungal hyphae and bacteria. Quantitative methods showed a decrease of A. fumigatus growth and ECM production in the mixed biofilm, with an antibiosis effect of the bacteria on the fungus seen as abortive hyphae, limited hyphal growth, fewer conidia, and thicker fungal cell walls. Conclusion: For the first time, a mixed A. fumigatus-S. maltophilia biofilm was validated by various analytical and quantitative approaches and the bacterial antibiosis effect on the fungus was demonstrated. The mixed biofilm model is a useful experimental system for evaluating the efficacy of antimicrobial agents and for analysing interactions between the biofilm and the airway epithelium. PMID:27870863

  6. Application of CFX-10 to the Investigation of RPV Coolant Mixing in VVER Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moretti, Fabio; Melideo, Daniele; Terzuoli, Fulvio

    2006-07-01

    Coolant mixing phenomena occurring in the pressure vessel of a nuclear reactor constitute one of the main objectives of investigation for researchers concerned with nuclear reactor safety. For instance, mixing plays a relevant role in reactivity-induced accidents initiated by de-boration or boron dilution events, followed by transport of a de-borated slug into the vessel of a pressurized water reactor. Another example is temperature mixing, which may significantly affect the consequences of a pressurized thermal shock scenario. Predictive analysis of mixing phenomena is strongly improved by the availability of computational tools able to cope with the inherent three-dimensionality of such problems, like system codes with three-dimensional capabilities and Computational Fluid Dynamics (CFD) codes. The present paper deals with numerical analyses of coolant mixing in the reactor pressure vessel of a VVER-1000 reactor, performed with the ANSYS CFX-10 CFD code. In particular, the 'swirl' effect that has been observed to take place in the downcomer of this kind of reactor has been addressed, with the aim of assessing the capability of the code to predict that effect and of understanding the reasons for its occurrence. Results have been compared against experimental data from the V1000CT-2 Benchmark. Moreover, a boron mixing problem has been investigated, in the hypothesis that a de-borated slug, transported by natural circulation, enters the vessel. Sensitivity analyses have been conducted on some geometrical features, model parameters and boundary conditions. (authors)

  7. Genetic overlap between diagnostic subtypes of ischemic stroke.

    PubMed

    Holliday, Elizabeth G; Traylor, Matthew; Malik, Rainer; Bevan, Steve; Falcone, Guido; Hopewell, Jemma C; Cheng, Yu-Ching; Cotlarciuc, Ioana; Bis, Joshua C; Boerwinkle, Eric; Boncoraglio, Giorgio B; Clarke, Robert; Cole, John W; Fornage, Myriam; Furie, Karen L; Ikram, M Arfan; Jannes, Jim; Kittner, Steven J; Lincz, Lisa F; Maguire, Jane M; Meschia, James F; Mosley, Thomas H; Nalls, Mike A; Oldmeadow, Christopher; Parati, Eugenio A; Psaty, Bruce M; Rothwell, Peter M; Seshadri, Sudha; Scott, Rodney J; Sharma, Pankaj; Sudlow, Cathie; Wiggins, Kerri L; Worrall, Bradford B; Rosand, Jonathan; Mitchell, Braxton D; Dichgans, Martin; Markus, Hugh S; Levi, Christopher; Attia, John; Wray, Naomi R

    2015-03-01

    Despite moderate heritability, the phenotypic heterogeneity of ischemic stroke has hampered gene discovery, motivating analyses of diagnostic subtypes with reduced sample sizes. We assessed evidence for a shared genetic basis among the 3 major subtypes: large artery atherosclerosis (LAA), cardioembolism, and small vessel disease (SVD), to inform potential cross-subtype analyses. Analyses used genome-wide summary data for 12 389 ischemic stroke cases (including 2167 LAA, 2405 cardioembolism, and 1854 SVD) and 62 004 controls from the Metastroke consortium. For 4561 cases and 7094 controls, individual-level genotype data were also available. Genetic correlations between subtypes were estimated using linear mixed models and polygenic profile scores. Meta-analysis of a combined LAA-SVD phenotype (4021 cases and 51 976 controls) was performed to identify shared risk alleles. High genetic correlation was identified between LAA and SVD using linear mixed models (rg=0.96, SE=0.47, P=9×10^-4) and profile scores (rg=0.72; 95% confidence interval, 0.52-0.93). Between LAA and cardioembolism and SVD and cardioembolism, correlation was moderate using linear mixed models but not significantly different from zero for profile scoring. Joint meta-analysis of LAA and SVD identified strong association (P=1×10^-7) for single nucleotide polymorphisms near the opioid receptor μ1 (OPRM1) gene. Our results suggest that LAA and SVD, which have been hitherto treated as genetically distinct, may share a substantial genetic component. Combined analyses of LAA and SVD may increase power to identify small-effect alleles influencing shared pathophysiological processes. © 2015 American Heart Association, Inc.

  8. Evidence of a Major Gene From Bayesian Segregation Analyses of Liability to Osteochondral Diseases in Pigs

    PubMed Central

    Kadarmideen, Haja N.; Janss, Luc L. G.

    2005-01-01

    Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384–37.81) compared to the polygenic variance (σ_u²). Consequently, heritabilities for a mixed inheritance (range 0.65–0.90) were much higher than the heritabilities from the polygenes. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. Among the variants, the MITM with an informative prior on σ_u² showed significant improvement in marginal distributions and accuracy of parameters. MITM with a “reduced polygenic model” for parameterization of polygenic effects avoided convergence problems and poor mixing encountered in an “individual polygenic model.” In all cases, “shrinkage estimators” for fixed effects avoided unidentifiability for these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans. PMID:16020792

  9. Modeling condensation with a noncondensable gas for mixed convection flow

    NASA Astrophysics Data System (ADS)

    Liao, Yehong

    2007-05-01

    This research theoretically developed a novel mixed convection model for condensation with a noncondensable gas. The model developed herein comprises three components: a convection regime map; a mixed convection correlation; and a generalized diffusion layer model. These components were developed in a way to be consistent with the three-level methodology in MELCOR. The overall mixed convection model was implemented into MELCOR and satisfactorily validated with data covering a wide variety of test conditions. In the development of the convection regime map, two analyses with approximations of the local similarity method were performed to solve the multi-component two-phase boundary layer equations. The first analysis studied the effects of the bulk velocity on a basic natural convection condensation process and set up conditions to distinguish natural convection from mixed convection. It was found that the superimposed velocity increases condensation heat transfer by sweeping away the noncondensable gas accumulated at the condensation boundary. The second analysis studied the effects of the buoyancy force on a basic forced convection condensation process and set up conditions to distinguish forced convection from mixed convection. It was found that the superimposed buoyancy force increases condensation heat transfer by thinning the liquid film and creating a steeper noncondensable gas concentration profile near the condensation interface. In the development of the mixed convection correlation accounting for suction effects, numerical data were obtained from boundary layer analysis for the three convection regimes and used to fit a curve for the Nusselt number of the mixed convection regime as a function of the Nusselt numbers of the natural and forced convection regimes. In the development of the generalized diffusion layer model, the driving potential for mass transfer was expressed as the temperature difference between the bulk and the liquid-gas interface using the Clausius-Clapeyron equation. The model was developed on a mass basis instead of a molar basis to be consistent with general conservation equations. It was found that vapor diffusion is driven not only by a gradient of the molar fraction but also by a gradient of the mixture molecular weight in the diffusion layer.
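
    For reference, the driving-potential step described above rests on the standard Clausius-Clapeyron relation; a hedged sketch (ideal-gas vapor, liquid specific volume neglected; the exact form used in the model is not given in the record) is

        \frac{dP_{\mathrm{sat}}}{dT} = \frac{h_{fg}\,P_{\mathrm{sat}}}{R_v\,T^2}
        \quad\Longrightarrow\quad
        P_{\mathrm{sat}}(T_b) - P_{\mathrm{sat}}(T_i) \approx \frac{h_{fg}\,P_{\mathrm{sat}}}{R_v\,T^2}\,\bigl(T_b - T_i\bigr)

    where T_b and T_i are the bulk and interface temperatures, h_fg is the latent heat of vaporization, and R_v is the specific gas constant of the vapor.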

  10. Panel Stiffener Debonding Analysis using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2008-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.

  11. Panel-Stiffener Debonding and Analysis Using a Shell/3D Modeling Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Ratcliffe, James G.; Minguet, Pierre J.

    2007-01-01

    A shear loaded, stringer reinforced composite panel is analyzed to evaluate the fidelity of computational fracture mechanics analyses of complex structures. Shear loading causes the panel to buckle. The resulting out-of-plane deformations initiate skin/stringer separation at the location of an embedded defect. The panel and surrounding load fixture were modeled with shell elements. A small section of the stringer foot, web and noodle as well as the panel skin near the delamination front were modeled with a local 3D solid model. Across the width of the stringer foot, the mixed-mode strain energy release rates were calculated using the virtual crack closure technique. A failure index was calculated by correlating the results with a mixed-mode failure criterion of the graphite/epoxy material. The objective was to study the effect of the fidelity of the local 3D finite element model on the computed mixed-mode strain energy release rates and the failure index.
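
    Both records rely on a mixed-mode failure criterion for the graphite/epoxy material but do not name it; a commonly used form is the Benzeggagh-Kenane (B-K) criterion, for which the failure index compares the total energy release rate to the mixed-mode toughness:

        G_T = G_I + G_{II} + G_{III}, \qquad
        G_c = G_{Ic} + \left(G_{IIc} - G_{Ic}\right)\left(\frac{G_{II} + G_{III}}{G_T}\right)^{\eta}, \qquad
        \mathrm{FI} = \frac{G_T}{G_c}

    with delamination growth predicted where FI reaches one; G_Ic, G_IIc and the exponent η are material properties obtained from fracture toughness tests.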

  12. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    A low falling number and discounting of grain when it is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R. All estimated variance components and their proportions to the total phenotypic variance were highly significant for both sets of genotypes, which were validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD adapted cultivars (53%) compared to that in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
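
    The multiplicative core of an AMMI analysis can be sketched in a few lines of base R (hypothetical genotype-by-environment means, not the study's data): fit the additive genotype and environment main effects, then decompose the interaction residuals with a singular value decomposition whose leading axes form the bi-plot.

        set.seed(7)
        G <- 6; E <- 5
        Y <- outer(rnorm(G), rnorm(E), "+") + matrix(rnorm(G * E, sd = 0.5), G, E)
        dimnames(Y) <- list(paste0("geno", 1:G), paste0("env", 1:E))

        grand <- mean(Y)
        g_eff <- rowMeans(Y) - grand                      # genotype main effects
        e_eff <- colMeans(Y) - grand                      # environment main effects
        inter <- Y - outer(g_eff, e_eff, "+") - grand     # interaction residuals

        dec <- svd(inter)                                 # multiplicative terms (IPCA axes)
        scores_g <- dec$u[, 1:2] %*% diag(sqrt(dec$d[1:2]))
        scores_e <- dec$v[, 1:2] %*% diag(sqrt(dec$d[1:2]))
        # plot(scores_g); points(scores_e)  # genotypes near the origin are the most stable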

  13. Environmental effects of interstate power trading on electricity consumption mixes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe Marriott; H. Scott Matthews

    2005-11-15

    Although many studies of electricity generation use national or state average generation mix assumptions, in reality a great deal of electricity is transferred between states with very different mixes of fossil and renewable fuels, and using the average numbers could result in incorrect conclusions in these studies. The authors create electricity consumption profiles for each state and for key industry sectors in the U.S. based on existing state generation profiles, net state power imports, industry presence by state, and an optimization model to estimate interstate electricity trading. Using these 'consumption mixes' can provide a more accurate assessment of electricity use in life-cycle analyses. It is concluded that the published generation mixes for states that import power are misleading, since the power consumed in-state has a different makeup than the power that was generated. And, while most industry sectors have consumption mixes similar to the U.S. average, some of the most critical sectors of the economy - such as resource extraction and material processing sectors - are very different. This result does validate the average mix assumption made in many environmental assessments, but it is important to accurately quantify the generation methods for electricity used when doing life-cycle analyses. 16 refs., 7 figs., 2 tabs.

  14. Computational Analyses of Pressurization in Cryogenic Tanks

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chun P.; Field, Robert E.; Ryan, Harry

    2010-01-01

    A comprehensive numerical framework utilizing multi-element unstructured CFD and rigorous real fluid property routines has been developed to carry out analyses of propellant tank and delivery systems at NASA SSC. Traditionally CFD modeling of pressurization and mixing in cryogenic tanks has been difficult primarily because the fluids in the tank co-exist in different sub-critical and supercritical states with largely varying properties that have to be accurately accounted for in order to predict the correct mixing and phase change between the ullage and the propellant. For example, during tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant including heat transfer and phase change effects and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. In our modeling framework, we incorporated two different approaches to real fluids modeling: (a) the first approach is based on the HBMS model developed by Hirschfelder, Beuler, McGee and Sutton and (b) the second approach is based on a cubic equation of state developed by Soave, Redlich and Kwong (SRK). Both approaches cover fluid properties and property variation spanning sub-critical gas and liquid states as well as the supercritical states. Both models were rigorously tested and properties for common fluids such as oxygen, nitrogen, hydrogen etc were compared against NIST data in both the sub-critical as well as supercritical regimes.
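
    For reference, the second real-fluid option mentioned above, the Soave-Redlich-Kwong (SRK) cubic equation of state, has the standard form

        P = \frac{RT}{V_m - b} - \frac{a\,\alpha(T)}{V_m (V_m + b)}, \qquad
        a = 0.42748\,\frac{R^2 T_c^2}{P_c}, \qquad
        b = 0.08664\,\frac{R T_c}{P_c},

        \alpha(T) = \left[1 + \left(0.480 + 1.574\,\omega - 0.176\,\omega^2\right)\left(1 - \sqrt{T/T_c}\right)\right]^2

    where T_c, P_c and ω are the critical temperature, critical pressure and acentric factor of the fluid; the specific implementation details of the framework described in the record are not given there.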

  15. Cost-effectiveness of rivaroxaban for stroke prevention in atrial fibrillation in the Portuguese setting.

    PubMed

    Morais, João; Aguiar, Carlos; McLeod, Euan; Chatzitheofilou, Ismini; Fonseca Santos, Isabel; Pereira, Sónia

    2014-09-01

    To project the long-term cost-effectiveness of treating non-valvular atrial fibrillation (AF) patients for stroke prevention with rivaroxaban compared to warfarin in Portugal. A Markov model was used that included health and treatment states describing the management and consequences of AF and its treatment. The model's time horizon was set at a patient's lifetime and each cycle at three months. The analysis was conducted from a societal perspective and a 5% discount rate was applied to both costs and outcomes. Treatment effect data were obtained from the pivotal phase III ROCKET AF trial. The model was also populated with utility values obtained from the literature and with cost data derived from official Portuguese sources. The outcomes of the model included life-years, quality-adjusted life-years (QALYs), incremental costs, and associated incremental cost-effectiveness ratios (ICERs). Extensive sensitivity analyses were undertaken to further assess the findings of the model. As there is evidence indicating underuse and underprescription of warfarin in Portugal, an additional analysis was performed using a mixed comparator composed of no treatment, aspirin, and warfarin, which better reflects real-world prescribing in Portugal. This cost-effectiveness analysis produced an ICER of €3895/QALY for the base-case analysis (vs. warfarin) and of €6697/QALY for the real-world prescribing analysis (vs. mixed comparator). The findings were robust when tested in sensitivity analyses. The results showed that rivaroxaban may be a cost-effective alternative compared with warfarin or real-world prescribing in Portugal. Copyright © 2014 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.
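
    A stripped-down sketch of this type of Markov cohort model in R, with entirely hypothetical transition probabilities, costs and utilities (three states, 3-month cycles, 5% annual discounting), just to show the mechanics behind the reported ICERs:

        run_markov <- function(P, cost, utility, cycles = 160, disc = 0.05) {
          state <- c(AF = 1, stroke = 0, dead = 0)       # cohort starts event-free
          tot_cost <- tot_qaly <- 0
          for (k in seq_len(cycles)) {
            state <- as.numeric(state %*% P)             # one 3-month transition
            df <- 1 / (1 + disc)^(k / 4)                 # discount factor (k/4 years)
            tot_cost <- tot_cost + df * sum(state * cost)
            tot_qaly <- tot_qaly + df * sum(state * utility) / 4  # utilities per year
          }
          c(cost = tot_cost, qaly = tot_qaly)
        }

        s <- c("AF", "stroke", "dead")
        P_warf <- matrix(c(0.975, 0.015, 0.010,  0, 0.97, 0.03,  0, 0, 1),
                         3, byrow = TRUE, dimnames = list(s, s))
        P_riva <- matrix(c(0.980, 0.011, 0.009,  0, 0.97, 0.03,  0, 0, 1),
                         3, byrow = TRUE, dimnames = list(s, s))
        res_w <- run_markov(P_warf, cost = c(90, 1500, 0),  utility = c(0.78, 0.60, 0))
        res_r <- run_markov(P_riva, cost = c(260, 1500, 0), utility = c(0.78, 0.60, 0))
        (res_r["cost"] - res_w["cost"]) / (res_r["qaly"] - res_w["qaly"])  # ICER, EUR per QALY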

  16. Effects of Morphological Family Size for Young Readers

    ERIC Educational Resources Information Center

    Perdijk, Kors; Schreuder, Robert; Baayen, R. Harald; Verhoeven, Ludo

    2012-01-01

    Dutch children, from the second and fourth grade of primary school, were each given a visual lexical decision test on 210 Dutch monomorphemic words. After removing words not recognized by a majority of the younger group, (lexical) decisions were analysed by mixed-model regression methods to see whether morphological Family Size influenced decision…

  17. Characterisation and modelling of mixing processes in groundwaters of a potential geological repository for nuclear wastes in crystalline rocks of Sweden.

    PubMed

    Gómez, Javier B; Gimeno, María J; Auqué, Luis F; Acero, Patricia

    2014-01-15

    This paper presents the mixing modelling results for the hydrogeochemical characterisation of groundwaters in the Laxemar area (Sweden). This area is one of the two sites that have been investigated, under the financial patronage of the Swedish Nuclear Fuel and Waste Management Co. (SKB), as possible candidates for hosting the proposed repository for the long-term storage of spent nuclear fuel. Classical geochemical modelling, interpreted in the light of the palaeohydrogeological history of the system, has shown that the driving process in the geochemical evolution of this groundwater system is the mixing between four end-member waters: a deep and old saline water, a glacial meltwater, an old marine water, and a meteoric water. In this paper we focus on mixing and its effects on the final chemical composition of the groundwaters, using a comprehensive methodology that combines principal component analysis with mass balance calculations. This methodology allows us to test several combinations of end-member waters and several combinations of compositional variables in order to find optimal solutions in terms of mixing proportions. We have applied this methodology to a dataset of 287 groundwater samples from the Laxemar area collected and analysed by SKB. The best model found uses four conservative elements (Cl, Br, oxygen-18 and deuterium), and computes mixing proportions with respect to three end-member waters (saline, glacial and meteoric). Once the first-order effect of mixing has been taken into account, water-rock interaction can be used to explain the remaining variability. In this way, the chemistry of each water sample can be obtained by using the mixing proportions for the conservative elements, only affected by mixing, or by combining the mixing proportions and the chemical reactions for the non-conservative elements in the system, establishing the basis for predictive calculations. © 2013 Elsevier B.V. All rights reserved.
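
    The mixing-proportion step can be sketched with ordinary least squares in base R (hypothetical, standardized end-member compositions; the actual study couples principal component analysis with mass balance and would also enforce non-negativity):

        E <- rbind(Cl   = c(saline = 1.8, glacial = -1.2, meteoric = -0.6),
                   Br   = c(1.9, -1.1, -0.7),
                   d18O = c(0.4, -1.9,  0.9),
                   d2H  = c(0.5, -1.8,  1.0))
        sample <- c(Cl = 0.2, Br = 0.2, d18O = -0.4, d2H = -0.3)  # one groundwater sample

        A <- rbind(E, sum_to_one = rep(1000, 3))   # heavily weighted sum-to-one constraint
        b <- c(sample, sum_to_one = 1000)
        x <- qr.solve(A, b)                        # least-squares mixing proportions
        round(x, 3)                                # saline / glacial / meteoric shares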

  18. Global analysis of fermion mixing with exotics

    NASA Technical Reports Server (NTRS)

    Nardi, Enrico; Roulet, Esteban; Tommasini, Daniele

    1991-01-01

    Limits on deviations of the lepton and quark weak couplings from their standard-model values are analyzed in a general class of models in which the known fermions are allowed to mix with new heavy particles having exotic SU(2) x U(1) quantum number assignments (left-handed singlets or right-handed doublets). These mixings appear in many extensions of the electroweak theory, such as models with mirror fermions, E_6 models, etc. The results update previous analyses and considerably improve the existing bounds.

  19. Conventional Energy and Macronutrient Variables Distort the Accuracy of Children’s Dietary Reports: Illustrative Data from a Validation Study of Effect of Order Prompts

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective: Validation-study data are used to illustrate that conventional energy and macronutrient (protein, carbohydrate, fat) variables, which disregard accuracy of reported items and amounts, misrepresent reporting accuracy. Reporting-error-sensitive variables are proposed which classify reported items as matches or intrusions, and reported amounts as corresponding or overreported. Methods: 58 girls and 63 boys were each observed eating school meals on 2 days separated by ≥4 weeks, and interviewed the morning after each observation day. One interview per child had forward-order (morning-to-evening) prompts; one had reverse-order prompts. Original food-item-level analyses found a sex-by-order-prompt interaction for omission rates. Current analyses compared reference (observed) and reported information transformed to energy and macronutrients. Results: Using conventional variables, reported amounts were less than reference amounts (ps<0.001; paired t-tests); report rates were higher for the first than second interview for energy, protein, and carbohydrate (ps≤0.049; mixed models). Using reporting-error-sensitive variables, correspondence rates were higher for girls with forward- but boys with reverse-order prompts (ps≤0.041; mixed models); inflation ratios were lower with reverse- than forward-order prompts for energy, carbohydrate, and fat (ps≤0.045; mixed models). Conclusions: Conventional variables overestimated reporting accuracy and masked order prompt and sex effects. Reporting-error-sensitive variables are recommended when assessing accuracy for energy and macronutrients in validation studies. PMID:16959308

  20. Effects of morphological Family Size for young readers.

    PubMed

    Perdijk, Kors; Schreuder, Robert; Baayen, R Harald; Verhoeven, Ludo

    2012-09-01

    Dutch children, from the second and fourth grade of primary school, were each given a visual lexical decision test on 210 Dutch monomorphemic words. After removing words not recognized by a majority of the younger group, (lexical) decisions were analysed by mixed-model regression methods to see whether morphological Family Size influenced decision times over and above several other covariates. The effect of morphological Family Size on decision time was mixed: larger families led to significantly faster decision times for the second graders but not for the fourth graders. Since facilitative effects on decision times had been found for adults, we offer a developmental account to explain the absence of an effect of Family Size on decision times for fourth graders. ©2011 The British Psychological Society.
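
    Both Family Size records describe the same type of analysis; a skeletal lme4 sketch with hypothetical variable names and data (crossed random effects for children and words, Family Size plus covariates, and a grade interaction) would look like:

        library(lme4)
        # fit <- lmer(log(RT) ~ FamilySize * Grade + WordFrequency + WordLength +
        #               (1 | Subject) + (1 | Word),
        #             data = lexdec_children)   # hypothetical data frame
        # summary(fit)  # a negative FamilySize estimate = larger families, faster decisions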

  1. The role of ice nuclei recycling in the maintenance of cloud ice in Arctic mixed-phase stratocumulus

    DOE PAGES

    Solomon, Amy; Feingold, G.; Shupe, M. D.

    2015-09-25

    This study investigates the maintenance of cloud ice production in Arctic mixed-phase stratocumulus in large eddy simulations that include a prognostic ice nuclei (IN) formulation and a diurnal cycle. Balances derived from a mixed-layer model and phase analyses are used to provide insight into buffering mechanisms that maintain ice in these cloud systems. We find that, for the case under investigation, IN recycling through subcloud sublimation considerably prolongs ice production over a multi-day integration. This effective source of IN to the cloud dominates over mixing sources from above or below the cloud-driven mixed layer. Competing feedbacks between dynamical mixing and recycling are found to slow the rate of ice lost from the mixed layer when a diurnal cycle is simulated. Furthermore, the results of this study have important implications for maintaining phase partitioning of cloud ice and liquid that determine the radiative forcing of Arctic mixed-phase clouds.

  2. The role of ice nuclei recycling in the maintenance of cloud ice in Arctic mixed-phase stratocumulus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Amy; Feingold, G.; Shupe, M. D.

    This study investigates the maintenance of cloud ice production in Arctic mixed-phase stratocumulus in large eddy simulations that include a prognostic ice nuclei (IN) formulation and a diurnal cycle. Balances derived from a mixed-layer model and phase analyses are used to provide insight into buffering mechanisms that maintain ice in these cloud systems. We find that, for the case under investigation, IN recycling through subcloud sublimation considerably prolongs ice production over a multi-day integration. This effective source of IN to the cloud dominates over mixing sources from above or below the cloud-driven mixed layer. Competing feedbacks between dynamical mixing and recycling are found to slow the rate of ice lost from the mixed layer when a diurnal cycle is simulated. Furthermore, the results of this study have important implications for maintaining phase partitioning of cloud ice and liquid that determine the radiative forcing of Arctic mixed-phase clouds.

  3. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS: A REPLY TO ROBBINS, HILDERBRAND AND FARLEY (2002)

    EPA Science Inventory

    Phillips & Koch (2002) outlined a new stable isotope mixing model which incorporates differences in elemental concentrations in the determinations of source proportions in a mixture. They illustrated their method with sensitivity analyses and two examples from the wildlife ecolog...

  4. Combined Recirculatory-compartmental Population Pharmacokinetic Modeling of Arterial and Venous Plasma S(+) and R(-) Ketamine Concentrations.

    PubMed

    Henthorn, Thomas K; Avram, Michael J; Dahan, Albert; Gustafsson, Lars L; Persson, Jan; Krejcie, Tom C; Olofsen, Erik

    2018-05-16

    The pharmacokinetics of infused drugs have been modeled without regard for recirculatory or mixing kinetics. We used a unique ketamine dataset with simultaneous arterial and venous blood sampling, during and after separate S(+) and R(-) ketamine infusions, to develop a simplified recirculatory model of arterial and venous plasma drug concentrations. S(+) or R(-) ketamine was infused over 30 min on two occasions to 10 healthy male volunteers. Frequent, simultaneous arterial and forearm venous blood samples were obtained for up to 11 h. A multicompartmental pharmacokinetic model with front-end arterial mixing and venous blood components was developed using nonlinear mixed effects analyses. A three-compartment base pharmacokinetic model with additional arterial mixing and arm venous compartments and with shared S(+)/R(-) distribution kinetics proved superior to standard compartmental modeling approaches. Total pharmacokinetic flow was estimated to be 7.59 ± 0.36 l/min (mean ± standard error of the estimate), and S(+) and R(-) elimination clearances were 1.23 ± 0.04 and 1.06 ± 0.03 l/min, respectively. The arm-tissue link rate constant was 0.18 ± 0.01 min^-1 and the fraction of arm blood flow estimated to exchange with arm tissue was 0.04 ± 0.01. Arterial drug concentrations measured during drug infusion have two kinetically distinct components: partially or lung-mixed drug and fully mixed-recirculated drug. Front-end kinetics suggest the partially mixed concentration is proportional to the ratio of infusion rate and total pharmacokinetic flow. This simplified modeling approach could lead to more generalizable models for target-controlled infusions and improved methods for analyzing pharmacokinetic-pharmacodynamic data.

  5. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications

    PubMed Central

    Austin, Peter C.

    2017-01-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata). PMID:29307954

  6. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications.

    PubMed

    Austin, Peter C

    2017-08-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack. We illustrate the application of these methods using three statistical programming languages (R, SAS and Stata).
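
    The three approaches described in this record map onto standard R tooling; a heavily hedged sketch with a hypothetical data frame 'dat' (columns time, status, x and a hospital identifier):

        # 1. Cox model with a cluster-specific random effect
        # library(coxme)
        # coxme(Surv(time, status) ~ x + (1 | hospital), data = dat)

        # 2. Piecewise exponential model: split follow-up into intervals, then a
        #    Poisson GLMM with log exposure time per interval as an offset
        # library(survival); library(lme4)
        # sp <- survSplit(Surv(time, status) ~ ., data = dat, cut = c(30, 90, 180),
        #                 episode = "interval")
        # sp$exposure <- sp$time - sp$tstart
        # glmer(status ~ factor(interval) + x + offset(log(exposure)) + (1 | hospital),
        #       family = poisson, data = sp)

        # 3. Discrete-time model: one row per subject-interval with a binary event
        #    indicator, fitted with a complementary log-log link
        # glmer(event ~ factor(interval) + x + (1 | hospital),
        #       family = binomial(link = "cloglog"), data = person_period)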

  7. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering the tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Then, correlation structures including the compound symmetry structure (CS), the first-order autoregressive structure [AR(1)], and the first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect models. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe heteroscedasticity when building the mixed-effect models, the CF1 and CF2 functions were added to the branch mixed-effect models. The CF1 function significantly improved the fit of the branch angle mixed model, whereas the CF2 function significantly improved the fit of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect models could improve the precision of prediction, as compared to the traditional regression model, for branch size prediction in Pinus koraiensis plantations.
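
    The same kind of specification can be written in R with nlme (variable names and the data frame 'branches' are hypothetical; the study itself used SAS PROC MIXED): a tree-level random effect, an AR(1) correlation between branches within a tree, and a power variance function for the heteroscedasticity.

        library(nlme)
        # fit <- lme(branch_diameter ~ rel_height + dbh + tree_height,
        #            random = ~ 1 | tree_id,
        #            correlation = corAR1(form = ~ 1 | tree_id),
        #            weights = varPower(),
        #            data = branches)
        # summary(fit)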

  8. Nonlinear mixed effects modelling approach in investigating phenobarbital pharmacokinetic interactions in epileptic patients.

    PubMed

    Vučićević, Katarina; Jovanović, Marija; Golubović, Bojana; Kovačević, Sandra Vezmar; Miljković, Branislava; Martinović, Žarko; Prostran, Milica

    2015-02-01

    The present study aimed to establish a population pharmacokinetic model for phenobarbital (PB), examining and quantifying the magnitude of PB interactions with other antiepileptic drugs used concomitantly, and to demonstrate its use for individualization of the PB dosing regimen in adult epileptic patients. In total, 205 PB concentrations were obtained during routine clinical monitoring of 136 adult epilepsy patients. PB steady-state concentrations were measured by homogeneous enzyme immunoassay. Nonlinear mixed effects modelling (NONMEM) was applied for data analysis and evaluation of the final model. According to the final population model, a significant determinant of apparent PB clearance (CL/F) was the daily dose of concomitantly given valproic acid (VPA). The typical value of PB CL/F for the final model was estimated at 0.314 l/h. Based on the final model, co-therapy with a usual VPA dose of 1000 mg/day resulted in an average decrease in PB CL/F of about 25%, while 2000 mg/day led to an average 50% decrease in PB CL/F. The developed population PB model may be used to estimate individual CL/F for adult epileptic patients and could be applied to individualize dosing regimens, taking into account the dose-dependent effect of concomitantly given VPA.
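
    A small numeric sketch of how such a model supports dose individualization (the target concentration below is an assumed placeholder, and body size and other covariates are ignored): the maintenance dose rate is the adjusted clearance times the desired steady-state concentration.

        cl_typical <- 0.314                    # l/h, typical PB CL/F from the model above
        vpa_dose   <- 1000                     # mg/day of concomitant valproic acid
        cl_adj     <- cl_typical * (1 - 0.25 * vpa_dose / 1000)  # ~25% lower at 1000 mg/day
        css_target <- 20                       # mg/l, assumed target concentration
        daily_dose <- cl_adj * css_target * 24 # mg/day of phenobarbital
        round(c(cl_adj_l_per_h = cl_adj, daily_dose_mg = daily_dose), 2)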

  9. Associations between Responsible Beverage Service Laws and Binge Drinking and Alcohol-Impaired Driving

    ERIC Educational Resources Information Center

    Linde, Ann C.; Toomey, Traci L.; Wolfson, Julian; Lenk, Kathleen M.; Jones-Webb, Rhonda; Erickson, Darin J.

    2016-01-01

    We explored potential associations between the strength of state Responsible Beverage Service (RBS) laws and self-reported binge drinking and alcohol-impaired driving in the U.S. A multi-level logistic mixed-effects model was used, adjusting for potential confounders. Analyses were conducted on the overall BRFSS sample and drinkers only. Seven…

  10. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.

  11. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

    Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the strengths and difficulties questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.

  12. Real medical benefit assessed by indirect comparison.

    PubMed

    Falissard, Bruno; Zylberman, Myriam; Cucherat, Michel; Izard, Valérie; Meyer, François

    2009-01-01

    Frequently, in data packages submitted for Marketing Approval to the CHMP, there is a lack of relevant head-to-head comparisons of medicinal products that could enable national authorities responsible for the approval of reimbursement to assess the Added Therapeutic Value (ASMR) of new clinical entities or line extensions of existing therapies. Indirect or mixed treatment comparisons (MTC) are methods stemming from the field of meta-analysis that have been designed to tackle this problem. Adjusted indirect comparisons, meta-regressions, mixed models and Bayesian network analyses pool results of randomised controlled trials (RCTs), enabling a quantitative synthesis. The REAL procedure, recently developed by the HAS (French National Authority for Health), is a mixture of an MTC and an effect model based on expert opinion. It is intended to translate the efficacy observed in the trials into the effectiveness expected in day-to-day clinical practice in France.

  13. Poverty, hunger, education, and residential status impact survival in HIV.

    PubMed

    McMahon, James; Wanke, Christine; Terrin, Norma; Skinner, Sally; Knox, Tamsin

    2011-10-01

    Despite combination antiretroviral therapy (ART), HIV-infected people have higher mortality than uninfected people. Lower socioeconomic status (SES) predicts higher mortality in many chronic illnesses, but data in people with HIV are limited. We evaluated 878 HIV-infected individuals followed from 1995 to 2005. Cox proportional hazards for all-cause mortality were estimated for SES measures and other factors. Mixed effects analyses examined how SES impacts factors predicting death. The 200 participants who died were older, had lower CD4 counts, and higher viral loads (VL). Age, transmission category, education, albumin, CD4 counts, VL, hunger, and poverty predicted death in univariate analyses; age, CD4 counts, albumin, VL, and poverty in the multivariable model. Mixed models showed associations between (1) CD4 counts with education and hunger; (2) albumin with education, homelessness, and poverty; and (3) VL with education and hunger. SES contributes to mortality in HIV-infected persons directly and indirectly, and should be a target of health policy in this population.
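
    A hedged sketch of the survival step described above, using the lifelines package; only the Cox proportional hazards part is shown, and the file and column names (time_years, died, cd4, log_vl, albumin, poverty) are invented for illustration rather than taken from the study.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # hypothetical analysis file: one row per participant
    # time_years = follow-up time, died = event indicator (1 = death),
    # plus candidate predictors such as age, cd4, log_vl, albumin, poverty
    df = pd.read_csv("hiv_cohort.csv")

    cph = CoxPHFitter()
    cph.fit(df[["time_years", "died", "age", "cd4", "log_vl", "albumin", "poverty"]],
            duration_col="time_years", event_col="died")
    cph.print_summary()   # hazard ratios with confidence intervals
    ```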

  14. Health economic comparison of SLIT allergen and SCIT allergoid immunotherapy in patients with seasonal grass-allergic rhinoconjunctivitis in Germany.

    PubMed

    Verheggen, Bram G; Westerhout, Kirsten Y; Schreder, Carl H; Augustin, Matthias

    2015-01-01

    Allergoids are chemically modified allergen extracts administered to reduce allergenicity and to maintain immunogenicity. Oralair® (the 5-grass tablet) is a sublingual native grass allergen tablet for pre- and co-seasonal treatment. Based on a literature review, meta-analysis, and cost-effectiveness analysis the relative effects and costs of the 5-grass tablet versus a mix of subcutaneous allergoid compounds for grass pollen allergic rhinoconjunctivitis were assessed. A Markov model with a time horizon of nine years was used to assess the costs and effects of three-year immunotherapy treatment. Relative efficacy expressed as standardized mean differences was estimated using an indirect comparison on symptom scores extracted from available clinical trials. The Rhinitis Symptom Utility Index (RSUI) was applied as a proxy to estimate utility values for symptom scores. Drug acquisition and other medical costs were derived from published sources as well as estimates for resource use, immunotherapy persistence, and occurrence of asthma. The analysis was executed from the German payer's perspective, which includes payments of the Statutory Health Insurance (SHI) and additional payments by insurants. Comprehensive deterministic and probabilistic sensitivity analyses and different scenarios were performed to test the uncertainty concerning the incremental model outcomes. The applied model predicted a cost-utility ratio of the 5-grass tablet versus a market mix of injectable allergoid products of € 12,593 per QALY in the base case analysis. Predicted incremental costs and QALYs were € 458 (95% confidence interval, CI: € 220; € 739) and 0.036 (95% CI: 0.002; 0.078), respectively. Compared to the allergoid mix the probability of the 5-grass tablet being the most cost-effective treatment option was predicted to be 76% at a willingness-to-pay threshold of € 20,000. The results were most sensitive to changes in efficacy estimates, duration of the pollen season, and immunotherapy persistence rates. This analysis suggests the sublingual native 5-grass tablet to be cost-effective relative to a mix of subcutaneous allergoid compounds. The robustness of these statements has been confirmed in extensive sensitivity and scenario analyses.

  15. To mix or not to mix venous blood samples collected in vacuum tubes?

    PubMed

    Parenmark, Anna; Landberg, Eva

    2011-09-08

    There are recommendations to mix venous blood samples by inverting the tubes immediately after venipuncture. Though mixing allows efficient anticoagulation in plasma tubes and fast initiation of coagulation in serum tubes, the effect on laboratory analyses and the risk of haemolysis have not been thoroughly evaluated. Venous blood samples were collected by venipuncture in vacuum tubes from 50 patients (10 or 20 patients in each group). Four types of tubes and 18 parameters used in routine clinical chemistry were evaluated. For each patient and tube, three types of mixing strategies were used: instant mixing, no mixing and 5 min of rest followed by mixing. Most analyses did not differ significantly between samples subjected to the different mixing strategies. Plasma lactate dehydrogenase and haemolysis index showed a small but significant increase in samples subjected to instant mixing compared to samples without mixing. However, in one out of twenty non-mixed samples, activated partial thromboplastin time was seriously affected. These results indicate that mixing blood samples after venipuncture is not mandatory for all types of tubes. Instant mixing may introduce interference for those analyses susceptible to haemolysis. However, tubes with liquid-based citrate buffer for coagulation testing should be mixed to avoid clotting.

  16. Free energy of mixing of acetone and methanol: a computer simulation investigation.

    PubMed

    Idrissi, Abdenacer; Polok, Kamil; Barj, Mohammed; Marekha, Bogdan; Kiselev, Mikhail; Jedlovszky, Pál

    2013-12-19

    The change of the Helmholtz free energy, internal energy, and entropy accompanying the mixing of acetone and methanol is calculated in the entire composition range by the method of thermodynamic integration using three different potential model combinations of the two compounds. In the first system, both molecules are described by the OPLS, and in the second system, both molecules are described by the original TraPPE force field, whereas in the third system a modified version of the TraPPE potential is used for acetone in combination with the original TraPPE model of methanol. The results reveal that, in contrast with the acetone-water system, all of these three model combinations are able to reproduce the full miscibility of acetone and methanol, although the thermodynamic driving force of this mixing is very small. It is also seen, in accordance with the finding of former structural analyses, that the mixing of the two components is driven by the entropy term corresponding to the ideal mixing, which is large enough to overcompensate the effect of the energy increase and entropy loss due to the interaction of the unlike components in the mixtures. Among the three model combinations, the use of the original TraPPE model of methanol and modified TraPPE model of acetone turns out to be clearly the best in this respect, as it is able to reproduce the experimental free energy, internal energy, and entropy of mixing values within 0.15 kJ/mol, 0.2 kJ/mol, and 1 J/(mol K), respectively, in the entire composition range. The success of this model combination originates from the fact that the use of the modified TraPPE model of acetone instead of the original one in these mixtures improves the reproduction of the entropy of mixing, while it retains the ability of the original model of excellently reproducing the internal energy of mixing.
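
    Numerically, the thermodynamic integration underlying these calculations reduces to integrating the ensemble average of dU/dλ over the coupling parameter λ. The sketch below shows that final quadrature step only; the λ grid and the averages are placeholders standing in for per-window simulation output and are not results from the paper.

    ```python
    import numpy as np

    # Thermodynamic integration sketch:
    #   Delta F = integral from 0 to 1 of < dU/dlambda >_lambda  dlambda
    lam = np.linspace(0.0, 1.0, 11)         # coupling-parameter windows (placeholder grid)
    dU_dlam = 2.0 * lam - 0.5               # placeholder ensemble averages in kJ/mol

    # trapezoidal-rule estimate of the free energy change
    delta_F = np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lam))
    print(f"Delta F ~ {delta_F:.3f} kJ/mol (illustrative values only)")
    ```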

  17. Estimation and interpretation of genetic effects with epistasis using the NOIA model.

    PubMed

    Alvarez-Castro, José M; Carlborg, Orjan; Rönnegård, Lars

    2012-01-01

    We introduce this communication with a brief outline of the historical landmarks in genetic modeling, especially concerning epistasis. Then, we present methods for the use of genetic modeling in QTL analyses. In particular, we summarize the essential expressions of the natural and orthogonal interactions (NOIA) model of genetic effects. Our motivation for reviewing that theory here is twofold. First, this review presents a digest of the expressions for the application of the NOIA model, which are often mixed with intermediate and additional formulae in the original articles. Second, we make the required theory handy for the reader to relate the genetic concepts to the particular mathematical expressions underlying them. We illustrate those relations by providing graphical interpretations and a diagram summarizing the key features for applying genetic modeling with epistasis in comprehensive QTL analyses. Finally, we briefly review some examples of the application of NOIA to real data and the way it improves the interpretability of the results.

  18. Determining the impact of cell mixing on signaling during development.

    PubMed

    Uriu, Koichiro; Morelli, Luis G

    2017-06-01

    Cell movement and intercellular signaling occur simultaneously to organize morphogenesis during embryonic development. Cell movement can cause relative positional changes between neighboring cells. When intercellular signals are local, such cell mixing may affect signaling, changing the flow of information in developing tissues. Little is known about the effect of cell mixing on intercellular signaling in collective cellular behaviors, and methods to quantify its impact are lacking. Here we discuss how to determine the impact of cell mixing on cell signaling, drawing an example from vertebrate embryogenesis: the segmentation clock, a collective rhythm of interacting genetic oscillators. We argue that comparing cell mixing and signaling timescales is key to determining the influence of mixing. A signaling timescale can be estimated by combining theoretical models with cell signaling perturbation experiments. A mixing timescale can be obtained by analysis of cell trajectories from live imaging. After comparing cell movement analyses in different experimental settings, we highlight challenges in quantifying cell mixing from embryonic timelapse experiments, especially a reference frame problem due to embryonic motions and shape changes. We propose statistical observables characterizing cell mixing that do not depend on the choice of reference frames. Finally, we consider situations in which both cell mixing and signaling involve multiple timescales, precluding a direct comparison between single characteristic timescales. In such situations, physical models based on observables of cell mixing and signaling can simulate the flow of information in tissues and reveal the impact of observed cell mixing on signaling. © 2017 Japanese Society of Developmental Biologists.

  19. “SNP Snappy”: A Strategy for Fast Genome-Wide Association Studies Fitting a Full Mixed Model

    PubMed Central

    Meyer, Karin; Tier, Bruce

    2012-01-01

    A strategy to reduce computational demands of genome-wide association studies fitting a mixed model is presented. Improvements are achieved by utilizing a large proportion of calculations that remain constant across the multiple analyses for individual markers involved, with estimates obtained without inverting large matrices. PMID:22021386

  20. Patient Expectancy as a Mediator of Placebo Effects in Antidepressant Clinical Trials.

    PubMed

    Rutherford, Bret R; Wall, Melanie M; Brown, Patrick J; Choo, Tse-Hwei; Wager, Tor D; Peterson, Bradley S; Chung, Sarah; Kirsch, Irving; Roose, Steven P

    2017-02-01

    Causes of placebo effects in antidepressant trials have been inferred from observational studies and meta-analyses, but their mechanisms have not been directly established. The goal of this study was to examine in a prospective, randomized controlled trial whether patient expectancy mediates placebo effects in antidepressant studies. Adult outpatients with major depressive disorder were randomly assigned to open or placebo-controlled citalopram treatment. Following measurement of pre- and postrandomization expectancy, participants were treated with citalopram or placebo for 8 weeks. Independent samples t tests determined whether patient expectancy differed between the open and placebo-controlled groups, and mixed-effects models assessed group effects on Hamilton Depression Rating Scale (HAM-D) scores over time while controlling for treatment assignment. Finally, mediation analyses tested whether between-group differences in patient expectancy mediated the group effect on HAM-D scores. Postrandomization expectancy scores were significantly higher in the open group (mean=12.1 [SD=2.1]) compared with the placebo-controlled group (mean=11.0 [SD=2.0]). Mixed-effects modeling revealed a significant week-by-group interaction, indicating that HAM-D scores for citalopram-treated participants declined at a faster rate in the open group compared with the placebo-controlled group. Patient expectations postrandomization partially mediated group effects on week 8 HAM-D. Patient expectancy is a significant mediator of placebo effects in antidepressant trials. Expectancy-related interventions should be investigated as a means of controlling placebo responses in antidepressant clinical trials and improving patient outcome in clinical treatment.
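
    A minimal sketch of the week-by-group mixed-effects analysis described above, assuming a hypothetical long-format data file with columns subject, week, group and hamd; it is not the authors' code and omits the mediation step.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical long-format trial data: one row per participant per week
    # columns: subject, week, group ("open" vs "placebo_controlled"), hamd (HAM-D total)
    df = pd.read_csv("expectancy_trial_long.csv")

    # the week-by-group interaction tests whether HAM-D trajectories differ by group;
    # a random intercept and slope for week account for repeated measures within subject
    model = smf.mixedlm("hamd ~ week * group", df,
                        groups=df["subject"], re_formula="~week")
    print(model.fit(reml=False).summary())
    ```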

  1. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    PubMed

    Diaz, Francisco J

    2016-10-15

    We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

  2. A meta-analysis of Th2 pathway genetic variants and risk for allergic rhinitis.

    PubMed

    Bunyavanich, Supinda; Shargorodsky, Josef; Celedón, Juan C

    2011-06-01

    There is a significant genetic contribution to allergic rhinitis (AR). Genetic association studies for AR have been performed, but varying results make it challenging to decipher the overall potential effect of specific variants. The Th2 pathway plays an important role in the immunological development of AR. We performed meta-analyses of genetic association studies of variants in Th2 pathway genes and AR. PubMed and Phenopedia were searched by double extraction for original studies on Th2 pathway-related genetic polymorphisms and their associations with AR. A meta-analysis was conducted on each genetic polymorphism with data meeting our predetermined selection criteria. Analyses were performed using both fixed and random effects models, with stratification by age group, ethnicity, and AR definition where appropriate. Heterogeneity and publication bias were assessed. Six independent studies analyzing three candidate polymorphisms and involving a total of 1596 cases and 2892 controls met our inclusion criteria. Overall, the A allele of IL13 single nucleotide polymorphism (SNP) rs20541 was associated with increased odds of AR (estimated OR=1.2; 95% CI 1.1-1.3, p-value 0.004 in fixed effects model, 95% CI 1.0-1.5, p-value 0.056 in random effects model). The A allele of rs20541 was associated with increased odds of AR in mixed age groups using both fixed effects and random effects modeling. IL13 SNP rs1800925 and IL4R SNP rs1801275 did not demonstrate overall associations with AR. We conclude that there is evidence for an overall association between IL13 SNP rs20541 and increased risk of AR, especially in mixed-age populations. © 2011 John Wiley & Sons A/S.
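
    For readers unfamiliar with the pooling step, the sketch below implements standard fixed-effect and DerSimonian-Laird random-effects pooling of log odds ratios; the per-study values in the usage line are invented for illustration and are not the rs20541 data.

    ```python
    import numpy as np
    from scipy import stats

    def pool_log_odds_ratios(log_or, var):
        """Fixed-effect and DerSimonian-Laird random-effects pooling of log odds ratios."""
        log_or, var = np.asarray(log_or, float), np.asarray(var, float)
        w = 1.0 / var
        fixed = np.sum(w * log_or) / np.sum(w)
        q = np.sum(w * (log_or - fixed) ** 2)                 # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(log_or) - 1)) / c)          # between-study variance
        w_re = 1.0 / (var + tau2)
        random = np.sum(w_re * log_or) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        ci = np.exp(random + np.array([-1.96, 1.96]) * se_re)
        p_het = stats.chi2.sf(q, len(log_or) - 1)
        return {"OR_fixed": np.exp(fixed), "OR_random": np.exp(random),
                "CI_random": ci, "tau2": tau2, "Q": q, "p_heterogeneity": p_het}

    # usage with hypothetical per-study log ORs and variances (illustration only)
    print(pool_log_odds_ratios([0.22, 0.10, 0.31, 0.15], [0.010, 0.020, 0.015, 0.030]))
    ```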

  3. Iterative Usage of Fixed and Random Effect Models for Powerful and Efficient Genome-Wide Association Studies

    PubMed Central

    Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu

    2016-01-01

    False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. Now, a dataset with half a million individuals and half a million markers can be analyzed within three days. PMID:26828793

  4. Model for toroidal velocity in H-mode plasmas in the presence of internal transport barriers

    NASA Astrophysics Data System (ADS)

    Chatthong, B.; Onjun, T.; Singhsomroje, W.

    2010-06-01

    A model for predicting toroidal velocity in H-mode plasmas in the presence of internal transport barriers (ITBs) is developed using an empirical approach. In this model, it is assumed that the toroidal velocity is directly proportional to the local ion temperature. This model is implemented in the BALDUR integrated predictive modelling code so that simulations of ITB plasmas can be carried out self-consistently. In these simulations, a combination of a semi-empirical mixed Bohm/gyro-Bohm (mixed B/gB) core transport model that includes ITB effects and NCLASS neoclassical transport is used to compute a core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model based on a combination of magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The combination of the mixed B/gB core transport model with ITB effects, together with the pedestal and the toroidal velocity models, is used to simulate the time evolution of plasma current, temperature and density profiles of 10 JET optimized shear discharges. It is found that the simulations can reproduce an ITB formation in these discharges. Statistical analyses including root mean square error (RMSE) and offset are used to quantify the agreement. It is found that the averaged RMSE and offset among these discharges are about 24.59% and -0.14%, respectively.

  5. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825

  6. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics, and so the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
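
    A small sketch of the contrast drawn above, assuming a hypothetical long-format file with columns host, day, treatment and log_parasitaemia: the naive regression ignores the repeated measures per host, while the mixed model absorbs that clustering with host-level random effects.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical repeated-measures parasitaemia data: several time points per host
    df = pd.read_csv("infection_dynamics.csv")

    # Naive model: treats every observation as independent, so standard errors
    # and p-values can be badly miscalibrated when hosts are sampled repeatedly.
    naive = smf.ols("log_parasitaemia ~ day * treatment", df).fit()

    # Mixed model: random intercept and slope for day within each host absorb
    # the host-level correlation that the simple model leaves in its residuals.
    mixed = smf.mixedlm("log_parasitaemia ~ day * treatment", df,
                        groups=df["host"], re_formula="~day").fit()

    print(naive.summary())
    print(mixed.summary())
    ```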

  7. Assessment of RANS and LES Turbulence Modeling for Buoyancy-Aided/Opposed Forced and Mixed Convection

    NASA Astrophysics Data System (ADS)

    Clifford, Corey; Kimber, Mark

    2017-11-01

    Over the last 30 years, an industry-wide shift within the nuclear community has led to increased utilization of computational fluid dynamics (CFD) to supplement nuclear reactor safety analyses. One such area that is of particular interest to the nuclear community, specifically to those performing loss-of-flow accident (LOFA) analyses for next-generation very-high temperature reactors (VHTR), is the capacity of current computational models to predict heat transfer across a wide range of buoyancy conditions. In the present investigation, a critical evaluation of Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) turbulence modeling techniques is conducted based on CFD validation data collected from the Rotatable Buoyancy Tunnel (RoBuT) at Utah State University. Four different experimental flow conditions are investigated: (1) buoyancy-aided forced convection; (2) buoyancy-opposed forced convection; (3) buoyancy-aided mixed convection; (4) buoyancy-opposed mixed convection. Overall, good agreement is found for both forced convection-dominated scenarios, but an overly-diffusive prediction of the normal Reynolds stress is observed for the RANS-based turbulence models. Low-Reynolds number RANS models perform adequately for mixed convection, while higher-order RANS approaches underestimate the influence of buoyancy on the production of turbulence.

  8. The mixing effects for real gases and their mixtures

    NASA Astrophysics Data System (ADS)

    Gong, M. Q.; Luo, E. C.; Wu, J. F.

    2004-10-01

    The definitions of the adiabatic and isothermal mixing effects in the mixing processes of real gases are presented in this paper. Eight substances with boiling-point temperatures ranging from cryogenic to ambient were selected, motivated by their relevance to low-temperature refrigeration, to study their binary and multicomponent mixing effects. Detailed analyses were made of the mixing-process parameters to determine their influence on the mixing effects. Those parameters include the temperatures, pressures, and mole fraction ratios of the pure substances before mixing. The results show that the maximum temperature variation occurs at the saturation state of each component in the mixing process. Those components with higher boiling-point temperatures have higher isothermal mixing effects. The maximum temperature variation, which is defined as the adiabatic mixing effect, can reach up to 50 K, and the isothermal mixing effect can reach about 20 kJ/mol. The possible applications of the mixing cooling effect in both open-cycle and closed-cycle refrigeration systems are also discussed.

  9. A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.

    PubMed

    Harring, Jeffrey R; Blozis, Shelley A

    2016-01-01

    Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely, the first-order linearization method (FO) and Gaussian-Hermite quadrature (GHQ) methods, and how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual, the lack-of-fit of individuals' predicted trajectories, and vice versa.

  10. Analysis of categorical moderators in mixed-effects meta-analysis: Consequences of using pooled versus separate estimates of the residual between-studies variances.

    PubMed

    Rubio-Aparicio, María; Sánchez-Meca, Julio; López-López, José Antonio; Botella, Juan; Marín-Martínez, Fulgencio

    2017-11-01

    Subgroup analyses allow us to examine the influence of a categorical moderator on the effect size in meta-analysis. We conducted a simulation study using a dichotomous moderator, and compared the impact of pooled versus separate estimates of the residual between-studies variance on the statistical performance of the Q_B(P) and Q_B(S) tests for subgroup analyses assuming a mixed-effects model. Our results suggested that similar performance can be expected as long as there are at least 20 studies and these are approximately balanced across categories. Conversely, when subgroups were unbalanced, the practical consequences of having heterogeneous residual between-studies variances were more evident, with both tests leading to the wrong statistical conclusion more often than in the conditions with balanced subgroups. A pooled estimate should be preferred for most scenarios, unless the residual between-studies variances are clearly different and there are enough studies in each category to obtain precise separate estimates. © 2017 The British Psychological Society.
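
    A simplified sketch of the pooled-variance version of this test, written from the standard formulas rather than the authors' code: a DerSimonian-Laird estimate of the residual between-studies variance is pooled across the moderator categories and then used to compute the between-subgroups statistic, referred to a chi-square distribution. The separate-estimates variant would instead estimate that variance within each category.

    ```python
    import numpy as np
    from scipy import stats

    def qb_pooled(y, v, g):
        """Between-subgroups Q_B test assuming a common (pooled) residual
        between-studies variance across the categories of a moderator.
        y: effect sizes, v: within-study variances, g: subgroup labels."""
        y, v, g = np.asarray(y, float), np.asarray(v, float), np.asarray(g)
        groups = np.unique(g)

        # pooled DerSimonian-Laird estimate of the residual between-studies variance
        q_w, df_w, c_w = 0.0, 0, 0.0
        for lev in groups:
            yi, vi = y[g == lev], v[g == lev]
            wi = 1.0 / vi
            mu = np.sum(wi * yi) / np.sum(wi)
            q_w += np.sum(wi * (yi - mu) ** 2)
            df_w += len(yi) - 1
            c_w += np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
        tau2 = max(0.0, (q_w - df_w) / c_w)

        # random-effects subgroup means and the between-subgroups statistic
        mus, ws = [], []
        for lev in groups:
            wi = 1.0 / (v[g == lev] + tau2)
            mus.append(np.sum(wi * y[g == lev]) / np.sum(wi))
            ws.append(np.sum(wi))
        mus, ws = np.array(mus), np.array(ws)
        grand = np.sum(ws * mus) / np.sum(ws)
        q_b = np.sum(ws * (mus - grand) ** 2)
        p = stats.chi2.sf(q_b, len(groups) - 1)
        return q_b, p, tau2
    ```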

  11. Bus accident analysis of routes with/without bus priority.

    PubMed

    Goh, Kelvin Chun Keong; Currie, Graham; Sarvi, Majid; Logan, David

    2014-04-01

    This paper summarises findings on road safety performance and bus-involved accidents in Melbourne along roads where bus priority measures had been applied. Results from an empirical analysis of the accident types revealed significant reduction in the proportion of accidents involving buses hitting stationary objects and vehicles, which suggests the effect of bus priority in addressing manoeuvrability issues for buses. A mixed-effects negative binomial (MENB) regression and back-propagation neural network (BPNN) modelling of bus accidents considering wider influences on accident rates at a route section level also revealed significant safety benefits when bus priority is provided. Sensitivity analyses done on the BPNN model showed general agreement in the predicted accident frequency between both models. The slightly better performance recorded by the MENB model results suggests merits in adopting a mixed effects modelling approach for accident count prediction in practice given its capability to account for unobserved location and time-specific factors. A major implication of this research is that bus priority in Melbourne's context acts to improve road safety and should be a major consideration for road management agencies when implementing bus priority and road schemes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.

    PubMed

    Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier

    2011-11-01

    One of the procedures most commonly used in recent years with longitudinal data is the linear mixed model. In the context of health research, the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) an unconditional linear growth curve model; (b) a growth model with subject-associated variables; and (c) an individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
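
    The three models listed above map naturally onto mixed-model formulas; the sketch below shows one way to express them with statsmodels, using hypothetical column names (subject, time, awareness, schooling_years, program) rather than the study's variables.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical long-format data: repeated drug-awareness scores per adolescent
    df = pd.read_csv("drug_awareness_long.csv")

    # (a) unconditional linear growth curve: fixed and random intercept and slope for time
    m_a = smf.mixedlm("awareness ~ time", df,
                      groups=df["subject"], re_formula="~time").fit()

    # (b) growth model with a subject-level covariate explaining between-subject variation
    m_b = smf.mixedlm("awareness ~ time + schooling_years", df,
                      groups=df["subject"], re_formula="~time").fit()

    # (c) individual curves with predictors of both level and rate of change
    m_c = smf.mixedlm("awareness ~ time * program + schooling_years", df,
                      groups=df["subject"], re_formula="~time").fit()

    print(m_c.summary())   # m_a and m_b can be inspected and compared the same way
    ```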

  13. Comparison of mixed effects models of antimicrobial resistance metrics of livestock and poultry Salmonella isolates from a national monitoring system.

    PubMed

    Bjork, K E; Kopral, C A; Wagner, B A; Dargatz, D A

    2015-12-01

    Antimicrobial use in agriculture is considered a pathway for the selection and dissemination of resistance determinants among animal and human populations. From 1997 through 2003 the U.S. National Antimicrobial Resistance Monitoring System (NARMS) tested clinical Salmonella isolates from multiple animal and environmental sources throughout the United States for resistance to panels of 16-19 antimicrobials. In this study we applied two mixed effects models, the generalized linear mixed model (GLMM) and accelerated failure time frailty (AFT-frailty) model, to susceptible/resistant and interval-censored minimum inhibitory concentration (MIC) metrics, respectively, from Salmonella enterica subspecies enterica serovar Typhimurium isolates from livestock and poultry. Objectives were to compare characteristics of the two models and to examine the effects of time, species, and multidrug resistance (MDR) on the resistance of isolates to individual antimicrobials, as revealed by the models. Fixed effects were year of sample collection, isolate source species and MDR indicators; laboratory study site was included as a random effect. MDR indicators were significant for every antimicrobial and were dominant effects in multivariable models. Temporal trends and source species influences varied by antimicrobial. In GLMMs, the intra-class correlation coefficient ranged up to 0.8, indicating that the proportion of variance accounted for by laboratory study site could be high. AFT models tended to be more sensitive, detecting more curvilinear temporal trends and species differences; however, high levels of left- or right-censoring made some models unstable and results uninterpretable. Results from GLMMs may be biased by cutoff criteria used to collapse MIC data into binary categories, and may miss signaling important trends or shifts if the series of antibiotic dilutions tested does not span a resistance threshold. Our findings demonstrate the challenges of measuring the AMR ecosystem and the complexity of interacting factors, and have implications for future monitoring. We include suggestions for future data collection and analyses, including alternative modeling approaches. Published by Elsevier B.V.

  14. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
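
    At its core, this kind of un-mixing step estimates source proportions by constrained least squares. The sketch below shows that step only, with non-negativity and sum-to-one constraints; real applications typically also scale or weight the tracers, and the signatures in the usage line are made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def unmix(source_signatures, mixture_signature):
        """Estimate source proportions for one sediment mixture by constrained
        least squares: proportions are non-negative and sum to one.
        source_signatures: (n_sources, n_tracers) mean tracer concentrations
        mixture_signature: (n_tracers,) tracer concentrations of the mixture."""
        S = np.asarray(source_signatures, float)
        m = np.asarray(mixture_signature, float)
        n = S.shape[0]

        def sse(p):                      # sum of squared tracer residuals
            return np.sum((m - p @ S) ** 2)

        res = minimize(sse, np.full(n, 1.0 / n), method="SLSQP",
                       bounds=[(0.0, 1.0)] * n,
                       constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
        return res.x

    # usage with made-up signatures for three sources and two tracers (illustration only)
    print(unmix([[10.0, 5.0], [2.0, 8.0], [6.0, 1.0]], [6.0, 4.8]))
    ```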

  15. Decoherence effect in neutrinos produced in microquasar jets

    NASA Astrophysics Data System (ADS)

    Mosquera, M. E.; Civitarese, O.

    2018-04-01

    We study the effect of decoherence upon the neutrino spectra produced in microquasar jets. In order to analyse the precession of the polarization vector of neutrinos we have calculated its time evolution by solving the corresponding equations of motion, and by assuming two different scenarios, namely: (i) the mixing between two active neutrinos, and (ii) the mixing between one active and one sterile neutrino. The results of the calculations corresponding to these scenarios show that the onset of decoherence does not depend on the activation of neutrino-neutrino interactions when realistic values of the coupling are used in the calculations. We also discuss the case of neutrinos produced in windy microquasars and compare the results with those obtained using more conventional models of microquasars.

  16. Worldwide impact of economic cycles on suicide trends over 3 decades: differences according to level of development. A mixed effect model study

    PubMed Central

    Perez-Rodriguez, M Mercedes; Garcia-Nieto, Rebeca; Fernandez-Navarro, Pablo; Galfalvy, Hanga; de Leon, Jose; Baca-Garcia, Enrique

    2012-01-01

    Objectives: To investigate the trends and correlations of gross domestic product (GDP) adjusted for purchasing power parity (PPP) per capita on suicide rates in 10 WHO regions during the past 30 years. Design: Analyses of databases of PPP-adjusted GDP per capita and suicide rates. Countries were grouped according to the Global Burden of Disease regional classification system. Data sources: World Bank's official website and WHO's mortality database. Statistical analyses: After graphically displaying PPP-adjusted GDP per capita and suicide rates, mixed effect models were used for representing and analysing clustered data. Results: Three different groups of countries, based on the correlation between the PPP-adjusted GDP per capita and suicide rates, are reported: (1) positive correlation: developing (lower middle and upper middle income) Latin-American and Caribbean countries, developing countries in the South East Asian Region including India, some countries in the Western Pacific Region (such as China and South Korea) and high-income Asian countries, including Japan; (2) negative correlation: high-income and developing European countries, Canada, Australia and New Zealand and (3) no correlation was found in an African country. Conclusions: PPP-adjusted GDP per capita may offer a simple measure for designing the type of preventive interventions aimed at lowering suicide rates that can be used across countries. Public health interventions might be more suitable for developing countries. In high-income countries, however, preventive measures based on the medical model might prove more useful. PMID:22586285

  17. Synergistic and Antagonistic Effects of Salinity and pH on Germination in Switchgrass (Panicum virgatum L.)

    PubMed Central

    Liu, Yuan; Wang, Quanzhen; Zhang, Yunwei; Cui, Jian; Chen, Guo; Xie, Bao; Wu, Chunhui; Liu, Haitao

    2014-01-01

    The effects of salt-alkaline mixed stress on switchgrass were investigated by evaluating seed germination and the proline, malondialdehyde (MDA) and soluble sugar contents in three switchgrass (Panicum virgatum L.) cultivars in order to identify which can be successfully produced on marginal lands affected by salt-alkaline mixed stress. The experimental conditions consisted of four levels of salinity (10, 60, 110 and 160 mM) and four pH levels (7.1, 8.3, 9.5 and 10.7). The effects of salt-alkaline mixed stress with equivalent coupling of the salinity and pH level on the switchgrass were explored via model analyses. Switchgrass was capable of germinating and surviving well in all treatments under low-alkaline pH (pH≤8.3), regardless of the salinity. However, seed germination and seedling growth were sharply reduced at higher pH values in conjunction with salinity. The salinity and pH had synergetic effects on the germination percentage, germination index, plumular length and the soluble sugar and proline contents in switchgrass. However, these two factors exhibited antagonistic effects on the radicular length of switchgrass. The combined effects of salinity and pH and the interactions between them should be considered when evaluating the strength of salt-alkaline mixed stress. PMID:24454834

  18. Numerical Investigation Into Effect of Fuel Injection Timing on CAI/HCCI Combustion in a Four-Stroke GDI Engine

    NASA Astrophysics Data System (ADS)

    Cao, Li; Zhao, Hua; Jiang, Xi; Kalian, Navin

    2006-02-01

    The Controlled Auto-Ignition (CAI) combustion, also known as Homogeneous Charge Compression Ignition (HCCI), was achieved by trapping residuals with early exhaust valve closure in conjunction with direct injection. Multi-cycle 3D engine simulations have been carried out for parametric study on four different injection timings in order to better understand the effects of injection timings on in-cylinder mixing and CAI combustion. The full engine cycle simulation including complete gas exchange and combustion processes was carried out over several cycles in order to obtain the stable cycle for analysis. The combustion models used in the present study are the Shell auto-ignition model and the characteristic-time combustion model, which were modified to take the high level of EGR into consideration. A liquid sheet breakup spray model was used for the droplet breakup processes. The analyses show that the injection timing plays an important role in affecting the in-cylinder air/fuel mixing and mixture temperature, which in turn affects the CAI combustion and engine performance.

  19. The Influence of Thermodynamic Phase on the Retrieval of Mixed-Phase Cloud Microphysical and Optical Properties in the Visible and Near Infrared Region

    NASA Technical Reports Server (NTRS)

    Lee, Joonsuk; Yang, Ping; Dessler, Andrew E.; Baum, Bryan A.; Platnick, Steven

    2005-01-01

    Cloud microphysical and optical properties are inferred from the bidirectional reflectances simulated for a single-layered cloud consisting of an external mixture of ice particles and liquid droplets. The reflectances are calculated with a rigorous discrete ordinates radiative transfer model and are functions of the cloud effective particle size, the cloud optical thickness, and the values of the ice fraction in the cloud (i.e., the ratio of ice water content to total water content). In the present light scattering and radiative transfer simulations, the ice fraction is assumed to be vertically homogeneous; the habit (shape) percentage as a function of ice particle size is consistent with that used for the Moderate Resolution Imaging Spectroradiometer (MODIS) operational (Collection 4 and earlier) cloud products; and the surface is assumed to be Lambertian with an albedo of 0.03. Furthermore, error analyses pertaining to the inference of the effective particle sizes and optical thicknesses of mixed-phase clouds are performed. Errors are calculated with respect to the assumption of a cloud containing solely liquid or ice phase particles. The analyses suggest that the effective particle size inferred for a mixed-phase cloud can be underestimated (or overestimated) if pure liquid phase (or pure ice phase) is assumed for the cloud, whereas the corresponding cloud optical thickness can be overestimated (or underestimated).

  20. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. IMPACT: Investigating the impact of Models of Practice for Allied health Care in subacuTe settings. A protocol for a quasi-experimental mixed methods study of cost effectiveness and outcomes for patients exposed to different models of allied health care.

    PubMed

    Coker, Freya; Williams, Cylie M; Taylor, Nicholas F; Caspers, Kirsten; McAlinden, Fiona; Wilton, Anita; Shields, Nora; Haines, Terry P

    2018-05-10

    This protocol considers three allied health staffing models across public health subacute hospitals. This quasi-experimental mixed-methods study, including qualitative process evaluation, aims to evaluate the impact of additional allied health services in subacute care, in rehabilitation and geriatric evaluation management settings, on patient, health service and societal outcomes. This health services research will analyse outcomes of patients exposed to different allied health models of care at three health services. Each health service will have a control ward (routine care) and an intervention ward (additional allied health). This project has two parts. Part 1: a whole-of-site data extraction for included wards. Outcome measures will include: length of stay, rate of readmissions, discharge destinations, community referrals, patient feedback and staff perspectives. Part 2: Functional Independence Measure scores will be collected every 2-3 days for the duration of 60 patient admissions. Data from part 1 will be analysed by linear regression analysis for continuous outcomes using patient-level data and logistic regression analysis for binary outcomes. Qualitative data will be analysed using a deductive thematic approach. For part 2, a linear mixed model analysis will be conducted using therapy service delivery and days since admission to subacute care as fixed factors in the model and individual participant as a random factor. Graphical analysis will be used to examine the growth curve of the model and transformations. The days since admission factor will be used to examine non-linear growth trajectories to determine if they lead to better model fit. Findings will be disseminated through local reports and to the Department of Health and Human Services Victoria. Results will be presented at conferences and submitted to peer-reviewed journals. The Monash Health Human Research Ethics committee approved this multisite research (HREC/17/MonH/144 and HREC/17/MonH/547). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  2. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    PubMed

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as high as 1 in 5 times, when weight was the covariate used in the data generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as being a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why for the same drug different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
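
    A much-simplified linear analogue of the simulation described above (ordinary regression rather than a population pharmacokinetic mixed model) is enough to reproduce the phenomenon: the data are generated from weight alone, yet a highly correlated surrogate covariate sometimes wins the AIC comparison.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_sim, n, wins_for_surrogate = 500, 100, 0

    for _ in range(n_sim):
        weight = rng.normal(70, 12, n)
        z = 0.98 * (weight - 70) / 12 + np.sqrt(1 - 0.98 ** 2) * rng.normal(size=n)
        surrogate = 70 + 12 * z                           # correlated ~0.98 with weight
        y = 1.0 + 0.02 * weight + rng.normal(0, 0.5, n)   # only weight is in the truth

        aic_w = sm.OLS(y, sm.add_constant(weight)).fit().aic
        aic_s = sm.OLS(y, sm.add_constant(surrogate)).fit().aic
        wins_for_surrogate += aic_s < aic_w               # surrogate "selected" by AIC

    print(f"surrogate selected in {wins_for_surrogate / n_sim:.0%} of simulated data sets")
    ```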

  3. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne

    2007-01-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.

  4. Effect of river excavation on a bank filtration site - assessing transient surface water - groundwater interaction by 3D heat and solute transport modelling

    NASA Astrophysics Data System (ADS)

    Wang, W.; Oswald, S. E.; Munz, M.; Strasser, D.

    2017-12-01

    Bank filtration is widely used either as the main treatment or as a pre-treatment process for water supply. The colmation (clogging) of the river bottom, as the interface to groundwater, plays a key role in the hydraulic control of flow paths and in the location of several beneficial attenuation processes, such as pathogen filtration, mixing, biodegradation and sorption. Along the flow path, mixing occurs between the 'young' infiltrated water and the ambient 'old' groundwater. To clarify the mechanisms and their interaction, modelling is often used for analysing the spatial and temporal distribution of travel times, quantifying mixing ratios, and estimating biochemical reaction rates. As the most comprehensive tool, 2-D or 3-D spatially explicit modelling is used in several studies, and for areas with geological heterogeneity, the use of different natural tracers can constrain the model with respect to non-uniqueness and improve the interpretation of the flow field. In our study, we evaluated the influence of a river excavation and bank reconstruction project on the groundwater-surface water exchange at a bank filtration site. With data from years of field-site monitoring, we could include, besides heads and temperature, the analysis of stable isotope data and ions to differentiate between infiltrated water and groundwater. Thus, we set up a 3-D transient heat and mass transport groundwater model, taking the strong local geological heterogeneity into consideration, especially between the river and the waterworks wells. By representing the effect of the river excavation as a change in the hydraulic conductivity of the riverbed, the model could be calibrated against both the observed water head and temperature time series. Finally, electrical conductivity dominated by river input was included as a quasi-conservative tracer. The 'triple'-calibrated transient model was then used to (i) understand the flow field and quantify the long-term changes in infiltration rate and distribution brought about by the excavation, (ii) compare the calculated temperature, electrical conductivity and stable isotope values and interpret their performance and deviations, and (iii) assess, on this modelling basis, the implications of the excavation-induced changes for further water quality data and travel-time distributions, including seasonal aspects.

  5. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. H2-broadening, shifting and mixing coefficients of the doublets in the ν2 and ν4 bands of PH3 at room temperature

    NASA Astrophysics Data System (ADS)

    Salem, Jamel; Blanquet, Ghislain; Lepère, Muriel; Younes, Rached ben

    2018-05-01

    The broadening, shifting and mixing coefficients of the doublet spectral lines in the ν2 and ν4 bands of PH3 perturbed by H2 have been determined at room temperature. Indeed, the collisional spectroscopic parameters: intensities, line widths, line shifts and line mixing parameters, are all grouped together in the collisional relaxation matrix. To analyse the collisional process and physical effects on spectra of phosphine (PH3), we have used the measurements carried out using a tunable diode-laser spectrometer in the ν2 and ν4 bands of PH3 perturbed by hydrogen (H2) at room temperature. The recorded spectra are fitted by the Voigt profile and the speed-dependent uncorrelated hard collision model of Rautian and Sobelman. These profiles are developed in the studies of isolated lines and are modified to account for the line mixing effects in the overlapping lines. The line widths, line shifts and line mixing parameters are given for six A1 and A2 doublet lines with quantum numbers K = 3n, (n = 1, 2, …) and overlapped by collisional broadening at pressures of less than 50 mbar.

  7. Physician-based activity counseling: intervention effects on mediators of motivational readiness for physical activity.

    PubMed

    Pinto, B M; Lynn, H; Marcus, B H; DePue, J; Goldstein, M G

    2001-01-01

    In theory-based interventions for behavior change, there is a need to examine the effects of interventions on the underlying theoretical constructs and the mediating role of such constructs. These two questions are addressed in the Physically Active for Life study, a randomized trial of physician-based exercise counseling for older adults. Three hundred fifty-five patients participated (intervention n = 181, control n = 174; mean age = 65.6 years). The underlying theories used were the Transtheoretical Model, Social Cognitive Theory and the constructs of decisional balance (benefits and barriers), self-efficacy, and behavioral and cognitive processes of change. Motivational readiness for physical activity and related constructs were assessed at baseline, 6 weeks, and 8 months. Linear or logistic mixed effects models were used to examine intervention effects on the constructs, and logistic mixed effects models were used for mediator analyses. At 6 weeks, the intervention had significant effects on decisional balance, self-efficacy, and behavioral processes, but these effects were not maintained at 8 months. At 6 weeks, only decisional balance and behavioral processes were identified as mediators of motivational readiness outcomes. Results suggest that interventions of greater intensity and duration may be needed for sustained changes in mediators and motivational readiness for physical activity among older adults.

  8. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data, featuring an excessive proportion of zeros together with right-skewed continuous positive values, arise frequently in practice. One example is substance abuse/dependence symptoms data, for which a substantial proportion of the subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data within a Bayesian framework. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model for the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model for the intensity of positive values using a linear mixed-effects model whose errors follow skew distributions, including the skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing candidate models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
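
    The two-part structure can be illustrated with a simplified frequentist analogue; the sketch below assumes a hypothetical data frame alcohol with a semicontinuous response y and uses normal errors in Part II, whereas the paper's Bayesian formulation allows skew-normal/skew-t errors and correlated random effects across the two parts.

    ```r
    ## Simplified frequentist analogue of a two-part mixed-effects model
    ## ('alcohol' and its columns are hypothetical; the paper's Bayesian version
    ##  adds skew-t/skew-normal errors and correlation between the two random effects).
    library(lme4)

    # Part I: occurrence of a positive value (logistic mixed model)
    part1 <- glmer(I(y > 0) ~ time + treatment + (1 | subject),
                   data = alcohol, family = binomial)

    # Part II: intensity of positive values (linear mixed model on the log scale)
    part2 <- lmer(log(y) ~ time + treatment + (1 | subject),
                  data = subset(alcohol, y > 0))
    ```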

  9. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications.

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
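
    For reference, the core iteration is compact. The sketch below is a generic dense-matrix illustration in R of conjugate gradients with a diagonal (Jacobi) preconditioner; production animal-breeding codes apply the same recursion by iteration on data, without ever forming the coefficient matrix explicitly.

    ```r
    ## Preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner
    ## for a symmetric positive (semi)definite system A x = b.
    pcg <- function(A, b, tol = 1e-10, maxit = 1000) {
      M_inv <- 1 / diag(A)                 # diagonal preconditioner
      x <- numeric(length(b))
      r <- b - A %*% x
      z <- M_inv * r
      p <- z
      rz <- sum(r * z)
      for (it in seq_len(maxit)) {
        Ap    <- A %*% p
        alpha <- rz / sum(p * Ap)
        x     <- x + alpha * p
        r     <- r - alpha * Ap
        if (sqrt(sum(r^2)) / sqrt(sum(b^2)) < tol) break   # relative residual criterion
        z      <- M_inv * r
        rz_new <- sum(r * z)
        p      <- z + (rz_new / rz) * p
        rz     <- rz_new
      }
      list(solution = as.vector(x), iterations = it)
    }
    ```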

  10. Machine learning to construct reduced-order models and scaling laws for reactive-transport applications

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.

    2017-12-01

    The efficiency of many hydrogeological applications such as reactive transport and contaminant remediation depends strongly on the macroscopic mixing occurring in the aquifer. For remediation activities, it is essential to enhance and control mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for understanding the mixing process is not well studied. This is partly because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations over a range of subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousand processors. As a result, it may not be feasible to use them to study the importance and impact of model inputs on mixing. Hence, there is a pressing need for computationally efficient models that accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct such models is reduced-order modeling using machine learning (ML). These approaches can substantially improve our capability to model and predict the remediation process. Reduced-order models (ROMs) are similar to analytical solutions or lookup tables; however, the way in which ROMs are constructed is different. Here, we present a physics-informed ML framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, support vector machines (SVMs) are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for important QoIs such as the degree of mixing and product yield. The dependence of the scaling-law parameters on the model inputs is evaluated using cluster analysis. We demonstrate the application of the developed method to model analyses of reactive transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable to analyses of alternative site remediation scenarios.
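
    The two-stage workflow (rank the inputs, then fit a surrogate) can be sketched with standard R packages; the data frame sims and its column names below are placeholders, and the authors' actual framework may differ in detail.

    ```r
    ## Sketch of the two-stage reduced-order-model workflow:
    ## (1) rank model inputs, (2) fit a surrogate for a quantity of interest (QoI).
    ## 'sims' is a hypothetical data frame of model inputs plus a QoI column 'mixing'.
    library(randomForest)
    library(e1071)

    # Stage 1: input importance from a random forest
    rf  <- randomForest(mixing ~ ., data = sims, importance = TRUE)
    imp <- importance(rf)          # importance ranking of the hydrogeological inputs

    # Stage 2: support vector regression surrogate on the retained (hypothetical) inputs
    rom <- svm(mixing ~ perm_anisotropy + pumping_rate + heterogeneity,
               data = sims, kernel = "radial")

    # The ROM can then be evaluated cheaply over many input combinations
    pred <- predict(rom, newdata = sims)
    ```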

  11. Development and Validation of a 3-Dimensional CFB Furnace Model

    NASA Astrophysics Data System (ADS)

    Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results from industrial-scale CFB boilers, including furnace profile measurements, are carried out in parallel with the development of three-dimensional process modeling, providing a chain of knowledge that feeds back into the phenomenon research. Knowledge gathered from model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to the modeling of combustion and of char and volatile formation for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of the mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and biofuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window onto fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace and are used, together with lateral temperature profiles at the bed and in the upper parts of the furnace, to determine the solid mixing and combustion model parameters. Modeling of char- and volatile-based NO formation profiles is followed by analysis of the oxidizing and reducing regions formed by the lower-furnace design and by the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. The paper presents a CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.

  12. Modelling the effect of environmental factors on resource allocation in mixed plant systems

    NASA Astrophysics Data System (ADS)

    Gayler, Sebastian; Priesack, Eckart

    2010-05-01

    In most cases, plant growth is determined by competition with neighbours for the local resources light, water and nutrients, and by defence against herbivores and pathogens. Consequently, it is important for a plant to grow fast without neglecting defence. However, the internal substrates and energy required to support maintenance, growth and defence are limited, and the total demand for these processes usually cannot be met. Therefore, allocation of carbohydrates to growth-related primary metabolism or to defence-related secondary metabolism can be seen as a trade-off between the plant's need to be competitive against neighbours and its need to be more resistant against pathogens. A modelling approach is presented which can be used to simulate competition for light, water and nutrients between plant individuals in mixed canopies. The balance of resource allocation between growth processes and the synthesis of secondary compounds is modelled by a concept originating from different plant defence hypotheses. The model is used to analyse the impact of environmental factors such as soil water and nitrogen availability, planting density and atmospheric CO2 concentration on the growth of plant individuals within mixed canopies and on variations in the concentration of carbon-based secondary metabolites in plant tissues.

  13. Ionic conductivity and mixed-ion effect in mixed alkali metaphosphate glasses.

    PubMed

    Tsuchida, Jefferson Esquina; Ferri, Fabio Aparecido; Pizani, Paulo Sergio; Martins Rodrigues, Ana Candida; Kundu, Swarup; Schneider, José Fabián; Zanotto, Edgar Dutra

    2017-03-01

    In this work, mixed alkali metaphosphate glasses based on K-Na, Rb-Na, Rb-Li, Cs-Na and Cs-Li combinations were studied by differential scanning calorimetry (DSC), complex impedance spectroscopy, and Raman spectroscopy. DSC analyses show that both the glass transition (Tg) and melting temperatures (Tm) exhibit a clear mixed-ion effect. The ionic conductivity shows a strong mixed-ion effect and decreases by more than six orders of magnitude at room temperature for Rb-Na or Cs-Li alkali pairs. This study confirms that the mixed-ion effect may be explained as a natural consequence of random ion mixing because ion transport is favoured between well-matched energy sites and is impeded due to the structural mismatch between neighbouring sites for dissimilar ions.

  14. Bi-phasic trends in mercury concentrations in blood of Wisconsin common loons during 1992–2010

    USGS Publications Warehouse

    Meyer, Michael W.; Rasmussen, Paul W.; Watras, Carl J.; Fevold, Brick M.; Kenow, Kevin P.

    2011-01-01

    Wisconsin Department of Natural Resources (WDNR) assessed the ecological risk of mercury (Hg) in aquatic systems by monitoring common loon (Gavia immer) population dynamics and blood Hg concentrations. We report temporal trends in blood Hg concentrations based on 334 samples collected from adults recaptured in subsequent years (resampled 2-9 times) and from 421 blood samples of chicks collected at lakes resampled 2-8 times during 1992-2010. Temporal trends were identified with generalized additive mixed effects models (GAMMs) and mixed effects models to account for the potential lack of independence among observations from the same loon or same lake. Trend analyses indicated that Hg concentrations in the blood of Wisconsin loons declined over the period 1992-2000, and increased during 2002-2010, but not to the level observed in the early 1990s. The best-fitting linear mixed effects model included separate trends for the two time periods. The estimated trend in Hg concentration among the adult loon population during 1992-2000 was -2.6% per year and the estimated trend during 2002-2010 was +1.8% per year; chick blood Hg concentrations decreased by 6.5% per year during 1992-2000, but increased 1.8% per year during 2002-2010. This bi-phasic pattern is similar to trends observed for concentrations of methylmercury (meHg) and SO4 in lake water of a well-studied seepage lake (Little Rock Lake, Vilas County) within our study area. A cause-effect relationship between these independent trends is hypothesized.
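
    The bi-phasic trend corresponds to a linear mixed model with separate slopes for the two periods; a minimal sketch on the log scale, with hypothetical data-frame and column names, might look like the following.

    ```r
    ## Minimal sketch of a bi-phasic (piecewise) trend for log blood-Hg,
    ## with a random intercept for repeatedly sampled loons.
    ## 'loons' and its columns are hypothetical names.
    library(lme4)

    loons$period <- ifelse(loons$year <= 2000, "1992-2000", "2002-2010")
    loons$yr_ctr <- with(loons, ifelse(period == "1992-2000", year - 1992, year - 2002))

    fit <- lmer(log(hg_blood) ~ period + period:yr_ctr + (1 | loon_id),
                data = loons)
    # exp(slope) - 1 gives the estimated per-year percentage change in each period
    ```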

  15. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provides regression parameters that reflect the data adequately. We focus explicitly both on the exploration of the data, to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests, and on the validation of the resulting models. Hence, we present a general method for evaluating and testing forensic entomological data sets, using generalised additive mixed models for the first time.
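
    In R, a generalised additive mixed model of this kind can be fitted with the mgcv package; the sketch below uses hypothetical variable names, with a smooth of age whose shape may differ between rearing temperatures (treated as a factor) and a random intercept per replicate.

    ```r
    ## Sketch of a generalised additive mixed model for non-linear larval growth
    ## ('blowfly' and its columns are hypothetical names; 'temperature' is a factor).
    library(mgcv)

    fit <- gamm(length_mm ~ s(age_h, by = temperature) + temperature,
                random = list(replicate = ~ 1),
                data = blowfly)

    summary(fit$gam)   # smooth terms
    plot(fit$gam)      # inspect the fitted non-linear growth curves
    ```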

  16. Impact of Case Mix Severity on Quality Improvement in a Patient-centered Medical Home (PCMH) in the Maryland Multi-Payor Program.

    PubMed

    Khanna, Niharika; Shaya, Fadia T; Chirikov, Viktor V; Sharp, David; Steffen, Ben

    2016-01-01

    We present data on quality of care (QC) improvement in 35 of 45 National Quality Forum metrics reported annually by 52 primary care practices recognized as patient-centered medical homes (PCMHs) that participated in the Maryland Multi-Payor Program from 2011 to 2013. We assigned QC metrics to (1) chronic, (2) preventive, and (3) mental health care domains. The study used a panel data design with no control group. Using longitudinal fixed-effects regressions, we modeled QC and case mix severity in a PCMH. Overall, 35 of 45 quality metrics reported by 52 PCMHs demonstrated improvement over 3 years, and case mix severity did not affect the achievement of quality improvement. From 2011 to 2012, QC increased by 0.14 (P < .01) for chronic, 0.15 (P < .01) for preventive, and 0.34 (P < .01) for mental health care domains; from 2012 to 2013 these domains increased by 0.03 (P = .06), 0.04 (P = .05), and 0.07 (P = .12), respectively. In univariate analyses, lower National Commission on Quality Assurance PCMH level was associated with higher QC for the mental health care domain, whereas case mix severity did not correlate with QC. In multivariate analyses, higher QC correlated with larger practices, greater proportion of older patients, and readmission visits. Rural practices had higher proportions of Medicaid patients, lower QC, and higher QC improvement in interaction analyses with time. The gains in QC in the chronic disease domain, the preventive care domain, and, most significantly, the mental health care domain were observed over time regardless of patient case mix severity. QC improvement was generally not modified by practice characteristics, except for rurality. © Copyright 2016 by the American Board of Family Medicine.

  17. Intercomparison of aerosol-cloud-precipitation interactions in stratiform orographic mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Muhlbauer, A.; Hashino, T.; Xue, L.; Teller, A.; Lohmann, U.; Rasmussen, R. M.; Geresdi, I.; Pan, Z.

    2010-09-01

    Anthropogenic aerosols serve as a source of both cloud condensation nuclei (CCN) and ice nuclei (IN) and affect the microphysical properties of clouds. Increased aerosol number concentrations are hypothesized to retard cloud droplet coalescence and riming in mixed-phase clouds, thereby decreasing orographic precipitation. This study presents results from a model intercomparison of 2-D simulations of aerosol-cloud-precipitation interactions in stratiform orographic mixed-phase clouds. The sensitivity of orographic precipitation to changes in the aerosol number concentrations is analysed and compared for various dynamical and thermodynamical situations. Furthermore, the sensitivities of microphysical processes such as coalescence, aggregation, riming and diffusional growth to changes in the aerosol number concentrations are evaluated and compared. The participating numerical models are the model from the Consortium for Small-Scale Modeling (COSMO) with bulk microphysics, the Weather Research and Forecasting (WRF) model with bin microphysics and the University of Wisconsin modeling system (UWNMS) with a spectral ice habit prediction microphysics scheme. All models are operated on a cloud-resolving scale with 2 km horizontal grid spacing. The results of the model intercomparison suggest that the sensitivity of orographic precipitation to aerosol modifications varies greatly from case to case and from model to model. Neither a precipitation decrease nor a precipitation increase is found robustly in all simulations. Qualitatively robust results are found only for a subset of the simulations, and even then quantitative agreement is scarce. Estimates of the aerosol effect on orographic precipitation are found to range from -19% to 0% depending on the simulated case and the model. Similarly, riming is shown to decrease in some cases and models whereas it increases in others, which implies that a decrease in riming with increasing aerosol load is not a robust result. Furthermore, it is found that neither a decrease in cloud droplet coalescence nor a decrease in riming necessarily implies a decrease in precipitation due to compensation effects by other microphysical pathways. The simulations suggest that mixed-phase conditions play an important role in buffering the effect of aerosol perturbations on cloud microphysics and reducing the overall susceptibility of clouds and precipitation to changes in the aerosol number concentrations. As a consequence the aerosol effect on precipitation is suggested to be less pronounced or even inverted in regions with high terrain (e.g., the Alps or Rocky Mountains) or in regions where mixed-phase microphysics is important for the climatology of orographic precipitation.

  18. CyTOF workflow: differential discovery in high-throughput high-dimensional cytometry datasets

    PubMed Central

    Nowicka, Malgorzata; Krieg, Carsten; Weber, Lukas M.; Hartmann, Felix J.; Guglietta, Silvia; Becher, Burkhard; Levesque, Mitchell P.; Robinson, Mark D.

    2017-01-01

    High dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals). PMID:28663787
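
    For the differential cell-population-abundance step, one common formulation (not necessarily the workflow's exact code) is a binomial GLMM of cluster counts against total cells per sample, with an observation-level random effect to absorb overdispersion; the data frame and column names below are hypothetical.

    ```r
    ## Sketch of a binomial GLMM for differential abundance of one cell population.
    ## 'counts' is a hypothetical per-sample data frame with cells in the cluster of
    ## interest ('n_cluster'), total cells ('n_total'), condition and patient id.
    library(lme4)

    counts$obs <- factor(seq_len(nrow(counts)))   # observation-level random effect
    fit <- glmer(cbind(n_cluster, n_total - n_cluster) ~ condition +
                   (1 | patient_id) + (1 | obs),
                 data = counts, family = binomial)
    summary(fit)   # condition effect on the relative abundance of the population
    ```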

  19. Minimum number of clusters and comparison of analysis methods for cross sectional stepped wedge cluster randomised trials with binary outcomes: A simulation study.

    PubMed

    Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick

    2017-03-09

    Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment is made for a time trend, which by design may confound the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
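
    The recommended GLMM for a cross-sectional stepped wedge trial with a binary outcome adjusts for the step (time) effect as a fixed effect and for the cluster as a random intercept; a minimal sketch with hypothetical variable names:

    ```r
    ## Minimal GLMM sketch for a cross-sectional stepped wedge trial with a binary
    ## outcome ('sw' and its columns are hypothetical names).
    library(lme4)

    fit <- glmer(outcome ~ intervention + factor(step) + (1 | cluster),
                 data = sw, family = binomial)
    summary(fit)   # intervention effect adjusted for the secular time trend
    ```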

  20. Assessing variation in life-history tactics within a population using mixture regression models: a practical guide for evolutionary ecologists.

    PubMed

    Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel

    2017-05-01

    Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
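
    In R, mixture (latent-class) regression models of this kind are available in the flexmix package; the sketch below contrasts a conventional random-effects model with a mixture of regressions grouped by individual, using hypothetical variable names.

    ```r
    ## Sketch contrasting a mixed model with a mixture regression model
    ## ('lh' and its columns are hypothetical life-history data).
    library(lme4)
    library(flexmix)

    # Conventional mixed model: assumes normally distributed individual effects
    mm <- lmer(mass ~ age + (age | id), data = lh)

    # Mixture regression: allows latent clusters of individual trajectories;
    # the '| id' grouping keeps all observations of an individual in one cluster
    mix  <- stepFlexmix(mass ~ age | id, data = lh, k = 1:4, nrep = 5)
    best <- getModel(mix, "BIC")     # pick the number of clusters by BIC
    parameters(best)                 # cluster-specific intercepts and slopes
    ```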

  1. Non-replication of the association between 5HTTLPR and response to psychological therapy for child anxiety disorders

    PubMed Central

    Lester, Kathryn J.; Roberts, Susanna; Keers, Robert; Coleman, Jonathan R. I.; Breen, Gerome; Wong, Chloe C. Y.; Xu, Xiaohui; Arendt, Kristian; Blatter-Meunier, Judith; Bögels, Susan; Cooper, Peter; Creswell, Cathy; Heiervang, Einar R.; Herren, Chantal; Hogendoorn, Sanne M.; Hudson, Jennifer L.; Krause, Karen; Lyneham, Heidi J.; McKinnon, Anna; Morris, Talia; Nauta, Maaike H.; Rapee, Ronald M.; Rey, Yasmin; Schneider, Silvia; Schneider, Sophie C.; Silverman, Wendy K.; Smith, Patrick; Thastum, Mikael; Thirlwall, Kerstin; Waite, Polly; Wergeland, Gro Janne; Eley, Thalia C.

    2016-01-01

    Background We previously reported an association between 5HTTLPR genotype and outcome following cognitive–behavioural therapy (CBT) in child anxiety (Cohort 1). Children homozygous for the low-expression short-allele showed more positive outcomes. Other similar studies have produced mixed results, with most reporting no association between genotype and CBT outcome. Aims To replicate the association between 5HTTLPR and CBT outcome in child anxiety from the Genes for Treatment study (GxT Cohort 2, n = 829). Method Logistic and linear mixed effects models were used to examine the relationship between 5HTTLPR and CBT outcomes. Mega-analyses using both cohorts were performed. Results There was no significant effect of 5HTTLPR on CBT outcomes in Cohort 2. Mega-analyses identified a significant association between 5HTTLPR and remission from all anxiety disorders at follow-up (odds ratio 0.45, P = 0.014), but not primary anxiety disorder outcomes. Conclusions The association between 5HTTLPR genotype and CBT outcome did not replicate. Short-allele homozygotes showed more positive treatment outcomes, but with small, non-significant effects. Future studies would benefit from utilising whole genome approaches and large, homogenous samples. PMID:26294368

  2. Mixed conditional logistic regression for habitat selection studies.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Courbin, Nicolas

    2010-05-01

    1. Resource selection functions (RSFs) are becoming a dominant tool in habitat selection studies. RSF coefficients can be estimated with unconditional (standard) and conditional logistic regressions. While the advantage of mixed-effects models is recognized for standard logistic regression, mixed conditional logistic regression remains largely overlooked in ecological studies. 2. We demonstrate the significance of mixed conditional logistic regression for habitat selection studies. First, we use spatially explicit models to illustrate how mixed-effects RSFs can be useful in the presence of inter-individual heterogeneity in selection and when the assumption of independence from irrelevant alternatives (IIA) is violated. The IIA hypothesis states that the strength of preference for habitat type A over habitat type B does not depend on the other habitat types also available. Secondly, we demonstrate the significance of mixed-effects models to evaluate habitat selection of free-ranging bison Bison bison. 3. When movement rules were homogeneous among individuals and the IIA assumption was respected, fixed-effects RSFs adequately described habitat selection by simulated animals. In situations violating the inter-individual homogeneity and IIA assumptions, however, RSFs were best estimated with mixed-effects regressions, and fixed-effects models could even provide faulty conclusions. 4. Mixed-effects models indicate that bison did not select farmlands, but exhibited strong inter-individual variations in their response to farmlands. Less than half of the bison preferred farmlands over forests. Conversely, the fixed-effect model simply suggested an overall selection for farmlands. 5. Conditional logistic regression is recognized as a powerful approach to evaluate habitat selection when resource availability changes. This regression is increasingly used in ecological studies, but almost exclusively in the context of fixed-effects models. Fitness maximization can imply differences in trade-offs among individuals, which can yield inter-individual differences in selection and lead to departure from IIA. These situations are best modelled with mixed-effects models. Mixed-effects conditional logistic regression should become a valuable tool for ecological research.
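
    Fixed-effects RSFs of this kind are commonly estimated with conditional logistic regression via survival::clogit; mixed-effects conditional logistic models require specialised tools (for example the mclogit package or Bayesian samplers). A fixed-effects sketch with hypothetical variable names:

    ```r
    ## Fixed-effects conditional logistic RSF sketch ('steps' is a hypothetical
    ## used/available design: one used location and several available locations
    ## per stratum). Mixed-effects versions need specialised packages.
    library(survival)

    rsf <- clogit(used ~ farmland + forest_edge + strata(stratum),
                  data = steps)
    summary(rsf)   # exp(coef) gives the relative selection strength per covariate
    ```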

  3. Multivariate Models for Normal and Binary Responses in Intervention Studies

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen

    2016-01-01

    Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…

  4. Study of Variable Turbulent Prandtl Number Model for Heat Transfer to Supercritical Fluids in Vertical Tubes

    NASA Astrophysics Data System (ADS)

    Tian, Ran; Dai, Xiaoye; Wang, Dabiao; Shi, Lin

    2018-06-01

    In order to improve the predictive performance of numerical simulations of heat transfer to supercritical-pressure fluids, a variable turbulent Prandtl number (Prt) model for vertical upward flow at supercritical pressures was developed in this study. The effects of Prt on the numerical simulation were analyzed, especially for heat transfer deterioration conditions. Based on these analyses, the turbulent Prandtl number was modeled as a function of the turbulent viscosity ratio and the molecular Prandtl number. The model was evaluated using experimental heat transfer data for CO2, water and Freon. Wall temperatures, including those in the heat transfer deterioration cases, were predicted more accurately by this model than by traditional numerical calculations with a constant Prt. By analyzing the predicted results with and without the variable Prt model, it was found that the velocity distribution and turbulent mixing characteristics predicted with the variable Prt model differ considerably from those predicted with a constant Prt. When heat transfer deterioration occurs, the radial velocity profile deviates from the log-law profile and the restrained turbulent mixing then leads to the deteriorated heat transfer.
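
    For context, the turbulent Prandtl number enters the eddy-diffusivity closure of the turbulent heat flux; in standard notation (not the authors' specific correlation for Prt):

    ```latex
    % Eddy-diffusivity closure for the turbulent heat flux, with the turbulent
    % Prandtl number Pr_t relating eddy viscosity \nu_t and eddy diffusivity \alpha_t:
    Pr_t = \frac{\nu_t}{\alpha_t}, \qquad
    \overline{u_j' T'} = -\,\alpha_t \frac{\partial \bar{T}}{\partial x_j}
                       = -\,\frac{\nu_t}{Pr_t}\,\frac{\partial \bar{T}}{\partial x_j}
    ```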

  5. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.

  6. A tutorial on Bayesian bivariate meta-analysis of mixed binary-continuous outcomes with missing treatment effects.

    PubMed

    Gajic-Veljanoski, Olga; Cheung, Angela M; Bayoumi, Ahmed M; Tomlinson, George

    2016-05-30

    Bivariate random-effects meta-analysis (BVMA) is a method of data synthesis that accounts for treatment effects measured on two outcomes. BVMA gives more precise estimates of the population mean and predicted values than two univariate random-effects meta-analyses (UVMAs). BVMA also addresses bias from incomplete reporting of outcomes. A few tutorials have covered technical details of BVMA of categorical or continuous outcomes. Limited guidance is available on how to analyze datasets that include trials with mixed continuous-binary outcomes where treatment effects on one outcome or the other are not reported. Given the advantages of Bayesian BVMA for handling missing outcomes, we present a tutorial for Bayesian BVMA of incompletely reported treatment effects on mixed bivariate outcomes. This step-by-step approach can serve as a model for our intended audience, the methodologist familiar with Bayesian meta-analysis, looking for practical advice on fitting bivariate models. To facilitate application of the proposed methods, we include our WinBUGS code. As an example, we use aggregate-level data from published trials to demonstrate the estimation of the effects of vitamin K and bisphosphonates on two correlated bone outcomes, fracture, and bone mineral density. We present datasets where reporting of the pairs of treatment effects on both outcomes was 'partially' complete (i.e., pairs completely reported in some trials), and we outline steps for modeling the incompletely reported data. To assess what is gained from the additional work required by BVMA, we compare the resulting estimates to those from separate UVMAs. We discuss methodological findings and make four recommendations. Copyright © 2015 John Wiley & Sons, Ltd.
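
    The core of a bivariate random-effects meta-analysis can be written as a two-level normal model; the generic form below (study estimates with known within-study covariance S_i and between-study covariance Sigma) follows standard presentations rather than the tutorial's exact WinBUGS parametrization.

    ```latex
    % Bivariate random-effects meta-analysis (generic form): study i reports
    % effects on two outcomes with known within-study covariance S_i
    \hat{\boldsymbol{\theta}}_i \mid \boldsymbol{\theta}_i \sim N_2(\boldsymbol{\theta}_i, S_i),
    \qquad
    \boldsymbol{\theta}_i \sim N_2(\boldsymbol{\mu}, \Sigma),
    \qquad
    \Sigma = \begin{pmatrix} \tau_1^2 & \rho\,\tau_1\tau_2 \\ \rho\,\tau_1\tau_2 & \tau_2^2 \end{pmatrix}
    ```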

  7. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    2012-01-01

    Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944

  8. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
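
    In R, GLMMs of this type can be fitted with the INLA package, and the leave-one-out and information-criterion quantities discussed here are requested through control.compute. The sketch below uses hypothetical data and variable names and a negative binomial family as one of the dispersion alternatives; the actual SAP models are richer.

    ```r
    ## Minimal INLA sketch for a longitudinal count endpoint with a patient-level
    ## random effect ('vertigo' and its columns are hypothetical names).
    library(INLA)

    fit <- inla(attacks ~ treatment + visit + f(patient_id, model = "iid"),
                family = "nbinomial", data = vertigo,
                control.compute = list(dic = TRUE, cpo = TRUE))

    fit$dic$dic              # deviance information criterion
    hist(fit$cpo$pit)        # probability integral transform for calibration checks
    mean(-log(fit$cpo$cpo))  # mean logarithmic score from leave-one-out predictions
    ```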

  9. Does Marriage Moderate Genetic Effects on Delinquency and Violence?

    PubMed Central

    Li, Yi; Liu, Hexuan; Guo, Guang

    2015-01-01

    Using data from the National Longitudinal Study of Adolescent to Adult Health (N = 1,254), the authors investigated whether marriage can foster desistance from delinquency and violence by moderating genetic effects. In contrast to existing gene–environment research that typically focuses on one or a few genetic polymorphisms, they extended a recently developed mixed linear model to consider the collective influence of 580 single nucleotide polymorphisms in 64 genes related to aggression and risky behavior. The mixed linear model estimates the proportion of variance in the phenotype that is explained by the single nucleotide polymorphisms. The authors found that the proportion of variance in delinquency/violence explained was smaller among married individuals than unmarried individuals. Because selection, confounding, and heterogeneity may bias the estimate of the Gene × Marriage interaction, they conducted a series of analyses to address these issues. The findings suggest that the Gene × Marriage interaction results were not seriously affected by these issues. PMID:26549892

  10. CFD analyses of combustor and nozzle flowfields

    NASA Astrophysics Data System (ADS)

    Tsuei, Hsin-Hua; Merkle, Charles L.

    1993-11-01

    The objectives of the research are to improve design capabilities for low-thrust rocket engines through an understanding of the detailed mixing and combustion processes. A computational fluid dynamics (CFD) technique is employed to model the flowfields within the combustor, nozzle, and near-plume field. The computational modeling of the rocket engine flowfields requires the application of the complete Navier-Stokes equations coupled with species diffusion equations. Of particular interest is a small gaseous hydrogen-oxygen thruster that is considered a coordinated part of an ongoing experimental program at NASA LeRC. The numerical procedure uses both time-marching and time-accurate algorithms, with an LU approximate factorization in time and flux-split upwind differencing in space. The integrity of fuel film cooling along the wall, its effectiveness in mixing with the core flow including unsteady large-scale effects, the resulting impact on performance, and the assessment of the near-plume flow expansion to the finite pressure of an altitude chamber are addressed.

  11. 2009–2010 Seasonal Influenza Vaccination Coverage Among College Students From 8 Universities in North Carolina

    PubMed Central

    Poehling, Katherine A.; Blocker, Jill; Ip, Edward H.; Peters, Timothy R.; Wolfson, Mark

    2012-01-01

    Objective We sought to describe the 2009–2010 seasonal influenza vaccine coverage of college students. Participants 4090 college students from eight North Carolina universities participated in a confidential, web-based survey in October-November 2009. Methods Associations between self-reported 2009–2010 seasonal influenza vaccination and demographic characteristics, campus activities, parental education, and email usage were assessed by bivariate analyses and by a mixed-effects model adjusting for clustering by university. Results Overall, 20% of students (range 14%–30% by university) reported receiving 2009–2010 seasonal influenza vaccine. Being a freshman, attending a private university, having a college-educated parent, and participating in academic clubs/honor societies predicted receipt of influenza vaccine in the mixed-effects model. Conclusions The self-reported 2009–2010 influenza vaccine coverage was one-quarter of the 2020 Healthy People goal (80%) for healthy persons 18–64 years of age. College campuses have the opportunity to enhance influenza vaccine coverage among its diverse student populations. PMID:23157195

  12. 2009-2010 seasonal influenza vaccination coverage among college students from 8 universities in North Carolina.

    PubMed

    Poehling, Katherine A; Blocker, Jill; Ip, Edward H; Peters, Timothy R; Wolfson, Mark

    2012-01-01

    The authors sought to describe the 2009-2010 seasonal influenza vaccine coverage of college students. A total of 4,090 college students from 8 North Carolina universities participated in a confidential, Web-based survey in October-November 2009. Associations between self-reported 2009-2010 seasonal influenza vaccination and demographic characteristics, campus activities, parental education, and e-mail usage were assessed by bivariate analyses and by a mixed-effects model adjusting for clustering by university. Overall, 20% of students (range 14%-30% by university) reported receiving 2009-2010 seasonal influenza vaccine. Being a freshman, attending a private university, having a college-educated parent, and participating in academic clubs/honor societies predicted receipt of influenza vaccine in the mixed-effects model. The self-reported 2009-2010 influenza vaccine coverage was one-quarter of the 2020 Healthy People goal (80%) for healthy persons 18 to 64 years of age. College campuses have the opportunity to enhance influenza vaccine coverage among its diverse student populations.

  13. Buoyancy Driven Coolant Mixing Studies of Natural Circulation Flows at the ROCOM Test Facility Using ANSYS CFX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohne, Thomas; Kliem, Soren; Rohde, Ulrich

    2006-07-01

    Coolant mixing in the cold leg, downcomer and the lower plenum of pressurized water reactors is an important phenomenon mitigating the reactivity insertion into the core. Therefore, mixing of the de-borated slugs with the ambient coolant in the reactor pressure vessel was investigated at the four-loop 1:5-scaled ROCOM mixing test facility. Thermal-hydraulic analyses showed that weakly borated condensate can accumulate, in particular, in the pump loop seals of those loops which do not receive safety injection. After refilling of the primary circuit, natural circulation in the stagnant loops can re-establish simultaneously and the de-borated slugs are shifted towards the reactor pressure vessel (RPV). In the ROCOM experiments, the length of the flow ramp and the initial density difference between the slugs and the ambient coolant were varied. From the test matrix, experiments with 0 and 2% density difference between the de-borated slugs and the ambient coolant were used to validate the CFD software ANSYS CFX. To model the effects of turbulence on the mean flow, a higher-order Reynolds stress turbulence model was employed and a mesh consisting of 6.4 million hybrid elements was utilized. Only the experiments and CFD calculations with modeled density differences show a stratification in the downcomer. Depending on the degree of density difference, the less dense slugs flow around the core barrel at the top of the downcomer. At the opposite side, the lower-borated coolant is entrained by the colder safety injection water and transported to the core. The validation proves that ANSYS CFX is able to appropriately simulate the flow field and mixing effects of coolant with different densities. (authors)

  14. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.

  15. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has long been important in sedimentological research. For instance, various inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses require forward models, and most turbidity-current models employ particles of a single, uniform grain size. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain sizes exist, their application to natural examples is hampered by computational cost (Lesshaft et al., 2011). Here we extend a turbidity-current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models, considering uniform and mixed grain-size particles respectively. We adopted an inverse analysis based on the Simplex method that optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with multiple starting points, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovered the known initial condition of the reference data even when the optimization started far from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution. The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on the model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
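
    The optimization step can be illustrated generically: given any forward model mapping initial flow conditions to a predicted deposit, a multi-start Nelder-Mead (simplex) search minimizes the misfit. Everything below is a placeholder sketch (forward_model and obs_thickness are assumed names), not the authors' shallow-water code.

    ```r
    ## Generic multi-start simplex (Nelder-Mead) inversion sketch.
    ## 'forward_model' stands in for the 1-D shallow-water turbidity-current model;
    ## it should return predicted deposit thicknesses at the observation points.
    misfit <- function(par, observed) {
      pred <- forward_model(h = par[1], U = par[2], C = par[3])
      sum((pred - observed)^2)
    }

    starts <- rbind(c(1, 2, 0.005), c(3, 6, 0.02), c(5, 10, 0.05))  # illustrative starting points
    fits <- lapply(seq_len(nrow(starts)), function(i)
      optim(starts[i, ], misfit, observed = obs_thickness, method = "Nelder-Mead"))
    best <- fits[[which.min(sapply(fits, `[[`, "value"))]]
    best$par   # estimated initial thickness, velocity and concentration
    ```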

  16. Population Pharmacokinetic Analyses of Lithium: A Systematic Review.

    PubMed

    Methaneethorn, Janthima

    2018-02-01

    Even though lithium has been used for the treatment of bipolar disorder for several decades, its toxicities are still being reported. The major limitation in the use of lithium is its narrow therapeutic window. Several methods have been proposed to predict the lithium doses needed to attain therapeutic levels. One of the methods used to guide lithium therapy is the population pharmacokinetic approach, which accounts for inter- and intra-individual variability in predicting lithium doses. Several population pharmacokinetic studies of lithium have been conducted. The objective of this review is to provide information on the population pharmacokinetics of lithium, focusing on the nonlinear mixed effect modeling approach, and to summarize significant factors affecting lithium pharmacokinetics. A literature search of the PubMed database was conducted from inception to December 2016. Studies conducted in humans, using lithium as a study drug, providing population pharmacokinetic analyses of lithium by means of nonlinear mixed effect modeling, were included in this review. Twenty-four articles were identified from the database. Seventeen articles were excluded based on the inclusion and exclusion criteria. A total of seven articles were included in this review. Of these, only one study reported a combined population pharmacokinetic-pharmacodynamic model of lithium. Lithium pharmacokinetics were explained using both one- and two-compartment models. The significant predictors of lithium clearance identified in most studies were renal function and body size. One study reported a significant effect of age on lithium clearance. The typical values of lithium clearance ranged from 0.41 to 9.39 L/h. The magnitude of inter-individual variability in lithium clearance ranged from 12.7 to 25.1%. Only two studies evaluated the models using external data sets. The modeling methodology of each study is summarized and discussed in this review. As a future perspective, a population pharmacokinetic-pharmacodynamic study of lithium is recommended. Moreover, external validation of previously published models should be performed.

  17. Effects of Precipitation on Ocean Mixed-Layer Temperature and Salinity as Simulated in a 2-D Coupled Ocean-Cloud Resolving Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.

    1999-01-01

    A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models could be as large as 0.3 PSU and 0.4 C, respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, fresh water flux exhibits larger spatial fluctuations than surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield a high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.

  18. Incidence and effects of endemic populations of forest pests in young mixed-conifer forests of the Sierra Nevada

    Treesearch

    Carroll B. Williams; David L. Azuma; George T. Ferrell

    1992-01-01

    Approximately 3,200 trees in young mixed-conifer stands were examined for pest activity and human-caused or mechanical injuries, and approximately 25 percent of these trees were randomly selected for stem analyses. The examination of trees felled for stem analyses showed that 409 (47 percent) were free of pests and 466 (53 percent) had one or more pest categories....

  19. Three Approaches to Modeling Gene-Environment Interactions in Longitudinal Family Data: Gene-Smoking Interactions in Blood Pressure.

    PubMed

    Basson, Jacob; Sung, Yun Ju; de Las Fuentes, Lisa; Schwander, Karen L; Vazquez, Ana; Rao, Dabeeru C

    2016-01-01

    Blood pressure (BP) has been shown to be substantially heritable, yet identified genetic variants explain only a small fraction of the heritability. Gene-smoking interactions have detected novel BP loci in cross-sectional family data. Longitudinal family data are available and have additional promise to identify BP loci. However, this type of data presents unique analysis challenges. Although several methods for analyzing longitudinal family data are available, which method is the most appropriate and under what conditions has not been fully studied. Using data from three clinic visits from the Framingham Heart Study, we performed association analysis accounting for gene-smoking interactions in BP at 31,203 markers on chromosome 22. We evaluated three different modeling frameworks: generalized estimating equations (GEE), hierarchical linear modeling, and pedigree-based mixed modeling. The three models performed somewhat comparably, with multiple overlaps in the most strongly associated loci from each model. Loci with the greatest significance were more strongly supported in the longitudinal analyses than in any of the component single-visit analyses. The pedigree-based mixed model was more conservative, with less inflation in the variant main effect and greater deflation in the gene-smoking interactions. The GEE, but not the other two models, resulted in substantial inflation in the tail of the distribution when variants with minor allele frequency <1% were included in the analysis. The choice of analysis method should depend on the model and the structure and complexity of the familial and longitudinal data. © 2015 WILEY PERIODICALS, INC.
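
    As an illustration of one of the three frameworks discussed above, a hedged sketch of a GEE analysis of a single variant with a gene-smoking interaction, clustered on family; the file name, the column names (sbp, snp, smoking, age, family_id) and the exchangeable working correlation are assumptions, not the study's actual specification.

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Assumed long-format data: one row per subject per visit with hypothetical
      # columns sbp, snp (0/1/2 allele count), smoking (0/1), age and family_id.
      df = pd.read_csv("framingham_chr22_visits.csv")

      # GEE clustered on family (the highest level), so that both familial and
      # repeated-measure correlation stay within clusters; the interaction term
      # carries the gene-smoking effect.
      model = smf.gee("sbp ~ snp * smoking + age",
                      groups="family_id",
                      data=df,
                      cov_struct=sm.cov_struct.Exchangeable(),
                      family=sm.families.Gaussian())
      print(model.fit().summary())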

  20. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  1. Genetic factors controlling wool shedding in a composite Easycare sheep flock.

    PubMed

    Matika, O; Bishop, S C; Pong-Wong, R; Riggio, V; Headon, D J

    2013-12-01

    Historically, sheep have been selectively bred for desirable traits including wool characteristics. However, recent moves towards extensive farming and reduced farm labour have seen a renewed interest in Easycare breeds. The aim of this study was to quantify the underlying genetic architecture of wool shedding in an Easycare flock. Wool shedding scores were collected from 565 pedigreed commercial Easycare sheep from 2002 to 2010. The wool scoring system was based on a 10-point (0-9) scale, with score 0 for animals retaining full fleece and 9 for those completely shedding. DNA was sampled from 200 animals, of which 48 with extreme phenotypes were genotyped using a 50K SNP chip. Three genetic analyses were performed: heritability analysis, complex segregation analysis to test for a major gene hypothesis and a genome-wide association study to map regions in the genome affecting the trait. Phenotypes were treated as a continuous variable, as a binary variable, or as categories. High estimates of heritability (0.80 when treated as continuous, 0.65-0.75 as binary and 0.75 as categories) for shedding were obtained from linear mixed model analyses. Complex segregation analysis gave similar estimates (0.80 ± 0.06) to those above with additional evidence for a major gene with dominance effects. Mixed model association analyses identified four significant (P < 0.05) SNPs. Further analyses of these four SNPs in all 200 animals revealed that one of the SNPs displayed dominance effects similar to those obtained from the complex segregation analyses. In summary, we found strong genetic control for wool shedding, demonstrated the possibility of a single putative dominant gene controlling this trait and identified four SNPs that may be in partial linkage disequilibrium with gene(s) controlling shedding. © 2013 University of Edinburgh, Animal Genetics © 2013 Stichting International Foundation for Animal Genetics.

  2. Stand level height-diameter mixed effects models: parameters fitted using loblolly pine but calibrated for sweetgum

    Treesearch

    Curtis L. Vanderschaaf

    2008-01-01

    Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...

  3. Health-Related Quality-of-Life Findings for the Prostate Cancer Prevention Trial

    PubMed Central

    2012-01-01

    Background The Prostate Cancer Prevention Trial (PCPT)—a randomized placebo-controlled study of the efficacy of finasteride in preventing prostate cancer—offered the opportunity to prospectively study effects of finasteride and other covariates on the health-related quality of life of participants in a multiyear trial. Methods We assessed three health-related quality-of-life domains (measured with the Health Survey Short Form–36: Physical Functioning, Mental Health, and Vitality scales) via questionnaires completed by PCPT participants at enrollment (3 months before randomization), at 6 months after randomization, and annually for 7 years. Covariate data obtained at enrollment from patient-completed questionnaires were included in our model. Mixed-effects model analyses and a cross-sectional presentation at three time points began at 6 months after randomization. All statistical tests were two-sided. Results For the physical function outcome (n = 16 077), neither the finasteride main effect nor the finasteride interaction with time were statistically significant. The effects of finasteride on physical function were minor and accounted for less than a 1-point difference over time in Physical Functioning scores (mixed-effect estimate = 0.07, 95% confidence interval [CI] = −0.28 to 0.42, P = .71). Comorbidities such as congestive heart failure (estimate = −5.64, 95% CI = −7.96 to −3.32, P < .001), leg pain (estimate = −2.57, 95% CI = −3.04 to −2.10, P < .001), and diabetes (estimate = −1.31, 95% CI = −2.04 to −0.57, P < .001) had statistically significant negative effects on physical function, as did current smoking (estimate = −2.34, 95% CI = −2.97 to −1.71, P < .001) and time on study (estimate = −1.20, 95% CI = −1.36 to −1.03, P < .001). Finasteride did not have a statistically significant effect on the other two dependent variables, mental health and vitality, either in the mixed-effects analyses or in the cross-sectional analysis at any of the three time points. Conclusion Finasteride did not negatively affect SF–36 Physical Functioning, Mental Health, or Vitality scores. PMID:22972968

  4. An independent component analysis confounding factor correction framework for identifying broad impact expression quantitative trait loci

    PubMed Central

    Ju, Jin Hyun; Crystal, Ronald G.

    2017-01-01

    Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL. PMID:28505156

  5. An independent component analysis confounding factor correction framework for identifying broad impact expression quantitative trait loci.

    PubMed

    Ju, Jin Hyun; Shenoy, Sushila A; Crystal, Ronald G; Mezey, Jason G

    2017-05-01

    Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL.

  6. Underestimation of Variance of Predicted Health Utilities Derived from Multiattribute Utility Instruments.

    PubMed

    Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M

    2017-04-01

    Parameter uncertainty in value sets of multiattribute utility-based instruments (MAUIs) has received little attention previously. This false precision leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty of MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults (n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set) with the standard error of the estimated mean utilities based on multiple imputation and the standard error using the conventional approach of using MAUI (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring uncertainty of the predicted health utilities derived from MAUIs could lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
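
    A minimal sketch of the multiple-imputation idea described above, under toy assumptions: draw M plausible value sets from the scoring algorithm's (assumed) posterior, score every respondent under each draw, and combine the M estimates with Rubin's rules so that between-imputation spread enters the standard error. The coefficients, responses and linear scoring rule are illustrative placeholders, not the EQ-5D-3L algorithm.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical stand-ins: posterior mean/SD of five dimension decrements and
      # respondents' dimension levels coded 0-2 (illustration only).
      coef_mean = np.array([0.10, 0.08, 0.06, 0.12, 0.07])
      coef_sd   = np.array([0.01, 0.01, 0.01, 0.02, 0.01])
      responses = rng.integers(0, 3, size=(3958, 5))

      def score(resp, coefs):
          # Toy linear scoring: utility = 1 - sum of dimension decrements.
          return 1.0 - resp @ coefs

      M = 20                                                  # imputed value sets
      means, within_vars = [], []
      for _ in range(M):
          coefs = rng.normal(coef_mean, coef_sd)              # one draw of the value set
          utils = score(responses, coefs)
          means.append(utils.mean())
          within_vars.append(utils.var(ddof=1) / len(utils))  # within-imputation variance

      # Rubin's rules: total variance = within + (1 + 1/M) * between.
      qbar    = np.mean(means)
      within  = np.mean(within_vars)
      between = np.var(means, ddof=1)
      se = np.sqrt(within + (1.0 + 1.0 / M) * between)
      print(f"mean utility {qbar:.3f}, SE {se:.3f}")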

  7. Effects of a job crafting intervention program on work engagement among Japanese employees: a pretest-posttest study.

    PubMed

    Sakuraya, Asuka; Shimazu, Akihito; Imamura, Kotaro; Namba, Katsuyuki; Kawakami, Norito

    2016-10-24

    Job crafting, an employee-initiated job design/redesign, has become important for employees' well-being, such as work engagement. This study examined the effectiveness of a newly developed job crafting intervention program on work engagement (as the primary outcome), as well as job crafting and psychological distress (as secondary outcomes), using a pretest-posttest study design among Japanese employees. Participants were managers of a private company and a private psychiatric hospital in Japan. The job crafting intervention program consisted of two 120-min sessions with a two-week interval between them. Outcomes were assessed at baseline (Time 1), post-intervention (Time 2), and a one-month follow-up (Time 3). Mixed growth model analyses were conducted using time (Time 1, Time 2, and Time 3) as an indicator of intervention effect. Effect sizes were calculated using Cohen's d. The program showed a significant positive effect on work engagement (t = 2.20, p = 0.03) in the mixed growth model analyses, but with only small effect sizes (Cohen's d = 0.33 at Time 2 and 0.26 at Time 3). The program also significantly improved job crafting (t = 2.36, p = 0.02; Cohen's d = 0.36 at Time 2 and 0.47 at Time 3) and reduced psychological distress (t = -2.06, p = 0.04; Cohen's d = -0.15 at Time 2 and -0.31 at Time 3). The study indicated that the newly developed job crafting intervention program was effective in increasing work engagement, as well as in improving job crafting and decreasing psychological distress, among Japanese managers. UMIN Clinical Trials Registry UMIN000024062. Retrospectively registered 15 September 2016.

  8. Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling

    PubMed Central

    Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.

    2010-01-01

    Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses of data from partnered parents having a child with a chronic condition on their child's adaptation to the condition and on the family's general functioning and management of the condition are provided. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
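
    A hedged sketch of the dyadic analysis described above: a random intercept for family absorbs the intrafamilial correlation, and the ICC follows from the two variance components. The file and column names are assumptions for illustration.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Assumed layout: one row per parent (two rows per family) with hypothetical
      # columns adaptation (outcome), parent_role (mother/father) and family_id.
      df = pd.read_csv("dyad_scores.csv")

      # Random intercept for family captures the intrafamilial correlation (IFC).
      fit = smf.mixedlm("adaptation ~ parent_role",
                        data=df,
                        groups=df["family_id"]).fit(reml=True)
      print(fit.summary())

      # Intraclass correlation = between-family variance / total variance.
      var_family = fit.cov_re.iloc[0, 0]
      var_resid = fit.scale
      print("ICC:", var_family / (var_family + var_resid))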

  9. Recognition of facial expressions of mixed emotions in school-age children exposed to terrorism.

    PubMed

    Scrimin, Sara; Moscardino, Ughetta; Capello, Fabia; Altoè, Gianmarco; Axia, Giovanna

    2009-09-01

    This exploratory study aims at investigating the effects of terrorism on children's ability to recognize emotions. A sample of 101 exposed and 102 nonexposed children (mean age = 11 years), balanced for age and gender, were assessed 20 months after a terrorist attack in Beslan, Russia. Two trials controlled for children's ability to match a facial emotional stimulus with an emotional label and their ability to match an emotional label with an emotional context. The experimental trial evaluated the relation between exposure to terrorism and children's free labeling of mixed emotion facial stimuli created by morphing between 2 prototypical emotions. Repeated measures analyses of covariance revealed that exposed children correctly recognized pure emotions. Four log-linear models were performed to explore the association between exposure group and category of answer given in response to different mixed emotion facial stimuli. Model parameters indicated that, compared with nonexposed children, exposed children (a) labeled facial expressions containing anger and sadness significantly more often than expected as anger, and (b) produced fewer correct answers in response to stimuli containing sadness as a target emotion.

  10. Metrics to quantify the importance of mixing state for CCN activity

    DOE PAGES

    Ching, Joseph; Fast, Jerome; West, Matthew; ...

    2017-06-21

    It is commonly assumed that models are more prone to errors in predicted cloud condensation nuclei (CCN) concentrations when the aerosol populations are externally mixed. In this work we investigate this assumption by using the mixing state index (χ) proposed by Riemer and West (2013) to quantify the degree of external and internal mixing of aerosol populations. We combine this metric with particle-resolved model simulations to quantify error in CCN predictions when mixing state information is neglected, exploring a range of scenarios that cover different conditions of aerosol aging. We show that mixing state information does indeed become unimportant for more internally mixed populations, more precisely for populations with χ larger than 75%. For more externally mixed populations (χ below 20%) the relationship of χ and the error in CCN predictions is not unique and ranges from lower than -40% to about 150%, depending on the underlying aerosol population and the environmental supersaturation. We explain the reasons for this behavior with detailed process analyses.
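
    A small sketch of how the mixing state index of Riemer and West (2013) can be computed from per-particle species masses, following the published diversity definitions (per-particle diversity, alpha diversity, gamma diversity); the toy two-particle, two-species population is illustrative only.

      import numpy as np

      def mixing_state_index(masses):
          # masses: array of shape (n_particles, n_species) of per-particle species mass.
          mu_i = masses.sum(axis=1)                   # total mass of each particle
          p_i = mu_i / mu_i.sum()                     # mass fraction of each particle
          p_ia = masses / mu_i[:, None]               # species fractions within particles
          p_a = masses.sum(axis=0) / mu_i.sum()       # bulk species fractions

          with np.errstate(divide="ignore", invalid="ignore"):
              H_i = -np.sum(np.where(p_ia > 0, p_ia * np.log(p_ia), 0.0), axis=1)
              H_gamma = -np.sum(np.where(p_a > 0, p_a * np.log(p_a), 0.0))

          D_alpha = np.exp(np.sum(p_i * H_i))         # average per-particle diversity
          D_gamma = np.exp(H_gamma)                   # bulk population diversity
          return (D_alpha - 1.0) / (D_gamma - 1.0)    # chi, between 0 and 1

      # Toy example: two particles, two species (e.g. BC and sulfate).
      external = np.array([[1.0, 0.0], [0.0, 1.0]])   # fully externally mixed -> chi = 0
      internal = np.array([[0.5, 0.5], [0.5, 0.5]])   # fully internally mixed -> chi = 1
      print(mixing_state_index(external), mixing_state_index(internal))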

  11. Metrics to quantify the importance of mixing state for CCN activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, Joseph; Fast, Jerome; West, Matthew

    It is commonly assumed that models are more prone to errors in predicted cloud condensation nuclei (CCN) concentrations when the aerosol populations are externally mixed. In this work we investigate this assumption by using the mixing state index (χ) proposed by Riemer and West (2013) to quantify the degree of external and internal mixing of aerosol populations. We combine this metric with particle-resolved model simulations to quantify error in CCN predictions when mixing state information is neglected, exploring a range of scenarios that cover different conditions of aerosol aging. We show that mixing state information does indeed become unimportant for more internally mixed populations, more precisely for populations with χ larger than 75%. For more externally mixed populations (χ below 20%) the relationship of χ and the error in CCN predictions is not unique and ranges from lower than -40% to about 150%, depending on the underlying aerosol population and the environmental supersaturation. We explain the reasons for this behavior with detailed process analyses.

  12. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  13. MILP model for integrated balancing and sequencing mixed-model two-sided assembly line with variable launching interval and assignment restrictions

    NASA Astrophysics Data System (ADS)

    Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.

    2017-09-01

    This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL: line balancing and model sequencing. In previous studies, many researchers considered these problems separately and only a few studied them simultaneously for one-sided lines. In this study, however, the two problems are solved simultaneously to obtain a more efficient solution. The Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is generated by considering a variable launching interval and assignment restriction constraints. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the solver CPLEX. Experimental results indicate that integrating the model sequencing and line balancing problems helps to minimise the proposed objective function.

  14. An investigation of the predictors of photoprotection and UVR dose to the face in patients with XP: a protocol using observational mixed methods

    PubMed Central

    Walburn, Jessica; Sarkany, Robert; Norton, Sam; Foster, Lesley; Morgan, Myfanwy; Sainsbury, Kirby; Araújo-Soares, Vera; Anderson, Rebecca; Garrood, Isabel; Heydenreich, Jakob; Sniehotta, Falko F; Vieira, Rute; Wulf, Hans Christian; Weinman, John

    2017-01-01

    Introduction Xeroderma pigmentosum (XP) is a rare genetic condition caused by defective nucleotide excision repair and characterised by skin cancer, ocular and neurological involvement. Stringent ultraviolet protection is the only way to prevent skin cancer. Despite the risks, some patients’ photoprotection is poor, with a potentially devastating impact on their prognosis. The aim of this research is to identify disease-specific and psychosocial predictors of photoprotection behaviour and ultraviolet radiation (UVR) dose to the face. Methods and analysis Mixed methods research based on 45 UK patients will involve qualitative interviews to identify individuals’ experience of XP and the influences on their photoprotection behaviours and a cross-sectional quantitative survey to assess biopsychosocial correlates of these behaviours at baseline. This will be followed by objective measurement of UVR exposure for 21 days by wrist-worn dosimeter and daily recording of photoprotection behaviours and psychological variables for up to 50 days in the summer months. This novel methodology will enable UVR dose reaching the face to be calculated and analysed as a clinically relevant endpoint. A range of qualitative and quantitative analytical approaches will be used, reflecting the mixed methods (eg, cross-sectional qualitative interviews, n-of-1 studies). Framework analysis will be used to analyse the qualitative interviews; mixed-effects longitudinal models will be used to examine the association of clinical and psychosocial factors with the average daily UVR dose; dynamic logistic regression models will be used to investigate participant-specific psychosocial factors associated with photoprotection behaviours. Ethics and dissemination This research has been approved by Camden and King’s Cross Research Ethics Committee 15/LO/1395. The findings will be published in peer-reviewed journals and presented at national and international scientific conferences. PMID:28827277

  15. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

    Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice to build mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models. The discussion also highlights the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
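
    A brief sketch, under assumed column names, of the centering and scaling advice in this abstract: standardize the continuous predictors before fitting so that the optimizer works on comparable scales, then fit a random-intercept-and-slope model.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Assumed long-format data with hypothetical columns alcohol_use (outcome),
      # time (wave), age_baseline and subject_id.
      df = pd.read_csv("adolescent_alcohol.csv")

      # Center and scale continuous predictors: this improves the chances of
      # convergence and the numerical conditioning of the mixed-model fit.
      for col in ["time", "age_baseline"]:
          df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

      # Random intercept and random slope for time within subject.
      fit = smf.mixedlm("alcohol_use ~ time_z + age_baseline_z",
                        data=df,
                        groups=df["subject_id"],
                        re_formula="~time_z").fit()
      print(fit.summary())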

  16. Scale model performance test investigation of mixed flow exhaust systems for an energy efficient engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1983-01-01

    As part of the NASA Energy Efficient Engine program, scale-model performance tests of a mixed flow exhaust system were conducted. The tests were used to evaluate the performance of exhaust system mixers for high-bypass, mixed-flow turbofan engines. The tests indicated that: (1) mixer penetration has the most significant effect on both mixing effectiveness and mixer pressure loss; (2) mixing/tailpipe length improves mixing effectiveness; (3) gap reduction between the mixer and centerbody increases mixing effectiveness; (4) mixer cross-sectional shape influences mixing effectiveness; (5) lobe number affects the degree of mixing; and (6) mixer aerodynamic pressure losses are a function of secondary flows inherent to the lobed mixer concept.

  17. Constraining Carbonaceous Aerosol Climate Forcing by Bridging Laboratory, Field and Modeling Studies

    NASA Astrophysics Data System (ADS)

    Dubey, M. K.; Aiken, A. C.; Liu, S.; Saleh, R.; Cappa, C. D.; Williams, L. R.; Donahue, N. M.; Gorkowski, K.; Ng, N. L.; Mazzoleni, C.; China, S.; Sharma, N.; Yokelson, R. J.; Allan, J. D.; Liu, D.

    2014-12-01

    Biomass and fossil fuel combustion emits black (BC) and brown carbon (BrC) aerosols that absorb sunlight to warm climate and organic carbon (OC) aerosols that scatter sunlight to cool climate. The net forcing depends strongly on the composition, mixing state and transformations of these carbonaceous aerosols. Complexities from the large variability of fuel types, combustion conditions and aging processes have confounded their treatment in models. We analyse recent laboratory and field measurements to uncover the fundamental mechanisms that control the chemical, optical and microphysical properties of carbonaceous aerosols, as elaborated below. The wavelength dependence of absorption and the single scattering albedo (ω) of fresh biomass burning aerosols produced from many fuels during FLAME-4 were analysed to determine the factors that control the variability in ω. Results show that ω varies strongly with fire-integrated modified combustion efficiency (MCEFI): higher MCEFI results in lower ω values and greater spectral dependence of ω (Liu et al GRL 2014). A parameterization of ω as a function of MCEFI for fresh BB aerosols is derived from the laboratory data and is evaluated against field data, including BBOP. Our laboratory studies also demonstrate that BrC production correlates with BC, indicating that they are produced by a common mechanism driven by MCEFI (Saleh et al NGeo 2014). We show that BrC absorption is concentrated in the extremely low volatility component that favours long-range transport. We observe substantial absorption enhancement for internally mixed BC from diesel and wood combustion near London during ClearFlo. While the absorption enhancement is due to BC particles coated by co-emitted OC in urban regions, it increases with photochemical age in rural areas and is simulated by core-shell models. We measure BrC absorption that is concentrated in the extremely low volatility components and attribute it to wood burning. Our results support parameterizations of enhanced light absorption by internally mixed BC in models and identify mixed biomass and fossil combustion regions where this effect is large. We unify the treatment of carbonaceous aerosol components and their interactions to simplify and verify their representation in climate models, and re-evaluate their direct radiative forcing.

  18. Transcriptional responses of zebrafish to complex metal mixtures in laboratory studies overestimates the responses observed with environmental water.

    PubMed

    Pradhan, Ajay; Ivarsson, Per; Ragnvaldsson, Daniel; Berg, Håkan; Jass, Jana; Olsson, Per-Erik

    2017-04-15

    Metals released into the environment continue to be of concern for human health. However, risk assessment of metal exposure is often based on total metal levels and usually does not take bioavailability data, metal speciation or matrix effects into consideration. The continued development of biological endpoint analyses are therefore of high importance for improved eco-toxicological risk analyses. While there is an on-going debate concerning synergistic or additive effects of low-level mixed exposures there is little environmental data confirming the observations obtained from laboratory experiments. In the present study we utilized qRT-PCR analysis to identify key metal response genes to develop a method for biomonitoring and risk-assessment of metal pollution. The gene expression patterns were determined for juvenile zebrafish exposed to waters from sites down-stream of a closed mining operation. Genes representing different physiological processes including stress response, inflammation, apoptosis, drug metabolism, ion channels and receptors, and genotoxicity were analyzed. The gene expression patterns of zebrafish exposed to laboratory prepared metal mixes were compared to the patterns obtained with fish exposed to the environmental samples with the same metal composition and concentrations. Exposure to environmental samples resulted in fewer alterations in gene expression compared to laboratory mixes. A biotic ligand model (BLM) was used to approximate the bioavailability of the metals in the environmental setting. However, the BLM results were not in agreement with the experimental data, suggesting that the BLM may be overestimating the risk in the environment. The present study therefore supports the inclusion of site-specific biological analyses to complement the present chemical based assays used for environmental risk-assessment. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Effectiveness of a worksite mindfulness-based multi-component intervention on lifestyle behaviors

    PubMed Central

    2014-01-01

    Introduction Overweight and obesity are associated with an increased risk of morbidity. Mindfulness training could be an effective strategy to optimize lifestyle behaviors related to body weight gain. The aim of this study was to evaluate the effectiveness of a worksite mindfulness-based multi-component intervention on vigorous physical activity in leisure time, sedentary behavior at work, fruit intake and determinants of these behaviors. The control group received information on existing lifestyle behavior-related facilities that were already available at the worksite. Methods In a randomized controlled trial design (n = 257), 129 workers received a mindfulness training, followed by e-coaching, lunch walking routes and fruit. Outcome measures were assessed at baseline and after 6 and 12 months using questionnaires. Physical activity was also measured using accelerometers. Effects were analyzed using linear mixed effect models according to the intention-to-treat principle. Linear regression models (complete case analyses) were used as sensitivity analyses. Results There were no significant differences in lifestyle behaviors and determinants of these behaviors between the intervention and control group after 6 or 12 months. The sensitivity analyses showed effect modification for gender in sedentary behavior at work at 6-month follow-up, although the main analyses did not. Conclusions This study did not show an effect of a worksite mindfulness-based multi-component intervention on lifestyle behaviors and behavioral determinants after 6 and 12 months. The effectiveness of a worksite mindfulness-based multi-component intervention as a health promotion intervention for all workers could not be established. PMID:24467802

  20. Therapy preferences of patients with lung and colon cancer: a discrete choice experiment.

    PubMed

    Schmidt, Katharina; Damm, Kathrin; Vogel, Arndt; Golpon, Heiko; Manns, Michael P; Welte, Tobias; Graf von der Schulenburg, J-Matthias

    2017-01-01

    There is increasing interest in studies that examine patient preferences to measure health-related outcomes. Understanding patients' preferences can improve the treatment process and is particularly relevant for oncology. In this study, we aimed to identify the subgroup-specific treatment preferences of German patients with lung cancer (LC) or colorectal cancer (CRC). Six discrete choice experiment (DCE) attributes were established on the basis of a systematic literature review and qualitative interviews. The DCE analyses comprised a generalized linear mixed-effects model and a latent class mixed logit model. The study cohort comprised 310 patients (194 with LC, 108 with CRC, 8 with both types of cancer) with a median age of 63 (SD = 10.66) years. The generalized linear mixed-effects model showed a significant (P < 0.05) degree of association for all of the tested attributes. "Strongly increased life expectancy" was the attribute given the greatest weight by all patient groups. Using latent class mixed logit model analysis, we identified three classes of patients. Patients who were better informed tended to prefer a more balanced relationship between length and health-related quality of life (HRQoL) than those who were less informed. Class 2 (LC patients with low HRQoL who had undergone surgery) gave a very strong weighting to increased length of life. We deduced from Class 3 patients that those with a relatively good life expectancy (CRC compared with LC) gave a greater weight to moderate effects on HRQoL than to a longer life. Overall survival was the most important attribute of therapy for patients with LC or CRC. Differences in treatment preferences between subgroups should be considered in regard to treatment and development of guidelines. Patients' preferences were not affected by sex or age, but were affected by the cancer type, HRQoL, surgery status, and the main source of information on the disease.

  1. Positive and negative generation effects in source monitoring.

    PubMed

    Riefer, David M; Chien, Yuchin; Reimer, Jason F

    2007-10-01

    Research is mixed as to whether self-generation improves memory for the source of information. We propose the hypothesis that positive generation effects (better source memory for self-generated information) occur in reality-monitoring paradigms, while negative generation effects (better source memory for externally presented information) tend to occur in external source-monitoring paradigms. This hypothesis was tested in an experiment in which participants read or generated words, followed by a memory test for the source of each word (read or generated) and the word's colour. Meiser and Bröder's (2002) multinomial model for crossed source dimensions was used to analyse the data, showing that source memory for generation (reality monitoring) was superior for the generated words, while source memory for word colour (external source monitoring) was superior for the read words. The model also revealed the influence of strong response biases in the data, demonstrating the usefulness of formal modelling when examining generation effects in source monitoring.

  2. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    NASA Astrophysics Data System (ADS)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.

  3. Modeling optimal treatment strategies in a heterogeneous mixing model.

    PubMed

    Choe, Seoyun; Lee, Sunmi

    2015-11-25

    Many mathematical models assume random or homogeneous mixing for various infectious diseases. Homogeneous mixing can be generalized to mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of the heterogeneously mixing populations. Contact or mixing patterns are difficult to measure in many infectious diseases including influenza. Mixing patterns are considered to be one of the critical factors for infectious disease modeling. A two-group influenza model is considered to evaluate the impact of heterogeneous mixing on the influenza transmission dynamics. Heterogeneous mixing between two groups with two different activity levels includes proportionate mixing, preferred mixing and like-with-like mixing. Furthermore, the optimal control problem is formulated in this two-group influenza model to identify the group-specific optimal treatment strategies at a minimal cost. We investigate group-specific optimal treatment strategies under various mixing scenarios. The characteristics of the two-group influenza dynamics have been investigated in terms of the basic reproduction number and the final epidemic size under various mixing scenarios. As the mixing patterns become proportionate mixing, the basic reproduction number becomes smaller; however, the final epidemic size becomes larger. This is due to the fact that the number of infected people increases only slightly in the higher activity level group, while the number of infected people increases more significantly in the lower activity level group. Our results indicate that more intensive treatment of both groups at the early stage is the most effective treatment regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of different group activity levels and group population sizes. Mixing patterns can play a critical role in the effectiveness of optimal treatments. As the mixing becomes more like-with-like mixing, treating the higher activity group in the population is almost as effective as treating the entire populations since it reduces the number of disease cases effectively but only requires similar treatments. The gain becomes more pronounced as the basic reproduction number increases. This can be a critical issue which must be considered for future pandemic influenza interventions, especially when there are limited resources available.
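
    A hedged sketch of the two-group setting described above: an SIR system whose contact matrix interpolates between proportionate mixing and pure like-with-like mixing through a preference parameter eps (a common "preferred mixing" construction); all parameter values are illustrative, not those of the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative two-group population with different activity levels.
      N = np.array([8000.0, 2000.0])        # group sizes
      a = np.array([1.0, 3.0])              # relative contact (activity) rates
      beta, gamma = 0.06, 0.25              # transmission per contact, recovery rate

      def contact_matrix(eps):
          # eps = 0 gives proportionate mixing; eps = 1 gives pure within-group
          # (like-with-like) mixing. Rows sum to one for any eps.
          prop = a * N / np.sum(a * N)
          return eps * np.eye(2) + (1.0 - eps) * np.tile(prop, (2, 1))

      def sir_rhs(t, y, C):
          S, I = y[:2], y[2:]
          lam = beta * a * (C @ (I / N))    # group-specific force of infection
          return np.concatenate([-lam * S, lam * S - gamma * I])

      y0 = np.concatenate([N - 1.0, [1.0, 1.0]])
      for eps in (0.0, 0.5, 0.9):
          sol = solve_ivp(sir_rhs, (0.0, 300.0), y0, args=(contact_matrix(eps),))
          print(f"eps = {eps:.1f}: final epidemic size ~ {N.sum() - sol.y[:2, -1].sum():.0f}")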

  4. Analyzing Longitudinal Data with Multilevel Models: An Example with Individuals Living with Lower Extremity Intra-articular Fractures

    PubMed Central

    Kwok, Oi-Man; Underhill, Andrea T.; Berry, Jack W.; Luo, Wen; Elliott, Timothy R.; Yoon, Myeongsun

    2008-01-01

    The use and quality of longitudinal research designs has increased over the past two decades, and new approaches for analyzing longitudinal data, including multi-level modeling (MLM) and latent growth modeling (LGM), have been developed. The purpose of this paper is to demonstrate the use of MLM and its advantages in analyzing longitudinal data. Data from a sample of individuals with intra-articular fractures of the lower extremity from the University of Alabama at Birmingham’s Injury Control Research Center is analyzed using both SAS PROC MIXED and SPSS MIXED. We start our presentation with a discussion of data preparation for MLM analyses. We then provide example analyses of different growth models, including a simple linear growth model and a model with a time-invariant covariate, with interpretation for all the parameters in the models. More complicated growth models with different between- and within-individual covariance structures and nonlinear models are discussed. Finally, information related to MLM analysis such as online resources is provided at the end of the paper. PMID:19649151
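
    The paper demonstrates these models in SAS PROC MIXED and SPSS MIXED; the same linear growth model with a time-invariant covariate can be sketched, under assumed variable names, with a random intercept and slope for time and a cross-level interaction.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Assumed long format: one row per person-visit with hypothetical columns
      # outcome (functional score), months (time since injury), severity and id.
      df = pd.read_csv("fractures_long.csv")

      # Linear growth model: fixed effects for time, the time-invariant covariate
      # and their interaction; random intercept and slope for time within person.
      fit = smf.mixedlm("outcome ~ months * severity",
                        data=df,
                        groups=df["id"],
                        re_formula="~months").fit()
      print(fit.summary())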

  5. The effects of mixed layer dynamics on ice growth in the central Arctic

    NASA Astrophysics Data System (ADS)

    Kitchen, Bruce R.

    1992-09-01

    The thermodynamic model of Thorndike (1992) is coupled to a one-dimensional, two-layer ocean entrainment model to study the effect of mixed layer dynamics on ice growth and the variation in the ocean heat flux into the ice due to mixed layer entrainment. Model simulations show the existence of a negative feedback between ice growth and mixed layer entrainment, and that the underlying ocean salinity has a greater effect on the ocean heat flux than do variations in the underlying ocean temperature. Model simulations for a variety of surface forcings and initial conditions demonstrate the need to include mixed layer dynamics for realistic ice prediction in the Arctic.

  6. Modelling of upper ocean mixing by wave-induced turbulence

    NASA Astrophysics Data System (ADS)

    Ghantous, Malek; Babanin, Alexander

    2013-04-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into models, and real gains have been made in terms of increased fidelity to observational data. However our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing models need refinement and propose an alternative model. We use two of the models to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.

  7. Restructuring in response to case mix reimbursement in nursing homes: A contingency approach

    PubMed Central

    Zinn, Jacqueline; Feng, Zhanlian; Mor, Vincent; Intrator, Orna; Grabowski, David

    2013-01-01

    Background Resident-based case mix reimbursement has become the dominant mechanism for publicly funded nursing home care. In 1998 skilled nursing facility reimbursement changed from cost-based to case mix adjusted payments under the Medicare Prospective Payment System for the costs of all skilled nursing facility care provided to Medicare recipients. In addition, as of 2004, 35 state Medicaid programs had implemented some form of case mix reimbursement. Purpose The purpose of the study is to determine if the implementation of Medicare and Medicaid case mix reimbursement increased the administrative burden on nursing homes, as evidenced by increased levels of nurses in administrative functions. Methodology/Approach The primary data for this study come from the Centers for Medicare and Medicaid Services Online Survey Certification and Reporting database from 1997 through 2004, a national nursing home database containing aggregated facility-level information, including staffing, organizational characteristics and resident conditions, on all Medicare/Medicaid certified nursing facilities in the country. We conducted multivariate regression analyses using a facility fixed-effects model to examine the effects of the implementation of Medicaid case mix reimbursement and Medicare Prospective Payment System on changes in the level of total administrative nurse staffing in nursing homes. Findings Both Medicaid case mix reimbursement and Medicare Prospective Payment System increased the level of administrative nurse staffing, on average by 5.5% and 4.0% respectively. However, lack of evidence for a substitution effect suggests that any decline in direct care staffing after the introduction of case mix reimbursement is not attributable to a shift from clinical nursing resources to administrative functions. Practice Implications Our findings indicate that the administrative burden posed by case mix reimbursement has resource implications for all freestanding facilities. At the margin, the increased administrative burden imposed by case mix may become a factor influencing a range of decisions, including resident admission and staff hiring. PMID:18360162

  8. Restructuring in response to case mix reimbursement in nursing homes: a contingency approach.

    PubMed

    Zinn, Jacqueline; Feng, Zhanlian; Mor, Vincent; Intrator, Orna; Grabowski, David

    2008-01-01

    Resident-based case mix reimbursement has become the dominant mechanism for publicly funded nursing home care. In 1998 skilled nursing facility reimbursement changed from cost-based to case mix adjusted payments under the Medicare Prospective Payment System for the costs of all skilled nursing facility care provided to Medicare recipients. In addition, as of 2004, 35 state Medicaid programs had implemented some form of case mix reimbursement. The purpose of the study is to determine if the implementation of Medicare and Medicaid case mix reimbursement increased the administrative burden on nursing homes, as evidenced by increased levels of nurses in administrative functions. The primary data for this study come from the Centers for Medicare and Medicaid Services Online Survey Certification and Reporting database from 1997 through 2004, a national nursing home database containing aggregated facility-level information, including staffing, organizational characteristics and resident conditions, on all Medicare/Medicaid certified nursing facilities in the country. We conducted multivariate regression analyses using a facility fixed-effects model to examine the effects of the implementation of Medicaid case mix reimbursement and Medicare Prospective Payment System on changes in the level of total administrative nurse staffing in nursing homes. Both Medicaid case mix reimbursement and Medicare Prospective Payment System increased the level of administrative nurse staffing, on average by 5.5% and 4.0% respectively. However, lack of evidence for a substitution effect suggests that any decline in direct care staffing after the introduction of case mix reimbursement is not attributable to a shift from clinical nursing resources to administrative functions. Our findings indicate that the administrative burden posed by case mix reimbursement has resource implications for all freestanding facilities. At the margin, the increased administrative burden imposed by case mix may become a factor influencing a range of decisions, including resident admission and staff hiring.
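
    A hedged sketch of a facility fixed-effects specification along these lines: facility indicators absorb time-invariant facility characteristics, year indicators absorb common trends, and the policy dummies capture the change in administrative nurse staffing. The file and column names, and the two policy indicators, are assumptions.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Assumed facility-year panel with hypothetical columns admin_nurse_hprd
      # (administrative nurse staffing), medicaid_casemix and medicare_pps (0/1
      # policy indicators), facility_id and year.
      panel = pd.read_csv("oscar_panel_1997_2004.csv")

      # Within (fixed-effects) estimator via facility and year dummies, with
      # standard errors clustered on facility.
      fit = smf.ols("admin_nurse_hprd ~ medicaid_casemix + medicare_pps"
                    " + C(facility_id) + C(year)", data=panel).fit(
          cov_type="cluster", cov_kwds={"groups": panel["facility_id"]})
      print(fit.params[["medicaid_casemix", "medicare_pps"]])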

  9. Mixed effects versus fixed effects modelling of binary data with inter-subject variability.

    PubMed

    Murphy, Valda; Dunne, Adrian

    2005-04-01

    The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
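
    A small numerical sketch of the issue discussed above: for one subject with two binary observations and a random intercept, the marginal likelihood is a one-dimensional integral, so the Laplace approximation can be compared directly with Gauss-Hermite quadrature. The parameter values are arbitrary.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.special import expit
      from scipy.stats import norm

      # One subject, two binary observations, random intercept b ~ N(0, sigma^2).
      y = np.array([1.0, 0.0])          # observed responses
      eta = np.array([0.5, 0.5])        # fixed-effect linear predictor (arbitrary)
      sigma = 2.0                       # random-effect standard deviation

      def neg_log_integrand(b):
          p = expit(eta + b)
          loglik = np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
          return -(loglik + norm.logpdf(b, scale=sigma))

      # Laplace approximation: expand -log(integrand) to second order at its mode.
      opt = minimize_scalar(neg_log_integrand)
      h = 1e-4
      curv = (neg_log_integrand(opt.x + h) - 2.0 * opt.fun
              + neg_log_integrand(opt.x - h)) / h**2
      laplace = np.exp(-opt.fun) * np.sqrt(2.0 * np.pi / curv)

      # Gauss-Hermite quadrature of the same integral (15 nodes).
      nodes, weights = np.polynomial.hermite.hermgauss(15)
      b_nodes = np.sqrt(2.0) * sigma * nodes
      g = np.array([np.exp(-neg_log_integrand(b)) / norm.pdf(b, scale=sigma)
                    for b in b_nodes])
      quad = np.sum(weights * g) / np.sqrt(np.pi)

      print(f"Laplace: {laplace:.6f}   Gauss-Hermite: {quad:.6f}")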

  10. Measuring trends of outpatient antibiotic use in Europe: jointly modelling longitudinal data in defined daily doses and packages.

    PubMed

    Bruyndonckx, Robin; Hens, Niel; Aerts, Marc; Goossens, Herman; Molenberghs, Geert; Coenen, Samuel

    2014-07-01

    To complement analyses of the linear trend and seasonal fluctuation of European outpatient antibiotic use expressed in defined daily doses (DDD) by analyses of data in packages, to assess the agreement between both measures and to study changes in the number of DDD per package over time. Data on outpatient antibiotic use, aggregated at the level of the active substance (WHO version 2011), were collected from 2000 to 2007 for 31 countries and expressed in DDD and packages per 1000 inhabitants per day (DID and PID, respectively). Data expressed in DID and PID were analysed separately using non-linear mixed models, while the agreement between these measurements was analysed through a joint non-linear mixed model. The change in DDD per package over time was studied with a linear mixed model. Total outpatient antibiotic and penicillin use in Europe and their seasonal fluctuation significantly increased in DID, but not in PID. The use of combinations of penicillins significantly increased in DID and in PID. Broad-spectrum penicillin use did not increase significantly in DID and decreased significantly in PID. For all but one subgroup, country-specific deviations moved in the same direction whether measured in DID or PID, although the correlations were not perfect. The DDD per package increased significantly over time for all but one subgroup. Outpatient antibiotic use in Europe shows contrasting trends, depending on whether DID or PID is used as the measure. The increase in the DDD per package corroborates the recommendation to adopt PID to monitor outpatient antibiotic use in Europe. © The Author 2014. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
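
    The final modelling step, a linear mixed model for the change in DDD per package over time, can be sketched in R with lme4; the country-level data simulated below are hypothetical stand-ins for the surveillance data used in the study.

      # Linear mixed model for the DDD-per-package trend, with country-specific
      # intercepts and slopes (hypothetical data: 31 countries, 2000-2007).
      library(lme4)
      set.seed(2)
      abx <- expand.grid(country = factor(1:31), year = 2000:2007)
      abx$time <- abx$year - 2000
      ri <- rnorm(31, sd = 2)                      # country-level intercept shifts
      rs <- rnorm(31, sd = 0.1)                    # country-level slope shifts
      abx$ddd_per_pack <- 10 + (0.3 + rs[abx$country]) * abx$time +
                          ri[abx$country] + rnorm(nrow(abx), sd = 0.5)
      m <- lmer(ddd_per_pack ~ time + (time | country), data = abx)
      fixef(m)["time"]            # average yearly change in DDD per package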

  11. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has become a major public health problem. Evaluating risk factors is one way to help protect people from infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. To evaluate the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
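
    One possible implementation of a zero-inflated Poisson mixed model in R uses glmmTMB with a patient-level random intercept; this is not the compound Poisson random-effects formulation used by the authors, and the data and variable names below are hypothetical.

      # Zero-inflated Poisson mixed model with a patient-level random intercept.
      library(glmmTMB)
      set.seed(3)
      d <- expand.grid(id = factor(1:100), visit = 1:4)
      d$age      <- rep(rnorm(100, 45, 10), times = 4)
      d$genotype <- factor(rep(sample(c("1", "2", "3"), 100, replace = TRUE), times = 4))
      u  <- rnorm(100, sd = 0.6)                       # patient random effects
      mu <- exp(0.8 + 0.02 * (d$age - 45) + u[d$id])
      d$count <- ifelse(runif(nrow(d)) < 0.3, 0, rpois(nrow(d), mu))  # excess zeros

      zip <- glmmTMB(count ~ age + genotype + (1 | id),
                     ziformula = ~ 1,                  # constant zero-inflation probability
                     family = poisson, data = d)
      summary(zip)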

  12. Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures.

    PubMed

    Palva, J Matias; Wang, Sheng H; Palva, Satu; Zhigalov, Alexander; Monto, Simo; Brookes, Matthew J; Schoffelen, Jan-Mathijs; Jerbi, Karim

    2018-06-01

    When combined with source modeling, magneto- (MEG) and electroencephalography (EEG) can be used to study long-range interactions among cortical processes non-invasively. Estimation of such inter-areal connectivity is nevertheless hindered by instantaneous field spread and volume conduction, which artificially introduce linear correlations and impair source separability in cortical current estimates. To overcome the inflating effects of linear source mixing inherent to standard interaction measures, alternative phase- and amplitude-correlation based connectivity measures, such as imaginary coherence and orthogonalized amplitude correlation have been proposed. Being by definition insensitive to zero-lag correlations, these techniques have become increasingly popular in the identification of correlations that cannot be attributed to field spread or volume conduction. We show here, however, that while these measures are immune to the direct effects of linear mixing, they may still reveal large numbers of spurious false positive connections through field spread in the vicinity of true interactions. This fundamental problem affects both region-of-interest-based analyses and all-to-all connectome mappings. Most importantly, beyond defining and illustrating the problem of spurious, or "ghost" interactions, we provide a rigorous quantification of this effect through extensive simulations. Additionally, we further show that signal mixing also significantly limits the separability of neuronal phase and amplitude correlations. We conclude that spurious correlations must be carefully considered in connectivity analyses in MEG/EEG source space even when using measures that are immune to zero-lag correlations. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  13. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
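
    The statistic can be computed from a fitted model as the stated 1-1 function of the F test for the fixed effects, R2 = v1*F/(v1*F + v2); the sketch below uses lmerTest's Satterthwaite denominator degrees of freedom as the approximate F test, which is an assumption rather than the paper's exact construction.

      # R2 for the fixed effects of a linear mixed model, computed as
      # R2 = v1*F / (v1*F + v2), where v1 and v2 are the numerator and
      # denominator degrees of freedom of the F test for the fixed effects.
      library(lmerTest)                     # lmer() with denominator df for F tests
      m <- lmer(Reaction ~ Days + (Days | Subject), data = lme4::sleepstudy)
      a <- anova(m)                         # F test for the fixed effect 'Days'
      r2_beta <- (a$NumDF * a$`F value`) / (a$NumDF * a$`F value` + a$DenDF)
      r2_beta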

  14. Exploring business process modelling paradigms and design-time to run-time transitions

    NASA Astrophysics Data System (ADS)

    Caron, Filip; Vanthienen, Jan

    2016-09-01

    The business process management literature describes a multitude of approaches (e.g. imperative, declarative or event-driven) that each result in a different mix of process flexibility, compliance, effectiveness and efficiency. Although the use of a single approach over the process lifecycle is often assumed, transitions between approaches at different phases in the process lifecycle may also be considered. This article explores several business process strategies by analysing the approaches at different phases in the process lifecycle as well as the various transitions.

  15. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  16. Application of mixing-controlled combustion models to gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung Lee

    1990-01-01

    Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.

  17. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    PubMed

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.

  18. Composition and structure of Pinus koraiensis mixed forest respond to spatial climatic changes.

    PubMed

    Zhang, Jingli; Zhou, Yong; Zhou, Guangsheng; Xiao, Chunwang

    2014-01-01

    Although some studies have indicated that climate changes can affect Pinus koraiensis mixed forest, the responses of composition and structure of Pinus koraiensis mixed forests to climatic changes are unknown and the key climatic factors controlling the composition and structure of Pinus koraiensis mixed forest are uncertain. A field survey was conducted in the natural Pinus koraiensis mixed forests along a latitudinal gradient and an elevational gradient in Northeast China. In order to build the mathematical models for simulating the relationships of compositional and structural attributes of the Pinus koraiensis mixed forest with climatic and non-climatic factors, stepwise linear regression analyses were performed, incorporating 14 dependent variables and the linear and quadratic components of 9 factors. All the selected new models were computed under the +2°C and +10% precipitation and +4°C and +10% precipitation scenarios. The Max Temperature of Warmest Month, Mean Temperature of Warmest Quarter and Precipitation of Wettest Month were observed to be key climatic factors controlling the stand densities and total basal areas of Pinus koraiensis mixed forest. Increased summer temperature and precipitation strongly enhanced the stand densities and total basal areas of broadleaf trees but had little effect on Pinus koraiensis under the +2°C and +10% precipitation scenario and +4°C and +10% precipitation scenario. These results show that the Max Temperature of Warmest Month, Mean Temperature of Warmest Quarter and Precipitation of Wettest Month are key climatic factors which shape the composition and structure of Pinus koraiensis mixed forest. Although the Pinus koraiensis would persist, the current forests dominated by Pinus koraiensis in the region would all shift and become broadleaf-dominated forests due to the dramatic increase of broadleaf trees under the future global warming and increased precipitation.

  19. Different Trophic Tracers Give Different Answers for the Same Bugs - Comparing a Stable Isotope and Fatty Acid Based Analysis of Resource Utilization in a Marine Isopod

    NASA Astrophysics Data System (ADS)

    Galloway, A. W. E.; Eisenlord, M. E.; Brett, M. T.

    2016-02-01

    Stable isotope (SI) based mixing models are the most common approach used to infer resource pathways in consumers. However, SI based analyses are often underdetermined, and consumer SI fractionation is usually unknown. The use of fatty acid (FA) tracers in mixing models offers an alternative approach that can resolve the underdetermined constraint. A limitation to both methods is the considerable uncertainty about consumer 'trophic modification' (TM) of dietary FA or SI, which occurs as consumers transform dietary resources into tissues. We tested the utility of SI and FA approaches for inferring the diets of the marine benthic isopod (Idotea wosnesenskii) fed various marine macroalgae in controlled feeding trials. Our analyses quantified how the accuracy and precision of Bayesian mixing models were influenced by choice of algorithm (SIAR vs MixSIR), fractionation (assumed or known), and whether the model was under or overdetermined (seven sources and two vs 26 tracers) for cases where isopods were fed an exclusive diet of one of the seven different macroalgae. Using the conventional approach (i.e., 2 SI with assumed TM) resulted in average model outputs, i.e., the contribution from the exclusive resource = 0.20 ± 0.23 (0.00-0.79), mean ± SD (95% credible interval), that only differed slightly from the prior assumption. Using the FA based approach with known TM greatly improved model performance, i.e., the contribution from the exclusive resource = 0.91 ± 0.10 (0.58-0.99). The choice of algorithm only made a difference when fractionation was known and the model was overdetermined (FA approach). In this case SIAR and MixSIR had outputs of 0.86 ± 0.11 (0.48-0.96) and 0.96 ± 0.05 (0.79-1.00), respectively. This analysis shows that the choice of dietary tracers and the assumption of consumer trophic modification greatly influence the performance of mixing model dietary reconstructions, and ultimately our understanding of what resources actually support aquatic consumers.

  20. Accounting for dropout bias using mixed-effects models.

    PubMed

    Mallinckrodt, C H; Clark, W S; David, S R

    2001-01-01

    Treatment effects are often evaluated by comparing change over time in outcome measures. However, valid analyses of longitudinal data can be problematic when subjects discontinue (dropout) prior to completing the study. This study assessed the merits of likelihood-based repeated measures analyses (MMRM) compared with fixed-effects analysis of variance where missing values were imputed using the last observation carried forward approach (LOCF) in accounting for dropout bias. Comparisons were made in simulated data and in data from a randomized clinical trial. Subject dropout was introduced in the simulated data to generate ignorable and nonignorable missingness. Estimates of treatment group differences in mean change from baseline to endpoint from MMRM were, on average, markedly closer to the true value than estimates from LOCF in every scenario simulated. Standard errors and confidence intervals from MMRM accurately reflected the uncertainty of the estimates, whereas standard errors and confidence intervals from LOCF underestimated uncertainty.
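
    The MMRM approach can be sketched in R with gls() from nlme, using an unstructured covariance matrix across visits; this is only an illustration on hypothetical simulated data (all variable names invented here), not the authors' original implementation.

      # MMRM-style analysis: treatment-by-visit means with an unstructured
      # covariance over visits (hypothetical two-arm trial with dropout).
      library(nlme)
      set.seed(4)
      dat <- expand.grid(subject = factor(1:60), visit = factor(1:3))
      dat$trt <- factor(rep(rep(c("A", "B"), each = 30), times = 3))
      u <- rnorm(60, sd = 2)
      dat$change <- 1 + 0.5 * (dat$trt == "B") * as.integer(dat$visit) +
                    u[dat$subject] + rnorm(nrow(dat), sd = 1)
      dat$change[dat$subject %in% 1:10 & dat$visit == "3"] <- NA   # dropout
      dat$visnum <- as.integer(dat$visit)            # integer visit index

      mmrm <- gls(change ~ trt * visit, data = dat, na.action = na.omit,
                  correlation = corSymm(form = ~ visnum | subject),
                  weights = varIdent(form = ~ 1 | visit))  # unstructured covariance
      summary(mmrm)$tTable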

  1. Clustering of longitudinal data by using an extended baseline: A new method for treatment efficacy clustering in longitudinal data.

    PubMed

    Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine

    2018-01-01

    Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify the treatment responders and the non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurements are the main issues with the current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline. The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the clustering of longitudinal data by using an extended baseline method with the latent-class mixed model. The clustering of longitudinal data by using an extended baseline method with the two model-based algorithms was the most robust option. The clustering of longitudinal data by using an extended baseline method with all the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patients slope variability was high. Two real data sets on neurodegenerative disease and on obesity illustrate the clustering of longitudinal data by using an extended baseline method and show how clustering may help to identify the marker(s) of the treatment response. The application of the clustering of longitudinal data by using an extended baseline method in exploratory analysis as the first stage before setting up stratified designs can provide a better estimation of treatment effect in future clinical trials.
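
    A stripped-down sketch of the two-step idea in R, with a simple random-slope model standing in for the piecewise extended-baseline model and hypothetical data, is shown below; a model-based clustering algorithm could be substituted for either step-2 method.

      # Step 1: linear mixed model with random slopes; step 2: cluster the
      # predicted random effects with two non-parametric algorithms.
      library(lme4)
      set.seed(5)
      d <- expand.grid(id = factor(1:80), time = 0:4)
      slope <- c(rnorm(40, -1, 0.3), rnorm(40, 0, 0.3))   # responders vs non-responders
      d$y <- 10 + slope[d$id] * d$time + rnorm(nrow(d), sd = 1)

      m  <- lmer(y ~ time + (time | id), data = d)
      re <- ranef(m)$id                           # predicted random intercepts/slopes

      km <- kmeans(scale(re), centers = 2)        # partitioning algorithm
      hc <- cutree(hclust(dist(scale(re))), k = 2)  # agglomerative hierarchical clustering
      table(kmeans = km$cluster, hclust = hc)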

  2. Conditional random slope: A new approach for estimating individual child growth velocity in epidemiological research.

    PubMed

    Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E

    2017-09-10

    Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear-mixed effects model that incorporates a baseline-time interaction term that can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91), but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for CRS and classical conditional models but differed substantially between conditional versus unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc.
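
    A rough sketch of the idea, a linear mixed model with a baseline-by-age interaction whose subject-level slopes act as velocity estimates, can be written with lme4; the simulated data and variable names are hypothetical, and the model is simplified relative to the published CRS specification.

      # Conditional-random-slope-style model: baseline x age interaction plus
      # random slopes for age (hypothetical growth data, 150 children).
      library(lme4)
      set.seed(6)
      d <- expand.grid(id = factor(1:150), age = c(0, 3, 6, 9, 12))
      baseline <- rnorm(150)                                 # z-score at birth
      vel <- -0.05 * baseline + rnorm(150, sd = 0.02)        # RTM: lower baseline, faster growth
      d$baseline <- baseline[d$id]
      d$z <- baseline[d$id] + vel[d$id] * d$age + rnorm(nrow(d), sd = 0.1)

      crs <- lmer(z ~ age + baseline:age + (age | id), data = d)
      velocity <- coef(crs)$id[, "age"]       # subject-level slope component
      head(velocity)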

  3. Who Is Overeducated and Why? Probit and Dynamic Mixed Multinomial Logit Analyses of Vertical Mismatch in East and West Germany

    ERIC Educational Resources Information Center

    Boll, Christina; Leppin, Julian Sebastian; Schömann, Klaus

    2016-01-01

    Overeducation potentially signals a productivity loss. With Socio-Economic Panel data from 1984 to 2011 we identify drivers of educational mismatch for East and West medium and highly educated Germans. Addressing measurement error, state dependence and unobserved heterogeneity, we run dynamic mixed multinomial logit models for three different…

  4. Women and Men Together in Recruit Training.

    PubMed

    Orme, Geoffrey J; Kehoe, E James

    2018-05-01

    Although men and women recruits to the Australian Army have trained in mixed-gender platoons since 1995, restrictions on women joining the combat arms were only removed in 2016. As part of a longitudinal study starting with recruit training, this article examined recruit records collected before 2016 with the aims of delineating (1) the relative performance of women versus men in mixed-gender platoons and (2) the relative performance of men in mixed-gender platoons versus all-male platoons. De-identified instructor ratings for 630 females and 4,505 males who completed training between 2011 and 2015 were obtained. Recruits were distributed across 128 platoons (averaging 41.6 members, SD = 8.3), of which 75% contained females, in proportions from 5% to 45%. These analyses were conducted under defense ethics approval DPR-LREP 069-15. Factor analyses revealed that instructor ratings generally loaded onto a single factor, accounting for 77.2% of the variance. Consequently, a composite recruit performance score (range 1-5) was computed for 16 of 19 competencies. Analyses of the scores revealed that the distributions of the scores for females and males overlapped considerably. Observed effects were negligible to small in size. The distributions were all centered between 3.0 and 3.5. In mixed-gender platoons, 51% of the females and 52% of the males fell in this band, and 44% of recruits in all-male platoons had scores in this band. The lower three bands (1.0-3.0) contained a slightly greater proportion of females (18%) than males in either mixed-gender platoons (12%) or all-male platoons (12%). Conversely, the upper three bands (3.5-5.0) contained a slightly smaller percentage of females (31%) than males in either mixed-gender platoons (36%) or all-male platoons (44%). Although scores for females were reliably lower than those of males in mixed-gender platoons, χ2(4) = 16.01, p < 0.01, the effect size (V = 0.07) did not reach the criterion for even a small effect (0.10). For male recruits, those in mixed-gender platoons had scores that were reliably lower than in all-male platoons, χ2(4) = 48.38, p < 0.001; its effect size (V = 0.11) just exceeded the criterion for a small effect (0.10). Further analyses revealed that male scores had a near-zero correlation (r = -0.033) with the proportion of females in platoons (0-45%). This large-scale secondary analysis of instructor ratings of female and male recruits provides a platform for monitoring the integration of women into the combat arms. The analyses revealed nearly complete overlap in the performance of female versus male recruits. The detected gender-related differences were negligible to small in size. These small differences must be viewed with considerable caution. They may be artifacts of rater bias or other uncontrolled features of the rating system, which was designed for reporting individual recruit performance rather than aggregate analyses. Even with these limitations, this baseline snapshot of recruit performance suggests that, at recruit training, women and men are already working well together, which bodes well for their subsequent integration into the combat arms.

  5. Privatization and environmental pollution in an international mixed Cournot model

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.

    2016-06-01

    In this paper, we consider a competition between a domestic public firm and a foreign private firm, supposing that the production process generates environmental pollution. Introducing the residents' environmental preference into the public firm's objective function, we analyse its economic impacts. We also analyse the economic impacts of privatization.

  6. General practice performance in referral for suspected cancer: influence of number of cases and case-mix on publicly reported data.

    PubMed

    Murchie, P; Chowdhury, A; Smith, S; Campbell, N C; Lee, A J; Linden, D; Burton, C D

    2015-05-26

    Publicly available data show variation in GPs' use of urgent suspected cancer (USC) referral pathways. We investigated whether this could be due to small numbers of cancer cases and random case-mix, rather than due to true variation in performance. We analysed individual GP practice USC referral detection rates (proportion of the practice's cancer cases that are detected via USC) and conversion rates (proportion of the practice's USC referrals that prove to be cancer) in routinely collected data from GP practices in all of England (over 4 years) and northeast Scotland (over 7 years). We explored the effect of pooling data. We then modelled the effects of adding random case-mix to practice variation. Correlations between practice detection rate and conversion rate became less positive when data were aggregated over several years. Adding random case-mix to between-practice variation indicated that the median proportion of poorly performing practices correctly identified after 25 cancer cases were examined was 20% (IQR 17 to 24) and after 100 cases was 44% (IQR 40 to 47). Much apparent variation in GPs' use of suspected cancer referral pathways can be attributed to random case-mix. The methods currently used to assess the quality of GP-suspected cancer referral performance, and to compare individual practices, are misleading. These should no longer be used, and more appropriate and robust methods should be developed.
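
    The role of chance with small case numbers can be illustrated with a short simulation in R, in which every practice has the same true conversion rate; the rate and referral counts assumed below are illustrative only.

      # How much apparent between-practice variation in USC conversion rate can
      # arise from random case-mix alone, when true performance is identical.
      set.seed(7)
      true_rate  <- 0.08                                  # assumed true cancer yield per referral
      referrals  <- sample(20:120, 500, replace = TRUE)   # 500 hypothetical practices
      cancers    <- rbinom(500, size = referrals, prob = true_rate)
      conversion <- cancers / referrals
      quantile(conversion, c(0.05, 0.5, 0.95))   # wide spread despite identical performance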

  7. A Systematic Review of Cardiovascular Outcomes-Based Cost-Effectiveness Analyses of Lipid-Lowering Therapies.

    PubMed

    Wei, Ching-Yun; Quek, Ruben G W; Villa, Guillermo; Gandra, Shravanthi R; Forbes, Carol A; Ryder, Steve; Armstrong, Nigel; Deshpande, Sohan; Duffy, Steven; Kleijnen, Jos; Lindgren, Peter

    2017-03-01

    Previous reviews have evaluated economic analyses of lipid-lowering therapies using lipid levels as surrogate markers for cardiovascular disease. However, drug approval and health technology assessment agencies have stressed that surrogates should only be used in the absence of clinical endpoints. The aim of this systematic review was to identify and summarise the methodologies, weaknesses and strengths of economic models based on atherosclerotic cardiovascular disease event rates. Cost-effectiveness evaluations of lipid-lowering therapies using cardiovascular event rates in adults with hyperlipidaemia were sought in Medline, Embase, Medline In-Process, PubMed and NHS EED and conference proceedings. Search results were independently screened, extracted and quality checked by two reviewers. Searches until February 2016 retrieved 3443 records, from which 26 studies (29 publications) were selected. Twenty-two studies evaluated secondary prevention (four also assessed primary prevention), two considered only primary prevention and two included mixed primary and secondary prevention populations. Most studies (18) based treatment-effect estimates on single trials, although more recent evaluations deployed meta-analyses (5/10 over the last 10 years). Markov models (14 studies) were most commonly used and only one study employed discrete event simulation. Models varied particularly in terms of health states and treatment-effect duration. No studies used a systematic review to obtain utilities. Most studies took a healthcare perspective (21/26) and sourced resource use from key trials instead of local data. Overall, reporting quality was suboptimal. This review reveals methodological changes over time, but reporting weaknesses remain, particularly with respect to transparency of model reporting.

  8. MIXOR: a computer program for mixed-effects ordinal regression analysis.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-03-01

    MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
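
    MIXOR is a standalone program; an analogous mixed-effects ordinal (cumulative logit) regression can be fitted in R with clmm() from the ordinal package, shown here on that package's example data rather than on any dataset from the article.

      # Mixed-effects ordinal regression: bitterness ratings from 9 judges,
      # with a judge-level random intercept (the MIXOR-style clustered design).
      library(ordinal)
      data(wine)
      m <- clmm(rating ~ temp + contact + (1 | judge), data = wine, link = "logit")
      summary(m)     # fixed effects plus the judge random-intercept variance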

  9. No compelling positive association between ovarian hormones and wearing red clothing when using multinomial analyses.

    PubMed

    Blake, Khandis R; Dixson, Barnaby J W; O'Dean, Siobhan M; Denson, Thomas F

    2017-04-01

    Several studies report that wearing red clothing enhances women's attractiveness and signals sexual proceptivity to men. The associated hypothesis that women will choose to wear red clothing when fertility is highest, however, has received mixed support from empirical studies. One possible cause of these mixed findings may be methodological. The current study aimed to replicate recent findings suggesting a positive association between hormonal profiles associated with high fertility (high estradiol to progesterone ratios) and the likelihood of wearing red. We compared the effect of the estradiol to progesterone ratio on the probability of wearing: red versus non-red (binary logistic regression); red versus neutral, black, blue, green, orange, multi-color, and gray (multinomial logistic regression); and each of these same colors in separate binary models (e.g., green versus non-green). Red versus non-red analyses showed a positive trend between a high estradiol to progesterone ratio and wearing red, but the effect only arose for younger women and was not robust across samples. We found no compelling evidence for ovarian hormones increasing the probability of wearing red in the other analyses. However, we did find that the probability of wearing neutral was positively associated with the estradiol to progesterone ratio, though the effect did not reach conventional levels of statistical significance. Findings suggest that although ovarian hormones may affect younger women's preference for red clothing under some conditions, the effect is not robust when differentiating amongst other colors of clothing. In addition, the effect of ovarian hormones on clothing color preference may not be specific to the color red. Copyright © 2017 Elsevier Inc. All rights reserved.
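
    The multinomial analysis described above can be sketched in R with multinom() from the nnet package; the data frame and variable names below are hypothetical stand-ins, not the study's data.

      # Multinomial logistic regression of clothing colour on the
      # estradiol-to-progesterone ratio (hypothetical data).
      library(nnet)
      set.seed(8)
      n <- 300
      d <- data.frame(e2p = rlnorm(n),
                      age = runif(n, 18, 35),
                      colour = factor(sample(c("red", "black", "blue", "neutral"),
                                             n, replace = TRUE),
                                      levels = c("black", "red", "blue", "neutral")))
      m <- multinom(colour ~ e2p + age, data = d, trace = FALSE)
      summary(m)$coefficients    # log-odds of each colour vs the reference ("black")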

  10. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three data sets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
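
    A rough approximation of this model class (not MACAU's sampling-based algorithm) is a Poisson GLMM with one random effect for group-level non-independence and an observation-level random effect for over-dispersion; the data and variable names below are hypothetical.

      # Poisson mixed model with two random-effects terms: a grouping term for
      # sample non-independence and an observation-level term for over-dispersion.
      library(lme4)
      set.seed(9)
      n <- 120
      d <- data.frame(family = factor(rep(1:30, each = 4)),   # related individuals
                      x      = rnorm(n),                      # predictor of interest
                      depth  = round(runif(n, 5e5, 2e6)))     # sequencing depth
      u <- rnorm(30, sd = 0.4); e <- rnorm(n, sd = 0.3)
      d$count <- rpois(n, exp(log(d$depth) - 12 + 0.5 * d$x + u[d$family] + e))
      d$obs <- factor(seq_len(n))

      m <- glmer(count ~ x + (1 | family) + (1 | obs),
                 offset = log(depth), family = poisson, data = d)
      summary(m)$coefficients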

  11. Modulation of additive and interactive effects in lexical decision by trial history.

    PubMed

    Masson, Michael E J; Kliegl, Reinhold

    2013-05-01

    Additive and interactive effects of word frequency, stimulus quality, and semantic priming have been used to test theoretical claims about the cognitive architecture of word-reading processes. Additive effects among these factors have been taken as evidence for discrete-stage models of word reading. We present evidence from linear mixed-model analyses applied to 2 lexical decision experiments indicating that apparent additive effects can be the product of aggregating over- and underadditive interaction effects that are modulated by recent trial history, particularly the lexical status and stimulus quality of the previous trial's target. Even a simple practice effect expressed as improved response speed across trials was powerfully modulated by the nature of the previous target item. These results suggest that additivity and interaction between factors may reflect trial-to-trial variation in stimulus representations and decision processes rather than fundamental differences in processing architecture.

  12. Observing and Simulating Diapycnal Mixing in the Canadian Arctic Archipelago

    NASA Astrophysics Data System (ADS)

    Hughes, K.; Klymak, J. M.; Hu, X.; Myers, P. G.; Williams, W. J.; Melling, H.

    2016-12-01

    High-spatial-resolution observations in the central Canadian Arctic Archipelago are analysed in conjunction with process-oriented modelling to estimate the flow pathways among the constricted waterways, understand the nature of the hydraulic control(s), and assess the influence of smaller scale (metres to kilometres) phenomena such as internal waves and topographically induced eddies. The observations repeatedly display isopycnal displacements of 50 m as dense water plunges over a sill. Depth-averaged turbulent dissipation rates near the sill estimated from these observations are typically 10^-6 to 10^-5 W kg^-1, a range that is three orders of magnitude larger than that for the open ocean. These and other estimates are compared against a 1/12° basin-scale model from which we estimate diapycnal mixing rates using a volume-integrated advection-diffusion equation. Much of the mixing in this simulation is concentrated near constrictions within Barrow Strait and Queens Channel, the latter being our observational site. This suggests the model is capable of capturing topographically induced mixing. However, such mixing is expected to be enhanced in the presence of tides, a process not included in our basin-scale simulation or other similar models. Quantifying this enhancement is another objective of our process-oriented modelling.

  13. Decision-case mix model for analyzing variation in cesarean rates.

    PubMed

    Eldenburg, L; Waller, W S

    2001-01-01

    This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.

  14. The Causes and Evolutionary Consequences of Mixed Singing in Two Hybridizing Songbird Species (Luscinia spp.)

    PubMed Central

    Vokurková, Jana; Petrusková, Tereza; Reifová, Radka; Kozman, Alexandra; Mořkovský, Libor; Kipper, Silke; Weiss, Michael; Reif, Jiří; Dolata, Paweł T.; Petrusek, Adam

    2013-01-01

    Bird song plays an important role in the establishment and maintenance of prezygotic reproductive barriers. When two closely related species come into secondary contact, song convergence caused by acquisition of heterospecific songs into the birds’ repertoires is often observed. The proximate mechanisms responsible for such mixed singing, and its effect on the speciation process, are poorly understood. We used a combination of genetic and bioacoustic analyses to test whether mixed singing observed in the secondary contact zone of two passerine birds, the Thrush Nightingale (Luscinia luscinia) and the Common Nightingale (L. megarhynchos), is caused by introgressive hybridization. We analysed song recordings of both species from allopatric and sympatric populations together with genotype data from one mitochondrial and seven nuclear loci. Semi-automated comparisons of our recordings with an extensive catalogue of Common Nightingale song types confirmed that most of the analysed sympatric Thrush Nightingale males were ‘mixed singers’ that use heterospecific song types in their repertoires. None of these ‘mixed singers’ possessed any alleles introgressed from the Common Nightingale, suggesting that they were not backcross hybrids. We also analysed songs of five individuals with intermediate phenotype, which were identified as F1 hybrids between the Thrush Nightingale female and the Common Nightingale male by genetic analysis. Songs of three of these hybrids corresponded to the paternal species (Common Nightingale) but the remaining two sung a mixed song. Our results suggest that although hybridization might increase the tendency for learning songs from both parental species, interspecific cultural transmission is the major proximate mechanism explaining the occurrence of mixed singers among the sympatric Thrush Nightingales. We also provide evidence that mixed singing does not substantially increase the rate of interspecific hybridization and discuss the possible adaptive value of this phenomenon in nightingales. PMID:23577089

  15. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…

  16. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  17. A necessarily complex model to explain the biogeography of the amphibians and reptiles of Madagascar.

    PubMed

    Brown, Jason L; Cameron, Alison; Yoder, Anne D; Vences, Miguel

    2014-10-09

    Pattern and process are inextricably linked in biogeographic analyses; though we can observe pattern, we must infer process. Inferences of process are often based on ad hoc comparisons using a single spatial predictor. Here, we present an alternative approach that uses mixed-spatial models to measure the predictive potential of combinations of hypotheses. Biodiversity patterns are estimated from 8,362 occurrence records from 745 species of Malagasy amphibians and reptiles. By incorporating 18 spatially explicit predictions of 12 major biogeographic hypotheses, we show that mixed models greatly improve our ability to explain the observed biodiversity patterns. We conclude that patterns are influenced by a combination of diversification processes rather than by a single predominant mechanism. A 'one-size-fits-all' model does not exist. By developing a novel method for examining and synthesizing spatial parameters such as species richness, endemism and community similarity, we demonstrate the potential of these analyses for understanding the diversification history of Madagascar's biota.

  18. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  19. Generalized Functional Linear Models for Gene-based Case-Control Association Studies

    PubMed Central

    Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao

    2014-01-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  20. Mixed models and reduced/selective integration displacement models for nonlinear analysis of curved beams

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1981-01-01

    Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.

  1. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    NASA Technical Reports Server (NTRS)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.

  2. Predicting the multi-domain progression of Parkinson's disease: a Bayesian multivariate generalized linear mixed-effect model.

    PubMed

    Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei

    2017-09-25

    It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and also the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20) compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, the individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate general mixed models hold promise to predict clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722, part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection to the Data Management Repository (DMR), which is publicly available (https://pdbp.ninds.nih.gov/data-management).

  3. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.

    PubMed

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete), as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
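
    The pooling step described here is a standard random-effects meta-analysis; a minimal sketch with the metafor package, using hypothetical study-level summary statistics rather than the review's extracted data, is:

      # Random-effects meta-analysis of standardized mean differences.
      library(metafor)
      set.seed(10)
      k  <- 20
      es <- data.frame(m1i = rnorm(k, 3.2, 0.3), sd1i = runif(k, 0.8, 1.2), n1i = 40,
                       m2i = rnorm(k, 2.5, 0.3), sd2i = runif(k, 0.8, 1.2), n2i = 40)
      dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
                    m2i = m2i, sd2i = sd2i, n2i = n2i, data = es)
      res <- rma(yi, vi, data = dat, method = "REML")   # random-effects model
      res        # pooled effect size with heterogeneity estimates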

  4. The disconnected values model improves mental well-being and fitness in an employee wellness program.

    PubMed

    Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo

    2010-01-01

    This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.

  5. Effect of electrode positions on the mixing characteristics of an electroosmotic micromixer.

    PubMed

    Seo, H S; Kim, Y J

    2014-08-01

    In this study, an electrokinetic microchannel with a ring-type mixing chamber is introduced for fast mixing. The modeled micromixer that is used for the study of the electroosmotic effect takes two fluids from different inlets and combines them in a ring-type mixing chamber and, then, they are mixed by the electric fields at the electrodes. In order to compare the mixing performance in the modeled micromixer, we numerically investigated the flow characteristics with different positions of the electrodes in the mixing chamber using the commercial code, COMSOL. In addition, we discussed the concentration distributions of the dissolved substances in the flow fields and compared the mixing efficiency in the modeled micromixer with different electrode positions and operating conditions, such as the frequencies and electric potentials at the electrodes.

  6. One-dimensional modelling of upper ocean mixing by turbulence due to wave orbital motion

    NASA Astrophysics Data System (ADS)

    Ghantous, M.; Babanin, A. V.

    2014-02-01

    Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into ocean mixing models, and real gains have been made in terms of increased fidelity to observational data. However, our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing parameterisations need refinement and propose an alternative one. We use two of the parameterisations to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.
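
    A heavily simplified sketch of the kind of one-dimensional experiment described: a stable temperature profile is diffused with a background diffusivity plus a depth-decaying "wave-induced" contribution. The profile, diffusivity magnitudes and decay scale are illustrative assumptions, not the authors' parameterisation.

        import numpy as np

        # grid and an initially stable profile: warm surface, colder water below
        nz, dz, dt, nsteps = 100, 1.0, 30.0, 5000            # 100 m column, 30 s time step
        z = np.arange(nz) * dz
        temp = 20.0 - 8.0 * (z / z[-1])                      # degrees C, linearly stratified

        # background diffusivity plus an assumed wave-induced term decaying with depth
        diff = 1e-4 + 1e-2 * np.exp(-z / 10.0)               # m^2/s

        # explicit conservative diffusion: dT/dt = d/dz( K dT/dz ), insulated boundaries
        assert dt <= dz ** 2 / (2.0 * diff.max()), "explicit scheme stability condition"
        for _ in range(nsteps):
            flux = diff[:-1] * np.diff(temp) / dz            # flux at cell interfaces
            dT = np.zeros_like(temp)
            dT[1:-1] = (flux[1:] - flux[:-1]) / dz
            dT[0] = flux[0] / dz
            dT[-1] = -flux[-1] / dz
            temp += dt * dT

        print("surface cooling after ~42 h (deg C):", round(20.0 - temp[0], 2))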

  7. Effect of feeding on the pharmacokinetics of oral minocycline in healthy research dogs.

    PubMed

    Hnot, Melanie L; Cole, Lynette K; Lorch, Gwendolen; Rajala-Schultz, Paivi J; Papich, Mark G

    2015-12-01

    The effect of food on minocycline oral absorption in dogs is unknown. The objective was to determine the pharmacokinetics of minocycline after administration of a single oral dose in fed and fasted dogs. Ten research hounds were administered oral minocycline (approximately 5 mg/kg) with and without food, in a crossover study, with a one-week wash-out between treatments. Blood samples were collected immediately prior to minocycline administration and over 24 h. Minocycline plasma drug concentrations were measured using high-performance liquid chromatography using ultraviolet detection and were analysed with compartmental modelling to determine primary pharmacokinetic parameters. Each dog was analysed independently, followed by calculation of means and variation of the dogs. The Wilcoxon signed-rank test [analysing secondary pharmacokinetic parameters - peak concentration (CMAX ), area under the concentration versus time curve (AUC)] was used to compare the two groups. A population pharmacokinetic modelling approach was performed using nonlinear mixed effects modelling of primary parameters for the population as fixed effects and the difference between subjects as a random effect. Covariate analysis was used to identify the source of variability in the population. No significant difference was found between treatments for AUC (P = 0.0645), although AUC was higher in fasted dogs. A significant difference was found for CMAX (P = 0.0059), with fasted dogs attaining a higher CMAX . The covariate of fed versus fasted accounted for a significant variation in the pharmacokinetics. Because feeding was a significant source of variation for the population's primary pharmacokinetic parameters and fasted dogs had higher minocycline concentrations, we recommend administering minocycline without food. © 2015 ESVD and ACVD.
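
    For readers unfamiliar with the secondary parameters compared here, the sketch below derives CMAX and AUC from a standard one-compartment, first-order absorption curve and runs a Wilcoxon signed-rank comparison of fed versus fasted values; all parameter values and paired data are invented, and the authors' compartmental and nonlinear mixed-effects fits are not reproduced.

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.stats import wilcoxon

        def one_compartment_oral(t, dose, ka, ke, v, f=1.0):
            """Plasma concentration for first-order absorption and elimination (ka != ke)."""
            return (f * dose * ka) / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        t = np.linspace(0.0, 24.0, 2000)                                   # hours
        conc = one_compartment_oral(t, dose=5.0, ka=1.2, ke=0.2, v=1.5)    # per-kg dose and volume
        print("CMAX ~", round(conc.max(), 2), " AUC(0-24h) ~", round(trapezoid(conc, t), 2))

        # paired comparison of per-dog CMAX between fasted and fed occasions (toy values)
        cmax_fasted = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.4, 3.2, 2.9]
        cmax_fed = [2.4, 2.2, 2.9, 2.3, 2.8, 2.5, 2.1, 2.7, 2.6, 2.2]
        print(wilcoxon(cmax_fasted, cmax_fed))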

  8. Analyses of turbulent flow fields and aerosol dynamics of diesel engine exhaust inside two dilution sampling tunnels using the CTAG model.

    PubMed

    Wang, Yan Jason; Yang, Bo; Lipsky, Eric M; Robinson, Allen L; Zhang, K Max

    2013-01-15

    Experimental results from laboratory emission testing have indicated that particulate emission measurements are sensitive to the dilution process of exhaust using fabricated dilution systems. In this paper, we first categorize the dilution parameters into two groups: (1) aerodynamics (e.g., mixing types, mixing enhancers, dilution ratios, residence time); and (2) mixture properties (e.g., temperature, relative humidity, particle size distributions of both raw exhaust and dilution gas). Then we employ the Comprehensive Turbulent Aerosol Dynamics and Gas Chemistry (CTAG) model to investigate the effects of those parameters on a set of particulate emission measurements comparing two dilution tunnels, i.e., a T-mixing lab dilution tunnel and a portable field dilution tunnel with a type of coaxial mixing. The turbulent flow fields and aerosol dynamics of particles are simulated inside two dilution tunnels. Particle size distributions under various dilution conditions predicted by CTAG are evaluated against the experimental data. It is found that in the area adjacent to the injection of exhaust, turbulence plays a crucial role in mixing the exhaust with the dilution air, and the strength of nucleation dominates the level of particle number concentrations. Further downstream, nucleation terminates and the growth of particles by condensation and coagulation continues. Sensitivity studies reveal that a potential unifying parameter for aerodynamics, i.e., the dilution rate of exhaust, plays an important role in new particle formation. The T-mixing lab tunnel tends to favor the nucleation due to a larger dilution rate of the exhaust than the coaxial mixing field tunnel. Our study indicates that numerical simulation tools can be potentially utilized to develop strategies to reduce the uncertainties associated with dilution samplings of emission sources.

  9. A test-retest assessment of the effects of mental load on ratings of affect, arousal and perceived exertion during submaximal cycling.

    PubMed

    Vera, Jesús; Perales, José C; Jiménez, Raimundo; Cárdenas, David

    2018-04-24

    This study aimed to test the effects of mental (i.e. executive) load during a dual physical-mental task on ratings of perceived exertion (RPE), affective valence, and arousal. The protocol included two dual tasks with matched physical demands but different executive demands (2-back and oddball), carried out on different days. The procedure was run twice to assess the sensitivity and stability of RPE, valence and arousal across the two trials. Linear mixed-effects analyses showed less positive valence (-0.44 points on average on a 1-9 scale; R²β = 0.074 [90% CI, 0.052-0.098]), and heightened arousal (+0.13 points on average on a 1-9 scale; R²β = 0.006 [90% CI, 0.001-0.015]), for the high executive load condition, but showed no effect of mental load on RPE. Separate analyses for the two task trials yielded best-fitting models that were identical across trials for RPE and valence, but not for arousal. Model fitting was improved by assuming a 1-level autoregressive covariance structure for all analyses. In conclusion, executive load during a dual physical-mental task modulates the emotional response to effort, but not RPE. The autoregressive covariance suggests that people tend to anchor estimates on prior ones, which imposes certain limits on the scales' usability.
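
    A minimal sketch of testing a fixed effect of executive load in a random-intercept model, assuming statsmodels and simulated data; the R²β effect sizes and the autoregressive residual covariance used in the study are not reproduced here.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # simulated long-format data: repeated RPE ratings per subject under two load conditions
        rng = np.random.default_rng(1)
        n_subj, n_rep = 30, 10
        subject = np.repeat(np.arange(n_subj), 2 * n_rep)
        load = np.tile(np.repeat(["oddball", "2-back"], n_rep), n_subj)
        subj_eff = rng.normal(0.0, 1.0, n_subj)[subject]
        rpe = 12.0 + 0.0 * (load == "2-back") + subj_eff + rng.normal(0.0, 1.5, subject.size)
        df = pd.DataFrame({"rpe": rpe, "load": load, "subject": subject})

        # random-intercept models with and without the executive-load fixed effect (ML fits)
        full = smf.mixedlm("rpe ~ load", df, groups=df["subject"]).fit(reml=False)
        null = smf.mixedlm("rpe ~ 1", df, groups=df["subject"]).fit(reml=False)
        print(full.summary())
        print("likelihood-ratio statistic for the load effect (1 df):", round(2 * (full.llf - null.llf), 2))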

  10. Associations of Family and Peer Experiences with Masculinity Attitude Trajectories at the Individual and Group Level in Adolescent and Young Adult Males

    PubMed Central

    Marcell, Arik V.; Eftim, Sorina E.; Sonenstein, Freya L.; Pleck, Joseph H.

    2013-01-01

    Data were drawn from 845 males in the National Survey of Adolescent Males who were initially aged 15–17, and followed-up 2.5 and 4.5 years later, to their early twenties. Mixed-effects regression models (MRM) and semiparametric trajectory analyses (STA) modeled patterns of change in masculinity attitudes at the individual and group levels, guided by gender intensification theory and cognitive-developmental theory. Overall, men’s masculinity attitudes became significantly less traditional between middle adolescence and early adulthood. In MRM analyses using time-varying covariates, maintaining paternal coresidence and continuing to have first sex in uncommitted heterosexual relationships were significantly associated with masculinity attitudes remaining relatively traditional. The STA modeling identified three distinct patterns of change in masculinity attitudes. A traditional-liberalizing trajectory of masculinity attitudes was most prevalent, followed by traditional-stable and nontraditional-stable trajectories. Implications for gender intensification and cognitive-developmental approaches to masculinity attitudes are discussed. PMID:24187483

  11. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

    PubMed

    Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

    2017-10-01

    The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts as determined by the ISO method after only 3 hours of incubation. When the number of plaques detected after 3 hours was between 4 and 26 PFU, the linear fit was (1.48 × Counts3h + 1.97); for counts >26 PFU, the fit was (1.18 × Counts3h + 2.95). If the number of plaques detected was <4 PFU after 3 hours, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
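
    The two reported linear fits can be applied directly to a 3-hour plate count; the helper below simply encodes them (the function name is ours).

        def predict_final_pfu(counts_3h):
            """Predict the standard (overnight) PFU count from a 3-hour plate reading,
            using the two linear fits reported in the abstract."""
            if counts_3h < 4:
                return None                        # too few plaques: incubate the full (18 +/- 3) h
            if counts_3h <= 26:
                return 1.48 * counts_3h + 1.97     # fit for 4-26 PFU at 3 hours
            return 1.18 * counts_3h + 2.95         # fit for >26 PFU at 3 hours

        for c in (3, 10, 40):
            print(c, "PFU at 3 h ->", predict_final_pfu(c))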

  12. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using the simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
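
    A minimal sketch of option (c), a random-intercept linear mixed model for longitudinal blood pressure with a SNP and a covariate as fixed effects, assuming statsmodels and simulated data; the kinship-based variance-covariance structure and the GRAMMAR decorrelation step used in the article are not shown.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # simulated longitudinal diastolic blood pressure with a small additive SNP effect
        rng = np.random.default_rng(2)
        n_subj, n_visits = 200, 3
        subject = np.repeat(np.arange(n_subj), n_visits)
        visit = np.tile(np.arange(n_visits), n_subj)
        snp = rng.binomial(2, 0.3, n_subj)[subject]                 # minor-allele count (0/1/2)
        age = rng.normal(45.0, 10.0, n_subj)[subject] + 2.0 * visit
        dbp = 75.0 + 0.2 * age + 1.5 * snp + rng.normal(0.0, 2.0, n_subj)[subject] + rng.normal(0.0, 4.0, subject.size)
        df = pd.DataFrame({"dbp": dbp, "snp": snp, "age": age, "subject": subject})

        # random intercept per individual; the SNP and covariates enter as fixed effects
        print(smf.mixedlm("dbp ~ snp + age", df, groups=df["subject"]).fit().summary())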

  13. Effect of long-term antibiotic use on weight in adolescents with acne

    PubMed Central

    Contopoulos-Ioannidis, Despina G.; Ley, Catherine; Wang, Wei; Ma, Ting; Olson, Clifford; Shi, Xiaoli; Luft, Harold S.; Hastie, Trevor; Parsonnet, Julie

    2016-01-01

    Objectives Antibiotics increase weight in farm animals and may cause weight gain in humans. We used electronic health records from a large primary care organization to determine the effect of antibiotics on weight and BMI in healthy adolescents with acne. Methods We performed a retrospective cohort study of adolescents with acne prescribed ≥4 weeks of oral antibiotics with weight measurements within 18 months pre-antibiotics and 12 months post-antibiotics. We compared within-individual changes in weight-for-age Z-scores (WAZs) and BMI-for-age Z-scores (BMIZs). We used: (i) paired t-tests to analyse changes between the last pre-antibiotics versus the first post-antibiotic measurements; (ii) piecewise-constant-mixed models to capture changes between mean measurements pre- versus post-antibiotics; (iii) piecewise-linear-mixed models to capture changes in trajectory slopes pre- versus post-antibiotics; and (iv) χ2 tests to compare proportions of adolescents with ≥0.2 Z-scores WAZ or BMIZ increase or decrease. Results Our cohort included 1012 adolescents with WAZs; 542 also had BMIZs. WAZs decreased post-antibiotics in all analyses [change between last WAZ pre-antibiotics versus first WAZ post-antibiotics = −0.041 Z-scores (P < 0.001); change between mean WAZ pre- versus post-antibiotics = −0.050 Z-scores (P < 0.001); change in WAZ trajectory slopes pre- versus post-antibiotics = −0.025 Z-scores/6 months (P = 0.002)]. More adolescents had a WAZ decrease post-antibiotics ≥0.2 Z-scores than an increase (26% versus 18%; P < 0.001). Trends were similar, though not statistically significant, for BMIZ changes. Conclusions Contrary to original expectations, long-term antibiotic use in healthy adolescents with acne was not associated with weight gain. This finding, which was consistent across all analyses, does not support a weight-promoting effect of antibiotics in adolescents. PMID:26782773
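
    The piecewise-linear mixed model used here can be sketched by splitting time at the start of antibiotics so that the pre- and post-antibiotic slopes may differ, with a random intercept per adolescent; the simulated data and coefficient values below are purely illustrative.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # simulated WAZ trajectories; time (months) is centred at the start of antibiotics
        rng = np.random.default_rng(3)
        n, visits = 300, 6
        subject = np.repeat(np.arange(n), visits)
        time = np.tile(np.linspace(-18.0, 12.0, visits), n)
        post = np.clip(time, 0.0, None)            # extra slope term, active only after antibiotics start
        waz = 0.3 + 0.002 * time - 0.004 * post + rng.normal(0.0, 0.3, n)[subject] + rng.normal(0.0, 0.1, subject.size)
        df = pd.DataFrame({"waz": waz, "time": time, "post": post, "subject": subject})

        # 'time' is the pre-antibiotic slope; 'post' is the change in slope after antibiotics
        fit = smf.mixedlm("waz ~ time + post", df, groups=df["subject"]).fit()
        print(fit.params[["time", "post"]])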

  14. Mixing with applications to inertial-confinement-fusion implosions

    NASA Astrophysics Data System (ADS)

    Rana, V.; Lim, H.; Melvin, J.; Glimm, J.; Cheng, B.; Sharp, D. H.

    2017-01-01

    Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.

  15. Mixing with applications to inertial-confinement-fusion implosions.

    PubMed

    Rana, V; Lim, H; Melvin, J; Glimm, J; Cheng, B; Sharp, D H

    2017-01-01

    Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.

  16. Detailed study of precipitation of a poorly water soluble test compound using methodologies as in activity and solubility screening - mixing and automation effects.

    PubMed

    Gillespie, Cheska; Kennedy, Alan R; Edwards, Darren; Dowden, Lee; Daublain, Pierre; Halling, Peter

    2013-09-01

    Storage of pharmaceutical discovery compounds dissolved in dimethylsulfoxide (DMSO) is commonplace within industry. Often, the DMSO stock solution is added to an aqueous system (e.g. in bioassay or kinetic solubility testing)- since most test compounds are hydrophobic, precipitation could occur. Little is known about the factors affecting this precipitation process at the low (µM) concentrations used in screening analyses. Here, a poorly water soluble test compound (tolnaftate) was used to compare manual and automated pipetting, and explore the effect of mixing variables on precipitation. The amount of drug present in the supernatant after precipitation and centrifugation of the samples was quantified. An unusual result was obtained in three different laboratories: results of experiments performed initially were statistically significantly higher than those performed after a few days in the same lab. No significant differences were found between automated and manual pipetting, including in variability. Vortex mixing was found to give significantly lower supernatant amounts compared to milder mixing types. The mixing employed affects the particle growth of the precipitate. These findings are of relevance to discovery stage bioassay and kinetic solubility analyses.

  17. Optimization of extraction procedures for ecotoxicity analyses: Use of TNT contaminated soil as a model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunahara, G.I.; Renoux, A.Y.; Dodard, S.

    1995-12-31

    The environmental impact of energetic substances (TNT, RDX, GAP, NC) in soil is being examined using ecotoxicity bioassays. An extraction method was characterized to optimize bioassay assessment of TNT toxicity in different soil types. Using the Microtox™ (Photobacterium phosphoreum) assay and non-extracted samples, TNT was most acutely toxic (IC50 = 1-9 ppm) followed by RDX and GAP; NC did not show obvious toxicity (probably due to solubility limitations). TNT (in 0.25% DMSO) yielded an IC50 of 0.98 ± 0.10 (SD) ppm. The 96h-EC50 (Selenastrum capricornutum growth inhibition) of TNT (1.1 ppm) was higher than GAP and RDX; NC was not apparently toxic (probably due to solubility limitations). Soil samples (sand or a silt-sand mix) were spiked with either 2,000 or 20,000 mg TNT/kg soil, and were adjusted to 20% moisture. Samples were later mixed with acetonitrile, sonicated, and then treated with CaCl2 before filtration, HPLC and ecotoxicity analyses. Results indicated that: the recovery of TNT from soil (97.51% ± 2.78) was independent of the type of soil or moisture content; CaCl2 interfered with TNT toxicity and acetonitrile extracts could not be used directly for algal testing. When TNT extracts were diluted to fixed concentrations, similar TNT-induced ecotoxicities were generally observed and suggested that, apart from the expected effects of TNT concentrations in the soil, the soil texture and the moisture effects were minimal. The extraction procedure permits HPLC analyses as well as ecotoxicity testing and minimizes secondary soil matrix effects. Studies will be conducted to study the toxic effects of other energetic substances present in soil using this approach.

  18. Model free simulations of a high speed reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Steinberger, Craig J.

    1992-01-01

    The effects of compressibility, chemical reaction exothermicity and non-equilibrium chemical modeling in a combusting plane mixing layer were investigated by means of two-dimensional model free numerical simulations. It was shown that increased compressibility generally had a stabilizing effect, resulting in reduced mixing and chemical reaction conversion rate. The appearance of 'eddy shocklets' in the flow was observed at high convective Mach numbers. Reaction exothermicity was found to enhance mixing at the initial stages of the layer's growth, but had a stabilizing effect at later times. Calculations were performed for a constant-rate chemical kinetics model and an Arrhenius-type kinetics prototype. The Arrhenius model was found to cause a greater temperature increase due to reaction than the constant-rate kinetics model. This had the same stabilizing effect as increasing the exothermicity of the reaction. Localized flame quenching was also observed when the Zeldovich number was relatively large.

  19. Analyses and simulations of the upper ocean's response to Hurricane Felix at the Bermuda Testbed Mooring site: 13-23 August 1995

    NASA Astrophysics Data System (ADS)

    Zedler, S. E.; Dickey, T. D.; Doney, S. C.; Price, J. F.; Yu, X.; Mellor, G. L.

    2002-12-01

    The center of Hurricane Felix passed 85 km to the southwest of the Bermuda Testbed Mooring (BTM; 31°44'N, 64°10'W) site on 15 August 1995. Data collected in the upper ocean from the BTM during this encounter provide a rare opportunity to investigate the physical processes that occur in a hurricane's wake. Data analyses indicate that the storm caused a large increase in kinetic energy at near-inertial frequencies, internal gravity waves in the thermocline, and inertial pumping, mixed layer deepening, and significant vertical redistribution of heat, with cooling of the upper 30 m and warming at depths of 30-70 m. The temperature evolution was simulated using four one-dimensional mixed layer models: Price-Weller-Pinkel (PWP), K Profile Parameterization (KPP), Mellor-Yamada 2.5 (MY), and a modified version of MY2.5 (MY2). The primary differences in the model results were in their simulations of temperature evolution. In particular, when forced using a drag coefficient that had a linear dependence on wind speed, the KPP model predicted sea surface cooling, mixed layer currents, and the maximum depth of cooling closer to the observations than any of the other models. This was shown to be partly because of a special parameterization for gradient Richardson number (RgKPP) shear instability mixing in response to resolved shear in the interior. The MY2 model predicted more sea surface cooling and greater depth penetration of kinetic energy than the MY model. In the MY2 model the dissipation rate of turbulent kinetic energy is parameterized as a function of a locally defined Richardson number (RgMY2) allowing for a reduction in dissipation rate for stable Richardson numbers (RgMY2) when internal gravity waves are likely to be present. Sensitivity simulations with the PWP model, which has specifically defined mixing procedures, show that most of the heat lost from the upper layer was due to entrainment (parameterized as a function of bulk Richardson number RbPWP), with the remainder due to local Richardson number (RgPWP) instabilities. With the exception of the MY model the models predicted reasonable estimates of the north and east current components during and after the hurricane passage at 25 and 45 m. Although the results emphasize differences between the modeled responses to a given wind stress, current controversy over the formulation of wind stress from wind speed measurements (including possible sea state and wave age and sheltering effects) cautions against using our results for assessing model skill. In particular, sensitivity studies show that MY2 simulations of the temperature evolution are excellent when the wind stress is increased, albeit with currents that are larger than observed. Sensitivity experiments also indicate that preexisting inertial motion modulated the amplitude of poststorm currents, but that there was probably not a significant resonant response because of clockwise wind rotation for our study site.

  20. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the many sensitivity measures in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the Sobol indices can be estimated by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function, so the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
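
    A sketch of the mixed-kernel idea: a convex combination of a polynomial kernel (global behaviour) and a Gaussian RBF kernel (local behaviour), passed to scikit-learn's SVR as a callable. The weighting, degree and width are arbitrary choices, and the paper's post-processing of SVR coefficients into Sobol indices is not shown.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

        def mixed_kernel(X, Y, weight=0.5, degree=3, gamma=0.5):
            """Convex combination of a polynomial kernel (global) and an RBF kernel (local)."""
            return weight * polynomial_kernel(X, Y, degree=degree) + (1.0 - weight) * rbf_kernel(X, Y, gamma=gamma)

        # toy analytical test function (Ishigami-type) used as the model to be approximated
        rng = np.random.default_rng(4)
        X = rng.uniform(-np.pi, np.pi, size=(300, 3))
        y = np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

        svr = SVR(kernel=mixed_kernel, C=100.0, epsilon=0.05)
        svr.fit(X, y)
        print("training R^2 of the mixed-kernel surrogate:", round(svr.score(X, y), 3))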

  1. An analysis of tree mortality using high resolution remotely-sensed data for mixed-conifer forests in San Diego county

    NASA Astrophysics Data System (ADS)

    Freeman, Mary Pyott

    The montane mixed-conifer forests of San Diego County are currently experiencing extensive tree mortality, which is defined as dieback where whole stands are affected. This mortality is likely the result of the complex interaction of many variables, such as altered fire regimes, climatic conditions such as drought, as well as forest pathogens and past management strategies. Conifer tree mortality and its spatial pattern and change over time were examined in three components. In component 1, two remote sensing approaches were compared for their effectiveness in delineating dead trees, a spatial contextual approach and an OBIA (object based image analysis) approach, utilizing various dates and spatial resolutions of airborne image data. For each approach transforms and masking techniques were explored, which were found to improve classifications, and an object-based assessment approach was tested. In component 2, dead tree maps produced by the most effective techniques derived from component 1 were utilized for point pattern and vector analyses to further understand spatio-temporal changes in tree mortality for the years 1997, 2000, 2002, and 2005 for three study areas: Palomar, Volcan and Laguna mountains. Plot-based fieldwork was conducted to further assess mortality patterns. Results indicate that conifer mortality was significantly clustered, increased substantially between 2002 and 2005, and was non-random with respect to tree species and diameter class sizes. In component 3, multiple environmental variables were used in Generalized Linear Model (GLM-logistic regression) and decision tree classifier model development, revealing the importance of climate and topographic factors such as precipitation and elevation, in being able to predict areas of high risk for tree mortality. The results from this study highlight the importance of multi-scale spatial as well as temporal analyses, in order to understand mixed-conifer forest structure, dynamics, and processes of decline, which can lead to more sustainable management of forests with continued natural and anthropogenic disturbance.

  2. Neurodevelopment in Early Childhood Affected by Prenatal Lead Exposure and Iron Intake.

    PubMed

    Shah-Kulkarni, Surabhi; Ha, Mina; Kim, Byung-Mi; Kim, Eunjeong; Hong, Yun-Chul; Park, Hyesook; Kim, Yangho; Kim, Bung-Nyun; Chang, Namsoo; Oh, Se-Young; Kim, Young Ju; Lee, Boeun; Ha, Eun-Hee

    2016-01-01

    No safe threshold level of lead exposure in children has been recognized. Also, the information on shielding effect of maternal dietary iron intake during pregnancy on the adverse effects of prenatal lead exposure on children's postnatal neurocognitive development is very limited. We examined the association of prenatal lead exposure and neurodevelopment in children at 6, 12, 24, and 36 months and the protective action of maternal dietary iron intake against the impact of lead exposure. The study participants comprise 965 pregnant women and their subsequent offspring of the total participants enrolled in the Mothers and Children's environmental health study: a prospective birth cohort study. Generalized linear model and linear mixed model analysis were performed to analyze the effect of prenatal lead exposure and mother's dietary iron intake on children's cognitive development at 6, 12, 24, and 36 months. Maternal late pregnancy lead was marginally associated with deficits in mental development index (MDI) of children at 6 months. Mothers having less than 75th percentile of dietary iron intake during pregnancy showed significant increase in the harmful effect of late pregnancy lead exposure on MDI at 6 months. Linear mixed model analyses showed the significant detrimental effect of prenatal lead exposure in late pregnancy on cognitive development up to 36 months in children of mothers having less dietary iron intake during pregnancy. Thus, our findings imply importance to reduce prenatal lead exposure and have adequate iron intake for better neurodevelopment in children.

  3. Neurodevelopment in Early Childhood Affected by Prenatal Lead Exposure and Iron Intake

    PubMed Central

    Shah-Kulkarni, Surabhi; Ha, Mina; Kim, Byung-Mi; Kim, Eunjeong; Hong, Yun-Chul; Park, Hyesook; Kim, Yangho; Kim, Bung-Nyun; Chang, Namsoo; Oh, Se-Young; Kim, Young Ju; Lee, Boeun; Ha, Eun-Hee

    2016-01-01

    Abstract No safe threshold level of lead exposure in children has been recognized. Also, the information on shielding effect of maternal dietary iron intake during pregnancy on the adverse effects of prenatal lead exposure on children's postnatal neurocognitive development is very limited. We examined the association of prenatal lead exposure and neurodevelopment in children at 6, 12, 24, and 36 months and the protective action of maternal dietary iron intake against the impact of lead exposure. The study participants comprise 965 pregnant women and their subsequent offspring of the total participants enrolled in the Mothers and Children's environmental health study: a prospective birth cohort study. Generalized linear model and linear mixed model analysis were performed to analyze the effect of prenatal lead exposure and mother's dietary iron intake on children's cognitive development at 6, 12, 24, and 36 months. Maternal late pregnancy lead was marginally associated with deficits in mental development index (MDI) of children at 6 months. Mothers having less than 75th percentile of dietary iron intake during pregnancy showed significant increase in the harmful effect of late pregnancy lead exposure on MDI at 6 months. Linear mixed model analyses showed the significant detrimental effect of prenatal lead exposure in late pregnancy on cognitive development up to 36 months in children of mothers having less dietary iron intake during pregnancy. Thus, our findings imply importance to reduce prenatal lead exposure and have adequate iron intake for better neurodevelopment in children. PMID:26825887

  4. Analysis and modeling of subgrid scalar mixing using numerical data

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence is used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in the large eddy simulations of scalar mixing and reaction.

  5. The role of CSP in the electricity system of South Africa - technical operation, grid constraints, market structure and economics

    NASA Astrophysics Data System (ADS)

    Kost, Christoph; Friebertshäuser, Chris; Hartmann, Niklas; Fluri, Thomas; Nitz, Peter

    2017-06-01

    This paper analyses the role of solar technologies (CSP and PV) and their interaction in the South African electricity system using a fundamental electricity system model (ENTIGRIS-SouthAfrica). The model is used to analyse the South African long-term electricity generation portfolio mix, optimized site selection and required transmission capacities until the year 2050, with a particular focus on the location and grid integration of solar (PV and CSP) and wind power plants. The analysis is carried out using a detailed resource assessment of both technologies. A cluster approach is presented to reduce complexity by integrating the data into an optimization model.

  6. Analysing Instrument Mixes in Quality Assurance: The Czech and Slovak Accreditation Commissions in the Era of Mass Higher Education

    ERIC Educational Resources Information Center

    Kohoutek, Jan

    2014-01-01

    Utilising insights from policy instrument theory, the article analyses the design, functioning and effects of the tools used by the Czech Accreditation Commission (CAC) and the Slovak Accreditation Commission (SAC) in the 2000s. Aside from programme accreditation, the other tools analysed are: institutional approval, institutional evaluations,…

  7. Investigating the Relationships Among Resilience, Social Anxiety, and Procrastination in a Sample of College Students.

    PubMed

    Ko, Chen-Yi Amy; Chang, Yuhsuan

    2018-01-01

    This study investigated the relationships among resilience, social anxiety, and procrastination in a sample of college students. Specifically, structural equation modeling analyses were applied to examine the effect of resilience on procrastination and to test the mediating effect of social anxiety. The results of this study suggested that social anxiety partially mediated the relationship between resilience and procrastination. Students with higher levels of resilience reported a lower frequency of procrastination behavior, and resilience had an indirect effect on procrastination through social anxiety. The results of this study clarify the current knowledge of the mixed results on resilience and procrastination behaviors and offer practical learning strategies and psychological interventions.

  8. Using the Mixed Rasch Model to analyze data from the beliefs and attitudes about memory survey.

    PubMed

    Smith, Everett V; Ying, Yuping; Brown, Scott W

    2012-01-01

    In this study, we used the Mixed Rasch Model (MRM) to analyze data from the Beliefs and Attitudes About Memory Survey (BAMS; Brown, Garry, Silver, and Loftus, 1997). We used the original 5-point BAMS data to investigate the functioning of the "Neutral" category via threshold analysis under a 2-class MRM solution. The "Neutral" category was identified as not eliciting the model expected responses and observations in the "Neutral" category were subsequently treated as missing data. For the BAMS data without the "Neutral" category, exploratory MRM analyses specifying up to 5 latent classes were conducted to evaluate data-model fit using the consistent Akaike information criterion (CAIC). For each of three BAMS subscales, a two latent class solution was identified as fitting the mixed Rasch rating scale model the best. Results regarding threshold analysis, person parameters, and item fit based on the final models are presented and discussed as well as the implications of this study.
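
    The CAIC used to choose among latent-class solutions is a penalised log-likelihood; the sketch below assumes the log-likelihoods and parameter counts are already available from whatever MRM software is used, and the numbers are invented for illustration.

        import math

        def caic(log_likelihood, n_params, n_obs):
            """Consistent Akaike information criterion: -2 lnL + k (ln N + 1)."""
            return -2.0 * log_likelihood + n_params * (math.log(n_obs) + 1.0)

        # hypothetical 1- to 5-class solutions: (log-likelihood, number of parameters)
        fits = {1: (-5210.4, 16), 2: (-5050.2, 33), 3: (-5035.8, 50), 4: (-5028.1, 67), 5: (-5022.9, 84)}
        n_obs = 1000
        scores = {k: round(caic(ll, p, n_obs), 1) for k, (ll, p) in fits.items()}
        print(scores, "-> retain", min(scores, key=scores.get), "latent classes")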

  9. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    PubMed

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images compared with plaster models for mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant differences were found on comparing data between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch was 21.2 mm and 21.1 mm for CBCT and 22.5 mm and 22.5 mm for the plaster model, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.

  10. On the validity of effective formulations for transport through heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    de Dreuzy, Jean-Raynald; Carrera, Jesus

    2016-04-01

    Geological heterogeneity enhances spreading of solutes and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through heterogeneous porous media (HPM), it should honor mean advection, mixing and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the multi-rate mass transfer (MRMT) model to reproduce mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not restitute the local concentration structures, it induces smaller non-dispersive mixing than HPM. However long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, in general non-dispersive mixing cannot.

  11. Manpower Mix for Health Services

    PubMed Central

    Shuman, Larry J.; Young, John P.; Naddor, Eliezer

    1971-01-01

    A model is formulated to determine the mix of manpower and technology needed to provide health services of acceptable quality at a minimum total cost to the community. Total costs include both the direct costs associated with providing the services and with developing additional manpower and the indirect costs (shortage costs) resulting from not providing needed services. The model is applied to a hypothetical neighborhood health center, and its sensitivity to alternative policies is investigated by cost-benefit analyses. Possible extensions of the model to include dynamic elements in health delivery systems are discussed, as is its adaptation for use in hospital planning, with a changed objective function. PMID:5095652
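
    A toy linear program in the spirit of the model: choose the numbers of two provider types to minimise direct staffing cost plus a shortage penalty on unmet demand. The coefficients and the scipy formulation are ours, not the authors'.

        import numpy as np
        from scipy.optimize import linprog

        # decision variables: x = [physicians, nurse practitioners, unmet visits (shortage)]
        direct_cost = np.array([150000.0, 60000.0])     # assumed annual cost per provider
        shortage_cost = 40.0                            # assumed indirect cost per visit not provided
        visits_per_provider = np.array([4000.0, 2500.0])
        demand = 60000.0                                # visits needed by the community

        c = np.concatenate([direct_cost, [shortage_cost]])
        # visits provided plus the shortage must cover demand:  -a1*x1 - a2*x2 - s <= -demand
        A_ub = [np.concatenate([-visits_per_provider, [-1.0]])]
        b_ub = [-demand]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
        print("optimal mix (MD, NP, unmet visits):", np.round(res.x, 1), " total cost:", round(res.fun))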

  12. Planktonic events may cause polymictic-dimictic regime shifts in temperate lakes

    PubMed Central

    Shatwell, Tom; Adrian, Rita; Kirillin, Georgiy

    2016-01-01

    Water transparency affects the thermal structure of lakes, and within certain lake depth ranges, it can determine whether a lake mixes regularly (polymictic regime) or stratifies continuously (dimictic regime) from spring through summer. Phytoplankton biomass can influence transparency but the effect of its seasonal pattern on stratification is unknown. Therefore we analysed long term field data from two lakes of similar depth, transparency and climate but one polymictic and one dimictic, and simulated a conceptual lake with a hydrodynamic model. Transparency in the study lakes was typically low during spring and summer blooms and high in between during the clear water phase (CWP), caused when zooplankton graze the spring bloom. The effect of variability of transparency on thermal structure was stronger at intermediate transparency and stronger during a critical window in spring when the rate of lake warming is highest. Whereas the spring bloom strengthened stratification in spring, the CWP weakened it in summer. The presence or absence of the CWP influenced stratification duration and under some conditions determined the mixing regime. Therefore seasonal plankton dynamics, including biotic interactions that suppress the CWP, can influence lake temperatures, stratification duration, and potentially also the mixing regime. PMID:27074883

  13. Planktonic events may cause polymictic-dimictic regime shifts in temperate lakes.

    PubMed

    Shatwell, Tom; Adrian, Rita; Kirillin, Georgiy

    2016-04-14

    Water transparency affects the thermal structure of lakes, and within certain lake depth ranges, it can determine whether a lake mixes regularly (polymictic regime) or stratifies continuously (dimictic regime) from spring through summer. Phytoplankton biomass can influence transparency but the effect of its seasonal pattern on stratification is unknown. Therefore we analysed long term field data from two lakes of similar depth, transparency and climate but one polymictic and one dimictic, and simulated a conceptual lake with a hydrodynamic model. Transparency in the study lakes was typically low during spring and summer blooms and high in between during the clear water phase (CWP), caused when zooplankton graze the spring bloom. The effect of variability of transparency on thermal structure was stronger at intermediate transparency and stronger during a critical window in spring when the rate of lake warming is highest. Whereas the spring bloom strengthened stratification in spring, the CWP weakened it in summer. The presence or absence of the CWP influenced stratification duration and under some conditions determined the mixing regime. Therefore seasonal plankton dynamics, including biotic interactions that suppress the CWP, can influence lake temperatures, stratification duration, and potentially also the mixing regime.

  14. Mixed-method research protocol: defining and operationalizing patient-related complexity of nursing care in acute care hospitals.

    PubMed

    Huber, Evelyn; Kleinknecht-Dolf, Michael; Müller, Marianne; Kugler, Christiane; Spirig, Rebecca

    2017-06-01

    To define the concept of patient-related complexity of nursing care in acute care hospitals and to operationalize it in a questionnaire. The concept of patient-related complexity of nursing care in acute care hospitals has not been conclusively defined in the literature. The operationalization in a corresponding questionnaire is necessary, given the increased significance of the topic, due to shortened lengths of stay and increased patient morbidity. Hybrid model of concept development and embedded mixed-methods design. The theoretical phase of the hybrid model involved a literature review and the development of a working definition. In the fieldwork phase of 2015 and 2016, an embedded mixed-methods design was applied with complexity assessments of all patients at five Swiss hospitals using our newly operationalized questionnaire 'Complexity of Nursing Care' over 1 month. These data will be analysed with structural equation modelling. Twelve qualitative case studies will be embedded. They will be analysed using a structured process of constructing case studies and content analysis. In the final analytic phase, the quantitative and qualitative data will be merged and added to the results of the theoretical phase for a common interpretation. Cantonal Ethics Committee Zurich judged the research programme as unproblematic in December 2014 and May 2015. Following the phases of the hybrid model and using an embedded mixed-methods design can reach an in-depth understanding of patient-related complexity of nursing care in acute care hospitals, a final version of the questionnaire and an acknowledged definition of the concept. © 2016 John Wiley & Sons Ltd.

  15. Development of a Reduced-Order Three-Dimensional Flow Model for Thermal Mixing and Stratification Simulation during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    2017-09-03

    Mixing, thermal-stratification, and mass transport phenomena in large pools or enclosures play major roles for the safety of reactor systems. Depending on the fidelity requirement and computational resources, various modeling methods, from the 0-D perfect mixing model to 3-D Computational Fluid Dynamics (CFD) models, are available. Each is associated with its own advantages and shortcomings. It is very desirable to develop an advanced and efficient thermal mixing and stratification modeling capability embedded in a modern system analysis code to improve the accuracy of reactor safety analyses and to reduce modeling uncertainties. An advanced system analysis tool, SAM, is being developed at Argonne National Laboratory for advanced non-LWR reactor safety analysis. While SAM is being developed as a system-level modeling and simulation tool, a reduced-order three-dimensional module is under development to model the multi-dimensional flow and thermal mixing and stratification in large enclosures of reactor systems. This paper provides an overview of the three-dimensional finite element flow model in SAM, including the governing equations, stabilization scheme, and solution methods. Additionally, several verification and validation tests are presented, including lid-driven cavity flow, natural convection inside a cavity, and laminar flow in a channel of parallel plates. Based on the comparisons with the analytical solutions and experimental results, it is demonstrated that the developed 3-D fluid model can perform very well for a wide range of flow problems.

  16. Inclusion of surface gravity wave effects in vertical mixing parameterizations with application to Chesapeake Bay, USA

    NASA Astrophysics Data System (ADS)

    Fisher, A. W.; Sanford, L. P.; Scully, M. E.; Suttles, S. E.

    2016-02-01

    Enhancement of wind-driven mixing by Langmuir turbulence (LT) may have important implications for exchanges of mass and momentum in estuarine and coastal waters, but the transient nature of LT and observational constraints make quantifying its impact on vertical exchange difficult. Recent studies have shown that wind events can be of first order importance to circulation and mixing in estuaries, prompting this investigation into the ability of second-moment turbulence closure schemes to model wind-wave enhanced mixing in an estuarine environment. An instrumented turbulence tower was deployed in middle reaches of Chesapeake Bay in 2013 and collected observations of coherent structures consistent with LT that occurred under regions of breaking waves. Wave and turbulence measurements collected from a vertical array of Acoustic Doppler Velocimeters (ADVs) provided direct estimates of TKE, dissipation, turbulent length scale, and the surface wave field. Direct measurements of air-sea momentum and sensible heat fluxes were collected by a co-located ultrasonic anemometer deployed 3m above the water surface. Analyses of the data indicate that the combined presence of breaking waves and LT significantly influences air-sea momentum transfer, enhancing vertical mixing and acting to align stress in the surface mixed layer in the direction of Lagrangian shear. Here these observations are compared to the predictions of commonly used second-moment turbulence closures schemes, modified to account for the influence of wave breaking and LT. LT parameterizations are evaluated under neutrally stratified conditions and buoyancy damping parameterizations are evaluated under stably stratified conditions. We compare predicted turbulent quantities to observations for a variety of wind, wave, and stratification conditions. The effects of fetch-limited wave growth, surface buoyancy flux, and tidal distortion on wave mixing parameterizations will also be discussed.

  17. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure is violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even at the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in details. PMID:23376789

  18. Linear mixed-effects modeling approach to FMRI group analysis.

    PubMed

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure is violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even at the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in details. Published by Elsevier Inc.
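
    A sketch of obtaining an ICC from a mixed model with crossed random effects (here, subjects crossed with sessions) via variance components in statsmodels; the simulated data and this formulation are illustrative and do not reproduce the implementation described in the paper.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # simulated effect estimates: every subject is measured in every session (crossed design)
        rng = np.random.default_rng(5)
        n_subj, n_sess = 25, 4
        subject = np.repeat(np.arange(n_subj), n_sess)
        session = np.tile(np.arange(n_sess), n_subj)
        y = rng.normal(0.0, 1.0, n_subj)[subject] + rng.normal(0.0, 0.5, n_sess)[session] + rng.normal(0.0, 0.8, subject.size)
        df = pd.DataFrame({"y": y, "subject": subject, "session": session, "one": 1})

        # crossed random effects as variance components within a single dummy group
        vc = {"subject": "0 + C(subject)", "session": "0 + C(session)"}
        res = sm.MixedLM.from_formula("y ~ 1", groups="one", vc_formula=vc, re_formula="0", data=df).fit()
        comps = dict(zip(res.model.exog_vc.names, res.vcomp))          # variance component estimates by name
        icc_subject = comps["subject"] / (res.vcomp.sum() + res.scale)
        print(res.summary())
        print("ICC(subject) =", round(icc_subject, 3))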

  19. Dynamically heterogenous partitions and phylogenetic inference: an evaluation of analytical strategies with cytochrome b and ND6 gene sequences in cranes.

    PubMed

    Krajewski, C; Fain, M G; Buckley, L; King, D G

    1999-11-01

    Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogenous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogenous data sets. Copyright 1999 Academic Press.

  20. Early treatment of posterior crossbite - a randomised clinical trial

    PubMed Central

    2013-01-01

    Background The aim of this randomised clinical trial was to assess the effect of early orthodontic treatment in contrast to normal growth effects for functional unilateral posterior crossbite in the late deciduous and early mixed dentition by means of three-dimensional digital model analysis. Methods This randomised clinical trial was conducted to analyse the orthodontic treatment effects for patients with functional unilateral posterior crossbite in the late deciduous and early mixed dentition using a two-step procedure: initial maxillary expansion followed by a U-bow activator therapy. Thirty-one patients in the treatment group and 35 patients in the control group, with a mean age of 7.3 years (SD 2.1), were monitored. The time between the initial assessment (T1) and the follow-up (T2) was one year. The orthodontic analysis was performed on three-dimensional digital models. Using the ‘Digimodel’ software, the orthodontic measurements in the maxilla and mandible and for the midline deviation, the overjet and the overbite were recorded. Results Significant differences between the control and the therapy group at T2 were detected for the anterior, median and posterior transversal dimensions of the maxilla, the palatal depth, the palatal base arch length, the maxillary arch length and inclination, the midline deviation, the overjet and the overbite. Conclusions Orthodontic treatment of a functional unilateral posterior crossbite with a bonded maxillary expansion device followed by U-bow activator therapy in the late deciduous and early mixed dentition is an effective therapeutic method, as evidenced by the results of this RCT. It leads to three-dimensional therapeutically induced maxillary growth effects. Dental occlusion is significantly improved, and the prognosis for normal craniofacial growth is enhanced. Trial registration Registration trial DRKS00003497 on DRKS PMID:23339736

  1. Mixed Single/Double Precision in OpenIFS: A Detailed Study of Energy Savings, Scaling Effects, Architectural Effects, and Compilation Effects

    NASA Astrophysics Data System (ADS)

    Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy

    2017-04-01

    It has been shown that a mixed precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality of model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed precision variation of the OpenIFS model. The mixed precision variation of OpenIFS is analogous to the IFS variation used in Vana et al. We (1) present results for energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.

  2. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    PubMed

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, the USS Perry deep rebreather (RB) exploration dive, a world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, in both recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.

  3. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  4. A brief measure of attitudes toward mixed methods research in psychology.

    PubMed

    Roberts, Lynne D; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research, along with validation measures, was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors ('Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component') each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, in measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs.

  5. Phylogeny of sipunculan worms: A combined analysis of four gene regions and morphology.

    PubMed

    Schulze, Anja; Cutler, Edward B; Giribet, Gonzalo

    2007-01-01

    The intra-phyletic relationships of sipunculan worms were analyzed based on DNA sequence data from four gene regions and 58 morphological characters. Initially we analyzed the data under direct optimization using parsimony as optimality criterion. An implied alignment resulting from the direct optimization analysis was subsequently utilized to perform a Bayesian analysis with mixed models for the different data partitions. For this we applied a doublet model for the stem regions of the 18S rRNA. Both analyses support monophyly of Sipuncula and most of the same clades within the phylum. The analyses differ with respect to the relationships among the major groups but whereas the deep nodes in the direct optimization analysis generally show low jackknife support, they are supported by 100% posterior probability in the Bayesian analysis. Direct optimization has been useful for handling sequences of unequal length and generating conservative phylogenetic hypotheses whereas the Bayesian analysis under mixed models provided high resolution in the basal nodes of the tree.

  6. Effects of a Web-Based Computer-Tailored Game to Reduce Binge Drinking Among Dutch Adolescents: A Cluster Randomized Controlled Trial

    PubMed Central

    Crutzen, Rik; Mercken, Liesbeth; Candel, Math; de Vries, Hein

    2016-01-01

    Background Binge drinking among Dutch adolescents is among the highest in Europe. Few interventions so far have focused on adolescents aged 15 to 19 years. Because binge drinking increases significantly during those years, it is important to develop binge drinking prevention programs for this group. Web-based computer-tailored interventions can be an effective tool for reducing this behavior in adolescents. Embedding the computer-tailored intervention in a serious game may make it more attractive to adolescents. Objective The aim was to assess whether a Web-based computer-tailored intervention is effective in reducing binge drinking in Dutch adolescents aged 15 to 19 years. Secondary outcomes were reduction in excessive drinking and overall consumption during the previous week. Personal characteristics associated with program adherence were also investigated. Methods A cluster randomized controlled trial was conducted among 34 Dutch schools. Each school was randomized into either an experimental (n=1622) or a control (n=1027) condition. Baseline assessment took place in January and February 2014. At baseline, demographic variables and alcohol use were assessed. Follow-up assessment of alcohol use took place 4 months later (May and June 2014). After the baseline assessment, participants in the experimental condition started with the intervention consisting of a game about alcohol in which computer-tailored feedback regarding motivational characteristics was embedded. Participants in the control condition only received the baseline questionnaire. Both groups received the 4-month follow-up questionnaire. Effects of the intervention were assessed using logistic regression mixed models analyses for binge and excessive drinking and linear regression mixed models analyses for weekly consumption. Factors associated with intervention adherence in the experimental condition were explored by means of a linear regression model. Results In total, 2649 adolescents participated in the baseline assessment. At follow-up, 824 (31.11%) adolescents returned. The intervention was effective in reducing binge drinking among adolescents aged 15 years (P=.03) and those aged 16 years when they participated in at least 2 intervention sessions (P=.04). Interaction effects between excessive drinking and educational level (P=.08) and between weekly consumption and age (P=.09) were found; however, in-depth analyses revealed no significant subgroup effects for both interaction effects. Additional analyses revealed that prolonged use of the intervention was associated with stronger effects for binge drinking. Yet, overall adherence to the intervention was low. Analyses revealed that being Protestant, female, younger, a nonbinge drinker, and having a higher educational background were associated with adherence. Conclusions The intervention was effective for adolescents aged 15 and 16 years concerning binge drinking. Prevention messages may be more effective for those at the start of their drinking career, whereas other methods may be needed for those with a longer history of alcohol consumption. Despite using game elements, intervention completion was low. Trial Registration Dutch Trial Register: NTR4048; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=4048 (Archived by WebCite® at http://www.webcitation.org/6eSJD3FiY) PMID:26842694

  7. Effects of a Web-Based Computer-Tailored Game to Reduce Binge Drinking Among Dutch Adolescents: A Cluster Randomized Controlled Trial.

    PubMed

    Jander, Astrid; Crutzen, Rik; Mercken, Liesbeth; Candel, Math; de Vries, Hein

    2016-02-03

    Binge drinking among Dutch adolescents is among the highest in Europe. Few interventions so far have focused on adolescents aged 15 to 19 years. Because binge drinking increases significantly during those years, it is important to develop binge drinking prevention programs for this group. Web-based computer-tailored interventions can be an effective tool for reducing this behavior in adolescents. Embedding the computer-tailored intervention in a serious game may make it more attractive to adolescents. The aim was to assess whether a Web-based computer-tailored intervention is effective in reducing binge drinking in Dutch adolescents aged 15 to 19 years. Secondary outcomes were reduction in excessive drinking and overall consumption during the previous week. Personal characteristics associated with program adherence were also investigated. A cluster randomized controlled trial was conducted among 34 Dutch schools. Each school was randomized into either an experimental (n=1622) or a control (n=1027) condition. Baseline assessment took place in January and February 2014. At baseline, demographic variables and alcohol use were assessed. Follow-up assessment of alcohol use took place 4 months later (May and June 2014). After the baseline assessment, participants in the experimental condition started with the intervention consisting of a game about alcohol in which computer-tailored feedback regarding motivational characteristics was embedded. Participants in the control condition only received the baseline questionnaire. Both groups received the 4-month follow-up questionnaire. Effects of the intervention were assessed using logistic regression mixed models analyses for binge and excessive drinking and linear regression mixed models analyses for weekly consumption. Factors associated with intervention adherence in the experimental condition were explored by means of a linear regression model. In total, 2649 adolescents participated in the baseline assessment. At follow-up, 824 (31.11%) adolescents returned. The intervention was effective in reducing binge drinking among adolescents aged 15 years (P=.03) and those aged 16 years when they participated in at least 2 intervention sessions (P=.04). Interaction effects between excessive drinking and educational level (P=.08) and between weekly consumption and age (P=.09) were found; however, in-depth analyses revealed no significant subgroup effects for both interaction effects. Additional analyses revealed that prolonged use of the intervention was associated with stronger effects for binge drinking. Yet, overall adherence to the intervention was low. Analyses revealed that being Protestant, female, younger, a nonbinge drinker, and having a higher educational background were associated with adherence. The intervention was effective for adolescents aged 15 and 16 years concerning binge drinking. Prevention messages may be more effective for those at the start of their drinking career, whereas other methods may be needed for those with a longer history of alcohol consumption. Despite using game elements, intervention completion was low. Dutch Trial Register: NTR4048; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=4048 (Archived by WebCite® at http://www.webcitation.org/6eSJD3FiY).
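
    A minimal sketch, assuming hypothetical variable names and simulated data rather than the trial's records, of the two model families described above: a logistic mixed model for binge drinking and a linear mixed model for weekly consumption, each with school as the cluster-level random effect (R, lme4).

      library(lme4)

      # Simulated placeholder data for a cluster-randomised design: condition is
      # assigned at the school level; none of these values are from the trial.
      set.seed(1)
      n_school <- 34; n_per <- 30
      trial <- data.frame(
        school    = factor(rep(1:n_school, each = n_per)),
        condition = factor(rep(rep(c("experimental", "control"),
                                   length.out = n_school), each = n_per)),
        age       = sample(15:19, n_school * n_per, replace = TRUE)
      )
      trial$binge  <- rbinom(nrow(trial), 1, 0.4)   # binge drinking at follow-up (0/1)
      trial$drinks <- rpois(nrow(trial), 5)         # glasses in the previous week

      # Logistic mixed model for binge drinking; school is the random (cluster) effect.
      m_binge  <- glmer(binge ~ condition + age + (1 | school),
                        data = trial, family = binomial)

      # Linear mixed model for weekly consumption with the same cluster structure.
      m_drinks <- lmer(drinks ~ condition + age + (1 | school), data = trial)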

  8. A Methodological Review of US Budget-Impact Models for New Drugs.

    PubMed

    Mauskopf, Josephine; Earnshaw, Stephanie

    2016-11-01

    A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments was not included in several analyses for chronic conditions. In addition, all drug-related costs were not captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.

  9. Do effects of common case-mix adjusters on patient experiences vary across patient groups?

    PubMed

    de Boer, Dolf; van der Hoek, Lucas; Rademakers, Jany; Delnoij, Diana; van den Berg, Michael

    2017-11-22

    Many survey studies in health care adjust for demographic characteristics such as age, gender, educational attainment and general health when performing statistical analyses. Whether the effects of these demographic characteristics are consistent between patient groups remains to be determined. This is important as the rationale for adjustment is often that demographic sub-groups differ in their so-called 'response tendency'. This rationale may be less convincing if the effects of response tendencies vary across patient groups. The present paper examines whether the impact of these characteristics on patients' global rating of care varies across patient groups. Secondary analyses using multi-level regression models were performed on a dataset including 32 different patient groups and 145,578 observations. For each demographic variable, the 95% expected range of case-mix coefficients across patient groups is presented. In addition, we report whether the variance of coefficients for demographic variables across patient groups is significant. Overall, men, elderly, lower educated people and people in good health tend to give higher global ratings. However, these effects varied significantly across patient groups and included the possibility of no effect or an opposite effect in some patient groups. The response tendency attributed to demographic characteristics - such as older respondents being milder, or higher educated respondents being more critical - is not general or universal. As such, the mechanism linking demographic characteristics to survey results on patient experiences with quality of care is more complicated than a general response tendency. It is possible that the response tendency interacts with patient group, but it is also possible that other mechanisms are at play.
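
    The multilevel idea in this study, case-mix coefficients that are allowed to vary across patient groups, can be sketched as random slopes in lme4. The data below are simulated placeholders, not the 145,578-observation dataset, and the variable names are assumptions.

      library(lme4)

      # Simulated placeholders: 32 patient groups, a global rating of care,
      # a standardised age adjuster, and self-rated health.
      set.seed(2)
      d <- data.frame(
        group  = factor(rep(1:32, each = 200)),
        age_z  = rnorm(32 * 200),
        health = sample(1:5, 32 * 200, replace = TRUE)
      )
      b <- rnorm(32, 0.3, 0.15)                       # group-specific age effects
      d$rating <- 8 + b[d$group] * d$age_z + 0.1 * d$health + rnorm(nrow(d))

      # Random slope for age_z by patient group: its estimated variance indicates
      # how much the case-mix coefficient varies across groups (other adjusters
      # such as gender or education could be added in the same way).
      fit <- lmer(rating ~ age_z + health + (1 + age_z | group), data = d)
      VarCorr(fit)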

  10. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    USGS Publications Warehouse

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
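
    A hedged sketch of the variance-partitioning framework described above: a negative binomial mixed model with random effects for site, year, and their interaction, fitted with lme4::glmer.nb on simulated catch counts rather than the Walleye survey data.

      library(lme4)

      # Hypothetical gill-net catches: 10 sites x 15 years x 3 nets per site-year;
      # not the actual survey data.
      set.seed(3)
      d <- expand.grid(site = factor(1:10), year = factor(1:15), net = 1:3)
      d$catch <- rnbinom(nrow(d), mu = 3, size = 1.2)

      # Negative binomial mixed model with random effects for site, year, and
      # their interaction; the intercept-only fixed part matches the
      # variance-partitioning use described in the abstract.
      fit <- glmer.nb(catch ~ 1 + (1 | site) + (1 | year) + (1 | site:year), data = d)

      VarCorr(fit)     # site, year, and site x year variance components
      summary(fit)     # also reports the estimated NB dispersion (theta)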

  11. Longitudinal measurements of oxygen consumption in growing infants during the first weeks after birth: old data revisited.

    PubMed

    Sinclair, J C; Thorlund, K; Walter, S D

    2013-01-01

    In a study conducted in 1966-1969, longitudinal measurements were made of the metabolic rate in growing infants. Statistical methods for analyzing longitudinal data weren't readily accessible at that time. To measure minimal rates of oxygen consumption (V·O2, ml/min) in growing infants during the first postnatal weeks and to determine the relationships between postnatal increases in V·O2, body size and postnatal age. We studied 61 infants of any birth weight or gestational age, including 19 of very low birth weight. The infants, nursed in incubators, were clinically well and without need of oxygen supplementation or respiratory assistance. Serial measures of V·O2 using a closed-circuit method were obtained at approximately weekly intervals. V·O2 was measured under thermoneutral conditions with the infant asleep or resting quietly. Data were analyzed using mixed-effects models. During early postnatal growth, V·O2 rises as surface area (m(2))(1.94) (standard error, SE 0.054) or body weight (kg)(1.24) (SE 0.033). Multivariate analyses show statistically significant effects of both size and age. Reference intervals (RIs) for V·O2 for fixed values of body weight and postnatal age are presented. As V·O2 rises with increasing size and age, there is an increase in the skin-operative environmental temperature gradient (T skin-op) required for heat loss. Required T skin-op can be predicted from surface area and heat loss (heat production minus heat storage). Generation of RIs for minimal rates of V·O2 in growing infants from the 1960s was enabled by application of mixed-effects statistical models for analyses of longitudinal data. Results apply to the precaffeine era of neonatal care. Copyright © 2013 S. Karger AG, Basel.
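
    A minimal sketch of the kind of mixed-effects model described: on the log scale, the exponent of body weight is the slope for log(weight), and a random intercept per infant handles the repeated weekly measurements. The data are simulated, not the original 1966-1969 measurements.

      library(lme4)

      # Simulated infants: weight in kg, postnatal age in weeks, and minimal
      # oxygen consumption VO2 in ml/min; values are illustrative only.
      set.seed(4)
      d <- data.frame(
        infant = factor(rep(1:61, times = 5)),
        weight = exp(rnorm(305, log(2.5), 0.3)),
        age    = rep(1:5, each = 61)
      )
      d$vo2 <- exp(log(6) + 1.24 * log(d$weight) + 0.05 * d$age + rnorm(305, 0, 0.1))

      # On the log scale, the coefficient of log(weight) is the allometric
      # exponent (about 1.24 in the abstract); the random intercept per infant
      # accounts for the repeated weekly measurements.
      fit <- lmer(log(vo2) ~ log(weight) + age + (1 | infant), data = d)
      fixef(fit)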

  12. An investigation of the predictors of photoprotection and UVR dose to the face in patients with XP: a protocol using observational mixed methods.

    PubMed

    Walburn, Jessica; Sarkany, Robert; Norton, Sam; Foster, Lesley; Morgan, Myfanwy; Sainsbury, Kirby; Araújo-Soares, Vera; Anderson, Rebecca; Garrood, Isabel; Heydenreich, Jakob; Sniehotta, Falko F; Vieira, Rute; Wulf, Hans Christian; Weinman, John

    2017-08-21

    Xeroderma pigmentosum (XP) is a rare genetic condition caused by defective nucleotide excision repair and characterised by skin cancer, ocular and neurological involvement. Stringent ultraviolet protection is the only way to prevent skin cancer. Despite the risks, some patients' photoprotection is poor, with a potentially devastating impact on their prognosis. The aim of this research is to identify disease-specific and psychosocial predictors of photoprotection behaviour and ultraviolet radiation (UVR) dose to the face. Mixed methods research based on 45 UK patients will involve qualitative interviews to identify individuals' experience of XP and the influences on their photoprotection behaviours and a cross-sectional quantitative survey to assess biopsychosocial correlates of these behaviours at baseline. This will be followed by objective measurement of UVR exposure for 21 days by wrist-worn dosimeter and daily recording of photoprotection behaviours and psychological variables for up to 50 days in the summer months. This novel methodology will enable UVR dose reaching the face to be calculated and analysed as a clinically relevant endpoint. A range of qualitative and quantitative analytical approaches will be used, reflecting the mixed methods (eg, cross-sectional qualitative interviews, n-of-1 studies). Framework analysis will be used to analyse the qualitative interviews; mixed-effects longitudinal models will be used to examine the association of clinical and psychosocial factors with the average daily UVR dose; dynamic logistic regression models will be used to investigate participant-specific psychosocial factors associated with photoprotection behaviours. This research has been approved by Camden and King's Cross Research Ethics Committee 15/LO/1395. The findings will be published in peer-reviewed journals and presented at national and international scientific conferences. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
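
    The sketch below is a simplified stand-in for the approach described, not MACAU itself: a Poisson GLMM in which an observation-level random effect absorbs over-dispersion and a family-level random effect captures sample non-independence, with a library-size offset. Variable names and data are hypothetical.

      library(lme4)

      # Hypothetical counts for a single gene across 60 related samples; this is
      # not MACAU's sampling-based algorithm, only an illustration of the model form.
      set.seed(5)
      n <- 60
      d <- data.frame(
        sample    = factor(1:n),                        # observation-level RE
        family    = factor(rep(1:15, each = 4)),        # related individuals
        condition = factor(rep(c("A", "B"), length.out = n)),
        libsize   = rpois(n, 1e6),                      # total read counts
        counts    = rpois(n, 20)
      )

      # Two random-effects terms: the family term captures sample non-independence,
      # the observation-level term absorbs over-dispersion.
      fit <- glmer(counts ~ condition + offset(log(libsize)) +
                     (1 | family) + (1 | sample),
                   data = d, family = poisson)
      fixef(fit)["conditionB"]   # log fold change between conditions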

  14. A Vignette (User's Guide) for “An R Package for Statistical ...

    EPA Pesticide Factsheets

    StatCharrms is a graphical user front-end for ease of use in analyzing data generated from OCSPP 890.2200, Medaka Extended One Generation Reproduction Test (MEOGRT), and OCSPP 890.2300, Larval Amphibian Gonad Development Assay (LAGDA). The analyses StatCharrms is capable of performing are: the Rao-Scott adjusted Cochran-Armitage test for trend By Slices (RSCABS), a standard Cochran-Armitage test for trend By Slices (SCABS), a mixed-effects Cox proportional model, the Jonckheere-Terpstra step-down trend test, the Dunn test, one-way ANOVA, weighted ANOVA, mixed-effects ANOVA, repeated measures ANOVA, and the Dunnett test. This document provides a User's Manual (termed a Vignette by the Comprehensive R Archive Network (CRAN)) for the previously created R-code tool called StatCharrms (Statistical analysis of Chemistry, Histopathology, and Reproduction endpoints using Repeated measures and Multi-generation Studies). The StatCharrms R code has been publicly available directly from EPA staff since the approval of OCSPP 890.2200 and 890.2300, and is now publicly available on CRAN.

  15. Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles

    NASA Astrophysics Data System (ADS)

    Kaur, Ramanpreet; Sharma, Sapna

    2018-06-01

    The basic lattice model is extended to study heterogeneous traffic by considering the optimal current difference effect on a unidirectional single-lane highway. Heterogeneous traffic consisting of low- and high-sensitivity vehicles is modeled and their impact on the stability of mixed traffic flow has been examined through linear stability analysis. The stability of flow is investigated in five distinct regions of the neutral stability diagram corresponding to the amount of higher-sensitivity vehicles present on the road. To investigate the propagating behavior of density waves, nonlinear analysis is performed and, near the critical point, the kink-antikink soliton is obtained by deriving the mKdV equation. The effect of the fraction parameter corresponding to high-sensitivity vehicles is investigated and the results indicate that stability increases with the fraction parameter. The theoretical findings are verified via direct numerical simulation.

  16. The Effect of Transcranial Direct Current Stimulation (tDCS) Electrode Size and Current Intensity on Motor Cortical Excitability: Evidence From Single and Repeated Sessions.

    PubMed

    Ho, Kerrie-Anne; Taylor, Janet L; Chew, Taariq; Gálvez, Verònica; Alonzo, Angelo; Bai, Siwei; Dokos, Socrates; Loo, Colleen K

    2016-01-01

    Current density is considered an important factor in determining the outcomes of tDCS, and is determined by the current intensity and electrode size. Previous studies examining the effect of these parameters on motor cortical excitability with small sample sizes reported mixed results. This study examined the effect of current intensity (1 mA, 2 mA) and electrode size (16 cm(2), 35 cm(2)) on motor cortical excitability over single and repeated tDCS sessions. Data from seven studies in 89 healthy participants were pooled for analysis. Single-session data were analyzed using mixed effects models and repeated-session data were analyzed using mixed design analyses of variance. Computational modeling was used to examine the electric field generated. The magnitude of increases in excitability after anodal tDCS was modest. For single-session tDCS, the 35 cm(2) electrodes produced greater increases in cortical excitability compared to the 16 cm(2) electrodes. There were no differences in the magnitude of cortical excitation produced by 1 mA and 2 mA tDCS. The repeated-sessions data also showed that there were greater increases in excitability with the 35 cm(2) electrodes. Further, repeated sessions of tDCS with the 35 cm(2) electrodes resulted in a cumulative increase in cortical excitability. Computational modeling predicted higher electric field at the motor hotspot for the 35 cm(2) electrodes. 2 mA tDCS does not necessarily produce larger effects than 1 mA tDCS in healthy participants. Careful consideration should be given to the exact positioning, size and orientation of tDCS electrodes relative to cortical regions. Copyright © 2016 Elsevier Inc. All rights reserved.
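
    A rough sketch, not the authors' analysis code, of the single-session structure: excitability change modelled with current intensity and electrode size as crossed fixed effects and participants nested within studies as random effects. All values and variable names are simulated placeholders.

      library(lme4)

      # Simulated placeholders: 89 participants drawn from 7 studies, three
      # post-stimulation time points, between-subject intensity/size factors.
      set.seed(6)
      d <- expand.grid(participant = factor(1:89),
                       time = factor(c("post5", "post15", "post30")))
      d$study      <- factor(as.integer(d$participant) %% 7 + 1)
      d$intensity  <- factor(ifelse(as.integer(d$participant) %% 2 == 0, "1mA", "2mA"))
      d$size       <- factor(ifelse(as.integer(d$participant) %% 3 == 0, "16cm2", "35cm2"))
      d$mep_change <- rnorm(nrow(d), 0.1, 0.3)   # change in motor evoked potential

      # Participants nested within studies; intensity and electrode size as
      # crossed fixed effects, mirroring the pooled single-session analysis.
      fit <- lmer(mep_change ~ intensity * size + time + (1 | study/participant),
                  data = d)
      anova(fit)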

  17. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
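
    A small illustrative example, not the authors' implementation, of fitting a two-component Weibull mixture to DBH data by directly maximising the mixture log-likelihood in base R; the simulated cohorts and starting values are arbitrary assumptions.

      # Two simulated DBH cohorts (arbitrary parameters, not the authors' data).
      set.seed(7)
      dbh <- c(rweibull(300, shape = 4, scale = 15),   # younger cohort
               rweibull(200, shape = 6, scale = 35))   # older cohort

      # Negative log-likelihood of a two-component Weibull mixture, with the
      # mixing weight on the logit scale and shapes/scales on the log scale.
      negloglik <- function(p) {
        w   <- plogis(p[1])
        shp <- exp(p[2:3]); scl <- exp(p[4:5])
        dens <- w * dweibull(dbh, shp[1], scl[1]) +
          (1 - w) * dweibull(dbh, shp[2], scl[2])
        -sum(log(dens))
      }

      fit <- optim(c(0, log(3), log(5), log(10), log(30)), negloglik)
      c(weight = plogis(fit$par[1]),
        shape1 = exp(fit$par[2]), shape2 = exp(fit$par[3]),
        scale1 = exp(fit$par[4]), scale2 = exp(fit$par[5]))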

  18. Deliberate practice predicts performance over time in adolescent chess players and drop-outs: a linear mixed models analysis.

    PubMed

    de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G

    2008-11-01

    In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training, were analysed since they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with critique raised against the deliberate practice theory that the factors deliberate practice and talent could be confounded.

  19. Effective Stochastic Model for Reactive Transport

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A. M.; Zheng, B.; Barajas-Solano, D. A.

    2017-12-01

    We propose an effective stochastic advection-diffusion-reaction (SADR) model. Unlike traditional advection-dispersion-reaction models, the SADR model describes mechanical and diffusive mixing as two separate processes. In the SADR model, the mechanical mixing is driven by a random advective velocity with the variance given by the coefficient of mechanical dispersion. The diffusive mixing is modeled as Fickian diffusion with an effective diffusion coefficient. Both coefficients are given in terms of the Peclet number (Pe) and the coefficient of molecular diffusion. We use the experimental results of to demonstrate that for transport and bimolecular reactions in porous media the SADR model is significantly more accurate than the traditional dispersion model, which overestimates the mass of the reaction product by as much as 25%.

  20. Mixed ethnicity and behavioural problems in the Millennium Cohort Study

    PubMed Central

    Zilanawala, Afshin; Sacker, Amanda; Kelly, Yvonne

    2018-01-01

    Background The population of mixed ethnicity individuals in the UK is growing. Despite this demographic trend, little is known about mixed ethnicity children and their problem behaviours. We examine trajectories of behavioural problems among non-mixed and mixed ethnicity children from early to middle childhood using nationally representative cohort data in the UK. Methods Data from 16 330 children from the Millennium Cohort Study with total difficulties scores were analysed. We estimated trajectories of behavioural problems by mixed ethnicity using growth curve models. Results White mixed (mean total difficulties score: 8.3), Indian mixed (7.7), Pakistani mixed (8.9) and Bangladeshi mixed (7.2) children had fewer problem behaviours than their non-mixed counterparts at age 3 (9.4, 10.1, 13.1 and 11.9, respectively). White mixed, Pakistani mixed and Bangladeshi mixed children had growth trajectories in problem behaviours significantly different from that of their non-mixed counterparts. Conclusions Using a detailed mixed ethnic classification revealed diverging trajectories between some non-mixed and mixed children across the early life course. Future studies should investigate the mechanisms, which may influence increasing behavioural problems in mixed ethnicity children. PMID:26912571

  1. Perceived Risk of Burglary and Fear of Crime: Individual- and Country-Level Mixed Modeling.

    PubMed

    Chon, Don Soo; Wilson, Mary

    2016-02-01

    Given the scarcity of prior studies, the current research introduced country-level variables, along with individual-level ones, to test how they are related to an individual's perceived risk of burglary (PRB) and fear of crime (FC), separately, by using mixed-level logistic regression analyses. The analyses of 104,218 individuals, residing in 50 countries, showed that country-level poverty was positively associated with FC only. In contrast, individual-level variables, such as prior property crime victimization and female gender, had consistently positive relationships with both PRB and FC. However, age group and socioeconomic status were inconsistent between those two models, suggesting that PRB and FC are two different concepts. Finally, no significant difference in the pattern of PRB and FC was found between a highly developed group of countries and a less developed one. © The Author(s) 2014.

  2. Eliciting mixed emotions: a meta-analysis comparing models, types, and measures

    PubMed Central

    Berrios, Raul; Totterdell, Peter; Kellett, Stephen

    2015-01-01

    The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model—dimensional or discrete—as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of opposite valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805
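
    A minimal sketch of the random-effects meta-analytic machinery described, using the metafor package on simulated effect sizes rather than the 63 reviewed studies, including a moderator analysis by the structure of the underlying affect model.

      library(metafor)

      # Illustrative effect sizes and sampling variances, not the reviewed studies;
      # 'model' stands for the structure of the underlying affect model.
      set.seed(8)
      k     <- 63
      yi    <- rnorm(k, 0.77, 0.3)     # standardised mean differences
      vi    <- runif(k, 0.02, 0.15)    # sampling variances
      model <- factor(sample(c("dimensional", "discrete"), k, replace = TRUE))

      # Random-effects pooled estimate.
      rma(yi, vi, method = "REML")

      # Moderator analysis: does the pooled effect differ by affect-model structure?
      rma(yi, vi, mods = ~ model, method = "REML")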

  3. Lepton masses and mixings in orbifold models with three Higgs families

    NASA Astrophysics Data System (ADS)

    Escudero, N.; Muñoz, C.; Teixeira, A. M.

    2007-12-01

    We analyse the phenomenological viability of heterotic Z3 orbifolds with two Wilson lines, which naturally predict three supersymmetric families of matter and Higgs fields. Given that these models can accommodate realistic scenarios for the quark sector avoiding potentially dangerous flavour-changing neutral currents, we now address the leptonic sector, finding that viable orbifold configurations can in principle be obtained. In particular, it is possible to accommodate present data on charged lepton masses, while avoiding conflict with lepton flavour-violating decays. Concerning the generation of neutrino masses and mixings, we find that Z3 orbifolds offer several interesting possibilities.

  4. Elucidating the fate of a mixed toluene, DHM, methanol, and i-propanol plume during in situ bioremediation

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Prommer, H.

    2017-06-01

    Organic pollutants such as solvents or petroleum products are widespread contaminants in soil and groundwater systems. In-situ bioremediation is a commonly used remediation technology to clean up the subsurface and eliminate the risk of toxic substances reaching potential receptors in surface waters or drinking water wells. This study discusses the development of a subsurface model to analyse the performance of an actively operating field-scale enhanced bioremediation scheme. The study site was affected by a mixed toluene, dihydromyrcenol (DHM), methanol, and i-propanol plume. A high-resolution time series of data was used to constrain the model development and calibration. The analysis shows that the observed failure of the treatment system is linked to an inefficient oxygen injection pattern. Moreover, the model simulations also suggest that additional contaminant spillages occurred in 2012. These additional spillages and their associated oxygen demand resulted in a significant increase in contaminant fluxes that remained untreated. The study emphasises the important role that reactive transport modelling can play in data analysis and in enhancing remediation efficiency.

  5. The Effect of Coach Expectations on Female Athletes' Motivation to Play: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Buning, Megan Matthews

    2013-01-01

    This concurrent, embedded mixed methods study used predominantly quantitative analyses to examine coach expectations and behaviors on female athletes' intrinsic motivation to play softball. Qualitative methods in the form of structured, open-ended questions were used to enhance the data by examining athletes' perceptions of coaching…

  6. Exploring the Relationship between Fidelity of Implementation and Academic Achievement in a Third-Grade Gifted Curriculum: A Mixed-Methods Study

    ERIC Educational Resources Information Center

    Azano, Amy; Missett, Tracy C.; Callahan, Carolyn M.; Oh, Sarah; Brunner, Marguerite; Foster, Lisa H.; Moon, Tonya R.

    2011-01-01

    This study used sequential mixed-methods analyses to investigate the effectiveness of a research-based language arts curriculum for gifted third graders. Using analytic induction, researchers found that teachers' beliefs and expectations (time, sense of autonomy, expectations for students, professional expertise) influenced the degree to which…

  7. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    PubMed

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
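
    To make the data-generating step concrete, the sketch below simulates a one-compartment model with first-order input and nonlinear elimination, adding a stochastic term to the state equation via Euler-Maruyama. All parameter values are illustrative assumptions, and the estimation machinery (FOCE with extended Kalman filtering) is not reproduced.

      # One-compartment model with first-order input and Michaelis-Menten
      # elimination, with system noise on the state equation (Euler-Maruyama).
      # Parameter values are illustrative, not those of the paper.
      set.seed(9)
      ka <- 1.0; Vmax <- 5; Km <- 2; dose_rate <- 4
      sigma_sys <- 0.3    # system (SDE) noise: uncertainty in the dynamics
      sigma_obs <- 0.1    # measurement noise
      dt <- 0.01; times <- seq(0, 10, by = dt)

      x <- numeric(length(times)); x[1] <- 0
      for (i in 2:length(times)) {
        input <- dose_rate * exp(-ka * times[i - 1])   # first-order input
        elim  <- Vmax * x[i - 1] / (Km + x[i - 1])     # nonlinear elimination
        x[i]  <- x[i - 1] + (input - elim) * dt + sigma_sys * sqrt(dt) * rnorm(1)
      }

      # Sparse, noisy observations of the kind that would enter the
      # mixed-effects SDE estimation (not shown here).
      obs_idx <- seq(51, length(times), by = 50)
      y <- x[obs_idx] + rnorm(length(obs_idx), 0, sigma_obs)
      plot(times[obs_idx], y, xlab = "time", ylab = "concentration")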

  8. Theoretical and experimental investigation of turbulent mixing on ejector configuration and performance in a solar-driven organic-vapor ejector cycle chiller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucha, E.I.

    1984-01-01

    A general method was developed to calculate two-dimensional (axisymmetric) mixing of a compressible jet in a variable cross-sectional area mixing channel of the ejector. The analysis considers mixing of the primary and secondary fluids at constant pressure and incorporates finite difference approximations to the conservation equations. The flow model is based on mixing length approximations. A detailed study and modeling of the flow phenomenon determines the best (optimum) mixing channel geometry of the ejector. The detailed ejector performance characteristics are predicted by incorporating the flow model into a solar-powered ejector cycle cooling system computer model. Freon-11 is used as both the primary and secondary fluids. Performance evaluation of the cooling system is examined for its coefficient of performance (COP) under a variety of operating conditions. A study is also conducted on a modified ejector cycle in which a secondary pump is introduced at the exit of the evaporator. Results show a significant improvement in the overall performance over that of the conventional ejector cycle (without a secondary pump). Comparison between one- and two-dimensional analyses indicates that the two-dimensional ejector fluid flow analysis predicts a better overall system performance. This is true for both the conventional and modified ejector cycles.

  9. In-depth study of 16CygB using inversion techniques

    NASA Astrophysics Data System (ADS)

    Buldgen, G.; Salmon, S. J. A. J.; Reese, D. R.; Dupret, M. A.

    2016-12-01

    Context. The 16Cyg binary system hosts the solar-like Kepler targets with the most stringent observational constraints. Indeed, we benefit from very high quality oscillation spectra, as well as spectroscopic and interferometric observations. Moreover, this system is particularly interesting since both stars are very similar in mass but the A component is orbited by a red dwarf, whereas the B component is orbited by a Jovian planet and thus could have formed a more complex planetary system. In our previous study, we showed that seismic inversions of integrated quantities could be used to constrain microscopic diffusion in the A component. In this study, we analyse the B component in the light of a more regularised inversion. Aims: We wish to analyse independently the B component of the 16Cyg binary system using the inversion of an indicator dedicated to analyse core conditions, denoted tu. Using this independent determination, we wish to analyse any differences between both stars due to the potential influence of planetary formation on stellar structure and/or their respective evolution. Methods: First, we recall the observational constraints for 16CygB and the method we used to generate reference stellar models of this star. We then describe how we improved the inversion and how this approach could be used for future targets with a sufficient number of observed frequencies. The inversion results were then used to analyse the differences between the A and B components. Results: The inversion of the tu indicator for 16CygB shows a disagreement with models including microscopic diffusion and sharing the chemical composition previously derived for 16CygA. We show that small changes in chemical composition are insufficient to solve the problem but that extra mixing can account for the differences seen between both stars. We use a parametric approach to analyse the impact of extra mixing in the form of turbulent diffusion on the behaviour of the tu values. We conclude on the necessity of further investigations using models with a physically motivated implementation of extra mixing processes including additional constraints to further improve the accuracy with which the fundamental parameters of this system are determined.

  10. Ill-posedness in modeling mixed sediment river morphodynamics

    NASA Astrophysics Data System (ADS)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics concerning its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than what was found in previous analyses, not only comprising cases of bed degradation into a substrate finer than the active layer but also in aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment for which we show that ill-posedness occurs in a wider range of conditions than the active layer model.

  11. Effects of desiccation stress on adult female longevity in Aedes aegypti and Ae. albopictus (Diptera: Culicidae): results of a systematic review and pooled survival analysis.

    PubMed

    Schmidt, Chris A; Comeau, Genevieve; Monaghan, Andrew J; Williamson, Daniel J; Ernst, Kacey C

    2018-04-25

    Transmission dynamics of mosquito-borne viruses such as dengue, Zika and chikungunya are affected by the longevity of the adult female mosquito. Environmental conditions influence the survival of adult female Aedes mosquitoes, the primary vectors of these viruses. While the association of temperature with Aedes mortality has been relatively well-explored, the role of humidity is less established. The current study's goals were to compile knowledge of the influence of humidity on adult survival in the important vector species Aedes aegypti and Ae. albopictus, and to quantify this relationship while accounting for the modifying effect of temperature. We performed a systematic literature review to identify studies reporting experimental results informing the relationships among temperature, humidity and adult survival in Ae. aegypti and Ae. albopictus. Using a novel simulation approach to harmonize disparate survival data, we conducted pooled survival analyses via stratified and mixed effects Cox regression to estimate temperature-dependent associations between humidity and mortality risk for these species across a broad range of temperatures and vapor pressure deficits. After screening 1517 articles, 17 studies (one in semi-field and 16 in laboratory settings) met inclusion criteria and collectively reported results for 192 survival experiments. We review and synthesize relevant findings from these studies. Our stratified model estimated a strong temperature-dependent association of humidity with mortality in both species, though associations were not significant for Ae. albopictus in the mixed effects model. Lowest mortality risks were estimated around 27.5 °C and 21.5 °C for Ae. aegypti and Ae. albopictus, respectively, and mortality increased non-linearly with decreasing humidity. Aedes aegypti had a survival advantage relative to Ae. albopictus in the stratified model under most conditions, but species differences were not significant in the mixed effects model. Humidity is associated with mortality risk in adult female Ae. aegypti in controlled settings. Data are limited at low humidities, temperature extremes, and for Ae. albopictus, and further studies should be conducted to reduce model uncertainty in these contexts. Desiccation is likely an important factor in Aedes population dynamics and viral transmission in arid regions. Models of Aedes-borne virus transmission may be improved by more comprehensively representing humidity effects.
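
    A hedged sketch of the two pooled-survival strategies mentioned above: a Cox model stratified by study and a mixed-effects Cox model with a study-level random intercept, written with the survival and coxme packages on simulated placeholder data rather than the pooled mosquito survival records.

      library(survival)
      library(coxme)

      # Simulated placeholder survival data: 17 source studies, temperature in
      # degrees C, vapor pressure deficit in kPa; not the pooled mosquito data.
      set.seed(10)
      n <- 500
      d <- data.frame(
        study  = factor(sample(1:17, n, replace = TRUE)),
        temp   = runif(n, 15, 35),
        vpd    = runif(n, 0.5, 3.5),
        time   = rexp(n, 0.05),
        status = 1
      )

      # Stratified Cox model: a separate baseline hazard per study.
      m_strat <- coxph(Surv(time, status) ~ temp * vpd + strata(study), data = d)

      # Mixed-effects Cox model: study enters as a Gaussian random intercept (frailty).
      m_mixed <- coxme(Surv(time, status) ~ temp * vpd + (1 | study), data = d)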

  12. Semiparametric mixed-effects analysis of PK/PD models using differential equations.

    PubMed

    Wang, Yi; Eskridge, Kent M; Zhang, Shunpu

    2008-08-01

    Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
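
    The model form dx/dt = A(t)x + B(t) with a spline-based B(t) can be illustrated with a simple forward solve, as below; the spline coefficients and the elimination rate are hypothetical, and the penalized-spline estimation inside the nonlinear mixed-effects fit is not reproduced.

      library(deSolve)
      library(splines)

      # Forward solve of dx/dt = A(t) x + B(t) with B(t) built from a B-spline
      # basis; all coefficients below are assumed values for illustration.
      times <- seq(0, 10, by = 0.1)
      basis <- bs(times, df = 6)                    # B-spline basis for B(t)
      theta <- c(1.5, 0.8, 0.2, -0.3, 0.1, 0.0)     # hypothetical spline coefficients
      B_fun <- approxfun(times, as.vector(basis %*% theta), rule = 2)

      rhs <- function(t, x, parms) {
        A <- -parms["kel"]                          # here A(t) is a constant rate
        list(A * x + B_fun(t))
      }

      out <- ode(y = c(x = 5), times = times, func = rhs, parms = c(kel = 0.4))
      head(out)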

  13. Multiple component end-member mixing model of dilution: hydrochemical effects of construction water at Yucca Mountain, Nevada, USA

    NASA Astrophysics Data System (ADS)

    Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2008-12-01

    The standard dual-component and two-member linear mixing model is often used to quantify water mixing of different sources. However, it is no longer applicable whenever actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which therefore are diluted in the leachates. A multicomponent, two-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on the matrix fracture interaction. The methodology and formulations described here are applicable to many sorts of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.
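
    A minimal numerical illustration of a two-member, multicomponent mixing calculation that also estimates a dilution factor is sketched below. The ion list, end-member concentrations, and leachate values are invented for illustration, and the parameterization is only one plausible reading of such a model, not the formulation used in the study.

    ```python
    # Illustrative two-member, multicomponent mixing-with-dilution calculation:
    # the leachate concentration of each ion is modeled as d * (f*c_cw + (1-f)*c_pw),
    # where f is the construction-water fraction and d the dilution factor from
    # leaching. All concentrations below are made-up numbers for illustration.
    import numpy as np
    from scipy.optimize import least_squares

    ions = ["Br", "Cl", "SO4", "Na"]
    c_cw = np.array([120.0, 40.0, 15.0, 60.0])    # construction water end member, mg/L
    c_pw = np.array([0.5, 110.0, 90.0, 45.0])     # in situ pore water end member, mg/L
    c_obs = np.array([7.2, 11.8, 9.3, 6.1])       # measured in the leachate, mg/L

    def residuals(params):
        f, d = params                             # mixing fraction and dilution factor
        model = d * (f * c_cw + (1.0 - f) * c_pw)
        return model - c_obs

    fit = least_squares(residuals, x0=[0.5, 0.1], bounds=([0.0, 0.0], [1.0, 1.0]))
    f_hat, d_hat = fit.x
    print(f"construction-water fraction ~ {f_hat:.2f}, dilution factor ~ {d_hat:.3f}")
    ```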

  14. Mixing and non-equilibrium chemical reaction in a compressible mixing layer. M.S. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Steinberger, Craig J.

    1991-01-01

    The effects of compressibility, chemical reaction exothermicity, and non-equilibrium chemical modeling in a reacting plane mixing layer were investigated by means of two dimensional direct numerical simulations. The chemical reaction was irreversible and second order of the type A + B yields Products + Heat. The general governing fluid equations of a compressible reacting flow field were solved by means of high order finite difference methods. Physical effects were then determined by examining the response of the mixing layer to variation of the relevant non-dimensionalized parameters. The simulations show that increased compressibility generally results in a suppressed mixing, and consequently a reduced chemical reaction conversion rate. Reaction heat release was found to enhance mixing at the initial stages of the layer growth, but had a stabilizing effect at later times. The increased stability manifested itself in the suppression or delay of the formation of large coherent structures within the flow. Calculations were performed for a constant rate chemical kinetics model and an Arrhenius type kinetic prototype. The choice of the model was shown to have an effect on the development of the flow. The Arrhenius model caused a greater temperature increase due to reaction than the constant kinetic model. This had the same effect as increasing the exothermicity of the reaction. Localized flame quenching was also observed when the Zeldovich number was relatively large.

  15. Effect of Blockage and Location on Mixing of Swirling Coaxial Jets in a Non-expanding Circular Confinement

    NASA Astrophysics Data System (ADS)

    Patel, V. K.; Singh, S. N.; Seshadri, V.

    2013-06-01

    A study is conducted to evolve an effective design concept to improve mixing in a combustor chamber in order to reduce the amount of intake air. The geometry used is that of a gas turbine combustor model. For simplicity, both jets have been treated as air jets, and the effects of heat release and chemical reaction have not been modeled. Various contraction shapes and blockages have been investigated by placing them downstream at different locations with respect to the inlet to obtain better mixing. A commercial CFD code, Fluent 6.3, which is based on the finite volume method, has been used to solve the flow in the combustor model. Validation is done against experimental data available in the literature using the standard k-ω turbulence model. The study has shown that a contraction and blockage at the optimum location enhance the mixing process. Further, the effect of swirl in the jets has also been investigated.

  16. Progress Report on SAM Reduced-Order Model Development for Thermal Stratification and Mixing during Reactor Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.

    This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first one is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second approach is based on three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.

  17. On the validity of effective formulations for transport through heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    de Dreuzy, J.-R.; Carrera, J.

    2015-11-01

    Geological heterogeneity enhances spreading of solutes, and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through Heterogeneous Porous Media (HPM), it should honor mean advection, mixing and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the Multi-Rate Mass Transfer (MRMT) to reproduce mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not reproduce the local concentration structures, it induces smaller non-dispersive mixing than HPM. However, long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, non-dispersive mixing cannot.

  18. Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.

    PubMed

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel

    2016-02-20

    Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too low Hb value leads to a deferral for donation. Because of the recovery process after each donation as well as state dependence and unobserved heterogeneity, longitudinal data of Hb values of blood donors provide unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects model; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data of new entrant donors from the Donor InSight study. Informative priors were used for parameters of the recovery process that were not identified using the observed data, based on results from the clinical literature. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.
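
    As a rough, non-Bayesian illustration of two ingredients of these models, the sketch below combines an assumed exponential recovery covariate and a lagged-Hb transition term in an ordinary linear mixed-effects model with a donor-level random intercept (statsmodels). The simulated data, recovery shape, and coefficients are placeholders and do not reproduce the paper's latent-class specification.

    ```python
    # Rough illustration of two ingredients of the donor-Hb models described above:
    # a flexible recovery term (here a simple exponential in days since last donation)
    # and a transition term (the previous visit's Hb), inside an ordinary linear
    # mixed-effects model with a random intercept per donor. The simulated data and
    # the exponential recovery shape are placeholders, not the paper's specification.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for donor in range(200):
        base = rng.normal(8.8, 0.5)                # donor-specific mean Hb (mmol/L)
        hb_prev = base
        days_since = 365.0
        for visit in range(6):
            recovery = 1.0 - np.exp(-days_since / 40.0)   # assumed recovery curve
            hb = 0.6 * base + 0.3 * hb_prev - 0.5 * (1 - recovery) + rng.normal(0, 0.25)
            rows.append({"donor": donor, "hb": hb, "hb_prev": hb_prev,
                         "recovery": recovery})
            hb_prev = hb
            days_since = rng.uniform(60, 200)      # interval to the next donation

    df = pd.DataFrame(rows)
    model = smf.mixedlm("hb ~ hb_prev + recovery", df, groups=df["donor"])
    print(model.fit().summary())
    ```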

  19. Modeling of molecular diffusion and thermal conduction with multi-particle interaction in compressible turbulence

    NASA Astrophysics Data System (ADS)

    Tai, Y.; Watanabe, T.; Nagata, K.

    2018-03-01

    A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated for passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale because the mixing time scale at small scales is easily affected by different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in the modeling of the thermal conduction when the modeling of the dissipation rate of temperature fluctuations is difficult.

  20. Is mixed-handedness a marker of treatment response in posttraumatic stress disorder?: a pilot study.

    PubMed

    Forbes, David; Carty, Jessica; Elliott, Peter; Creamer, Mark; McHugh, Tony; Hopwood, Malcolm; Chemtob, Claude M

    2006-12-01

    Recent studies suggest that mixed-handedness is a risk factor for posttraumatic stress disorder (PTSD). This study examined whether mixed-handed veterans with combat-related PTSD respond more poorly to psychosocial treatment. Consistency of hand preference was assessed in 150 Vietnam combat veterans with PTSD using the Edinburgh Handedness Inventory (R. C. Oldfield, 1971). Growth modeling analyses using Mplus (L. K. Muthén & B. Muthén, 2002) identified that PTSD veterans with mixed-handedness reported significantly less treatment improvement on the PTSD Checklist (F. W. Weathers, B. T. Litz, D. S. Herman, J. A. Huska, & T. M. Keane, 1993) than did veterans with consistent handedness. These data suggest that mixed-handedness is associated with poorer PTSD treatment response. Several possible explanations for this finding are discussed.

  1. Spatio-temporal models to determine association between Campylobacter cases and environment

    PubMed Central

    Sanderson, Roy A; Maas, James A; Blain, Alasdair P; Gorton, Russell; Ward, Jessica; O’Brien, Sarah J; Hunter, Paul R; Rushton, Stephen P

    2018-01-01

    Background: Campylobacteriosis is a major cause of gastroenteritis in the UK, and although 70% of cases are associated with food sources, the remainder are probably associated with wider environmental exposure. Methods: In order to investigate wider environmental transmission, we conducted a spatio-temporal analysis of the association of human cases of Campylobacter in the Tyne catchment with weather, climate, hydrology and land use. A hydrological model was used to predict surface-water flow in the Tyne catchment over 5 years. We analysed associations between population-adjusted Campylobacter case rate and environmental factors hypothesized to be important in disease using a two-stage modelling framework. First, we investigated associations between temporal variation in case rate and surface-water flow, temperature, evapotranspiration and rainfall, using linear mixed-effects models. Second, we used the random effects from the first model to quantify how spatial variation in static landscape features of soil and land use influenced the subcatchment-level differences in the associations of case rate with the temporal variables. Results: Population-adjusted Campylobacter case rates were associated with periods of high predicted surface-water flow and with above-average temperatures. Subcatchments with cattle on stagnogley soils, and to a lesser extent sheep plus cattle grazing, had higher Campylobacter case rates. Conclusions: Areas of stagnogley soils with mixed livestock grazing may be more vulnerable to both Campylobacter spread and exposure during periods of high rainfall, with a resultant increased risk of human cases of the disease. PMID:29069406
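
    A minimal sketch of the first-stage structure described above is shown below: case rates regressed on temporal drivers with subcatchment-level random intercepts and slopes, from which the fitted random effects could then be related to static soil and land-use covariates in a second stage. The simulated data and variable names are hypothetical.

    ```python
    # Minimal sketch of the first-stage structure described above: weekly,
    # population-adjusted case rates regressed on temporal drivers (surface-water
    # flow, temperature) with subcatchment-level random intercepts and slopes.
    # The simulated data and variable names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    rows = []
    for catchment in range(20):
        b_flow = 0.3 + rng.normal(0, 0.1)          # catchment-specific flow effect
        for week in range(104):
            flow = rng.gamma(2.0, 1.0)
            temp = 8 + 8 * np.sin(2 * np.pi * week / 52) + rng.normal(0, 1)
            rate = 1.0 + b_flow * flow + 0.05 * temp + rng.normal(0, 0.5)
            rows.append({"catchment": catchment, "rate": rate,
                         "flow": flow, "temp": temp})

    df = pd.DataFrame(rows)
    # Random intercept and random flow slope per subcatchment; the fitted random
    # effects could then be related to static soil/land-use covariates in stage two.
    model = smf.mixedlm("rate ~ flow + temp", df, groups=df["catchment"],
                        re_formula="~flow")
    print(model.fit().summary())
    ```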

  2. K →π matrix elements of the chromomagnetic operator on the lattice

    NASA Astrophysics Data System (ADS)

    Constantinou, M.; Costa, M.; Frezzotti, R.; Lubicz, V.; Martinelli, G.; Meloni, D.; Panagopoulos, H.; Simula, S.; ETM Collaboration

    2018-04-01

    We present the results of the first lattice QCD calculation of the K→π matrix elements of the chromomagnetic operator O_CM = g s̄ σ_μν G^μν d, which appears in the effective Hamiltonian describing ΔS = 1 transitions in and beyond the standard model. Having dimension five, the chromomagnetic operator is characterized by a rich pattern of mixing with operators of equal and lower dimensionality. The multiplicative renormalization factor as well as the mixing coefficients with the operators of equal dimension have been computed at one loop in perturbation theory. The power-divergent coefficients controlling the mixing with operators of lower dimension have been determined nonperturbatively, by imposing suitable subtraction conditions. The numerical simulations have been carried out using the gauge field configurations produced by the European Twisted Mass Collaboration with N_f = 2+1+1 dynamical quarks at three values of the lattice spacing. Our result for the B parameter of the chromomagnetic operator at the physical pion and kaon point is B_CMO^Kπ = 0.273(69), while in the SU(3) chiral limit we obtain B_CMO = 0.076(23). Our findings are significantly smaller than the model-dependent estimate B_CMO ~ 1-4, currently used in phenomenological analyses, and improve the uncertainty on this important phenomenological quantity.

  3. Regional health workforce monitoring as governance innovation: a German model to coordinate sectoral demand, skill mix and mobility.

    PubMed

    Kuhlmann, E; Lauxen, O; Larsen, C

    2016-11-28

    As health workforce policy is gaining momentum, data sources and monitoring systems have significantly improved in the European Union and internationally. Yet data remain poorly connected to policy-making and implementation and often do not adequately support integrated approaches. This brings the importance of governance and the need for innovation into play. The present case study introduces a regional health workforce monitor in the German Federal State of Rhineland-Palatinate and seeks to explore the capacity of monitoring to innovate health workforce governance. The monitor applies an approach from the European Network on Regional Labour Market Monitoring to the health workforce. The novel aspect of this model is an integrated, procedural approach that promotes a 'learning system' of governance based on three interconnected pillars: mixed methods and bottom-up data collection, strong stakeholder involvement with complex communication tools and shared decision- and policy-making. Selected empirical examples illustrate the approach and the tools focusing on two aspects: the connection between sectoral, occupational and mobility data to analyse skill/qualification mixes and the supply-demand matches and the connection between monitoring and stakeholder-driven policy. Regional health workforce monitoring can promote effective governance in high-income countries like Germany with overall high density of health workers but maldistribution of staff and skills. The regional stakeholder networks are cost-effective and easily accessible and might therefore be appealing also to low- and middle-income countries.

  4. Random effects coefficient of determination for mixed and meta-analysis models

    PubMed Central

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2011-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects—the model can be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
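
    The paper derives explicit formulas for R_r^2 in its special cases; without restating them, the sketch below only shows where the ingredients come from in software for a random-intercept model, forming the familiar variance-partition (intraclass correlation) ratio, which, like R_r^2, is near 0 when random effects explain little and approaches 1 when they dominate. This is an illustration of the idea, not Demidenko et al.'s exact formula.

    ```python
    # Extract the between-group and residual variance components from a fitted
    # random-intercept model and form a variance-partition ratio. This illustrates
    # the idea behind a random-effects coefficient of determination; it is not
    # claimed to be the paper's exact R_r^2 formula.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    groups = np.repeat(np.arange(40), 10)
    u = rng.normal(0, 1.0, 40)[groups]             # simulated random intercepts
    x = rng.normal(size=groups.size)
    y = 2.0 + 0.5 * x + u + rng.normal(0, 1.0, groups.size)
    df = pd.DataFrame({"y": y, "x": x, "g": groups})

    fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
    var_re = float(fit.cov_re.iloc[0, 0])          # estimated random-intercept variance
    var_resid = fit.scale                          # estimated residual variance
    ratio = var_re / (var_re + var_resid)
    print(f"share of conditional variance due to the random intercept ~ {ratio:.2f}")
    ```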

  5. First learned words are not forgotten: Age-of-acquisition effects in the tip-of-the-tongue experience.

    PubMed

    Navarrete, Eduardo; Pastore, Massimiliano; Valentini, Rosa; Peressotti, Francesca

    2015-10-01

    A large body of evidence indicates that the age at which a word is acquired predicts the time required to retrieve that word during speech production. Here we explored whether age of acquisition also predicts the experience of being unable to produce a known word at a particular moment. Italian speakers named a sequence of pictures in Experiment 1 or retrieved a word as a response to a definition in Experiment 2. In both experiments, the participants were instructed to indicate when they were in a tip-of-the-tongue (TOT) state. Generalized mixed-effects models performed on the TOT and correct responses revealed that word frequency and age of acquisition predicted the TOT states. Specifically, low-frequency words elicited more TOTs than did high-frequency words, replicating previous findings. In addition, late-acquired words elicited more TOTs than did early-acquired words. Further analyses revealed that the age of acquisition was a better predictor of TOTs than was word frequency. The effects of age of acquisition were similar with subjective and objective measures of age of acquisition, and persisted when several psycholinguistic variables were taken into consideration as predictors in the generalized mixed-effects models. We explained these results in terms of weaker semantic-to-phonological connections in the speech production system for late-acquired words.
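
    A sketch of a crossed random-effects logistic model in the spirit of the analyses described above is given below: TOT (vs. correct retrieval) as a function of word frequency and age of acquisition, with variance components for participants and items, fitted here with statsmodels' variational Bayes mixed GLM as one possible stand-in for the software actually used. All data and effect sizes are simulated placeholders.

    ```python
    # Sketch of a crossed random-effects logistic model of the kind described above:
    # TOT (1) vs. correct retrieval (0) as a function of word frequency and age of
    # acquisition, with variance components for participants and items. Data and
    # effect sizes are simulated placeholders; this uses statsmodels' variational
    # Bayes mixed GLM as one possible stand-in for the original analysis software.
    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    rng = np.random.default_rng(4)
    n_subj, n_item = 40, 60
    subj = np.repeat(np.arange(n_subj), n_item)
    item = np.tile(np.arange(n_item), n_subj)
    aoa = rng.normal(size=n_item)[item]            # standardized age of acquisition
    freq = rng.normal(size=n_item)[item]           # standardized log word frequency
    eta = (-2.0 + 0.6 * aoa - 0.4 * freq
           + rng.normal(0, 0.5, n_subj)[subj] + rng.normal(0, 0.5, n_item)[item])
    tot = (rng.random(eta.size) < 1 / (1 + np.exp(-eta))).astype(int)

    df = pd.DataFrame({"tot": tot, "aoa": aoa, "freq": freq,
                       "subject": subj, "item": item})
    model = BinomialBayesMixedGLM.from_formula(
        "tot ~ aoa + freq",
        {"subject": "0 + C(subject)", "item": "0 + C(item)"},
        df)
    print(model.fit_vb().summary())
    ```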

  6. The influence of the rs6295 gene polymorphism on serotonin-1A receptor distribution investigated with PET in patients with major depression applying machine learning.

    PubMed

    Kautzky, A; James, G M; Philippe, C; Baldinger-Melich, P; Kraus, C; Kranz, G S; Vanicek, T; Gryglewski, G; Wadsak, W; Mitterhauser, M; Rujescu, D; Kasper, S; Lanzenberger, R

    2017-06-13

    Major depressive disorder (MDD) is the most common neuropsychiatric disease and, despite extensive research, its genetic substrate is still not sufficiently understood. The common polymorphism rs6295 of the serotonin-1A receptor gene (HTR1A) affects the transcriptional regulation of the 5-HT1A receptor and has been closely linked to MDD. Here, we used positron emission tomography (PET), exploiting advances in data mining and statistics by using machine learning, in 62 healthy subjects and 19 patients with MDD who were scanned with PET using the radioligand [carbonyl-11C]WAY-100635. All subjects were genotyped for rs6295 and genotype was grouped as GG vs. C allele carriers. A mixed model was applied in a region-of-interest (ROI) based approach. ROI binding potential (BPND) was divided by dorsal raphe BPND as a specific measure to highlight rs6295 effects (BPDiv). The mixed model produced an interaction effect of ROI and genotype in the patients' group but no effects in healthy controls. Differences in BPDiv were demonstrated in seven ROIs: parahippocampus, hippocampus, fusiform gyrus, gyrus rectus, supplementary motor area, inferior frontal occipital gyrus and lingual gyrus. For classification of genotype, 'RandomForest' and Support Vector Machines were used; however, no model with sufficient predictive capability could be computed. Our results are in line with preclinical data, mouse model knockout studies as well as previous clinical analyses, demonstrating the two-pronged effect of the G allele on 5-HT1A BPND for, we believe, the first time. Future endeavors should address epigenetic effects and allosteric heteroreceptor complexes. Replication in larger samples of MDD patients is necessary to substantiate our findings.

  7. The influence of the rs6295 gene polymorphism on serotonin-1A receptor distribution investigated with PET in patients with major depression applying machine learning

    PubMed Central

    Kautzky, A; James, G M; Philippe, C; Baldinger-Melich, P; Kraus, C; Kranz, G S; Vanicek, T; Gryglewski, G; Wadsak, W; Mitterhauser, M; Rujescu, D; Kasper, S; Lanzenberger, R

    2017-01-01

    Major depressive disorder (MDD) is the most common neuropsychiatric disease and, despite extensive research, its genetic substrate is still not sufficiently understood. The common polymorphism rs6295 of the serotonin-1A receptor gene (HTR1A) affects the transcriptional regulation of the 5-HT1A receptor and has been closely linked to MDD. Here, we used positron emission tomography (PET), exploiting advances in data mining and statistics by using machine learning, in 62 healthy subjects and 19 patients with MDD who were scanned with PET using the radioligand [carbonyl-11C]WAY-100635. All subjects were genotyped for rs6295 and genotype was grouped as GG vs. C allele carriers. A mixed model was applied in a region-of-interest (ROI) based approach. ROI binding potential (BPND) was divided by dorsal raphe BPND as a specific measure to highlight rs6295 effects (BPDiv). The mixed model produced an interaction effect of ROI and genotype in the patients' group but no effects in healthy controls. Differences in BPDiv were demonstrated in seven ROIs: parahippocampus, hippocampus, fusiform gyrus, gyrus rectus, supplementary motor area, inferior frontal occipital gyrus and lingual gyrus. For classification of genotype, 'RandomForest' and Support Vector Machines were used; however, no model with sufficient predictive capability could be computed. Our results are in line with preclinical data, mouse model knockout studies as well as previous clinical analyses, demonstrating the two-pronged effect of the G allele on 5-HT1A BPND for, we believe, the first time. Future endeavors should address epigenetic effects and allosteric heteroreceptor complexes. Replication in larger samples of MDD patients is necessary to substantiate our findings. PMID:28608854

  8. The Mixed Effects Trend Vector Model

    ERIC Educational Resources Information Center

    de Rooij, Mark; Schouteden, Martijn

    2012-01-01

    Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…

  9. Cost-Effectiveness of Mirabegron Compared with Antimuscarinic Agents for the Treatment of Adults with Overactive Bladder in the United Kingdom.

    PubMed

    Nazir, Jameel; Maman, Khaled; Neine, Mohamed-Elmoctar; Briquet, Benjamin; Odeyemi, Isaac A O; Hakimi, Zalmai; Garnham, Andy; Aballéa, Samuel

    2015-09-01

    Mirabegron, a first-in-class selective oral β3-adrenoceptor agonist, has similar efficacy to most antimuscarinic agents and a lower incidence of dry mouth in patients with overactive bladder (OAB). To evaluate the cost-effectiveness of mirabegron 50 mg compared with oral antimuscarinic agents in adults with OAB from a UK National Health Service perspective. A Markov model including health states for symptom severity, treatment status, and adverse events was developed. Cycle length was 1 month, and the time horizon was 5 years. Antimuscarinic comparators were tolterodine extended release, solifenacin, fesoterodine, oxybutynin extended release and immediate release (IR), darifenacin, and trospium chloride modified release. Transition probabilities for symptom severity levels and adverse events were estimated from a mirabegron trial and a mixed treatment comparison. Estimates for other inputs were obtained from published literature or expert opinion. Quality-adjusted life-years (QALYs) and total health care costs, including costs of drug acquisition, physician visits, incontinence pad use, and botox injections, were modeled. Deterministic and probabilistic sensitivity analyses were performed. Base-case incremental cost-effectiveness ratios ranged from £367 (vs. solifenacin 10 mg) to £15,593 (vs. oxybutynin IR 10 mg) per QALY gained. Probabilistic sensitivity analyses showed that at a willingness-to-pay threshold of £20,000/QALY gained, the probability of mirabegron 50 mg being cost-effective ranged from 70.2% versus oxybutynin IR 10 mg to 97.8% versus darifenacin 15 mg. A limitation of our analysis is the uncertainty due to the lack of direct comparisons of mirabegron with other agents; a mixed treatment comparison using rigorous methodology provided the data for the analysis, but the studies involved showed heterogeneity. Mirabegron 50 mg appears to be cost-effective compared with standard oral antimuscarinic agents for the treatment of adults with OAB from a UK National Health Service perspective. Copyright © 2015. Published by Elsevier Inc.
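
    The sketch below is a toy version of the Markov cohort calculation underlying such cost-utility models: monthly cycles, a handful of health states with per-state costs and utilities, discounting, and an ICER for a new treatment against a comparator. Every transition probability, cost, and utility in it is invented for illustration and none corresponds to the mirabegron model's inputs.

    ```python
    # Toy Markov cohort calculation of the kind used in such cost-utility models:
    # monthly cycles over 5 years, three symptom-severity states plus "discontinued",
    # per-state costs and utilities, and an ICER for a new treatment vs. a comparator.
    # All transition probabilities, costs, and utilities are invented for illustration.
    import numpy as np

    states = ["mild", "moderate", "severe", "off_treatment"]
    cycles = 60                                      # 5 years of monthly cycles
    disc = (1 + 0.035) ** (-np.arange(cycles) / 12)  # 3.5%/year discounting

    def run(trans, cost_per_cycle, utility, drug_cost):
        pop = np.array([1.0, 0.0, 0.0, 0.0])         # everyone starts in "mild"
        total_cost, total_qaly = 0.0, 0.0
        for t in range(cycles):
            on_treatment = pop[:3].sum()
            total_cost += disc[t] * (pop @ cost_per_cycle + on_treatment * drug_cost)
            total_qaly += disc[t] * (pop @ utility) / 12.0
            pop = pop @ trans
        return total_cost, total_qaly

    cost = np.array([40.0, 90.0, 220.0, 60.0])       # monthly non-drug costs (GBP)
    util = np.array([0.85, 0.75, 0.60, 0.70])        # annual state utilities

    trans_new = np.array([[0.92, 0.05, 0.01, 0.02],
                          [0.10, 0.82, 0.05, 0.03],
                          [0.02, 0.10, 0.84, 0.04],
                          [0.00, 0.00, 0.00, 1.00]])
    trans_old = np.array([[0.88, 0.07, 0.02, 0.03],
                          [0.07, 0.83, 0.06, 0.04],
                          [0.01, 0.09, 0.85, 0.05],
                          [0.00, 0.00, 0.00, 1.00]])

    c_new, q_new = run(trans_new, cost, util, drug_cost=35.0)
    c_old, q_old = run(trans_old, cost, util, drug_cost=25.0)
    icer = (c_new - c_old) / (q_new - q_old)
    print(f"incremental cost {c_new - c_old:.0f}, QALY gain {q_new - q_old:.3f}, "
          f"ICER {icer:.0f} per QALY")
    ```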

  10. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
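
    A rough Euler-Maruyama simulation in the spirit of a mixed-effects SDE height model is sketched below, with a tree-level random effect on the asymptote of a Gompertz-type drift. The drift form, parameter values, and the omission of the copula step and the two-step estimation are simplifications for illustration only.

    ```python
    # Illustrative Euler-Maruyama simulation of a simple growth SDE with a
    # tree-level random effect in the asymptote, loosely in the spirit of
    # mixed-effects SDE height models. The Gompertz-type drift and parameter
    # values are placeholders, not the paper's fitted model.
    import numpy as np

    rng = np.random.default_rng(5)
    dt, years, n_trees = 0.1, 60, 5
    steps = int(years / dt)

    alpha_mean, alpha_sd = 28.0, 3.0               # asymptotic height (m), random by tree
    beta, sigma = 0.07, 0.15                       # growth rate and diffusion

    heights = np.full(n_trees, 1.3)                # start at breast height
    alpha = rng.normal(alpha_mean, alpha_sd, n_trees)
    path = np.empty((steps, n_trees))
    for t in range(steps):
        drift = beta * heights * np.log(alpha / heights)     # Gompertz-type drift
        heights = (heights + drift * dt
                   + sigma * heights * np.sqrt(dt) * rng.normal(size=n_trees))
        heights = np.clip(heights, 0.1, None)
        path[t] = heights

    print("simulated heights at year 60:", np.round(path[-1], 1))
    ```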

  11. Large eddy simulation and direct numerical simulation of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Adumitroaie, V.; Frankel, S. H.; Madnia, C. K.; Givi, P.

    1993-01-01

    The objective of this research is to make use of Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first phase of this research, conducted within the past three years, have been directed at several issues pertaining to the intricate physics of turbulent reacting flows. In our previous 5 semi-annual reports submitted to NASA LaRC, as well as several technical papers in archival journals, the results of our investigations have been fully described. In this progress report, which is different in format from our previous documents, we focus only on the issue of LES. The reason for doing so is that LES is the primary issue of interest to our Technical Monitor and that our other findings were needed to support the activities conducted under this prime issue. The outcomes of our related investigations, nevertheless, are included in the appendices accompanying this report. The relevance of the materials in these appendices is, therefore, discussed only briefly within the body of the report. Here, results are presented of a priori and a posteriori analyses for validity assessments of assumed Probability Density Function (PDF) methods as potential subgrid scale (SGS) closures for LES of turbulent reacting flows. Simple non-premixed reacting systems involving an isothermal reaction of the type A + B yields Products under both chemical equilibrium and non-equilibrium conditions are considered. A priori analyses are conducted of a homogeneous box flow, and a spatially developing planar mixing layer, to investigate the performance of the Pearson family of PDFs as SGS models. A posteriori analyses are conducted of the mixing layer using a hybrid one-equation Smagorinsky/PDF SGS closure. The Smagorinsky closure augmented by the solution of the subgrid turbulent kinetic energy (TKE) equation is employed to account for hydrodynamic fluctuations, and the PDF is employed for modeling the effects of scalar fluctuations. The implementation of the model requires knowledge of the local values of the first two SGS moments. These are provided by additional modeled transport equations. In both a priori and a posteriori analyses, the predicted results are appraised by comparison with subgrid averaged results generated by DNS. Based on these results, the paths to be followed in future investigations are identified.

  12. Pattern or process? Evaluating the peninsula effect as a determinant of species richness in coastal dune forests

    PubMed Central

    Olivier, Pieter I.; van Aarde, Rudi J.

    2017-01-01

    The peninsula effect predicts that the number of species should decline from the base of a peninsula to the tip. However, evidence for the peninsula effect is ambiguous, as different analytical methods, study taxa, and variations in local habitat or regional climatic conditions influence conclusions on its presence. We address this uncertainty by using two analytical methods to investigate the peninsula effect in three taxa that occupy different trophic levels: trees, millipedes, and birds. We surveyed 81 tree quadrants, 102 millipede transects, and 152 bird points within 150 km of coastal dune forest that resembles a habitat peninsula along the northeast coast of South Africa. We then used spatial (trend surface analyses) and non-spatial regressions (generalized linear mixed models) to test for the presence of the peninsula effect in each of the three taxa. We also used linear mixed models to test if climate (temperature and precipitation) and/or local habitat conditions (water availability associated with topography and landscape structural variables) could explain gradients in species richness. Non-spatial models suggest that the peninsula effect was present in all three taxa. However, spatial models indicated that only bird species richness declined from the peninsula base to the peninsula tip. Millipede species richness increased near the centre of the peninsula, while tree species richness increased near the tip. Local habitat conditions explained species richness patterns of birds and trees, but not of millipedes, regardless of model type. Our study highlights the idiosyncrasies associated with the peninsula effect—conclusions on the presence of the peninsula effect depend on the analytical methods used and the taxon studied. The peninsula effect might therefore be better suited to describing a species richness pattern in which the number of species declines from a broader habitat base to a narrow tip, rather than a process that drives species richness. PMID:28376096

  13. Individualising Chronic Care Management by Analysing Patients' Needs - A Mixed Method Approach.

    PubMed

    Timpel, P; Lang, C; Wens, J; Contel, J C; Gilis-Januszewska, A; Kemple, K; Schwarz, P E

    2017-11-13

    Modern health systems are increasingly faced with the challenge of providing effective, affordable and accessible health care for people with chronic conditions. As evidence on the specific unmet needs and their impact on health outcomes is limited, practical research is needed to tailor chronic care to the individual needs of patients with diabetes. Qualitative approaches to describing professional and informal caregiving will support understanding of the complexity of chronic care. Results are intended to provide practical recommendations to be used for the systematic implementation of sustainable chronic care models. A mixed method study was conducted. A standardised survey (n = 92) of experts in chronic care, using mail responses to open-ended questions, was conducted to analyse existing chronic care programs, focusing on effective, problematic and missing components. An expert workshop (n = 22) of professionals and scientists of the European-funded research project MANAGE CARE was used to define a limited number of unmet needs and priorities of elderly patients with type 2 diabetes mellitus and comorbidities. This list was validated and ranked using a multilingual online survey (n = 650). Participants of the online survey included patients, health care professionals and other stakeholders from 56 countries. The survey indicated that current care models need to be improved in terms of financial support, case management and the consideration of social care. The expert workshop identified 150 patient needs, which were summarised in 13 needs dimensions. The online survey of these pre-defined dimensions revealed that financial issues, education of both patients and professionals, availability of services as well as health promotion are the most important unmet needs for both patients and professionals. The study uncovered competing demands which are not limited to medical conditions. The findings emphasise that future care models need to focus more strongly on individual patient needs and promote their active involvement in co-design and implementation. Future research is needed to develop new chronic care models providing evidence-based and practical implications for the regional care setting.

  14. A brief measure of attitudes toward mixed methods research in psychology

    PubMed Central

    Roberts, Lynne D.; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and a way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items, developed from previous qualitative research on attitudes toward mixed methods research, along with validation measures, was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors (‘Limited Exposure,’ ‘(in)Compatibility,’ ‘Validity,’ and ‘Tokenistic Qualitative Component’) each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs. PMID:25429281

  15. Integrating Stomach Content and Stable Isotope Analyses to Quantify the Diets of Pygoscelid Penguins

    PubMed Central

    Polito, Michael J.; Trivelpiece, Wayne Z.; Karnovsky, Nina J.; Ng, Elizabeth; Patterson, William P.; Emslie, Steven D.

    2011-01-01

    Stomach content analysis (SCA) and, more recently, stable isotope analysis (SIA) integrated with isotopic mixing models have become common methods for dietary studies and provide insight into the foraging ecology of seabirds. However, both methods have drawbacks and biases that may result in difficulties in quantifying inter-annual and species-specific differences in diets. We used these two methods to simultaneously quantify the chick-rearing diet of Chinstrap (Pygoscelis antarctica) and Gentoo (P. papua) penguins and highlight methods of integrating SCA data to increase the accuracy of diet composition estimates using SIA. SCA biomass estimates were highly variable and underestimated the importance of soft-bodied prey such as fish. Two-source isotopic mixing model predictions were less variable and identified inter-annual and species-specific differences in the relative amounts of fish and krill in penguin diets not readily apparent using SCA. In contrast, multi-source isotopic mixing models had difficulty estimating the dietary contribution of fish species occupying similar trophic levels without refinement using SCA-derived otolith data. Overall, our ability to track inter-annual and species-specific differences in penguin diets using SIA was enhanced by integrating SCA data into isotopic mixing models in three ways: 1) selecting appropriate prey sources, 2) weighting combinations of isotopically similar prey in two-source mixing models and 3) refining predicted contributions of isotopically similar prey in multi-source models. PMID:22053199
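
    The classic two-source, single-isotope mixing calculation that underlies such diet estimates is sketched below; the δ13C values and the trophic discrimination factor are illustrative assumptions, not the study's measurements, and multi-source models require additional isotopes or constraints.

    ```python
    # Classic two-source, single-isotope mixing calculation of the kind underlying
    # the diet estimates discussed above: the proportion of fish vs. krill in the
    # consumer's diet from δ13C values after correcting for trophic discrimination.
    # The numbers are illustrative, not the study's measurements.
    def two_source_mixing(delta_consumer, delta_src1, delta_src2, tdf):
        """Return the fraction of source 1 in the diet (0-1), with a trophic
        discrimination factor (tdf) subtracted from the consumer value."""
        adj = delta_consumer - tdf
        f1 = (adj - delta_src2) / (delta_src1 - delta_src2)
        return min(max(f1, 0.0), 1.0)

    f_fish = two_source_mixing(delta_consumer=-24.1,   # penguin tissue δ13C (‰)
                               delta_src1=-23.0,       # fish end member
                               delta_src2=-26.5,       # krill end member
                               tdf=0.8)                # assumed discrimination factor
    print(f"estimated fish fraction ~ {f_fish:.2f}, krill ~ {1 - f_fish:.2f}")
    ```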

  16. Effects of mixing states on the multiple-scattering properties of soot aerosols.

    PubMed

    Cheng, Tianhai; Wu, Yu; Gu, Xingfa; Chen, Hao

    2015-04-20

    The radiative properties of soot aerosols are highly sensitive to the mixing states of black carbon particles and other aerosol components. Light absorption properties are enhanced by the mixing state of soot aerosols. However, the effects of mixing states on the scattering properties of soot aerosols are still not completely quantified, especially for multiple-scattering properties. This study focuses on the effects of the mixing state on the multiple scattering of soot aerosols using a vector radiative transfer model. Two types of soot aerosols with different mixing states, externally mixed and internally mixed soot aerosols, are studied. Upward radiance/polarization and hemispheric flux are studied with variable soot aerosol loadings for clear and haze scenarios. Our study showed dramatic changes in upward radiance/polarization due to the effects of the mixing state on the multiple scattering of soot aerosols. The relative difference in upward radiance due to the different mixing states can reach 16%, whereas the relative difference in upward polarization can reach 200%. The effects of the mixing state on the multiple-scattering properties of soot aerosols increase with increasing soot aerosol loading. The effects of the soot aerosol mixing state on upwelling hemispheric flux are much smaller than those on upward radiance/polarization, and they increase with increasing solar zenith angle. The relative difference in upwelling hemispheric flux due to the different soot aerosol mixing states can reach 18% when the solar zenith angle is 75°. The findings should improve our understanding of the effects of mixing states on the optical properties of soot aerosols and their effects on climate. The mixing mechanism of soot aerosols is of critical importance in evaluating the climate effects of soot aerosols, and it should be explicitly included in radiative forcing models and aerosol remote sensing.

  17. Non-linear mixing effects on mass-47 CO2 clumped isotope thermometry: Patterns and implications.

    PubMed

    Defliese, William F; Lohmann, Kyger C

    2015-05-15

    Mass-47 CO2 clumped isotope thermometry requires relatively large (~20 mg) samples of carbonate minerals due to detection limits and shot noise in gas source isotope ratio mass spectrometry (IRMS). However, it is unreasonable to assume that natural geologic materials are homogeneous on the scale required for sampling. We show that sample heterogeneities can cause offsets from equilibrium Δ47 values that are controlled solely by end-member mixing and are independent of equilibrium temperatures. A numerical model was built to simulate and quantify the effects of end-member mixing on Δ47. The model was run in multiple possible configurations to produce a dataset of mixing effects. We verified that the model accurately simulated real phenomena by comparing two artificial laboratory mixtures measured using IRMS to model output. Mixing effects were found to be dependent on end-member isotopic composition in δ13C and δ18O values, and independent of end-member Δ47 values. Both positive and negative offsets from equilibrium Δ47 can occur, and the sign is dependent on the interaction between end-member isotopic compositions. The overall magnitude of mixing offsets is controlled by the amount of variability within a sample; the larger the disparity between end-member compositions, the larger the mixing offset. Samples varying by less than 2‰ in both δ13C and δ18O values have mixing offsets below current IRMS detection limits. We recommend the use of isotopic subsampling for δ13C and δ18O values to determine sample heterogeneity, and to evaluate any potential mixing effects in samples suspected of being heterogeneous. Copyright © 2015 John Wiley & Sons, Ltd.

  18. The salinity effect in a mixed layer ocean model

    NASA Technical Reports Server (NTRS)

    Miller, J. R.

    1976-01-01

    A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.

  19. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken within either Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. The dataset contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of unrestricted weighted least squares (UWLS) can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared with the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor had wider confidence intervals when the heterogeneity was larger in the pairwise comparison. The UWLS model with a unique variance adjustment factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis remains to be explored.
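
    As a minimal illustration of the contrast-based arithmetic behind a fixed-effect network meta-analysis (one of several equivalent formulations, not the SEM implementation itself), the sketch below solves for the basic parameters d_AB and d_AC by weighted least squares over a small set of invented log odds ratios.

    ```python
    # Minimal fixed-effect network meta-analysis on contrast-level data, written as
    # a weighted least-squares regression with a design matrix over the basic
    # parameters d_AB and d_AC (so a C-vs-B study contributes d_AC - d_AB). The log
    # odds ratios below are invented; they are not the cirrhosis dataset analysed
    # in the paper, and this is only one of several equivalent formulations.
    import numpy as np
    import statsmodels.api as sm

    # Each row: observed log odds ratio, its variance, and the comparison made.
    y   = np.array([-0.30, -0.25, -0.55, -0.45, -0.20])
    var = np.array([0.040, 0.060, 0.050, 0.070, 0.090])
    comparison = ["B vs A", "B vs A", "C vs A", "C vs A", "C vs B"]

    design = {"B vs A": [1, 0], "C vs A": [0, 1], "C vs B": [-1, 1]}
    X = np.array([design[c] for c in comparison])    # columns: d_AB, d_AC

    fit = sm.WLS(y, X, weights=1.0 / var).fit()
    d_AB, d_AC = fit.params
    print(f"d_AB = {d_AB:.3f}, d_AC = {d_AC:.3f}, d_BC = {d_AC - d_AB:.3f}")
    print(fit.bse)                                   # standard errors of d_AB, d_AC
    ```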

  20. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model

    PubMed Central

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images relative to plaster models for mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analyses along with Student's t-test were performed to evaluate the data, and P < 0.05 was considered statistically significant. Results: Statistically significant differences were found between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch was 21.2 mm and 21.1 mm for CBCT and 22.5 mm and 22.5 mm for the plaster models, respectively. Conclusion: CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis. PMID:28852639
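
    For reference, the sketch below works through the Tanaka-Johnston prediction used in mixed dentition analysis, with example incisor widths; the 10.5 mm and 11.0 mm constants are the standard published values, and the input measurements are illustrative.

    ```python
    # Worked Tanaka-Johnston calculation as used in mixed dentition analysis: the
    # predicted mesiodistal width of the unerupted canine + premolars in one quadrant
    # equals half the summed widths of the four mandibular incisors plus 10.5 mm
    # (mandibular) or 11.0 mm (maxillary). The incisor widths below are example values.
    def tanaka_johnston(sum_lower_incisors_mm):
        half = sum_lower_incisors_mm / 2.0
        return {"mandibular_segment_mm": half + 10.5,
                "maxillary_segment_mm": half + 11.0}

    incisors = [5.3, 5.4, 6.0, 6.1]                  # mesiodistal widths (mm)
    print(tanaka_johnston(sum(incisors)))
    # {'mandibular_segment_mm': 21.9, 'maxillary_segment_mm': 22.4}
    ```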

  1. Calibrating and testing a gap model for simulating forest management in the Oregon Coast Range

    Treesearch

    Robert J. Pabst; Matthew N. Goslin; Steven L. Garman; Thomas A. Spies

    2008-01-01

    The complex mix of economic and ecological objectives facing today's forest managers necessitates the development of growth models with a capacity for simulating a wide range of forest conditions while producing outputs useful for economic analyses. We calibrated the gap model ZELIG to simulate stand level forest development in the Oregon Coast Range as part of a...

  2. Cost-utility of First-line Disease-modifying Treatments for Relapsing-Remitting Multiple Sclerosis.

    PubMed

    Soini, Erkki; Joutseno, Jaana; Sumelahti, Marja-Liisa

    2017-03-01

    This study evaluated the cost-effectiveness of first-line treatments of relapsing-remitting multiple sclerosis (RRMS) (dimethyl fumarate [DMF] 240 mg PO BID, teriflunomide 14 mg once daily, glatiramer acetate 20 mg SC once daily, interferon [IFN]-β1a 44 µg TIW, IFN-β1b 250 µg EOD, and IFN-β1a 30 µg IM QW) and best supportive care (BSC) in the health care payer setting in Finland. The primary outcome was the modeled incremental cost-effectiveness ratio (ICER; €/quality-adjusted life-year [QALY] gained, 3%/y discounting). Markov cohort modeling with a 15-year time horizon was employed. During each 1-year modeling cycle, patients either maintained the Expanded Disability Status Scale (EDSS) score or experienced progression, developed secondary progressive MS (SPMS) or showed EDSS progression in SPMS, experienced relapse with/without hospitalization, experienced an adverse event (AE), or died. Patients' characteristics, RRMS progression probabilities, and standardized mortality ratios were derived from a registry of patients with MS in Finland. A mixed-treatment comparison (MTC) informed the treatment effects. Finnish EuroQol Five-Dimensional Questionnaire, Three-Level Version quality-of-life and direct-cost estimates associated with EDSS scores, relapses, and AEs were applied. Four approaches were used to assess the outcomes: the cost-effectiveness plane and efficiency frontiers (relative value of efficient treatments); the cost-effectiveness acceptability frontier, which demonstrated the optimal treatment to maximize net benefit; Bayesian treatment ranking (BTR); and an impact investment assessment (IIA; a cost-benefit assessment), which increased the clinical interpretation and appeal of the modeled outcomes in terms of absolute benefit gained with a fixed drug-related budget. Robustness of results was tested extensively with sensitivity analyses. Based on the modeled results, teriflunomide was less costly, with greater QALYs, versus glatiramer acetate and the IFNs. Teriflunomide had the lowest ICER (€24,081) versus BSC. DMF brought marginally more QALYs (0.089) than did teriflunomide, with greater costs over the 15 years. The ICER for DMF versus teriflunomide was €75,431. Teriflunomide had a >50% probability of being cost-effective at willingness-to-pay thresholds below €77,416/QALY gained. According to BTR, teriflunomide was first-best among the disease-modifying therapies at potential willingness-to-pay thresholds of up to €68,000/QALY gained. In the IIA, teriflunomide was associated with the longest incremental quality-adjusted survival and time without cane use. Generally, the primary outcome results were robust in the sensitivity analyses; the results were sensitive only to large changes in the analysis perspective or the mixed-treatment comparison. A limitation of our analysis is the uncertainty due to the lack of direct comparisons of mirabegron-style head-to-head data; rather, a mixed treatment comparison using rigorous methodology provided the data for the analysis. Based on the analyses, teriflunomide was cost-effective versus BSC or DMF at the common threshold values, was dominant versus the other first-line RRMS treatments, and provided the greatest impact on investment. Teriflunomide is potentially the most cost-effective option among first-line treatments of RRMS in Finland. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. A Call for Conducting Multivariate Mixed Analyses

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.

    2016-01-01

    Several authors have written methodological works that provide an introductory- and/or intermediate-level guide to conducting mixed analyses. Although these works have been useful for beginning and emergent mixed researchers, with very few exceptions, works are lacking that describe and illustrate advanced-level mixed analysis approaches. Thus,…

  4. Subjective Social Status and Self-Reported Health Among US-born and Immigrant Latinos.

    PubMed

    Garza, Jeremiah R; Glenn, Beth A; Mistry, Rashmita S; Ponce, Ninez A; Zimmerman, Frederick J

    2017-02-01

    Subjective social status is associated with a range of health outcomes. Few studies have tested the relevance of subjective social status among Latinos in the U.S., and those that have done so have yielded mixed results. Data come from the Latino subsample of the 2003 National Latino and Asian American Study (N = 2554). Regression models adjusted for socioeconomic and demographic factors. Stratified analyses tested whether nativity status modifies the effect of subjective social status on health. Subjective social status was associated with better health. Income and education mattered more for health than subjective social status among U.S.-born Latinos. However, the picture was mixed among immigrant Latinos, with subjective social status more strongly predictive than income but less so than education. Subjective social status may tap into stressful immigrant experiences that affect one's perceived self-worth and capture psychosocial consequences and social disadvantage left out by conventional socioeconomic measures.

  5. Transient Ejector Analysis (TEA) code user's guide

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1993-01-01

    A FORTRAN computer program for the semi-analytic prediction of unsteady thrust augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in the modeling of the physics for each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA as a research code (not as a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.

  6. Meta-analysis of thirty-two case-control and two ecological radon studies of lung cancer.

    PubMed

    Dobrzynski, Ludwik; Fornalski, Krzysztof W; Reszczynska, Joanna

    2018-03-01

    A re-analysis has been carried out of thirty-two case-control and two ecological studies concerning the influence of radon, a radioactive gas, on the risk of lung cancer. Three mathematically simplest dose-response relationships (models) were tested: constant (zero health effect), linear, and parabolic (linear-quadratic). Health effect end-points reported in the analysed studies are odds ratios or relative risk ratios, related either to morbidity or mortality. In our preliminary analysis, we show that the results of dose-response fitting are qualitatively (within uncertainties, given as error bars) the same, whichever of these health effect end-points are applied. Therefore, we deemed it reasonable to aggregate all response data into the so-called Relative Health Factor and jointly analysed such mixed data, to obtain better statistical power. In the second part of our analysis, robust Bayesian and classical methods of analysis were applied to this combined dataset. In this part of our analysis, we selected different subranges of radon concentrations. In view of substantial differences between the methodology used by the authors of case-control and ecological studies, the mathematical relationships (models) were applied mainly to the thirty-two case-control studies. The degree to which the two ecological studies, analysed separately, affect the overall results when combined with the thirty-two case-control studies, has also been evaluated. In all, as a result of our meta-analysis of the combined cohort, we conclude that the analysed data concerning radon concentrations below ~1000 Bq/m3 (~20 mSv/year of effective dose to the whole body) do not support the thesis that radon may be a cause of any statistically significant increase in lung cancer incidence.
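
    The model-comparison step described above can be illustrated with a small weighted least-squares fit of constant, linear, and linear-quadratic dose-response curves, compared here via a simple AIC-style score. The radon concentrations, relative health factors, and standard errors below are invented for illustration and are not the pooled study data.

    ```python
    # Sketch of the model comparison described above: fitting constant, linear, and
    # linear-quadratic dose-response curves to relative-risk-type data by weighted
    # least squares and comparing them. The radon concentrations and relative health
    # factors below are invented for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    dose = np.array([25, 50, 100, 200, 400, 800])         # Bq/m3
    rhf = np.array([1.02, 0.98, 1.01, 1.05, 1.08, 1.15])  # relative health factor
    se = np.array([0.05, 0.04, 0.04, 0.05, 0.07, 0.10])   # standard errors

    models = {
        "constant": lambda d, a: a + 0 * d,
        "linear": lambda d, a, b: a + b * d,
        "linear-quadratic": lambda d, a, b, c: a + b * d + c * d ** 2,
    }

    for name, f in models.items():
        n_par = f.__code__.co_argcount - 1
        popt, _ = curve_fit(f, dose, rhf, p0=[1.0] + [0.0] * (n_par - 1), sigma=se)
        resid = (rhf - f(dose, *popt)) / se
        chi2 = np.sum(resid ** 2)
        aic = chi2 + 2 * n_par                            # AIC up to a constant
        print(f"{name:17s} chi2 = {chi2:5.2f}  AIC ~ {aic:5.2f}")
    ```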

  7. APPLICATION OF STABLE ISOTOPE TECHNIQUES TO AIR POLLUTION RESEARCH

    EPA Science Inventory

    Stable isotope techniques provide a robust, yet under-utilized tool for examining pollutant effects on plant growth and ecosystem function. Here, we survey a range of mixing model, physiological and system level applications for documenting pollutant effects. Mixing model examp...

  8. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  9. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
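
    For readers unfamiliar with the first-stage derivative estimates discussed in the two preceding records, the fragment below sketches the GLLA idea in Python in its usual formulation: a time-delay embedding of the series is projected onto polynomial loadings to yield position, velocity, and acceleration estimates. This is a simplified illustration under arbitrary choices of embedding dimension and sampling interval, not the authors' implementation.

      import numpy as np
      from math import factorial

      def glla_derivatives(x, embed=5, dt=1.0, order=2):
          """Generalized local linear approximation: estimate derivatives up to
          `order` from a time-delay embedding projected onto polynomial loadings."""
          n = len(x) - embed + 1
          X = np.column_stack([x[i:i + n] for i in range(embed)])      # embedding
          offsets = (np.arange(embed) - (embed - 1) / 2.0) * dt        # centred times
          L = np.column_stack([offsets ** k / factorial(k) for k in range(order + 1)])
          # Least-squares projection; columns are x, dx/dt, d2x/dt2 estimates.
          return X @ L @ np.linalg.inv(L.T @ L)

      # Example: noisy sine wave; the second column approximates its derivative.
      t = np.linspace(0, 4 * np.pi, 200)
      x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
      est = glla_derivatives(x, embed=7, dt=t[1] - t[0])
      print(np.round(est[:3], 3))  # position, velocity, acceleration estimates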

  10. Protocol for a mixed methods study of hospital readmissions: sensemaking in Veterans Health Administration healthcare system in the USA

    PubMed Central

    Leykum, Luci K; Noël, Polly; Finley, Erin P; Lanham, Holly Jordan; Pugh, Jacqueline

    2018-01-01

    Introduction Effective delivery of healthcare in complex systems requires managing interdependencies between professions and organisational units. Reducing 30-day hospital readmissions may be one of the most complex tasks that a healthcare system can undertake. We propose that these less-than-optimal outcomes are related to difficulties managing the complex interdependencies among organisational units and to a lack of effective sensemaking among individuals and organisational units regarding how best to coordinate patient needs. Methods and analysis This is a mixed-methods, multistep study. We will conduct in-depth qualitative organisational case studies in 10 Veterans Health Administration facilities (6 with improving and 4 with worsening readmission rates), focusing on relationships, sensemaking and improvisation around care transition processes intended to reduce early readmissions. Data will be gathered through multiple methods (eg, chart reviews, surveys, interviews, observations) and analysed using analytic memos, qualitative coding and statistical analyses. We will construct an agent-based model based on those results to explore the influence of sensemaking and specific care transition processes on early readmissions. Ethics and dissemination Ethical approval has been obtained through the Institutional Review Board of the University of Texas Health Science Center at San Antonio (approval number: 14–258 hour). We will disseminate our findings in manuscripts in peer-reviewed journals, professional conferences and through short reports back to participating entities and stakeholders. PMID:29627815

  11. The 3D Navier-Stokes analysis of a Mach 2.68 bifurcated rectangular mixed-compression inlet

    NASA Technical Reports Server (NTRS)

    Mizukami, M.; Saunders, J. D.

    1995-01-01

    The supersonic diffuser of a Mach 2.68 bifurcated, rectangular, mixed-compression inlet was analyzed using a three-dimensional (3D) Navier-Stokes flow solver. A two-equation turbulence model and a porous bleed model based on unchoked bleed hole discharge coefficients were used. Comparisons were made with experimental data, inviscid theory, and two-dimensional Navier-Stokes analyses. The main objective was to gain insight into the inlet fluid dynamics. Examination of the computational results along with the experimental data suggests that the cowl shock-sidewall boundary layer interaction near the leading edge caused a substantial separation in the wind tunnel inlet model. As a result, the inlet performance may have been compromised by increased spillage and higher bleed mass flow requirements. The internal flow contained substantial waves that were not in the original inviscid design. 3D effects were fairly minor for this inlet at on-design conditions. Navier-Stokes analysis appears to be a useful tool for gaining insight into the inlet fluid dynamics. It provides a higher fidelity simulation of the flowfield than the original inviscid design, by taking into account boundary layers, porous bleed, and their interactions with shock waves.

  12. Genetic parameters for growth characteristics of free-range chickens under univariate random regression models.

    PubMed

    Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B

    2016-09-01

    Repeated measures from the same individual have been analyzed by using repeatability and finite dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned for each of six subclasses of age at measurement. Random regression curves were modeled by using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike Information Criterion, Bayesian Information Criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for the validation of the random regression models. The Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h² = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at all ages can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection. Also, selection for body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.
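
    The random regression design described above rests on evaluating Legendre polynomials of the standardised age at measurement. The Python sketch below builds such covariables for a second-order fit; it is a generic illustration of the basis construction (with a conventional normalisation), not the software or data used in the study.

      import numpy as np
      from numpy.polynomial import legendre

      def legendre_covariables(age, order=2):
          """Normalised Legendre polynomial covariables (orders 0..order) for
          ages rescaled to the interval [-1, 1]."""
          x = -1.0 + 2.0 * (age - age.min()) / (age.max() - age.min())
          cols = []
          for k in range(order + 1):
              coef = np.zeros(k + 1)
              coef[k] = 1.0
              # sqrt((2k + 1) / 2) is the normalisation commonly used in
              # random regression animal models.
              cols.append(np.sqrt((2 * k + 1) / 2.0) * legendre.legval(x, coef))
          return np.column_stack(cols)

      ages = np.array([1.0, 7.0, 14.0, 28.0, 42.0, 63.0, 84.0])
      Z = legendre_covariables(ages, order=2)
      print(np.round(Z, 3))  # design columns for the random regression terms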

  13. Human Health Effects of Trichloroethylene: Key Findings and Scientific Issues

    PubMed Central

    Jinot, Jennifer; Scott, Cheryl Siegel; Makris, Susan L.; Cooper, Glinda S.; Dzubow, Rebecca C.; Bale, Ambuja S.; Evans, Marina V.; Guyton, Kathryn Z.; Keshava, Nagalakshmi; Lipscomb, John C.; Barone, Stanley; Fox, John F.; Gwinn, Maureen R.; Schaum, John; Caldwell, Jane C.

    2012-01-01

    Background: In support of the Integrated Risk Information System (IRIS), the U.S. Environmental Protection Agency (EPA) completed a toxicological review of trichloroethylene (TCE) in September 2011, which was the result of an effort spanning > 20 years. Objectives: We summarized the key findings and scientific issues regarding the human health effects of TCE in the U.S. EPA’s toxicological review. Methods: In this assessment we synthesized and characterized thousands of epidemiologic, experimental animal, and mechanistic studies, and addressed several key scientific issues through modeling of TCE toxicokinetics, meta-analyses of epidemiologic studies, and analyses of mechanistic data. Discussion: Toxicokinetic modeling aided in characterizing the toxicological role of the complex metabolism and multiple metabolites of TCE. Meta-analyses of the epidemiologic data strongly supported the conclusions that TCE causes kidney cancer in humans and that TCE may also cause liver cancer and non-Hodgkin lymphoma. Mechanistic analyses support a key role for mutagenicity in TCE-induced kidney carcinogenicity. Recent evidence from studies in both humans and experimental animals points to the involvement of TCE exposure in autoimmune disease and hypersensitivity. Recent avian and in vitro mechanistic studies provided biological plausibility that TCE plays a role in developmental cardiac toxicity, the subject of substantial debate due to mixed results from epidemiologic and rodent studies. Conclusions: TCE is carcinogenic to humans by all routes of exposure and poses a potential human health hazard for noncancer toxicity to the central nervous system, kidney, liver, immune system, male reproductive system, and the developing embryo/fetus. PMID:23249866

  14. Transient modeling/analysis of hyperbolic heat conduction problems employing mixed implicit-explicit alpha method

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; D'Costa, Joseph F.

    1991-01-01

    This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models, which involve time-dependent relaxation effects. Existing analytical approaches for modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while in certain numerical formulations the difficulties include severe oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.

  15. Psychological mediators related to clinical outcome in cognitive behavioural therapy for coronary heart disease: A sub-analysis from the SUPRIM trial.

    PubMed

    Norlund, Fredrika; Olsson, Erik Mg; Pingel, Ronnie; Held, Claes; Svärdsudd, Kurt; Gulliksson, Mats; Burell, Gunilla

    2017-06-01

    Background The Secondary Prevention in Uppsala Primary Healthcare Project (SUPRIM) was a randomized controlled trial of a group-based cognitive behavioural therapy stress management programme for patients with coronary heart disease. The project was successful in reducing the risk of fatal or non-fatal first recurrent cardiovascular events. The aim of this study was to analyse the effect of cognitive behavioural therapy on self-rated stress, somatic anxiety, vital exhaustion and depression and to study the associations of these factors with the reduction in cardiovascular events. Methods A total of 362 patients were randomly assigned to intervention or usual care groups. The psychological outcomes were assessed five times during 24 months and analysed using linear mixed models. The mediating roles of the outcomes were analysed using joint modelling of the longitudinal and time to event data. Results The intervention had a positive effect on somatic anxiety ( p < 0.05), reflecting a beneficial development over time compared with the controls. Stress, vital exhaustion and depression did not differ between the groups over time. Mediator analysis suggested that somatic anxiety may have mediated the effect of treatment on cardiovascular events. Conclusions The intervention had a small positive effect on somatic anxiety, but did not affect stress, vital exhaustion or depression in patients with coronary heart disease. Somatic anxiety was associated with an increased risk of cardiovascular events and might act as a partial mediator in the treatment effect on cardiovascular events. However, the mechanisms between the intervention and the protective cardiovascular outcome remain to be identified.

  16. Random effects coefficient of determination for mixed and meta-analysis models.

    PubMed

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If the coefficient is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for the coefficient in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.
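
    The exact estimator proposed in the paper is not reproduced here, but the quantity it targets, the share of the conditional variance attributable to the random effects, can be approximated from fitted variance components. The Python sketch below does this with statsmodels' MixedLM on simulated clustered data; the data, the random-intercept structure, and the simple variance ratio are illustrative assumptions rather than the authors' formula.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_groups, n_per = 30, 10
      g = np.repeat(np.arange(n_groups), n_per)
      u = rng.normal(0.0, 1.5, n_groups)                 # random intercepts
      x = rng.normal(size=n_groups * n_per)
      y = 2.0 + 0.5 * x + u[g] + rng.normal(0.0, 1.0, n_groups * n_per)
      df = pd.DataFrame({"y": y, "x": x, "g": g})

      fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
      var_u = float(fit.cov_re.iloc[0, 0])               # random-intercept variance
      var_e = float(fit.scale)                           # residual variance
      # Share of the conditional variance attributable to the random effects,
      # an intraclass-correlation-style stand-in for the coefficient above.
      print(round(var_u / (var_u + var_e), 3))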

  17. Effective temperatures of red giants in the APOKASC catalogue and the mixing length calibration in stellar models

    NASA Astrophysics Data System (ADS)

    Salaris, M.; Cassisi, S.; Schiavon, R. P.; Pietrinferni, A.

    2018-04-01

    Red giants in the updated APOGEE-Kepler catalogue, with estimates of mass, chemical composition, surface gravity and effective temperature, have recently challenged stellar models computed under the standard assumption of solar calibrated mixing length. In this work, we critically reanalyse this sample of red giants, adopting our own stellar model calculations. Contrary to previous results, we find that the disagreement between the Teff scale of red giants and models with solar calibrated mixing length disappears when considering our models and the APOGEE-Kepler stars with scaled solar metal distribution. However, a discrepancy shows up when α-enhanced stars are included in the sample. We have found that assuming mass, chemical composition and effective temperature scale of the APOGEE-Kepler catalogue, stellar models generally underpredict the change of temperature of red giants caused by α-element enhancements at fixed [Fe/H]. A second important conclusion is that the choice of the outer boundary conditions employed in model calculations is critical. Effective temperature differences (metallicity dependent) between models with solar calibrated mixing length and observations appear for some choices of the boundary conditions, but this is not a general result.

  18. Selecting elephant grass families and progenies to produce bioenergy through mixed models (REML/BLUP).

    PubMed

    Rodrigues, E V; Daher, R F; Dos Santos, A; Vivas, M; Machado, J C; Gravina, G do A; de Souza, Y P; Vidal, A K; Rocha, A Dos S; Freitas, R S

    2017-05-18

    Brazil has great potential to produce bioenergy since it is located in a tropical region that receives high incidence of solar energy and presents favorable climatic conditions for such purpose. However, the use of bioenergy in the country is below its productivity potential. The aim of the current study was to select full-sib progenies and families of elephant grass (Pennisetum purpureum S.) to optimize phenotypes relevant to bioenergy production through mixed models (REML/BLUP). The circulating diallel-based crossing of ten elephant grass genotypes was performed. An experimental design using the randomized block methodology, with three repetitions, was set to assess both the hybrids and the parents. Each plot comprised 14-m rows, 1.40 m spacing between rows, and 1.40 m spacing between plants. The number of tillers, plant height, culm diameter, fresh biomass production, dry biomass rate, and the dry biomass production were assessed. Genetic-statistical analyses were performed through mixed models (REML/BLUP). The genetic variance in the assessed families was explained through additive genetic effects and dominance genetic effects; the dominance variance was prevalent. Families such as Capim Cana D'África x Guaçu/I.Z.2, Cameroon x Cuba-115, CPAC x Cuba-115, Cameroon x Guaçu/I.Z.2, and IAC-Campinas x CPAC showed the highest dry biomass production. The family derived from the crossing between Cana D'África and Guaçu/I.Z.2 showed the largest number of potential individuals for traits such as plant height, culm diameter, fresh biomass production, dry biomass production, and dry biomass rate. The individual 5 in the family Cana D'África x Guaçu/I.Z.2, planted in blocks 1 and 2, showed the highest dry biomass production.

  19. Assessing the feasibility of community health insurance in Uganda: A mixed-methods exploratory analysis.

    PubMed

    Biggeri, M; Nannini, M; Putoto, G

    2018-03-01

    Community health insurance (CHI) aims to provide financial protection and facilitate health care access among poor rural populations. Given common operational challenges that hamper the full development of the scheme, there is a need to undertake systematic feasibility studies. Such studies are scarce in the literature, and they usually do not provide a comprehensive analysis of the local context. The present research intends to adopt a mixed-methods approach to assess ex-ante the feasibility of CHI. In particular, eight preconditions are proposed to inform the viability of introducing the micro insurance. A case study located in rural northern Uganda is presented to test the effectiveness of the mixed-methods procedure for the feasibility purpose. A household survey covering 180 households, 8 structured focus group discussions, and 40 key informant interviews were performed between October and December 2016 in order to provide a complete and integrated analysis of the feasibility preconditions. Through the data collected at the household level, the population's health-seeking behaviours and the potential insurance design were examined; econometric analyses were carried out to investigate the perception of health as a priority need and the willingness to pay for the scheme. The latter component, in particular, was analysed through a contingent valuation method. The results validated the relevant feasibility preconditions. Econometric estimates demonstrated that awareness of catastrophic health expenditures and the distance to the hospital have a critical influence on household priorities and willingness to pay. Willingness is also significantly affected by socio-economic status and basic knowledge of insurance principles. Overall, the mixed-methods investigation showed that a comprehensive feasibility analysis can shape a viable CHI model to be implemented in the local context. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Simulating mixed-phase Arctic stratus clouds: sensitivity to ice initiation mechanisms

    NASA Astrophysics Data System (ADS)

    Sednev, I.; Menon, S.; McFarquhar, G.

    2008-06-01

    The importance of Arctic mixed-phase clouds on radiation and the Arctic climate is well known. However, the development of mixed-phase cloud parameterization for use in large scale models is limited by lack of both related observations and numerical studies using multidimensional models with advanced microphysics that provide the basis for understanding the relative importance of different microphysical processes that take place in mixed-phase clouds. To improve the representation of mixed-phase cloud processes in the GISS GCM we use the GISS single-column model coupled to a bin resolved microphysics (BRM) scheme that was specially designed to simulate mixed-phase clouds and aerosol-cloud interactions. Using this model with the microphysical measurements obtained from the DOE ARM Mixed-Phase Arctic Cloud Experiment (MPACE) campaign in October 2004 at the North Slope of Alaska, we investigate the effect of ice initiation processes and Bergeron-Findeisen process (BFP) on glaciation time and longevity of single-layer stratiform mixed-phase clouds. We focus on observations taken during 9th-10th October, which indicated the presence of a single-layer mixed-phase cloud. We performed several sets of 12-h simulations to examine model sensitivity to different ice initiation mechanisms and evaluate model output (hydrometeors' concentrations, contents, effective radii, precipitation fluxes, and radar reflectivity) against measurements from the MPACE Intensive Observing Period. Overall, the model qualitatively simulates ice crystal concentration and hydrometeors content, but it fails to predict quantitatively the effective radii of ice particles and their vertical profiles. In particular, the ice effective radii are overestimated by at least 50%. However, when the same definition as used for the observations was applied, the simulated and observed effective radii were more comparable. We find that for the single-layer stratiform mixed-phase clouds simulated, the process of ice phase initiation due to freezing of supercooled water in both saturated and undersaturated (w.r.t. water) environments is as important as primary ice crystal origination from water vapor. We also find that the BFP is the process mainly responsible for the rates of glaciation of simulated clouds. These glaciation rates cannot be adequately represented by a water-ice saturation adjustment scheme that only depends on temperature and liquid and solid hydrometeors' contents as is widely used in bulk microphysics schemes and are better represented by processes that also account for supersaturation changes as the hydrometeors grow.

  1. Simulating mixed-phase Arctic stratus clouds: sensitivity to ice initiation mechanisms

    NASA Astrophysics Data System (ADS)

    Sednev, I.; Menon, S.; McFarquhar, G.

    2009-07-01

    The importance of Arctic mixed-phase clouds on radiation and the Arctic climate is well known. However, the development of mixed-phase cloud parameterization for use in large scale models is limited by lack of both related observations and numerical studies using multidimensional models with advanced microphysics that provide the basis for understanding the relative importance of different microphysical processes that take place in mixed-phase clouds. To improve the representation of mixed-phase cloud processes in the GISS GCM we use the GISS single-column model coupled to a bin resolved microphysics (BRM) scheme that was specially designed to simulate mixed-phase clouds and aerosol-cloud interactions. Using this model with the microphysical measurements obtained from the DOE ARM Mixed-Phase Arctic Cloud Experiment (MPACE) campaign in October 2004 at the North Slope of Alaska, we investigate the effect of ice initiation processes and Bergeron-Findeisen process (BFP) on glaciation time and longevity of single-layer stratiform mixed-phase clouds. We focus on observations taken during 9-10 October, which indicated the presence of a single-layer mixed-phase cloud. We performed several sets of 12-h simulations to examine model sensitivity to different ice initiation mechanisms and evaluate model output (hydrometeors' concentrations, contents, effective radii, precipitation fluxes, and radar reflectivity) against measurements from the MPACE Intensive Observing Period. Overall, the model qualitatively simulates ice crystal concentration and hydrometeors content, but it fails to predict quantitatively the effective radii of ice particles and their vertical profiles. In particular, the ice effective radii are overestimated by at least 50%. However, when the same definition as used for the observations was applied, the simulated and observed effective radii were more comparable. We find that for the single-layer stratiform mixed-phase clouds simulated, the process of ice phase initiation due to freezing of supercooled water in both saturated and subsaturated (w.r.t. water) environments is as important as primary ice crystal origination from water vapor. We also find that the BFP is the process mainly responsible for the rates of glaciation of simulated clouds. These glaciation rates cannot be adequately represented by a water-ice saturation adjustment scheme that only depends on temperature and liquid and solid hydrometeors' contents as is widely used in bulk microphysics schemes and are better represented by processes that also account for supersaturation changes as the hydrometeors grow.

  2. Nonlinear mixed effects modelling for the analysis of longitudinal body core temperature data in healthy volunteers.

    PubMed

    Seng, Kok-Yong; Chen, Ying; Wang, Ting; Ming Chai, Adam Kian; Yuen Fun, David Chiok; Teo, Ya Shi; Sze Tan, Pearl Min; Ang, Wee Hon; Wei Lee, Jason Kai

    2016-04-01

    Many longitudinal studies have collected serial body core temperature (Tc) data to understand thermal work strain of workers under various environmental and operational heat stress conditions. This provides the opportunity for the development of mathematical models to analyse and forecast temporal Tc changes across populations of subjects. Such models can reduce the need for invasive methods that continuously measure Tc. The current work sought to develop a nonlinear mixed effects modelling framework to delineate the dynamic changes of Tc and its association with a set of covariates of interest (e.g. heart rate, chest skin temperature), and the structure of the variability of Tc in various longitudinal studies. Data to train and evaluate the model were derived from two laboratory investigations involving male soldiers who participated in either a 12 (N = 18) or 15 km (N = 16) foot march with varied clothing, load and heat acclimatisation status. Model qualification was conducted using nonparametric bootstrap and cross validation procedures. For cross validation, the trajectory of a new subject's Tc was simulated via Bayesian maximum a posteriori estimation when using only the baseline Tc or using the baseline Tc as well as measured Tc at the end of every work (march) phase. The final model described Tc versus time profiles using a parametric function with its main parameters modelled as a sigmoid hyperbolic function of the load and/or chest skin temperature. Overall, Tc predictions corresponded well with the measured data (root mean square deviation: 0.16 °C), and compared favourably with those provided by two recently published Kalman filter models.
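
    The published structural model is not reproduced here, but the general idea of describing a core temperature trajectory with a small parametric function can be illustrated with a generic sigmoidal rise fitted by nonlinear least squares. Everything in the Python sketch below (the functional form, the 120-minute march, the noise level, the starting values) is a hypothetical stand-in for the study's nonlinear mixed effects model.

      import numpy as np
      from scipy.optimize import curve_fit

      def tc_profile(t, tc0, delta, t50, tau):
          """Hypothetical sigmoidal rise of core temperature from a baseline tc0
          towards tc0 + delta (illustrative form, not the published model)."""
          return tc0 + delta / (1.0 + np.exp(-(t - t50) / tau))

      rng = np.random.default_rng(4)
      t = np.linspace(0.0, 120.0, 25)                     # minutes into a march
      obs = tc_profile(t, 37.0, 1.2, 45.0, 12.0) + rng.normal(0.0, 0.08, t.size)

      popt, _ = curve_fit(tc_profile, t, obs, p0=[37.0, 1.0, 60.0, 10.0])
      rmsd = np.sqrt(np.mean((tc_profile(t, *popt) - obs) ** 2))
      print("fitted parameters:", np.round(popt, 2), "RMSD (degC):", round(rmsd, 3))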

  3. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    PubMed

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
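
    The study above works in SPSS; an analogous pattern mixture specification can be written in Python with statsmodels by coding the missing data pattern as a fixed effect and interacting it with time in a linear mixed model. The sketch below is a schematic illustration on simulated data with invented variable names (anxiety, subj, pattern), not the trial data or the authors' SPSS syntax.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      n, waves = 120, 4
      subj = np.repeat(np.arange(n), waves)
      time = np.tile(np.arange(waves), n)
      anxiety = (10.0 - 0.4 * time + rng.normal(0.0, 1.0, n)[subj]
                 + rng.normal(0.0, 1.0, n * waves))
      df = pd.DataFrame({"subj": subj, "time": time, "anxiety": anxiety})

      # Remove later waves for a random third of subjects to mimic attrition.
      dropouts = rng.choice(n, n // 3, replace=False)
      df = df[~(df.subj.isin(dropouts) & (df.time > 1))].copy()

      # Two-pattern case: completers versus early dropouts.
      df["pattern"] = np.where(df.subj.isin(dropouts), "dropout", "completer")

      # Pattern mixture LMM: the pattern and pattern-by-time terms test whether
      # missingness is informative for the outcome trajectory.
      fit = smf.mixedlm("anxiety ~ time * pattern", df, groups=df["subj"]).fit()
      print(fit.summary())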

  4. Heavy neutrino mixing and single production at linear collider

    NASA Astrophysics Data System (ADS)

    Gluza, J.; Maalampi, J.; Raidal, M.; Zrałek, M.

    1997-02-01

    We study the single production of heavy neutrinos via the processes e⁻e⁺ → νN and e⁻γ → W⁻N at future linear colliders. As a basis for our considerations we take a wide class of models, both with vanishing and non-vanishing left-handed Majorana neutrino mass matrix mL. We perform a model-independent analysis of the existing experimental data and find connections between the characteristics of heavy neutrinos (masses, mixings, CP eigenvalues) and the mL parameters. We show that, with the present experimental constraints, heavy neutrino masses almost up to the collision energy can be tested in future experiments.

  5. Can pair-instability supernova models match the observations of superluminous supernovae?

    NASA Astrophysics Data System (ADS)

    Kozyreva, Alexandra; Blinnikov, S.

    2015-12-01

    An increasing number of so-called superluminous supernovae (SLSNe) are being discovered. It is believed that at least some of them with slowly fading light curves originate in stellar explosions induced by the pair instability mechanism. Recent stellar evolution models naturally predict pair instability supernovae (PISNe) from very massive stars at a wide range of metallicities (up to Z = 0.006, Yusof et al.). Within the scope of this study, we analyse whether PISN models can match the observational properties of SLSNe with various light-curve shapes. Specifically, we explore the influence of different degrees of macroscopic chemical mixing in PISN explosive products on the resulting observational properties. We artificially apply mixing to the 250 M⊙ PISN evolutionary model from Kozyreva et al. and explore its supernova evolution with the one-dimensional radiation hydrodynamics code STELLA. The greatest success in matching SLSN observations is achieved in the case of extreme macroscopic mixing, where all radioactive material is ejected into the hydrogen-helium outer layer. Such an extreme macroscopic redistribution of chemicals produces events with faster light curves, high photospheric temperatures, and high photospheric velocities. These properties fit a wider range of SLSNe than the non-mixed PISN model. Our mixed models match the light curves, colour temperature, and photospheric velocity evolution of two well-observed SLSNe, PTF12dam and LSQ12dlf. However, these models' extreme chemical redistribution may be hard to realize in massive PISNe. Therefore, alternative models such as the magnetar mechanism or wind-interaction may still be favourable for interpreting rapidly rising SLSNe.

  6. Runtime and Pressurization Analyses of Propellant Tanks

    NASA Technical Reports Server (NTRS)

    Field, Robert E.; Ryan, Harry M.; Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chung P.

    2007-01-01

    Multi-element unstructured CFD has been utilized at NASA SSC to carry out analyses of propellant tank systems in different modes of operation. The three regimes of interest at SSC include (a) tank chill down (b) tank pressurization and (c) runtime propellant draw-down and purge. While tank chill down is an important event that is best addressed with long time-scale heat transfer calculations, CFD can play a critical role in the tank pressurization and runtime modes of operation. In these situations, problems with contamination of the propellant by inclusion of the pressurant gas from the ullage causes a deterioration of the quality of the propellant delivered to the test article. CFD can be used to help quantify the mixing and propellant degradation. During tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant including heat transfer and phase change effects and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. It should be noted that traditional CFD modeling is inadequate for such simulations because the fluids in the tank are in a range of different sub-critical and supercritical states and elaborate phase change and mixing rules have to be developed to accurately model the interaction between the ullage gas and the propellant. We show a typical run-time simulation of a spherical propellant tank, containing RP-1 in this case, being pressurized with room-temperature nitrogen at 540 R. Nitrogen, shown in blue on the right-hand side of the figures, enters the tank from the diffuser at the top of the figures and impinges on the RP-1, shown in red, while the propellant is being continuously drained at the rate of 1050 lbs/sec through a pipe at the bottom of the tank. The sequence of frames in Figure 1 shows the resultant velocity fields and mixing between nitrogen and RP-1 in a cross-section of the tank at different times. A vortex is seen to form in the incoming nitrogen stream that tends to entrain propellant, mixing it with the pressurant gas. The RP-1 mass fraction contours in Figure 1 are also indicative of the level of mixing and contamination of the propellant. The simulation is used to track the propagation of the pure propellant front as it is drawn toward the exit with the evolution of the mixing processes in the tank. The CFD simulation modeled a total of 10 seconds of run time. As is seen from Figure 1d, after 5.65 seconds the propellant front is nearing the drain pipe, especially near the center of the tank. Behind this pure propellant front is a mixed fluid of compromised quality that would require the test to end when it reaches the exit pipe. Such unsteady simulations provide an estimate of the time that a high-quality propellant supply to the test article can be guaranteed at the modeled mass flow rate. In the final paper, we will discuss simulations of the LOX and propellant tanks at NASA SSC being pressurized by an inert ullage. 
Detailed comparisons will be made between the CFD simulations and lower order models as well as with test data. Conditions leading to cryo collapse in the tank will also be identified.

  7. Influence of non-homogeneous mixing on final epidemic size in a meta-population model.

    PubMed

    Cui, Jingan; Zhang, Yanan; Feng, Zhilan

    2018-06-18

    In meta-population models for infectious diseases, the basic reproduction number R0 can be as much as 70% larger in the case of preferential mixing than in homogeneous mixing [J.W. Glasser, Z. Feng, S.B. Omer, P.J. Smith, and L.E. Rodewald, The effect of heterogeneity in uptake of the measles, mumps, and rubella vaccine on the potential for outbreaks of measles: A modelling study, Lancet ID 16 (2016), pp. 599-605. doi: 10.1016/S1473-3099(16)00004-9 ]. This suggests that realistic mixing can be an important factor to consider in order for the models to provide a reliable assessment of intervention strategies. The influence of mixing is more significant when the population is highly heterogeneous. In this paper, another quantity, the final epidemic size of an outbreak, is considered to examine the influence of mixing and population heterogeneity. A final size relation is derived for a meta-population model accounting for general mixing. The results show that the final epidemic size can be influenced by the pattern of mixing in a significant way. Another interesting finding is that heterogeneity in various sub-population characteristics may have opposite effects on R0 and the final epidemic size.
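
    The meta-population final size relation derived in the paper is not reproduced here, but the simpler homogeneous-mixing case that it generalises can be written as z = 1 - exp(-R0*z) for the attack rate z, which is easily solved by fixed-point iteration. The Python sketch below does exactly that as a point of comparison; the R0 values are arbitrary examples.

      import numpy as np

      def final_size(r0, tol=1e-12, max_iter=100000):
          """Attack rate z solving z = 1 - exp(-r0 * z), the classic final size
          relation for a single homogeneously mixing population."""
          z = 0.5
          for _ in range(max_iter):
              z_new = 1.0 - np.exp(-r0 * z)
              if abs(z_new - z) < tol:
                  break
              z = z_new
          return z_new

      for r0 in (1.2, 1.5, 2.0, 3.4):
          print(f"R0 = {r0:.1f}: final epidemic size = {final_size(r0):.3f}")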

  8. Mixed-Mode Decohesion Elements for Analyses of Progressive Delamination

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; deMoura, Marcelo F.

    2001-01-01

    A new 8-node decohesion element with mixed-mode capability is proposed and demonstrated. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a strain softening law to track the damage state of the interface. The method can be used in conjunction with conventional material degradation procedures to account for in-plane and intra-laminar damage modes. The accuracy of the predictions is evaluated in single-mode delamination tests, in the mixed-mode bending test, and in a structural configuration consisting of the debonding of a stiffener flange from its skin.

  9. Statistical models of global Langmuir mixing

    NASA Astrophysics Data System (ADS)

    Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean

    2017-05-01

    The effects of Langmuir mixing on the surface ocean may be parameterized by applying an enhancement factor, which depends on the wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.

  10. Using existing case-mix methods to fund trauma cases.

    PubMed

    Monakova, Julia; Blais, Irene; Botz, Charles; Chechulin, Yuriy; Picciano, Gino; Basinski, Antoni

    2010-01-01

    Policymakers frequently face the need to increase funding in isolated and frequently heterogeneous (clinically and in terms of resource consumption) patient subpopulations. This article presents a methodologic solution for testing the appropriateness of using existing grouping and weighting methodologies for funding subsets of patients in the scenario where a case-mix approach is preferable to a flat-rate based payment system. Using as an example the subpopulation of trauma cases of Ontario lead trauma hospitals, the statistical techniques of linear and nonlinear regression models, regression trees, and spline models were applied to examine the fit of the existing case-mix groups and reference weights for the trauma cases. The analyses demonstrated that for funding Ontario trauma cases, the existing case-mix systems can form the basis for rational and equitable hospital funding, decreasing the need to develop a different grouper for this subset of patients. This study confirmed that Injury Severity Score is a poor predictor of costs for trauma patients. Although our analysis used the Canadian case-mix classification system and cost weights, the demonstrated concept of using existing case-mix systems to develop funding rates for specific subsets of patient populations may be applicable internationally.

  11. Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects

    NASA Astrophysics Data System (ADS)

    Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.

    2008-10-01

    A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.

  12. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    PubMed Central

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  13. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
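
    The essential step that distinguishes mixed from fixed RSA is a cross-validated linear remixing of model features, one weight per feature and response channel, before representational dissimilarities are compared. The Python sketch below illustrates that step with ridge regression on simulated features and voxel responses; the dimensions, the regularisation value, and the correlation-distance RDMs are illustrative choices, not the authors' pipeline.

      import numpy as np
      from sklearn.linear_model import Ridge
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      rng = np.random.default_rng(0)
      n_train, n_test, n_feat, n_vox = 200, 60, 50, 120

      # Simulated model features and voxel responses (linear mixture plus noise).
      F_train = rng.normal(size=(n_train, n_feat))
      F_test = rng.normal(size=(n_test, n_feat))
      W_true = rng.normal(size=(n_feat, n_vox))
      V_train = F_train @ W_true + rng.normal(scale=2.0, size=(n_train, n_vox))
      V_test = F_test @ W_true + rng.normal(scale=2.0, size=(n_test, n_vox))

      # Mixing step: one weight per feature and voxel, fitted on the training set.
      mix = Ridge(alpha=10.0).fit(F_train, V_train)
      V_pred = mix.predict(F_test)

      # Compare predicted and measured representational dissimilarity matrices.
      rdm_pred = pdist(V_pred, metric="correlation")
      rdm_meas = pdist(V_test, metric="correlation")
      rho, _ = spearmanr(rdm_pred, rdm_meas)
      print(f"RDM correlation (remixed model vs. data): {rho:.2f}")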

  14. The Effect of Mixing Entire Male Pigs Prior to Transport to Slaughter on Behaviour, Welfare and Carcass Lesions

    PubMed Central

    van Staaveren, Nienke; Teixeira, Dayane Lemos; Hanlon, Alison; Boyle, Laura Ann

    2015-01-01

    Research is needed to validate lesions recorded at meat inspection as indicators of pig welfare on farm. The aims were to determine the influence of mixing pigs on carcass lesions and to establish whether such lesions correlate with pig behaviour and lesions scored on farm. Aggressive and mounting behaviour of pigs in three single-sex pens was recorded on Day −5, −2, and −1 relative to slaughter (Day 0). On Day 0 pigs were randomly allocated to 3 treatments (n = 20/group) over 5 replicates: males mixed with females (MF), males mixed with males (MM), and males unmixed (MUM). Aggressive and mounting behaviours were recorded on Day 0 during holding on farm and in lairage. Skin/tail lesions were scored according to severity on the farm (Day −1), in lairage, and on the carcass (Day 0). Effects of treatment and time on behaviour and lesions were analysed by mixed models. Spearman rank correlations between behaviour and lesion scores and between scores recorded at different stages were determined. In general, MM performed more aggressive behaviour (50.4 ± 10.72) than MUM (20.3 ± 9.55, P < 0.05) and more mounting (30.9 ± 9.99) than MF (11.4 ± 3.76) and MUM (9.8 ± 3.74, P < 0.05). Skin lesion scores increased between farm (Day −1) and lairage (P < 0.001), but this tended to be significant only for MF and MM (P = 0.08). There was no effect of treatment on carcass lesions and no associations were found with fighting/mounting. Mixing entire males prior to slaughter stimulated mounting and aggressive behaviour but did not influence carcass lesion scores. Carcass skin/tail lesion scores were correlated with scores recorded on farm (rskin = 0.21 and rtail = 0.18, P < 0.01), suggesting that information recorded at meat inspection could be used as indicators of pig welfare on farm. PMID:25830336

  15. Multi-Scale Analysis for Characterizing Near-Field Constituent Concentrations in the Context of a Macro-Scale Semi-Lagrangian Numerical Model

    NASA Astrophysics Data System (ADS)

    Yearsley, J. R.

    2017-12-01

    The semi-Lagrangian numerical scheme employed by RBM, a model for simulating time-dependent, one-dimensional water quality constituents in advection-dominated rivers, is highly scalable both in time and space. Although the model has been used at length scales of 150 meters and time scales of three hours, the majority of applications have been at length scales of 1/16th degree latitude/longitude (about 5 km) or greater and time scales of one day. Applications of the method at these scales have proven successful for characterizing the impacts of climate change on water temperatures in global rivers and on the vulnerability of thermoelectric power plants to changes in cooling water temperatures in large river systems. However, local effects can be very important in terms of ecosystem impacts, particularly in the case of developing mixing zones for wastewater discharges with pollutant loadings limited by regulations imposed by the Federal Water Pollution Control Act (FWPCA). Mixing zone analyses have usually been decoupled from large-scale watershed influences by developing scenarios that represent critical conditions for external processes associated with streamflow and weather. By taking advantage of the particle-tracking characteristics of the numerical scheme, RBM can provide results at any point in time within the model domain. We develop a proof of concept for locations in the river network where local impacts such as mixing zones may be important. Simulated results from the semi-Lagrangian numerical scheme are treated as input to a finite difference model of the two-dimensional diffusion equation for water quality constituents such as water temperature or toxic substances. Simulations will provide time-dependent, two-dimensional constituent concentrations in the near field in response to long-term basin-wide processes. These results could provide decision support to water quality managers for evaluating mixing zone characteristics.
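
    The near-field component described above amounts to driving a two-dimensional diffusion solver with boundary and source information taken from the basin-scale model. The Python sketch below shows a toy explicit finite-difference step for the 2-D diffusion equation with an invented point discharge and mixing coefficient; it is an illustration of the numerical idea, not the RBM coupling itself.

      import numpy as np

      def step_diffusion(c, d, dx, dy, dt):
          """One explicit FTCS step of dc/dt = d * laplacian(c) on a periodic grid."""
          lap = ((np.roll(c, -1, 0) - 2 * c + np.roll(c, 1, 0)) / dx ** 2 +
                 (np.roll(c, -1, 1) - 2 * c + np.roll(c, 1, 1)) / dy ** 2)
          return c + dt * d * lap

      nx, ny = 80, 40
      dx = dy = 5.0                      # metres
      d = 0.5                            # m2/s, hypothetical mixing coefficient
      dt = 0.2 * dx ** 2 / (4.0 * d)     # well inside the explicit stability limit

      c = np.zeros((nx, ny))
      c[5, ny // 2] = 100.0              # point discharge from the large-scale model
      for _ in range(500):
          c = step_diffusion(c, d, dx, dy, dt)
      print("peak concentration after mixing:", round(float(c.max()), 3))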

  16. Stellar evolution with turbulent diffusion. I. A new formalism of mixing.

    NASA Astrophysics Data System (ADS)

    Deng, L.; Bressan, A.; Chiosi, C.

    1996-09-01

    In this paper we present a new formulation of diffusive mixing in stellar interiors aimed at casting light on the kind of mixing that should take place in the so-called overshoot regions surrounding fully convective zones. Key points of the analysis are the inclusion of the concept of the scale length most effective for mixing, by means of which the diffusion coefficient is formulated, and the inclusion of intermittence and stirring, two properties of turbulence known from laboratory fluid dynamics. The formalism is applied to follow the evolution of a 20 M⊙ star with composition Z=0.008 and Y=0.25. Depending on the value of the diffusion coefficient holding in the overshoot region, the evolutionary behaviour of the test stars goes from the case of virtually no mixing (semiconvective-like structures) to that of full mixing in that region (standard overshoot models). Indeed, the efficiency of mixing in this region drives the extension of the intermediate fully convective shell developing at the onset of the shell H-burning, and in turn the path in the HR Diagram (HRD). Models with low efficiency of mixing burn helium in the core at high effective temperatures, models with intermediate efficiency perform extended loops in the HRD, and finally models with high efficiency spend the whole core He-burning phase at low effective temperatures. In order to cast light on this important point of stellar structure, we test whether or not a convective layer can develop in the regions of the H-burning shell. More precisely, we examine whether the Schwarzschild or the Ledoux criterion ought to be adopted in this region. Furthermore, we test the response of stellar models to the kind of mixing supposed to occur in the H-burning shell regions. Finally, comparing the time scale of thermal dissipation to the evolutionary time scale, we conclude that no mixing should occur in this region. The models with intermediate efficiency of mixing and no mixing at all in the shell H-burning regions are of particular interest as they possess at the same time evolutionary characteristics that are separately typical of models calculated with different schemes of mixing. In other words, the new models share the same properties as models with standard overshoot, namely a wider main sequence band, higher luminosity, and longer lifetimes than classical models, but they also possess extended loops that are the main signature of the classical (semiconvective) description of convection at the border of the core.

  17. Studies on the detection and identification of the explosives in the terahertz range

    NASA Astrophysics Data System (ADS)

    Zhou, Qing-li; Zhang, Cun-lin; Li, Wei-Wei; Mu, Kai-jun; Feng, Rui-shu

    2008-03-01

    The sensing of explosives and related compounds is very important for homeland security and defense. Based on non-invasive terahertz (THz) technology, we have studied some pure and mixed explosives using THz time-domain spectroscopy and have obtained the absorption spectra of those samples. The obtained results show that those explosives can be identified by their distinct characteristic fingerprints in the terahertz frequency region of 0.2-2.5 THz. Furthermore, the spectral analyses indicate that the shapes and peak positions of the spectra for these mixed explosives are mainly determined by their explosive components. In order to identify these different kinds of explosives, we have applied an artificial neural network, a mathematical device for modeling complex and non-linear relationships, to the present work. After repetitive modeling and adequate training with the known input-output data, identification of the explosives is achieved approximately with a multi-hidden-layer model. It is shown that neural network analyses of the THz spectra can positively identify the explosives and reduce false alarm rates.
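
    As a rough illustration of the classification step described above, the Python sketch below trains a multi-hidden-layer network on simulated absorption spectra in the 0.2-2.5 THz band. The spectra, peak positions, class labels, and network size are invented stand-ins for the measured THz data and the authors' network.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(3)
      freqs = np.linspace(0.2, 2.5, 120)                  # THz axis

      def fake_spectrum(peaks):
          """Simulated absorption spectrum: Gaussian peaks plus noise."""
          s = sum(np.exp(-((freqs - p) / 0.05) ** 2) for p in peaks)
          return s + 0.05 * rng.normal(size=freqs.size)

      # Three hypothetical explosives with invented characteristic peak positions.
      classes = {"A": [0.8, 1.6], "B": [1.1, 1.9, 2.2], "C": [0.6, 1.4]}
      X = np.array([fake_spectrum(p) for name, p in classes.items() for _ in range(60)])
      y = np.array([name for name in classes for _ in range(60)])

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                          random_state=0).fit(X_tr, y_tr)
      print("held-out accuracy:", round(clf.score(X_te, y_te), 3))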

  18. Population pharmacokinetic and pharmacodynamic analyses of safinamide in subjects with Parkinson's disease.

    PubMed

    Loprete, Luca; Leuratti, Chiara; Cattaneo, Carlo; Thapar, Mita M; Farrell, Colm; Sardina, Marco

    2016-10-01

    Safinamide is an orally administered α-aminoamide derivative with both dopaminergic and non-dopaminergic properties. Nonlinear mixed effects models for population pharmacokinetic (PK) and pharmacokinetic-pharmacodynamic (PKPD) analyses were developed using records from, respectively, 623 and 668 patients belonging to two Phase 3, randomized, placebo-controlled, double-blind efficacy studies. The aim was to estimate safinamide population PK parameters in patients with Parkinson's disease (PD) on stable levodopa therapy, and to develop a model of the safinamide effect on the daily time spent in the phase of normal functioning (ON-time). The final models were internally evaluated using visual predictive checks (VPCs), prediction-corrected VPCs, and nonparametric bootstrap analysis. Safinamide profiles were adequately described by a linear one-compartment model with first-order absorption and elimination. CL/F, Vd/F, and KA (95% confidence interval [CI]) were 4.96 (4.73-5.21) L/h, 166 (158-174) L, and 0.582 (0.335-0.829) h^-1, respectively. CL/F and Vd/F increased with body weight, while age, gender, renal function, and exposure to levodopa did not influence safinamide PK. The observed ON-time values were adequately described by a linear model, with time in the study period as the independent variable, and the rate of ON-time change and baseline plus offset effect as slope and intercept parameters. Safinamide treatment resulted in an increase in ON-time of 0.73 h (week 4), with further ON-time increase at the same slope as placebo. The increase was not influenced by age, levodopa, or safinamide exposure. The population models adequately describe the population PK of safinamide and the safinamide effect on ON-time. No dose adjustments are required in elderly patients or in patients with mild to moderate renal impairment.
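
    The linear one-compartment model with first-order absorption and elimination described above has a standard closed-form solution. The sketch below (Python) evaluates it using the reported typical values of CL/F, Vd/F and KA; the dose and sampling times are purely illustrative assumptions and are not taken from the study.

      import numpy as np

      # Typical population values reported in the abstract.
      CL_F = 4.96   # apparent clearance, L/h
      V_F = 166.0   # apparent volume of distribution, L
      KA = 0.582    # first-order absorption rate constant, 1/h

      def one_compartment_oral(t, dose_mg, cl_f=CL_F, v_f=V_F, ka=KA):
          """Concentration (mg/L) after a single oral dose for a one-compartment
          model with first-order absorption and elimination."""
          ke = cl_f / v_f  # elimination rate constant, 1/h
          return dose_mg * ka / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

      # Illustrative 100 mg dose followed over 24 h.
      times = np.linspace(0.0, 24.0, 49)
      print(one_compartment_oral(times, dose_mg=100.0).round(3))

    In the population analysis itself these typical values become subject-level parameters with random effects and covariates (for example, body weight on CL/F and Vd/F), which this sketch deliberately omits.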

  19. Group cognitive behavioral therapy for patients with generalized social anxiety disorder in Japan: outcomes at 1-year follow up and outcome predictors

    PubMed Central

    Kawaguchi, Akiko; Watanabe, Norio; Nakano, Yumi; Ogawa, Sei; Suzuki, Masako; Kondo, Masaki; Furukawa, Toshi A; Akechi, Tatsuo

    2013-01-01

    Background Social anxiety disorder (SAD) is one of the most common psychiatric disorders worldwide. Cognitive behavioral therapy (CBT) is an effective treatment option for patients with SAD. In the present study, we examined the efficacy of group CBT for patients with generalized SAD in Japan at 1-year follow-up and investigated predictors of outcome. Methods This study was conducted as a single-arm, naturalistic, follow-up study in a routine Japanese clinical setting. A total of 113 outpatients with generalized SAD participated in group CBT from July 2003 to August 2010 and were assessed at follow-ups for up to 1 year. The primary outcome was the total score on the Social Phobia Scale/Social Interaction Anxiety Scale (SPS/SIAS) at 1 year. Possible baseline predictors were investigated using mixed-model analyses. Results Among the 113 patients, 70 completed the assessment at the 1-year follow-up. The SPS/SIAS scores showed significant improvement throughout the follow-ups for up to 1 year. The effect sizes of SPS/SIAS at the 1-year follow-up were 0.68 (95% confidence interval 0.41–0.95)/0.76 (0.49–1.03) in the intention-to-treat group and 0.77 (0.42–1.10)/0.84 (0.49–1.18) in completers. In the mixed-model analyses, older age at baseline, late onset, and lower severity of SAD were significantly associated with good outcomes. Conclusions Group CBT for patients with generalized SAD in Japan is effective for up to 1 year after treatment. The effect sizes were as large as those in previous studies conducted in Western countries. Older age at baseline, late onset, and lower severity of SAD were predictors of a good outcome from group CBT. PMID:23450841

  20. Hyperbolic Discounting: Value and Time Processes of Substance Abusers and Non-Clinical Individuals in Intertemporal Choice

    PubMed Central

    2014-01-01

    The single-parameter hyperbolic model has been frequently used to describe value discounting as a function of time and to differentiate substance abusers from non-clinical participants with the model's parameter k. However, k says little about the mechanisms underlying the observed differences. The present study evaluates several alternative models with the purpose of identifying whether group differences stem from differences in subjective valuation and/or time perception. Using three two-parameter models, plus secondary data analyses of 14 studies with 471 indifference-point curves, results demonstrated that adding a valuation or a time-perception function led to better model fits. However, the gain in fit due to the flexibility granted by a second parameter did not always lead to a better understanding of the data patterns and corresponding psychological processes. The k parameter consistently indexed group and context (magnitude) differences; it is thus a mixed measure of person- and task-level effects. The same was true for a parameter meant to index payoff devaluation. A time-perception parameter, on the other hand, fluctuated with context in a non-predicted fashion, and the interpretation of its values was inconsistent with prior findings that support enlarged perceived delays for substance abusers compared with controls. Overall, the results provide mixed support for hyperbolic models of intertemporal choice in terms of the psychological meaning afforded by their parameters. PMID:25390941
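
    For concreteness, the single-parameter model referred to above expresses the subjective value of a delayed reward as V = A / (1 + kD), where D is the delay and k the discounting rate. The Python sketch below fits k to a set of indifference points by nonlinear least squares; the data values are invented for illustration and are not from the 14 re-analysed studies.

      import numpy as np
      from scipy.optimize import curve_fit

      def hyperbolic(delay, k):
          """Subjective value as a proportion of the undiscounted amount: 1 / (1 + k*D)."""
          return 1.0 / (1.0 + k * delay)

      # Hypothetical indifference points (proportion of the delayed amount).
      delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0])  # days
      values = np.array([0.95, 0.85, 0.60, 0.40, 0.30, 0.20])

      (k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
      print(f"estimated k = {k_hat:.4f} per day")

    Commonly used two-parameter variants add an exponent either to the whole denominator (V = 1/(1 + kD)^s) or to the delay itself (V = 1/(1 + kD^s)); variants of this kind are what allow valuation and time-perception accounts to be separated.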

  1. The use of mixed effects ANCOVA to characterize vehicle emission profiles

    DOT National Transportation Integrated Search

    2000-09-01

    This paper uses a mixed effects analysis of covariance model to characterize mileage-dependent emission profiles for any given group of vehicles having a common model design. These types of evaluations are used by the U.S. Environmental Protec...

  2. Random regression analyses using B-splines to model growth of Australian Angus cattle

    PubMed Central

    Meyer, Karin

    2005-01-01

    Regression on B-spline basis functions has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals having 4 or more weights recorded. Changes in weight with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between the level of detail of the model, the number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
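
    As a purely illustrative sketch of this model class (not the REML random regression analysis of the paper, which estimated full genetic and permanent environmental covariance functions), a quadratic B-spline regression of weight on age with animal-level random regression coefficients could be written in Python with statsmodels and patsy as follows; the file and column names are hypothetical.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical longitudinal records: weight (kg), age (days), animal id.
      df = pd.read_csv("angus_weights.csv")

      # Quadratic B-spline basis of age (patsy's bs); the same basis enters the
      # fixed part and the animal-level random regression.
      spline = "bs(age, degree=2, df=5)"
      model = smf.mixedlm("weight ~ " + spline, data=df,
                          groups="animal", re_formula="~ " + spline)
      print(model.fit().summary())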

  3. Proteomic studies in zebrafish liver cells exposed to the brominated flame retardants HBCD and TBBPA.

    PubMed

    Kling, Peter; Förlin, Lars

    2009-10-01

    Proteomic effect screening in zebrafish liver cells was performed to generate hypotheses regarding single and mixed exposure to the BFRs HBCD and TBBPA. Responses at sublethal exposure were analysed by two-dimensional gel electrophoresis followed by MALDI-TOF and FT-ICR protein identification. Mixing HBCD and TBBPA at doses that are sublethal for the individual substances appeared to increase toxicity. Proteomic analyses revealed distinct exposure-specific and overlapping responses, suggesting novel mechanisms of HBCD and TBBPA exposure. While distinct HBCD responses were related to decreased protein metabolism, TBBPA produced effects related to protein folding and NADPH production. Overlapping responses suggest increased gluconeogenesis (GAPDH and aldolase), while distinct mixture effects suggest pronounced NADPH production and changes in proteins related to cell cycle control (prohibitin and crk-like oncogene). We conclude that mixtures containing HBCD and TBBPA may result in unexpected effects, highlighting proteomics as a sensitive tool for detecting mixture effects and generating hypotheses about them.

  4. Effect of long-term antibiotic use on weight in adolescents with acne.

    PubMed

    Contopoulos-Ioannidis, Despina G; Ley, Catherine; Wang, Wei; Ma, Ting; Olson, Clifford; Shi, Xiaoli; Luft, Harold S; Hastie, Trevor; Parsonnet, Julie

    2016-04-01

    Antibiotics increase weight in farm animals and may cause weight gain in humans. We used electronic health records from a large primary care organization to determine the effect of antibiotics on weight and BMI in healthy adolescents with acne. We performed a retrospective cohort study of adolescents with acne prescribed ≥4 weeks of oral antibiotics with weight measurements within 18 months pre-antibiotics and 12 months post-antibiotics. We compared within-individual changes in weight-for-age Z-scores (WAZs) and BMI-for-age Z-scores (BMIZs). We used: (i) paired t-tests to analyse changes between the last pre-antibiotic versus the first post-antibiotic measurements; (ii) piecewise-constant mixed models to capture changes between mean measurements pre- versus post-antibiotics; (iii) piecewise-linear mixed models to capture changes in trajectory slopes pre- versus post-antibiotics; and (iv) χ² tests to compare the proportions of adolescents with a WAZ or BMIZ increase or decrease of ≥0.2 Z-scores. Our cohort included 1012 adolescents with WAZs; 542 also had BMIZs. WAZs decreased post-antibiotics in all analyses [change between last WAZ pre-antibiotics versus first WAZ post-antibiotics = -0.041 Z-scores (P < 0.001); change between mean WAZ pre- versus post-antibiotics = -0.050 Z-scores (P < 0.001); change in WAZ trajectory slopes pre- versus post-antibiotics = -0.025 Z-scores/6 months (P = 0.002)]. More adolescents had a WAZ decrease post-antibiotics ≥0.2 Z-scores than an increase (26% versus 18%; P < 0.001). Trends were similar, though not statistically significant, for BMIZ changes. Contrary to original expectations, long-term antibiotic use in healthy adolescents with acne was not associated with weight gain. This finding, which was consistent across all analyses, does not support a weight-promoting effect of antibiotics in adolescents. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
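
    A minimal sketch of the piecewise-linear mixed model in analysis (iii), written in Python with statsmodels, is shown below; the variable names (waz, months, patient) are hypothetical, and the change in slope after antibiotics is captured by the positive part of time since the start of treatment.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical long-format data: waz = weight-for-age Z-score,
      # months = time relative to the start of antibiotics (negative before,
      # positive after), patient = child identifier.
      df = pd.read_csv("acne_weights.csv")

      # 'months' carries the pre-antibiotic slope; 'post' (months truncated at
      # zero) carries the change in slope once antibiotics begin.
      df["post"] = np.clip(df["months"], 0, None)

      model = smf.mixedlm("waz ~ months + post", data=df,
                          groups="patient", re_formula="~ months")
      print(model.fit().summary())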

  5. An in-depth assessment of a diagnosis-based risk adjustment model based on national health insurance claims: the application of the Johns Hopkins Adjusted Clinical Group case-mix system in Taiwan.

    PubMed

    Chang, Hsien-Yen; Weiner, Jonathan P

    2010-01-18

    Diagnosis-based risk adjustment is becoming an important issue globally as a result of its implications for payment, high-risk predictive modelling and provider performance assessment. The Taiwanese National Health Insurance (NHI) programme provides universal coverage and maintains a single national computerized claims database, which enables the application of diagnosis-based risk adjustment. However, research regarding risk adjustment is limited. This study aims to examine the performance of the Adjusted Clinical Group (ACG) case-mix system using claims-based diagnosis information from the Taiwanese NHI programme. A random sample of NHI enrollees was selected. Those continuously enrolled in 2002 were included for concurrent analyses (n = 173,234), while those in both 2002 and 2003 were included for prospective analyses (n = 164,562). Health status measures derived from 2002 diagnoses were used to explain the 2002 and 2003 health expenditure. A multivariate linear regression model was adopted after comparing the performance of seven different statistical models. Split-validation was performed in order to avoid overfitting. The performance measures were adjusted R2 and mean absolute prediction error of five types of expenditure at individual level, and predictive ratio of total expenditure at group level. The more comprehensive models performed better when used for explaining resource utilization. Adjusted R2 of total expenditure in concurrent/prospective analyses were 4.2%/4.4% in the demographic model, 15%/10% in the ACGs or ADGs (Aggregated Diagnosis Group) model, and 40%/22% in the models containing EDCs (Expanded Diagnosis Cluster). When predicting expenditure for groups based on expenditure quintiles, all models underpredicted the highest expenditure group and overpredicted the four other groups. For groups based on morbidity burden, the ACGs model had the best performance overall. Given the widespread availability of claims data and the superior explanatory power of claims-based risk adjustment models over demographics-only models, Taiwan's government should consider using claims-based models for policy-relevant applications. The performance of the ACG case-mix system in Taiwan was comparable to that found in other countries. This suggested that the ACG system could be applied to Taiwan's NHI even though it was originally developed in the USA. Many of the findings in this paper are likely to be relevant to other diagnosis-based risk adjustment methodologies.

  6. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  7. Uncertainty in Analyzed Water and Energy Budgets at Continental Scales

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, F. R.; Mocko, D.; Chen, J.

    2011-01-01

    Operational analyses and retrospective analyses provide all the physical terms of water and energy budgets, guided by the assimilation of atmospheric observations. However, there is significant reliance on the numerical models, and so uncertainty in the budget terms is always present. Here, we use a recently developed data set consisting of a mix of 10 analyses (both operational and retrospective) to quantify the uncertainty of analyzed water and energy budget terms for GEWEX continental-scale regions, following the evaluation of Dr. John Roads based on individual reanalysis data sets.

  8. An improved NSGA - II algorithm for mixed model assembly line balancing

    NASA Astrophysics Data System (ADS)

    Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong

    2018-05-01

    To address the problems of assembly line balancing and of path optimization for material vehicles in a mixed model manufacturing system, a multi-objective mixed model assembly line (MMAL) model is established based on the optimization objectives, influencing factors and constraints. For this setting, an improved NSGA-II algorithm based on an ecological evolution strategy is designed. The algorithm adopts an environment self-detecting operator, which is used to detect whether the environment changes. Finally, the effectiveness of the proposed model and algorithm is verified by examples from a concrete mixing system.

  9. A Fatty Acid Based Bayesian Approach for Inferring Diet in Aquatic Consumers

    PubMed Central

    Holtgrieve, Gordon W.; Ward, Eric J.; Ballantyne, Ashley P.; Burns, Carolyn W.; Kainz, Martin J.; Müller-Navarra, Doerthe C.; Persson, Jonas; Ravet, Joseph L.; Strandberg, Ursula; Taipale, Sami J.; Alhgren, Gunnel

    2015-01-01

    We modified the stable isotope mixing model MixSIR to infer primary producer contributions to consumer diets based on their fatty acid composition. To parameterize the algorithm, we generated a ‘consumer-resource library’ of FA signatures of Daphnia fed different algal diets, using 34 feeding trials representing diverse phytoplankton lineages. This library corresponds to the resource or producer file in classic Bayesian mixing models such as MixSIR or SIAR. Because this library is based on the FA profiles of zooplankton consuming known diets, and not the FA profiles of algae directly, trophic modification of consumer lipids is directly accounted for. To test the model, we simulated hypothetical Daphnia comprised of 80% diatoms, 10% green algae, and 10% cryptophytes and compared the FA signatures of these known pseudo-mixtures to outputs generated by the mixing model. The algorithm inferred these simulated consumers were comprised of 82% (63-92%) [median (2.5th to 97.5th percentile credible interval)] diatoms, 11% (4-22%) green algae, and 6% (0-25%) cryptophytes. We used the same model with published phytoplankton stable isotope (SI) data for δ13C and δ15N to examine how a SI based approach resolved a similar scenario. With SI, the algorithm inferred that the simulated consumer assimilated 52% (4-91%) diatoms, 23% (1-78%) green algae, and 18% (1-73%) cyanobacteria. The accuracy and precision of SI based estimates was extremely sensitive to both resource and consumer uncertainty, as well as the trophic fractionation assumption. These results indicate that when using only two tracers with substantial uncertainty for the putative resources, as is often the case in this class of analyses, the underdetermined constraint in consumer-resource SI analyses may be intractable. The FA based approach alleviated the underdetermined constraint because many more FA biomarkers were utilized (n < 20), different primary producers (e.g., diatoms, green algae, and cryptophytes) have very characteristic FA compositions, and the FA profiles of many aquatic primary consumers are strongly influenced by their diets. PMID:26114945

  10. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    PubMed

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F1 and F2) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
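
    In software such as lme4 this design is specified with two crossed random intercepts, one for participants and one for items. A rough Python analogue is sketched below using the variance-components interface of statsmodels with a single all-encompassing group, a commonly used workaround for crossed (non-nested) factors; the column names are hypothetical, and the sketch is not the DJMIXED package described in the article.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical trial-level data: rt = response time, condition = design
      # factor, subject = participant id, item = stimulus id.
      df = pd.read_csv("lexdec_trials.csv")
      df["all"] = 1  # one dummy group, so subject and item effects are crossed

      model = smf.mixedlm(
          "rt ~ condition",
          data=df,
          groups="all",
          # one variance component (random intercept) per crossed factor
          vc_formula={"subject": "0 + C(subject)", "item": "0 + C(item)"},
      )
      print(model.fit().summary())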

  11. Modelling of subgrid-scale phenomena in supercritical transitional mixing layers: an a priori study

    NASA Astrophysics Data System (ADS)

    Selle, Laurent C.; Okong'o, Nora A.; Bellan, Josette; Harstad, Kenneth G.

    A database of transitional direct numerical simulation (DNS) realizations of a supercritical mixing layer is analysed for understanding small-scale behaviour and examining subgrid-scale (SGS) models duplicating that behaviour. Initially, the mixing layer contains a single chemical species in each of the two streams, and a perturbation promotes roll-up and a double pairing of the four spanwise vortices initially present. The database encompasses three combinations of chemical species, several perturbation wavelengths and amplitudes, and several initial Reynolds numbers specifically chosen for the sole purpose of achieving transition. The DNS equations are the Navier-Stokes, total energy and species equations coupled to a real-gas equation of state; the fluxes of species and heat include the Soret and Dufour effects. The large-eddy simulation (LES) equations are derived from the DNS ones through filtering. Compared to the DNS equations, two types of additional terms are identified in the LES equations: SGS fluxes and other terms for which either assumptions or models are necessary. The magnitude of all terms in the LES conservation equations is analysed on the DNS database, with special attention to terms that could possibly be neglected. It is shown that in contrast to atmospheric-pressure gaseous flows, there are two new terms that must be modelled: one in each of the momentum and the energy equations. These new terms can be thought to result from the filtering of the nonlinear equation of state, and are associated with regions of high density-gradient magnitude both found in DNS and observed experimentally in fully turbulent high-pressure flows. A model is derived for the momentum-equation additional term that performs well at small filter size but deteriorates as the filter size increases, highlighting the necessity of ensuring appropriate grid resolution in LES. Modelling approaches for the energy-equation additional term are proposed, all of which may be too computationally intensive in LES. Several SGS flux models are tested on an a priori basis. The Smagorinsky (SM) model has a poor correlation with the data, while the gradient (GR) and scale-similarity (SS) models have high correlations. Calibrated model coefficients for the GR and SS models yield good agreement with the SGS fluxes, although statistically, the coefficients are not valid over all realizations. The GR model is also tested for the variances entering the calculation of the new terms in the momentum and energy equations; high correlations are obtained, although the calibrated coefficients are not statistically significant over the entire database at fixed filter size. As a manifestation of the small-scale supercritical mixing peculiarities, both scalar-dissipation visualizations and the scalar-dissipation probability density functions (PDF) are examined. The PDF is shown to exhibit minor peaks, with particular significance for those at larger scalar dissipation values than the mean, thus significantly departing from the Gaussian behaviour.
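
    For reference, the constant-coefficient Smagorinsky closure tested above (and found to correlate poorly with the filtered DNS data) has the familiar form, written here for the deviatoric SGS stress in the constant-density setting; the compressible, real-gas configuration of the paper additionally involves Favre filtering and the equation-of-state terms discussed in the abstract.

      \[
      \tau_{ij} - \tfrac{1}{3}\,\delta_{ij}\,\tau_{kk}
        = -2\,(C_S\,\bar{\Delta})^{2}\,\lvert\bar{S}\rvert\,\bar{S}_{ij},
      \qquad
      \bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right),
      \qquad
      \lvert\bar{S}\rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
      \]
      where \(\bar{\Delta}\) is the filter width and \(C_S\) the Smagorinsky constant.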

  12. Additive effects of word frequency and stimulus quality: the influence of trial history and data transformations.

    PubMed

    Balota, David A; Aschenbrenner, Andrew J; Yap, Melvin J

    2013-09-01

    A counterintuitive and theoretically important pattern of results in the visual word recognition literature is that both word frequency and stimulus quality produce large but additive effects in lexical decision performance. The additive nature of these effects has recently been called into question by Masson and Kliegl (in press), who used linear mixed effects modeling to provide evidence that the additive effects were actually being driven by previous trial history. Because Masson and Kliegl also included semantic priming as a factor in their study and recent evidence has shown that semantic priming can moderate the additivity of word frequency and stimulus quality (Scaltritti, Balota, & Peressotti, 2012), we reanalyzed data from 3 published studies to determine if previous trial history moderated the additive pattern when semantic priming was not also manipulated. The results indicated that previous trial history did not influence the joint influence of word frequency and stimulus quality. More important, and independent of Masson and Kliegl's conclusions, we also show how a common transformation used in linear mixed effects analyses to normalize the residuals can systematically alter the way in which two variables combine to influence performance. Specifically, using transformed, rather than raw reaction times, consistently produces more underadditive patterns. PsycINFO Database Record (c) 2013 APA, all rights reserved.
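
    The transformation point is easy to reproduce: if word frequency and stimulus quality combine additively on the raw response-time scale, taking logarithms necessarily yields a negative (underadditive) interaction, because the same millisecond cost is a smaller proportional change in the slower conditions. The toy calculation below uses invented cell means, not data from the reanalysed studies.

      import numpy as np

      # Invented condition means (ms) that are perfectly additive on the raw
      # scale: +60 ms for low frequency, +80 ms for degraded stimuli.
      rt = {("high", "clear"): 600.0, ("low", "clear"): 660.0,
            ("high", "degraded"): 680.0, ("low", "degraded"): 740.0}

      def interaction(cells):
          """Frequency-by-quality interaction contrast on the scale of 'cells'."""
          return ((cells[("low", "degraded")] - cells[("high", "degraded")])
                  - (cells[("low", "clear")] - cells[("high", "clear")]))

      log_rt = {k: np.log(v) for k, v in rt.items()}
      print("raw-scale interaction:", interaction(rt))      # 0.0  -> additive
      print("log-scale interaction:", interaction(log_rt))  # < 0  -> underadditive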

  13. Mixed-effects models for estimating stand volume by means of small footprint airborne laser scanner data.

    Treesearch

    J. Breidenbach; E. Kublin; R. McGaughey; H.-E. Andersen; S. Reutebuch

    2008-01-01

    For this study, hierarchical data sets, in which several sample plots are located within a stand, were analyzed for study sites in the USA and Germany. The German data had an additional hierarchy, as the stands are located within four distinct public forests. Fixed-effects models and mixed-effects models with a random intercept on the stand level were fit to each data...

  14. The effort-reward imbalance work-stress model and daytime salivary cortisol and dehydroepiandrosterone (DHEA) among Japanese women.

    PubMed

    Ota, Atsuhiko; Mase, Junji; Howteerakul, Nopporn; Rajatanun, Thitipat; Suwannapong, Nawarat; Yatsuya, Hiroshi; Ono, Yuichiro

    2014-09-17

    We examined the influence of work-related effort-reward imbalance and overcommitment to work (OC), as derived from Siegrist's Effort-Reward Imbalance (ERI) model, on the hypothalamic-pituitary-adrenocortical (HPA) axis. We hypothesized that, among healthy workers, both cortisol and dehydroepiandrosterone (DHEA) secretion would be increased by effort-reward imbalance and OC and, as a result, cortisol-to-DHEA ratio (C/D ratio) would not differ by effort-reward imbalance or OC. The subjects were 115 healthy female nursery school teachers. Salivary cortisol, DHEA, and C/D ratio were used as indexes of HPA activity. Mixed-model analyses of variance revealed that neither the interaction between the ERI model indicators (i.e., effort, reward, effort-to-reward ratio, and OC) and the series of measurement times (9:00, 12:00, and 15:00) nor the main effect of the ERI model indicators was significant for daytime salivary cortisol, DHEA, or C/D ratio. Multiple linear regression analyses indicated that none of the ERI model indicators was significantly associated with area under the curve of daytime salivary cortisol, DHEA, or C/D ratio. We found that effort, reward, effort-reward imbalance, and OC had little influence on daytime variation patterns, levels, or amounts of salivary HPA-axis-related hormones. Thus, our hypotheses were not supported.
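
    The area-under-the-curve summary used in the regression analyses reduces, for three fixed sampling times, to a trapezoidal sum; a minimal sketch with hypothetical hormone values is given below.

      # Trapezoidal area under the curve for samples at 9:00, 12:00 and 15:00.
      # The concentrations are hypothetical, for illustration only.
      times = [9.0, 12.0, 15.0]        # hours
      cortisol = [12.5, 7.8, 5.1]      # e.g. nmol/L

      def auc_trapezoid(x, y):
          """Area under the curve with respect to ground, simple trapezoid rule."""
          return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
                     for i in range(len(x) - 1))

      print(auc_trapezoid(times, cortisol))  # concentration x hours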

  15. Pre-natal exposures to cocaine and alcohol and physical growth patterns to age 8 years

    PubMed Central

    Lumeng, Julie C.; Cabral, Howard J.; Gannon, Katherine; Heeren, Timothy; Frank, Deborah A.

    2007-01-01

    Two hundred and two primarily African American/Caribbean children (classified by maternal report and infant meconium as 38 with heavier cocaine exposure, 74 with lighter exposure and 89 not cocaine-exposed) were measured repeatedly from birth to age 8 years to assess whether there is an independent effect of prenatal cocaine exposure on physical growth patterns. Children with fetal alcohol syndrome identifiable at birth were excluded. At birth, cocaine and alcohol exposures were significantly and independently associated with lower weight, length and head circumference in cross-sectional multiple regression analyses. The relationship over time of prenatal exposures to weight, height, and head circumference was then examined by multiple linear regression using mixed linear models including the covariates: child’s gestational age, gender, ethnicity, age at assessment, current caregiver, the birth mother’s use of alcohol, marijuana and tobacco during the pregnancy, and her pre-pregnancy weight (for child’s weight) and height (for child’s height and head circumference). The cocaine effects did not persist beyond infancy in piecewise linear mixed models, but a significant and independent negative effect of prenatal alcohol exposure persisted for weight, height, and head circumference. Catch-up growth in cocaine-exposed infants occurred primarily by 6 months of age for all growth parameters, with some small fluctuations in growth rates in the preschool age range but no detectable differences between heavier-exposed versus unexposed or lighter-exposed versus unexposed children thereafter. PMID:17412558

  16. Human salmonellosis: estimation of dose-illness from outbreak data.

    PubMed

    Bollaerts, Kaatje; Aerts, Marc; Faes, Christel; Grijspeerdt, Koen; Dewulf, Jeroen; Mintiens, Koen

    2008-04-01

    The quantification of the relationship between the amount of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young healthy volunteers are selected to participate and low doses, such as often occur in real life, are typically not considered. Epidemiological outbreak data are considered to be more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data from 20 Salmonella outbreaks, as discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of different types of dose-illness models as proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects, whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented. The first procedure accounts for stochastic variability, whereas the second accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the pathogen-food matrix combination is extremely virulent, and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
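
    Schematically (our notation, simplified to a first-degree fractional polynomial and a Bernoulli outcome rather than the binomial outbreak counts), the model class described above can be written as

      \[
      \operatorname{logit}\Pr(\text{ill}_{ij}) \;=\; \beta_0
        + \beta_1\,\mathrm{dose}_{ij}^{\,p}
        + \beta_2\,\mathrm{susceptible}_{ij}
        + u_{s(i)} + v_{m(i)},
      \qquad p \in \{-2,\,-1,\,-0.5,\,0,\,0.5,\,1,\,2,\,3\},
      \]
      where \(\mathrm{dose}^{0}\) is read as \(\log(\mathrm{dose})\) by the usual fractional-polynomial convention, and \(u_{s(i)}\) and \(v_{m(i)}\) are normally distributed random effects for the serovar type and food matrix of outbreak \(i\).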

  17. Genomics of the divergence continuum in an African plant biodiversity hotspot, I: drivers of population divergence in Restio capensis (Restionaceae).

    PubMed

    Lexer, C; Wüest, R O; Mangili, S; Heuertz, M; Stölting, K N; Pearman, P B; Forest, F; Salamin, N; Zimmermann, N E; Bossolini, E

    2014-09-01

    Understanding the drivers of population divergence, speciation and species persistence is of great interest to molecular ecology, especially for species-rich radiations inhabiting the world's biodiversity hotspots. The toolbox of population genomics holds great promise for addressing these key issues, especially if genomic data are analysed within a spatially and ecologically explicit context. We have studied the earliest stages of the divergence continuum in the Restionaceae, a species-rich and ecologically important plant family of the Cape Floristic Region (CFR) of South Africa, using the widespread CFR endemic Restio capensis (L.) H.P. Linder & C.R. Hardy as an example. We studied diverging populations of this morphotaxon for plastid DNA sequences and >14 400 nuclear DNA polymorphisms from Restriction site Associated DNA (RAD) sequencing and analysed the results jointly with spatial, climatic and phytogeographic data, using a Bayesian generalized linear mixed modelling (GLMM) approach. The results indicate that population divergence across the extreme environmental mosaic of the CFR is mostly driven by isolation by environment (IBE) rather than isolation by distance (IBD) for both neutral and non-neutral markers, consistent with genome hitchhiking or coupling effects during early stages of divergence. Mixed modelling of plastid DNA and single divergent outlier loci from a Bayesian genome scan confirmed the predominant role of climate and pointed to additional drivers of divergence, such as drift and ecological agents of selection captured by phytogeographic zones. Our study demonstrates the usefulness of population genomics for disentangling the effects of IBD and IBE along the divergence continuum often found in species radiations across heterogeneous ecological landscapes. © 2014 John Wiley & Sons Ltd.

  18. Stable isotope signatures and trophic-step fractionation factors of fish tissues collected as non-lethal surrogates of dorsal muscle.

    PubMed

    Busst, Georgina M A; Bašić, Tea; Britton, J Robert

    2015-08-30

    Dorsal white muscle is the standard tissue analysed in fish trophic studies using stable isotope analyses. As muscle is usually collected destructively, fin tissues and scales are often used as non-lethal surrogates; we examined the utility of scales and fin tissue as muscle surrogates. The muscle, fin and scale δ15N and δ13C values from 10 cyprinid fish species determined with an elemental analyser coupled with an isotope ratio mass spectrometer were compared. The fish comprised (1) samples from the wild, and (2) samples from tank aquaria, using six species held for 120 days and fed a single food resource. Relationships between muscle, fin and scale isotope ratios were examined for each species and for the entire dataset, with the efficacy of four methods of predicting muscle isotope ratios from fin and scale values being tested. The fractionation factors between the three tissues of the laboratory fishes and their food resource were then calculated and applied to Bayesian mixing models to assess their effect on fish diet predictions. The isotopic data of the three tissues per species were distinct, but were significantly related, enabling estimations of muscle values from the two surrogates. Species-specific equations provided the least erroneous corrections of scale and fin isotope ratios (errors < 0.6‰). The fractionation factors for δ15N values were in the range obtained for other species, but were often higher for δ13C values. Their application to data from two fish populations in the mixing models resulted in significant alterations in diet predictions. Scales and fin tissue are strong surrogates of dorsal muscle in food web studies as they can provide estimates of muscle values within an acceptable level of error when species-specific methods are used. Their derived fractionation factors can also be applied to models predicting fish diet composition from δ15N and δ13C values. Copyright © 2015 John Wiley & Sons, Ltd.

  19. A mixed model for the relationship between climate and human cranial form.

    PubMed

    Katz, David C; Grote, Mark N; Weaver, Timothy D

    2016-08-01

    We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.

  20. Hot HB Stars in Globular Clusters - Physical Parameters and Consequences for Theory. VI. The Second Parameter Pair M3 and M13

    NASA Technical Reports Server (NTRS)

    Moehler, S.; Landsman, W. B.; Sweigart, A. V.; Grundahl, F.

    2002-01-01

    We present the results of spectroscopic analyses of hot horizontal branch (HB) stars in M13 and M3, which form a famous second parameter pair. From the spectra we derived - for the first time in M13 - atmospheric parameters (effective temperature and surface gravity) as well as abundances of helium, magnesium, and iron. Consistent with analyses of hot HB stars in other globular clusters we find evidence for helium depletion and iron enrichment in stars hotter than about 12,000 K in both M3 and M13. Accounting for the iron enrichment substantially improves the agreement with canonical evolutionary models, although the derived gravities and masses are still somewhat too low. This remaining discrepancy may be an indication that scaled-solar metal-rich model atmospheres do not adequately represent the highly non-solar abundance ratios found in blue HB stars with radiative levitation. We discuss the effects of an enhancement in the envelope helium abundance on the atmospheric parameters of the blue HB stars, as might be caused by deep mixing on the red giant branch or primordial pollution from an earlier generation of intermediate mass asymptotic giant branch stars.

  1. Accounting for Population Structure in Gene-by-Environment Interactions in Genome-Wide Association Studies Using Mixed Models.

    PubMed

    Sul, Jae Hoon; Bilow, Michael; Yang, Wen-Yun; Kostem, Emrah; Furlotte, Nick; He, Dan; Eskin, Eleazar

    2016-03-01

    Although genome-wide association studies (GWASs) have discovered numerous novel genetic variants associated with many complex traits and diseases, those genetic variants typically explain only a small fraction of phenotypic variance. Factors that account for phenotypic variance include environmental factors and gene-by-environment interactions (GEIs). Recently, several studies have conducted genome-wide gene-by-environment association analyses and demonstrated important roles of GEIs in complex traits. One of the main challenges in these association studies is to control for effects of population structure that may cause spurious associations. Many studies have analyzed how population structure influences statistics of genetic variants and developed several statistical approaches to correct for population structure. However, the impact of population structure on GEI statistics in GWASs has not been extensively studied, nor have there been methods designed to correct for population structure on GEI statistics. In this paper, we show both analytically and empirically that population structure may cause spurious GEIs, and we use both simulations and two GWAS datasets to support this finding. We propose a statistical approach based on mixed models to account for population structure on GEI statistics. We find that our approach effectively controls for the effects of population structure on statistics for GEIs as well as for genetic variants.
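
    The correction is built on the standard mixed model used to account for population structure in GWAS, extended with an interaction term; schematically (our notation, not necessarily that of the paper):

      \[
      \mathbf{y} = \mu\mathbf{1} + \beta_g\,\mathbf{x} + \beta_e\,\mathbf{e}
        + \beta_{g\times e}\,(\mathbf{x}\circ\mathbf{e}) + \mathbf{u} + \boldsymbol{\varepsilon},
      \qquad
      \mathbf{u} \sim \mathcal{N}(\mathbf{0}, \sigma_g^{2}\mathbf{K}),
      \quad
      \boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \sigma_e^{2}\mathbf{I}),
      \]
      where \(\mathbf{x}\) is the genotype vector, \(\mathbf{e}\) the environmental exposure, \(\mathbf{x}\circ\mathbf{e}\) their element-wise product, \(\mathbf{K}\) a genetic relatedness (kinship) matrix, and the GEI test concerns \(\beta_{g\times e}\).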

  2. Phylogenetically diverse macrophyte community promotes species diversity of mobile epi-benthic invertebrates

    NASA Astrophysics Data System (ADS)

    Nakamoto, Kenta; Hayakawa, Jun; Kawamura, Tomohiko; Kodama, Masafumi; Yamada, Hideaki; Kitagawa, Takashi; Watanabe, Yoshiro

    2018-07-01

    Various aspects of plant diversity, such as species diversity and phylogenetic diversity, enhance the species diversity of associated animals in terrestrial systems. In marine systems, however, the effects of macrophyte diversity on the species diversity of associated animals have received little attention. Here, we sampled a subtropical seagrass-seaweed mixed bed to elucidate the effect of macrophyte phylogenetic diversity, based on taxonomic relatedness, as well as of macrophyte species diversity, on the species diversity of mobile epi-benthic invertebrates. Using regression analyses for each macrophyte parameter as well as multiple regression analyses, we found that macrophyte phylogenetic diversity (taxonomic diversity index: Delta) positively influenced invertebrate species richness and the diversity index (H′). Although macrophyte species richness and H′ also positively influenced invertebrate species richness, the best-fit model for invertebrate species richness did not include them, suggesting that macrophyte species diversity only indirectly influenced invertebrate species diversity. Possible explanations for the effects of macrophyte Delta on invertebrate species diversity are the niche complementarity effect and the selection effect. This is the first study to demonstrate that macrophyte phylogenetic diversity has a strong effect on the species diversity of mobile epi-benthic invertebrates.

  3. Numerical and experimental characterization of a novel modular passive micromixer.

    PubMed

    Pennella, Francesco; Rossi, Massimiliano; Ripandelli, Simone; Rasponi, Marco; Mastrangelo, Francesco; Deriu, Marco A; Ridolfi, Luca; Kähler, Christian J; Morbiducci, Umberto

    2012-10-01

    This paper reports a new low-cost passive microfluidic mixer design, based on the replication of identical mixing units composed of microchannels with variable-curvature (clothoid) geometry. The micromixer presents a compact and modular architecture that can be easily fabricated using a simple and reliable fabrication process. The particular clothoid-based geometry enhances mixing by inducing transversal secondary flows and recirculation effects. The role of the relevant fluid mechanics mechanisms promoting mixing in this geometry was analysed using computational fluid dynamics (CFD) for Reynolds numbers ranging from 1 to 110. Mixing potency was quantitatively evaluated by calculating the mixing efficiency, while particle dispersion was assessed through the lacunarity index. The results show that the secondary flow arrangement and recirculation effects are able to provide a mixing efficiency of 80% at Reynolds numbers above 70. In addition, the analysis of particle distributions highlights lacunarity as a powerful tool to quantify the dispersion of fluid particles and, in turn, the overall mixing. The microscopic laser-induced fluorescence (μLIF) technique was applied to fabricated micromixer prototypes to characterize mixing. The experimental results confirmed the mixing potency of the microdevice.

  4. Simulation of particle diversity and mixing state over Greater Paris: a model-measurement inter-comparison.

    PubMed

    Zhu, Shupeng; Sartelet, Karine N; Healy, Robert M; Wenger, John C

    2016-07-18

    Air quality models are used to simulate and forecast pollutant concentrations, from continental scales to regional and urban scales. These models usually assume that particles are internally mixed, i.e. particles of the same size have the same chemical composition, which may vary in space and time. Although this assumption may be realistic for continental-scale simulations, where particles originating from different sources have undergone sufficient mixing to achieve a common chemical composition for a given model grid cell and time, it may not be valid for urban-scale simulations, where particles from different sources interact on shorter time scales. To investigate the role of the mixing state assumption on the formation of particles, a size-composition resolved aerosol model (SCRAM) was developed and coupled to the Polyphemus air quality platform. Two simulations, one with the internal mixing hypothesis and another with the external mixing hypothesis, have been carried out for the period 15 January to 11 February 2010, when the MEGAPOLI winter field measurement campaign took place in Paris. The simulated bulk concentrations of chemical species and the concentrations of individual particle classes are compared with the observations of Healy et al. (Atmos. Chem. Phys., 2013, 13, 9479-9496) for the same period. The single particle diversity and the mixing-state index are computed based on the approach developed by Riemer et al. (Atmos. Chem. Phys., 2013, 13, 11423-11439), and they are compared to the measurement-based analyses of Healy et al. (Atmos. Chem. Phys., 2014, 14, 6289-6299). The average value of the single particle diversity, which represents the average number of species within each particle, is consistent between simulation and measurement (2.91 and 2.79 respectively). Furthermore, the average value of the mixing-state index is also well represented in the simulation (69% against 59% from the measurements). The spatial distribution of the mixing-state index shows that the particles are not mixed in urban areas, while they are well mixed in rural areas. This indicates that the assumption of internal mixing traditionally used in transport chemistry models is well suited to rural areas, but this assumption is less realistic for urban areas close to emission sources.
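
    The diversity and mixing-state metrics quoted above are, in the cited Riemer et al. framework, entropy-based quantities computed from per-particle species mass fractions. The sketch below illustrates the bookkeeping on a toy particle population (not model output); the definitions follow our reading of that framework.

      import numpy as np

      # Toy population: rows are particles, columns are species, entries are
      # the species masses carried by each particle (arbitrary units).
      masses = np.array([[0.8, 0.2, 0.0],
                         [0.1, 0.6, 0.3],
                         [0.4, 0.4, 0.2]])

      def shannon(p):
          """Shannon entropy of a composition vector; zero entries contribute nothing."""
          safe = np.where(p > 0, p, 1.0)
          return -np.sum(p * np.log(safe), axis=-1)

      particle_mass = masses.sum(axis=1)
      p_particle = particle_mass / particle_mass.sum()        # particle mass fractions
      frac = masses / particle_mass[:, None]                  # per-particle species fractions

      d_alpha = np.exp(np.sum(p_particle * shannon(frac)))    # avg. single-particle diversity
      d_gamma = np.exp(shannon(masses.sum(axis=0) / masses.sum()))  # bulk diversity
      chi = (d_alpha - 1.0) / (d_gamma - 1.0)                 # mixing-state index
      print(f"D_alpha = {d_alpha:.2f}, D_gamma = {d_gamma:.2f}, chi = {chi:.0%}")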

  5. Lifetime use of cannabis from longitudinal assessments, cannabinoid receptor (CNR1) variation, and reduced volume of the right anterior cingulate

    PubMed Central

    Hill, Shirley Y.; Sharma, Vinod; Jones, Bobby L.

    2016-01-01

    Lifetime measures of cannabis use and co-occurring exposures were obtained from a longitudinal cohort followed an average of 13 years at the time they received a structural MRI scan. MRI scans were analyzed for 88 participants (mean age=25.9 years), 34 of whom were regular users of cannabis. Whole brain voxel based morphometry analyses (SPM8) were conducted using 50 voxel clusters at p=0.005. Controlling for age, familial risk, and gender, we found reduced volume in Regular Users compared to Non-Users, in the lingual gyrus, anterior cingulum (right and left), and the rolandic operculum (right). The right anterior cingulum reached family-wise error statistical significance at p=0.001, controlling for personal lifetime use of alcohol and cigarettes and any prenatal exposures. CNR1 haplotypes were formed from four CNR1 SNPs (rs806368, rs1049353, rs2023239, and rs6454674) and tested with level of cannabis exposure to assess their interactive effects on the lingual gyrus, cingulum (right and left) and rolandic operculum, regions showing cannabis exposure effects in the SPM8 analyses. These analyses used mixed model analyses (SPSS) to control for multiple potentially confounding variables. Level of cannabis exposure was associated with decreased volume of the right anterior cingulum and showed interaction effects with haplotype variation. PMID:27500453

  6. Coding response to a case-mix measurement system based on multiple diagnoses.

    PubMed

    Preyra, Colin

    2004-08-01

    To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.

  7. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
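
    For readers who prefer open-source tools, the same type of growth-curve model that the article builds through the SPSS MIXED procedure can be sketched in Python with statsmodels as below; the column names are hypothetical, and this is only meant to make the model structure concrete, not to reproduce the article's SPSS walkthrough.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical long-format longitudinal data: score = continuous outcome,
      # time = measurement occasion, group = between-subject factor, subject = id.
      df = pd.read_csv("longitudinal.csv")

      # Fixed effects for time, group and their interaction; random intercept
      # and random slope for time within subject, estimated by REML.
      model = smf.mixedlm("score ~ time * group", data=df,
                          groups="subject", re_formula="~ time")
      print(model.fit(reml=True).summary())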

  8. Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.

    PubMed

    Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng

    2016-01-01

    Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and utilize a practical example involving meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.

  9. Estimates of lake trout (Salvelinus namaycush) diet in Lake Ontario using two and three isotope mixing models

    USGS Publications Warehouse

    Colborne, Scott F.; Rush, Scott A.; Paterson, Gordon; Johnson, Timothy B.; Lantry, Brian F.; Fisk, Aaron T.

    2016-01-01

    The recent development of multi-dimensional stable isotope models for estimating both foraging patterns and niches has provided the analytical tools to further assess the food webs of freshwater populations. One approach to refine predictions from these analyses is to add a third isotope to the more common two-isotope carbon and nitrogen mixing models, increasing the power to resolve different prey sources. We compared predictions made with two-isotope carbon and nitrogen mixing models and with three-isotope models that also included sulphur (δ34S) for the diets of Lake Ontario lake trout (Salvelinus namaycush). We determined the isotopic compositions of lake trout and potential prey fishes sampled from Lake Ontario and then used quantitative estimates of resource use generated by two- and three-isotope Bayesian mixing models (SIAR) to infer feeding patterns of lake trout. Both two- and three-isotope models indicated that alewife (Alosa pseudoharengus) and round goby (Neogobius melanostomus) were the primary prey items, but the three-isotope models were more consistent with recent measures of prey fish abundances and lake trout diets. The lake trout sampled directly from the hatcheries had isotopic compositions derived from the hatchery food, which were distinctly different from those derived from the natural prey sources. Those hatchery signals were retained for months after release, raising the possibility of distinguishing hatchery-reared yearlings from similarly sized naturally reproduced lake trout based on isotopic compositions. Addition of a third isotope yielded mixing model results confirming that round goby have become an important component of lake trout diet and may be overtaking alewife as a prey resource.
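
    The deterministic core of an n-tracer mixing model is a small linear system: with three tracers (δ13C, δ15N, δ34S) and four fractionation-corrected prey sources, the source proportions are exactly determined by mass balance. The sketch below solves that core for invented endmember and consumer values; the Bayesian SIAR model used in the study additionally propagates uncertainty in sources, fractionation and consumers rather than solving a single system.

      import numpy as np

      # Invented, fractionation-corrected source values: columns are four prey
      # sources, rows are the tracers d13C, d15N and d34S (illustration only).
      sources = np.array([
          [-21.0, -17.5, -24.0, -19.0],   # d13C
          [ 12.0,  14.5,  10.0,  16.0],   # d15N
          [  5.0,   9.0,   2.0,  12.0],   # d34S
      ])
      consumer = np.array([-20.05, 12.95, 6.6])   # mixture (lake trout) values

      # Mass balance: tracer values of the mixture are weighted averages of the
      # sources, and the weights (diet proportions) sum to one.
      A = np.vstack([sources, np.ones(sources.shape[1])])
      b = np.append(consumer, 1.0)
      proportions = np.linalg.solve(A, b)
      print(proportions)   # here ~[0.5, 0.3, 0.1, 0.1]; values outside [0, 1]
                           # would indicate a mixture outside the source polytope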

  10. Gluon-fusion Higgs production in the Standard Model Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Deutschmann, Nicolas; Duhr, Claude; Maltoni, Fabio; Vryonidou, Eleni

    2017-12-01

    We provide the complete set of predictions needed to achieve NLO accuracy in the Standard Model Effective Field Theory at dimension six for Higgs production in gluon fusion. In particular, we compute for the first time the contribution of the chromomagnetic operator \(\bar{Q}_L \Phi\, \sigma^{\mu\nu} q_R\, G_{\mu\nu}\) at NLO in QCD, which entails two-loop virtual and one-loop real contributions, as well as renormalisation and mixing with the Yukawa operator \(\Phi^\dagger\Phi\, \bar{Q}_L \Phi\, q_R\) and the gluon-fusion operator \(\Phi^\dagger\Phi\, G_{\mu\nu} G^{\mu\nu}\). Focusing on the top-quark-Higgs couplings, we consider the phenomenological impact of the NLO corrections in constraining the three relevant operators by implementing the results into the MadGraph5_aMC@NLO framework. This allows us to compute total cross sections as well as to perform event generation at NLO that can be directly employed in experimental analyses.

  11. Chronic Use of Aspirin and Total White Matter Lesion Volume: Results from the Women's Health Initiative Memory Study of Magnetic Resonance Imaging Study.

    PubMed

    Holcombe, Andrea; Ammann, Eric; Espeland, Mark A; Kelley, Brendan J; Manson, JoAnn E; Wallace, Robert; Robinson, Jennifer

    2017-10-01

    To investigate the relationship between aspirin and subclinical cerebrovascular health, we evaluated the effect of chronic aspirin use on white matter lesion (WML) volume among women. Chronic aspirin use was assessed in 1365 women who participated in the Women's Health Initiative Memory Study of Magnetic Resonance Imaging. Differences in WML volumes between aspirin users and nonusers were assessed with linear mixed models. A number of secondary analyses were performed, including lobe-specific analyses, subgroup analyses based on participants' overall risk of cerebrovascular disease, and a dose-response relationship analysis. The mean age of the women at magnetic resonance imaging examination was 77.6 years. Sixty-one percent of participants were chronic aspirin users. After adjusting for demographic variables and comorbidities, chronic aspirin use was nonsignificantly associated with 4.8% (95% CI: -6.8%, 17.9%) larger WML volumes. These null findings were confirmed in secondary and sensitivity analyses, including an active comparator evaluation where aspirin users were compared to users of nonaspirin nonsteroidal anti-inflammatory drugs or acetaminophen. There was a nonsignificant difference in WML volumes between aspirin users and nonusers. Further, our results suggest that chronic aspirin use may not have a clinically significant effect on WML volumes in women. Published by Elsevier Inc.

  12. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
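
    GLMM trees themselves are available in R (the glmertree package); the Python sketch below is only a rough illustration of the data situation they address. It simulates clustered data in which treatment B helps only one subgroup and then fits a linear mixed model with a pre-specified treatment-by-subgroup interaction; pre-specifying the subgroup is exactly what the tree algorithm is designed to avoid.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(42)
      n_clusters, n_per = 30, 20
      n = n_clusters * n_per
      df = pd.DataFrame({
          "cluster": np.repeat(np.arange(n_clusters), n_per),
          "treat": rng.integers(0, 2, n),            # 0 = treatment A, 1 = treatment B
          "severity": rng.normal(0, 1, n),           # potential partitioning covariate
      })
      subgroup = (df["severity"] > 0).astype(int)     # true (normally unknown) subgroup
      cluster_effect = np.repeat(rng.normal(0, 0.5, n_clusters), n_per)
      df["outcome"] = 0.8 * df["treat"] * subgroup + cluster_effect + rng.normal(0, 1, n)

      df["high_severity"] = subgroup
      m = smf.mixedlm("outcome ~ treat * high_severity", df, groups=df["cluster"]).fit()
      print(m.summary())   # the treat:high_severity term recovers the subgroup effect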

  13. Identifying Glacial Meltwater in the Amundsen Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Biddle, L. C.; Heywood, K. J.; Jenkins, A.; Kaiser, J.

    2016-02-01

    Pine Island Glacier, located in the Amundsen Sea, is losing mass rapidly due to relatively warm ocean waters melting its ice shelf from below. The resulting increase in meltwater production may be the root of the freshening in the Ross Sea over the last 30 years. Tracing the meltwater travelling away from the ice sheets is important in order to identify the regions most affected by the increased input of this water type. We use water mass characteristics (temperature, salinity, O2 concentration) derived from 105 CTD casts during the Ocean2ice cruise on RRS James Clark Ross in January-March 2014 to calculate meltwater fractions north of Pine Island Glacier. The data show maximum meltwater fractions at the ice front of up to 2.4 % and a plume of meltwater travelling away from the ice front along the 1027.7 kg m-3 isopycnal. We investigate the reliability of these results and attach uncertainties to the measurements made to ascertain the most reliable method of meltwater calculation in the Amundsen Sea. Processes such as atmospheric interaction and biological activity also affect the calculated apparent meltwater fractions. We analyse their effects on the reliability of the calculated meltwater fractions across the region using a bulk mixed layer model based on the one-dimensional Price-Weller-Pinkel model (Price et al., 1986). The model includes sea ice, dissolved oxygen concentrations and a simple respiration model, forced by NCEP climatology and an initial linear mixing profile between Winter Water (WW) and Circumpolar Deep Water (CDW). The model mimics the seasonal cycle of mixed layer warming and freshening and simulates how increases in sea ice formation and the influx of slightly cooler Lower CDW impact on the apparent meltwater fractions. These processes could result in biased meltwater signatures across the eastern Amundsen Sea.

  14. Identifying glacial meltwater in the Amundsen Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Biddle, Louise; Heywood, Karen; Jenkins, Adrian; Kaiser, Jan

    2016-04-01

    Pine Island Glacier, located in the Amundsen Sea, is losing mass rapidly due to relatively warm ocean waters melting its ice shelf from below. The resulting increase in meltwater production may be the root of the freshening in the Ross Sea over the last 30 years. Tracing the meltwater travelling away from the ice sheets is important in order to identify the regions most affected by the increased input of this water type. We use water mass characteristics (temperature, salinity, O2 concentration) derived from 105 CTD casts during the Ocean2ice cruise on RRS James Clark Ross in January-March 2014 to calculate meltwater fractions north of Pine Island Glacier. The data show maximum meltwater fractions at the ice front of up to 2.4 % and a plume of meltwater travelling away from the ice front along the 1027.7 kg m-3 isopycnal. We investigate the reliability of these results and attach uncertainties to the measurements made to ascertain the most reliable method of meltwater calculation in the Amundsen Sea. Processes such as atmospheric interaction and biological activity also affect the calculated apparent meltwater fractions. We analyse their effects on the reliability of the calculated meltwater fractions across the region using a bulk mixed layer model based on the one-dimensional Price-Weller-Pinkel model (1986). The model includes sea ice, dissolved oxygen concentrations and a simple respiration model, forced by NCEP climatology and an initial linear mixing profile between Winter Water (WW) and Circumpolar Deep Water (CDW). The model mimics the seasonal cycle of mixed layer warming and freshening and simulates how increases in sea ice formation and the influx of slightly cooler Lower CDW impact on the apparent meltwater fractions. These processes could result in biased meltwater signatures across the eastern Amundsen Sea.
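
    The core of the meltwater-fraction calculation in both records above is a linear endmember mixing problem: each tracer (potential temperature, salinity, dissolved oxygen) of a sample is written as a weighted sum of the endmember properties of Circumpolar Deep Water, Winter Water and glacial meltwater, with weights that sum to one. The Python sketch below solves that system with purely illustrative endmember values, not the ones derived from the Ocean2ice data.

      import numpy as np

      # Columns: CDW, WW, glacial meltwater; all endmember values are illustrative.
      endmembers = np.array([
          [  1.2,  -1.8,  -90.8],   # potential temperature (degC; meltwater value includes latent heat)
          [ 34.7,  34.0,    0.0],   # salinity
          [180.0, 320.0, 1400.0],   # dissolved oxygen (umol/kg)
      ])
      sample = np.array([-0.9, 34.08, 247.0])    # a hypothetical CTD sample

      # Append the requirement that the three fractions sum to one and solve
      # the resulting 4x3 system in the least-squares sense.
      A = np.vstack([endmembers, np.ones(3)])
      b = np.append(sample, 1.0)
      fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
      for name, f in zip(["CDW", "WW", "meltwater"], fractions):
          print(f"{name:10s} {100 * f:6.2f} %")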

  15. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    NASA Astrophysics Data System (ADS)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

    A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the bottom blowing is simulated with a Lagrange-based discrete phase model that embeds the local volume change of rising bubbles. A filter-based turbulence method based on the local mesh resolution is proposed to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the underlying reasons for the mixing results are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of bath flow structure.

  16. Some analyses of the chemistry and diffusion of SST exhaust materials during phase 3 of the wake period. [in lower stratosphere

    NASA Technical Reports Server (NTRS)

    Hilst, G. R.; Donaldson, C. D.; Contiliano, R. M.

    1973-01-01

    In the generally stably stratified lower stratosphere, SST exhaust plumes could spend a significant length of time in a relatively undispersed state. This effort has utilized invariant modeling techniques to simulate the separate and combined effects of atmospheric turbulence, turbulent diffusion, and chemical reactions of SST exhaust materials in the lower stratosphere. The primary results to date are: (1) The combination of relatively slow diffusive mixing and rapid chemical reactions during the Phase III wake period minimizes the effect of SST exhausts on O3 depletion by the so-called NOx catalytic cycle. While the SST-produced NO is substantially above background concentrations, it appears that diffusive mixing of NO and O3 is simply too slow to produce the O3 depletions originally proposed. (2) The time required to dilute the SST exhaust plume may be a significant fraction of the total time these materials are resident in the lower stratosphere. If this is the case, then prior estimates of the environmental impact of these materials must be revised significantly downward.

  17. Coastal Ocean Variability Off the Coast of Taiwan in Response toTyphoon Morakot: River Forcing, Atmospheric Forcing, and Cold Dome Dynamics

    DTIC Science & Technology

    2014-09-01

    ...very short time period, and in this research we model and study the effects of this rainfall on Taiwan's coastal oceans as a result of river discharge. We do this through the use of a river discharge...

  18. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  19. Using Mixed-Effects Structural Equation Models to Study Student Academic Development.

    ERIC Educational Resources Information Center

    Pike, Gary R.

    1992-01-01

    A study at the University of Tennessee Knoxville used mixed-effect structural equation models incorporating latent variables as an alternative to conventional methods of analyzing college students' (n=722) first-year-to-senior academic gains. Results indicate, contrary to previous analysis, that coursework and student characteristics interact to…

  20. Effect of an oral healthcare protocol in nursing homes on care staffs' knowledge and attitude towards oral health care: a cluster-randomised controlled trial.

    PubMed

    Janssens, Barbara; De Visschere, Luc; van der Putten, Gert-Jan; de Lugt-Lustig, Kersti; Schols, Jos M G A; Vanobbergen, Jacques

    2016-06-01

    To explore the impact of a supervised implementation of an oral healthcare protocol, in addition to education, on nurses' and nurses' aides' oral health-related knowledge and attitude. A random sample of 12 nursing homes, accommodating a total of 120-150 residents, was obtained using stratified cluster sampling with replacement. The intervention included the implementation of an oral healthcare protocol and three different educational stages. One of the investigators supervised the implementation process, supported by a dental hygienist. A 34-item questionnaire was developed and validated to evaluate the knowledge and attitude of nurses and nurses' aides at baseline and 6 months after the start of the intervention. Linear mixed-model analyses were performed to explore differences in knowledge and attitude at 6 months after implementation. At baseline, no significant differences were observed between the intervention and the control group for both knowledge (p = 0.42) and attitude (p = 0.37). Six months after the start of the intervention, significant differences were found between the intervention and the control group for the variable knowledge in favour of the intervention group (p < 0.0001) but not for the variable attitude (p = 0.78). From the mixed model with attitude as the dependent variable, it can be concluded that age (p = 0.031), educational level (p = 0.009) and ward type (p = 0.014) have a significant effect. The mixed model with knowledge as the dependent variable resulted in a significant effect of the intervention (p = 0.001) and the educational level (p = 0.009). The supervised implementation of an oral healthcare protocol significantly increased the knowledge of nurses and nurses' aides. In contrast, no significant improvements could be demonstrated in attitude. © 2014 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.

  1. Study on processing immiscible materials in zero gravity

    NASA Technical Reports Server (NTRS)

    Reger, J. L.; Mendelson, R. A.

    1975-01-01

    An experimental investigation was conducted to evaluate mixing immiscible metal combinations under several process conditions. Under one gravity, these included thermal processing, thermal plus electromagnetic mixing, and thermal plus acoustic mixing. The same process methods were applied during free fall in the MSFC drop tower facility. The design of the drop tower apparatus providing the electromagnetic and acoustic mixing equipment is included, and a thermal model was prepared to design the specimen and cooling procedure. The material systems studied were Ca-La, Cd-Ga and Al-Bi; evaluation of the processed samples included morphology and electronic property measurements. The morphology was characterized using optical and scanning electron microscopy and microprobe analyses. Electronic property characterization of the superconducting transition temperatures was made using an impedance-change tuned-coil method.

  2. Budget model can aid group practice planning.

    PubMed

    Bender, A D

    1991-12-01

    A medical practice can enhance its planning by developing a budgetary model to test effects of planning assumptions on its profitability and cash requirements. A model focusing on patient visits, payment mix, patient mix, and fee and payment schedules can help assess effects of proposed decisions. A planning model is not a substitute for planning but should complement a plan that includes mission, goals, values, strategic issues, and different outcomes.

  3. Modeling Individual Differences in Within-Person Variation of Negative and Positive Affect in a Mixed Effects Location Scale Model Using BUGS/JAGS

    ERIC Educational Resources Information Center

    Rast, Philippe; Hofer, Scott M.; Sparks, Catharine

    2012-01-01

    A mixed effects location scale model was used to model and explain individual differences in within-person variability of negative and positive affect across 7 days (N=178) within a measurement burst design. The data come from undergraduate university students and are pooled from a study that was repeated at two consecutive years. Individual…

  4. Procyanidins improve some disrupted glucose homoeostatic situations: an analysis of doses and treatments according to different animal models.

    PubMed

    Pinent, Montserrat; Cedó, Lidia; Montagut, Gemma; Blay, Mayte; Ardévol, Anna

    2012-01-01

    This review analyses the potential beneficial effects of procyanidins, the main class of flavonoids, in situations in which glucose homeostasis is disrupted. Because the disruption of glucose homeostasis can occur as a result of various causes, we critically review the effects of procyanidins based on the specific origin of each type of disruption. Where little or no insulin is present (Type I diabetic animals), summarized studies of procyanidin treatment suggest that procyanidins have a short-lived insulin-mimetic effect on the internal targets of the organism, an effect not reproduced in normoglycemic, normoinsulinemic healthy animals. Insulin resistance (usually linked to hyperinsulinemia) poses a very different situation. Preventive studies using fructose-fed models indicate that procyanidins may be useful in preventing the induction of damage and thus in limiting hyperglycemia. But the results of other studies using models such as high-fat diet treated rats or genetically obese animals are controversial. Although the effects on glucose parameters are hazy, it is known that procyanidins target key tissues involved in its homeostasis. Interestingly, all available data suggest that procyanidins are more effective when administered in one acute load than when mixed with food.

  5. Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model

    NASA Astrophysics Data System (ADS)

    Megann, A.; Nurser, G.

    2014-12-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, and have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimations have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.

  6. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.

  7. Interspecies Mixed-Effect Pharmacokinetic Modeling of Penicillin G in Cattle and Swine

    PubMed Central

    Li, Mengjie; Gehring, Ronette; Tell, Lisa; Baynes, Ronald; Huang, Qingbiao

    2014-01-01

    Extralabel drug use of penicillin G in food-producing animals may cause an excess of residues in tissue, which has the potential to harm human health. Of all the antibiotics, penicillin G may have the greatest potential for producing allergic responses in consumers of food animal products. There are, however, no population pharmacokinetic studies of penicillin G for food animals. The objective of this study was to develop a population pharmacokinetic model to describe the time-concentration data profile of penicillin G across two species. Data were collected from previously published pharmacokinetic studies in which several formulations of penicillin G were administered to diverse populations of cattle and swine. Liver, kidney, and muscle residue data were also used in this study. Compartmental models with first-order absorption and elimination were fit to plasma and tissue concentrations using a nonlinear mixed-effect modeling approach. A 3-compartment model with extra tissue compartments was selected to describe the pharmacokinetics of penicillin G. Typical population parameter estimates (interindividual variability) were central volumes of distribution of 3.45 liters (12%) and 3.05 liters (8.8%) and central clearances of 105 liters/h (32%) and 16.9 liters/h (14%) for cattle and swine, respectively, with peripheral clearances of 24.8 liters/h (13%) and 9.65 liters/h (23%) for cattle and 13.7 liters/h (85%) and 0.52 liters/h (40%) for swine. Body weight and age were the covariates in the final pharmacokinetic models. This study established a robust model of penicillin for a large and diverse population of food-producing animals which could be applied to other antibiotics and species in future analyses. PMID:24867969
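
    As a very reduced illustration of the population (mixed-effect) PK idea, the sketch below simulates concentration-time curves from a one-compartment model with first-order absorption, placing between-animal variability on clearance as a log-normal random effect. All parameter values are invented; the published analysis fitted a three-compartment model with additional tissue compartments using nonlinear mixed-effect software.

      import numpy as np

      def one_cmt_oral(t, dose, ka, cl, v):
          """Plasma concentration for first-order absorption and elimination."""
          ke = cl / v
          return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 24.0, 49)                  # hours
      dose, ka, v = 15000.0, 0.9, 3.4                  # arbitrary units, 1/h, litres (illustrative)
      cl_pop, omega_cl = 17.0, 0.3                     # typical clearance and its between-animal SD (log scale)

      for animal in range(5):
          cl_i = cl_pop * np.exp(rng.normal(0.0, omega_cl))   # individual clearance
          conc = one_cmt_oral(t, dose, ka, cl_i, v)
          print(f"animal {animal}: CL = {cl_i:5.1f} L/h, Cmax = {conc.max():8.1f}")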

  8. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
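
    The D-optimality criterion itself can be demonstrated on a toy problem. For a plain logistic dose-response p(d) = expit(b0 + b1 d), the Fisher information of a design is the sum over dose groups of n_i p_i (1 - p_i) x_i x_i', and the D-optimal design maximizes its determinant. The sketch below ranks candidate three-dose designs this way; the sleep model in the article is far richer (state-dependent Markov transition probabilities and weighted component FIMs), which this toy does not attempt.

      import itertools
      import numpy as np

      def fim_logistic(doses, n_per_dose, b0, b1):
          """Fisher information of a logistic dose-response design."""
          info = np.zeros((2, 2))
          for d in doses:
              p = 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))
              x = np.array([1.0, d])
              info += n_per_dose * p * (1.0 - p) * np.outer(x, x)
          return info

      b0, b1 = -2.0, 0.4                       # assumed "true" parameters
      candidates = [0.0, 0.1, 1.0, 6.0, 10.0, 20.0]

      # Brute-force search: the D-optimal 3-dose subset maximizes det(FIM).
      best = max(itertools.combinations(candidates, 3),
                 key=lambda doses: np.linalg.det(fim_logistic(doses, 20, b0, b1)))
      print("D-optimal 3-dose design from the candidate set:", best)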

  9. Sexuality education in a representative sample of Portuguese schools: examining the impact of legislation.

    PubMed

    Rocha, Ana Cristina; Duarte, Cidália

    2015-02-01

    To share Portugal's experience with school-based sexuality education, and to describe its implementation at a local level, following an ecological model and using a mixed methodology approach. The study also examines the impact of the latest policies put into effect, identifying potential weaknesses and strengths affecting the effectiveness of sexuality education enforcement. A representative sample of 296 schools in Portugal was analysed. Teachers representing the school completed a questionnaire and were asked to share any kind of official document from their sexuality education project (such as curriculum content). A subsample of these documents was analysed by two coders. Quantitative analysis was carried out using descriptive statistics. The majority of Portuguese schools delivered sexuality education, in line with Portuguese technical guidelines and international recommendations. There were common procedures in planning, implementation and evaluation of sexuality education. Some strengths and weaknesses were identified. Results highlighted the impact of the various systems on the planning, enforcement and evaluation of sexuality education in school. The latest policies introduced valuable changes in school-based sexuality education. A way of assessing effectiveness of sexuality education is still needed.

  10. An analysis of the rotational, fine and hyperfine effects in the (0, 0) band of the A7Π-X7Σ+ transition of manganese monohydride, MnH

    NASA Astrophysics Data System (ADS)

    Gengler, Jamie J.; Steimle, Timothy C.; Harrison, Jeremy J.; Brown, John M.

    2007-02-01

    High-resolution (±0.003 cm-1), laser-induced fluorescence (LIF) spectra of a supersonic molecular beam sample of manganese monohydride, MnH, have been recorded in the 17500-17800 cm-1 region of the (0, 0) band of the A7Π-X7Σ+ system. The low-N branch features were modeled successfully by inclusion of the magnetic hyperfine mixings of spin components within a given low-N rotational level using a traditional 'effective' Hamiltonian approach. An improved set of spectroscopic constants has been extracted and compared with those from previous analyses. The optimum optical features for future optical Stark and Zeeman measurements are identified.

  11. Comparing colon cancer outcomes: The impact of low hospital case volume and case-mix adjustment.

    PubMed

    Fischer, C; Lingsma, H F; van Leersum, N; Tollenaar, R A E M; Wouters, M W; Steyerberg, E W

    2015-08-01

    When comparing performance across hospitals it is essential to consider the noise caused by low hospital case volume and to perform adequate case-mix adjustment. We aimed to quantify the role of noise and case-mix adjustment on standardized postoperative mortality and anastomotic leakage (AL) rates. We studied 13,120 patients who underwent colon cancer resection in 85 Dutch hospitals. We addressed differences between hospitals in postoperative mortality and AL, using fixed (ignoring noise) and random effects (incorporating noise) logistic regression models with general and additional, disease specific, case-mix adjustment. Adding disease specific variables improved the performance of the case-mix adjustment models for postoperative mortality (c-statistic increased from 0.77 to 0.81). The overall variation in standardized mortality ratios was similar, but some individual hospitals changed considerably. For the standardized AL rates the performance of the adjustment models was poor (c-statistic 0.59 and 0.60) and overall variation was small. Most of the observed variation between hospitals was actually noise. Noise had a larger effect on hospital performance than extended case-mix adjustment, although some individual hospital outcome rates were affected by more detailed case-mix adjustment. To compare outcomes between hospitals it is crucial to consider noise due to low hospital case volume with a random effects model. Copyright © 2015 Elsevier Ltd. All rights reserved.
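
    The role of noise from low case volume can be illustrated with a small shrinkage example: a random-effects (empirical Bayes) estimate pulls each hospital's raw mortality rate toward the overall mean, more strongly for hospitals with few cases. The normal-approximation sketch below is only a caricature of the fixed- versus random-effects logistic models with case-mix adjustment used in the article.

      import numpy as np

      rng = np.random.default_rng(7)
      n_hosp = 85
      volume = rng.integers(20, 500, n_hosp)           # cases per hospital
      true_rate = 0.05                                  # identical true mortality everywhere
      deaths = rng.binomial(volume, true_rate)
      raw_rate = deaths / volume

      overall = deaths.sum() / volume.sum()
      sampling_var = overall * (1 - overall) / volume   # within-hospital noise
      between_var = 0.0002                              # assumed between-hospital variance

      # Shrink each raw rate toward the overall mean in proportion to its noise.
      weight = between_var / (between_var + sampling_var)
      shrunken = overall + weight * (raw_rate - overall)

      print(f"raw-rate spread     : {raw_rate.std():.4f}")
      print(f"shrunken-rate spread: {shrunken.std():.4f}")   # much of the raw spread was noise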

  12. Nitrogen Dioxide Exposure and Airway Responsiveness in Individuals with Asthma

    EPA Science Inventory

    Controlled human exposure studies evaluating the effect of inhaled NO2 on the inherent responsiveness of the airways to challenge by bronchoconstricting agents have had mixed results. In general, existing meta-analyses show statistically significant effects of NO2 on the airway r...

  13. Effects of imperfect mixing on low-density polyethylene reactor dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, C.M.; Dihora, J.O.; Ray, W.H.

    1998-07-01

    Earlier work considered the effect of feed conditions and controller configuration on the runaway behavior of LDPE autoclave reactors assuming a perfectly mixed reactor. This study provides additional insight on the dynamics of such reactors by using an imperfectly mixed reactor model and bifurcation analysis to show the changes in the stability region when there is imperfect macroscale mixing. The presence of imperfect mixing substantially increases the range of stable operation of the reactor and makes the process much easier to control than for a perfectly mixed reactor. The results of model analysis and simulations are used to identify some of the conditions that lead to unstable reactor behavior and to suggest ways to avoid reactor runaway or reactor extinction during grade transitions and other process operation disturbances.

  14. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
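
    Estimation of these models requires the MCEM machinery described above, but the model structure is easy to convey by simulating from a simplified, discrete-time Markov (rather than semi-Markov) version: a hidden growth state switches according to a transition matrix, and each observation combines a state-specific climate effect with a tree-specific random effect. All values in the sketch are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      n_trees, n_years = 10, 40
      P = np.array([[0.9, 0.1],             # state transition matrix (slow -> slow/fast, ...)
                    [0.2, 0.8]])
      intercept = np.array([2.0, 6.0])      # slow vs. fast growth phase
      climate_beta = np.array([0.3, 1.0])   # state-specific effect of the climatic covariate
      climate = rng.normal(0.0, 1.0, n_years)
      tree_effect = rng.normal(0.0, 0.8, n_trees)   # individual (random) component

      shoots = np.zeros((n_trees, n_years))
      for i in range(n_trees):
          state = 0
          for t in range(n_years):
              shoots[i, t] = (intercept[state] + climate_beta[state] * climate[t]
                              + tree_effect[i] + rng.normal(0.0, 0.5))
              state = rng.choice(2, p=P[state])
      print(shoots.round(2)[:2])   # first two simulated trees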

  15. Estimation of the linear mixed integrated Ornstein–Uhlenbeck model

    PubMed Central

    Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate

    2017-01-01

    The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and, using simulations, we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
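
    The IOU process at the heart of this model is straightforward to simulate, which helps build intuition for the "degree of derivative tracking" it estimates: the subject-level derivative follows an Ornstein-Uhlenbeck process and the IOU term is its running integral. The Euler-Maruyama sketch below is purely illustrative and unrelated to the Stata implementation described in the article.

      import numpy as np

      def simulate_iou(alpha, sigma, t_max=10.0, dt=0.01, seed=0):
          """Simulate an integrated OU path by integrating an OU 'velocity'."""
          rng = np.random.default_rng(seed)
          n = int(t_max / dt)
          w = 0.0          # derivative (OU) process
          x = np.zeros(n)  # integrated process
          for i in range(1, n):
              w += -alpha * w * dt + sigma * np.sqrt(dt) * rng.normal()
              x[i] = x[i - 1] + w * dt
          return x

      smooth = simulate_iou(alpha=0.2, sigma=1.0)   # strong derivative tracking
      rough = simulate_iou(alpha=5.0, sigma=1.0)    # closer to integrated white noise
      print(smooth[-1], rough[-1])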

  16. The health benefits of secondary education in adolescents and young adults: An international analysis in 186 low-, middle- and high-income countries from 1990 to 2013.

    PubMed

    Viner, Russell M; Hargreaves, Dougal S; Ward, Joseph; Bonell, Chris; Mokdad, Ali H; Patton, George

    2017-12-01

    The health benefits of secondary education have been little studied. We undertook country-level longitudinal analyses of the impact of lengthening secondary education on health outcomes amongst 15-24 year olds. Exposures: average length of secondary and primary education from 1980 to 2013. Data/Outcomes: country-level adolescent fertility rate (AFR), HIV prevalence and mortality rate from 1989/90 to 2013 across 186 low-, middle- and high-income countries. Analysis: longitudinal mixed effects models, entering secondary and primary education together, adjusted for time-varying GDP and country income status; longitudinal structural marginal models using inverse probability weighting (IPW) to take account of time-varying confounding by primary education and GDP. Counterfactual scenarios of no change in secondary education since 1980/1990 were estimated from model coefficients for each outcome. Each additional year of secondary education decreased AFR by 8.4% in mixed effects models and 14.6% in IPW models, independent of primary education and GDP. Counterfactual analyses showed that the proportion of the reduction in adolescent fertility rate over the study period independently attributable to secondary education was 28% in low-income countries. Each additional year of secondary education reduced mortality by 16.9% for 15-19 year and 14.8% for 20-24 year old young women, and by 11.4% for 15-19 year and 8.8% for 20-24 year old young men. Counterfactual scenarios suggested 12% and 23% of the mortality reduction for 15-19 and 20-24 year old young men, respectively, were attributable to secondary education in low-income countries. Each additional year of secondary education was associated with a 24.5% and 43.1% reduction in HIV prevalence amongst young men and women, respectively. The health benefits associated with secondary education were greater than those of primary education and were greatest amongst young women and those from low-income countries. Secondary education has the potential to be a social vaccine across many outcomes in low- and middle-income countries.

  17. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    PubMed

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors: x₁, a formulation factor (the amount of magnesium stearate), and x₂, a processing factor (mixing time). Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate the effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g and 2.76 min (mixing time) for a 100 tablet batch, and 1.54 g and 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g and 7.99 min for a 100 tablet batch, and 1.54 g and 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence for obtaining optimum formulations through a systematic and reliable experimental design method.

  18. Effect of shroud geometry on the effectiveness of a short mixing stack gas eductor model

    NASA Astrophysics Data System (ADS)

    Kavalis, A. E.

    1983-06-01

    An existing apparatus for testing models of gas eductor systems using high temperature primary flow was modified to provide improved control and performance over a wide range of gas temperatures and flow rates. Secondary flow pumping, temperature and pressure data were recorded for two gas eductor system models. The first, previously tested under hot flow conditions, consists of a primary plate with four tilted-angled nozzles and a slotted, shrouded mixing stack with two diffuser rings (overall L/D = 1.5). A portable pyrometer with a surface probe was used for the second model in order to identify any hot spots at the external surface of the mixing stack, shroud and diffuser rings. The second model is shown to have almost the same mixing and pumping performance as the first one but to exhibit much lower shroud and diffuser surface temperatures.

  19. A joint modeling and estimation method for multivariate longitudinal data with mixed types of responses to analyze physical activity data generated by accelerometers.

    PubMed

    Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E

    2017-11-10

    A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Trends and social differentials in child mortality in Rwanda 1990–2010: results from three demographic and health surveys

    PubMed Central

    Musafili, Aimable; Essén, Birgitta; Baribwira, Cyprien; Binagwaho, Agnes; Persson, Lars-Åke; Selling, Katarina Ekholm

    2015-01-01

    Background Rwanda has embarked on ambitious programmes to provide equitable health services and reduce mortality in childhood. Evidence from other countries indicates that advances in child survival often have come at the expense of increasing inequity. Our aims were to analyse trends and social differentials in mortality before the age of 5 years in Rwanda from 1990 to 2010. Methods We performed secondary analyses of data from three Demographic and Health Surveys conducted in 2000, 2005 and 2010 in Rwanda. These surveys included 34 790 children born between 1990 and 2010 to women aged 15–49 years. The main outcome measures were neonatal mortality rates (NMR) and under-5 mortality rates (U5MR) over time, and in relation to mother's educational level, urban or rural residence and household wealth. Generalised linear mixed effects models and a mixed effects Cox model (frailty model) were used, with adjustments for confounders and cluster sampling method. Results Mortality rates in Rwanda peaked in 1994 at the time of the genocide (NMR 60/1000 live births, 95% CI 51 to 65; U5MR 238/1000 live births, 95% CI 226 to 251). The 1990s and the first half of the 2000s were characterised by a marked rural/urban divide and inequity in child survival between maternal groups with different levels of education. Towards the end of the study period (2005–2010) NMR had been reduced to 26/1000 (95% CI 23 to 29) and U5MR to 65/1000 (95% CI 61 to 70), with little or no difference between urban and rural areas, and household wealth groups, while children of women with no education still had significantly higher U5MR. Conclusions Recent reductions in child mortality in Rwanda have concurred with improved social equity in child survival. Current challenges include the prevention of newborn deaths. PMID:25870163

  1. A review of 241 subjects who were patch tested twice: could fragrance mix I cause active sensitization?

    PubMed

    White, J M L; McFadden, J P; White, I R

    2008-03-01

    Active patch test sensitization is an uncommon phenomenon which may have undesirable consequences for those undergoing this gold-standard investigation for contact allergy. To perform a retrospective analysis of the results of 241 subjects who were patch tested twice in a monocentre evaluating approximately 1500 subjects per year. Positivity to 11 common allergens in the recommended Baseline Series of contact allergens (European) was analysed: nickel sulphate; Myroxylon pereirae; fragrance mix I; para-phenylenediamine; colophonium; epoxy resin; neomycin; quaternium-15; thiuram mix; sesquiterpene lactone mix; and para-tert-butylphenol resin. Only fragrance mix I gave a statistically significant, increased rate of positivity on the second reading compared with the first (P=0.011). This trend was maintained when separately analysing a subgroup of 42 subjects who had been repeat patch tested within 1 year; this analysis was done to minimize the potential confounding factor of increased usage of fragrances with a wide interval between both tests. To reduce the confounding effect of age on our data, we calculated expected frequencies of positivity to fragrance mix I based on previously published data from our centre. This showed a marked excess of observed cases over predicted ones, particularly in women in the age range 40-60 years. We suspect that active sensitization to fragrance mix I may occur. Similar published analysis from another large group using standard methodology supports our data.

  2. Socioeconomic Strata, Mobile Technology, and Education: A Comparative Analysis

    ERIC Educational Resources Information Center

    Kim, Paul; Hagashi, Teresita; Carillo, Laura; Gonzales, Irina; Makany, Tamas; Lee, Bommi; Garate, Alberto

    2011-01-01

    Mobile devices are highly portable, easily distributable, substantially affordable, and have the potential to be pedagogically complementary resources in education. This study, incorporating mixed method analyses, discusses the implications of a mobile learning technology-based learning model in two public primary schools near the Mexico-USA…

  3. Modelling the vertical distribution of Prochlorococcus and Synechococcus in the North Pacific Subtropical Ocean.

    PubMed

    Rabouille, Sophie; Edwards, Christopher A; Zehr, Jonathan P

    2007-10-01

    A simple model was developed to examine the vertical distribution of Prochlorococcus and Synechococcus ecotypes in the water column, based on their adaptation to light intensity. Model simulations were compared with a 14-year time series of Prochlorococcus and Synechococcus cell abundances at Station ALOHA in the North Pacific Subtropical Gyre. Data were analysed to examine spatial and temporal patterns in abundances and their ranges of variability in the euphotic zone, the surface mixed layer and the layer in the euphotic zone but below the base of the mixed layer. Model simulations show that the apparent occupation of the whole euphotic zone by a genus can be the result of a co-occurrence of different ecotypes that segregate vertically. The segregation of ecotypes can result simply from differences in light response. A sensitivity analysis of the model, performed on the parameter alpha (initial slope of the light-response curve) and the DIN concentration in the upper water column, demonstrates that the model successfully reproduces the observed range of vertical distributions. Results support the idea that intermittent mixing events may have important ecological and geochemical impacts on the phytoplankton community at Station ALOHA.
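
    The mechanism the model relies on can be shown with a generic saturating light-response curve and exponentially attenuated irradiance: an ecotype with a high maximum growth rate but low initial slope dominates near the surface, while a low-light-adapted ecotype with a steeper initial slope takes over at depth. The functional form and every parameter value below are assumptions for illustration, not those of the published model.

      import numpy as np

      def growth(irradiance, mu_max, alpha):
          """Saturating light-response curve: initial slope alpha, plateau mu_max."""
          return mu_max * (1.0 - np.exp(-alpha * irradiance / mu_max))

      depth = np.linspace(0.0, 200.0, 201)             # m
      surface_par, k_d = 1500.0, 0.04                  # umol photons m-2 s-1, attenuation m-1
      par = surface_par * np.exp(-k_d * depth)         # light decays exponentially with depth

      high_light = growth(par, mu_max=0.6, alpha=0.002)   # surface-adapted ecotype
      low_light = growth(par, mu_max=0.4, alpha=0.02)     # low-light-adapted ecotype

      # Shallowest depth at which the low-light ecotype outgrows the other one.
      crossover = depth[np.argmax(low_light > high_light)]
      print(f"ecotype dominance switches near {crossover:.0f} m")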

  4. The effects of run-of-river hydroelectric power schemes on invertebrate community composition in temperate streams and rivers.

    PubMed

    Bilotta, Gary S; Burnside, Niall G; Turley, Matthew D; Gray, Jeremy C; Orr, Harriet G

    2017-01-01

    Run-of-river (ROR) hydroelectric power (HEP) schemes are often presumed to be less ecologically damaging than large-scale storage HEP schemes. However, there is currently limited scientific evidence on their ecological impact. The aim of this article is to investigate the effects of ROR HEP schemes on communities of invertebrates in temperate streams and rivers, using a multi-site Before-After, Control-Impact (BACI) study design. The study makes use of routine environmental surveillance data collected as part of long-term national and international monitoring programmes at 22 systematically-selected ROR HEP schemes and 22 systematically-selected paired control sites. Five widely-used family-level invertebrate metrics (richness, evenness, LIFE, E-PSI, WHPT) were analysed using a linear mixed effects model. The analyses showed that there was a statistically significant effect (p<0.05) of ROR HEP construction and operation on the evenness of the invertebrate community. However, no statistically significant effects were detected on the four other metrics of community composition. The implications of these findings are discussed in this article and recommendations are made for best-practice study design for future invertebrate community impact studies.

  5. The effects of run-of-river hydroelectric power schemes on invertebrate community composition in temperate streams and rivers

    PubMed Central

    2017-01-01

    Run-of-river (ROR) hydroelectric power (HEP) schemes are often presumed to be less ecologically damaging than large-scale storage HEP schemes. However, there is currently limited scientific evidence on their ecological impact. The aim of this article is to investigate the effects of ROR HEP schemes on communities of invertebrates in temperate streams and rivers, using a multi-site Before-After, Control-Impact (BACI) study design. The study makes use of routine environmental surveillance data collected as part of long-term national and international monitoring programmes at 22 systematically-selected ROR HEP schemes and 22 systematically-selected paired control sites. Five widely-used family-level invertebrate metrics (richness, evenness, LIFE, E-PSI, WHPT) were analysed using a linear mixed effects model. The analyses showed that there was a statistically significant effect (p<0.05) of ROR HEP construction and operation on the evenness of the invertebrate community. However, no statistically significant effects were detected on the four other metrics of community composition. The implications of these findings are discussed in this article and recommendations are made for best-practice study design for future invertebrate community impact studies. PMID:28158282
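
    The statistical core of both records above is a BACI-style linear mixed model in which the quantity of interest is the period-by-scheme interaction, with a random intercept for each monitoring site. The sketch below reproduces that structure on simulated data with invented column names; the published analysis covered five invertebrate metrics and additional survey structure.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(11)
      n_sites = 44                                     # 22 scheme sites + 22 paired controls
      site = np.repeat(np.arange(n_sites), 2)          # one before and one after sample each
      impact = np.repeat((np.arange(n_sites) < 22).astype(int), 2)
      after = np.tile([0, 1], n_sites)

      site_effect = np.repeat(rng.normal(0, 0.1, n_sites), 2)
      evenness = (0.7 + site_effect - 0.05 * after * impact
                  + rng.normal(0, 0.05, len(site)))    # small decline at impact sites only

      df = pd.DataFrame({"site": site, "impact": impact, "after": after,
                         "evenness": evenness})
      m = smf.mixedlm("evenness ~ after * impact", df, groups=df["site"]).fit()
      print(m.params["after:impact"])                  # the BACI effect estimate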

  6. Innovative Equipment and Production Method for Mixed Fodder in the Conditions of Agricultural Enterprises

    NASA Astrophysics Data System (ADS)

    Sabiev, U. K.; Demchuk, E. V.; Myalo, V. V.; Soyunov, A. S.

    2017-07-01

    It is recommended to feed cattle and poultry grain fodder in the form of a feed mixture balanced in its nutrient content. Feeding grain fodder as plain stock feed is inefficient and economically unjustified. The article is devoted to a topical problem: the preparation of mixed fodder on agricultural enterprises. A review and critical analysis of mixed fodder assemblies and aggregates is given. Structural and technical schemes of a small-size mixed fodder aggregate with intensified vibrating and percussive attachments for preparing bulk feed mixtures at agricultural enterprises were developed. A mixed fodder aggregate for preparation at the place of direct consumption, using the enterprise's own grain and purchased protein and vitamin supplements, is also suggested. The aggregate produces mixed fodder of high uniformity at a low energy cost and production price, which makes it profitable for livestock breeding. A model line-up of the suggested mixed fodder aggregate with different capacities, for both small and large agricultural enterprises, is considered.

  7. Payment schemes and cost efficiency: evidence from Swiss public hospitals.

    PubMed

    Meyer, Stefan

    2015-03-01

    This paper aims at analysing the impact of prospective payment schemes on cost efficiency of acute care hospitals in Switzerland. We study a panel of 121 public hospitals subject to one of four payment schemes. While several hospitals are still reimbursed on a per diem basis for the treatment of patients, most face flat per-case rates or mixed schemes, which combine both elements of reimbursement. Thus, unlike previous studies, we are able to simultaneously analyse and isolate the cost-efficiency effects of different payment schemes. By means of stochastic frontier analysis, we first estimate a hospital cost frontier. Using the two-stage approach proposed by Battese and Coelli (Empir Econ 20:325-332, 1995), we then analyse the impact of these payment schemes on the cost efficiency of hospitals. Controlling for hospital characteristics, local market conditions in the 26 Swiss states (cantons), and a time trend, we show that, compared to per diem, hospitals which are reimbursed by flat payment schemes perform better in terms of cost efficiency. Our results suggest that mixed schemes create incentives for cost containment as well, although to a lesser extent. In addition, our findings indicate that cost-efficient hospitals are primarily located in cantons with competitive markets, as measured by the Herfindahl-Hirschman index in inpatient care. Furthermore, our econometric model shows that we obtain biased estimates from frontier analysis if we do not account for heteroscedasticity in the inefficiency term.

  8. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by an asymmetric distribution for model errors. To deal with missingness, we employ an informative missing-data model. The joint models couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing-data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  9. Impact of Antarctic mixed-phase clouds on climate.

    PubMed

    Lawson, R Paul; Gettelman, Andrew

    2014-12-23

    Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 W m-2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than -20 °C.

  10. Impact of Antarctic mixed-phase clouds on climate

    PubMed Central

    Lawson, R. Paul; Gettelman, Andrew

    2014-01-01

    Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 Wm−2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than −20 °C. PMID:25489069

  11. Effect of different mixing and placement methods on the quality of MTA apical plug in simulated apexification model.

    PubMed

    Ghasemi, Negin; Janani, Maryam; Razi, Tahmineh; Atharmoghaddam, Faezeh

    2017-03-01

    It is necessary for the apical plug material to exhibit proper adaptation to the root canal walls. The presence of voids at the interface between the root canal wall and this material results in microleakage, which might be related to post-treatment disease. The aim of the present study was to evaluate the effect of different mixing (manual and ultrasonic) and placement (manual and manual in association with indirect ultrasonic) methods for Mineral Trioxide Aggregate (MTA) on void count and dimensions in the apical plug in natural teeth with simulated open apices. Eighty human maxillary central incisors were selected. After simulation of the open apex model, the teeth were assigned to 4 groups based on the mixing and placement techniques of MTA: group 1, manual mixing and manual placement; group 2, manual mixing and manual placement in association with indirect ultrasonic; group 3, ultrasonic mixing and manual placement; and group 4, ultrasonic mixing and manual placement in association with indirect ultrasonic. The prepared samples were placed within gypsum sockets in which the periodontal ligament was reconstructed with polyether impression material. In group 1, after mixing, the material was condensed with a hand plugger. In group 2, after mixing, the ultrasonic tip was placed in contact with the hand plugger for 2 seconds. In groups 3 and 4, mixing was carried out with the ultrasonic tip for 5 seconds, and the materials were then placed as apical plugs, measuring 3 mm in length, in the same manner as in groups 1 and 2, respectively. A wet cotton pellet was placed at the canal orifices and dressed with Cavit. After one week, cone beam computed tomography (CBCT) was used to count the number of voids between the material and the root canal walls. The void dimensions were determined using the following scoring system: score 1, absence of voids; score 2, void size less than half of the dimensions of the evaluated cross-section; score 3, void size larger than half of the dimensions of the evaluated cross-section. Chi-squared and Fisher's exact tests were used for statistical analyses. Statistical significance was set at P < 0.05. The maximum (13) and minimum (3) numbers of voids were detected in groups 2 and 3, respectively. There were no significant differences between groups 1 and 3 in the number of voids (p > 0.05). Evaluation of void dimensions showed no score 3 in any of the study groups, and the dimensions of all the voids conformed to score 2. Within the limitations of the present study, use of ultrasonic mixing and manual placement techniques resulted in a decrease in the number of voids in the apical plug. Key words: Apical plug, MTA, ultrasonic, void.

  12. Relationship between attributional style, perceived control, self-esteem, and depressive mood in a nonclinical sample: a structural equation-modelling approach.

    PubMed

    Ledrich, Julie; Gana, Kamel

    2013-12-01

    The aim of this study was to examine the intricate relationship between some personality traits (i.e., attributional style, perceived control over consequences, self-esteem), and depressive mood in a nonclinical sample (N= 334). Method. Structural equation modelling was used to estimate five competing models: two vulnerability models describing the effects of personality traits on depressive mood, one scar model describing the effects of depression on personality traits, a mixed model describing the effects of attributional style and perceived control over consequences on depressive mood, which in turn affects self-esteem, and a reciprocal model which is a non-recursive version of the mixed model that specifies bidirectional effects between depressive mood and self-esteem. The best-fitting model was the mixed model. Moreover, we observed a significant negative effect of depression on self-esteem, but no effect in the opposite direction. These findings provide supporting arguments against the continuum model of the relationship between self-esteem and depression, and lend substantial support to the scar model, which claims that depressive mood damages and erodes self-esteem. In addition, the 'depressogenic' nature of the pessimistic attributional style, and the 'antidepressant' nature of perceived control over consequences plead in favour of the vulnerability model. © 2012 The British Psychological Society.

  13. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  14. Childhood obesity prevention and control in city recreation centres and family homes: the MOVE/me Muevo Project.

    PubMed

    Elder, J P; Crespo, N C; Corder, K; Ayala, G X; Slymen, D J; Lopez, N V; Moody, J S; McKenzie, T L

    2014-06-01

    Interventions to prevent and control childhood obesity have shown mixed results in terms of short- and long-term changes. 'MOVE/me Muevo' was a 2-year family- and recreation centre-based randomized controlled trial to promote healthy eating and physical activity among 5- to 8-year-old children. It was hypothesized that children in the intervention group would demonstrate lower post-intervention body mass index (BMI) values and improved obesity-related behaviours compared with the control group children. Thirty recreation centres in San Diego County, California, were randomized to an intervention or control condition. Five hundred forty-one families were enrolled and children's BMI, diet, physical activity and other health indicators were tracked from baseline to 2 years post-baseline. Analyses followed an intent-to-treat approach using mixed-effects models. No significant intervention effects were observed for the primary outcomes of child's or parent's BMI and child's waist circumference. Moderator analyses, however, showed that girls (but not boys) in the intervention condition reduced their BMI. At the 2-year follow-up, intervention condition parents reported that their children were consuming fewer high-fat foods and sugary beverages. Favourable implementation fidelity and high retention rates support the feasibility of this intervention in a large metropolitan area; however, interventions of greater intensity may be needed to achieve effects on child's BMI. Also, further research is needed to develop gender-specific intervention strategies so that both genders may benefit from such efforts. © 2013 The Authors. Pediatric Obesity © 2013 International Association for the Study of Obesity.

  15. Childhood obesity prevention and control in city recreation centers and family homes: the MOVE/me Muevo Project

    PubMed Central

    Elder, John P.; Crespo, Noe C.; Corder, Kirsten; Ayala, Guadalupe X.; Slymen, Donald J.; Lopez, Nanette V.; Moody, Jamie S.; McKenzie, Thomas L.

    2013-01-01

    Background Interventions to prevent and control childhood obesity have shown mixed results in terms of short- and long-term changes. Objectives “MOVE/me Muevo” was a two-year family- and recreation center-based randomized controlled trial to promote healthy eating and physical activity among 5-8 year old children. It was hypothesized that children in the intervention group would demonstrate lower post-intervention BMI values and improve obesity-related behaviors compared to control group children. Methods Thirty recreation centers in San Diego County, California were randomized to an intervention or control condition. Five hundred and forty-one families were enrolled and children’s body mass index (BMI), diet, physical activity and other health indicators were tracked from baseline to two years post-baseline. Analyses followed an intent-to-treat approach using mixed effects models. Results No significant intervention effects were observed for the primary outcomes of child or parent BMI and child waist circumference. Moderator analyses however showed girls (but not boys) in the intervention condition reduced their BMI. At the two-year follow-up, intervention condition parents reported that their children were consuming fewer high-fat foods and sugary beverages. Conclusions Favorable implementation fidelity and high retention rates support the feasibility of this intervention in a large metropolitan area; however, interventions of greater intensity may be needed to achieve effects on child’s BMI. Also, further research is needed to develop gender-specific intervention strategies so that both genders may benefit from such efforts. PMID:23754782

  16. Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.

    In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing on intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.

  17. Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction

    DOE PAGES

    Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.; ...

    2016-08-01

    In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing on intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.

  18. The Divergent Meanings of Life Satisfaction: Item Response Modeling of the Satisfaction with Life Scale in Greenland and Norway

    ERIC Educational Resources Information Center

    Vitterso, Joar; Biswas-Diener, Robert; Diener, Ed

    2005-01-01

    Cultural differences in response to the Satisfaction With Life Scale (SWLS) items are investigated. Data were fit to a mixed Rasch model in order to identify latent classes of participants in a combined sample of Norwegians (N = 461) and Greenlanders (N = 180). Initial analyses showed no mean difference in life satisfaction between the two…

  19. Inferential Processing among Adequate and Struggling Adolescent Comprehenders and Relations to Reading Comprehension

    PubMed Central

    Barth, Amy E.; Barnes, Marcia; Francis, David J.; Vaughn, Sharon; York, Mary

    2015-01-01

    Separate mixed model analyses of variance (ANOVA) were conducted to examine the effect of textual distance on the accuracy and speed of text consistency judgments among adequate and struggling comprehenders across grades 6–12 (n = 1203). Multiple regressions examined whether accuracy in text consistency judgments uniquely accounted for variance in comprehension. Results suggest that there is considerable growth across the middle and high school years, particularly for adequate comprehenders in those text integration processes that maintain local coherence. Accuracy in text consistency judgments accounted for significant unique variance for passage-level, but not sentence-level comprehension, particularly for adequate comprehenders. PMID:26166946

  20. Longitudinal associations between stressors and work ability in hospital workers.

    PubMed

    Carmen Martinez, Maria; da Silva Alexandre, Tiago; Dias de Oliveira Latorre, Maria do Rosario; Marina Fischer, Frida

    This study sought to assess associations between work stressors and work ability in a cohort (2009-2012) of 498 hospital workers. Time-dependent variables associated with the Work Ability Index (WAI) were evaluated using general linear mixed models. Analyses included effects of individual and work characteristics. Except for work demands, the work stressors (job control, social support, effort-reward imbalance, overcommitment and work-related activities that cause pain/injury) were associated with WAI (p < 0.050) at intercept and in the time interaction. Daytime work and morning shift work were associated with decreased WAI (p < 0.010). Work stressors negatively affected work ability over time independently of other variables.
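
    The longitudinal analysis described above can be sketched as a linear mixed model in which each worker contributes repeated WAI measurements. The example below, in Python with statsmodels, uses hypothetical column names (wai, job_control, year, worker_id) and a single stressor for brevity, so it is an illustration rather than the authors' full model.

```python
# Minimal sketch of a longitudinal mixed model relating a work stressor to the
# Work Ability Index (WAI), with a worker-level random intercept and a
# stressor-by-time interaction. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hospital_cohort.csv")  # assumed long-format file: one row per worker per wave

model = smf.mixedlm(
    "wai ~ job_control * year",   # fixed effects: stressor, time, and their interaction
    data=df,
    groups=df["worker_id"],       # repeated measures clustered within workers
)
result = model.fit()
print(result.summary())
```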

  1. Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.

    PubMed

    Lages, Martin; Scheel, Anne

    2016-01-01

    We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze directions differ from those for choice predictions yet still reflect second-order perspective taking.
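
    A logistic mixed model of the general kind used above can be sketched as follows. This variational-Bayes fit with statsmodels is only an illustration with hypothetical column names (on_target, belief_condition, participant); it is not the authors' latent-parameter model.

```python
# Sketch of a logistic mixed model for binary trial outcomes (e.g. whether the
# initial gaze landed on the target), with participant-level random intercepts,
# fitted by variational Bayes via statsmodels. Column names are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("belief_trials.csv")  # assumed: one row per trial, 0/1 outcome

model = BinomialBayesMixedGLM.from_formula(
    "on_target ~ belief_condition",       # fixed effect: true vs false belief task
    {"subject": "0 + C(participant)"},    # random intercept per participant
    df,
)
result = model.fit_vb()                   # variational Bayes approximation to the posterior
print(result.summary())
```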

  2. Experimental and mathematical model of the interactions in the mixed culture of links in the "producer-consumer" cycle

    NASA Astrophysics Data System (ADS)

    Pisman, T. I.; Galayda, Ya. V.

    The paper presents an experimental and mathematical model of the interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) and algae (Chlorella vulgaris and Scenedesmus quadricauda) in the producer-consumer aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the consumer component feeding on the mixed algal culture of the producer component. It has been found that metabolites of the alga Scenedesmus produce an adverse effect on the reproduction of the ciliates P. caudatum. Taking this effect into account, the results of the investigation of the mathematical model were in qualitative agreement with the experimental results. In the producer-consumer biotic cycle, it was shown that coexistence is impossible in the mixed algal culture of the producer component and in the mixed culture of invertebrates of the consumer component: the ciliates P. caudatum are driven out by the rotifers Brachionus plicatilis.
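
    The record above describes a producer-consumer (algae-grazer) system with competition within each trophic level. As a rough illustration of how such a model can be explored numerically, the sketch below integrates a generic two-producer, two-consumer system with SciPy; the functional forms and parameter values are invented and are not the authors' equations, but competitive exclusion of one consumer, as reported above, is the kind of outcome such a system can produce.

```python
# Generic sketch of a producer-consumer model: two algal producers and two
# grazers competing for them, integrated with SciPy. The functional forms and
# parameters are illustrative placeholders, not the authors' model.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r, K, a, e, d):
    A1, A2, C1, C2 = y                      # algae 1-2, consumers 1-2 (biomass)
    g1 = a[0] * A1 + a[1] * A2              # intake of consumer 1 (linear functional response)
    g2 = a[2] * A1 + a[3] * A2              # intake of consumer 2
    dA1 = r[0] * A1 * (1 - (A1 + A2) / K) - (a[0] * C1 + a[2] * C2) * A1
    dA2 = r[1] * A2 * (1 - (A1 + A2) / K) - (a[1] * C1 + a[3] * C2) * A2
    dC1 = e * g1 * C1 - d[0] * C1
    dC2 = e * g2 * C2 - d[1] * C2
    return [dA1, dA2, dC1, dC2]

params = dict(r=[0.8, 0.6], K=10.0, a=[0.05, 0.04, 0.06, 0.03], e=0.3, d=[0.1, 0.12])
sol = solve_ivp(rhs, (0, 200), [2.0, 2.0, 0.5, 0.5], args=tuple(params.values()),
                dense_output=True)
print("final biomasses:", sol.y[:, -1])     # competitive exclusion shows up as one consumer -> 0
```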

  3. Modelling individual tree height to crown base of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.)

    PubMed Central

    Jansa, Václav

    2017-01-01

    Height to crown base (HCB) of a tree is an important variable often included as a predictor in various forest models that serve as fundamental tools for decision-making in forestry. We developed spatially explicit and spatially inexplicit mixed-effects HCB models using measurements from a total of 19,404 trees of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.) on permanent sample plots located across the Czech Republic. Variables describing site quality, stand density or competition, and species mixing effects were included in the HCB model using dominant height (HDOM), the basal area of trees larger in diameter than the subject tree (BAL, a spatially inexplicit measure) or Hegyi's competition index (HCI, a spatially explicit measure), and the basal area proportion of the species of interest (BAPOR), respectively. Parameters describing sample plot-level random effects were included in the HCB model by applying the mixed-effects modelling approach. Among several functional forms evaluated, the logistic function was found most suited to our data. The HCB model for Norway spruce was tested against data originating from different inventory designs, whereas the model for European beech was tested using a partitioned dataset (a part of the main dataset). Variance heteroscedasticity in the residuals was substantially reduced by including a power variance function in the HCB model. The results showed that the spatially explicit model described a significantly larger part of the HCB variation [R²adj = 0.86 (spruce), 0.85 (beech)] than its spatially inexplicit counterpart [R²adj = 0.84 (spruce), 0.83 (beech)]. HCB increased with increasing competitive interaction, described by the tree-centered competition measures BAL or HCI, and with species mixing effects, described by BAPOR. A test of the mixed-effects HCB model with the random effects estimated using at least four trees per sample plot in the validation data confirmed that the model was precise enough for the prediction of HCB across a range of site quality, tree size, stand density, and stand structure. We therefore recommend measuring HCB on four randomly selected trees of the species of interest on each sample plot to localize the mixed-effects model and predict HCB for the remaining trees on the plot. Growth simulations can be made from data that lack values for either crown ratio or HCB by using the HCB models. PMID:29049391

  4. Precipitation and growth of barite within hydrothermal vent deposits from the Endeavour Segment, Juan de Fuca Ridge

    NASA Astrophysics Data System (ADS)

    Jamieson, John William; Hannington, Mark D.; Tivey, Margaret K.; Hansteen, Thor; Williamson, Nicole M.-B.; Stewart, Margaret; Fietzke, Jan; Butterfield, David; Frische, Matthias; Allen, Leigh; Cousens, Brian; Langer, Julia

    2016-01-01

    Hydrothermal vent deposits form on the seafloor as a result of cooling and mixing of hot hydrothermal fluids with cold seawater. Amongst the major sulfide and sulfate minerals that are preserved at vent sites, barite (BaSO4) is unique because it requires the direct mixing of Ba-rich hydrothermal fluid with sulfate-rich seawater in order for precipitation to occur. Because of its extremely low solubility, barite crystals preserve geochemical fingerprints associated with conditions of formation. Here, we present data from petrographic and geochemical analyses of hydrothermal barite from the Endeavour Segment of the Juan de Fuca Ridge, northeast Pacific Ocean, in order to determine the physical and chemical conditions under which barite precipitates within seafloor hydrothermal vent systems. Petrographic analyses of 22 barite-rich samples show a range of barite crystal morphologies: dendritic and acicular barite forms near the exterior vent walls, whereas larger bladed and tabular crystals occur within the interior of chimneys. A two component mixing model based on Sr concentrations and 87Sr/86Sr of both seawater and hydrothermal fluid, combined with 87Sr/86Sr data from whole rock and laser-ablation ICP-MS analyses of barite crystals indicate that barite precipitates from mixtures containing as low as 17% and as high as 88% hydrothermal fluid component, relative to seawater. Geochemical modelling of the relationship between aqueous species concentrations and degree of fluid mixing indicates that Ba2+ availability is the dominant control on mineral saturation. Observations combined with model results support that dendritic barite forms from fluids of less than 40% hydrothermal component and with a saturation index greater than ∼0.6, whereas more euhedral crystals form at lower levels of supersaturation associated with greater contributions of hydrothermal fluid. Fluid inclusions within barite indicate formation temperatures of between ∼120 °C and 240 °C during barite crystallization. The comparison of fluid inclusion formation temperatures to modelled mixing temperatures indicates that conductive cooling of the vent fluid accounts for 60-120 °C reduction in fluid temperature. Strontium zonation within individual barite crystals records fluctuations in the amount of conductive cooling within chimney walls that may result from cyclical oscillations in hydrothermal fluid flux. Barite chemistry and morphology can be used as a reliable indicator for past conditions of mineralization within both extinct seafloor hydrothermal deposits and ancient land-based volcanogenic massive sulfide deposits.
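
    The two-component mixing calculation described above can be written down compactly: for a conservative element such as Sr, the isotope ratio of a mixture is the Sr-weighted average of the endmember ratios, so the hydrothermal fraction can be recovered by inverting that relation. The sketch below uses rounded, generic endmember values for illustration (seawater values are standard approximations; the vent-fluid values are placeholders), not the study's measured endmembers.

```python
# Two-endmember Sr mixing: given hydrothermal-fluid and seawater endmembers
# (Sr concentration and 87Sr/86Sr), recover the mass fraction of hydrothermal
# fluid from a measured barite 87Sr/86Sr. Endmember values below are generic
# placeholders, not the values reported in the study.

def mixture_ratio(f_hf, c_hf, r_hf, c_sw, r_sw):
    """87Sr/86Sr of a mixture containing mass fraction f_hf of hydrothermal fluid."""
    sr_total = f_hf * c_hf + (1.0 - f_hf) * c_sw
    return (f_hf * c_hf * r_hf + (1.0 - f_hf) * c_sw * r_sw) / sr_total

def hydrothermal_fraction(r_mix, c_hf, r_hf, c_sw, r_sw):
    """Invert the mixing equation for the hydrothermal mass fraction."""
    num = c_sw * (r_mix - r_sw)
    den = c_hf * (r_hf - r_mix) + c_sw * (r_mix - r_sw)
    return num / den

# Illustrative endmembers: vent fluid with high Sr and an unradiogenic ratio,
# seawater with ~87 umol/kg Sr and 87Sr/86Sr ~ 0.70918.
c_hf, r_hf = 250e-6, 0.7040   # only the relative Sr concentrations matter here
c_sw, r_sw = 87e-6, 0.70918

print(hydrothermal_fraction(0.7060, c_hf, r_hf, c_sw, r_sw))  # fraction of vent fluid in the barite-forming mixture
```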

  5. Multilevel nonlinear mixed-effects models for the modeling of earlywood and latewood microfibril angle

    Treesearch

    Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He

    2005-01-01

    Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...

  6. A Practical Guide to Conducting a Systematic Review and Meta-analysis of Health State Utility Values.

    PubMed

    Petrou, Stavros; Kwon, Joseph; Madan, Jason

    2018-05-10

    Economic analysts are increasingly likely to rely on systematic reviews and meta-analyses of health state utility values to inform the parameter inputs of decision-analytic modelling-based economic evaluations. Beyond the context of economic evaluation, evidence from systematic reviews and meta-analyses of health state utility values can be used to inform broader health policy decisions. This paper provides practical guidance on how to conduct a systematic review and meta-analysis of health state utility values. The paper outlines a number of stages in conducting a systematic review, including identifying the appropriate evidence, study selection, data extraction and presentation, and quality and relevance assessment. The paper outlines three broad approaches that can be used to synthesise multiple estimates of health utilities for a given health state or condition, namely fixed-effect meta-analysis, random-effects meta-analysis and mixed-effects meta-regression. Each approach is illustrated by a synthesis of utility values for a hypothetical decision problem, and software code is provided. The paper highlights a number of methodological issues pertinent to the conduct of meta-analysis or meta-regression. These include the importance of limiting synthesis to 'comparable' utility estimates, for example those derived using common utility measurement approaches and sources of valuation; the effects of reliance on limited or poorly reported published data from primary utility assessment studies; the use of aggregate outcomes within analyses; approaches to generating measures of uncertainty; handling of median utility values; challenges surrounding the disentanglement of utility estimates collected serially within the context of prospective observational studies or prospective randomised trials; challenges surrounding the disentanglement of intervention effects; and approaches to measuring model validity. Areas of methodological debate and avenues for future research are highlighted.
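
    As a concrete illustration of the random-effects synthesis described above, the sketch below implements DerSimonian-Laird pooling of utility estimates in Python. The utilities and standard errors are invented for the example, and any real synthesis would follow the comparability caveats discussed in the paper.

```python
# Minimal DerSimonian-Laird random-effects meta-analysis of health state
# utility values. The utility estimates and standard errors below are invented.
import numpy as np

u = np.array([0.71, 0.65, 0.74, 0.68, 0.70])     # study-level mean utilities (hypothetical)
se = np.array([0.03, 0.05, 0.04, 0.02, 0.06])    # their standard errors (hypothetical)

w_fixed = 1.0 / se**2                            # fixed-effect (inverse-variance) weights
u_fixed = np.sum(w_fixed * u) / np.sum(w_fixed)

# Heterogeneity: Cochran's Q and the DerSimonian-Laird estimate of tau^2
q = np.sum(w_fixed * (u - u_fixed) ** 2)
df = len(u) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

w_random = 1.0 / (se**2 + tau2)                  # random-effects weights
u_random = np.sum(w_random * u) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))

print(f"pooled utility (random effects) = {u_random:.3f} "
      f"(95% CI {u_random - 1.96*se_random:.3f} to {u_random + 1.96*se_random:.3f}), "
      f"tau^2 = {tau2:.4f}")
```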

  7. Effects of vertical shear in modelling horizontal oceanic dispersion

    NASA Astrophysics Data System (ADS)

    Lanotte, A. S.; Corrado, R.; Palatella, L.; Pizzigalli, C.; Schipa, I.; Santoleri, R.

    2016-02-01

    The effect of vertical shear on the horizontal dispersion properties of passive tracer particles on the continental shelf of the South Mediterranean is investigated by means of observational and model data. In situ current measurements reveal that vertical gradients of horizontal velocities in the upper mixing layer decorrelate quite fast (~1 day), whereas an eddy-permitting ocean model, such as the Mediterranean Forecasting System, tends to overestimate this decorrelation time because of finite-resolution effects. Horizontal dispersion, simulated by the Mediterranean Forecasting System, is mostly affected by: (1) unresolved scales of motion, and mesoscale motions that are largely smoothed out at scales close to the grid spacing; (2) poorly resolved time variability in the profiles of the horizontal velocities in the upper layer. For the case study we have analysed, we show that a suitable use of deterministic kinematic parametrizations is helpful for implementing realistic statistical features of tracer dispersion in two and three dimensions. The approach suggested here provides a functional tool to control the horizontal spreading of small organisms or substance concentrations, and is thus relevant for marine biology, pollutant dispersion and oil spill applications.

  8. Mixed Beam Murine Harderian Gland Tumorigenesis: Predicted Dose-Effect Relationships if neither Synergism nor Antagonism Occurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranart, Nopphon; Blakely, Eleanor A.; Cheng, Alden

    Complex mixed radiation fields exist in interplanetary space, and not much is known about their latent effects on space travelers. In silico synergy analysis default predictions are useful when planning relevant mixed-ion-beam experiments and interpreting their results. These predictions are based on individual dose-effect relationships (IDER) for each component of the mixed-ion beam, assuming no synergy or antagonism. For example, a default hypothesis of simple effect additivity has often been used throughout the study of biology. However, for more than a century pharmacologists interested in mixtures of therapeutic drugs have analyzed conceptual, mathematical and practical questions similar to those that arise when analyzing mixed radiation fields, and have shown that simple effect additivity often gives unreasonable predictions when the IDER are curvilinear. Various alternatives to simple effect additivity proposed in radiobiology, pharmacometrics, toxicology and other fields are also known to have important limitations. In this work, we analyze upcoming murine Harderian gland (HG) tumor prevalence mixed-beam experiments, using customized open-source software and published IDER from past single-ion experiments. The upcoming experiments will use acute irradiation and the mixed beam will include components of high atomic number and energy (HZE). We introduce a new alternative to simple effect additivity, "incremental effect additivity", which is more suitable for the HG analysis and perhaps for other end points. We use incremental effect additivity to calculate default predictions for mixture dose-effect relationships, including 95% confidence intervals. We have drawn three main conclusions from this work. 1. It is important to supplement mixed-beam experiments with single-ion experiments, with matching end point(s), shielding and dose timing. 2. For HG tumorigenesis due to a mixed beam, simple effect additivity and incremental effect additivity sometimes give default predictions that are numerically close. However, if nontargeted effects are important and the mixed beam includes a number of different HZE components, simple effect additivity becomes unusable and another method is needed such as incremental effect additivity. 3. Eventually, synergy analysis default predictions of the effects of mixed radiation fields will be replaced by more mechanistic, biophysically-based predictions. However, optimizing synergy analyses is an important first step. If mixed-beam experiments indicate little synergy or antagonism, plans by NASA for further experiments and possible missions beyond low earth orbit will be substantially simplified.
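
    The baseline against which synergy is judged can be made concrete. The sketch below shows one common default, simple effect additivity, in which each component's individual dose-effect relationship (IDER) is evaluated at that component's share of the mixture dose and the resulting effects are summed. The IDER shapes and parameters are invented placeholders, and this is the default baseline only, not the incremental effect additivity introduced in the study.

```python
# Simple effect additivity for a mixed radiation field: evaluate each
# component's individual dose-effect relationship (IDER) at its own dose
# contribution and sum the effects. The IDER forms and parameters below are
# placeholders; this is not the study's incremental effect additivity.
import numpy as np

def ider_linear(dose, alpha):
    """Toy linear IDER (e.g. a low-LET component)."""
    return alpha * dose

def ider_saturating(dose, a, d0):
    """Toy saturating IDER (e.g. an HZE component with a bending dose response)."""
    return a * (1.0 - np.exp(-dose / d0))

def simple_effect_additivity(total_dose, fractions, iders):
    """Sum component effects, each IDER evaluated at its share of the total dose."""
    return sum(f_ider(total_dose * frac) for frac, f_ider in zip(fractions, iders))

components = [lambda d: ider_linear(d, alpha=0.05),
              lambda d: ider_saturating(d, a=0.4, d0=0.3)]
mix_fractions = [0.7, 0.3]                      # dose fractions of the two beam components

for d in (0.1, 0.5, 1.0):                       # total mixture doses in Gy (illustrative)
    print(d, simple_effect_additivity(d, mix_fractions, components))
```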

  9. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    ERIC Educational Resources Information Center

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  10. An effective Z'

    DOE PAGES

    Fox, Patrick J.; Liu, Jia; Tucker-Smith, David; ...

    2011-12-06

    We describe a method to couple Z' gauge bosons to the standard model (SM), without charging the SM fields under the U(1)', but instead through effective higher-dimension operators. This method allows complete control over the tree-level couplings of the Z' and does not require altering the structure of any of the SM couplings, nor does it contain anomalies or require introduction of fields in nonstandard SM representations. Moreover, such interactions arise from simple renormalizable extensions of the SM—the addition of vectorlike matter that mixes with SM fermions when the U(1)' is broken. We apply effective Z' models as explanations of various recent anomalies: the D0 same-sign dimuon asymmetry, the CDF W+di-jet excess and the CDF top forward-backward asymmetry. In the case of the W+di-jet excess we also discuss several complementary analyses that may shed light on the nature of the discrepancy. We consider the possibility of non-Abelian groups, and discuss implications for the phenomenology of dark matter as well.

  11. 'Keep fit' exercise interventions to improve health, fitness and well-being of children and young people who use wheelchairs: mixed-method systematic review protocol.

    PubMed

    O'Brien, Thomas D; Noyes, Jane; Spencer, Llinos Haf; Kubis, Hans-Peter; Hastings, Richard P; Edwards, Rhiannon T; Bray, Nathan; Whitaker, Rhiannon

    2014-12-01

    This mixed-method systematic review aims to establish the current evidence base for 'keep fit', exercise or physical activity interventions for children and young people who use wheelchairs. Nurses have a vital health promotion, motivational and monitoring role in optimizing the health and well-being of disabled children. Children with mobility impairments are prone to have low participation levels in physical activity, which reduces fitness and well-being. Effective physical activity interventions that are fun and engaging for children are required to promote habitual participation as part of a healthy lifestyle. Previous intervention programmes have been trialled, but little is known about the most effective types of exercise to improve the fitness of young wheelchair users. Mixed-method design using Cochrane systematic processes. Evidence regarding physiological and psychological effectiveness, health economics, user perspectives and service evaluations will be included and analysed under distinct streams. The project was funded from October 2012. Multiple databases will be searched using search strings combining relevant medical subheadings and intervention-specific terms. Articles will also be identified from ancestral references and by approaching authors to identify unpublished work. Only studies or reports evaluating the effectiveness, participation experiences or cost of a physical activity programme will be included. Separate analyses will be performed for each data stream, including a meta-analysis if sufficient homogeneity exists and thematic analyses. Findings across streams will be synthesized in an overarching narrative summary. Evidence from the first systematic review of this type will inform development of effective child-centred physical activity interventions and their evaluation. © 2014 John Wiley & Sons Ltd.

  12. A mixed-effects height-diameter model for cottonwood in the Mississippi Delta

    Treesearch

    Curtis L. VanderSchaaf; H. Christoph Stuhlinger

    2012-01-01

    Eastern cottonwood (Populus deltoides Bartr. ex Marsh.) has been artificially regenerated throughout the Mississippi Delta region because of its fast growth and is being considered for biofuel production. This paper presents a mixed-effects height-diameter model for cottonwood in the Mississippi Delta region. After obtaining height-diameter...
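
    A common way to fit such a model is sketched below under the assumption of a log-log height-diameter relationship with plot-level random intercepts; the column names and functional form are illustrative, not necessarily those used in the paper.

```python
# Illustrative mixed-effects height-diameter model: log(height) regressed on
# log(diameter) with a random intercept for each plot. The log-log form and the
# column names (height_m, dbh_cm, plot_id) are assumptions for this sketch.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

trees = pd.read_csv("cottonwood_trees.csv")     # assumed: one row per measured tree

model = smf.mixedlm(
    "np.log(height_m) ~ np.log(dbh_cm)",
    data=trees,
    groups=trees["plot_id"],                    # plot-level random intercept
)
fit = model.fit()
print(fit.summary())

# Predicted height for a 30 cm tree on an "average" plot (random effect set to zero)
pred_log_h = fit.params["Intercept"] + fit.params["np.log(dbh_cm)"] * np.log(30.0)
print("predicted height (m):", np.exp(pred_log_h))
```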

  13. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  14. A mixed-effects model approach for the statistical analysis of vocal fold viscoelastic shear properties.

    PubMed

    Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei

    2017-11-01

    A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies in the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.
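
    The within-sample correlation described above is typically handled by giving each specimen its own intercept (and, if warranted, its own frequency slope). The sketch below shows such a model in statsmodels; the column names and the log-linear form are assumptions for illustration, not the authors' exact specification.

```python
# Sketch of a mixed-effects analysis of rheological data: elastic shear modulus
# regressed on log frequency and species, with a specimen-specific random
# intercept and random slope on log frequency, so that repeated measurements
# across the frequency sweep are not treated as independent.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rheo = pd.read_csv("vocal_fold_rheology.csv")   # assumed: one row per specimen per test frequency

model = smf.mixedlm(
    "np.log(shear_modulus_pa) ~ np.log(frequency_hz) * species",
    data=rheo,
    groups=rheo["specimen_id"],
    re_formula="~np.log(frequency_hz)",         # specimen-specific intercept and slope
)
fit = model.fit()
print(fit.summary())
```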

  15. Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

    PubMed Central

    Preyra, Colin

    2004-01-01

    Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940

  16. Accuracies of univariate and multivariate genomic prediction models in African cassava.

    PubMed

    Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc

    2017-12-04

    Genomic selection (GS) promises to accelerate genetic gain in plant breeding programs especially for crop species such as cassava that have long breeding cycles. Practically, to implement GS in cassava breeding, it is necessary to evaluate different GS models and to develop suitable models for an optimized breeding pipeline. In this paper, we compared (1) prediction accuracies from a single-trait (uT) and a multi-trait (MT) mixed model for a single-environment genetic evaluation (Scenario 1), and (2) accuracies from a compound symmetric multi-environment model (uE) parameterized as a univariate multi-kernel model to a multivariate (ME) multi-environment mixed model that accounts for genotype-by-environment interaction for multi-environment genetic evaluation (Scenario 2). For these analyses, we used 16 years of public cassava breeding data for six target cassava traits and a fivefold cross-validation scheme with 10-repeat cycles to assess model prediction accuracies. In Scenario 1, the MT models had higher prediction accuracies than the uT models for all traits and locations analyzed, which amounted to on average a 40% improved prediction accuracy. For Scenario 2, we observed that the ME model had on average (across all locations and traits) a 12% improved prediction accuracy compared to the uE model. We recommend the use of multivariate mixed models (MT and ME) for cassava genetic evaluation. These models may be useful for other plant species.
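
    The cross-validation scheme (five folds, ten repeats) can be sketched as follows. Ridge regression on a simulated marker matrix stands in for the genomic prediction models compared in the study, and accuracy is summarised as the correlation between observed and predicted phenotypes; the data are simulated placeholders.

```python
# Sketch of repeated five-fold cross-validation with prediction accuracy taken
# as the correlation between observed and predicted trait values. Ridge
# regression stands in for a genomic prediction model; data are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold

rng = np.random.default_rng(0)
n_lines, n_markers = 300, 1000
X = rng.choice([0, 1, 2], size=(n_lines, n_markers)).astype(float)   # toy genotype matrix
beta = rng.normal(0, 0.05, n_markers)
y = X @ beta + rng.normal(0, 1.0, n_lines)                           # toy phenotype

cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
accuracies = []
for train, test in cv.split(X):
    model = Ridge(alpha=10.0).fit(X[train], y[train])
    accuracies.append(np.corrcoef(y[test], model.predict(X[test]))[0, 1])

print(f"mean prediction accuracy: {np.mean(accuracies):.3f} +/- {np.std(accuracies):.3f}")
```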

  17. A Novel Methodology to Estimate the Treatment Effect in Presence of Highly Variable Placebo Response

    PubMed Central

    Gomeni, Roberto; Goyal, Navin; Bressolle, Françoise; Fava, Maurizio

    2015-01-01

    One of the main reasons for the inefficiency of multicenter randomized clinical trials (RCTs) in depression is the excessively high level of placebo response. The aim of this work was to propose a novel methodology to analyze RCTs based on the assumption that centers with high placebo response are less informative than the other centers for estimating the ‘true' treatment effect (TE). A linear mixed-effect modeling approach for repeated measures (MMRM) was used as a reference approach. The new method for estimating TE was based on a nonlinear longitudinal modeling of clinical scores (NLMMRM). NLMMRM estimates TE by associating a weighting factor to the data collected in each center. The weight was defined by the posterior probability of detecting a clinically relevant difference between active treatment and placebo at that center. Data from five RCTs in depression were used to compare the performance of MMRM with NLMMRM. The results of the analyses showed an average improvement of ~15% in the TE estimated with NLMMRM when the center effect was included in the analyses. Opposite results were observed with MMRM: TE estimate was reduced by ~4% when the center effect was considered as covariate in the analysis. The novel NLMMRM approach provides a tool for controlling the confounding effect of high placebo response, to increase signal detection and to provide a more reliable estimate of the ‘true' TE by controlling false negative results associated with excessively high placebo response. PMID:25895454

  18. Economic Evaluation of Telemedicine for Patients in ICUs.

    PubMed

    Yoo, Byung-Kwang; Kim, Minchul; Sasaki, Tomoko; Melnikow, Joy; Marcin, James P

    2016-02-01

    Despite telemedicine's potential to improve patients' health outcomes and reduce costs in the ICU, hospitals have been slow to introduce telemedicine in the ICU due to high up-front costs and mixed evidence on effectiveness. This study's first aim was to conduct a cost-effectiveness analysis to estimate the incremental cost-effectiveness ratio of telemedicine in the ICU, compared with ICU without telemedicine, from the healthcare system perspective. The second aim was to examine potential cost saving of telemedicine in the ICU through probabilistic analyses and break-even analyses. Simulation analyses performed by standard decision models. Hypothetical ICU defined by the U.S. literature. Hypothetical adult patients in ICU defined by the U.S. literature. The intervention was the introduction of telemedicine in the ICU, which was assumed to affect per-patient per-hospital-stay ICU cost and hospital mortality. Telemedicine in the ICU operation costs included the telemedicine equipment-installation (start-up) costs with 5-year depreciation, maintenance costs, and clinician staffing costs. Telemedicine in the ICU effectiveness was measured by cumulative quality-adjusted life years for 5 years after ICU discharge. The base case cost-effectiveness analysis estimated telemedicine in the ICU to extend 0.011 quality-adjusted life years with an incremental cost of $516 per patient compared with ICU without telemedicine, resulting in an incremental cost-effectiveness ratio of $45,320 per additional quality-adjusted life year (= $516/0.011). The probabilistic cost-effectiveness analysis estimated an incremental cost-effectiveness ratio of $50,265 with a wide 95% CI from a negative value (suggesting cost savings) to $375,870. These probabilistic analyses projected that cost saving is achieved 37% of 1,000 iterations. Cost saving is also feasible if the per-patient per-hospital-stay operational cost and physician cost were less than $422 and less than $155, respectively, based on break-even analyses. Our analyses suggest that telemedicine in the ICU is cost-effective in most cases and cost saving in some cases. The thresholds of cost and effectiveness, estimated by break-even analyses, help hospitals determine the impact of telemedicine in the ICU and potential cost saving.
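
    The incremental cost-effectiveness ratio quoted above is simply the incremental cost divided by the incremental effectiveness. A minimal sketch of that calculation is given below using the per-patient figures reported in the abstract; note that the quotient of the rounded published inputs comes out slightly above the reported $45,320, which was presumably computed from unrounded values, and the willingness-to-pay threshold is an illustrative assumption.

```python
# Incremental cost-effectiveness ratio (ICER) from the per-patient base-case
# figures quoted in the abstract, plus a simple threshold check.
incremental_cost = 516.0          # additional cost per patient (USD)
incremental_qaly = 0.011          # additional quality-adjusted life years per patient

icer = incremental_cost / incremental_qaly
print(f"ICER = ${icer:,.0f} per QALY gained")

# The abstract's break-even analyses report that the intervention becomes cost
# saving if per-patient operational cost falls below $422 or physician cost
# below $155; the threshold below is only a commonly cited illustrative value.
willingness_to_pay = 100_000.0    # USD per QALY (assumed for illustration)
print("cost-effective at this threshold:", icer < willingness_to_pay)
```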

  19. Reliability Analysis of Uniaxially Ground Brittle Materials

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.

    1995-01-01

    The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
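
    The Weibull treatment referred to above relates failure probability to applied stress and effective specimen size. A minimal sketch of the two-parameter form with area scaling is shown below; the Weibull modulus, characteristic strength, and areas are made-up values, not the study's results.

```python
# Two-parameter Weibull failure probability with an area-scaling size effect,
# of the general kind used in the reliability analyses described above.
# The Weibull modulus, characteristic strength, and areas are made-up values.
import numpy as np

def failure_probability(stress, m, sigma0, area, area0):
    """P_f = 1 - exp(-(A/A0) * (sigma/sigma0)^m) for uniform uniaxial stress."""
    return 1.0 - np.exp(-(area / area0) * (stress / sigma0) ** m)

m, sigma0 = 10.0, 400.0           # Weibull modulus and characteristic strength (MPa), illustrative
a_flexure, a_plate = 50.0, 500.0  # effectively stressed areas (mm^2), illustrative

for stress in (250.0, 300.0, 350.0):
    print(stress,
          failure_probability(stress, m, sigma0, a_flexure, a_flexure),
          failure_probability(stress, m, sigma0, a_plate, a_flexure))
# At a given stress the larger specimen has a higher failure probability:
# the Weibull size effect demonstrated in the study.
```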

  20. Factor structure of the Norwegian version of the WAIS-III in a clinical sample: the arithmetic problem.

    PubMed

    Egeland, Jens; Bosnes, Ole; Johansen, Hans

    2009-09-01

    Confirmatory Factor Analyses (CFA) of the Wechsler Adult Intelligence Scale-III (WAIS-III) lend partial support to the four-factor model proposed in the test manual. However, the Arithmetic subtest has been especially difficult to allocate to one factor. Using the new Norwegian WAIS-III version, we tested factor models differing in the number of factors and in the placement of the Arithmetic subtest in a mixed clinical sample (n = 272). Only the four-factor solutions had adequate goodness-of-fit values. Allowing Arithmetic to load on both the Verbal Comprehension and Working Memory factors provided a more parsimonious solution compared to considering the subtest only as a measure of Working Memory. Effects of education were particularly high for both the Verbal Comprehension tests and Arithmetic.

  1. Ant Colony Optimization for Markowitz Mean-Variance Portfolio Model

    NASA Astrophysics Data System (ADS)

    Deng, Guang-Feng; Lin, Woo-Tsong

    This work presents Ant Colony Optimization (ACO), which was initially developed as a meta-heuristic for combinatorial optimization, for solving the cardinality-constrained Markowitz mean-variance portfolio model (a nonlinear mixed-integer quadratic programming problem). To our knowledge, an efficient algorithmic solution for this problem has not been proposed until now, so using heuristic algorithms in this case is imperative. Numerical solutions are obtained for five analyses of weekly price data for the following indices for the period March 1992 to September 1997: Hang Seng 31 in Hong Kong, DAX 100 in Germany, FTSE 100 in the UK, S&P 100 in the USA and Nikkei 225 in Japan. The test results indicate that ACO is much more robust and effective than particle swarm optimization (PSO), especially for low-risk investment portfolios.
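
    The problem the heuristic searches over can be sketched directly: choose at most K assets with non-negative weights summing to one so as to trade off portfolio variance against expected return. The ACO itself is not reproduced below; a random feasible-portfolio search stands in simply to show how candidate solutions are scored, and the data are simulated placeholders.

```python
# Sketch of the cardinality-constrained Markowitz mean-variance model: at most K
# assets, weights summing to one, objective lambda*risk - (1-lambda)*return.
# Data are simulated; random search stands in for the ACO heuristic.
import numpy as np

rng = np.random.default_rng(1)
n_assets, K, lam = 31, 10, 0.5                   # e.g. a 31-asset universe with a 10-asset limit
mu = rng.normal(0.001, 0.002, n_assets)          # toy weekly expected returns
A = rng.normal(0, 0.02, (n_assets, n_assets))
cov = A @ A.T                                    # toy positive semi-definite covariance

def objective(weights):
    """Mean-variance trade-off; lower is better for a risk-averse lambda."""
    return lam * (weights @ cov @ weights) - (1.0 - lam) * (mu @ weights)

def random_feasible_portfolio():
    """A feasible candidate: at most K assets, non-negative weights summing to one."""
    chosen = rng.choice(n_assets, size=K, replace=False)
    w = np.zeros(n_assets)
    raw = rng.random(K)
    w[chosen] = raw / raw.sum()
    return w

best = min((random_feasible_portfolio() for _ in range(5000)), key=objective)
print("best random-search objective:", objective(best), "assets held:", int((best > 0).sum()))
```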

  2. A microphysical pathway analysis to investigate aerosol effects on convective clouds

    NASA Astrophysics Data System (ADS)

    Heikenfeld, Max; White, Bethan; Labbouz, Laurent; Stier, Philip

    2017-04-01

    The impact of aerosols on ice- and mixed-phase processes in convective clouds remains highly uncertain, which has strong implications for estimates of the role of aerosol-cloud interactions in the climate system. The wide range of interacting microphysical processes are still poorly understood and generally not resolved in global climate models. To understand and visualise these processes and to conduct a detailed pathway analysis, we have added diagnostic output of all individual process rates for number and mass mixing ratios to two commonly-used cloud microphysics schemes (Thompson and Morrison) in WRF. This allows us to investigate the response of individual processes to changes in aerosol conditions and the propagation of perturbations throughout the development of convective clouds. Aerosol effects on cloud microphysics could strongly depend on the representation of these interactions in the model. We use different model complexities with regard to aerosol-cloud interactions ranging from simulations with different levels of fixed cloud droplet number concentration (CDNC) as a proxy for aerosol, to prognostic CDNC with fixed modal aerosol distributions. Furthermore, we have implemented the HAM aerosol model in WRF-chem to also perform simulations with a fully interactive aerosol scheme. We employ a hierarchy of simulation types to understand the evolution of cloud microphysical perturbations in atmospheric convection. Idealised supercell simulations are chosen to present and test the analysis methods for a strongly confined and well-studied case. We then extend the analysis to large case study simulations of tropical convection over the Amazon rainforest. For both cases we apply our analyses to individually tracked convective cells. Our results show the impact of model uncertainties on the understanding of aerosol-convection interactions and have implications for improving process representation in models.

  3. Clarifying the Content Coverage of Differing Psychopathy Inventories through Reference to the Triarchic Psychopathy Measure

    PubMed Central

    Drislane, Laura E.; Patrick, Christopher J.; Arsal, Güler

    2014-01-01

    The Triarchic Model of psychopathy (Patrick, Fowles, and Krueger, 2009) was formulated as an integrative framework for reconciling differing conceptions of psychopathy. The model characterizes psychopathy in terms of three distinguishable phenotypic components: boldness, meanness, and disinhibition. Data from a large mixed-gender undergraduate sample (N = 618) were used to examine relations of several of the best-known measures for assessing psychopathic traits with scores on the Triarchic Psychopathy Measure (TriPM), an inventory developed to operationalize the Triarchic Model through separate facet scales. Analyses revealed that established inventories of psychopathy index components of the model as indexed by the TriPM to varying degrees. While each inventory provided effective coverage of meanness and disinhibition components, instruments differed in their representation of boldness. Current results demonstrate the heuristic value of the Triarchic Model for delineating commonalities and differences among alternative measures of psychopathy, and provide support for the utility of the Triarchic Model as a framework for reconciling alternative conceptions of psychopathy. PMID:24320762

  4. MIXING MODELS IN ANALYSES OF DIET USING MULTIPLE STABLE ISOTOPES: A CRITIQUE

    EPA Science Inventory

    Stable isotopes have become widely used in ecology to quantify the importance of different sources based on their isotopic signature. One example of this has been the determination of food webs, where the isotopic signatures of a predator and various prey items can be used to de...

  5. UNCERTAINTY IN SOURCE PARTITIONING USING STABLE ISOTOPES

    EPA Science Inventory

    Stable isotope analyses are often used to quantify the contribution of multiple sources to a mixture, such as proportions of food sources in an animal's diet, C3 vs. C4 plant inputs to soil organic carbon, etc. Linear mixing models can be used to partition two sources with a sin...

  6. Testing feedback message framing and comparators to address prescribing of high-risk medications in nursing homes: protocol for a pragmatic, factorial, cluster-randomized trial.

    PubMed

    Ivers, Noah M; Desveaux, Laura; Presseau, Justin; Reis, Catherine; Witteman, Holly O; Taljaard, Monica K; McCleary, Nicola; Thavorn, Kednapa; Grimshaw, Jeremy M

    2017-07-14

    Audit and feedback (AF) interventions that leverage routine administrative data offer a scalable and relatively low-cost method to improve processes of care. AF interventions are usually designed to highlight discrepancies between desired and actual performance and to encourage recipients to act to address such discrepancies. Comparing to a regional average is a common approach, but more recipients would have a discrepancy if compared to a higher-than-average level of performance. In addition, how recipients perceive and respond to discrepancies may depend on how the feedback itself is framed. We aim to evaluate the effectiveness of different comparators and framing in feedback on high-risk prescribing in nursing homes. This is a pragmatic, 2 × 2 factorial, cluster-randomized controlled trial testing variations in the comparator and framing on the effectiveness of quarterly AF in changing high-risk prescribing in nursing homes in Ontario, Canada. We grouped homes that share physicians into clusters and randomized these clusters into the four experimental conditions. Outcomes will be assessed after 6 months; all primary analyses will be by intention-to-treat. The primary outcome (monthly number of high-risk medications received by each patient) will be analysed using a general linear mixed effects regression model. We will present both four-arm and factorial analyses. With 160 clusters and an average of 350 beds per cluster, assuming no interaction and similar effects for each intervention, we anticipate 90% power to detect an absolute mean difference of 0.3 high-risk medications prescribed. A mixed-methods process evaluation will explore potential mechanisms underlying the observed effects, exploring targeted constructs including intention, self-efficacy, outcome expectations, descriptive norms, and goal prioritization. An economic analysis will examine cost-effectiveness analysis from the perspective of the publicly funded health care system. This protocol describes the rationale and methodology of a trial testing manipulations of theory-informed components of an audit and feedback intervention to determine how to improve an existing intervention and provide generalizable insights for implementation science. NCT02979964.
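
    The planned primary analysis can be sketched as a mixed model with the 2 x 2 factorial as fixed effects and a random intercept for each physician cluster. The example below uses hypothetical column names and a linear mixed model on the monthly count as a simplified stand-in for the protocol's full model.

```python
# Sketch of the primary analysis: a linear mixed-effects regression of the
# monthly number of high-risk medications on the two feedback manipulations and
# their interaction, with a random intercept for each physician cluster.
# Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

trial = pd.read_csv("af_trial_outcomes.csv")    # assumed: one row per resident-month

model = smf.mixedlm(
    "n_high_risk_meds ~ comparator_arm * framing_arm",  # 2 x 2 factorial fixed effects
    data=trial,
    groups=trial["cluster_id"],                         # physician-cluster random intercept
)
fit = model.fit()
print(fit.summary())
```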

  7. Use of chemical and isotopic tracers to characterize the interactions between ground water and surface water in mantled karst

    USGS Publications Warehouse

    Katz, B.G.; Coplen, T.B.; Bullen, T.D.; Hal, Davis J.

    1997-01-01

    In the mantled karst terrane of northern Florida, the water quality of the Upper Floridan aquifer is influenced by the degree of connectivity between the aquifer and the surface. Chemical and isotopic analyses [18O/16O (δ18O), 2H/1H (δD), 13C/12C (δ13C), tritium (3H), and strontium-87/strontium-86 (87Sr/86Sr)] along with geochemical mass-balance modeling were used to identify the dominant hydrochemical processes that control the composition of ground water as it evolves downgradient in two systems. In one system, surface water enters the Upper Floridan aquifer through a sinkhole located in the Northern Highlands physiographic unit. In the other system, surface water enters the aquifer through a sinkhole lake (Lake Bradford) in the Woodville Karst Plain. Differences in the composition of water isotopes (δ18O and δD) in rainfall, ground water, and surface water were used to develop mixing models of surface water (leakage of water to the Upper Floridan aquifer from a sinkhole lake and a sinkhole) and ground water. Using mass-balance calculations based on differences in δ18O and δD, the proportion of lake water that mixed with meteoric water ranged from 7 to 86% in water from wells located in close proximity to Lake Bradford. In deeper parts of the Upper Floridan aquifer, water enriched in 18O and D from five of 12 sampled municipal wells indicated that recharge from a sinkhole (1 to 24%) and surface water with an evaporated isotopic signature (2 to 32%) were mixing with ground water. The solute isotopes, δ13C and 87Sr/86Sr, were used to test the sensitivity of binary and ternary mixing models, and to estimate the amount of mass transfer of carbon and other dissolved species in geochemical reactions. In ground water downgradient from Lake Bradford, the dominant processes controlling carbon cycling in ground water were dissolution of carbonate minerals, aerobic degradation of organic matter, and hydrolysis of silicate minerals. In the deeper parts of the Upper Floridan aquifer, the major processes controlling the concentrations of major dissolved species included dissolution of calcite and dolomite, and degradation of organic matter under oxic conditions. The Upper Floridan aquifer is highly susceptible to contamination from activities at the land surface in the Tallahassee area. The presence of post-1950s concentrations of 3H in ground water from depths greater than 100 m below land surface indicates that water throughout much of the Upper Floridan aquifer has been recharged during the last 40 years. Even though mixing is likely between ground water and surface water in many parts of the study area, the Upper Floridan aquifer produces good quality water, which due to dilution effects shows little if any impact from trace elements or nutrients that are present in surface waters.
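
    The mixing proportions quoted above come from a standard two-endmember mass balance on the water isotopes: the δ value of a mixture is the weighted average of the endmember δ values, so the lake-water fraction follows directly. The sketch below shows that calculation; the δ18O values used are placeholders, not the study's measurements.

```python
# Two-endmember water-isotope mass balance: fraction of lake water in a well
# sample from its delta-18O, given lake-water and meteoric (rain-recharge)
# endmembers. The delta values below are illustrative placeholders only.

def lake_fraction(delta_sample, delta_lake, delta_meteoric):
    """f_lake such that delta_sample = f*delta_lake + (1 - f)*delta_meteoric."""
    return (delta_sample - delta_meteoric) / (delta_lake - delta_meteoric)

delta_lake = 1.5        # evaporatively enriched lake water (per mil, hypothetical)
delta_meteoric = -4.0   # local meteoric ground water (per mil, hypothetical)

for delta_well in (-3.6, -2.0, 0.7):
    f = lake_fraction(delta_well, delta_lake, delta_meteoric)
    print(f"well delta18O = {delta_well:+.1f} per mil -> lake-water fraction = {f:.0%}")
```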

  8. Socioscientific Argumentation: The effects of content knowledge and morality

    NASA Astrophysics Data System (ADS)

    Sadler, Troy D.; Donnelly, Lisa A.

    2006-10-01

    Broad support exists within the science education community for the incorporation of socioscientific issues (SSI) and argumentation in the science curriculum. This study investigates how content knowledge and morality contribute to the quality of SSI argumentation among high school students. We employed a mixed-methods approach: 56 participants completed tests of content knowledge and moral reasoning as well as interviews, related to SSI topics, which were scored based on a rubric for argumentation quality. Multiple regression analyses revealed no statistically significant relationships among content knowledge, moral reasoning, and argumentation quality. Qualitative analyses of the interview transcripts supported the quantitative results in that participants very infrequently revealed patterns of content knowledge application. However, most of the participants did perceive the SSI as moral problems. We propose a “Threshold Model of Knowledge Transfer” to account for the relationship between content knowledge and argumentation quality. Implications for science education are discussed.

  9. Prevention of dentine erosion by brushing with anti-erosive toothpastes.

    PubMed

    Aykut-Yetkiner, Arzu; Attin, Thomas; Wiegand, Annette

    2014-07-01

    This in vitro study aimed to investigate the preventive effect of brushing with anti-erosive toothpastes compared to a conventional fluoride toothpaste on dentine erosion. Bovine dentine specimens (n=12 per subgroup) were eroded in an artificial mouth (6 days, 6×30 s/day) using either citric acid (pH:2.5) or a hydrochloric acid/pepsin solution (pH:1.6), simulating extrinsic or intrinsic erosive conditions, respectively. In between, the specimens were rinsed with artificial saliva. Twice daily, the specimens were brushed for 15 s in an automatic brushing machine at 2.5 N with a conventional fluoride toothpaste slurry (elmex, AmF) or toothpaste slurries with anti-erosive formulations: Apacare (NaF/1% nHAP), Biorepair (ZnCO3-HAP), Chitodent (Chitosan), elmex Erosionsschutz (NaF/AmF/SnCl2/Chitosan), mirasensitive hap (NaF/30% HAP), Sensodyne Proschmelz (NaF/KNO3). Unbrushed specimens served as control. Dentine loss was measured profilometrically and statistically analysed using two-way and one-way ANOVA followed by Scheffe's post hoc tests. RDA-values of all toothpastes were determined, and linear mixed models were applied to analyse the influence of toothpaste abrasivity on dentine wear (p<0.05). Dentine erosion of unbrushed specimens amounted to 5.1±1.0 μm (extrinsic conditions) and 12.9±1.4 μm (intrinsic conditions). All toothpastes significantly reduced dentine erosion by 24-67% (extrinsic conditions) and 21-40% (intrinsic conditions). Biorepair was least effective, while all other toothpastes were not significantly different from each other. Linear mixed models did not show a significant effect of the RDA-value of the respective toothpaste on dentine loss. Toothpastes with anti-erosive formulations reduced dentine erosion, especially under simulated extrinsic erosive conditions, but were not superior to a conventional fluoride toothpaste. Copyright © 2014 Elsevier Ltd. All rights reserved.
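
    As a rough illustration of the two-way ANOVA step described above (erosion protocol × toothpaste), a sketch with Python's statsmodels follows, assuming a hypothetical per-specimen data file with columns dentine_loss, acid and paste; the linear-mixed-model step relating abrasivity (RDA) to wear is analogous.

        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        # Hypothetical layout: one row per specimen with profilometric dentine loss (um),
        # the erosion protocol (citric acid vs. HCl/pepsin) and the toothpaste slurry.
        df = pd.read_csv("dentine_loss.csv")

        fit = smf.ols("dentine_loss ~ C(acid) * C(paste)", data=df).fit()
        print(anova_lm(fit, typ=2))  # main effects and the acid x paste interaction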

  10. Global optimization of small bimetallic Pd-Co binary nanoalloy clusters: a genetic algorithm approach at the DFT level.

    PubMed

    Aslan, Mikail; Davis, Jack B A; Johnston, Roy L

    2016-03-07

    The global optimisation of small bimetallic Pd-Co binary nanoalloys is systematically investigated using the Birmingham Cluster Genetic Algorithm (BCGA). The effect of size and composition on the structures, stability, magnetic and electronic properties, including the binding energies, second finite difference energies and mixing energies of Pd-Co binary nanoalloys, is discussed. A detailed analysis of Pd-Co structural motifs and segregation effects is also presented. The maximal mixing energy corresponds to Pd atom compositions for which the number of mixed Pd-Co bonds is maximised. Global minimum clusters are distinguished from transition states by vibrational frequency analysis. HOMO-LUMO gap, electric dipole moment and vibrational frequency analyses are made to enable correlation with future experiments.
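
    The stability descriptors named above have standard definitions for clusters; a minimal sketch with illustrative (not DFT-computed) energies shows one common convention for the mixing (excess) energy of an A_mB_n nanoalloy and for the second finite difference of an energy series.

        def excess_energy(e_cluster, e_pure_a, e_pure_b, m, n):
            """Mixing (excess) energy of an A_m B_n cluster relative to the pure
            A_N and B_N clusters of the same size N = m + n (one common convention)."""
            size = m + n
            return e_cluster - (m / size) * e_pure_a - (n / size) * e_pure_b

        def second_difference(energies):
            """Second finite difference D2(N) = E(N+1) + E(N-1) - 2 E(N);
            peaks mark relatively stable cluster sizes or compositions."""
            return [energies[i + 1] + energies[i - 1] - 2 * energies[i]
                    for i in range(1, len(energies) - 1)]

        # Illustrative total energies in eV, not results from the study.
        print(excess_energy(-21.4, -20.0, -22.0, m=3, n=3))
        print(second_difference([-10.1, -13.4, -16.2, -19.5, -22.1]))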

  11. An Assessment of Southern Ocean Water Masses and Sea Ice During 1988-2007 in a Suite of Interannual CORE-II Simulations

    NASA Technical Reports Server (NTRS)

    Downes, Stephanie M.; Farneti, Riccardo; Uotila, Petteri; Griffies, Stephen M.; Marsland, Simon J.; Bailey, David; Behrens, Erik; Bentsen, Mats; Bi, Daohua; Biastoch, Arne; hide

    2015-01-01

    We characterise the representation of the Southern Ocean water mass structure and sea ice within a suite of 15 global ocean-ice models run with the Coordinated Ocean-ice Reference Experiment Phase II (CORE-II) protocol. The main focus is the representation of the present (1988-2007) mode and intermediate waters, thus framing an analysis of winter and summer mixed layer depths; temperature, salinity, and potential vorticity structure; and temporal variability of sea ice distributions. We also consider the interannual variability over the same 20 year period. Comparisons are made between models as well as to observation-based analyses where available. The CORE-II models exhibit several biases relative to Southern Ocean observations, including an underestimation of the model mean mixed layer depths of mode and intermediate water masses in March (associated with greater ocean surface heat gain), and an overestimation in September (associated with greater high latitude ocean heat loss and a more northward winter sea-ice extent). In addition, the models have cold and fresh/warm and salty water column biases centred near 50 deg S. Over the 1988-2007 period, the CORE-II models consistently simulate spatially variable trends in sea-ice concentration, surface freshwater fluxes, mixed layer depths, and 200-700 m ocean heat content. In particular, sea-ice coverage around most of the Antarctic continental shelf is reduced, leading to a cooling and freshening of the near surface waters. The shoaling of the mixed layer is associated with increased surface buoyancy gain, except in the Pacific where sea ice is also influential. The models are in disagreement, despite the common CORE-II atmospheric state, in their spatial pattern of the 20-year trends in the mixed layer depth and sea-ice.

  12. The Relaxation Matrix for Symmetric Tops with Inversion Symmetry. II; Line Mixing Effects in the V1 Band of NH3

    NASA Technical Reports Server (NTRS)

    Boulet, C.; Ma, Q.

    2016-01-01

    Line mixing effects have been calculated in the ν1 parallel band of self-broadened NH3. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first order mixing parameters are discussed as well as some extensions of both theory and experiment.

  13. Design and analysis of flow velocity distribution inside a raceway pond using computational fluid dynamics.

    PubMed

    Pandey, Ramakant; Premalatha, M

    2017-03-01

    Open raceway ponds are widely adopted for cultivating microalgae on a large scale. The working depth of the raceway pond is the major parameter to be analysed for increasing the volume-to-surface-area ratio. The working depth is limited to 5-15 cm in conventional ponds, but in this analysis the working depth of the raceway pond is taken as 25 cm. In this work, the positioning of the paddle wheel is analysed and the corresponding Vertical Mixing Index values are calculated using CFD. The flow pattern along the length of the raceway pond at three different paddle wheel speeds is analysed for L/W ratios of 6, 8 and 10, respectively. The effect of clearance (C) between the rotor blade tip and the bottom surface is also analysed for four clearance conditions, i.e. C = 2, 5, 10 and 15. The moving reference frame method in Fluent is used for modelling the six-blade paddle wheel, and the realizable k-ε model is used for capturing turbulence characteristics. The overall objective of this work is to determine the geometry required to maintain a minimum flow velocity that avoids settling of algae at a 25 cm working depth. The geometry given in [13] is built using ANSYS DesignModeler and CFD results are generated using ANSYS FLUENT for the purpose of validation. Good agreement is observed between the CFD and experimental particle image velocimetry results, with a deviation of 7.23%.

  14. Pelagic effects of offshore wind farm foundations in the stratified North Sea

    NASA Astrophysics Data System (ADS)

    Floeter, Jens; van Beusekom, Justus E. E.; Auch, Dominik; Callies, Ulrich; Carpenter, Jeffrey; Dudeck, Tim; Eberle, Sabine; Eckhardt, André; Gloe, Dominik; Hänselmann, Kristin; Hufnagl, Marc; Janßen, Silke; Lenhart, Hermann; Möller, Klas Ove; North, Ryan P.; Pohlmann, Thomas; Riethmüller, Rolf; Schulz, Sabrina; Spreizenbarth, Stefan; Temming, Axel; Walter, Bettina; Zielinski, Oliver; Möllmann, Christian

    2017-08-01

    A recent increase in the construction of Offshore Wind Farms (OWFs) has initiated numerous environmental impact assessments and monitoring programs. These focus on sea mammals, seabirds, benthos or demersal fish, but generally ignore any potential effects OWFs may have on the pelagic ecosystem. The only work on the latter has been through modelling analyses, which predict localised impacts like enhanced vertical mixing leading to a decrease in seasonal stratification, as well as shelf-wide changes of tidal amplitudes. Here we provide, for the first time, empirical bio-physical data from an OWF. The data were obtained by towing a remotely operated vehicle (TRIAXUS ROTV) through two non-operating OWFs in the summer stratified North Sea. The undulating TRIAXUS transects provided high-resolution CTD data accompanied by oxygen and chlorophyll-a measurements. We provide empirical indication that vertical mixing is increased within the OWFs, leading to a doming of the thermocline and a subsequent transport of nutrients into the surface mixed layer (SML). Nutrients were taken up rapidly because underwater photosynthetically active radiation (PAR) enabled net primary production in the entire water column, especially within submesoscale chlorophyll-a pillars that were observed at regular intervals within the OWF regions. Video Plankton Recorder (VPR) images revealed distinct meroplankton distribution patterns in a copepod-dominated plankton community. Hydroacoustic records did not show any OWF effects on the distribution of pelagic fish. The results of a pre-OWF survey show, however, that it is difficult to fully separate the anthropogenic impacts from the natural variability.

  15. A mixed model framework for teratology studies.

    PubMed

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.
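
    The copula component can be illustrated on its own: a minimal sketch, assuming hypothetical marginal anomaly probabilities and a Gaussian copula, of how correlated binary anomaly indicators can be generated with a richer dependence structure than independent Bernoulli draws.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        # Hypothetical marginal anomaly probabilities and a latent correlation matrix.
        p = np.array([0.10, 0.05, 0.08])
        corr = np.array([[1.0, 0.6, 0.3],
                         [0.6, 1.0, 0.4],
                         [0.3, 0.4, 1.0]])

        # Gaussian copula: draw correlated standard normals and threshold each margin
        # at its own quantile so that P(anomaly_j = 1) = p_j.
        z = rng.multivariate_normal(mean=np.zeros(3), cov=corr, size=10000)
        anomalies = (z < norm.ppf(p)).astype(int)

        print(anomalies.mean(axis=0))                 # close to the target marginals
        print(np.corrcoef(anomalies, rowvar=False))   # induced dependence between anomalies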

  16. Posttraumatic Stress Disorder Symptom Clusters and the Interpersonal Theory of Suicide in a Large Military Sample.

    PubMed

    Pennings, Stephanie M; Finn, Joseph; Houtsma, Claire; Green, Bradley A; Anestis, Michael D

    2017-10-01

    Prior studies examining posttraumatic stress disorder (PTSD) symptom clusters and the components of the interpersonal theory of suicide (ITS) have yielded mixed results, likely stemming in part from the use of divergent samples and measurement techniques. This study aimed to expand on these findings by utilizing a large military sample, gold standard ITS measures, and multiple PTSD factor structures. Utilizing a sample of 935 military personnel, hierarchical multiple regression analyses were used to test the association between PTSD symptom clusters and the ITS variables. Additionally, we tested for indirect effects of PTSD symptom clusters on suicidal ideation through thwarted belongingness, conditional on levels of perceived burdensomeness. Results indicated that numbing symptoms are positively associated with both perceived burdensomeness and thwarted belongingness and hyperarousal symptoms (dysphoric arousal in the 5-factor model) are positively associated with thwarted belongingness. Results also indicated that hyperarousal symptoms (anxious arousal in the 5-factor model) were positively associated with fearlessness about death. The positive association between PTSD symptom clusters and suicidal ideation was inconsistent and modest, with mixed support for the ITS model. Overall, these results provide further clarity regarding the association between specific PTSD symptom clusters and suicide risk factors. © 2016 The American Association of Suicidology.
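
    A minimal sketch of the hierarchical (blockwise) regression strategy described above, assuming a hypothetical data frame with ITS variables and PTSD cluster scores; the incremental contribution of the PTSD block is assessed with a nested-model F test.

        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        # Hypothetical columns: ideation, burdensomeness, belongingness, and
        # PTSD cluster scores (intrusion, avoidance, numbing, hyperarousal).
        df = pd.read_csv("military_sample.csv")

        block1 = smf.ols("ideation ~ burdensomeness + belongingness", data=df).fit()
        block2 = smf.ols("ideation ~ burdensomeness + belongingness + "
                         "intrusion + avoidance + numbing + hyperarousal", data=df).fit()

        print(anova_lm(block1, block2))            # F test for the added PTSD block
        print(block2.rsquared - block1.rsquared)   # incremental R-squared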

  17. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    PubMed Central

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  18. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    PubMed

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  19. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  20. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.

  1. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    PubMed

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
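
    A minimal sketch of the general idea (not the authors' asymptotic procedure), assuming a hypothetical long-format repeated-measures data set: estimate a Box-Cox transformation, fit a linear mixed model on the transformed scale, and back-transform fitted values to the median scale of the original outcome.

        import pandas as pd
        from scipy import stats
        from scipy.special import inv_boxcox
        import statsmodels.formula.api as smf

        # Hypothetical long-format columns: cd4 (positive outcome), treat, visit, subject.
        df = pd.read_csv("cd4_long.csv")

        # Maximum-likelihood Box-Cox transformation of the outcome.
        df["y_bc"], lam = stats.boxcox(df["cd4"])

        # Linear mixed model on the transformed scale with a random subject intercept.
        fit = smf.mixedlm("y_bc ~ treat * C(visit)", data=df, groups=df["subject"]).fit()
        print(fit.summary())

        # The transform is monotone, so a back-transformed fitted value is an
        # estimate on the median scale of the untransformed outcome.
        print(inv_boxcox(fit.fittedvalues.to_numpy(), lam)[:5])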

  2. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials

    PubMed Central

    Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-01-01

    Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for small number of clusters in each intervention group. PMID:27177885

  3. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for small number of clusters in each intervention group.
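
    A minimal sketch of the multiple-imputation-plus-linear-mixed-model route discussed above, assuming a hypothetical individual-level data set from a cluster randomised trial; for brevity the imputation model ignores the cluster structure, and the treatment effect is pooled across imputations with Rubin's rules.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        # Hypothetical columns: outcome (partly missing), baseline, treat (0/1), cluster.
        df = pd.read_csv("cluster_trial.csv")
        m = 20
        estimates, variances = [], []

        for i in range(m):
            imputer = IterativeImputer(sample_posterior=True, random_state=i)
            cols = ["outcome", "baseline", "treat"]
            imputed = df.copy()
            imputed[cols] = imputer.fit_transform(df[cols])
            fit = smf.mixedlm("outcome ~ treat * baseline", data=imputed,
                              groups=imputed["cluster"]).fit()
            estimates.append(fit.params["treat"])
            variances.append(fit.bse["treat"] ** 2)

        # Rubin's rules: total variance = within-imputation + (1 + 1/m) * between-imputation.
        qbar = np.mean(estimates)
        t_var = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
        print(f"pooled treatment effect {qbar:.3f} (SE {np.sqrt(t_var):.3f})")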

  4. Effectiveness and cost-effectiveness of rehabilitation after lumbar disc surgery (REALISE): design of a randomised controlled trial

    PubMed Central

    2013-01-01

    Background Patients who undergo lumbar disc surgery for herniated discs are advised to follow one of two postoperative management strategies: a watchful waiting policy, or referral for rehabilitation immediately after discharge from the hospital. A direct comparison of the effectiveness and cost-effectiveness of these two strategies is lacking. Methods/Design A randomised controlled trial will be conducted with an economic evaluation alongside to assess the (cost-) effectiveness of rehabilitation after lumbar disc surgery. Two hundred patients aged 18–70 years with a clear indication for lumbar disc surgery of a single level herniated disc will be recruited and randomly assigned to either a watchful waiting policy for the first six weeks or exercise therapy starting immediately after discharge from the hospital. Exercise therapy will focus on resumption of activities of daily living and return to work. Therapists will tailor the intervention to the individual patient’s needs. All patients will be followed up by the neurosurgeon six weeks postoperatively. Main outcome measures are: functional status, pain intensity and global perceived recovery. Questionnaires will be completed preoperatively and at 3, 6, 9, 12 and 26 weeks after surgery. Data will be analysed according to the intention-to-treat principle, using a linear mixed model for continuous outcomes and a generalised mixed model for dichotomous outcomes. The economic evaluation will be performed from a societal perspective. Discussion The results of this trial may lead to a more consistent postoperative strategy for patients who will undergo lumbar disc surgery. Trial registration Netherlands Trial Register: NTR3156 PMID:23560810

  5. Vulnerability-specific stress generation: An examination of negative cognitive and interpersonal styles

    PubMed Central

    Liu, Richard T.; Alloy, Lauren B.; Mastin, Becky M.; Choi, Jimmy Y.; Boland, Elaine M.; Jenkins, Abby L.

    2014-01-01

    Although there is substantial evidence documenting the stress generation effect in depression (i.e., the tendency for depression-prone individuals to experience higher rates of life stress to which they contribute), additional research is required to advance current understanding of the specific types of dependent stress (i.e., events influenced by characteristics and attendant behaviors of the individual) relevant to this effect. The present study tested an extension of the stress generation hypothesis, in which the content of dependent stress that is produced by depression-prone individuals is contingent upon, and matches, the nature of their particular vulnerabilities. This extension was tested within the context of two cognitive models (i.e., hopelessness theory [Abramson, Metalsky, & Alloy, 1989] and Cole’s [1990, 1991] competency-based model) and two interpersonal models (i.e., Swann’s [1987] self-verification theory and Coyne’s [1976] interpersonal theory) of depression. Overall, support was obtained for vulnerability-specific stress generation. Specifically, in analyses across vulnerability domains, evidence of stress-generation specificity was found for all domain-specific cognitive vulnerabilities except self-perceived social competence. The within-domain analyses for cognitive vulnerabilities produced more mixed results, but were largely supportive. Additionally, excessive reassurance-seeking was specifically predictive of dependent stress in the social domain, and moderated, but did not mediate, the relation between negative inferential styles overall and in the interpersonal domain and their corresponding generated stress. Finally, no evidence was found for a stress generation effect with negative feedback-seeking. PMID:24679143

  6. Epidemiology of antibiotic-resistant wound infections from six countries in Africa

    PubMed Central

    Bebell, Lisa M; Meney, Carron; Valeri, Linda

    2017-01-01

    Introduction Little is known about the antimicrobial susceptibility of common bacteria responsible for wound infections from many countries in sub-Saharan Africa. Methods We performed a retrospective review of microbial isolates collected based on clinical suspicion of wound infection between 2004 and 2016 from Mercy Ships, a non-governmental organisation operating a single mobile surgical unit in Benin, Congo, Liberia, Madagascar, Sierra Leone and Togo. Antimicrobial resistant organisms of interest were defined as methicillin-resistant Staphylococcus aureus (MRSA) or Enterobacteriaceae resistant to third-generation cephalosporins. Generalised mixed-effects models accounting for repeated isolates in a patient, potential clustering by case mix for each field service, age, gender and country were used to test the hypothesis that rates of antimicrobial resistance differed between countries. Results 3145 isolates from repeated field services in six countries were reviewed. In univariate analyses, the highest proportion of MRSA was found in Benin (34.6%) and Congo (31.9%), while the lowest proportion was found in Togo (14.3%) and Madagascar (14.5%); country remained a significant predictor in multivariate analyses (P=0.002). In univariate analyses, the highest proportion of third-generation cephalosporin-resistant Enterobacteriaceae was found in Benin (35.8%) and lowest in Togo (14.3%) and Madagascar (16.3%). Country remained a significant predictor for antimicrobial-resistant isolates in multivariate analyses (P=0.009). Conclusion A significant proportion of isolates from wound cultures were resistant to first-line antimicrobials in each country. Though antimicrobial resistance isolates were not verified in a reference laboratory and these data may not be representative of all regions of the countries studied, differences in the proportion of antimicrobial-resistant isolates and resistance profiles between countries suggest site-specific surveillance should be a priority and local antimicrobial resistance profiles should be used to guide empiric antibiotic selection. PMID:29588863
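
    A rough sketch of a mixed-effects logistic model of this general kind (not the study's exact specification), assuming a hypothetical isolate-level data frame; statsmodels' variational Bayes mixed GLM is used here because it handles binary outcomes with clustered random effects.

        import pandas as pd
        from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

        # Hypothetical columns: mrsa (0/1), country, age, gender, patient, service.
        df = pd.read_csv("wound_isolates.csv")

        # Fixed effects for country, age and gender; random intercepts for repeated
        # isolates within a patient and for field service (case-mix clustering).
        model = BinomialBayesMixedGLM.from_formula(
            "mrsa ~ C(country) + age + C(gender)",
            {"patient": "0 + C(patient)", "service": "0 + C(service)"},
            data=df,
        )
        result = model.fit_vb()
        print(result.summary())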

  7. Mixed-grade rejection and its association with overt aggression, relational aggression, anxious-withdrawal, and psychological maladjustment.

    PubMed

    Bowker, Julie C; Etkin, Rebecca G

    2014-01-01

    The authors examined the associations between mixed-grade rejection (rejection by peers in a different school grade), anxious-withdrawal, aggression, and psychological adjustment in a middle school setting. Participants were 181 seventh-grade and 180 eighth-grade students (M age = 13.20 years, SD = 0.68 years) who completed peer nomination and self-report measures in their classes. Analyses indicated that in general, same- and mixed-grade rejection were related to overt and relational aggression, but neither type was related to anxious-withdrawal. Mixed-grade rejection was associated uniquely and negatively with self-esteem for seventh-grade boys, while increasing the loneliness associated with anxious-withdrawal. The results suggest that school-wide models of peer relations may be promising for understanding the ways in which different peer contexts contribute to adjustment in middle school settings.

  8. Boundary Layer Depth In Coastal Regions

    NASA Astrophysics Data System (ADS)

    Porson, A.; Schayes, G.

    Earlier studies of sea-breeze simulations have shown that the boundary layer depth is a relevant feature of the Planetary Boundary Layer that atmospheric models still struggle to diagnose properly. Based on observations made during the ESCOMPTE campaign over the Mediterranean Sea, different CBL and SBL height estimation procedures have been tested with a meso-scale model, TVM. The aim was to compare the critical points of the BL height determination computed from the turbulent kinetic energy profile with other standard estimates. Moreover, these results have been analysed with different mixing-length formulations. The sensitivity to the formulation is also analysed for a simple coastal configuration.

  9. Longitudinal mathematics development of students with learning disabilities and students without disabilities: a comparison of linear, quadratic, and piecewise linear mixed effects models.

    PubMed

    Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz

    2015-04-01

    Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
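
    A minimal sketch of the piecewise linear (one-knot spline) time coding used in such growth models, assuming a hypothetical long-format achievement data set and a single knot where the slope is allowed to change.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical columns: math (score), grade (time), ld (1 = learning disability), child.
        df = pd.read_csv("math_growth_long.csv")

        knot = 3.0                                         # hypothetical knot location
        df["t1"] = np.minimum(df["grade"], knot)           # slope before the knot
        df["t2"] = np.maximum(df["grade"] - knot, 0.0)     # additional slope after the knot

        # Random intercept and random slopes for both pieces; the LD indicator
        # shifts the intercept and modifies both slopes.
        fit = smf.mixedlm("math ~ (t1 + t2) * ld", data=df,
                          groups=df["child"], re_formula="~ t1 + t2").fit()
        print(fit.summary())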

  10. Experimental and mathematical model of the interactions in the mixed culture of links in the “producer-consumer” cycle

    NASA Astrophysics Data System (ADS)

    Pisman, T. I.

    2009-07-01

    The paper presents an experimental and mathematical model of the interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) in the "producer-consumer" aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the "consumer" component, feeding on the mixed algal culture of the "producer" component. It was found that metabolites of the alga Scenedesmus have an adverse effect on the reproduction of the ciliates P. caudatum. When this effect was taken into account, the results of the mathematical model were in qualitative agreement with the experimental results. For the "producer-consumer" biotic cycle, it was shown that coexistence is impossible in the mixed culture of invertebrates of the "consumer" component: the ciliates P. caudatum are driven out by the rotifers B. plicatilis.

  11. Precision calculations for h → WW/ZZ → 4 fermions in a singlet extension of the Standard Model with Prophecy4f

    NASA Astrophysics Data System (ADS)

    Altenkamp, Lukas; Boggia, Michele; Dittmaier, Stefan

    2018-04-01

    We consider an extension of the Standard Model by a real singlet scalar field with a ℤ2-symmetric Lagrangian and spontaneous symmetry breaking with vacuum expectation value for the singlet. Considering the lighter of the two scalars of the theory to be the 125 GeV Higgs particle, we parametrize the scalar sector by the mass of the heavy Higgs boson, a mixing angle α, and a scalar Higgs self-coupling λ12. Taking into account theoretical constraints from perturbativity and vacuum stability, we compute next-to-leading-order electroweak and QCD corrections to the decays h → WW/ZZ → 4 fermions of the light Higgs boson for some scenarios proposed in the literature. We formulate two renormalization schemes and investigate the conversion of the input parameters between the schemes, finding sizeable effects. Solving the renormalization-group equations for the MS-bar parameters α and λ12, we observe a significantly reduced scale and scheme dependence in the next-to-leading-order results. For some scenarios suggested in the literature, the total decay width for the process h → 4f is computed as a function of the mixing angle and compared to the width of a corresponding Standard Model Higgs boson, revealing deviations below 10%. Differential distributions do not show significant distortions by effects beyond the Standard Model. The calculations are implemented in the Monte Carlo generator Prophecy4f, which is ready for applications in data analyses in the framework of the singlet extension.

  12. Diet behaviour among young people in transition to adulthood (18-25 year olds): a mixed method study.

    PubMed

    Poobalan, Amudha S; Aucott, Lorna S; Clarke, Amanda; Smith, William Cairns S

    2014-01-01

    Background: Young people (18-25 years) during the adolescence/adulthood transition are vulnerable to weight gain and notoriously hard to reach. Despite increased levels of overweight/obesity in this age group, diet behaviour, a major contributor to obesity, is poorly understood. The purpose of this study was to explore diet behaviour among 18-25 year olds with influential factors including attitudes, motivators and barriers. Methods: An explanatory mixed method study design, based on health Behaviour Change Theories, was used. Those at University/college and in the community, including those Not in Education, Employment or Training (NEET), were included. An initial quantitative questionnaire survey underpinned by the Theory of Planned Behaviour and Social Cognitive Theory was conducted and the results from this were incorporated into the qualitative phase. Seven focus groups were conducted among similar young people, varying in education and socioeconomic status. Exploratory univariate analysis was followed by multi-staged modelling to analyse the quantitative data. 'Framework Analysis' was used to analyse the focus groups. Results: 1313 questionnaires were analysed. Self-reported overweight/obesity prevalence was 22%, increasing with age, particularly in males. Based on the survey, 40% of young people reported eating an adequate amount of fruits and vegetables and 59% eating regular meals, but 32% reported unhealthy snacking. Based on the statistical modelling, positive attitudes towards diet and high intention (89%) did not translate into healthy diet behaviour. From the focus group discussions, the main motivators for diet behaviour were 'self-appearance' and having 'variety of food'. There were mixed opinions on 'cost' of food and 'taste'. Conclusion: Elements deemed really important to young people have been identified. This mixed method study is the largest in this vulnerable and neglected group, covering a wide spectrum of the community. It provides an evidence base to inform tailored interventions for a healthy diet within this age group.

  13. Diet behaviour among young people in transition to adulthood (18–25 year olds): a mixed method study

    PubMed Central

    Poobalan, Amudha S.; Aucott, Lorna S.; Clarke, Amanda; Smith, William Cairns S.

    2014-01-01

    Background: Young people (18–25 years) during the adolescence/adulthood transition are vulnerable to weight gain and notoriously hard to reach. Despite increased levels of overweight/obesity in this age group, diet behaviour, a major contributor to obesity, is poorly understood. The purpose of this study was to explore diet behaviour among 18–25 year olds with influential factors including attitudes, motivators and barriers. Methods: An explanatory mixed method study design, based on health Behaviour Change Theories, was used. Those at University/college and in the community, including those Not in Education, Employment or Training (NEET), were included. An initial quantitative questionnaire survey underpinned by the Theory of Planned Behaviour and Social Cognitive Theory was conducted and the results from this were incorporated into the qualitative phase. Seven focus groups were conducted among similar young people, varying in education and socioeconomic status. Exploratory univariate analysis was followed by multi-staged modelling to analyse the quantitative data. ‘Framework Analysis’ was used to analyse the focus groups. Results: 1313 questionnaires were analysed. Self-reported overweight/obesity prevalence was 22%, increasing with age, particularly in males. Based on the survey, 40% of young people reported eating an adequate amount of fruits and vegetables and 59% eating regular meals, but 32% reported unhealthy snacking. Based on the statistical modelling, positive attitudes towards diet and high intention (89%) did not translate into healthy diet behaviour. From the focus group discussions, the main motivators for diet behaviour were ‘self-appearance’ and having ‘variety of food’. There were mixed opinions on ‘cost’ of food and ‘taste’. Conclusion: Elements deemed really important to young people have been identified. This mixed method study is the largest in this vulnerable and neglected group, covering a wide spectrum of the community. It provides an evidence base to inform tailored interventions for a healthy diet within this age group. PMID:25750826

  14. Effects of mixing on resolved and unresolved scales on stratospheric age of air

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Garny, Hella; Plöger, Felix; Jöckel, Patrick; Cai, Duy

    2017-06-01

    Mean age of air (AoA) is a widely used metric to describe the transport along the Brewer-Dobson circulation. We seek to untangle the effects of different processes on the simulation of AoA, using the chemistry-climate model EMAC (ECHAM/MESSy Atmospheric Chemistry) and the Chemical Lagrangian Model of the Stratosphere (CLaMS). Here, the effects of residual transport and two-way mixing on AoA are calculated. To do so, we calculate the residual circulation transit time (RCTT). The difference of AoA and RCTT is defined as aging by mixing. However, as diffusion is also included in this difference, we further use a method to directly calculate aging by mixing on resolved scales. Comparing these two methods of calculating aging by mixing allows for separating the effect of unresolved aging by mixing (which we term aging by diffusion in the following) in EMAC and CLaMS. We find that diffusion impacts AoA by making air older, but its contribution plays a minor role (order of 10 %) in all simulations. However, due to the different advection schemes of the two models, aging by diffusion has a larger effect on AoA and mixing efficiency in EMAC, compared to CLaMS. Regarding the trends in AoA, in CLaMS the AoA trend is negative throughout the stratosphere except in the Northern Hemisphere middle stratosphere, consistent with observations. This slight positive trend is neither reproduced in a free-running nor in a nudged simulation with EMAC - in both simulations the AoA trend is negative throughout the stratosphere. Trends in AoA are mainly driven by the contributions of RCTT and aging by mixing, whereas the contribution of aging by diffusion plays a minor role.

  15. Item Response Theory Models for Wording Effects in Mixed-Format Scales

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu

    2015-01-01

    Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…

  16. The effect of mixing method on tricalcium silicate-based cement.

    PubMed

    Duque, J A; Fernandes, S L; Bubola, J P; Duarte, M A H; Camilleri, J; Marciano, M A

    2018-01-01

    To evaluate the effect of three methods of mixing on the physical and chemical properties of tricalcium silicate-based cements. The materials evaluated were MTA Angelus and Portland cement with 20% zirconium oxide (PC-20-Zr). The cements were mixed using a 3 : 1 powder-to-liquid ratio. The mixing methods were manual (m), trituration (tr) and ultrasonic (us) activation. The materials were characterized by means of scanning electron microscope (SEM) and energy dispersive X-ray spectroscopy. Flowability was analysed according to ANSI/ADA 57/2012. Initial and final setting times were assessed following ASTM C266/08. Volume change was evaluated using a micro-CT volumetric method. Solubility was analysed according to ADA 57/2012. pH and calcium ion release were measured after 3, 24, 72 and 168 h. Statistical analysis was performed using two-way analysis of variance. The level of significance was set at P = 0.05. The SEM analysis revealed that ultrasonic activation was associated with a homogeneous distribution of particles. Flowability, volume change and initial setting time were not influenced by the mixing method (P > 0.05). Solubility was influenced by the mixing method (P < 0.05). For pH, at 168 h, significant differences were found between MTA-m and PC-20-Zr-m (P < 0.05). For calcium ion release, PC-20-Zr-tr had higher values than MTA-m at 3 h, and MTA-tr had higher values than PC-20-Zr-m at 168 h (P < 0.05). The ultrasonic and trituration methods led to higher calcium ion release and pH compared with manual mixing for all cements, whilst the ultrasonic method produced smaller particles for the PC-20-Zr cement. Flow, setting times and volume change were not influenced by the mixing method used; however, it did have an impact on solubility. © 2017 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  17. Physical Modelling of the Effect of Slag and Top-Blowing on Mixing in the AOD Process

    NASA Astrophysics Data System (ADS)

    Haas, Tim; Visuri, Ville-Valtteri; Kärnä, Aki; Isohookana, Erik; Sulasalmi, Petri; Eriç, Rauf Hürman; Pfeifer, Herbert; Fabritius, Timo

    The argon-oxygen decarburization (AOD) process is the most common process for refining stainless steel. High blowing rates and the resulting efficient mixing of the steel bath are characteristic of the AOD process. In this work, a 1:9-scale physical model was used to study mixing in a 150 t AOD vessel. Water, air and rapeseed oil were used to represent steel, argon and slag, respectively, while the dynamic similarity with the actual converter was maintained using the modified Froude number and the momentum number. Employing sulfuric acid as a tracer, the mixing times were determined on the basis of pH measurements according to the 97.5% criterion. The gas blowing rate and slag-steel volume ratio were varied in order to study their effect on the mixing time. The effect of top-blowing was also investigated. The results suggest that mixing time decreases as the modified Froude number of the tuyères increases and that the presence of a slag layer increases the mixing time. Furthermore, top-blowing was found to increase the mixing time both with and without the slag layer.
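
    A minimal sketch of the 97.5% mixing-time criterion mentioned above, assuming a hypothetical tracer (pH-derived concentration) time series: the mixing time is the moment after which the signal stays within ±2.5% of its final, fully mixed value.

        import numpy as np

        def mixing_time(t, c, tol=0.025):
            """Return the time after which the tracer signal c(t) remains within
            +/- tol (here 2.5%) of its final value."""
            c_final = c[-1]
            outside = np.abs(c - c_final) > tol * abs(c_final)
            if not outside.any():
                return t[0]
            return t[np.nonzero(outside)[0].max() + 1]

        # Illustrative damped approach to the mixed state (not measured data).
        t = np.linspace(0.0, 120.0, 601)                    # seconds
        c = 1.0 - 0.8 * np.exp(-t / 15.0) * np.cos(t / 5.0)
        print(f"97.5%-criterion mixing time: {mixing_time(t, c):.1f} s")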

  18. Relevance of workplace social mixing during influenza pandemics: an experimental modelling study of workplace cultures.

    PubMed

    Timpka, T; Eriksson, H; Holm, E; Strömgren, M; Ekberg, J; Spreco, A; Dahlström, Ö

    2016-07-01

    Workplaces are one of the most important regular meeting places in society. The aim of this study was to use simulation experiments to examine the impact of different workplace cultures on influenza dissemination during pandemics. The impact is investigated by experiments with defined social-mixing patterns at workplaces using semi-virtual models based on authentic sociodemographic and geographical data from a North European community (population 136 000). A simulated pandemic outbreak was found to affect 33% of the total population in the community with the reference academic-creative workplace culture; virus transmission at the workplace accounted for 10·6% of the cases. A model with a prevailing industrial-administrative workplace culture generated 11% lower incidence than the reference model, while the model with a self-employed workplace culture (also corresponding to a hypothetical scenario with all workplaces closed) produced 20% fewer cases. The model representing an academic-creative workplace culture with restricted workplace interaction generated 12% lower cumulative incidence compared to the reference model. The results display important theoretical associations between workplace social-mixing cultures and community-level incidence rates during influenza pandemics. Social interaction patterns at workplaces should be taken into consideration when analysing virus transmission patterns during influenza pandemics.

  19. Friendship Selection and Influence Processes for Physical Aggression and Prosociality: Differences between Single-Sex and Mixed-Sex Contexts.

    PubMed

    Dijkstra, Jan Kornelis; Berger, Christian

    2018-01-01

    The present study examined to what extent selection and influence processes for physical aggression and prosociality in friendship networks differed between sex-specific contexts (i.e., all-male, all-female, and mixed-sex classrooms), while controlling for perceived popularity. Whereas selection processes reflect how behaviors shape friendships, influence processes reveal the reversed pattern by indicating how friends affect individual behaviors. Data were derived from a longitudinal sample of early adolescents from Chile. Four all-male classrooms ( n  = 150 male adolescents), four all-female classrooms ( n  = 190 female adolescents), and eight mixed-sex classrooms ( n  = 272 students) were followed one year from grades 5 to 6 ( M age  = 13). Analyses were conducted by means of stochastic-actor-based modeling as implemented in RSIENA. Although it was expected that selection and influence effects for physical aggression and prosociality would vary by context, these effects showed remarkably similar trends across all-male, all-female, and mixed-sex classrooms, with physical aggression reducing and with prosociality increasing the number of nominations received as best friend in all-male and particularly all-female classrooms. Further, perceived popularity increased the number of friendship nominations received in all contexts. Influence processes were only found for perceived popularity, but not for physical aggression and prosociality in any of the three contexts. Together, these findings highlight the importance of both behaviors for friendship selection independent of sex-specific contexts, attenuating the implications of these gendered behaviors for peer relations.

  20. Complex segregation analysis of craniomandibular osteopathy in Deutsch Drahthaar dogs.

    PubMed

    Vagt, J; Distl, O

    2018-01-01

    This study investigated familial relationships among Deutsch Drahthaar dogs with craniomandibular osteopathy and examined the most likely mode of inheritance. Sixteen Deutsch Drahthaar dogs with craniomandibular osteopathy were diagnosed using clinical findings, radiography or computed tomography. All 16 dogs with craniomandibular osteopathy had one common ancestor. Complex segregation analyses rejected models explaining the segregation of craniomandibular osteopathy through random environmental variation, monogenic inheritance or an additive sex effect. Polygenic and mixed major gene models sufficiently explained the segregation of craniomandibular osteopathy in the pedigree analysis and offered the most likely hypotheses. The SLC37A2:c.1332C>T variant was not found in a sample of Deutsch Drahthaar dogs with craniomandibular osteopathy, nor in healthy controls. Craniomandibular osteopathy is an inherited condition in Deutsch Drahthaar dogs and the inheritance seems to be more complex than a simple Mendelian model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Mixing states of aerosols over four environmentally distinct atmospheric regimes in Asia: coastal, urban, and industrial locations influenced by dust.

    PubMed

    Ramachandran, S; Srivastava, Rohit

    2016-06-01

    Mixing can influence the optical, physical, and chemical characteristics of aerosols, which in turn can modify their life cycle and radiative effects. Assumptions on the mixing state can lead to uncertain estimates of aerosol radiative effects. To examine the effect of mixing on the aerosol characteristics, and their influence on radiative effects, aerosol mixing states are determined over four environmentally distinct locations (Karachi, Gwangju, Osaka, and Singapore) in Asia, an aerosol hot spot region, using measured spectral aerosol optical properties and an optical properties model. Aerosol optical depth (AOD), single scattering albedo (SSA), and asymmetry parameter (g) exhibit spectral, spatial, and temporal variations. Aerosol mixing states exhibit large spatial and temporal variations consistent with aerosol characteristics and aerosol type over each location. External mixing of aerosol species is unable to reproduce the measured SSA over Asia, thus providing strong evidence that aerosols exist in a mixed state. Mineral dust (MD) (core)-black carbon (BC) (shell) is one of the most preferred aerosol mixing states. Over locations influenced by biomass burning aerosols, BC (core)-water soluble (WS, shell) is a preferred mixing state, while dust gets coated by anthropogenic aerosols (BC, WS) over urban regions influenced by dust. MD (core)-sea salt (shell) mixing is found over Gwangju, corroborating the observations. Aerosol radiative forcing exhibits large seasonal and spatial variations consistent with features seen in aerosol optical properties and mixing states. TOA forcing is less negative/positive for the external mixing scenario because of lower SSA. Aerosol radiative forcing in Karachi is a factor of 2 higher when compared to Gwangju, Osaka, and Singapore. The influence of g on aerosol radiative forcing is insignificant. Results emphasize that, rather than prescribing one single aerosol mixing state in global climate models, regionally and temporally varying aerosol mixing states should be included for a more accurate assessment of aerosol radiative effects.
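
    As an illustration of why the mixing-state assumption matters for SSA, a minimal sketch with made-up optical properties (not values from the study): for an external mixture, the column single scattering albedo is the extinction-weighted average of the component SSAs.

        import numpy as np

        # Hypothetical component aerosol optical depths and single scattering albedos
        # at one wavelength: water-soluble, black carbon, mineral dust.
        aod = np.array([0.30, 0.03, 0.20])
        ssa = np.array([0.96, 0.25, 0.90])

        # External mixture: total scattering / total extinction = AOD-weighted SSA average.
        ssa_external = np.sum(ssa * aod) / np.sum(aod)
        print(f"external-mixture SSA = {ssa_external:.3f}")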

  2. Optimization of the time series NDVI-rainfall relationship using linear mixed-effects modeling for the anti-desertification area in the Beijing and Tianjin sandstorm source region

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie

    2018-05-01

    Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling and single-level modeling of soil units and sample points, respectively. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and an additional three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity (via CPP) and spatiotemporal correlation (via ARMA(1,1)) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R2 = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
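
    A rough sketch of a two-level random-effects structure of this general kind (sample points nested within soil units) with Python's statsmodels; the variance functions and ARMA(1,1) residual correlation used in the study are not reproduced here, since MixedLM assumes independent, homoscedastic residuals.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical columns: ndvi (annual accumulated), rain (Sep-Aug accumulated),
        # soil_unit, and point (sample-point id nested within soil_unit).
        df = pd.read_csv("ndvi_rainfall.csv")

        # Random intercept for each soil unit plus a nested variance component
        # for sample points within soil units.
        fit = smf.mixedlm(
            "ndvi ~ rain",
            data=df,
            groups=df["soil_unit"],
            re_formula="1",
            vc_formula={"point": "0 + C(point)"},
        ).fit()
        print(fit.summary())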

  3. The Martian atmospheric planetary boundary layer stability, fluxes, spectra, and similarity

    NASA Technical Reports Server (NTRS)

    Tillman, James E.

    1994-01-01

    This is the first analysis of the high-frequency data from the Viking lander; spectra of wind in the Martian atmospheric surface layer, along with the diurnal variation of the height of the mixed surface layer, are calculated for the first time for Mars. Heat and momentum fluxes, stability, and z(sub 0) are estimated for early spring, from a surface temperature model and from Viking Lander 2 temperatures and winds at 44 deg N, using Monin-Obukhov similarity theory. The afternoon maximum height of the mixed layer for these seasons and conditions is estimated to lie between 3.6 and 9.2 km. Estimation of this height is of primary importance to all models of the boundary layer and to Martian General Circulation Models (GCM's). Model spectra for two measuring heights and three surface roughnesses are calculated using the depth of the mixed layer and the surface layer parameters, and flow distortion by the lander is also taken into account. These experiments indicate that z(sub 0) probably lies between 1.0 and 3.0 cm, and most likely is closer to 1.0 cm. The spectra are adjusted to simulate aliasing and high-frequency rolloff, the latter caused both by the sensor response and by the large Kolmogorov length on Mars. Since the spectral models depend on the surface parameters, including the estimated surface temperature, their agreement with the calculated spectra indicates that the surface layer estimates are self-consistent. This agreement is especially noteworthy in that the inertial subrange is virtually absent in the Martian atmosphere at this height, due to the large Kolmogorov length scale. These analyses extend the range of applicability of terrestrial results and demonstrate that it is possible to estimate the effects of severe aliasing of wind measurements and to produce models that agree well with the measured spectra. The results show that similarity theory developed for Earth applies to Mars, and that the spectral models are universal.
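
    A worked sketch of the neutral limit of Monin-Obukhov similarity referenced above: the log law relates a single-level wind to the friction velocity and surface momentum flux for a given roughness length. The heights, wind speed, and density below are illustrative assumptions, not Viking values, and the stability corrections used in the full analysis are omitted.

```python
# Illustrative sketch only (not Viking Lander values): in the neutral limit of
# Monin-Obukhov similarity, u(z) = (u*/k) * ln(z/z0), so a single-level wind
# gives the friction velocity u* and the momentum flux tau = rho * u*^2.
# Stability corrections psi_m(z/L) are omitted.
import math

k = 0.4       # von Karman constant
z = 1.6       # measurement height (m), illustrative
z0 = 0.01     # roughness length (m), within the 1-3 cm range discussed
u = 5.0       # wind speed at z (m/s), illustrative
rho = 0.017   # approximate Martian near-surface air density (kg/m^3)

u_star = k * u / math.log(z / z0)
tau = rho * u_star ** 2
print(f"u* = {u_star:.3f} m/s, momentum flux = {tau:.5f} N/m^2")
```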

  4. CARES Helps Explain Secondary Organic Aerosols

    ScienceCinema

    Zaveri, Rahul

    2018-01-16

    What happens when urban man-made pollution mixes with what we think of as pristine forest air? To know more about what this interaction means for the climate, the Carbonaceous Aerosol and Radiative Effects Study, or CARES, field campaign was designed in 2010. The sampling strategy during CARES was coordinated with CalNex 2010, another major field campaign that was planned in California in 2010 by the California Air Resources Board (CARB), the National Oceanic and Atmospheric Administration (NOAA), and the California Energy Commission (CEC). "We found two things. When urban pollution mixes with forest pollutions we get more secondary organic aerosols," said Rahul Zaveri, FCSD scientist and project lead on CARES. "SOAs are thought to be formed primarily from forest emissions but only when they interact with urban emissions. The data is saying that there will be climate cooling over the central California valley because of these interactions." Knowledge gained from detailed analyses of data gathered during the CARES campaign, together with laboratory experiments, is being used to improve existing climate models.

  5. Amendment to "Analytical Solution for the Convectively-Mixed Atmospheric Boundary Layer": Inclusion of Subsidence

    NASA Astrophysics Data System (ADS)

    Ouwersloot, H. G.; de Arellano, J. Vilà-Guerau

    2013-09-01

    In Ouwersloot and Vilà-Guerau de Arellano (Boundary-Layer Meteorol. doi: 10.1007/s10546-013-9816-z , 2013, this issue), the analytical solutions for the boundary-layer height and scalar evolutions are derived for the convective boundary layer, based on the prognostic equations of mixed-layer slab models without taking subsidence into account. Here, we include and quantify the added effect of subsidence if the subsidence velocity scales linearly with height throughout the atmosphere. This enables analytical analyses for a wider range of observational cases. As a demonstration, the sensitivity of the boundary-layer height and the potential temperature jump to subsidence and the free tropospheric stability is graphically presented. The new relations show the importance of the temporal distribution of the surface buoyancy flux in determining the evolution if there is subsidence.
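
    A minimal numerical sketch of the kind of slab-model growth equation discussed above, with a subsidence velocity that scales linearly with height (w_s = -D h). The encroachment-plus-entrainment form and all parameter values are assumptions for illustration; the paper itself derives analytical, not numerical, solutions.

```python
# Minimal numerical sketch (not the paper's analytical solution): boundary-layer
# growth dh/dt = (1 + 2*beta) * F0(t) / (gamma * h) - D * h, where the last term
# is subsidence scaling linearly with height. Parameter values are illustrative.
import math

beta = 0.2      # entrainment flux ratio (assumed)
gamma = 0.006   # free-tropospheric potential temperature lapse rate (K/m)
D = 5e-6        # large-scale divergence (1/s), so w_s = -D*h
h = 200.0       # initial boundary-layer height (m)
dt = 60.0       # time step (s)

for step in range(int(12 * 3600 / dt)):                  # integrate over 12 h
    t = step * dt
    F0 = max(0.0, 0.15 * math.sin(math.pi * t / (12 * 3600)))  # surface buoyancy flux (K m/s)
    dhdt = (1 + 2 * beta) * F0 / (gamma * h) - D * h
    h += dhdt * dt

print(f"boundary-layer height after 12 h: {h:.0f} m")
```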

  6. The effect of gum tragacanth on the rheological properties of salep based ice cream mix.

    PubMed

    Kurt, Abdullah; Cengiz, Alime; Kahyaoglu, Talip

    2016-06-05

    The influence of gum tragacanth (GT) concentration (0-0.5%, w/w) on the thixotropic, dynamic, and creep-recovery rheological properties of milk- and water-based salep ice cream mixes was investigated. These properties were used to evaluate the viscoelastic behavior and internal structure of the ice cream network. The textural properties of the ice cream were also evaluated. Thixotropy values of the samples decreased with increasing GT concentration. The dynamic and creep-recovery analyses showed that GT addition increased both the elastic and the viscous behavior of the ice cream. The increase in the Burgers model parameters with GT concentration indicated a network with higher resistance to stress and more elastic behavior of the samples. The Cox-Merz rule could be applied by using a shift factor (α). GT also led to an increase in Young's modulus and the stickiness of the ice creams. The results highlight the possible application of GT as a valuable ingredient to improve the structural properties of ice cream. Copyright © 2016 Elsevier Ltd. All rights reserved.
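
    A hedged sketch of how Burgers-model parameters of the kind reported above are typically obtained: fitting the four-parameter Burgers creep compliance to creep data with scipy. The data below are synthetic and the parameter values are illustrative, not the study's measurements.

```python
# Sketch only: fitting the four-element Burgers creep compliance
# J(t) = J0 + J1*(1 - exp(-t/lam)) + t/eta0 to synthetic creep data.
# Real ice-cream-mix creep measurements would replace the synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def burgers_compliance(t, J0, J1, lam, eta0):
    return J0 + J1 * (1.0 - np.exp(-t / lam)) + t / eta0

t = np.linspace(0.1, 180.0, 60)                          # time (s)
true = burgers_compliance(t, 0.002, 0.004, 20.0, 5.0e4)  # synthetic "truth"
J_obs = true * (1 + 0.02 * np.random.default_rng(1).standard_normal(t.size))

p0 = (0.001, 0.001, 10.0, 1.0e4)                         # rough starting guesses
params, _ = curve_fit(burgers_compliance, t, J_obs, p0=p0)
print(dict(zip(["J0", "J1", "lam", "eta0"], params)))
```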

  7. Sap flux density and stomatal conductance of European beech and common oak trees in pure and mixed stands during the summer drought of 2003

    NASA Astrophysics Data System (ADS)

    Jonard, F.; André, F.; Ponette, Q.; Vincke, C.; Jonard, M.

    2011-10-01

    Sap flux density of European beech and common oak trees was determined from sap flow measurements in pure and mixed stands during the summer drought of 2003. Eight trees per species and per stand were equipped with sap flow sensors. Soil water content was monitored in each stand at different depths by using time-domain reflectometry (TDR). Leaf area index and vertical root distribution were also investigated during the growing season. From sap flux density (SFD) data, mean stomatal conductance of individual trees (Gs) was calculated by inverting the Penman-Monteith equation. Linear mixed models were developed to analyse the effects of species and stand type (pure vs. mixed) on SFD and Gs and on their sensitivity to environmental variables (vapour pressure deficit (D), incoming solar radiation (RG), and relative extractable water (REW)). For reference environmental conditions, we did not find any tree species or stand type effects on SFD. The sensitivity of SFD to D was higher for oak than for beech in the pure stands (P < 0.0001), but the mixing of species reduced it for oak and increased it for beech, so that the sensitivity of SFD to D became higher for beech than for oak in the mixed stand (P < 0.0001). At reference conditions, Gs was significantly higher for beech compared to oak (2.1 and 1.8 times in the pure and mixed stand, respectively). This was explained by a larger beech sapwood-to-leaf area ratio compared to oak. The sensitivity of Gs to REW was higher for beech than for oak and was ascribed to a higher vulnerability of beech to air embolism and to a more sensitive stomatal regulation. The sensitivity of beech Gs to REW was lower in the mixed than in the pure stand, which could be explained by a better sharing of the resources in the mixture, by facilitation processes (hydraulic lift), and by rainfall partitioning in favour of beech.
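
    A heavily simplified sketch of the idea behind deriving Gs from sap flow: under a well-coupled (decoupled-from-radiation) canopy assumption, the Penman-Monteith equation reduces to Gs ≈ γ λE / (ρa cp D). This is not the study's full inversion, which also requires net radiation and aerodynamic conductance; all values below are illustrative assumptions.

```python
# Simplified sketch (not the study's full Penman-Monteith inversion): under a
# well-coupled canopy assumption, Gs ~= gamma * lambda*E / (rho_a * cp * D).
# Values below are illustrative only.
gamma = 66.0        # psychrometric constant (Pa/K)
cp = 1010.0         # specific heat of air (J/kg/K)
rho_a = 1.2         # air density (kg/m^3)
lambda_v = 2.45e6   # latent heat of vaporization (J/kg)

E = 3.5e-5          # transpiration per unit leaf area (kg/m^2/s), e.g. SFD scaled by sapwood:leaf area
D = 1500.0          # vapour pressure deficit (Pa)

Gs = gamma * lambda_v * E / (rho_a * cp * D)   # m/s
print(f"Gs ~= {Gs * 1000:.2f} mm/s")
```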

  8. Emotional intelligence and affective events in nurse education: A narrative review.

    PubMed

    Lewis, Gillian M; Neville, Christine; Ashkanasy, Neal M

    2017-06-01

    To investigate the current state of knowledge about emotional intelligence and affective events that arise during nursing students' clinical placement experiences. Narrative literature review. CINAHL, MEDLINE, PsycINFO, Scopus, Web of Science, ERIC and APAIS-Health databases were searched for articles published in English between 1990 and 2016. Data extraction from, and constant comparative analysis of, ten (10) research articles. We found four main themes: (1) emotional intelligence buffers stress; (2) emotional intelligence reduces anxiety associated with end-of-life care; (3) emotional intelligence promotes effective communication; and (4) emotional intelligence improves nursing performance. The articles we analysed adopted a variety of emotional intelligence models. Using the Ashkanasy and Daus "three-stream" taxonomy (Stream 1: ability models; Stream 2: self-report; Stream 3: mixed models), we found that Stream 2 self-report measures were the most popular, followed by Stream 3 mixed-model measures. None of the studies we surveyed used the Stream 1 approach. Findings nonetheless indicated that emotional intelligence was important in maintaining physical and psychological well-being. We concluded that developing emotional intelligence should be a useful adjunct to improve academic and clinical performance and to reduce the risk of emotional distress during clinical placement experiences. We call for more consistency in the use of emotional intelligence tests as a means to create an empirical evidence base in the field of nurse education. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    PubMed

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

    In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of the repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we establish Bayesian joint models that account for all of these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
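
    A hedged sketch of the basic quantile-regression idea only, using statsmodels: regressing (log) viral load on CD4 count at several quantiles. The column and file names are hypothetical, and the paper's Bayesian partially linear mixed-effects joint model, left-censoring, and measurement-error components are not reproduced here.

```python
# Sketch of plain quantile regression (not the paper's Bayesian joint model):
# regress log10 viral load on CD4 count at several quantiles. Column names and
# the input file are hypothetical; censoring, measurement error, and random
# effects are ignored here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("macs_subset.csv")   # hypothetical file with 'log_rna' and 'cd4'

for q in (0.25, 0.50, 0.75):
    fit = smf.quantreg("log_rna ~ cd4", df).fit(q=q)
    print(f"tau={q}: CD4 slope = {fit.params['cd4']:.4f}")
```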

  10. Neutrinos and flavor symmetries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanimoto, Morimitsu

    2015-07-15

    We discuss the recent progress of flavor models with non-Abelian discrete symmetry in the lepton sector, focusing on θ13 and the CP violating phase. In both the direct and the indirect approach to the flavor symmetry, a non-vanishing θ13 is predictable. A flavor symmetry combined with a generalised CP symmetry can also predict the CP violating phase. We present phenomenological analyses of neutrino mixing for typical flavor models.
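
    For orientation, a sketch of the standard (PDG) parameterization of the PMNS lepton mixing matrix, showing where θ13 and the Dirac CP phase δ enter; |U_e3| = sin θ13. The angle values below are rough illustrative inputs, not the predictions of any particular flavor model discussed in the paper.

```python
# Illustrative sketch: the standard parameterization of the PMNS mixing matrix.
# Angle values are illustrative, not flavor-model predictions.
import numpy as np

def pmns(theta12, theta23, theta13, delta):
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    em, ep = np.exp(-1j * delta), np.exp(1j * delta)
    return np.array([
        [c12 * c13,                         s12 * c13,                        s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,  c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,  -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ])

U = pmns(np.radians(33.5), np.radians(45.0), np.radians(8.5), np.radians(250.0))
print("|U_e3| = sin(theta13) ->", abs(U[0, 2]))   # theta13 enters via this element
```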

  11. Molecular Modeling and Physicochemical Properties of Supramolecular Complexes of Limonene with α- and β-Cyclodextrins.

    PubMed

    Dos Passos Menezes, Paula; Dos Santos, Polliana Barbosa Pereira; Dória, Grace Anne Azevedo; de Sousa, Bruna Maria Hipólito; Serafini, Mairim Russo; Nunes, Paula Santos; Quintans-Júnior, Lucindo José; de Matos, Iara Lisboa; Alves, Péricles Barreto; Bezerra, Daniel Pereira; Mendonça Júnior, Francisco Jaime Bezerra; da Silva, Gabriel Francisco; de Aquino, Thiago Mendonça; de Souza Bento, Edson; Scotti, Marcus Tullius; Scotti, Luciana; de Souza Araujo, Adriano Antunes

    2017-02-01

    This study evaluated three different methods for the formation of an inclusion complex between alpha- and beta-cyclodextrin (α- and β-CD) and limonene (LIM), with the goal of improving the physicochemical properties of limonene. The study samples were prepared through physical mixing (PM), paste complexation (PC), and slurry complexation (SC) methods at a molar ratio of 1:1 (cyclodextrin:limonene). The prepared complexes were evaluated with thermogravimetry/derivative thermogravimetry, infrared spectroscopy, X-ray diffraction, complexation efficiency through gas chromatography/mass spectrometry analyses, molecular modeling, and nuclear magnetic resonance. The results showed that the physical mixing procedure did not produce complexation, whereas the paste and slurry methods produced inclusion complexes, which demonstrated interactions outside of the cavity of the CDs. However, the paste obtained with β-cyclodextrin did not demonstrate complexation in the gas chromatographic technique because, after extraction, most of the limonene was either surface-adsorbed by β-cyclodextrin or volatilized during the procedure. We conclude that paste complexation and slurry complexation are effective and economical methods for improving the physicochemical characteristics of limonene and could have important pharmacological applications owing to the increase in solubility.

  12. Mixed-effects Gaussian process functional regression models with application to dose-response curve prediction.

    PubMed

    Shi, J Q; Wang, B; Will, E J; West, R M

    2012-11-20

    We propose a new semiparametric model for functional regression analysis, combining a parametric mixed-effects model with a nonparametric Gaussian process regression model, namely a mixed-effects Gaussian process functional regression model. The parametric component can provide explanatory information between the response and the covariates, whereas the nonparametric component can add nonlinearity. We can model the mean and covariance structures simultaneously, combining the information borrowed from other subjects with the information collected from each individual subject. We apply the model to dose-response curves that describe changes in the responses of subjects for differing levels of the dose of a drug or agent and have a wide application in many areas. We illustrate the method for the management of renal anaemia. An individual dose-response curve is improved when more information is included by this mechanism from the subject/patient over time, enabling a patient-specific treatment regime. Copyright © 2012 John Wiley & Sons, Ltd.
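
    A hedged two-stage approximation of the modelling idea described above (the paper estimates the parametric mixed-effects part and the Gaussian process jointly, which this sketch does not do): fit a linear mixed-effects model for the response versus dose with a random intercept per subject, then model the residual nonlinearity with a Gaussian process over dose. Column and file names are hypothetical.

```python
# Two-stage sketch only (not the authors' joint estimation): (1) linear
# mixed-effects model with a random intercept per subject, (2) Gaussian process
# on the residuals as a function of dose. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

df = pd.read_csv("dose_response.csv")   # hypothetical: columns response, dose, subject

lme = smf.mixedlm("response ~ dose", df, groups="subject").fit()
resid = df["response"] - lme.fittedvalues

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(df[["dose"]].to_numpy(), resid.to_numpy())

pred_mean, pred_sd = gp.predict(df[["dose"]].to_numpy(), return_std=True)
print(pred_mean[:5], pred_sd[:5])   # nonparametric correction and its uncertainty
```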

  13. Immediate changes in widespread pressure pain sensitivity, neck pain, and cervical range of motion after cervical or thoracic thrust manipulation in patients with bilateral chronic mechanical neck pain: a randomized clinical trial.

    PubMed

    Martínez-Segura, Raquel; De-la-Llave-Rincón, Ana I; Ortega-Santiago, Ricardo; Cleland, Joshua A; Fernández-de-Las-Peñas, César

    2012-09-01

    Randomized clinical trial. To compare the effects of cervical versus thoracic thrust manipulation in patients with bilateral chronic mechanical neck pain on pressure pain sensitivity, neck pain, and cervical range of motion (CROM). Evidence suggests that spinal interventions can stimulate descending inhibitory pain pathways. To our knowledge, no study has investigated the neurophysiological effects of thoracic thrust manipulation in individuals with bilateral chronic mechanical neck pain, including widespread changes in pressure sensitivity. Ninety patients (51% female) were randomly assigned to 1 of 3 groups: cervical thrust manipulation on the right, cervical thrust manipulation on the left, or thoracic thrust manipulation. Pressure pain thresholds (PPTs) over the C5-6 zygapophyseal joint, lateral epicondyle, and tibialis anterior muscle, neck pain (11-point numeric pain rating scale), and cervical spine range of motion (CROM) were collected at baseline and 10 minutes after the intervention by an assessor blinded to the treatment allocation of the patients. Mixed-model analyses of covariance were used to examine the effects of the treatment on each outcome variable, with group as the between-subjects variable, time and side as the within-subject variables, and gender as the covariate. The primary analysis was the group-by-time interaction. No significant interactions were found with the mixed-model analyses of covariance for PPT level (C5-6, P>.210; lateral epicondyle, P>.186; tibialis anterior muscle, P>.268), neck pain intensity (P = .923), or CROM (flexion, P = .700; extension, P = .387; lateral flexion, P>.672; rotation, P>.192) as dependent variables. All groups exhibited similar changes in PPT, neck pain, and CROM (all, P<.001). Gender did not influence the main effects or the interaction effects in the analyses of the outcomes (P>.10). The results of the current randomized clinical trial suggest that cervical and thoracic thrust manipulation induce similar changes in PPT, neck pain intensity, and CROM in individuals with bilateral chronic mechanical neck pain. However, changes in PPT and CROM were small and did not surpass their respective minimal detectable change values. Further, because we did not include a control group, we cannot rule out a placebo effect of the thrust interventions on the outcomes. Therapy, level 1b. J Orthop Sports Phys Ther 2012;42(9):806-814, Epub 18 June 2012. doi:10.2519/jospt.2012.4151.
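
    A hedged sketch of the kind of mixed-model analysis of covariance described above, with a group-by-time interaction, gender as a covariate, and a random intercept per patient. Column and file names are hypothetical, and this is not the authors' exact specification (which also treated side as a within-subject factor).

```python
# Hedged sketch (not the authors' exact model): mixed-model ANCOVA for pressure
# pain threshold with group-by-time interaction, gender as covariate, and a
# random intercept per patient. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ppt_long.csv")   # long format: ppt, group, time, gender, patient_id

m = smf.mixedlm("ppt ~ C(group) * C(time) + C(gender)", df, groups="patient_id")
fit = m.fit()
print(fit.summary())   # the group-by-time terms address the primary hypothesis
```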

  14. Sensitivity of the ocean overturning circulation to wind and mixing: theoretical scalings and global ocean models

    NASA Astrophysics Data System (ADS)

    Nikurashin, Maxim; Gunn, Andrew

    2017-04-01

    The meridional overturning circulation (MOC) is a planetary-scale oceanic flow that is of direct importance to the climate system: it transports heat meridionally and regulates the exchange of CO2 with the atmosphere. The MOC is forced by wind, heat and freshwater fluxes at the surface, and turbulent mixing in the ocean interior. A number of conceptual theories for the sensitivity of the MOC to changes in forcing have recently been developed and tested with idealized numerical models. However, the skill of these simple conceptual theories in describing the MOC simulated with higher-complexity global models remains largely unknown. In this study, we present a systematic comparison of the theoretical and modelled sensitivity of the MOC and the associated deep ocean stratification to vertical mixing and southern hemisphere westerlies. The results show that theories that simplify the ocean into a single-basin, zonally symmetric box are generally in good agreement with a realistic, global ocean circulation model. Some disagreement occurs in the abyssal ocean, where complex bottom topography is not taken into account by the simple theories. Distinct regimes, in which the MOC has a different sensitivity to wind or mixing, as predicted by the simple theories, are also clearly shown by the global ocean model. The sensitivity of the Indo-Pacific, Atlantic, and global basins is analysed separately to validate the conceptual understanding of the upper and lower overturning cells in the theory.
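
    A back-of-envelope sketch of two textbook scalings that underlie conceptual theories of this kind: a wind-driven (Ekman) overturning, ψ_wind = τ Lx / (ρ0 |f|), and a mixing-driven upwelling, ψ_mix = κ A / h. The theories compared in the study are more elaborate; all values below are illustrative.

```python
# Back-of-envelope sketch of standard MOC scalings (illustrative values only;
# the conceptual theories tested in the study are more elaborate).
tau = 0.15        # Southern Ocean zonal wind stress (N/m^2)
Lx = 2.0e7        # zonal extent at Drake Passage latitudes (m)
rho0 = 1027.0     # reference density (kg/m^3)
f = 1.0e-4        # |Coriolis parameter| (1/s)

kappa = 1.0e-4    # diapycnal diffusivity (m^2/s)
A = 3.0e14        # low-latitude ocean area (m^2)
h = 1000.0        # stratification depth scale (m)

psi_wind = tau * Lx / (rho0 * f)   # wind-driven overturning (m^3/s)
psi_mix = kappa * A / h            # mixing-driven upwelling (m^3/s)
print(f"wind-driven: {psi_wind / 1e6:.1f} Sv, mixing-driven: {psi_mix / 1e6:.1f} Sv")
```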

  15. Posttest Analyses of the Steel Containment Vessel Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costello, J.F.; Hessheimer, M.F.; Ludwigsen, J.S.

    A high pressure test of a scale model of a steel containment vessel (SCV) was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scale model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. This test is part of a program to investigate the response of representative models of nuclear containment structures to pressure loads beyond the design basis accident. The posttest analyses of this test focused on three areas where the pretest analysis effort did not adequately predict the model behavior during the test. These areas are the onset of global yielding, the strain concentrations around the equipment hatch, and the strain concentrations that led to a small tear near a weld relief opening that was not modeled in the pretest analysis.

  16. Effect Size for Token Economy Use in Contemporary Classroom Settings: A Meta-Analysis of Single-Case Research

    ERIC Educational Resources Information Center

    Soares, Denise A.; Harrison, Judith R.; Vannest, Kimberly J.; McClelland, Susan S.

    2016-01-01

    Recent meta-analyses of the effectiveness of token economies (TEs) report insufficient quality in the research or mixed effects in the results. This study examines the contemporary (post-Public Law 94-142) peer-reviewed published single-case research evaluating the effectiveness of TEs. The results are stratified across quality of demonstrated…

  17. Impact of Antarctic mixed-phase clouds on climate

    DOE PAGES

    Lawson, R. Paul; Gettelman, Andrew

    2014-12-08

    Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., clouds consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. In this paper, we modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effect (CRE) over Antarctica is increased by +7.4 W m-2, and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. Finally, these sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than -20 °C.
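
    A hedged sketch of how a net cloud radiative effect of the kind quoted above is typically diagnosed from all-sky and clear-sky top-of-atmosphere fluxes. The flux numbers below are illustrative, not CESM output.

```python
# Sketch of the standard TOA cloud radiative effect (CRE) diagnostic from
# all-sky and clear-sky fluxes; the numbers are illustrative, not model output.
def net_cre(sw_net_allsky, sw_net_clear, olr_allsky, olr_clear):
    """All fluxes in W/m^2; SW net is downward-positive, OLR is outgoing LW."""
    cre_sw = sw_net_allsky - sw_net_clear   # usually negative (clouds reflect SW)
    cre_lw = olr_clear - olr_allsky         # usually positive (clouds trap LW)
    return cre_sw + cre_lw

print(net_cre(sw_net_allsky=95.0, sw_net_clear=110.0,
              olr_allsky=165.0, olr_clear=185.0))
# -> -15 + 20 = +5 W/m^2 net warming effect in this illustrative case
```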

  18. Differences in Mortality among Heroin, Cocaine, and Methamphetamine Users: A Hierarchical Bayesian Approach

    PubMed Central

    Liang, Li-Jung; Huang, David; Brecht, Mary-Lynn; Hser, Yih-ing

    2010-01-01

    Studies examining differences in mortality among long-term drug users have been limited. In this paper, we introduce a Bayesian framework that jointly models survival data using a Weibull proportional hazard model with frailty, and substance and alcohol data using mixed-effects models, to examine differences in mortality among heroin, cocaine, and methamphetamine users from five long-term follow-up studies. The traditional approach to analyzing combined survival data from numerous studies assumes that the studies are homogeneous, thus the estimates may be biased due to unobserved heterogeneity among studies. Our approach allows us to structurally combine the data from different studies while accounting for correlation among subjects within each study. Markov chain Monte Carlo facilitates the implementation of Bayesian analyses. Despite the complexity of the model, our approach is relatively straightforward to implement using WinBUGS. We demonstrate our joint modeling approach to the combined data and discuss the results from both approaches. PMID:21052518
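
    A heavily simplified sketch of one building block of the approach described above: maximum likelihood for a Weibull proportional hazards model with right censoring, for a single study and without the frailty term, the mixed-effects covariate submodels, or Bayesian (WinBUGS) estimation used in the paper. The data below are synthetic.

```python
# Simplified sketch (single study, no frailty, no Bayesian estimation): MLE for
# a Weibull proportional hazards model with right censoring,
# h(t|x) = lam * rho * t**(rho-1) * exp(x*beta). Data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.binomial(1, 0.5, n)                                  # e.g., drug-type indicator
u = rng.uniform(size=n)
t = (-np.log(u) / (0.01 * np.exp(0.7 * x))) ** (1 / 1.5)     # Weibull PH event times
c = rng.uniform(0, 40, n)                                    # censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)

def negloglik(params):
    log_lam, log_rho, beta = params
    lam, rho = np.exp(log_lam), np.exp(log_rho)
    log_h = np.log(lam) + np.log(rho) + (rho - 1) * np.log(time) + beta * x
    cum_h = lam * time ** rho * np.exp(beta * x)
    return -np.sum(event * log_h - cum_h)

res = minimize(negloglik, x0=[np.log(0.05), 0.0, 0.0], method="BFGS")
print(dict(zip(["lam", "rho", "beta"],
               [np.exp(res.x[0]), np.exp(res.x[1]), res.x[2]])))
```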

  19. Temperature modelling and prediction for activated sludge systems.

    PubMed

    Lippi, S; Rosso, D; Lubello, C; Canziani, R; Stenstrom, M K

    2009-01-01

    Temperature is an important factor affecting biomass activity, which is critical to maintaining efficient biological wastewater treatment, as well as physicochemical properties of the mixed liquor such as dissolved oxygen saturation and settling velocity. Controlling temperature is not normally possible for treatment systems, but incorporating factors that impact temperature into the design process, such as the aeration system, the surface-to-volume ratio, and the tank geometry, can reduce the range of temperature extremes and improve the overall process performance. Determining how much these design or upgrade options affect the tank temperature requires a temperature model that can be used with existing design methodologies. This paper presents a new steady-state temperature model developed by incorporating the best aspects of previously published models, introducing new functions for selected heat exchange paths, and improving the method for predicting the effects of covering aeration tanks. Numerical improvements with embedded reference data provide a simpler formulation, faster execution, and easier sensitivity analyses using an ordinary spreadsheet. The paper presents several cases to validate the model.
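
    A generic sketch of the steady-state energy-balance idea behind such models: the tank temperature is the value at which the heat exchange paths sum to zero. The lumped, linearized flux terms and all coefficients below are hypothetical stand-ins, not the paper's correlations for individual heat exchange paths.

```python
# Generic sketch only, with simplified hypothetical heat-exchange terms (the
# paper's model uses detailed correlations for each path): solve the steady-state
# balance sum_of_heat_fluxes(T_tank) = 0 for the aeration-tank temperature.
from scipy.optimize import brentq

def net_heat_flux(T, T_inf=15.0, T_air=10.0, Q=0.2, V=5000.0,
                  h_surf=25.0, A_surf=1500.0, k_aer=8.0e4, solar=6.0e4):
    """Net heat input (W) at tank temperature T (degC); coefficients are illustrative."""
    rho_cp = 4.18e6                              # volumetric heat capacity of water (J/m^3/K)
    advective = Q * rho_cp * (T_inf - T)         # influent minus effluent
    surface = h_surf * A_surf * (T_air - T)      # lumped, linearized surface exchange
    aeration = k_aer * (T_air - T) / 10.0        # lumped, linearized aeration exchange
    return advective + surface + aeration + solar

T_tank = brentq(net_heat_flux, 0.0, 40.0)        # root of the energy balance
print(f"steady-state tank temperature ~= {T_tank:.1f} degC")
```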

  20. Transition mixing study empirical model report

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; White, C.

    1988-01-01

    The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of the jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer-wall jets or for jets injected into rectangular ducts.
