Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model, together with a set of input-output relations, uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare, and contrast the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper.
As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun
2013-09-01
By using branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation at Mengjiagang Forest Farm in Heilongjiang Province, Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Then, correlation structures including compound symmetry (CS), first-order autoregressive [AR(1)], and first-order autoregressive moving average [ARMA(1,1)] were added to the optimal branch size mixed-effect model. AR(1) significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe the heteroscedasticity when building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function significantly improved the fit of the branch angle mixed model, whereas the CF2 function significantly improved the fit of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect model could improve prediction precision, as compared with the traditional regression model, for branch size prediction in Pinus koraiensis plantations.
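The within-subject correlation structures compared in this abstract are standard. As a minimal illustrative sketch (not code from the paper), the compound-symmetry and AR(1) correlation matrices can be constructed as follows; the order-4 size and rho = 0.5 are chosen purely for illustration:

```python
import numpy as np

def cs_corr(n, rho):
    """Compound symmetry (CS): every pair of within-subject errors
    shares the same correlation rho."""
    return np.where(np.eye(n, dtype=bool), 1.0, rho)

def ar1_corr(n, rho):
    """First-order autoregressive, AR(1): corr(e_i, e_j) = rho ** |i - j|,
    so correlation decays geometrically with the lag between measurements."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R = ar1_corr(4, 0.5)   # diagonal is 1; R[0, 3] = 0.5 ** 3 = 0.125
C = cs_corr(4, 0.5)    # all off-diagonal entries equal 0.5
```

Software such as the SAS MIXED procedure mentioned above chooses among such structures by comparing fit criteria; the sketch only shows the matrices themselves.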
Mixed effects versus fixed effects modelling of binary data with inter-subject variability.
Murphy, Valda; Dunne, Adrian
2005-04-01
The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. 
Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
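The likelihood approximations contrasted in this abstract can be sketched compactly. Below is a minimal, illustrative Gauss-Hermite quadrature evaluation of the marginal likelihood for one subject with two binary observations under a random-intercept logistic model, checked against a brute-force numerical integral. The parameter values (mu = 0.3, sigma = 1.2) are hypothetical, and this is plain (not adaptive) quadrature, so it is a sketch of the idea rather than the implementation used in the study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def marginal_lik_ghq(y, mu, sigma, n_nodes=30):
    """Marginal likelihood of binary responses y for one subject under a
    random-intercept logistic model: b ~ N(0, sigma^2),
    P(y_j = 1 | b) = sigmoid(mu + b).  The integral over b is computed by
    Gauss-Hermite quadrature via the change of variable b = sqrt(2)*sigma*x."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * x
    p = sigmoid(mu + b[:, None])                     # nodes x observations
    lik = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)
    return np.sum(w * lik) / np.sqrt(np.pi)

def marginal_lik_brute(y, mu, sigma):
    """Reference value: fine-grid numerical integration over b."""
    b = np.linspace(-8.0 * sigma, 8.0 * sigma, 20001)
    p = sigmoid(mu + b[:, None])
    lik = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)
    dens = np.exp(-0.5 * (b / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return np.sum(lik * dens) * (b[1] - b[0])

y = np.array([1, 0])   # two binary observations per subject, as in the study
```

With only two observations per subject the integrand is well behaved here, but the abstract's point is that the Laplace approximation (a single second-order expansion at the mode) can fail badly in extreme regions of the parameter space where quadrature does not.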
NASA Astrophysics Data System (ADS)
Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu
2018-05-01
A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local mesh resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the underlying reasons for the mixing results are clarified in terms of the characteristics of bottom-blowing plumes, the interaction between plumes and top-blowing jets, and the change of bath flow structure.
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, R.
This report documents the initial progress on the reduced-order flow model developments in SAM for thermal stratification and mixing modeling. Two different modeling approaches are pursued. The first is based on one-dimensional fluid equations with additional terms accounting for the thermal mixing from both flow circulations and turbulent mixing. The second is based on a three-dimensional coarse-grid CFD approach, in which the full three-dimensional fluid conservation equations are modeled with closure models to account for the effects of turbulence.
NASA Technical Reports Server (NTRS)
Noor, A. K.; Peters, J. M.
1981-01-01
Simple mixed models are developed for use in the geometrically nonlinear analysis of deep arches. A total Lagrangian description of the arch deformation is used, the analytical formulation being based on a form of the nonlinear deep arch theory with the effects of transverse shear deformation included. The fundamental unknowns comprise the six internal forces and generalized displacements of the arch, and the element characteristic arrays are obtained by using the Hellinger-Reissner mixed variational principle. The polynomial interpolation functions employed in approximating the forces are one degree lower than those used in approximating the displacements, and the forces are discontinuous at the interelement boundaries. Attention is given to the equivalence between the mixed models developed herein and displacement models based on reduced integration of both the transverse shear and extensional energy terms. The advantages of mixed models over equivalent displacement models are summarized. Numerical results are presented to demonstrate the high accuracy and effectiveness of the mixed models developed and to permit a comparison of their performance with that of other mixed models reported in the literature.
An improved NSGA-II algorithm for mixed model assembly line balancing
NASA Astrophysics Data System (ADS)
Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong
2018-05-01
Aiming at the problems of assembly line balancing and path optimization for material vehicles in a mixed model manufacturing system, a multi-objective mixed model assembly line (MMAL) model is established based on optimization objectives, influencing factors, and constraints. For this situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed. An environment self-detecting operator, which detects whether the environment has changed, is adopted in the algorithm. Finally, the effectiveness of the proposed model and algorithm is verified by examples in a concrete mixing system.
Estimating the numerical diapycnal mixing in an eddy-permitting ocean model
NASA Astrophysics Data System (ADS)
Megann, Alex
2018-01-01
Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution.
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish Bayesian joint models that account for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Investigation of Compressibility Effect for Aeropropulsive Shear Flows
NASA Technical Reports Server (NTRS)
Balasubramanyam, M. S.; Chen, C. P.
2005-01-01
Rocket Based Combined Cycle (RBCC) engines operate within a wide range of Mach numbers and altitudes. Fundamental fluid dynamic mechanisms involve complex choking, mass entrainment, stream mixing, and wall interactions. The Propulsion Research Center at the University of Alabama in Huntsville is involved in an ongoing experimental and numerical modeling study of non-axisymmetric ejector-based combined cycle propulsion systems. This paper addresses the modeling issues related to mixing and shear layer/wall interaction in a supersonic Strutjet/ejector flow field. Reynolds Averaged Navier-Stokes (RANS) solutions incorporating turbulence models are sought and compared to experimental measurements to characterize detailed flow dynamics. The effect of compressibility on fluid mixing and wall interactions was investigated using an existing CFD methodology. The compressibility correction to conventional incompressible two-equation models is found to be necessary for the supersonic mixing aspect of the ejector flows, based on 2-D simulation results. 3-D strut-base flows involving flow separations were also investigated.
A mixed model for the relationship between climate and human cranial form.
Katz, David C; Grote, Mark N; Weaver, Timothy D
2016-08-01
We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.
Coding response to a case-mix measurement system based on multiple diagnoses.
Preyra, Colin
2004-08-01
To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
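For contrast with the likelihood-based mixed-effects models studied here, the moment-based baseline mentioned in this abstract (inverse-variance weighted fixed- and random-effects estimation) can be sketched in a few lines. This is a generic DerSimonian-Laird-style computation with made-up study effects and variances, not code or data from the article:

```python
import numpy as np

def dersimonian_laird(theta, var):
    """Moment-based random-effects pooling (DerSimonian-Laird).
    theta: per-study effect estimates (e.g. log odds ratios);
    var:   their within-study variances."""
    w = 1.0 / var                                    # inverse-variance weights
    theta_fe = np.sum(w * theta) / np.sum(w)         # fixed-effect estimate
    q = np.sum(w * (theta - theta_fe) ** 2)          # Cochran's Q statistic
    df = len(theta) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance (truncated at 0)
    w_re = 1.0 / (var + tau2)                        # random-effects weights
    theta_re = np.sum(w_re * theta) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se_re, tau2

# Hypothetical effects and variances from four studies.
est, se, tau2 = dersimonian_laird(
    np.array([0.10, 0.30, 0.35, 0.65]),
    np.array([0.04, 0.05, 0.03, 0.06]))
```

The truncation of the between-study variance at zero is one reason such moment estimators can behave poorly for rare binary events, which is the motivation given here for the likelihood-based alternatives.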
NASA Astrophysics Data System (ADS)
Xu, Yang; Song, Kai; Shi, Qiang
2018-03-01
The hydride transfer reaction catalyzed by dihydrofolate reductase is studied using a recently developed mixed quantum-classical method to investigate the nuclear quantum effects on the reaction. Molecular dynamics simulation is first performed based on a two-state empirical valence bond potential to map the atomistic model to an effective double-well potential coupled to a harmonic bath. In the mixed quantum-classical simulation, the hydride degree of freedom is quantized, and the effective harmonic oscillator modes are treated classically. It is shown that the hydride transfer reaction rate using the mapped effective double-well/harmonic-bath model is dominated by the contribution from the ground vibrational state. Further comparison with the adiabatic reaction rate constant based on the Kramers theory confirms that the reaction is primarily vibrationally adiabatic, which agrees well with the high transmission coefficients found in previous theoretical studies. The calculated kinetic isotope effect is also consistent with the experimental and recent theoretical results.
Real longitudinal data analysis for real people: building a good enough mixed model.
Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E
2010-02-20
Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, and a need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
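The benefit of centering and scaling predictors, one of the practical recommendations in this abstract, is easy to demonstrate numerically. A small sketch with a synthetic predictor (the range 140-160 is arbitrary) compares the condition number of a quadratic design matrix before and after standardization:

```python
import numpy as np

# Hypothetical raw predictor measured far from zero (e.g. age in months).
rng = np.random.default_rng(0)
x = rng.uniform(140.0, 160.0, size=200)

# Quadratic design matrix on the raw scale: intercept, x, x^2.
# The columns are nearly collinear, so the matrix is badly conditioned.
X_raw = np.column_stack([np.ones_like(x), x, x ** 2])

# Same model after centering and scaling the predictor.
xc = (x - x.mean()) / x.std()
X_centered = np.column_stack([np.ones_like(xc), xc, xc ** 2])

cond_raw = np.linalg.cond(X_raw)
cond_centered = np.linalg.cond(X_centered)
```

The centered design is orders of magnitude better conditioned, which is why centering improves convergence and numerical accuracy in iterative mixed-model fitting.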
Interpretable inference on the mixed effect model with the Box-Cox transformation.
Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M
2017-07-10
We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
Estimating the Numerical Diapycnal Mixing in the GO5.0 Ocean Model
NASA Astrophysics Data System (ADS)
Megann, A.; Nurser, G.
2014-12-01
Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al., 2014), and forms part of the GC1 and GC2 climate models. It uses version 3.4 of the NEMO model, on the ORCA025 ¼° global tripolar grid. We describe various approaches to quantifying the numerical diapycnal mixing in this model, and present results from analysis of the GO5.0 model based on the isopycnal watermass analysis of Lee et al. (2002) that indicate that numerical mixing does indeed form a significant component of the watermass transformation in the ocean interior.
NASA Astrophysics Data System (ADS)
Patel, V. K.; Singh, S. N.; Seshadri, V.
2013-06-01
A study is conducted to evolve an effective design concept to improve mixing in a combustor chamber so as to reduce the amount of intake air. The geometry used is that of a gas turbine combustor model. For simplicity, both jets have been considered as air jets, and the effects of heat release and chemical reaction have not been modeled. Various contraction shapes and blockages have been investigated by placing them downstream at different locations with respect to the inlet to obtain better mixing. A commercial CFD code, Fluent 6.3, which is based on the finite volume method, has been used to solve the flow in the combustor model. Validation is done with the experimental data available in the literature using the standard k-ω turbulence model. The study has shown that contraction and blockage at the optimum location enhance the mixing process. Further, the effect of swirl in the jets has also been investigated.
A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.
Li, Bing; Cui, Wei; Wang, Bin
2015-09-16
Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization because of their convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuation under the influence of non-line-of-sight (NLOS) conditions in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixed model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixed model (GMM). A dissimilarity matrix is built to generate relative coordinates of nodes by a multidimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization estimation well without being provided with prior knowledge. Experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
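The MDS step at the heart of GMDS can be illustrated with classical multidimensional scaling: given a dissimilarity matrix (here exact Euclidean distances, standing in for RSSI-derived estimates), double centering and an eigendecomposition recover relative coordinates up to rotation and reflection. This is a sketch of the generic MDS machinery, not the authors' exact non-metric variant.

```python
import numpy as np

rng = np.random.default_rng(1)

# True 2-D node positions (unknown to the algorithm)
X = rng.uniform(0, 10, size=(8, 2))

# Pairwise dissimilarity matrix, standing in for RSSI-derived distances
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Classical MDS: double-center the squared dissimilarities ...
m = D.shape[0]
J = np.eye(m) - np.ones((m, m)) / m
B = -0.5 * J @ (D ** 2) @ J

# ... and take the top-2 eigenpairs as relative coordinates
w, v = np.linalg.eigh(B)
coords = v[:, -2:] * np.sqrt(w[-2:])

# Relative coordinates preserve inter-node distances (up to rotation/reflection)
D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print(np.allclose(D, D_rec, atol=1e-6))
```

The final coordinate-transformation step in GMDS then maps these relative coordinates onto the anchors' actual coordinates.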
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of response being awake or asleep over the night and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total).
Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
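The weighting scheme described above can be illustrated numerically: the total FIM is a weighted sum of the per-component FIMs, designs are compared via the D-criterion det(FIM), and expected parameter standard errors follow from the inverse FIM (the Cramer-Rao bound). The matrices and weights below are purely hypothetical.

```python
import numpy as np

# Hypothetical per-component FIMs for the two Markov components
# (previous state awake vs. asleep); values are illustrative only
fim_awake = np.array([[8.0, 1.0], [1.0, 4.0]])
fim_asleep = np.array([[5.0, 0.5], [0.5, 6.0]])

def fim_total(w_awake, w_asleep):
    """Weighted sum of the component FIMs."""
    return w_awake * fim_awake + w_asleep * fim_asleep

# Equal weighting vs. weighting by average probability of each state
fim_eq = fim_total(0.5, 0.5)
fim_pr = fim_total(0.3, 0.7)  # e.g. subjects asleep ~70% of the night

# D-optimality compares designs via det(FIM); expected parameter SEs
# come from the diagonal of the inverse FIM
for fim in (fim_eq, fim_pr):
    d_crit = np.linalg.det(fim)
    se = np.sqrt(np.diag(np.linalg.inv(fim)))
    print(round(d_crit, 2), np.round(se, 3))
```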
A Mixed Learning Approach in Mechatronics Education
ERIC Educational Resources Information Center
Yilmaz, O.; Tuncalp, K.
2011-01-01
This study aims to investigate the effect of a Web-based mixed learning approach model on mechatronics education. The model combines different perception methods such as reading, listening, and speaking and practice methods developed in accordance with the vocational background of students enrolled in the course Electromechanical Systems in…
Model for compressible turbulence in hypersonic wall boundary and high-speed mixing layers
NASA Astrophysics Data System (ADS)
Bowersox, Rodney D. W.; Schetz, Joseph A.
1994-07-01
The most common approach to Navier-Stokes predictions of turbulent flows is based on either the classical Reynolds- or Favre-averaged Navier-Stokes equations or some combination. The main goal of the current work was to numerically assess the effects of the compressible turbulence terms that were experimentally found to be important. The compressible apparent mass mixing length extension (CAMMLE) model, which was based on measured experimental data, was found to produce accurate predictions of the measured compressible turbulence data for both the wall-bounded and free mixing layers. Hence, that model was incorporated into a finite volume Navier-Stokes code.
Statistical models of global Langmuir mixing
NASA Astrophysics Data System (ADS)
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir turbulence on surface ocean mixing may be parameterized by applying an enhancement factor, which depends on wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but at significant computational and code-development expense. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
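For concreteness, one published fit for such an enhancement factor depends on the turbulent Langmuir number La_t; the coefficients below follow the form attributed to Van Roekel et al. (2012) and are an assumption of this sketch, not necessarily the exact expression used in the paper.

```python
import math

def langmuir_enhancement(la_t):
    """Enhancement factor for the KPP turbulent velocity scale as a
    function of the turbulent Langmuir number La_t.

    The coefficients follow a published fit (Van Roekel et al., 2012);
    treat the exact form as an assumption of this sketch."""
    return math.sqrt(1.0 + (3.1 * la_t) ** -2 + (5.4 * la_t) ** -4)

# Typical open-ocean value La_t ~ 0.3 gives an O(1) enhancement;
# for very large La_t (negligible wave forcing) the factor tends to 1
print(round(langmuir_enhancement(0.3), 2))
print(round(langmuir_enhancement(100.0), 2))
```

The two alternatives in the paper amount to supplying La_t (or the factor itself) from a climatology or from empirical wave spectra instead of a prognostic wave model.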
Transition mixing study empirical model report
NASA Technical Reports Server (NTRS)
Srinivasan, R.; White, C.
1988-01-01
The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected in rectangular ducts.
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2016-11-01
The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) in the Japan Agency for Marine-Earth Science and Technology.
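A simplified reading of such a particle-interaction mixing model is a relaxation of each notional particle toward the in-volume mean scalar over a mixing timescale, which conserves the mean while dissipating subgrid variance. The IEM-like form below is an illustrative caricature, not the MVM's exact interaction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar values carried by notional particles inside one mixing volume
phi = rng.standard_normal(16)
tau_m = 0.5   # mixing timescale of the volume
dt = 0.01

mean0 = phi.mean()
var0 = phi.var()

# Relax each particle toward the in-volume mean: a simplified,
# IEM-like stand-in for the particle interaction in the MVM
for _ in range(200):
    phi += -dt / tau_m * (phi - phi.mean())

# The interaction conserves the in-volume mean while dissipating variance
print(abs(phi.mean() - mean0) < 1e-12, phi.var() < var0)
```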
A Methodology for Identifying Cost Effective Strategic Force Mixes.
1984-12-01
...is not to say that the model could not be used to examine force increases. Given that the strategic force is already a mix of weapons, what is the...rules allow for the determination of what weapon mix to buy based on only the relative prices of the weapons and the parameters of the CES production...
AD-A151 773. AFIT/GOR/OS/84. A Methodology for Identifying Cost Effective Strategic Force Mixes. Thesis, Thomas W. Manacapilli.
Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam
2016-01-01
Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
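The GLLA idea referenced in this record, estimating derivatives by local polynomial fits over a time-delay embedding, can be sketched as follows; the embedding dimension, polynomial order, and step size are illustrative choices, not the study's settings.

```python
import numpy as np
from math import factorial

# Signal sampled at even spacing; derivatives are estimated by local
# polynomial fitting over a time-delay embedding (GLLA-style)
dt = 0.05
t = np.arange(0, 10, dt)
x = np.sin(t)

d = 5                      # embedding dimension (number of lags)
offsets = (np.arange(d) - d // 2) * dt

# Local polynomial basis up to order 2; columns are offsets**k / k!
theta = np.column_stack([offsets ** k / factorial(k) for k in range(3)])

# Time-delay embedding: each row holds d consecutive samples
emb = np.column_stack([x[j:len(x) - d + 1 + j] for j in range(d)])

# Least-squares fit per row; column 1 of the coefficients estimates dx/dt
coef = emb @ theta @ np.linalg.inv(theta.T @ theta)
dx_est = coef[:, 1]

# Compare against the known derivative cos(t) at the window centres
t_mid = t[d // 2: len(x) - d // 2]
err = np.max(np.abs(dx_est - np.cos(t_mid)))
print(err < 5e-3)
```

In the two-stage ODE fitting described above, such derivative estimates (and the corresponding second derivatives in column 2) become the outcomes of a mixed-effects regression in the second stage.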
A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.
Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin
2017-02-01
The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.
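The mechanics of a mixed (random-coefficient) MNL with a nonlinear predictor can be sketched by Monte Carlo integration over the coefficient distribution; the predictor form, severity utilities, and all numbers below are hypothetical, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)

# One observation's contributing factor (e.g. a traffic variable), and a
# hypothetical nonlinear predictor capturing a non-monotonic effect
x = 2.0
def predictor(beta, x):
    return beta * x - 0.4 * x ** 2   # illustrative nonlinear form

# Random (mixed) coefficient; PDO is the reference level with utility 0
mu, sd = 0.8, 0.3
draws = rng.normal(mu, sd, size=5000)

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

# Mixed-logit probabilities for (PDO, injury, fatal): integrate the MNL
# probabilities over the random-coefficient distribution by Monte Carlo
probs = np.mean([softmax(np.array([0.0, predictor(b, x), 0.5 * predictor(b, x)]))
                 for b in draws], axis=0)
print(np.round(probs, 3), round(probs.sum(), 6))
```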
An adjoint-based framework for maximizing mixing in binary fluids
NASA Astrophysics Data System (ADS)
Eggl, Maximilian; Schmid, Peter
2017-11-01
Mixing in the inertial, but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, last but not least, production costs. In this project, we address mixing efficiency in the above-mentioned regime (Reynolds number Re = 1000, Peclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented by scalar transport and adjoint equations. Mixing is accomplished by moving stirrers, which are numerically modeled using a penalization approach. In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing shape and rotational speed of the stirrers will be demonstrated.
Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.
Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel
2016-02-20
Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too low Hb value leads to a deferral for donation. Because of the recovery process after each donation as well as state dependence and unobserved heterogeneity, longitudinal data of Hb values of blood donors provide unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects model; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data of new entrant donors from the Donor InSight study. Informative priors were used for parameters of the recovery process that were not identified using the observed data, based on results from the clinical literature. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Tai, Y.; Watanabe, T.; Nagata, K.
2018-03-01
A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with direct numerical simulations of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM predicts molecular diffusion and thermal conduction well for a wide range of mixing-volume sizes and numbers of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of the mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at the length scale of the mixing volume. The mixing time scale is well correlated between passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale; at smaller scales the mixing time scale is easily affected by the different distributions of intermittent small-scale structures of passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in modeling thermal conduction when modeling the dissipation rate of temperature fluctuations is difficult.
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for competing risks process and missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we implement them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
Nguyen, Nam-Trung; Huang, Xiaoyang
2006-06-01
Effective and fast mixing is important for many microfluidic applications. In many cases, mixing is limited by molecular diffusion due to constraints of the laminar flow in the microscale regime. According to the scaling law, decreasing the mixing path can shorten the mixing time and enhance mixing quality. One of the techniques for reducing the mixing path is sequential segmentation. This technique divides solvent and solute into segments in the axial direction. The so-called Taylor-Aris dispersion can improve axial transport by three orders of magnitude. The mixing path can be controlled by the switching frequency and the mean velocity of the flow. The mixing ratio can be controlled by pulse width modulation of the switching signal. This paper first presents a simple time-dependent one-dimensional analytical model for sequential segmentation. The model considers an arbitrary mixing ratio between solute and solvent as well as the axial Taylor-Aris dispersion. Next, a micromixer was designed and fabricated based on polymeric micromachining. The micromixer was formed by laminating four polymer layers. The layers were micromachined by a CO2 laser. Switching of the fluid flows was realized by two piezoelectric valves. Mixing experiments were evaluated optically. The concentration profile along the mixing channel agrees qualitatively well with the analytical model. Furthermore, mixing results at different switching frequencies were investigated. Due to the dynamic behavior of the valves and the fluidic system, mixing quality decreases with increasing switching frequency.
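The role of Taylor-Aris dispersion and the switching frequency can be checked with a back-of-envelope calculation; the circular-capillary dispersion formula and all parameter values below are illustrative assumptions, not the paper's channel geometry or model.

```python
# Back-of-envelope for sequential segmentation: axial transport is
# enhanced by Taylor-Aris dispersion, and the mixing path is set by
# the mean velocity and the switching frequency. The circular-capillary
# dispersion formula is used here purely for illustration.

D = 1e-9        # molecular diffusivity, m^2/s (small molecule in water)
a = 50e-6       # channel radius scale, m
U = 1e-3        # mean velocity, m/s
f = 2.0         # valve switching frequency, Hz

Pe = U * a / D                    # radial Peclet number
D_eff = D * (1 + Pe ** 2 / 48)    # Taylor-Aris effective diffusivity

seg = U / f                       # axial segment length = mixing path
t_mix = (seg / 2) ** 2 / D_eff    # diffusion time across half a segment

print(f"Pe = {Pe:.0f}, D_eff/D = {D_eff / D:.0f}, t_mix = {t_mix:.2f} s")
```

Raising the switching frequency shortens the segment (the mixing path) and so shortens t_mix quadratically, which is the control knob the paper exploits, up to the valve dynamics that degrade mixing at high frequency.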
Jansa, Václav
2017-01-01
Height to crown base (HCB) of a tree is an important variable often included as a predictor in various forest models that serve as fundamental tools for decision-making in forestry. We developed spatially explicit and spatially inexplicit mixed-effects HCB models using measurements from a total of 19,404 trees of Norway spruce (Picea abies (L.) Karst.) and European beech (Fagus sylvatica L.) on permanent sample plots located across the Czech Republic. Variables describing site quality, stand density or competition, and species mixing effects were included in the HCB model through dominant height (HDOM), basal area of trees larger in diameter than a subject tree (BAL, a spatially inexplicit measure) or Hegyi's competition index (HCI, a spatially explicit measure), and basal area proportion of the species of interest (BAPOR), respectively. Parameters describing sample plot-level random effects were included in the HCB model by applying the mixed-effects modelling approach. Among several functional forms evaluated, the logistic function was found best suited to our data. The HCB model for Norway spruce was tested against data originating from different inventory designs, while the model for European beech was tested using a partitioned dataset (a part of the main dataset). The variance heteroscedasticity in the residuals was substantially reduced through inclusion of a power variance function in the HCB model. The results showed that the spatially explicit model described a significantly larger part of the HCB variation [R2adj = 0.86 (spruce), 0.85 (beech)] than its spatially inexplicit counterpart [R2adj = 0.84 (spruce), 0.83 (beech)]. The HCB increased with increasing competitive interactions described by the tree-centered competition measures BAL or HCI, and with the species mixing effects described by BAPOR.
A test of the mixed-effects HCB model with the random effects estimated using at least four trees per sample plot in the validation data confirmed that the model was precise enough for the prediction of HCB for a range of site qualities, tree sizes, stand densities, and stand structures. We therefore recommend measuring HCB on four randomly selected trees of the species of interest on each sample plot for localizing the mixed-effects model and predicting HCB of the remaining trees on the plot. Growth simulations can be made from data that lack values for either crown ratio or HCB using the HCB models. PMID:29049391
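A logistic HCB form with a plot-level random effect can be sketched as below; the functional form is one plausible reading of the abstract, and all coefficients are hypothetical, not the fitted estimates from the study.

```python
import math

def predict_hcb(hdom, bal, bapor, u_plot=0.0,
                b0=-1.2, b1=0.05, b2=0.6):
    """Illustrative logistic HCB model in the spirit of the study.

    hdom   : dominant height (m), scaling the asymptote
    bal    : basal area of larger trees (spatially inexplicit competition)
    bapor  : basal-area proportion of the species (mixing effect)
    u_plot : plot-level random effect added to the intercept

    All coefficients are hypothetical; the paper's fitted form and
    estimates should be taken from the original source."""
    eta = b0 + b1 * bal + b2 * bapor + u_plot
    return hdom / (1.0 + math.exp(-eta))

# Stronger competition (higher BAL) pushes the crown base upward,
# consistent with the direction of effect reported in the abstract
low = predict_hcb(hdom=30.0, bal=5.0, bapor=0.8)
high = predict_hcb(hdom=30.0, bal=25.0, bapor=0.8)
print(round(low, 2), round(high, 2))
```

Localizing the model on a new plot amounts to estimating u_plot from a few measured trees (the recommended four) and then predicting the rest.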
Wave–turbulence interaction-induced vertical mixing and its effects in ocean and climate models
Qiao, Fangli; Yuan, Yeli; Deng, Jia; Dai, Dejun; Song, Zhenya
2016-01-01
Heated from above, the oceans are stably stratified. Therefore, the performance of general ocean circulation models and climate studies through coupled atmosphere–ocean models depends critically on vertical mixing of energy and momentum in the water column. Many of the traditional general circulation models are based on total kinetic energy (TKE), in which the roles of waves are averaged out. Although theoretical calculations suggest that waves could greatly enhance coexisting turbulence, no field measurements on turbulence have ever validated this mechanism directly. To address this problem, a specially designed field experiment has been conducted. The experimental results indicate that the wave–turbulence interaction-induced enhancement of the background turbulence is indeed the predominant mechanism for turbulence generation and enhancement. Based on this understanding, we propose a new parametrization for vertical mixing as an additive part to the traditional TKE approach. This new result reconfirmed the past theoretical model that had been tested and validated in numerical model experiments and field observations. It firmly establishes the critical role of wave–turbulence interaction effects in both general ocean circulation models and atmosphere–ocean coupled models, which could greatly improve the understanding of the sea surface temperature and water column properties distributions, and hence model-based climate forecasting capability. PMID:26953182
How we compute N matters to estimates of mixing in stratified flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arthur, Robert S.; Venayagamoorthy, Subhas K.; Koseff, Jeffrey R.
Most commonly used models for turbulent mixing in the ocean rely on a background stratification against which turbulence must work to stir the fluid. While this background stratification is typically well defined in idealized numerical models, it is more difficult to capture in observations. Here, a potential discrepancy in ocean mixing estimates due to the chosen calculation of the background stratification is explored using direct numerical simulation data of breaking internal waves on slopes. Two different methods for computing the buoyancy frequency N, one based on a three-dimensionally sorted density field (often used in numerical models) and the other based on locally sorted vertical density profiles (often used in the field), are used to quantify the effect of N on turbulence quantities. It is shown that how N is calculated changes not only the flux Richardson number R_f, which is often used to parameterize turbulent mixing, but also the turbulence activity number, or Gibson number Gi, leading to potential errors in estimates of the mixing efficiency using Gi-based parameterizations.
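The two ways of computing N can be contrasted on a toy density field: sorting the full field into a single background profile versus sorting each vertical column locally. Both yield non-negative N^2 by construction, but they generally disagree wherever overturns span more than one column; the field and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
g, rho0 = 9.81, 1000.0

# Toy density field rho(z, x): stable background plus overturning noise
nz, nx = 40, 30
z = np.linspace(0.0, -100.0, nz)          # z upward, surface at 0 m
rho = 1025.0 - 0.02 * z[:, None] + 0.05 * rng.standard_normal((nz, nx))

def n2_from_profile(rho_prof, z):
    """Buoyancy frequency squared from one locally sorted profile."""
    rho_sorted = np.sort(rho_prof)        # lightest water at the surface
    return -(g / rho0) * np.gradient(rho_sorted, z)

# Method 1: sort the full field into a single background profile
# (the "three-dimensionally sorted" reference state)
rho_all = np.sort(rho.ravel()).reshape(nz, nx).mean(axis=1)
n2_global = -(g / rho0) * np.gradient(rho_all, z)

# Method 2: sort each vertical column locally
# (the approach typically applied to field profiles)
n2_local = np.stack([n2_from_profile(rho[:, i], z) for i in range(nx)]).mean(axis=0)

print(n2_global.mean() > 0, n2_local.mean() > 0)
```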
Estimating the numerical diapycnal mixing in the GO5.0 ocean model
NASA Astrophysics Data System (ADS)
Megann, Alex; Nurser, George
2014-05-01
Constant-depth (or "z-coordinate") ocean models such as MOM and NEMO have become the de facto workhorse in climate applications; they have attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes (e.g. Hofmann and Maqueda, 2006), and this is likely to affect the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is the latest ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre (Megann et al, 2013). It uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. Two approaches to quantifying the numerical diapycnal mixing in this model are described: the first is based on the isopycnal watermass analysis of Lee et al (2002), while the second uses a passive tracer to diagnose mixing across density surfaces. Results from these two methods will be compared and contrasted. Hofmann, M. and Maqueda, M. A. M., 2006. Performance of a second-order moments advection scheme in an ocean general circulation model. JGR-Oceans, 111(C5). Lee, M.-M., Coward, A.C., Nurser, A.G., 2002. Spurious diapycnal mixing of deep waters in an eddy-permitting global ocean model. JPO 32, 1522-1535. Megann, A., Storkey, D., Aksenov, Y., Alderson, S., Calvert, D., Graham, T., Hyder, P., Siddorn, J., and Sinha, B., 2013: GO5.0: The joint NERC-Met Office NEMO global ocean model for use in coupled and forced applications, Geosci. Model Dev. Discuss., 6, 5747-5799.
NASA Technical Reports Server (NTRS)
Atlas, R. M.
1976-01-01
An advective mixed layer ocean model was developed by eliminating the assumption of horizontal homogeneity in an already existing mixed layer model, and then superimposing a mean and anomalous wind driven current field. This model is based on the principle of conservation of heat and mechanical energy and utilizes a box grid for the advective part of the calculation. Three phases of experiments were conducted: evaluation of the model's ability to account for climatological sea surface temperature (SST) variations in the cooling and heating seasons, sensitivity tests in which the effect of hypothetical anomalous winds was evaluated, and a thirty-day synoptic calculation using the model. For the case studied, the accuracy of the predictions was improved by the inclusion of advection, although nonadvective effects appear to have dominated.
Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian
2015-06-01
We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, which is a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM) in which finite dimensional (fixed effects and variance components) and infinite dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed by iteratively alternating parametric and nonparametric procedures. However, if one can make the assumption that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is the proposal of a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In this latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application on real data are performed. © 2015, The International Biometric Society.
Impact of Antarctic mixed-phase clouds on climate.
Lawson, R Paul; Gettelman, Andrew
2014-12-23
Precious little is known about the composition of low-level clouds over the Antarctic Plateau and their effect on climate. In situ measurements at the South Pole using a unique tethered balloon system and ground-based lidar reveal a much higher than anticipated incidence of low-level, mixed-phase clouds (i.e., consisting of supercooled liquid water drops and ice crystals). The high incidence of mixed-phase clouds is currently poorly represented in global climate models (GCMs). As a result, the effects that mixed-phase clouds have on climate predictions are highly uncertain. We modify the National Center for Atmospheric Research (NCAR) Community Earth System Model (CESM) GCM to align with the new observations and evaluate the radiative effects on a continental scale. The net cloud radiative effects (CREs) over Antarctica are increased by +7.4 Wm(-2), and although this is a significant change, a much larger effect occurs when the modified model physics are extended beyond the Antarctic continent. The simulations show significant net CRE over the Southern Ocean storm tracks, where recent measurements also indicate substantial regions of supercooled liquid. These sensitivity tests confirm that Southern Ocean CREs are strongly sensitive to mixed-phase clouds colder than -20 °C.
Finite element modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.
1983-01-01
Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Topics covered include: reduction methods for large-scale nonlinear analysis, with particular emphasis on the treatment of combined loads and of displacement-dependent and nonconservative loadings; the development of simple and efficient mixed finite element models for shell analysis, the identification of equivalent mixed and purely displacement models, and the determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems, based on a total Lagrangian description of the deformation.
The Development and Evaluation of Speaking Learning Model by Cooperative Approach
ERIC Educational Resources Information Center
Darmuki, Agus; Andayani; Nurkamto, Joko; Saddhono, Kundharu
2018-01-01
A cooperative approach-based Speaking Learning Model (SLM) has been developed to improve speaking skill of Higher Education students. This research aimed at evaluating the effectiveness of cooperative-based SLM viewed from the development of student's speaking ability and its effectiveness on speaking activity. This mixed method study combined…
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-03-15
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models, implemented in the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
Adaptive mixed finite element methods for Darcy flow in fractured porous media
NASA Astrophysics Data System (ADS)
Chen, Huangxin; Salama, Amgad; Sun, Shuyu
2016-10-01
In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.
Su, Li; Farewell, Vernon T
2013-01-01
For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. PMID:24201470
Functional Additive Mixed Models
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2014-01-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592
Making a mixed-model line more efficient and flexible by introducing a bypass line
NASA Astrophysics Data System (ADS)
Matsuura, Sho; Matsuura, Haruki; Asada, Akiko
2017-04-01
This paper provides a design procedure for the bypass subline in a mixed-model assembly line. The bypass subline is installed to reduce the effect of the large difference in operation times among products assembled together in a mixed-model line. The importance of the bypass subline has been increasing in association with the rising necessity for efficiency and flexibility in modern manufacturing. The main topics of this paper are as follows: 1) the conditions in which the bypass subline effectively functions, and 2) how the load should be distributed between the main line and the bypass subline, depending on production conditions such as degree of difference in operation times among products and the mixing ratio of products. To address these issues, we analyzed the lower and the upper bounds of the line length. Based on the results, a design procedure and a numerical example are demonstrated.
Brannock, M; Wang, Y; Leslie, G
2010-05-01
Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affects hydrodynamics, hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. The effect of sludge settling and rheology was found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).
Park, Chang-Beom; Jang, Jiyi; Kim, Sanghun; Kim, Young Jun
2017-03-01
In freshwater environments, aquatic organisms are generally exposed to mixtures of various chemical substances. In this study, we tested the toxicity of three organic UV-filters (ethylhexyl methoxycinnamate, octocrylene, and avobenzone) to Daphnia magna in order to evaluate the combined toxicity of these substances when they occur in a mixture. The effective concentrations (ECx) for each UV-filter were calculated from concentration-response curves; the concentration combinations of the three UV-filters in a mixture were determined by the fraction of components based on EC25 values predicted by the concentration addition (CA) model. The interaction between the UV-filters was also assessed via the model deviation ratio (MDR), using observed and predicted toxicity values obtained from the mixture-exposure tests and the CA model. The results indicated that the observed ECx,mix values (e.g., EC10,mix, EC25,mix, or EC50,mix) obtained from the mixture-exposure tests were higher than the corresponding ECx,mix values predicted by the CA model. MDR values were also less than 1.0 for the mixtures of the three UV-filters. Based on these results, we suggest for the first time a reduction of toxic effects in mixtures of the three UV-filters, caused by antagonistic action of the components. Our findings provide important information for hazard and risk assessment of organic UV-filters when they occur together in the aquatic environment. To better understand mixture toxicity and the interaction of components in a mixture, further studies of various combinations of mixture components are also required. Copyright © 2016 Elsevier Inc. All rights reserved.
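The concentration addition prediction and the model deviation ratio used above have a compact closed form. The sketch below (the EC25 values and fractions are hypothetical, not the paper's measurements) shows the CA formula ECx_mix = (Σᵢ pᵢ/ECxᵢ)⁻¹ and an MDR below 1.0 signalling antagonism, the pattern reported in the abstract:

```python
def ca_ecx_mix(fractions, ecx_components):
    """Predicted ECx of a mixture under concentration addition (CA):
    ECx_mix = ( sum_i p_i / ECx_i )**-1, where p_i is the fraction of
    component i in the mixture (fractions sum to 1)."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ecx_components))

def mdr(predicted_ecx, observed_ecx):
    """Model deviation ratio. MDR < 1 means the observed EC is higher than
    the CA prediction, i.e. the mixture is less toxic than predicted
    (antagonistic action of the components)."""
    return predicted_ecx / observed_ecx

# Hypothetical EC25 values (mg/L) for three UV-filters, mixed in equal parts:
fractions = [1 / 3, 1 / 3, 1 / 3]
ec25 = [0.6, 1.2, 2.4]
pred = ca_ecx_mix(fractions, ec25)    # → 1.0285714...
ratio = mdr(pred, observed_ecx=1.8)   # → 0.5714..., i.e. antagonism
```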
Generalized linear mixed models with varying coefficients for longitudinal data.
Zhang, Daowen
2004-03-01
The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B, 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
Investigation of micromixing by acoustically oscillated sharp-edges
Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco
2016-01-01
Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel. PMID:27158292
Singlet model interference effects with high scale UV physics
Dawson, S.; Lewis, I. M.
2017-01-06
One of the simplest extensions of the Standard Model (SM) is the addition of a scalar gauge singlet, S. If S is not forbidden by a symmetry from mixing with the Standard Model Higgs boson, the mixing will generate non-SM rates for Higgs production and decays. Generally, there could also be unknown high-energy physics that generates additional effective low-energy interactions. We show that interference effects between the scalar resonance of the singlet model and the effective field theory (EFT) operators can have significant effects in the Higgs sector. Here, we examine a non-Z2-symmetric scalar singlet model and demonstrate that a fit to the 125 GeV Higgs boson couplings and to limits on high-mass resonances, S, exhibits an interesting structure and possibly large cancellations between the resonance contribution and the new EFT interactions, which invalidate conclusions based on the renormalizable singlet model alone.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Li, Chunguang; Maini, Philip K.
2005-10-01
The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.
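The assortative mixing property mentioned above is usually quantified by Newman's degree assortativity coefficient: the Pearson correlation between the degrees found at the two ends of an edge, with positive values indicating assortative mixing. Below is a minimal, self-contained sketch of that coefficient; the toy graph is ours for illustration (it happens to be mildly disassortative, r = -1/6), not the network generated by the modified Penna model.

```python
import numpy as np

def degree_assortativity(edges):
    """Newman's degree assortativity coefficient: the Pearson correlation
    between the degrees at the two ends of an edge, with each undirected
    edge counted once in each direction."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    x, y = [], []
    for u, v in edges:
        x += [deg[u], deg[v]]
        y += [deg[v], deg[u]]
    return float(np.corrcoef(x, y)[0, 1])

# Toy graph: two triangles joined by a bridge. The two degree-3 nodes each
# touch two degree-2 nodes, so the graph is mildly disassortative (r = -1/6).
# A positive r would indicate the assortative mixing reported for the
# modified Penna model's network.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
r = degree_assortativity(edges)  # → -0.1666...
```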
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
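The core of such a simulation-based power calculation is a Monte Carlo loop over a likelihood-ratio test. The sketch below strips the idea down to an ordinary linear model with a binary covariate (the paper itself works with nonlinear mixed-effects pharmacokinetic models fitted in NONMEM; the effect size, noise level, and sample sizes here are hypothetical):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
crit = chi2.ppf(0.95, df=1)  # LRT rejection threshold for one extra parameter

def lrt_power(n, beta=0.5, sigma=1.0, nsim=500):
    """Monte-Carlo power of a likelihood-ratio test for a binary covariate
    effect of size `beta`, in a deliberately simplified linear model."""
    hits = 0
    for _ in range(nsim):
        x = rng.integers(0, 2, n).astype(float)   # binary covariate
        y = beta * x + rng.normal(0.0, sigma, n)
        X1 = np.column_stack([np.ones(n), x])     # full model: intercept + covariate
        b1 = np.linalg.lstsq(X1, y, rcond=None)[0]
        rss1 = np.sum((y - X1 @ b1) ** 2)
        rss0 = np.sum((y - y.mean()) ** 2)        # reduced model: intercept only
        lrt = n * np.log(rss0 / rss1)             # Gaussian LRT statistic
        hits += lrt > crit
    return hits / nsim

# Sweep n upward, select the smallest design with power > 80%, and then
# compare qualifying (e.g. dense vs sparse) designs on cost-effectiveness.
```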
Linear mixed model for heritability estimation that explicitly addresses environmental variation.
Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S
2016-07-05
The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
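The environmental covariance construction described above can be sketched in a few lines. This is a minimal illustration, not the FaST-LMM implementation; the coordinates and the `length_scale` bandwidth are hypothetical values, not parameters from the paper:

```python
import numpy as np

def rbf_covariance(coords, length_scale):
    """Covariance matrix for the environmental random effect: a Gaussian
    radial basis function of pairwise spatial distance."""
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

# Under the two-random-effect LMM the phenotype covariance is modeled as
#   Cov(y) = s2_g * K_genomic + s2_env * K_env + s2_noise * I,
# with K_genomic from identity-by-descent estimates and K_env as below.
coords = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]])  # hypothetical locations
K_env = rbf_covariance(coords, length_scale=1.0)
# Nearby individuals (rows 0 and 1) share environment: K_env[0, 1] = exp(-0.5);
# the distant third individual is nearly uncorrelated with the other two.
```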
The estimation of branching curves in the presence of subject-specific random effects.
Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng
2014-12-20
Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.
Tom, Brian Dm; Su, Li; Farewell, Vernon T
2016-10-01
For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. © The Author(s) 2013.
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. 
In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.
A Turbulence model taking into account the longitudinal flow inhomogeneity in mixing layers and jets
NASA Astrophysics Data System (ADS)
Troshin, A. I.
2017-06-01
The problem of potential core length overestimation of subsonic free jets by Reynolds-averaged Navier-Stokes (RANS) based turbulence models is addressed. It is shown that the issue is due to incorrect velocity profile modeling of the jet mixing layers. An additional source term in the ω equation is proposed which takes into account the effect of longitudinal flow inhomogeneity on turbulence in mixing layers. Computations confirm that the modified Speziale-Sarkar-Gatski/Launder-Reece-Rodi-omega (SSG/LRR-ω) turbulence model correctly predicts the mean velocity profiles in both the initial and far-field regions of a subsonic free plane jet as well as the centerline velocity decay rate.
Synergistic effect of mixed neutron and gamma irradiation in bipolar operational amplifier OP07
NASA Astrophysics Data System (ADS)
Yan, Liu; Wei, Chen; Shanchao, Yang; Xiaoming, Jin; Chaohui, He
2016-09-01
This paper presents the synergistic effects in the bipolar operational amplifier OP07. The radiation effects are studied in neutron beam, gamma ray, and mixed neutron/gamma environments. The characteristics of the synergistic effects are studied through comparison of the different experimental results. The results show that the bipolar operational amplifier OP07 exhibited significant synergistic effects under mixed neutron and gamma irradiation. The bipolar transistor is identified as the most radiation-sensitive unit of the operational amplifier. In this paper, a series of simulations are performed on bipolar transistors in different radiation environments. In the theoretical simulation, the geometric model and calculations based on the Medici toolkit are built to study the radiation effects in bipolar components. The effect of mixed neutron and gamma irradiation is simulated based on the understanding of the underlying mechanisms of radiation effects in bipolar transistors. The simulated results agree well with the experimental data. The results of the experiments and simulations indicate that the radiation effects in bipolar devices subjected to mixed neutron and gamma environments are not a simple combination of total ionizing dose (TID) effects and displacement damage. The data suggest that the TID effect could enhance the displacement damage. The synergistic effect should not be neglected in complex radiation environments.
Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.
Covarrubias-Pazaran, Giovanny
2016-01-01
Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next-generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes, using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: average information (AI), expectation-maximization (EM) and efficient mixed model association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to those from other software, but the analysis was faster than Bayesian counterparts by a margin of hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms to fit models in a user-friendly environment such as R.
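sommer itself is an R package; as a language-neutral illustration of the core computation it wraps, here is a minimal numpy sketch of single-variance-component genomic prediction (GBLUP) with a known variance ratio. All data and the `gblup` helper are hypothetical; real use would estimate the variance components with AI, EM or EMMA as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n individuals, m markers coded -1/0/1 (hypothetical)
n, m = 50, 200
M = rng.integers(-1, 2, size=(n, m)).astype(float)
K = M @ M.T / m                       # genomic relationship matrix
u_true = M @ rng.normal(0, 0.1, m)    # additive genetic values
y = 10 + u_true + rng.normal(0, 0.5, n)

def gblup(y, K, lam):
    """BLUP of genetic values given the variance ratio lam = sigma_e^2 / sigma_u^2."""
    n = len(y)
    X = np.ones((n, 1))               # intercept-only fixed effects
    V = K + lam * np.eye(n)           # phenotypic covariance up to sigma_u^2
    Vinv = np.linalg.inv(V)
    # GLS estimate of the fixed effect, then BLUP of the random effect u
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    u_hat = K @ Vinv @ (y - X @ beta)
    return beta, u_hat

beta, u_hat = gblup(y, K, lam=0.25)
accuracy = np.corrcoef(u_hat, u_true)[0, 1]  # prediction accuracy
```

This is the single-random-effect special case; sommer's contribution is letting several such variance components (additive, dominance, epistatic) enter V at once.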
Modeling reactive transport with particle tracking and kernel estimators
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernandez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-04-01
Groundwater reactive transport models are useful to assess and quantify the fate and transport of contaminants in subsurface media and are an essential tool for the analysis of coupled physical, chemical, and biological processes in Earth systems. The Particle Tracking Method (PTM) provides a computationally efficient and adaptable approach to solve the solute transport partial differential equation. On a molecular level, chemical reactions are the result of collisions, combinations, and/or decay of different species. For a well-mixed system, the chemical reactions are controlled by the classical thermodynamic rate coefficient. Each of these actions occurs with some probability that is a function of solute concentrations. PTM is based on considering that each particle actually represents a group of molecules. To properly simulate this system, an infinite number of particles is required, which is computationally unfeasible. On the other hand, a finite number of particles leads to a poorly mixed system which is limited by diffusion. Recent works have used this effect to actually model incomplete mixing in naturally occurring porous media. In this work, we demonstrate that this effect in most cases should be attributed to a deficient estimation of the concentrations and not to the occurrence of true incomplete mixing processes in porous media. To illustrate this, we show that a Kernel Density Estimation (KDE) of the concentrations can approach the well-mixed solution with a limited number of particles. KDEs provide weighting functions of each particle mass that expand its region of influence, hence providing a wider region for chemical reactions with time. Simulation results show that KDEs are powerful tools to improve state-of-the-art simulations of chemical reactions and indicate that incomplete mixing in diluted systems should be modeled based on alternative conceptual models and not on a limited number of particles.
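The KDE idea can be sketched in a few lines: each particle's mass is spread over a kernel-shaped region of influence, and concentrations are read off a grid. A minimal, hypothetical 1-D illustration with a hand-rolled Gaussian kernel (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Particles carrying equal solute mass, sampled from a Gaussian plume
# (hypothetical setup)
particles = rng.normal(loc=0.0, scale=1.0, size=500)
mass_per_particle = 1.0 / particles.size

def kde_concentration(x_grid, particles, h):
    """Gaussian-kernel estimate of concentration: each particle's mass is
    spread over a region of influence of bandwidth h."""
    diffs = (x_grid[:, None] - particles[None, :]) / h
    kernels = np.exp(-0.5 * diffs**2) / (h * np.sqrt(2 * np.pi))
    return mass_per_particle * kernels.sum(axis=1)

x = np.linspace(-4.0, 4.0, 201)
c_kde = kde_concentration(x, particles, h=0.3)

# The estimated concentration field should integrate to the total mass (= 1)
total_mass = c_kde.sum() * (x[1] - x[0])
```

With a histogram (bin counting) the same 500 particles would give a noisy, bin-size-dependent field; the kernel's widening region of influence is what restores well-mixed behavior with few particles.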
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, S.; Lewis, I. M.
One of the simplest extensions of the Standard Model (SM) is the addition of a scalar gauge singlet, S. If S is not forbidden by a symmetry from mixing with the Standard Model Higgs boson, the mixing will generate non-SM rates for Higgs production and decays. Generally, there could also be unknown high-energy physics that generates additional effective low-energy interactions. We show that interference effects between the scalar resonance of the singlet model and the effective field theory (EFT) operators can have significant effects in the Higgs sector. Here, we examine a non-Z2-symmetric scalar singlet model and demonstrate that a fit to the 125 GeV Higgs boson couplings and to limits on high-mass resonances, S, exhibits an interesting structure and possibly large cancellations between the resonance contribution and the new EFT interactions, which invalidate conclusions based on the renormalizable singlet model alone.
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featuring an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including the skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
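The two-part structure can be sketched by simulation. A minimal, hypothetical illustration with correlated random effects linking Part I and Part II; for simplicity the intensity errors here are normal on the log scale rather than the skew-t/skew-normal errors the paper proposes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_visits = 200, 4

# Correlated subject-level random effects linking the two parts
# (covariance values are hypothetical)
cov = np.array([[1.0, 0.3],
                [0.3, 0.25]])
b = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)

records = []
for i in range(n_subjects):
    for _ in range(n_visits):
        # Part I: occurrence of a positive value (logistic mixed model)
        logit = -0.5 + b[i, 0]
        p_pos = 1.0 / (1.0 + np.exp(-logit))
        if rng.random() < p_pos:
            # Part II: intensity of positive values (log-normal here; the
            # paper uses skew-t / skew-normal model errors instead)
            y = np.exp(1.0 + b[i, 1] + rng.normal(0, 0.3))
        else:
            y = 0.0
        records.append(y)

y = np.array(records)
prop_zero = np.mean(y == 0)               # excess zeros from Part I
mean_log_pos = np.log(y[y > 0]).mean()    # intensity from Part II
```

Because the two random effects are correlated, subjects more likely to report a positive value also tend to report larger positives, which is exactly the dependence the joint two-part model is built to capture.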
Scheduling Real-Time Mixed-Criticality Jobs
NASA Astrophysics Data System (ADS)
Baruah, Sanjoy K.; Bonifaci, Vincenzo; D'Angelo, Gianlorenzo; Li, Haohan; Marchetti-Spaccamela, Alberto; Megow, Nicole; Stougie, Leen
Many safety-critical embedded systems are subject to certification requirements; some systems may be required to meet multiple sets of certification requirements, from different certification authorities. Certification requirements in such "mixed-criticality" systems give rise to interesting scheduling problems that cannot be satisfactorily addressed using techniques from conventional scheduling theory. In this paper, we study a formal model for representing such mixed-criticality workloads. We demonstrate first the intractability of determining whether a system specified in this model can be scheduled to meet all its certification requirements, even for systems subject to two sets of certification requirements. Then we quantify, via the metric of processor speedup factor, the effectiveness of two techniques, reservation-based scheduling and priority-based scheduling, that are widely used in scheduling such mixed-criticality systems, showing that the latter of the two is superior to the former. We also show that the speedup factors are tight for these two techniques.
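A minimal sketch of one widely used priority-based technique for mixed-criticality jobs, an Audsley-style "own criticality based priority" (OCBP) assignment, assuming all jobs are released at time 0. The job set and level encoding are hypothetical:

```python
# Mixed-criticality jobs, all released at time 0 (hypothetical example).
# Each job: deadline, criticality level (1 = LO, 2 = HI), and a WCET
# estimate per level; a job's WCET is capped at its own criticality.
jobs = {
    "J1": {"deadline": 3, "crit": 1, "wcet": {1: 1}},
    "J2": {"deadline": 5, "crit": 2, "wcet": {1: 1, 2: 3}},
    "J3": {"deadline": 7, "crit": 1, "wcet": {1: 2}},
}

def wcet(job, level):
    return job["wcet"][min(level, job["crit"])]

def ocbp_priorities(jobs):
    """Audsley-style assignment: repeatedly find a job that can be given
    lowest priority, i.e. one that still meets its deadline when every
    remaining job runs for its WCET at *this* job's criticality level."""
    remaining = dict(jobs)
    order = []  # built lowest priority first
    while remaining:
        for name, job in remaining.items():
            load = sum(wcet(j, job["crit"]) for j in remaining.values())
            if load <= job["deadline"]:
                order.append(name)
                del remaining[name]
                break
        else:
            return None  # no OCBP-schedulable priority order exists
    return order[::-1]  # highest priority first

priorities = ocbp_priorities(jobs)
```

For this job set, J2 (the HI-criticality job) ends up with the highest priority: neither J1 nor J2 can be lowest, because at their own criticality level the total remaining load exceeds their deadline.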
NASA Technical Reports Server (NTRS)
Kuchar, A. P.; Chamberlin, R.
1980-01-01
A scale model performance test was conducted as part of the NASA Energy Efficient Engine (E3) Program, to investigate the geometric variables that influence the aerodynamic design of exhaust system mixers for high-bypass, mixed-flow engines. Mixer configuration variables included lobe number, penetration and perimeter, as well as several cutback mixer geometries. Mixing effectiveness and mixer pressure loss were determined using measured thrust and nozzle exit total pressure and temperature surveys. Results provide a data base to aid the analysis and design development of the E3 mixed-flow exhaust system.
Effects of Transition-Metal Mixing on Na Ordering and Kinetics in Layered P2 Oxides
NASA Astrophysics Data System (ADS)
Zheng, Chen; Radhakrishnan, Balachandran; Chu, Iek-Heng; Wang, Zhenbin; Ong, Shyue Ping
2017-06-01
Layered P2 oxides are promising cathode materials for rechargeable sodium-ion batteries. In this work, we systematically investigate the effects of transition-metal (TM) mixing on Na ordering and kinetics in the NaxCo1-yMnyO2 model system using density-functional-theory (DFT) calculations. The DFT-predicted 0-K stability diagrams indicate that Co-Mn mixing reduces the energetic differences between Na orderings, which may account for the reduction of the number of phase transformations observed during the cycling of mixed-TM P2 layered oxides compared to a single TM. Using ab initio molecular-dynamics simulations and nudged elastic-band calculations, we show that the TM composition at the Na(1) (face-sharing) site has a strong influence on the Na site energies, which in turn impacts the kinetics of Na diffusion towards the end of the charge. By employing a site-percolation model, we establish theoretical upper and lower bounds for TM concentrations based on their effect on Na(1) site energies, providing a framework to rationally tune mixed-TM compositions for optimal Na diffusion.
Numerical analysis of mixing by sharp-edge-based acoustofluidic micromixer
NASA Astrophysics Data System (ADS)
Nama, Nitesh; Huang, Po-Hsun; Jun Huang, Tony; Costanzo, Francesco
2015-11-01
Recently, acoustically oscillated sharp-edges have been employed to realize rapid and homogeneous mixing at microscales (Huang, Lab on a Chip, 13, 2013). Here, we present a numerical model, qualitatively validated by experimental results, to analyze the acoustic mixing inside a sharp-edge-based micromixer. We extend our previous numerical model (Nama, Lab on a Chip, 14, 2014) to combine the Generalized Lagrangian Mean (GLM) theory with the convection-diffusion equation, while also allowing for the presence of a background flow as observed in a typical sharp-edge-based micromixer. We employ a perturbation approach to divide the flow variables into zeroth-, first- and second-order fields which are successively solved to obtain the Lagrangian mean velocity. The Lagrangian mean velocity and the background flow velocity are further employed with the convection-diffusion equation to obtain the concentration profile. We characterize the effects of various operational and geometrical parameters to suggest potential design changes for improving the mixing performance of the sharp-edge-based micromixer. Lastly, we investigate the possibility of generating a spatio-temporally controllable concentration gradient by placing sharp-edge structures inside the microchannel.
NASA Astrophysics Data System (ADS)
Zhang, Yong; Papelis, Charalambos; Sun, Pengtao; Yu, Zhongbo
2013-08-01
Particle-based models and continuum models have been developed to quantify mixing-limited bimolecular reactions for decades. Effective model parameters control reaction kinetics, but the relationship between the particle-based model parameter (such as the interaction radius R) and the continuum model parameter (i.e., the effective rate coefficient Kf) remains obscure. This study attempts to evaluate and link R and Kf for the second-order bimolecular reaction in both the bulk and the sharp-concentration-gradient (SCG) systems. First, in the bulk system, the agent-based method reveals that R remains constant for irreversible reactions and decreases nonlinearly in time for a reversible reaction, while mathematical analysis shows that Kf transitions from an exponential to a power-law function. A qualitative link between R and Kf can then be established for the irreversible reaction with equal initial reactant concentrations. Second, in the SCG system with a reaction interface, numerical experiments show that when R and Kf decline as t^(-1/2) (for example, to account for the reactant front expansion), the two models capture the transient power-law growth of product mass, and their effective parameters have the same functional form. Finally, revisiting laboratory experiments further shows that the best-fit factors in R and Kf are of the same order, and both models can efficiently describe the chemical kinetics observed in the SCG system. Effective model parameters used to describe reaction kinetics therefore may be linked directly, where the exact linkage may depend on the chemical and physical properties of the system.
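The particle-based side of such models can be sketched with a toy agent-based simulation of the irreversible reaction A + B → products, where an A-B pair closer than the interaction radius R reacts with some probability. Parameters are hypothetical and distances are plain Euclidean (wrap-around neglected) for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Equal numbers of A and B particles in a unit periodic domain (hypothetical)
n = 200
A = rng.random((n, 2))
B = rng.random((n, 2))

def react_step(A, B, R, p_react, rng):
    """One step of irreversible A + B -> products: an A-B pair closer than
    the interaction radius R reacts with probability p_react; both
    partners are then removed, preserving stoichiometry."""
    removed_a, removed_b = set(), set()
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if j in removed_b:
                continue
            if np.linalg.norm(a - b) < R and rng.random() < p_react:
                removed_a.add(i)
                removed_b.add(j)
                break  # each A reacts at most once per step
    A = np.delete(A, list(removed_a), axis=0)
    B = np.delete(B, list(removed_b), axis=0)
    return A, B

for _ in range(10):
    # Diffusion: Gaussian random walk, wrapped back into [0, 1)
    A = (A + rng.normal(0, 0.02, A.shape)) % 1.0
    B = (B + rng.normal(0, 0.02, B.shape)) % 1.0
    A, B = react_step(A, B, R=0.03, p_react=0.5, rng=rng)
```

Shrinking R in time (e.g. as t^(-1/2)) is the particle-side analogue of the declining effective rate coefficient Kf discussed in the abstract.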
Numerical Study of Mixing Thermal Conductivity Models for Nanofluid Heat Transfer Enhancement
NASA Astrophysics Data System (ADS)
Pramuanjaroenkij, A.; Tongkratoke, A.; Kakaç, S.
2018-01-01
Researchers have paid attention to nanofluid applications, since nanofluids have revealed their potential as working fluids in many thermal systems. Numerical studies of convective heat transfer in nanofluids can be based on considering them as single- or two-phase fluids. This work is focused on improving the single-phase nanofluid model performance, since the employment of this model requires less calculation time and is less complicated, due to utilizing the mixing thermal conductivity model, which combines static and dynamic parts used in the simulation domain alternately. An in-house numerical program has been developed to analyze the effects of the grid nodes, effective viscosity model, boundary-layer thickness, and mixing thermal conductivity model on the nanofluid heat transfer enhancement. CuO-water, Al2O3-water, and Cu-water nanofluids are chosen, and their laminar fully developed flows through a rectangular channel are considered. The influence of the effective viscosity model on the nanofluid heat transfer enhancement is estimated through the average differences between the numerical and experimental results for the nanofluids mentioned. The nanofluid heat transfer enhancement results show that the mixing thermal conductivity model consisting of the Maxwell model as the static part and the Yu and Choi model as the dynamic part, applied to all three nanofluids, brings the numerical results closer to the experimental ones. The average differences between those results for the CuO-water, Al2O3-water, and Cu-water nanofluid flows are 3.25, 2.74, and 3.02%, respectively. The mixing thermal conductivity model has been proved to increase the accuracy of the single-phase nanofluid simulation and to reveal its potential in single-phase nanofluid numerical studies.
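The static part named in the abstract, the Maxwell model, and one common reading of the Yu-Choi modification (a nanolayer inflating the effective particle volume fraction) can be sketched as follows; the property values are representative, not the paper's:

```python
def maxwell_k(k_f, k_p, phi):
    """Maxwell effective thermal conductivity (static part):
    k_f = base fluid, k_p = particle, phi = particle volume fraction."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

def yu_choi_k(k_f, k_p, phi, beta):
    """Yu-Choi modification: the nanolayer inflates the effective particle
    volume fraction by (1 + beta)**3, beta = layer thickness / radius."""
    return maxwell_k(k_f, k_p, phi * (1 + beta) ** 3)

# CuO particles in water (illustrative property values only)
k_water, k_cuo, phi = 0.613, 20.0, 0.02
k_static = maxwell_k(k_water, k_cuo, phi)
k_layered = yu_choi_k(k_water, k_cuo, phi, beta=0.1)
```

The "mixing" idea in the paper alternates a static formula like this with a dynamic (Brownian-motion-dependent) part across the simulation domain; only the static side is sketched here.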
Analyzing Association Mapping in Pedigree-Based GWAS Using a Penalized Multitrait Mixed Model
Liu, Jin; Yang, Can; Shi, Xingjie; Li, Cong; Huang, Jian; Zhao, Hongyu; Ma, Shuangge
2017-01-01
Genome-wide association studies (GWAS) have led to the identification of many genetic variants associated with complex diseases in the past 10 years. Penalization methods, with significant numerical and statistical advantages, have been extensively adopted in analyzing GWAS. This study has been partly motivated by the analysis of Genetic Analysis Workshop (GAW) 18 data, which have two notable characteristics. First, the subjects are from a small number of pedigrees and hence related. Second, for each subject, multiple correlated traits have been measured. Most of the existing penalization methods assume independence between subjects and traits and can be suboptimal. There are a few methods in the literature based on mixed modeling that can accommodate correlations. However, they cannot fully accommodate the two types of correlations while conducting effective marker selection. In this study, we develop a penalized multitrait mixed modeling approach. It accommodates the two different types of correlations and includes several existing methods as special cases. Effective penalization is adopted for marker selection. Simulation demonstrates its satisfactory performance. The GAW 18 data are analyzed using the proposed method. PMID:27247027
The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words
ERIC Educational Resources Information Center
Xu, Joe; Taft, Marcus
2015-01-01
A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…
The effect of different methods to compute N on estimates of mixing in stratified flows
NASA Astrophysics Data System (ADS)
Fringer, Oliver; Arthur, Robert; Venayagamoorthy, Subhas; Koseff, Jeffrey
2017-11-01
The background stratification is typically well defined in idealized numerical models of stratified flows, although it is more difficult to define in observations. This may have important ramifications for estimates of mixing which rely on knowledge of the background stratification against which turbulence must work to mix the density field. Using direct numerical simulation data of breaking internal waves on slopes, we demonstrate a discrepancy in ocean mixing estimates depending on the method in which the background stratification is computed. Two common methods are employed to calculate the buoyancy frequency N, namely a three-dimensionally resorted density field (often used in numerical models) and a locally-resorted vertical density profile (often used in the field). We show that how N is calculated has a significant effect on the flux Richardson number Rf, which is often used to parameterize turbulent mixing, and the turbulence activity number Gi, which leads to errors when estimating the mixing efficiency using Gi-based parameterizations. Supported by ONR Grant N00014-08-1-0904 and LLNL Contract DE-AC52-07NA27344.
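The two ways of computing the background stratification can be contrasted on a toy density field: sorting all parcels of the whole field into one background profile versus resorting each vertical column locally. A hypothetical numpy sketch (not the authors' DNS data):

```python
import numpy as np

rng = np.random.default_rng(4)
g, rho0 = 9.81, 1000.0

# 2D density field (nz levels x nx columns): linear background
# stratification plus overturning perturbations; z increases upward
nz, nx = 50, 40
z = np.linspace(0.0, 10.0, nz)
rho = rho0 + 5.0 * (1 - z / 10.0)[:, None] + rng.normal(0, 0.5, (nz, nx))

def n2_from_profile(rho_profile, z):
    """Buoyancy frequency squared from a stably sorted profile:
    N^2 = -(g / rho0) * d(rho)/dz, with z increasing upward."""
    return -(g / rho0) * np.gradient(rho_profile, z)

# Method 1: sort the *entire* field into one background profile
rho_global = np.sort(rho.ravel())[::-1]      # densest parcels at the bottom
rho_global = rho_global.reshape(nz, nx).mean(axis=1)
n2_global = n2_from_profile(rho_global, z)

# Method 2: resort each vertical column locally, then average
rho_local = np.sort(rho, axis=0)[::-1]
n2_local = n2_from_profile(rho_local.mean(axis=1), z)
```

Both profiles are stably stratified by construction, but the two N² estimates differ, which is the discrepancy the abstract says propagates into the flux Richardson number and activity-number-based mixing parameterizations.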
Hesse-Biber, Sharlene
2016-04-01
Current trends in health care research point to a shift from disciplinary models to interdisciplinary team-based mixed methods inquiry designs. This keynote address discusses the problems and prospects of creating vibrant mixed methods health care interdisciplinary research teams whose potential synergy holds the promise of addressing complex health care issues. We examine the range of factors and issues these types of research teams need to consider to facilitate efficient interdisciplinary mixed methods team-based research. It is argued that concepts such as disciplinary comfort zones, a lack of attention to team dynamics, and low levels of reflexivity among interdisciplinary team members can inhibit the effectiveness of a research team. This keynote suggests a set of effective strategies to address the issues that emanate from the new field of research inquiry known as team science, as well as lessons learned from tapping into research on organizational dynamics. © The Author(s) 2016.
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family, taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster-level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children's Health Study (CHS) to jointly model questionnaire-based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
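The hidden-Markov machinery underneath an MHMM rests on the standard forward recursion, with the emission matrix playing the role of the misclassification probabilities. A minimal, hypothetical two-state sketch (no covariates or random effects, so this is only the likelihood kernel, not the full mixed model):

```python
import numpy as np

# Hypothetical 2-state hidden chain (e.g. true asthma state) observed
# through a misclassified binary outcome.
prevalence = np.array([0.8, 0.2])            # initial P(true state)
transition = np.array([[0.9, 0.1],
                       [0.3, 0.7]])          # P(state_t | state_{t-1})
misclass = np.array([[0.95, 0.05],           # P(observed | true state):
                     [0.20, 0.80]])          # rows = true, cols = observed

def forward_loglik(obs):
    """Scaled forward algorithm: log-likelihood of an observed sequence,
    with the misclassification matrix acting as the emission model."""
    alpha = prevalence * misclass[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ transition) * misclass[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik
```

In the full MHMM, the prevalence, transition and misclassification probabilities would themselves depend on predictors and cluster-level random effects.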
Mixed conditional logistic regression for habitat selection studies.
Duchesne, Thierry; Fortin, Daniel; Courbin, Nicolas
2010-05-01
1. Resource selection functions (RSFs) are becoming a dominant tool in habitat selection studies. RSF coefficients can be estimated with unconditional (standard) and conditional logistic regressions. While the advantage of mixed-effects models is recognized for standard logistic regression, mixed conditional logistic regression remains largely overlooked in ecological studies. 2. We demonstrate the significance of mixed conditional logistic regression for habitat selection studies. First, we use spatially explicit models to illustrate how mixed-effects RSFs can be useful in the presence of inter-individual heterogeneity in selection and when the assumption of independence from irrelevant alternatives (IIA) is violated. The IIA hypothesis states that the strength of preference for habitat type A over habitat type B does not depend on the other habitat types also available. Secondly, we demonstrate the significance of mixed-effects models to evaluate habitat selection of free-ranging bison Bison bison. 3. When movement rules were homogeneous among individuals and the IIA assumption was respected, fixed-effects RSFs adequately described habitat selection by simulated animals. In situations violating the inter-individual homogeneity and IIA assumptions, however, RSFs were best estimated with mixed-effects regressions, and fixed-effects models could even provide faulty conclusions. 4. Mixed-effects models indicate that bison did not select farmlands, but exhibited strong inter-individual variations in their response to farmlands. Less than half of the bison preferred farmlands over forests. Conversely, the fixed-effect model simply suggested an overall selection for farmlands. 5. Conditional logistic regression is recognized as a powerful approach to evaluate habitat selection when resource availability changes. This regression is increasingly used in ecological studies, but almost exclusively in the context of fixed-effects models. 
Fitness maximization can imply differences in trade-offs among individuals, which can yield inter-individual differences in selection and lead to departure from IIA. These situations are best modelled with mixed-effects models. Mixed-effects conditional logistic regression should become a valuable tool for ecological research.
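The core of conditional logistic regression is a softmax over the alternatives within each choice set, and the "mixed" version lets the selection coefficient vary across individuals. A minimal, hypothetical sketch of how inter-individual heterogeneity enters:

```python
import numpy as np

rng = np.random.default_rng(5)

def choice_probs(x, beta):
    """Conditional-logit probabilities within one choice set (stratum):
    P(choose k) = exp(x_k * beta) / sum_j exp(x_j * beta)."""
    u = np.exp(x * beta)
    return u / u.sum()

# One binary habitat covariate: alternative 0 = forest, 1 = farmland
x = np.array([0.0, 1.0])

# Individual-specific selection coefficients: a random effect around a
# weak mean preference (values hypothetical)
betas = rng.normal(loc=0.2, scale=1.5, size=1000)

p_farm = np.array([choice_probs(x, b)[1] for b in betas])
pop_avg = p_farm.mean()            # what a fixed-effect fit tends to report
frac_prefer = np.mean(betas > 0)   # fraction of individuals preferring farmland
```

A fixed-effects fit summarizes only the population-average probability; the random-coefficient view also exposes how many individuals actually prefer the habitat, which is the distinction the bison example turns on.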
NASA Astrophysics Data System (ADS)
Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal
2017-04-01
The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant to understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on the linear mixing assumption for the CSSI signatures of the sources to the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land-use sources. Soil samples from the land-use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, both with and without concentration dependency. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions based on the aggregated concentration of FAs of the sources biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of the contributions of sources to the mixture.
The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable even after equilibrium. Therefore, inclusion of the FA concentrations of the sources in the IMM formulation should be standard procedure for accurate estimation of source contributions. The post-model correction approach that currently dominates CSSI fingerprinting introduces bias, especially if the FA concentrations of the sources differ substantially.
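The difference between concentration-dependent and concentration-independent (linear) mixing can be shown with a worked example; all numbers are hypothetical:

```python
import numpy as np

# Three land-use sources (values hypothetical): delta13C of one fatty-acid
# tracer, its concentration in each source, and the true soil-mass
# fractions used to build an artificial mixture.
delta = np.array([-30.0, -27.0, -24.0])   # per mil
conc = np.array([5.0, 1.0, 2.0])          # tracer concentration per unit soil
frac = np.array([0.5, 0.3, 0.2])          # soil mass fractions

# Concentration-dependent mixing: each source's isotopic contribution is
# weighted by the tracer mass it actually supplies (f_i * C_i)
delta_mix_dep = np.sum(frac * conc * delta) / np.sum(frac * conc)

# Concentration-independent (linear) mixing ignores C_i entirely
delta_mix_indep = np.sum(frac * delta)
```

Here the FA-rich first source dominates the tracer pool, pulling the concentration-dependent mixture signature toward its δ13C; inverting the linear model against such a mixture is exactly how the bias described in the abstract arises.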
Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Kiyotaka, Shibata; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn
2018-05-01
The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
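The decomposition used in the paper reduces to simple arithmetic once AoA and RCTT are in hand; a sketch with illustrative numbers (the paper's formal definition of mixing efficiency may differ in detail from this simple reading):

```python
# Decomposition: age of air (AoA) = residual circulation transit time
# (RCTT) + additional aging by mixing; the mixing efficiency measures the
# relative increase of AoA by mixing. Numbers are illustrative, not model
# output.

def aging_by_mixing(aoa, rctt):
    """Additional aging attributable to (resolved + subgrid) mixing."""
    return aoa - rctt

def mixing_efficiency(aoa, rctt):
    """Relative increase in AoA by mixing, one simple reading of the
    metric described in the abstract."""
    return (aoa - rctt) / rctt

aoa, rctt = 4.5, 3.0   # years, hypothetical mid-stratospheric values
extra_age = aging_by_mixing(aoa, rctt)
efficiency = mixing_efficiency(aoa, rctt)
```

With these numbers, 1.5 of the 4.5 years of age comes from mixing, an efficiency of 0.5, which sits inside the 0.24-1.02 inter-model range quoted in the abstract.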
Pang, Liuyong; Shen, Lin; Zhao, Zhong
2016-01-01
To begin with, in this paper, single immunotherapy, single chemotherapy, and mixed treatment are discussed, and sufficient conditions under which tumor cells will be eliminated ultimately are obtained. We analyze the impacts of the least effective concentration and the half-life of the drug on therapeutic results and then find that increasing the least effective concentration or extending the half-life of the drug can achieve better therapeutic effects. In addition, since most types of tumors are resistant to common chemotherapy drugs, we consider the impact of drug resistance on therapeutic results and propose a new mathematical model to explain the cause of the chemotherapeutic failure using single drug. Based on this, in the end, we explore the therapeutic effects of two-drug combination chemotherapy, as well as mixed immunotherapy with combination chemotherapy. Numerical simulations indicate that combination chemotherapy is very effective in controlling tumor growth. In comparison, mixed immunotherapy with combination chemotherapy can achieve a better treatment effect. PMID:26997972
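The role of the half-life and the least effective concentration can be sketched with a standard one-compartment exponential decay; the numbers are hypothetical and this is not the paper's full tumor-immune model:

```python
import math

def concentration(c0, t, t_half):
    """Exponential drug decay: C(t) = C0 * 2**(-t / t_half)."""
    return c0 * 2 ** (-t / t_half)

def time_above_mec(c0, mec, t_half):
    """Duration the concentration stays above the least effective
    concentration (MEC), obtained by solving C(t) = MEC."""
    return t_half * math.log2(c0 / mec)

# Hypothetical values: initial dose 8 units, MEC 1 unit, half-life 6 h
c0, mec, t_half = 8.0, 1.0, 6.0
t_eff = time_above_mec(c0, mec, t_half)
```

Because the effective window grows linearly with the half-life, extending the half-life (or otherwise widening the gap between dose and MEC) lengthens the time the drug remains therapeutically active, consistent with the abstract's conclusion.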
Line mixing calculation in the ν6 Q-branches of N2-broadened CH3Br at low temperatures
NASA Astrophysics Data System (ADS)
Gomez, L.; Tran, H.; Jacquemart, D.
2009-07-01
In an earlier study [H. Tran, D. Jacquemart, J.Y. Mandin, N. Lacome, JQSRT 109 (2008) 119-131], line mixing effects in the ν6 band of methyl bromide were observed and modeled at room temperature. In the present work, line mixing effects have been considered at low temperatures using state-to-state collisional rates modeled by a fitting law based on the energy gap and a few fitting parameters. To validate the model, several spectra of methyl bromide perturbed by nitrogen have been recorded at various temperatures (205-299 K) and pressures (230-825 hPa). Comparisons between measured spectra and calculations, using both direct calculation from the relaxation operator and the Rosenkranz profile, show an improvement over the usual Lorentz profile. Note that the temperature dependence of the spectroscopic parameters has been taken into account using results of previous studies.
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for assessing the relative importance of the genetic, or heritable, component against the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effects models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by the survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
Internal friction and vulnerability of mixed alkali glasses.
Peibst, Robby; Schott, Stephan; Maass, Philipp
2005-09-09
Based on a hopping model we show how the mixed alkali effect in glasses can be understood if only a small fraction c(V) of the available sites for the mobile ions is vacant. In particular, we reproduce the peculiar behavior of the internal friction and the steep fall ("vulnerability") of the mobility of the majority ion upon small replacements by the minority ion. The single and mixed alkali internal friction peaks are caused by ion-vacancy and ion-ion exchange processes. If c(V) is small, they can become comparable in height even at small mixing ratios. The large vulnerability is explained by a trapping of vacancies induced by the minority ions. Reasonable choices of model parameters yield typical behaviors found in experiments.
Nguyen, Huy Truong; Lee, Dong-Kyu; Choi, Young-Geun; Min, Jung-Eun; Yoon, Sang Jun; Yu, Yun-Hyun; Lim, Johan; Lee, Jeongmi; Kwon, Sung Won; Park, Jeong Hill
2016-05-30
Ginseng, the root of Panax ginseng, has long been the subject of adulteration, especially regarding its origins. Here, 60 ginseng samples from Korea and China initially displayed similar genetic makeup when investigated by a DNA-based technique with 23 chloroplast intergenic spacer regions. Hence, ¹H NMR-based metabolomics with orthogonal projections to latent structures discriminant analysis (OPLS-DA) was applied and successfully distinguished between samples from the two countries using seven primary metabolites as discrimination markers. Furthermore, to recreate adulteration in reality, 21 mixed samples of various Korea/China ratios were tested with the newly built OPLS-DA model. The results showed satisfactory separation according to the proportion of mixing. Finally, a procedure for assessing the mixing proportion of intentionally blended samples that achieved good predictability (adjusted R² = 0.8343) was constructed, thus verifying its promising application to the quality control of herbal foods by pointing out the possible mixing ratio of falsified samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Mixed models approaches for joint modeling of different types of responses.
Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert
2016-01-01
In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association, with a further correction for overdispersion, can improve the model's fit considerably, and that the resulting models allow research questions to be answered that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.
A Parameter Subset Selection Algorithm for Mixed-Effects Models
Schmidt, Kathleen L.; Smith, Ralph C.
2016-01-01
Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. Finally, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
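A rough sketch of the idea behind ordering parameters by significance (not the PSS algorithm itself, which additionally accounts for parameter correlations) is to rank them by the size of the columns of a sensitivity matrix:

```python
import math

def order_parameters_by_sensitivity(sens, names):
    """Rank parameters by the 2-norm of their sensitivity columns.
    `sens` is a list of rows: sens[i][j] = d(output_i)/d(theta_j).
    Simplified stand-in for the PSS ordering described in the abstract."""
    ncols = len(names)
    norms = [math.sqrt(sum(row[j] ** 2 for row in sens)) for j in range(ncols)]
    return sorted(zip(names, norms), key=lambda p: -p[1])

# Toy sensitivity matrix: parameter 'a' influences the outputs far more
# strongly than 'c', which would be a candidate for fixing or removal.
S = [[3.0, 1.0, 0.1],
     [4.0, 0.0, 0.1]]
ranking = order_parameters_by_sensitivity(S, ["a", "b", "c"])
print([name for name, _ in ranking])  # ['a', 'b', 'c']
```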
Performance of nonlinear mixed effects models in the presence of informative dropout.
Björnsson, Marcus A; Friberg, Lena E; Simonsson, Ulrika S H
2015-01-01
Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed effects models with regard to bias and precision, with and without handling informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated and model parameters were reestimated, with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed effects parameters, when a dropout model was used in the estimations. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% if the FOCE-I estimation method was applied. The bias increased with decreasing number of observations per subject, increasing placebo effect and increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed effects modeling, but even in cases with few observations or high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes.
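The mechanism the abstract describes, dropout that depends on the efficacy variable, can be illustrated with a toy simulation. All numbers here are hypothetical and unrelated to the NONMEM/PsN study settings:

```python
import random

def simulate_dropout(n_subjects=2000, n_visits=5, seed=1):
    """Toy illustration of informative dropout: each subject has a latent
    efficacy trajectory, and a low current value raises the chance of
    dropping out, so the naive mean over the remaining observations is
    biased upward relative to the full-data mean."""
    rng = random.Random(seed)
    all_values, observed = [], []
    for _ in range(n_subjects):
        base = rng.gauss(0.0, 1.0)
        ys = [base + 0.5 * v + rng.gauss(0.0, 0.3) for v in range(n_visits)]
        all_values.extend(ys)
        for y in ys:
            observed.append(y)
            if y < -0.5 and rng.random() < 0.8:
                break  # subject drops out; later visits stay unobserved
    full_mean = sum(all_values) / len(all_values)
    naive_mean = sum(observed) / len(observed)
    return full_mean, naive_mean

full, naive = simulate_dropout()
print(naive > full)  # dropout of low responders inflates the naive mean
```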
GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.
Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N
2018-01-01
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
Using Poisson mixed-effects model to quantify transcript-level gene expression in RNA-Seq.
Hu, Ming; Zhu, Yu; Taylor, Jeremy M G; Liu, Jun S; Qin, Zhaohui S
2012-01-01
RNA sequencing (RNA-Seq) is a powerful new technology for mapping and quantifying transcriptomes using ultra high-throughput next-generation sequencing technologies. Using deep sequencing, gene expression levels of all transcripts, including novel ones, can be quantified digitally. Although extremely promising, the massive amounts of data generated by RNA-Seq, substantial biases, and uncertainty in short-read alignment pose challenges for data analysis. In particular, large base-specific variation and between-base dependence make simple approaches, such as those that use averaging to normalize RNA-Seq data and quantify gene expression, ineffective. In this study, we propose a Poisson mixed-effects (POME) model to characterize base-level read coverage within each transcript. The underlying expression level is included as a key parameter in this model. Since the proposed model is capable of incorporating base-specific variation as well as between-base dependence that affect the read coverage profile throughout the transcript, it can lead to improved quantification of the true underlying expression level. Availability: POME can be freely downloaded at http://www.stat.purdue.edu/~yuzhu/pome.html. Contact: yuzhu@purdue.edu; zhaohui.qin@emory.edu. Supplementary data are available at Bioinformatics online.
Physiological effects of diet mixing on consumer fitness: a meta-analysis.
Lefcheck, Jonathan S; Whalen, Matthew A; Davenport, Theresa M; Stone, Joshua P; Duffy, J Emmett
2013-03-01
The degree of dietary generalism among consumers has important consequences for population, community, and ecosystem processes, yet the effects on consumer fitness of mixing food types have not been examined comprehensively. We conducted a meta-analysis of 161 peer-reviewed studies reporting 493 experimental manipulations of prey diversity to test whether diet mixing enhances consumer fitness based on the intrinsic nutritional quality of foods and consumer physiology. Averaged across studies, mixed diets conferred significantly higher fitness than the average of single-species diets, but not the best single prey species. More than half of individual experiments, however, showed maximal growth and reproduction on mixed diets, consistent with the predicted benefits of a balanced diet. Mixed diets including chemically defended prey were no better than the average prey type, opposing the prediction that a diverse diet dilutes toxins. Finally, mixed-model analysis showed that the effect of diet mixing was stronger for herbivores than for higher trophic levels. The generally weak evidence for the nutritional benefits of diet mixing in these primarily laboratory experiments suggests that diet generalism is not strongly favored by the inherent physiological benefits of mixing food types, but is more likely driven by ecological and environmental influences on consumer foraging.
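For readers unfamiliar with meta-analytic pooling, the simplest summary is sketched below: a fixed-effect, inverse-variance weighted estimate. The study itself used mixed-effects models, which additionally include a between-study variance component; the numbers here are made up for illustration.

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate of an effect size.
    Returns the pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)
    return est, var

# Three hypothetical diet-mixing effect sizes with their variances
est, var = pooled_effect([0.4, 0.1, 0.3], [0.04, 0.01, 0.02])
print(est)  # 0.2: precise studies pull the pooled estimate toward them
```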
NASA Technical Reports Server (NTRS)
Li, Xiaofan; Sui, C.-H.; Lau, K-M.; Adamec, D.
1999-01-01
A two-dimensional coupled ocean-cloud resolving atmosphere model is used to investigate possible roles of convective-scale ocean disturbances induced by atmospheric precipitation on ocean mixed-layer heat and salt budgets. The model couples a cloud resolving model with an embedded mixed layer-ocean circulation model. Five experiments are performed under imposed large-scale atmospheric forcing in terms of vertical velocity derived from the TOGA COARE observations during a selected seven-day period. The dominant variability of mixed-layer temperature and salinity is simulated by the coupled model with imposed large-scale forcing. The mixed-layer temperatures in the coupled experiments with 1-D and 2-D ocean models show similar variations when salinity effects are not included. When salinity effects are included, however, differences in the domain-mean mixed-layer salinity and temperature between coupled experiments with 1-D and 2-D ocean models can be as large as 0.3 PSU and 0.4 C, respectively. Without fresh water effects, the nocturnal heat loss over the ocean surface causes deep mixed layers and weak cooling rates, so that the nocturnal mixed-layer temperatures tend to be horizontally uniform. The fresh water flux, however, causes shallow mixed layers over convective areas while the nocturnal heat loss causes deep mixed layers over convection-free areas, so that the mixed-layer temperatures have large horizontal fluctuations. Furthermore, the fresh water flux exhibits larger spatial fluctuations than the surface heat flux because heavy rainfall occurs over convective areas embedded in broad non-convective or clear areas, whereas diurnal signals over the whole model area yield a high spatial correlation of surface heat flux. As a result, mixed-layer salinities contribute more to the density differences than do mixed-layer temperatures.
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Nakshatrala, K. B.
2016-12-01
Fundamental to the enhancement and control of the macroscopic spreading, mixing, and dilution of solute plumes in porous media is the topology of the flow field and the underlying heterogeneity and anisotropy contrast of the porous media. Traditionally, the literature focused on the shearing effects of the flow field (i.e., flow with zero helical density, meaning that the flow is always perpendicular to the vorticity vector) on scalar mixing [2]. However, the combined effect of the anisotropy of the porous media and the helical structure (or chaotic nature) of the flow field on species reactive-transport and mixing has rarely been studied. Recently, it has been shown experimentally that there is irrefutable evidence that chaotic advection and helical flows are inherent in porous media flows [1,2]. In this poster presentation, we present a non-intrusive physics-based model-order reduction framework to quantify the effects of species mixing in terms of reduced-order models (ROMs) and scaling laws. The ROM framework is constructed based on recent advancements in non-negative formulations for reactive-transport in heterogeneous anisotropic porous media [3] and non-intrusive ROM methods [4]. The objective is to generate computationally efficient and accurate ROMs for species mixing for different values of the input data and the reactive-transport model parameters. This is achieved by using multiple ROMs, which is a way to determine the robustness of the proposed framework. Sensitivity analysis is performed to identify the important parameters. Representative numerical examples from reactive-transport are presented to illustrate the importance of the proposed ROMs in accurately describing the mixing process in porous media. [1] Lester, Metcalfe, and Trefry, "Is chaotic advection inherent to porous media flow?," PRL, 2013. [2] Ye, Chiogna, Cirpka, Grathwohl, and Rolle, "Experimental evidence of helical flow in porous media," PRL, 2015.
[3] Mudunuru, and Nakshatrala, "On enforcing maximum principles and achieving element-wise species balance for advection-diffusion-reaction equations under the finite element method," JCP, 2016. [4] Quarteroni, Manzoni, and Negri. "Reduced Basis Methods for Partial Differential Equations: An Introduction," Springer, 2016.
Pinto, B M; Lynn, H; Marcus, B H; DePue, J; Goldstein, M G
2001-01-01
In theory-based interventions for behavior change, there is a need to examine the effects of interventions on the underlying theoretical constructs and the mediating role of such constructs. These two questions are addressed in the Physically Active for Life study, a randomized trial of physician-based exercise counseling for older adults. Three hundred fifty-five patients participated (intervention n = 181, control n = 174; mean age = 65.6 years). The underlying theories used were the Transtheoretical Model, Social Cognitive Theory and the constructs of decisional balance (benefits and barriers), self-efficacy, and behavioral and cognitive processes of change. Motivational readiness for physical activity and related constructs were assessed at baseline, 6 weeks, and 8 months. Linear or logistic mixed effects models were used to examine intervention effects on the constructs, and logistic mixed effects models were used for mediator analyses. At 6 weeks, the intervention had significant effects on decisional balance, self-efficacy, and behavioral processes, but these effects were not maintained at 8 months. At 6 weeks, only decisional balance and behavioral processes were identified as mediators of motivational readiness outcomes. Results suggest that interventions of greater intensity and duration may be needed for sustained changes in mediators and motivational readiness for physical activity among older adults.
Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.
Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng
2014-06-01
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.
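The contrast between complete and incomplete mixing at a cross junction can be sketched with a simple mass balance. The blend parameter `s` below is an illustrative stand-in for the momentum-ratio-based flow distribution factor derived in the paper; mass is conserved for any value of `s`.

```python
def cross_junction(qw, cw, qs, cs, s=1.0):
    """Outlet concentrations at a cross junction with equal opposing flows
    (east outlet carries the west inlet flow qw, north carries the south
    inlet flow qs). s = 1 reproduces the complete-mixing assumption common
    in network models; s = 0 gives pure straight-through transport.
    Returns (c_east, c_north)."""
    c_mix = (qw * cw + qs * cs) / (qw + qs)          # flow-weighted mean
    c_east = s * c_mix + (1.0 - s) * cw              # blend toward inlet
    c_north = (qw * cw + qs * cs - qw * c_east) / qs  # closes mass balance
    return c_east, c_north

# Complete mixing: both outlets leave at the flow-weighted mean
print(cross_junction(1.0, 100.0, 1.0, 0.0, s=1.0))  # (50.0, 50.0)
# No mixing: the tracer passes straight through
print(cross_junction(1.0, 100.0, 1.0, 0.0, s=0.0))  # (100.0, 0.0)
```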
Extended Mixed-Effects Item Response Models with the MH-RM Algorithm
ERIC Educational Resources Information Center
Chalmers, R. Philip
2015-01-01
A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…
Model for toroidal velocity in H-mode plasmas in the presence of internal transport barriers
NASA Astrophysics Data System (ADS)
Chatthong, B.; Onjun, T.; Singhsomroje, W.
2010-06-01
A model for predicting toroidal velocity in H-mode plasmas in the presence of internal transport barriers (ITBs) is developed using an empirical approach. In this model, it is assumed that the toroidal velocity is directly proportional to the local ion temperature. The model is implemented in the BALDUR integrated predictive modelling code so that simulations of ITB plasmas can be carried out self-consistently. In these simulations, a combination of a semi-empirical mixed Bohm/gyro-Bohm (mixed B/gB) core transport model that includes ITB effects and NCLASS neoclassical transport is used to compute the core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model based on a combination of magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The combination of the mixed B/gB core transport model with ITB effects, together with the pedestal and toroidal velocity models, is used to simulate the time evolution of plasma current, temperature and density profiles of 10 JET optimized shear discharges. It is found that the simulations can reproduce ITB formation in these discharges. Statistical analyses, including root mean square error (RMSE) and offset, are used to quantify the agreement. The averaged RMSE and offset among these discharges are about 24.59% and -0.14%, respectively.
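The agreement measures quoted above (RMSE and offset) are straightforward to compute. The sketch below expresses them in data units rather than the normalised percentages used in the abstract:

```python
import math

def rmse_and_offset(pred, obs):
    """Root mean square error and mean offset (bias) between simulated and
    experimental profiles, in the same units as the data."""
    n = len(pred)
    resid = [p - o for p, o in zip(pred, obs)]
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    offset = sum(resid) / n
    return rmse, offset

rmse, offset = rmse_and_offset([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
print(rmse)    # ~1.1547: magnitude of the disagreement
print(offset)  # ~-0.6667: the simulation underpredicts on average
```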
NASA Astrophysics Data System (ADS)
Rahbarimanesh, Saeed; Brinkerhoff, Joshua
2017-11-01
The mutual interaction of shear layer instabilities and phase change in a two-dimensional cryogenic cavitating mixing layer is investigated using a numerical model. The developed model employs the homogeneous equilibrium mixture (HEM) approach in a density-based framework to compute the temperature-dependent cavitation field for liquefied natural gas (LNG). Thermal and baroclinic effects are captured via iterative coupled solution of the governing equations with dynamic thermophysical models that accurately capture the properties of LNG. The mixing layer is simulated for vorticity-thickness Reynolds numbers of 44 to 215 and cavitation numbers of 0.1 to 1.1. Attached cavity structures develop on the splitter plate followed by roll-up of the separated shear layer via the well-known Kelvin-Helmholtz mode, leading to streamwise accumulation of vorticity and eventual shedding of discrete vortices. Cavitation occurs as vapor cavities nucleate and grow from the low-pressure cores in the rolled-up vortices. Thermal effects and baroclinic vorticity production are found to have significant impacts on the mixing layer instability and cavitation processes.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables include elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Curtis L. Vanderschaaf
2008-01-01
Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...
NASA Astrophysics Data System (ADS)
Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan
2015-02-01
The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.
Convective Overshoot in Stellar Interior
NASA Astrophysics Data System (ADS)
Zhang, Q. S.
2015-07-01
In stellar interiors, turbulent thermal convection transports matter and energy, and dominates the structure and evolution of stars. The convective overshoot, which results from non-local convective transport from the convection zone into the radiative zone, is one of the most uncertain and difficult factors in stellar physics at present. The classical method for studying convective overshoot is the non-local mixing-length theory (NMLT). However, the NMLT is based on phenomenological assumptions and leads to contradictions, for which it has been criticized in the literature. Helioseismic studies have shown that the NMLT cannot satisfy the helioseismic requirements, and have pointed out that only turbulent convection models (TCMs) can be accepted. In the first part of this thesis, the models and derivations of both the NMLT and the TCM are introduced. The second part describes in detail the studies on the TCM (theoretical analysis and applications) and the development of a new model of convective overshoot mixing. In the theoretical analysis of the TCM, an approximate solution and an asymptotic solution were obtained based on some assumptions, and the structure of the overshoot region was discussed. In a large space of the free parameters, the approximate/asymptotic solutions are in good agreement with the numerical results. An important result is that the scale of the overshoot region in which the thermal energy transport is effective is 1 HK (where HK is the scale height of the turbulent kinetic energy), independent of the free parameters of the TCM. We applied the TCM and a simple overshoot mixing model in three cases. In the solar case, the temperature gradient in the overshoot region agrees with the helioseismic requirements, and the profiles of the solar lithium abundance, sound speed, and density of the solar models are also improved.
For low-mass stars in the open clusters Hyades, Praesepe, NGC 6633, NGC 752, NGC 3680, and M67, using the same model and parameter value as in the solar case to treat convective envelope overshoot mixing, the surface lithium abundances of the stellar models were consistent with the observations. In the case of the binary HY Vir, the same model and parameter value also bring the radii and effective temperatures of its stars, which have convective cores, into agreement with the observations. Based on the implications of these results, we found that the simple overshoot mixing model may need to be improved significantly. Motivated by those implications, we established a new model of overshoot mixing based on the fluid dynamic equations, and worked out the diffusion coefficient of convective mixing. The diffusion coefficient shows different behaviors in the convection zone and the overshoot region. In the overshoot region, buoyancy does negative work on the flow, so the fluid oscillates around its equilibrium location, which leads to a small scale and low efficiency of overshoot mixing. These physical properties are significantly different from the classical NMLT, and consistent with helioseismic studies and numerical simulations. The new model was tested in stellar evolution calculations, and its parameter was calibrated.
Zhang, Xueying; Chu, Yiyi; Wang, Yuxuan; Zhang, Kai
2018-08-01
The regulatory monitoring data of particulate matter with an aerodynamic diameter <2.5 μm (PM2.5) in Texas have limited spatial and temporal coverage. The purpose of this study is to estimate ground-level PM2.5 concentrations on a daily basis using satellite-retrieved Aerosol Optical Depth (AOD) in the state of Texas. We obtained AOD values at 1-km resolution generated through the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, based on images retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS) satellites. We then developed mixed-effects models based on AOD, land use features, geographic characteristics, and weather conditions, with day-specific and site-specific random effects, to estimate PM2.5 concentrations (μg/m³) in the state of Texas during the period 2008-2013. Model performance was evaluated using the coefficient of determination (R²) and the square root of the mean squared prediction error (RMSPE) from ten-fold cross-validation, which randomly selected 90% of the observations for training and the remaining 10% for assessing the models' true predictive ability. The mixed-effects regression models showed good prediction performance (R² values from ten-fold cross-validation: 0.63-0.69). Performance varied by region and study year; the East region of Texas and the year 2009 showed relatively higher prediction precision (R²: 0.62 for the East region; R²: 0.69 for 2009). The PM2.5 concentrations generated by our models at 1-km grid cells in the state of Texas showed a decreasing trend from 2008 to 2013, with a greater reduction of predicted PM2.5 in more polluted areas. Our findings suggest that mixed-effects regression models based on MAIAC AOD are a feasible approach to predicting ground-level PM2.5 in Texas.
Predicted PM2.5 concentrations at 1-km resolution on a daily basis can be used in epidemiological studies to investigate short- and long-term health impacts of PM2.5 in Texas. Copyright © 2017 Elsevier B.V. All rights reserved.
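As a minimal illustration of the validation scheme described above, the sketch below runs a ten-fold cross-validation of a single-predictor linear fit on synthetic AOD/PM2.5 data and reports R² and RMSPE. The data, the one-covariate model, and all parameter values are invented stand-ins, not the study's mixed-effects model or measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: AOD plus noise predicts PM2.5.
n = 500
aod = rng.uniform(0.05, 0.8, n)
pm25 = 4.0 + 28.0 * aod + rng.normal(0.0, 2.0, n)

# Ten-fold cross-validation: 90% of observations train a linear fit,
# the held-out 10% assess true predictive ability.
folds = np.array_split(rng.permutation(n), 10)
preds = np.empty(n)
for hold in folds:
    train = np.setdiff1d(np.arange(n), hold)
    slope, intercept = np.polyfit(aod[train], pm25[train], 1)
    preds[hold] = intercept + slope * aod[hold]

ss_res = np.sum((pm25 - preds) ** 2)
ss_tot = np.sum((pm25 - pm25.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmspe = np.sqrt(ss_res / n)
print(round(r2, 3), round(rmspe, 3))
```

The real models add random effects and many covariates, but the train/validate split and the two summary statistics work exactly as above.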
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
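The core statistical point, that ignoring intra-class correlation biases standard errors downwards, can be demonstrated without any modelling library. The sketch below simulates Sholl-style clustered data (several neurons per animal) and compares a naive standard error, which treats neurons as independent, with a cluster-aware one computed from animal-level means, which is what a mixed model effectively delivers in this balanced setting. All numbers are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Sholl-style data: 8 animals per group, 10 neurons per animal.
# A shared animal-level random intercept induces intra-class correlation.
animals_per_group, neurons = 8, 10
animal_sd, neuron_sd = 3.0, 1.0

def sample_group(mean):
    animal_means = mean + rng.normal(0, animal_sd, animals_per_group)
    return np.repeat(animal_means, neurons) + rng.normal(0, neuron_sd, animals_per_group * neurons)

a = sample_group(20.0)
b = sample_group(20.0)   # no true group difference

# Naive analysis: every neuron treated as an independent observation
# (the simple linear model criticized above).
naive_se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)

# Cluster-aware analysis: average within animal first, so the unit of
# inference is the animal, as a mixed model effectively does here.
am = a.reshape(animals_per_group, neurons).mean(axis=1)
bm = b.reshape(animals_per_group, neurons).mean(axis=1)
cluster_se = np.sqrt(am.var(ddof=1) / am.size + bm.var(ddof=1) / bm.size)

print(round(naive_se, 3), round(cluster_se, 3))
```

The naive standard error is markedly smaller, which is exactly the downward bias (and the resulting spurious significance) the abstract warns about.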
Modeling Cloud Phase Fraction Based on In-situ Observations in Stratiform Clouds
NASA Astrophysics Data System (ADS)
Boudala, F. S.; Isaac, G. A.
2005-12-01
Mixed-phase clouds influence weather and climate in several ways. Because they exhibit very different optical properties from ice-only or liquid-only clouds, they play an important role in the earth's radiation balance by modifying the optical properties of clouds. Precipitation development is also enhanced under mixed-phase conditions, and these clouds may contain large supercooled drops that freeze quickly on contact with aircraft surfaces, posing a hazard to aviation. The coexistence of ice- and liquid-phase clouds in the same environment is thermodynamically unstable, so they are expected to disappear quickly. However, many observations show that mixed-phase clouds are relatively stable in the natural environment and last for several hours. Although some efforts have been made in the past to study the microphysical properties of mixed-phase clouds, a number of uncertainties remain in modeling these clouds, particularly in large-scale numerical models. In most models, very simple temperature-dependent parameterizations of cloud phase fraction are used to estimate the fraction of ice or liquid phase in a given mixed-phase cloud. In this talk, two different parameterizations of ice fraction based on in-situ aircraft measurements of cloud microphysical properties, collected in extratropical stratiform clouds during several field programs, will be presented. One of the parameterizations has been tested using a single prognostic equation developed by Tremblay et al. (1996) for application in the Canadian regional weather prediction model.
The addition of small ice particles significantly increased the vapor deposition rate when the natural atmosphere is assumed to be water saturated, which enhanced the glaciation of the simulated mixed-phase cloud via the Bergeron-Findeisen process without significantly affecting other cloud microphysical processes such as riming and particle sedimentation. After the water vapor pressure in the mixed-phase cloud was modified following the Lord et al. (1984) scheme, by weighting the saturation water vapor pressure with the ice fraction, it was possible to simulate a more stable mixed-phase cloud. It was also noted that the ice particle concentration (L > 100 μm) in mixed-phase cloud is lower on average by a factor of 3, and as a result the parameterization should be corrected for this effect. After accounting for it, the parameterized ice fraction agreed well with the observed mean ice fraction.
Verdon, Megan; Morrison, R S; Hemsworth, P H
2018-05-01
This experiment examined the effects of group composition on sow aggressive behaviour and welfare. Over 6 time replicates, 360 sows (parity 1-6) were mixed into groups (10 sows per pen, 1.8 m²/sow) composed of animals that were predicted to be aggressive (n = 18 pens) or of animals that were randomly selected (n = 18 pens). Predicted aggressive sows were selected using a model-pig test that has been shown to be related to the aggressive behaviour of parity 2 sows when subsequently mixed in groups. Measurements were taken of aggression delivered post-mixing, aggression delivered around feeding, fresh skin injuries, and plasma cortisol concentrations at days 2 and 24 post-mixing. Live weight gain, litter size (born alive, total born, stillborn piglets), and farrowing rate were also recorded. Manipulating group composition based on predicted sow aggressiveness had no effect (P > 0.05) on aggression delivered at mixing or around feeding, fresh injuries, cortisol, weight gain from day 2 to day 24, farrowing rate, or litter size. The lack of treatment effects could be attributed to (1) a failure of the model-pig test to predict aggression in older sows in groups, or (2) the dependence of the expression of the aggressive phenotype on factors such as social experience and the characteristics (e.g., physical size and aggressive phenotype) of pen mates. This research draws attention to the intrinsic difficulties associated with predicting behaviour across contexts, particularly when the behaviour is highly dependent on interactions with conspecifics, and highlights the social complexities involved in the presentation of a behavioural phenotype. Copyright © 2018 Elsevier B.V. All rights reserved.
Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.
2015-01-01
In biomedical studies of HIV RNA dynamics, viral loads generate repeated measures that are often subject to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially in the presence of outliers and thick tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student's-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student's-t density, based on the conditional expectation of the complete-data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possibly influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871
NASA Technical Reports Server (NTRS)
Kuchar, A. P.; Chamberlin, R.
1983-01-01
As part of the NASA Energy Efficient Engine program, scale-model performance tests of a mixed-flow exhaust system were conducted. The tests were used to evaluate the performance of exhaust system mixers for high-bypass, mixed-flow turbofan engines. The tests indicated that: (1) mixer penetration has the most significant effect on both mixing effectiveness and mixer pressure loss; (2) mixing/tailpipe length improves mixing effectiveness; (3) reducing the gap between the mixer and centerbody increases mixing effectiveness; (4) mixer cross-sectional shape influences mixing effectiveness; (5) the number of lobes affects the degree of mixing; and (6) mixer aerodynamic pressure losses are a function of the secondary flows inherent to the lobed mixer concept.
NASA Astrophysics Data System (ADS)
Ahmed, Tarek Nabil; Khan, Ilyas
2018-03-01
This article studies mixed convection heat transfer in non-Newtonian nanofluids over an infinite vertical plate. Mixed convection arises from the buoyancy force together with a sudden motion of the plate. Sodium alginate (SA-NaAlg) is taken as the non-Newtonian base fluid, with molybdenum disulphide (MoS2) nanoparticles suspended in it. The effective thermal conductivity and viscosity of the nanofluid are calculated using the Maxwell-Garnett (MG) and Brinkman models, respectively. The flow is modeled as a set of partial differential equations with imposed physical conditions. Exact solutions for the velocity and temperature fields are developed by means of the Laplace transform technique. Numerical computations are performed for different governing parameters, such as the non-Newtonian parameter, Grashof number, and nanoparticle volume fraction, and the results are plotted in various graphs. Results for the skin friction and Nusselt number are presented in tabular form and show that increasing the nanoparticle volume fraction enhances heat transfer and increases skin friction.
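The two effective-medium relations named in the abstract are simple closed forms. The sketch below implements the Maxwell-Garnett conductivity and Brinkman viscosity models; the sodium-alginate/MoS2 property values plugged in are illustrative placeholders, not the article's data.

```python
# Maxwell-Garnett effective thermal conductivity for spherical inclusions:
#   k_nf / k_f = (k_p + 2 k_f - 2 phi (k_f - k_p)) / (k_p + 2 k_f + phi (k_f - k_p))
def maxwell_garnett(k_f, k_p, phi):
    return k_f * (k_p + 2 * k_f - 2 * phi * (k_f - k_p)) / (k_p + 2 * k_f + phi * (k_f - k_p))

# Brinkman effective viscosity: mu_nf = mu_f / (1 - phi)^2.5
def brinkman(mu_f, phi):
    return mu_f / (1.0 - phi) ** 2.5

# Placeholder property values (base-fluid conductivity ~0.6 W/m K,
# particle conductivity ~34 W/m K, 4% volume fraction).
k_nf = maxwell_garnett(k_f=0.6, k_p=34.0, phi=0.04)
mu_nf = brinkman(mu_f=1.0e-3, phi=0.04)
print(round(k_nf, 3), round(mu_nf * 1e3, 4))
```

Both models predict the qualitative trend the abstract reports: conductivity (heat transfer) and viscosity (skin friction) rise together with the volume fraction.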
Ellis, Mark; Holloway, Steven R.; Wright, Richard; Fowler, Christopher S.
2014-01-01
This article explores the effects of mixed-race household formation on trends in neighborhood-scale racial segregation. Census data show that these effects are nontrivial in relation to the magnitude of decadal changes in residential segregation. An agent-based model illustrates the potential long-run impacts of rising numbers of mixed-race households on measures of neighborhood-scale segregation. It reveals that high rates of mixed-race household formation will reduce residential segregation considerably. This occurs even when preferences for own-group neighbors are high enough to maintain racial separation in residential space in a Schelling-type model. We uncover a disturbing trend, however; levels of neighborhood-scale segregation of single-race households can remain persistently high even while a growing number of mixed-race households drives down the overall rate of residential segregation. Thus, the article’s main conclusion is that parsing neighborhood segregation levels by household type—single versus mixed race—is essential to interpret correctly trends in the spatial separation of racial groups, especially when the fraction of households that are mixed race is dynamic. More broadly, the article illustrates the importance of household-scale processes for urban outcomes and joins debates in geography about interscalar relationships. PMID:25082984
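A toy version of the agent-based setting described above can be sketched in a few lines: a Schelling-type grid with two single-race groups plus mixed households that (by assumption here) are content anywhere and count as "own group" for both sides. After the relocation dynamics, segregation is parsed two ways, among single-race neighbours only versus over all neighbours, echoing the article's point that the two measures diverge. Grid size, thresholds, and household shares are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# 0 = empty, 1 = group A, 2 = group B, 3 = mixed household.
side, steps, threshold = 20, 60, 0.5
cells = np.array([0] * 40 + [1] * 126 + [2] * 126 + [3] * 108)  # 30% of households mixed
rng.shuffle(cells)
grid = cells.reshape(side, side)

def neighbours(g, i, j):
    block = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
    nb = list(block[block != 0])
    nb.remove(g[i, j])                 # exclude the cell itself
    return np.array(nb)

def like_fraction(g, i, j, own):
    nb = neighbours(g, i, j)
    if nb.size == 0:
        return 1.0
    return np.mean((nb == own) | (nb == 3))   # mixed counts as "like" for A and B

for _ in range(steps):
    empties = list(zip(*np.where(grid == 0)))
    for i, j in zip(*np.where((grid == 1) | (grid == 2))):
        if like_fraction(grid, i, j, grid[i, j]) < threshold:
            k = rng.integers(len(empties))
            ei, ej = empties[k]
            grid[ei, ej], grid[i, j] = grid[i, j], 0
            empties[k] = (i, j)

# Parse segregation two ways, as the article recommends.
single_only, overall = [], []
for i, j in zip(*np.where((grid == 1) | (grid == 2))):
    nb = neighbours(grid, i, j)
    if nb.size == 0:
        continue
    own = grid[i, j]
    overall.append(np.mean(nb == own))    # mixed neighbours dilute this measure
    sr = nb[nb != 3]
    if sr.size:
        single_only.append(np.mean(sr == own))

print(round(float(np.mean(single_only)), 2), round(float(np.mean(overall)), 2))
```

The own-group exposure computed among single-race neighbours only stays at or above the overall measure, which is the mechanical core of the article's warning about interpreting aggregate segregation indices.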
NASA Astrophysics Data System (ADS)
Cremer, Jonas; Segota, Igor; Yang, Chih-Yu; Arnoldini, Markus; Groisman, Alex; Hwa, Terence
2016-11-01
More than half of fecal dry weight is bacterial mass, with bacterial densities reaching up to 10^12 cells per gram. These bacteria mostly grow in the proximal large intestine, where lateral flow along the intestine is strong: flow can in principle lead to a washout of bacteria from the proximal large intestine. Active mixing by contractions of the intestinal wall, together with bacterial growth, might counteract such a washout and allow high bacterial densities to occur. As a step towards understanding bacterial growth in the presence of mixing and flow, we constructed an in-vitro setup in which controlled wall deformations of a channel emulate contractions. We investigate growth along the channel under a steady nutrient inflow. Depending on mixing and flow, we observe varying spatial gradients in bacterial density along the channel. Active mixing by deformations of the channel wall is shown to be crucial in maintaining a steady-state bacterial population in the presence of flow. The growth dynamics are quantitatively captured by a simple mathematical model, with the effect of mixing described by an effective diffusion term. Based on this model, we discuss bacterial growth dynamics in the human large intestine using the flow and mixing behavior that has been observed in humans.
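The picture described above, growth plus flow with active mixing represented by an effective diffusion term, can be sketched as a one-dimensional logistic growth-advection-diffusion equation. With the illustrative parameters below, flow alone flushes the population out, while adding the diffusion term sustains a steady population, mirroring the paper's qualitative finding. None of the values are fitted to the experiment.

```python
import numpy as np

# dB/dt = r B (1 - B/K) - u dB/dx + D d2B/dx2, solved by explicit upwind
# finite differences. All parameters are illustrative.
r, K, u, L = 1.0, 1.0, 1.0, 10.0     # growth rate, capacity, flow speed, channel length
nx, dt, t_end = 100, 0.002, 30.0
dx = L / nx
b_in = 1e-6                           # nearly sterile inflow

def steady_density(D):
    b = np.full(nx, 0.1)
    for _ in range(int(t_end / dt)):
        b[0], b[-1] = b_in, b[-2]                         # inflow / free-outflow boundaries
        adv = -u * (b[1:-1] - b[:-2]) / dx                # upwind advection
        dif = D * (b[2:] - 2 * b[1:-1] + b[:-2]) / dx**2  # mixing as effective diffusion
        grow = r * b[1:-1] * (1 - b[1:-1] / K)
        b[1:-1] += dt * (adv + dif + grow)
    return b

washout = steady_density(D=0.0)   # flow alone: the population is flushed downstream
mixed = steady_density(D=1.0)     # active mixing sustains a steady population
print(round(float(washout.mean()), 4), round(float(mixed.mean()), 4))
```

With D = 0 the mean density collapses toward the sterile inflow value; with D = 1 the Fisher-type invasion condition (2*sqrt(r*D) > u) holds and the channel stays populated.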
Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik
2009-06-01
The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation allows only for observation noise, not for system noise. Extending to SDEs adds a Wiener noise component to the system equations. This additional noise component enables the handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling, although this violates the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are in turn approximated using the Extended Kalman Filter's one-step predictions.
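The modelling distinction described above, system noise entering the state equation rather than the observation equation, can be illustrated with a minimal Euler-Maruyama simulation of a one-compartment elimination model, dC = -kC dt + sigma dW. This is a generic sketch with invented parameters, not the package's FOCE machinery; for the linear drift used here the ensemble mean should track the ODE solution.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-compartment elimination with Wiener system noise (illustrative values).
k, sigma, c0 = 0.5, 0.2, 1.0
T, n_steps, n_paths = 5.0, 500, 2000
dt = T / n_steps

c = np.full(n_paths, c0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    c += -k * c * dt + sigma * dW      # system noise enters the dynamics themselves

# The SDE is linear, so the ensemble mean should match the deterministic ODE,
# while individual paths wander around it (autocorrelated deviations).
ode_end = c0 * np.exp(-k * T)
print(round(float(c.mean()), 3), round(ode_end, 3))
```

The path-to-path spread at the final time is what an ODE-plus-observation-noise model would be forced to absorb as (autocorrelated) residual error.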
Modeling optimal treatment strategies in a heterogeneous mixing model.
Choe, Seoyun; Lee, Sunmi
2015-11-25
Many mathematical models assume random or homogeneous mixing for various infectious diseases. Homogeneous mixing can be generalized to mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of the heterogeneously mixing populations. Contact or mixing patterns are difficult to measure in many infectious diseases including influenza. Mixing patterns are considered to be one of the critical factors for infectious disease modeling. A two-group influenza model is considered to evaluate the impact of heterogeneous mixing on the influenza transmission dynamics. Heterogeneous mixing between two groups with two different activity levels includes proportionate mixing, preferred mixing and like-with-like mixing. Furthermore, the optimal control problem is formulated in this two-group influenza model to identify the group-specific optimal treatment strategies at a minimal cost. We investigate group-specific optimal treatment strategies under various mixing scenarios. The characteristics of the two-group influenza dynamics have been investigated in terms of the basic reproduction number and the final epidemic size under various mixing scenarios. As the mixing patterns become proportionate mixing, the basic reproduction number becomes smaller; however, the final epidemic size becomes larger. This is due to the fact that the number of infected people increases only slightly in the higher activity level group, while the number of infected people increases more significantly in the lower activity level group. Our results indicate that more intensive treatment of both groups at the early stage is the most effective treatment regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of different group activity levels and group population sizes. Mixing patterns can play a critical role in the effectiveness of optimal treatments. 
As the mixing becomes more like-with-like, treating the higher-activity group in the population is almost as effective as treating the entire population, since it reduces the number of disease cases effectively while requiring a similar amount of treatment. The gain becomes more pronounced as the basic reproduction number increases. This can be a critical issue which must be considered for future pandemic influenza interventions, especially when limited resources are available.
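The mixing structures discussed above can be made concrete with the standard preferred-mixing form, in which a fraction eps of contacts is reserved for one's own group and the remainder is allocated proportionately (eps = 0 gives proportionate mixing, eps = 1 like-with-like). The sketch below builds the two-group next-generation matrix and computes the basic reproduction number for several eps; the rates and group sizes are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

beta, gamma = 0.05, 0.2        # transmission probability per contact, recovery rate
a = np.array([2.0, 0.5])       # contact (activity) rates of the two groups
N = np.array([300.0, 700.0])   # group sizes

def r0(eps):
    prop = a * N / np.sum(a * N)               # proportionate target fractions
    P = eps * np.eye(2) + (1 - eps) * prop     # row i: where group i's contacts go
    # Next-generation matrix: K[i, j] = new infections in group i per infective in j.
    K = beta * a[:, None] * P * N[:, None] / (N[None, :] * gamma)
    return np.max(np.abs(np.linalg.eigvals(K)))

print([round(float(r0(e)), 3) for e in (0.0, 0.5, 1.0)])
```

Consistent with the abstract, the basic reproduction number falls as mixing moves from like-with-like (eps = 1) toward proportionate (eps = 0); for eps = 0 the matrix is rank one and R0 reduces to (beta/gamma) * sum(a^2 N) / sum(a N).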
Effects of partitioned enthalpy of mixing on glass-forming ability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Wen-Xiong; Zhao, Shi-Jin, E-mail: shijin.zhao@shu.edu.cn
2015-04-14
We explore the inherent reason at the atomic level for the glass-forming ability of alloys by molecular simulation, in which the effect of the partitioned enthalpy of mixing is studied. Based on the Morse potential, we divide the enthalpy of mixing into three parts: the chemical part (ΔE_nn), the strain part (ΔE_strain), and the non-bond part (ΔE_nnn). We find that a large negative ΔE_nn value represents strong A-B chemical bonding in an A-B alloy and is the driving force to form a local ordered structure; meanwhile, the transformed local ordered structure needs to satisfy the condition (ΔE_nn/2 + ΔE_strain) < 0 to be stabilized. Understanding the chemical and strain parts of the enthalpy of mixing is helpful for designing a new metallic glass with good glass-forming ability. Moreover, two types of metallic glasses ("strain dominant" and "chemical dominant") are classified according to the relative importance of the chemical and strain effects, which enriches our knowledge of the forming mechanism of metallic glass. Finally, a soft sphere model is established, different from the common hard sphere model.
Penjumras, Patpen; Abdul Rahman, Russly; Talib, Rosnita A.; Abdan, Khalina
2015-01-01
Response surface methodology was used to optimize the preparation of biocomposites based on poly(lactic acid) and durian peel cellulose. The effects of cellulose loading, mixing temperature, and mixing time on tensile strength and impact strength were investigated. A central composite design was employed to determine the optimum preparation conditions of the biocomposites yielding the highest tensile strength and impact strength. A second-order polynomial model was developed for predicting the tensile strength and impact strength based on the composite design. The composites were best fit by a quadratic regression model with a high coefficient of determination (R²). The selected optimum condition was 35 wt.% cellulose loading at 165°C and 15 min of mixing, giving a desirability of 94.6%. Under the optimum condition, the tensile strength and impact strength of the biocomposites were 46.207 MPa and 2.931 kJ/m², respectively. PMID:26167523
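The fitting step described above, a second-order polynomial response surface whose stationary point gives the optimum condition, can be sketched with plain least squares. The "data" below are simulated from a known quadratic with its optimum placed near the abstract's reported 35 wt.% and 165°C; they are not the study's measurements, and mixing time is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated tensile-strength response with a known optimum (illustrative only).
load = rng.uniform(25, 45, 40)      # cellulose loading, wt.%
temp = rng.uniform(155, 175, 40)    # mixing temperature, deg C
strength = 46 - 0.05 * (load - 35) ** 2 - 0.02 * (temp - 165) ** 2 + rng.normal(0, 0.2, 40)

# Full second-order polynomial model in two factors.
X = np.column_stack([np.ones_like(load), load, temp, load**2, temp**2, load * temp])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)

# Stationary point of the fitted surface: solve gradient = 0.
b = np.array([coef[1], coef[2]])
H = np.array([[2 * coef[3], coef[5]], [coef[5], 2 * coef[4]]])
opt_load, opt_temp = np.linalg.solve(H, -b)
print(round(float(opt_load), 1), round(float(opt_temp), 1))
```

The recovered stationary point lands near the planted optimum, which is the same logic the central composite design uses to select the reported 35 wt.% / 165°C condition.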
Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique
2011-04-15
Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most adequate strategy to analyze longitudinal latent variables, which can be either based on CTT or IRT models, remains to be identified. This strategy must take into account the latent characteristic of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches to analyze longitudinal PRO data regarding type I error, power, and time effect estimation bias. Four methods were compared: the Score and Mixed models (SM) method based on the CTT approach, the Rasch and Mixed models (RM), the Plausible Values (PV), and the Longitudinal Rasch model (LRM) methods all based on the Rasch model. All methods have shown comparable results in terms of type I error, all close to 5 per cent. LRM and SM methods presented comparable power and unbiased time effect estimations, whereas RM and PV methods showed low power and biased time effect estimations. This suggests that RM and PV methods should be avoided to analyze longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.
The effects of mixed layer dynamics on ice growth in the central Arctic
NASA Astrophysics Data System (ADS)
Kitchen, Bruce R.
1992-09-01
The thermodynamic model of Thorndike (1992) is coupled to a one-dimensional, two-layer ocean entrainment model to study the effect of mixed layer dynamics on ice growth and the variation in the ocean heat flux into the ice due to mixed layer entrainment. Model simulations show the existence of a negative feedback between ice growth and mixed layer entrainment, and that the underlying ocean salinity has a greater effect on the ocean heat flux than do variations in the underlying ocean temperature. Model simulations for a variety of surface forcings and initial conditions demonstrate the need to include mixed layer dynamics for realistic ice prediction in the Arctic.
Modelling of upper ocean mixing by wave-induced turbulence
NASA Astrophysics Data System (ADS)
Ghantous, Malek; Babanin, Alexander
2013-04-01
Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into models, and real gains have been made in terms of increased fidelity to observational data. However our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing models need refinement and propose an alternative model. We use two of the models to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.
Nursing home case mix in Wisconsin. Findings and policy implications.
Arling, G; Zimmerman, D; Updike, L
1989-02-01
Along with many other states, Wisconsin is considering a case mix approach to Medicaid nursing home reimbursement. To support this effort, a nursing home case mix model was developed from a representative sample of 410 Medicaid nursing home residents from 56 facilities in Wisconsin. The model classified residents into mutually exclusive groups that were homogeneous in their use of direct care resources, i.e., minutes of direct care time (weighted for nurse skill level) over a 7-day period. Groups were defined initially by Intense, Special, or Routine nursing requirements. Within these nursing requirement categories, subgroups were formed by the presence/absence of behavioral problems and dependency in activities of daily living (ADL). Wisconsin's current Skilled/Intermediate Care (SNF/ICF) classification system was analyzed in light of the case mix model and found to be less effective in distinguishing residents by resource use. The case mix model accounted for 48% of the variance in resource use, whereas the SNF/ICF classification system explained 22%. Comparisons were drawn with nursing home case mix models in New York State (RUG-II) and Minnesota. Despite progress in the study of nursing home case mix and its application to reimbursement reform, methodologic and policy issues remain. These include differing operational definitions of nursing requirements and ADL dependency, inconsistent findings concerning psychobehavioral problems, and the problem of promoting positive health and functional outcomes under models that may be insensitive to changes in resident condition over time.
Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A
2010-07-01
Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
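A deliberately simplified stand-in for the model above is a distributed lag regression, with ridge regularization playing the role that wavelet shrinkage plays in the paper. The sketch below simulates hourly exposures, an outcome driven by a smooth lag function, and recovers that function; all sizes and the exponential lag shape are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Outcome responds to a smooth window of past exposure (effect decays over ~5 hours).
n, lags = 300, 24
exposure = rng.normal(size=n + lags)
true_lag = np.exp(-np.arange(lags) / 5.0)

# Design matrix: column l holds the exposure l hours before each outcome.
X = np.column_stack([exposure[lags - l : lags - l + n] for l in range(lags)])
y = X @ true_lag + rng.normal(0, 0.5, n)

# Ridge-regularized estimate of the distributed lag function. The paper uses
# wavelet shrinkage for this regularization step; ridge is a simpler stand-in.
lam = 1.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ y)

err = np.max(np.abs(beta - true_lag))
print(round(float(err), 3))
```

Because there are many lag coefficients relative to the information in the data, some form of regularization is what makes the estimated lag function stable, which is the same motivation given in the abstract.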
Mixing model with multi-particle interactions for Lagrangian simulations of turbulent mixing
NASA Astrophysics Data System (ADS)
Watanabe, T.; Nagata, K.
2016-08-01
We report on a numerical study of the mixing volume model (MVM) for molecular diffusion in Lagrangian simulations of turbulent mixing problems. The MVM is based on multi-particle interaction within a finite volume (the mixing volume). An a priori test of the MVM, based on direct numerical simulations of planar jets, is conducted in the turbulent region and in the interfacial layer between the turbulent and non-turbulent fluids. The results show that the MVM predicts well the mean effects of molecular diffusion under various numerical and flow parameters. The number of mixing particles should be large for the predicted molecular diffusion term to be positively correlated with the exact value. The size of the mixing volume relative to the Kolmogorov scale η is important for the performance of the MVM. The scalar transfer across the turbulent/non-turbulent interface is well captured by the MVM, especially with a small mixing volume. Furthermore, the MVM with multiple mixing particles is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (LES-LPS) of the planar jet with a characteristic mixing-volume length of O(100η). Despite the large mixing volume, the MVM works well and decays the scalar variance at a rate close to the reference LES. The statistics in the LPS are very robust to the number of particles used in the simulations and to the computational grid size of the LES. Both in the turbulent core region and in the intermittent region, the LPS predicts a scalar field well correlated with the LES.
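The multi-particle mixing idea can be caricatured as follows: particles random-walk along a line (standing in for turbulent transport), near neighbours are grouped into small "mixing volumes", and each group relaxes toward its local mean, which conserves the scalar mean while destroying variance. Group size, relaxation rate, and transport amplitude are arbitrary choices here, not the MVM's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(6)

n_particles, per_volume = 1000, 8
rate, dt = 2.0, 0.05                          # relaxation toward the local mean
x = rng.uniform(0.0, 1.0, n_particles)        # particle positions, periodic domain
phi = np.where(x < 0.5, 1.0, 0.0)             # initially unmixed scalar
mean0 = phi.mean()

for _ in range(300):
    x = (x + rng.normal(0.0, 0.1, n_particles)) % 1.0    # random-walk transport
    order = np.argsort(x)                                 # group near neighbours
    groups = phi[order].reshape(-1, per_volume)           # "mixing volumes" of 8 particles
    groups += rate * dt * (groups.mean(axis=1, keepdims=True) - groups)
    phi[order] = groups.ravel()

print(round(float(phi.mean()), 3), round(float(phi.var()), 4))
```

Relaxation toward a group mean leaves each group's sum unchanged, so the scalar mean is conserved exactly while the variance decays, the two properties a molecular-diffusion mixing model must have.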
Development and Validation of a 3-Dimensional CFB Furnace Model
NASA Astrophysics Data System (ADS)
Vepsäläinen, Arl; Myöhänen, Karl; Hyppäneni, Timo; Leino, Timo; Tourunen, Antti
At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process, covering solid mixing, combustion, emission formation and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results from industrial-scale CFB boilers, including furnace profile measurements, are carried out in parallel with the development of the three-dimensional process model, providing a chain of knowledge that feeds back into phenomenon research. Knowledge gathered through model validation studies and up-to-date parameter databases is used in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling the combustion and the formation of char and volatiles for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and biofuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles reflect the balance between fuel mixing and reactions in the lower furnace and are used, together with lateral temperature profiles in the bed and upper furnace, to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based NO formation is followed by analysis of the oxidizing and reducing regions that form, owing to the lower-furnace design and the mixing characteristics of fuel and combustion air, and that shape the NO furnace profile through reduction and volatile-nitrogen reactions.
This paper presents a CFB process analysis focused on combustion and NO profiles in pilot- and industrial-scale bituminous coal combustion.
Turbulence closure for mixing length theories
NASA Astrophysics Data System (ADS)
Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.
2018-05-01
We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.
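The closure in the paper incorporates magnetic, rotational, baroclinic and buoyancy effects; what follows is only the classical one-parameter mixing-length idea it builds on, as a minimal sketch with an illustrative function name.

```python
def turbulent_stress(du_dz, mixing_length, rho=1.0):
    """Prandtl mixing-length closure: eddy viscosity nu_t = l^2 |dU/dz|,
    turbulent stress tau = rho * nu_t * dU/dz (sign follows the shear)."""
    nu_t = mixing_length ** 2 * abs(du_dz)
    return rho * nu_t * du_dz
```

The single free parameter is the mixing length, exactly the role it plays in the authors' formalism.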
3D Visualization of Global Ocean Circulation
NASA Astrophysics Data System (ADS)
Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.
2015-12-01
Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.
NASA Astrophysics Data System (ADS)
Sund, Nicole L.; Porta, Giovanni M.; Bolster, Diogo
2017-05-01
The Spatial Markov Model (SMM) is an upscaled model that has been used successfully to predict effective mean transport across a broad range of hydrologic settings. Here we propose a novel variant of the SMM, applicable to spatially periodic systems. This SMM is built using particle trajectories rather than travel times. By applying the proposed SMM to a simple benchmark problem, we demonstrate that it predicts mean effective transport well when compared to data from fully resolved direct numerical simulations. Next we propose a methodology for using this SMM framework to predict measures of mixing and dilution that do not depend solely on mean concentrations, but are strongly impacted by pore-scale concentration fluctuations. We use information from particle trajectories to downscale and reconstruct approximate pore-scale concentration fields, from which mixing and dilution measures are then calculated. Predictions with the SMM compare very favorably with measurements from the fully resolved simulations.
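The core SMM idea — a particle's velocity class over successive spatial steps follows a Markov chain, so successive steps are correlated — can be sketched as a correlated random walk. This is a generic illustration under assumed two-class dynamics, not the authors' trajectory-based variant.

```python
import random

def smm_mean_displacement(transition, step_lengths, n_steps, n_particles, seed=0):
    """Spatial-Markov sketch: each particle carries a velocity class that
    evolves by the transition matrix; each step advances the particle by
    that class's step length. Returns the ensemble mean displacement."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_particles):
        state = rng.randrange(len(step_lengths))
        x = 0.0
        for _ in range(n_steps):
            x += step_lengths[state]
            r, cum = rng.random(), 0.0
            for j, p in enumerate(transition[state]):
                cum += p
                if r < cum:
                    state = j
                    break
        total += x
    return total / n_particles
```

With a deterministic alternating chain every particle visits both classes once over two steps, so the mean displacement is fixed regardless of the random starting class.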
Rain Impact Model Assessment of Near-Surface Salinity Stratification Following Rainfall
NASA Astrophysics Data System (ADS)
Drushka, K.; Jones, L.; Jacob, M. M.; Asher, W.; Santos-Garcia, A.
2016-12-01
Rainfall over oceans produces a layer of fresher surface water, which can have a significant effect on exchanges between the surface and the bulk mixed layer, and also on satellite/in-situ comparisons. For satellite sea surface salinity (SSS) measurements, the standard reference is the Hybrid Coordinate Ocean Model (HYCOM), but there is a significant difference between the remote-sensing sampling depth of 0.01 m and the typical 5-10 m depth of in-situ instruments. Under normal conditions the upper layer of the ocean is well mixed and the salinity is uniform; under rainy conditions, however, the near-surface salinity is diluted and mixes downward by diffusion and by mechanical mixing (gravity waves/wind speed). This significantly modifies the salinity gradient in the upper 1-2 m of the ocean, but these transient salinity stratifications dissipate in a few hours, and the upper layer becomes well mixed at a slightly fresher salinity. Based upon research conducted within the NASA/CONAE Aquarius/SAC-D mission, a rain impact model (RIM) was developed to estimate the change in SSS due to rainfall near the time of the satellite observation, with the objective of identifying the probability of salinity stratification. RIM uses HYCOM (which does not include short-term rain effects) and the NOAA global rainfall product CMORPH to model changes in the near-surface salinity profile in 0.5 h increments. Based upon SPURS-2 experimental near-surface salinity measurements with rain, this paper introduces a term in the RIM model that accounts for the effect of wind speed on mechanical mixing, which translates into a dynamic vertical diffusivity, whereby the Generalized Ocean Turbulence Model (GOTM) is used to investigate the response of the upper few meters of the ocean to rain events.
The objective is to determine how rain and wind forcing control the thickness, stratification strength, and lifetime of fresh lenses, and to quantify the impact of rain-formed fresh lenses on the fresh bias in satellite retrievals of salinity. Comparisons of RIM estimates at depths of a few meters with measurements from in-situ salinity instruments will be presented. Analytical results will also be shown that assess the accuracy of RIM salinity profiles under a variety of rain-rate and wind/wave conditions.
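The downward mixing of a rain-formed fresh lens can be illustrated with a 1-D explicit vertical diffusion step, where the diffusivity stands in for the wind-dependent dynamic diffusivity described above. This is a minimal sketch (no rain forcing, no GOTM physics); names and numbers are illustrative.

```python
def diffuse_salinity(profile, kappa, dz, dt, n_steps):
    """Explicit 1-D vertical diffusion of a salinity profile with no-flux
    boundaries; kappa would encode wind-driven mechanical mixing."""
    r = kappa * dt / dz ** 2
    if r > 0.5:
        raise ValueError("unstable explicit step: kappa*dt/dz^2 > 0.5")
    s = list(profile)
    for _ in range(n_steps):
        new = []
        for i in range(len(s)):
            up = s[i - 1] if i > 0 else s[i]
            dn = s[i + 1] if i < len(s) - 1 else s[i]
            new.append(s[i] + r * (up - 2 * s[i] + dn))
        s = new
    return s
```

With a fresh surface cell over saltier water, salt conservation holds exactly while the surface anomaly erodes toward the bulk value, mimicking the few-hour dissipation of transient stratification.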
Prediction of stock markets by the evolutionary mix-game model
NASA Astrophysics Data System (ADS)
Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping
2008-06-01
This paper presents efforts to use the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We apply three modifications that add strategy-evolution abilities to the agents of the original mix-game model, and then use the resulting model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve prediction accuracy when proper parameters are chosen.
Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one approach to protecting people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. To evaluate the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of the two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
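The zero-inflated Poisson distribution underlying the model mixes a point mass at zero (probability pi) with an ordinary Poisson(lambda) count. Its log-pmf, the building block of the likelihood, can be written directly; the function name is illustrative.

```python
import math

def zip_logpmf(k, lam, pi):
    """Log-probability of count k under a zero-inflated Poisson:
    zeros arise from the inflation component (prob pi) or from
    Poisson(lam); positive counts only from the Poisson part."""
    if k == 0:
        return math.log(pi + (1.0 - pi) * math.exp(-lam))
    return math.log(1.0 - pi) - lam + k * math.log(lam) - math.lgamma(k + 1)
```

The mixed model in the paper additionally places random effects on this likelihood; the pmf itself must still sum to one over all counts.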
An R2 statistic for fixed effects in the linear mixed model.
Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver
2008-12-20
Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
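The one-to-one mapping from the F statistic to the proposed R² takes, in the form published by Edwards et al., R² = (ν₁F/ν₂)/(1 + ν₁F/ν₂), where ν₁ is the numerator degrees of freedom (number of fixed effects tested) and ν₂ the denominator degrees of freedom. A minimal implementation, with an illustrative function name:

```python
def r2_beta(f_stat, df_num, df_den):
    """R^2 for fixed effects in a linear mixed model as a one-to-one,
    increasing function of the F statistic for testing those effects."""
    ratio = df_num * f_stat / df_den
    return ratio / (1.0 + ratio)
```

The statistic is 0 when F = 0, increases monotonically with F, and stays below 1, which is what makes the ethnicity example work: a huge F (tiny p-value) can still map to a small R² when ν₂ is large.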
Hayat, Tasawar; Nawaz, Sadaf; Alsaedi, Ahmed; Rafiq, Maimona
2016-01-01
The main objective of the present study is to analyze the mixed convective peristaltic transport of water-based nanofluids using five different nanoparticles (Al2O3, CuO, Cu, Ag and TiO2). Two thermal conductivity models, namely the Maxwell and Hamilton-Crosser models, are used in this study. Hall and Joule heating effects are also considered. Convective boundary conditions are employed. Furthermore, viscous dissipation and heat generation/absorption are included in the energy equation. The problem is simplified by employing the lubrication approach. The system of equations is solved numerically. The influence of pertinent parameters on the velocity and temperature is discussed. The heat transfer rate at the wall is also examined graphically for the five nanofluids considered, using the two thermal conductivity models. PMID:27104596
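The two effective-conductivity models named in the abstract have standard closed forms, sketched below (textbook versions; the paper's parameter choices may differ). The Hamilton-Crosser model carries a shape factor n = 3/ψ and reduces to Maxwell for spherical particles (n = 3).

```python
def maxwell_k(k_f, k_p, phi):
    """Maxwell effective thermal conductivity of a dilute nanofluid:
    base fluid k_f, particle k_p, particle volume fraction phi."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

def hamilton_crosser_k(k_f, k_p, phi, n=3.0):
    """Hamilton-Crosser model with shape factor n = 3/psi;
    n = 3 (spheres) recovers the Maxwell result."""
    num = k_p + (n - 1) * k_f - (n - 1) * phi * (k_f - k_p)
    den = k_p + (n - 1) * k_f + phi * (k_f - k_p)
    return k_f * num / den
```

With water (k_f ≈ 0.613 W/m·K) and Cu particles (k_p ≈ 401 W/m·K) at 2% loading, both models predict a modest conductivity enhancement over the base fluid.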
A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of metabolic interactions occurring during simultaneous exposures to the organic solvents chloroform and trichloroethylene (TCE). Visualization-based se...
Application of mixing-controlled combustion models to gas turbine combustors
NASA Technical Reports Server (NTRS)
Nguyen, Hung Lee
1990-01-01
Gas emissions from a staged Rich Burn/Quick-Quench Mix/Lean Burn combustor were studied under test conditions encountered in High Speed Research engines. The combustor was modeled at conditions corresponding to different engine power settings, and the effect of the primary dilution airflow split on emissions, flow field, flame size and shape, and combustion intensity, as well as mixing, was investigated. A mathematical model was developed from a two-equation model of turbulence, a quasi-global kinetics mechanism for the oxidation of propane, and the Zeldovich mechanism for nitric oxide formation. A mixing-controlled combustion model was used to account for turbulent mixing effects on the chemical reaction rate. This model assumes that the chemical reaction rate is much faster than the turbulent mixing rate.
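When chemistry is assumed much faster than mixing, the fuel consumption rate is set by the turbulent mixing frequency ε/k and the deficient reactant, as in eddy-breakup-style models. The sketch below shows that generic form (an assumed stand-in, not necessarily the exact closure used in the paper); the model constant a and all values are illustrative.

```python
def fuel_burn_rate(y_fuel, y_ox, stoich_ox, rho, k, eps, a=4.0):
    """Mixing-controlled (eddy-breakup style) fuel consumption rate:
    proportional to the turbulent frequency eps/k and limited by the
    deficient reactant, min(Y_fuel, Y_ox / s)."""
    return a * rho * (eps / k) * min(y_fuel, y_ox / stoich_ox)
```

In fuel-lean pockets the oxidizer term drops out of the min(), and the rate tracks fuel availability instead, which is how the dilution airflow split feeds back on combustion intensity.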
NASA Astrophysics Data System (ADS)
Han, Yingying; Gong, Pu; Zhou, Xiang
2016-02-01
In this paper, we first apply time-varying Gaussian and SJC copula models to study the correlations and risk contagion between mixed assets in China: financial (stock), real estate, and commodity (gold) assets. We then study dynamic mixed-asset portfolio risk through VaR measurement based on the correlations computed by the time-varying copulas. This dynamic VaR-copula measurement analysis has not previously been applied to mixed-asset portfolios. The results show that the time-varying estimations fit much better than the static models, both for the correlations and risk contagion based on time-varying copulas and for the VaR-copula measurement. The time-varying VaR-SJC copula models are more accurate than the VaR-Gaussian copula models when measuring riskier portfolios at higher confidence levels. The major findings suggest that real estate and gold play a role in portfolio risk diversification, and that risk contagion and flight to quality between mixed assets occur when extreme cases happen; however, if different mixed-asset portfolio strategies are adopted as time and market conditions vary, portfolio risk can be reduced.
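A static, two-asset version of the VaR-copula idea can be sketched by Monte Carlo: draw dependent shocks through a Gaussian copula, form portfolio returns, and read the VaR off the loss quantile. This is a deliberately simplified stand-in (Gaussian marginals, constant correlation) for the paper's time-varying Gaussian/SJC machinery; all names and parameter values are illustrative.

```python
import math
import random

def gaussian_copula_var(rho, sigmas, weights, alpha=0.95, n=20000, seed=7):
    """Monte Carlo portfolio VaR for two zero-mean assets whose
    dependence is a Gaussian copula (with Gaussian marginals this is
    bivariate normal; other marginals could be substituted)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        ret = weights[0] * sigmas[0] * z1 + weights[1] * sigmas[1] * z2
        losses.append(-ret)
    losses.sort()
    return losses[int(alpha * n)]
```

Raising the copula correlation erodes diversification and pushes the VaR up, which is the mechanism behind the paper's risk-contagion findings.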
NASA Astrophysics Data System (ADS)
Luan, Deyu; Zhang, Shengfeng; Wei, Xing; Duan, Zhenya
The aim of this work is to investigate the effect of shaft eccentricity on the flow field and mixing characteristics in a stirred tank with a novel stirrer composed of a perturbed six-bent-bladed turbine (6PBT). The difference between coaxial and eccentric agitation is studied using computational fluid dynamics (CFD) simulations combined with the standard k-ε turbulence equations, which offer a complete picture of the three-dimensional flow field. To determine the capability of CFD to forecast the mixing process, particle image velocimetry (PIV), which provides an accurate representation of the time-averaged velocity, was used to measure fluid velocity. The test liquid was a 1.25% (wt) xanthan gum solution, a pseudoplastic fluid with a yield stress. Comparison of the experimental and simulated mean flow fields demonstrates that calculations based on the Reynolds-averaged Navier-Stokes equations are suitable for obtaining accurate results. The effects of the shaft eccentricity and the stirrer off-bottom distance on the flow pattern, mixing time and mixing efficiency were extensively analyzed. It is observed that the microstructure of the flow field has a significant effect on the tracer mixing process. Eccentric agitation changes the flow pattern and produces a non-symmetric flow structure, which gives it a clear advantage in mixing behavior. Moreover, the mixing rate and mixing efficiency depend on the shaft eccentricity and the stirrer off-bottom distance, varying correspondingly as these parameters increase. An efficient mixing process for the pseudoplastic fluid stirred by the 6PBT impeller, with considerably low mixing energy per unit volume, is obtained when the stirrer off-bottom distance, C, is T/3 and the eccentricity, e, is 0.2. The research results provide valuable references for the improvement of pseudoplastic fluid agitation technology.
Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs
2018-01-01
Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly together with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More specifically, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height] and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344
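A random effects meta-analysis of the kind compared here pools study-level effects while estimating between-study variance. The DerSimonian-Laird estimator is one common choice (the paper does not specify this particular estimator; it is used below as a representative sketch).

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis: estimate between-study variance
    tau^2 by the DerSimonian-Laird moment method, then return the
    tau^2-reweighted pooled effect and its standard error."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    ws = [1.0 / (v + tau2) for v in variances]
    est = sum(wi * yi for wi, yi in zip(ws, effects)) / sum(ws)
    return est, math.sqrt(1.0 / sum(ws)), tau2
```

When the studies are perfectly homogeneous, the heterogeneity estimate is zero and the pooled result collapses to the fixed-effects answer.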
The Effects of a Simulation Game on Mental Models about Organizational Systems
ERIC Educational Resources Information Center
Reese, Rebecca M.
2017-01-01
This mixed methods study was designed to uncover evidence of change to mental models about organizational systems resulting from participation in a simulation game that is based on a system dynamics model. Thirty participants in a 2 day experiential workshop completed a pretest and posttest to assess learning about particular systems concepts.…
NASA Astrophysics Data System (ADS)
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are widely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on mixing two different approaches: Time Series Analysis (TSA) and Artificial Neural Networks (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists of estimating noise levels by means of TSA and, once the differences (residuals) between the TSA estimations and the observed data have been calculated, training an ANN on the residuals. This hybrid model exhibits interesting features and results, with performance varying significantly with the number of steps forward in the prediction. It is shown that the best results, in terms of prediction, are achieved when predicting one step ahead in the future. A 7-day prediction can still be performed, with slightly greater error, offering a longer prediction horizon than the single-day-ahead model.
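The hybrid decomposition — a TSA component captures seasonality, a second model is trained on the leftover residuals — can be sketched as below. An AR(1) fit stands in for the ANN stage (training a network is out of scope for a sketch); the function name and structure are illustrative, not the paper's implementation.

```python
def hybrid_forecast(series, period):
    """Hybrid TSA + residual-model sketch: per-phase seasonal means play
    the TSA role; an AR(1) coefficient fitted to the residuals stands in
    for the ANN stage. Returns a one-step-ahead forecast."""
    seasonal = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        seasonal[i % period] += v
        counts[i % period] += 1
    seasonal = [s / c for s, c in zip(seasonal, counts)]
    resid = [v - seasonal[i % period] for i, v in enumerate(series)]
    num = sum(resid[i] * resid[i + 1] for i in range(len(resid) - 1))
    den = sum(r * r for r in resid[:-1]) or 1.0
    phi = num / den  # least-squares AR(1) coefficient on residuals
    return seasonal[len(series) % period] + phi * resid[-1]
```

On a purely seasonal series the residuals vanish and the forecast is exactly the seasonal mean; any skill of the second stage comes entirely from structure left in the residuals, which is the paper's motivation for the ANN.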
Effects of land use data on dry deposition in a regional photochemical model for eastern Texas.
McDonald-Buller, E; Wiedinmyer, C; Kimura, Y; Allen, D
2001-08-01
Land use data are among the inputs used to determine dry deposition velocities for photochemical grid models such as the Comprehensive Air Quality Model with extensions (CAMx) that is currently used for attainment demonstrations and air quality planning by the state of Texas. The sensitivity of dry deposition and O3 mixing ratios to land use classification was investigated by comparing predictions based on default U.S. Geological Survey (USGS) land use data to predictions based on recently compiled land use data that were collected to improve biogenic emissions estimates. Dry deposition of O3 decreased throughout much of eastern Texas, especially in urban areas, with the new land use data. Predicted 1-hr averaged O3 mixing ratios with the new land use data were as much as 11 ppbv greater and 6 ppbv less than predictions based on USGS land use data during the late afternoon. In addition, the area with peak O3 mixing ratios in excess of 100 ppbv increased significantly in urban areas when deposition velocities were calculated based on the new land use data. Finally, more detailed data on land use within urban areas resulted in peak changes in O3 mixing ratios of approximately 2 ppbv. These results indicate the importance of establishing accurate, internally consistent land use data for photochemical modeling in urban areas in Texas. They also indicate the need for field validation of deposition rates in areas experiencing changing land use patterns, such as during urban reforestation programs or residential and commercial development.
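Dry deposition velocities in photochemical grid models are typically computed from a resistance analogy (CAMx uses a Wesely-type scheme), in which the land-use-dependent surface resistance dominates the sensitivity discussed above. The sketch below shows only the series-resistance form; values are illustrative.

```python
def deposition_velocity(r_a, r_b, r_c):
    """Resistance-analogy dry deposition velocity (m/s): aerodynamic,
    quasi-laminar and surface (canopy) resistances in series; r_c is
    the land-use-dependent term."""
    return 1.0 / (r_a + r_b + r_c)
```

Swapping land use categories changes r_c, so the deposition velocity (and thus modeled O3 removal) responds directly to the land use data, as the study found.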
Effect of exercise on patient specific abdominal aortic aneurysm flow topology and mixing
Arzani, Amirhossein; Les, Andrea S.; Dalman, Ronald L.; Shadden, Shawn C.
2014-01-01
Computational fluid dynamics modeling was used to investigate changes in blood transport topology between rest and exercise conditions in five patient-specific abdominal aortic aneurysm models. Magnetic resonance imaging was used to provide the vascular anatomy and necessary boundary conditions for simulating blood velocity and pressure fields inside each model. Finite-time Lyapunov exponent fields, and associated Lagrangian coherent structures, were computed from blood velocity data, and used to compare features of the transport topology between rest and exercise both mechanistically and qualitatively. Mix-norm and mix-variance measures based on fresh blood distribution throughout the aneurysm over time were implemented to quantitatively compare mixing between rest and exercise. Exercise conditions resulted in higher and more uniform mixing, and reduced the overall residence time in all aneurysms. Separated regions of recirculating flow were commonly observed at rest, and these regions were either reduced or removed by attached and unidirectional flow during exercise, or replaced with regional chaotic and transiently turbulent mixing, or persisted and even extended during exercise. The main factor that dictated the change in flow topology from rest to exercise was the behavior of the jet of blood penetrating into the aneurysm during systole. PMID:24493404
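The simplest of the mixing measures used above is a variance-type quantity: the spatial variance of a concentration field, which decays toward zero as fresh fluid becomes uniformly distributed. A minimal sketch on a sampled field (illustrative, not the authors' exact mix-norm definition):

```python
def mix_variance(conc):
    """Mix-variance of a sampled concentration field: the spatial
    variance about the mean, which decays to zero as mixing
    homogenizes the field."""
    mean = sum(conc) / len(conc)
    return sum((c - mean) ** 2 for c in conc) / len(conc)
```

A perfectly mixed field scores zero; the faster this number falls over time, the better the mixing — which is how exercise and rest conditions were compared quantitatively.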
A Multi-wavenumber Theory for Eddy Diffusivities: Applications to the DIMES Region
NASA Astrophysics Data System (ADS)
Chen, R.; Gille, S. T.; McClean, J.; Flierl, G.; Griesel, A.
2014-12-01
Climate models are sensitive to the representation of ocean mixing processes. This has motivated recent efforts to collect observations aimed at improving mixing estimates and parameterizations. The US/UK field program Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES), begun in 2009, is providing such estimates upstream of and within the Drake Passage. This region is characterized by its topography and strong zonal jets. In previous studies, mixing length theories, based on the assumption that eddies are dominated by a single wavenumber and phase speed, were formulated to represent the estimated mixing patterns in jets. However, in spite of the success of the single-wavenumber theory in some other scenarios, it does not effectively predict the vertical structure of observed eddy diffusivities in the DIMES area. Considering that eddy motions encompass a wide range of wavenumbers, all of which contribute to mixing, in this study we formulated a multi-wavenumber theory to predict eddy mixing rates. We test our theory for a domain encompassing the entire Southern Ocean. We estimated eddy diffusivities and mixing lengths from one million numerical floats in a global eddying model. These float-based mixing estimates were compared with the predictions from both the single-wavenumber and the multi-wavenumber theories. Our preliminary results in the DIMES area indicate that, compared to the single-wavenumber theory, the multi-wavenumber theory better predicts the vertical mixing structure in the vast areas where the mean flow is weak; in the intense jet regions, however, both theories have similar predictive skill.
Prediction of heat release effects on a mixing layer
NASA Technical Reports Server (NTRS)
Farshchi, M.
1986-01-01
A fully second-order closure model for turbulent reacting flows is suggested based on Favre statistics. For diffusion flames the local thermodynamic state is related to a single conserved scalar. The properties of pressure fluctuations are analyzed for turbulent flows with fluctuating density. Closure models for pressure correlations are discussed and modeled transport equations for Reynolds stresses, turbulent kinetic energy dissipation, density-velocity correlations, scalar moments and dissipation are presented and solved, together with the mean equations for momentum and mixture fraction. Solutions of these equations are compared with the experimental data for high heat release free mixing layers of fluorine and hydrogen in a nitrogen diluent.
ERIC Educational Resources Information Center
Elaldi, Senel
2016-01-01
This study aimed to determine the effect of mastery learning model supported with reflective thinking activities on the fifth grade medical students' academic achievement. Mixed methods approach was applied in two samples (n = 64 and n = 6). Quantitative part of the study was based on a pre-test-post-test control group design with an experiment…
ERIC Educational Resources Information Center
Wood, Lynda C.; Ebenezer, Jazlin; Boone, Relena
2013-01-01
The purpose of this article is to study the effects of an intellectually caring model of teaching and learning on alternative African American high school students' conceptual change and achievement in a chemistry unit on acids and bases. A mixed-methods approach using retrospective data was utilized. Data secured from the teacher were the…
MIXOR: a computer program for mixed-effects ordinal regression analysis.
Hedeker, D; Gibbons, R D
1996-03-01
MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
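The marginal maximum likelihood that MIXOR maximizes integrates the random effects out of each cluster's likelihood. For a binary outcome with a single random intercept, that integral can be written down directly; MIXOR itself uses Gauss-Hermite quadrature with Fisher scoring, whereas the sketch below uses a simple trapezoid rule, and all names are illustrative.

```python
import math

def cluster_marginal_loglik(y, x, beta, sigma, grid=400, lim=6.0):
    """Marginal log-likelihood of one cluster in a random-intercept
    logistic model: integrate the within-cluster likelihood against the
    standard normal density of the (scaled) random effect."""
    h = 2.0 * lim / grid
    total = 0.0
    for k in range(grid + 1):
        b = -lim + k * h
        dens = math.exp(-0.5 * b * b) / math.sqrt(2.0 * math.pi)
        lik = 1.0
        for yi, xi in zip(y, x):
            p = 1.0 / (1.0 + math.exp(-(beta * xi + sigma * b)))
            lik *= p if yi == 1 else 1.0 - p
        total += (h if 0 < k < grid else h / 2.0) * dens * lik
    return math.log(total)
```

With sigma = 0 the random effect drops out and the marginal likelihood reduces to the ordinary logistic likelihood; with sigma > 0, within-cluster dependence makes concordant response patterns more likely than discordant ones.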
CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS
Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...
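The simplest mass-balance mixing model referred to here — two sources, one isotope — solves δ_mix = f·δ_A + (1−f)·δ_B for the diet fraction f. A minimal sketch with an illustrative function name:

```python
def source_fraction(d_mix, d_a, d_b):
    """Two-source, one-isotope mass-balance mixing model:
    d_mix = f*d_a + (1 - f)*d_b, solved for the fraction f of source A."""
    if d_a == d_b:
        raise ValueError("sources are isotopically indistinguishable")
    return (d_mix - d_b) / (d_a - d_b)
```

With more sources than isotopes plus one, the system becomes underdetermined, which is what motivates the more elaborate mixing models the passage alludes to.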
Impact of Lateral Mixing in the Ocean on El Nino in Fully Coupled Climate Models
NASA Astrophysics Data System (ADS)
Gnanadesikan, A.; Russell, A.; Pradal, M. A. S.; Abernathey, R. P.
2016-02-01
Given the large number of processes that can affect El Nino, it is difficult to understand why different climate models simulate El Nino differently. This paper focuses on the role of lateral mixing by mesoscale eddies. There is significant disagreement about the value of the mixing coefficient ARedi, which parameterizes the lateral mixing of tracers. Coupled climate models usually prescribe small values of this coefficient, ranging between a few hundred and a few thousand m2/s. Observations, however, suggest much larger values. We present a sensitivity study with a suite of Earth System Models that examines the impact of varying ARedi on the amplitude of El Nino. We examine the effect of varying a spatially constant ARedi over a range of values similar to that seen in the IPCC AR5 models, as well as two spatially varying distributions based on altimetric velocity estimates. While the expectation that higher values of ARedi should damp anomalies is borne out in the model, this damping is more than compensated for by a weaker damping due to vertical mixing and a stronger response of atmospheric winds to SST anomalies. Under higher mixing, a weaker zonal SST gradient causes the center of convection over the Warm Pool to shift eastward and to become more sensitive to changes in cold tongue SSTs. Changes in the SST gradient also explain interdecadal ENSO variability within individual model runs.
Decision-case mix model for analyzing variation in cesarean rates.
Eldenburg, L; Waller, W S
2001-01-01
This article contributes a decision-case mix model for analyzing variation in c-section rates. Like recent contributions to the literature, the model systematically takes into account the effect of case mix. Going beyond past research, the model highlights differences in physician decision making in response to obstetric factors. Distinguishing the effects of physician decision making and case mix is important in understanding why c-section rates vary and in developing programs to effect change in physician behavior. The model was applied to a sample of deliveries at a hospital where physicians exhibited considerable variation in their c-section rates. Comparing groups with a low versus high rate, the authors' general conclusion is that the difference in physician decision tendencies (to perform a c-section), in response to specific obstetric factors, is at least as important as case mix in explaining variation in c-section rates. The exact effects of decision making versus case mix depend on how the model application defines the obstetric condition of interest and on the weighting of deliveries by their estimated "risk of Cesarean." The general conclusion is supported by an additional analysis that uses the model's elements to predict individual physicians' annual c-section rates.
Hierarchical Bayes approach for subgroup analysis.
Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C
2017-01-01
In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to the linear mixed-effects model to derive the posterior distributions of the overall and subgroup treatment effects. In this article, we discuss prior selection for the variance components in hierarchical Bayes, estimation of and decision making about the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed responses and repeated measurements.
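The hierarchical shrinkage at the heart of this approach can be sketched with a normal-normal model for subgroup effects and a short Gibbs sampler with conjugate full conditionals (all data values and the inverse-gamma prior here are invented for illustration; the article's full linear mixed-effects structure and posterior predictive checks are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
# toy subgroup treatment-effect estimates and their squared standard errors
y = np.array([0.8, 0.3, 0.6, 1.1, 0.2])
s2 = np.array([0.04, 0.09, 0.05, 0.16, 0.08])
G = len(y)
a0, b0 = 1.0, 1.0                  # weakly informative inverse-gamma prior on tau^2

mu, tau2 = y.mean(), y.var()       # crude starting values
theta_draws, mu_draws = [], []
for it in range(6000):             # Gibbs sampler with conjugate full conditionals
    prec = 1.0 / s2 + 1.0 / tau2
    theta = rng.normal((y / s2 + mu / tau2) / prec, np.sqrt(1.0 / prec))
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / G))
    tau2 = 1.0 / rng.gamma(a0 + G / 2.0, 1.0 / (b0 + 0.5 * np.sum((theta - mu) ** 2)))
    if it >= 1000:                 # discard burn-in
        theta_draws.append(theta)
        mu_draws.append(mu)

theta_post = np.mean(theta_draws, axis=0)  # shrunken subgroup effects
mu_post = float(np.mean(mu_draws))         # overall treatment effect
```

The posterior subgroup means are pulled toward the overall effect, which is the behavior that consistency assessment across subgroups exploits.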
Van Ael, Evy; De Cooman, Ward; Blust, Ronny; Bervoets, Lieven
2015-01-01
Large datasets of total and dissolved metal concentrations in Flemish (Belgium) fresh water systems and the associated macroinvertebrate-based biotic index MMIF (Multimetric Macroinvertebrate Index Flanders) were used to estimate critical metal concentrations for good ecological water quality, as imposed by the European Water Framework Directive (2000). The contributions of different stressors (metals and water characteristics) to the MMIF were studied by constructing generalized linear mixed-effects models. Comparison between the estimated critical concentrations and the European and Flemish EQS shows that the EQS for As, Cd, Cu and Zn appear sufficient to reach a good ecological quality status as expressed by the invertebrate-based biotic index. In contrast, the EQS for Cr, Hg and Pb are higher than the estimated critical concentrations, which suggests that when environmental concentrations are at the same level as the EQS, a good quality status might not be reached. The construction of mixed models that included metal concentrations in their structure did not lead to a significant outcome. However, the mixed models showed the primary importance of water characteristics (oxygen level, temperature, ammonium concentration and conductivity) for the MMIF. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research, and hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are now often used to analyze them. This paper discusses the problems of using ordinary regressions to model multilevel educational data and compares the data-analytic results from three regression…
Hackemann, Eva; Hasse, Hans
2017-10-27
Using salt mixtures instead of single salts can be beneficial for hydrophobic interaction chromatography (HIC). The effect of electrolytes on the adsorption of proteins, however, depends on the pH, and little is known about this dependence for mixed electrolytes. Therefore, the effect of the pH on protein adsorption from aqueous solutions containing mixed salts is systematically studied in the present work for a model system: the adsorption of bovine serum albumin (BSA) on the mildly hydrophobic resin Toyopearl PPG-600M. The pH is adjusted to 4.0, 4.7 or 7.0 using 25 mM sodium phosphate or sodium citrate buffer. Binary and ternary salt mixtures of sodium chloride, ammonium chloride, sodium sulfate and ammonium sulfate, as well as the pure salts, are used at overall ionic strengths between 1500 and 4200 mM. The temperature is always 25°C. The influence of the mixed electrolytes on the adsorption behavior of BSA changes completely with varying pH. Positive as well as negative cooperative effects of the mixed electrolytes are observed. The results are analyzed using a mathematical model which was recently introduced by our group, in which the influence of the electrolytes is described by a Taylor series expansion in the individual ion molarities. After suitable parametrization using a subset of the data determined in the present work, the model successfully predicts the influence of mixed electrolytes on the protein adsorption. Furthermore, results for BSA from the present study are compared to literature data for lysozyme, which are available for the same adsorbent, temperature and salts. By calculating the ratio of the loading of the adsorbent for the two proteins, particularly favorable separation conditions can be selected. Hence, a model-based optimization of solvents for protein separation is possible. Copyright © 2017 Elsevier B.V. All rights reserved.
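The Taylor-expansion idea mentioned above can be sketched as a linear least-squares fit of a response (e.g., a log adsorption quantity) in individual ion molarities plus their pairwise products. Everything below is synthetic: the coefficient values, the molarity ranges, and the "true" model are invented to show the design-matrix construction only, not the published parameterization:

```python
import numpy as np

rng = np.random.default_rng(2)
# ion molarities (mM) of four ion species over 40 synthetic buffer compositions
C = rng.uniform(0.0, 2000.0, size=(40, 4))

# hypothetical "true" second-order Taylor coefficients (illustrative magnitudes)
a_true = 1.0
b_true = np.array([2e-4, -1e-4, 5e-5, 3e-4])
pairs = [(i, j) for i in range(4) for j in range(i, 4)]     # c_i * c_j, i <= j
Q = np.stack([C[:, i] * C[:, j] for i, j in pairs], axis=1)
c_true = rng.normal(scale=1e-8, size=len(pairs))
logq = a_true + C @ b_true + Q @ c_true                     # noiseless response

# fit intercept + linear + pairwise-product terms by least squares
X = np.hstack([np.ones((len(C), 1)), C, Q])
coef, *_ = np.linalg.lstsq(X, logq, rcond=None)
```

On noiseless synthetic data the expansion is recovered essentially exactly; with real isotherm data one would fit a subset of compositions and predict the held-out mixtures, as the abstract describes.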
The modelling of dispersion in 2-D tidal flow over an uneven bed
NASA Astrophysics Data System (ADS)
Kalkwijk, Jan P. Th.
This paper deals with the effective mixing caused by topographically induced velocity variations in 2-D tidal flow. This type of mixing is characterized by tidally averaged dispersion coefficients, which depend on the magnitude of the depth variations with respect to a mean depth, the velocity variations and the basic dispersion coefficients. The analysis is principally based on a Taylor-type approximation (large clouds, small concentration variations) of the 2-D advection-diffusion equation and a 2-D velocity field that behaves harmonically both in time and in space. Neglecting transient phenomena and applying time and space averaging, the effective dispersion coefficients can be derived. Under certain circumstances it is possible to relate the velocity variations to the depth variations, so that the effective dispersion coefficients can ultimately be determined from the power spectrum of the depth variations. Particular attention is paid to the modelling of sub-grid mixing in the case of numerical integration of the advection-diffusion equation. It appears that the dispersion coefficients accounting for sub-grid mixing are determined not only by the velocity variations within a given grid cell but also by the velocity variations at larger scales.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, development has been limited in the context of linear mixed-effects models, and in particular for small area estimation, of which linear mixed-effects models are the backbone. In this article, we consider area-level data and fit a varying-coefficient linear mixed-effects model in which the varying coefficients are semi-parametrically modeled via B-splines. We propose a method for estimating the fixed-effect parameters and consider prediction of the random effects that can be implemented using standard software. For measuring prediction uncertainty, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and the operating characteristics of the method are judged using finite-sample simulation studies.
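The B-spline varying-coefficient piece of such a model can be sketched in a few lines: expand the coefficient function in a spline basis and fit the basis weights by least squares. This toy sketch (synthetic data, invented coefficient function) omits the random-effect and MSE-estimation parts that make the full small-area model:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 1.0, 300))   # "area-level" index (e.g., scaled position)
x = rng.normal(size=300)                  # covariate
beta_t = np.sin(2 * np.pi * t)            # true smoothly varying coefficient
y = beta_t * x + 0.05 * rng.normal(size=300)

# cubic B-spline basis with clamped boundary knots
k = 3
knots = np.concatenate([np.zeros(k + 1), np.linspace(0.1, 0.9, 9), np.ones(k + 1)])
n_basis = len(knots) - k - 1
B = np.column_stack([BSpline(knots, np.eye(n_basis)[i], k)(t) for i in range(n_basis)])

# y ~ (B @ c) * x  ->  linear in c, so ordinary least squares applies
D = B * x[:, None]
c, *_ = np.linalg.lstsq(D, y, rcond=None)
beta_hat = B @ c                          # estimated varying coefficient
```

Because the model is linear in the basis weights, the fixed-effect part reduces to standard least squares, which is why the authors can rely on standard software.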
Incomplete Mixing and Reactions - A Lagrangian Approach in a Pure Shear Flow
NASA Astrophysics Data System (ADS)
Paster, A.; Aquino, T.; Bolster, D.
2014-12-01
Incomplete mixing of reactive solutes is well known to slow down reaction rates relative to what would be expected from assuming perfect mixing. As reactions progress in a system and deplete reactant concentrations, initial fluctuations in the concentrations of reactants can be amplified relative to mean background concentrations and lead to spatial segregation of reactants. As the system evolves, in the absence of sufficient mixing, this segregation will increase, leading to a persistence of incomplete mixing that fundamentally changes the effective rate at which overall reactions will progress. On the other hand, non-uniform fluid flows are known to affect mixing between interacting solutes. Thus a natural question arises: Can non-uniform flows sufficiently enhance mixing to suppress incomplete mixing effects, and if so, under what conditions? In this work we address this question by considering one of the simplest possible flows, a laminar pure shear flow, which is known to significantly enhance mixing relative to diffusion alone. To study this system we adapt a novel Lagrangian particle-based random walk method, originally designed to simulate reactions in purely diffusive systems, to the case of advection and diffusion in a shear flow. To interpret the results we develop a semi-analytical solution, by proposing a closure approximation that aims to capture the effect of incomplete mixing. The results obtained via the Lagrangian model and the semi-analytical solutions consistently highlight that if shear effects in the system are not sufficiently strong, incomplete mixing effects initially similar to purely diffusive systems will occur, slowing down the overall reaction rate. Then, at some later time, dependent on the strength of the shear, the system will return to behaving as if it were well-mixed, but represented by a reduced effective reaction rate.
If shear effects are sufficiently strong, the incomplete mixing regime never emerges and the system can behave as well-mixed at all times.
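The transport part of the Lagrangian scheme described above can be sketched in a minimal form: an Euler random walk in a pure shear flow u = (γy, 0), with no reaction step. All parameter values are illustrative; the point is that shear boosts longitudinal spreading from the diffusive 2Dt to the Taylor-type 2Dt + (2/3)γ²Dt³, which is the mechanism that can suppress incomplete-mixing effects:

```python
import numpy as np

rng = np.random.default_rng(4)
D, gamma = 1e-3, 1.0            # diffusion coefficient and shear rate (illustrative)
dt, n_steps, n_part = 1e-2, 2000, 5000

x = np.zeros(n_part)            # particles start as a point source at the origin
y = np.zeros(n_part)
for _ in range(n_steps):        # Euler random walk: drift gamma*y in x, diffusion in both
    x += gamma * y * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_part)
    y += np.sqrt(2 * D * dt) * rng.normal(size=n_part)

T = n_steps * dt
var_diff = 2 * D * T                                          # diffusion alone
var_theory = 2 * D * T + (2.0 / 3.0) * gamma**2 * D * T**3    # shear-enhanced spreading
var_x = float(x.var())
```

With these values the simulated longitudinal variance is two orders of magnitude above the purely diffusive prediction; the published method additionally pairs particles for the bimolecular reaction step, which is not reproduced here.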
Incomplete Mixing and Reactions - A Lagrangian Approach in a Pure Shear Flow
NASA Astrophysics Data System (ADS)
Paster, Amir; Bolster, Diogo; Aquino, Tomas
2015-04-01
Incomplete mixing of reactive solutes is well known to slow down reaction rates relative to what would be expected from assuming perfect mixing. As reactions progress in a system and deplete reactant concentrations, initial fluctuations in the concentrations of reactants can be amplified relative to mean background concentrations and lead to spatial segregation of reactants. As the system evolves, in the absence of sufficient mixing, this segregation will increase, leading to a persistence of incomplete mixing that fundamentally changes the effective rate at which overall reactions will progress. On the other hand, nonuniform fluid flows are known to affect mixing between interacting solutes. Thus a natural question arises: Can non-uniform flows sufficiently enhance mixing to suppress incomplete mixing effects, and if so, under what conditions? In this work we address this question by considering one of the simplest possible flows, a laminar pure shear flow, which is known to significantly enhance mixing relative to diffusion alone. To study this system we adapt a novel Lagrangian particle-based random walk method, originally designed to simulate reactions in purely diffusive systems, to the case of advection and diffusion in a shear flow. To interpret the results we develop a semi-analytical solution, by proposing a closure approximation that aims to capture the effect of incomplete mixing. The results obtained via the Lagrangian model and the semi-analytical solutions consistently highlight that if shear effects in the system are not sufficiently strong, incomplete mixing effects initially similar to purely diffusive systems will occur, slowing down the overall reaction rate. Then, at some later time, dependent on the strength of the shear, the system will return to behaving as if it were well-mixed, but represented by a reduced effective reaction rate.
If shear effects are sufficiently strong, the incomplete mixing regime never emerges and the system can behave as well-mixed at all times.
Henthorn, Thomas K; Avram, Michael J; Dahan, Albert; Gustafsson, Lars L; Persson, Jan; Krejcie, Tom C; Olofsen, Erik
2018-05-16
The pharmacokinetics of infused drugs have been modeled without regard for recirculatory or mixing kinetics. We used a unique ketamine dataset with simultaneous arterial and venous blood sampling, during and after separate S(+) and R(-) ketamine infusions, to develop a simplified recirculatory model of arterial and venous plasma drug concentrations. S(+) or R(-) ketamine was infused over 30 min on two occasions to 10 healthy male volunteers. Frequent, simultaneous arterial and forearm venous blood samples were obtained for up to 11 h. A multicompartmental pharmacokinetic model with front-end arterial mixing and venous blood components was developed using nonlinear mixed effects analyses. A three-compartment base pharmacokinetic model with additional arterial mixing and arm venous compartments and with shared S(+)/R(-) distribution kinetics proved superior to standard compartmental modeling approaches. Total pharmacokinetic flow was estimated to be 7.59 ± 0.36 l/min (mean ± standard error of the estimate), and S(+) and R(-) elimination clearances were 1.23 ± 0.04 and 1.06 ± 0.03 l/min, respectively. The arm-tissue link rate constant was 0.18 ± 0.01 min⁻¹ and the fraction of arm blood flow estimated to exchange with arm tissue was 0.04 ± 0.01. Arterial drug concentrations measured during drug infusion have two kinetically distinct components: partially or lung-mixed drug and fully mixed-recirculated drug. Front-end kinetics suggest the partially mixed concentration is proportional to the ratio of infusion rate and total pharmacokinetic flow. This simplified modeling approach could lead to more generalizable models for target-controlled infusions and improved methods for analyzing pharmacokinetic-pharmacodynamic data.
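The three-compartment base model underlying this analysis can be sketched as a mammillary ODE system with a zero-order infusion. The volumes, clearances, and dose rate below are invented for illustration (they are not the fitted ketamine values), and the recirculatory arterial-mixing and venous components that distinguish the published model are omitted:

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative mammillary three-compartment model, 30-min zero-order infusion
V1, V2, V3 = 10.0, 30.0, 100.0     # central and two peripheral volumes (l)
CL, Q2, Q3 = 1.2, 2.0, 1.0         # elimination and intercompartmental clearances (l/min)
rate, t_inf = 100.0, 30.0          # mg/min infusion for 30 min

def rhs(t, a):                     # a = drug amounts (mg) in the three compartments
    c1, c2, c3 = a[0] / V1, a[1] / V2, a[2] / V3
    inp = rate if t < t_inf else 0.0
    return [inp - CL * c1 - Q2 * (c1 - c2) - Q3 * (c1 - c3),
            Q2 * (c1 - c2),
            Q3 * (c1 - c3)]

t_eval = np.linspace(0.0, 300.0, 601)
sol = solve_ivp(rhs, (0.0, 300.0), [0.0, 0.0, 0.0], t_eval=t_eval, max_step=1.0)
c1 = sol.y[0] / V1                 # central (plasma) concentration, mg/l
```

The concentration peaks at the end of the infusion and then shows the familiar multiphasic washout; in the published work this base structure is embedded in a nonlinear mixed-effects (population) analysis rather than fitted per subject.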
A Best-Practice Model for Academic Advising of University Biology Majors
ERIC Educational Resources Information Center
Heekin, Jonathan Ralph Calvin
2013-01-01
Biology faculty at an East Coast university believed their undergraduate students were not being well served by the existing academic advising program. The purpose of this mixed methods project study was to evaluate the effectiveness of the academic advising model in a biology department. Guided by system-based organizational theory, a learning…
ERIC Educational Resources Information Center
Zhou, Hong; Muellerleile, Paige; Ingram, Debra; Wong, Seok P.
2011-01-01
Intraclass correlation coefficients (ICCs) are commonly used in behavioral measurement and psychometrics when a researcher is interested in the relationship among variables of a common class. The formulas for deriving ICCs, or generalizability coefficients, vary depending on which models are specified. This article gives the equations for…
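The simplest of the ICC variants the article catalogues, ICC(1) from a one-way random-effects ANOVA, can be sketched directly from the between- and within-target mean squares (the data arrays below are toy examples):

```python
import numpy as np

def icc1(data):
    """ICC(1) from a one-way random-effects ANOVA.
    data: (n_targets, k_raters) array of ratings."""
    n, k = data.shape
    row_means = data.mean(axis=1)
    msb = k * ((row_means - data.mean()) ** 2).sum() / (n - 1)       # between-target MS
    msw = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-target MS
    return (msb - msw) / (msb + (k - 1) * msw)

# perfectly consistent ratings -> ICC of exactly 1
same = np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]])
# unrelated ratings -> ICC near 0
rng = np.random.default_rng(6)
noise = rng.normal(size=(200, 3))
icc_same, icc_noise = icc1(same), icc1(noise)
```

The other forms (ICC(2), ICC(3), average-measure versions) swap in two-way ANOVA mean squares but follow the same variance-ratio logic.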
Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.
Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari
2018-04-05
The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD) based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrated moving average (ARIMA) modeling in representative patients, and (B) sequential linear mixed-effects (LME) models with various embedded ARIMA error structures for PRx for the entire population. Finally, we performed sequential LME models with embedded PRx ARIMA modeling to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. By using linear mixed-effects modeling and accounting for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood volume.
Further work is required to validate this estimation approach.
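The ARIMA(2,0,2) error structure the authors embed in their LME models describes serially correlated residuals. A minimal sketch of such a series, using scipy's linear filter with illustrative coefficients (not the values fitted to PRx), shows the persistence that a plain regression would wrongly treat as independent noise:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
# illustrative stationary ARMA(2,2) polynomials (lag operator B)
ar = np.array([1.0, -0.6, -0.2])   # AR: 1 - 0.6 B - 0.2 B^2
ma = np.array([1.0, 0.3, 0.1])     # MA: 1 + 0.3 B + 0.1 B^2
eps = rng.normal(size=20000)       # white-noise innovations
z = lfilter(ma, ar, eps)[2000:]    # ARMA(2,2) series, burn-in dropped

def acf(x, lag):
    x = x - x.mean()
    return float((x[:-lag] * x[lag:]).mean() / x.var())
```

The lag-1 autocorrelation of `z` is large while that of the raw innovations is near zero, which is precisely the structure that, if ignored, inflates apparent precision in time-series regressions.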
Joint physical and numerical modeling of water distribution networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.
2009-01-01
This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. Preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.
Tropical Cyclone Footprint in the Ocean Mixed Layer Observed by Argo in the Northwest Pacific
2014-10-25
Related work includes Hu and Meehl (2009) on the effect of Atlantic hurricanes on the oceanic meridional overturning circulation and heat transport, and studies of tropical cyclone effects on atmospheric circulation [Hart et al., 2007]. Several studies, based on observations and modeling, suggest that TC-induced energy input and mixing may play an important role in climate variability through regulating the oceanic general circulation and its variability [e.g., Emanuel, 2001; Sriver and Huber].
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
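The random-effects pooling used in such a meta-analysis can be sketched with the DerSimonian-Laird estimator, which pools study effects while estimating the between-study variance tau². The study effects and variances below are toy numbers, not the meta-analysis's actual data:

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects pooled effect from study effects d and their variances v."""
    w = 1.0 / v
    d_fixed = (w * d).sum() / w.sum()                 # fixed-effect pooled estimate
    q = (w * (d - d_fixed) ** 2).sum()                # Cochran's Q heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(d) - 1)) / c)           # between-study variance (truncated at 0)
    w_re = 1.0 / (v + tau2)                           # random-effects weights
    d_re = (w_re * d).sum() / w_re.sum()
    se = float(np.sqrt(1.0 / w_re.sum()))
    return float(d_re), se, float(tau2)

d = np.array([0.9, 0.6, 1.1, 0.4, 0.85])   # toy standardized mean differences
v = np.array([0.05, 0.08, 0.06, 0.10, 0.07])
pooled, se, tau2 = dersimonian_laird(d, v)
```

Moderator tests (affect-model structure, measure type, sample composition) then amount to comparing pooled effects across subsets of studies or fitting a meta-regression on the study covariates.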
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), the action of internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusion of BC/dust significantly reduces snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, K. N.; Takano, Y.; He, Cenlin
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be followed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo is reduced more by multiple inclusions of BC/dust than by an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
Microstructural modeling of thermal conductivity of high burn-up mixed oxide fuel
NASA Astrophysics Data System (ADS)
Teague, Melissa; Tonks, Michael; Novascone, Stephen; Hayes, Steven
2014-01-01
Predicting the thermal conductivity of oxide fuels as a function of burn-up and temperature is fundamental to the efficient and safe operation of nuclear reactors. However, modeling the thermal conductivity of fuel is greatly complicated by the radially inhomogeneous nature of irradiated fuel in both composition and microstructure. In this work, radially and temperature-dependent models for effective thermal conductivity were developed utilizing optical micrographs of high burn-up mixed oxide fuel. The micrographs were employed to create finite element meshes with the OOF2 software. The meshes were then used to calculate the effective thermal conductivity of the microstructures using the BISON [1] fuel performance code. The new thermal conductivity models were used to calculate thermal profiles at end of life for the fuel pellets. These results were compared to thermal conductivity models from the literature, and comparison between the new finite element-based thermal conductivity model and the Duriez-Lucuta model was favorable.
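A quick sanity check often used alongside such finite-element homogenization is the pair of Wiener (series/parallel) bounds, which bracket the effective conductivity of any two-phase microstructure by the harmonic and arithmetic means of the phase conductivities. The phase values below are invented for illustration, not MOX fuel data, and this is not the paper's OOF2/BISON workflow:

```python
import numpy as np

def wiener_bounds(f2, k1, k2):
    """Lower (series) and upper (parallel) bounds on effective conductivity
    of a two-phase microstructure; f2 = volume fraction of phase 2."""
    upper = (1.0 - f2) * k1 + f2 * k2              # arithmetic mean (layers along flux)
    lower = 1.0 / ((1.0 - f2) / k1 + f2 / k2)      # harmonic mean (layers across flux)
    return lower, upper

# e.g., a matrix at 3.0 W/(m K) with 30% of a degraded phase at 0.5 W/(m K)
lo, up = wiener_bounds(0.3, 3.0, 0.5)
```

Any microstructure-resolved result, such as one computed on a mesh built from a micrograph, must fall between these two values, which makes the bounds a cheap consistency test for the finite-element estimate.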
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful for combining evidence from independent studies that involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed-effects models, which assume that the risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis in which the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed-effects models, including the simplicity of its likelihood function, no need to specify a link function, and a closed-form expression for the distribution functions of study-specific risk differences. We investigate the finite-sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.
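The Sarmanov construction couples two beta marginals by multiplying their product density by 1 + ω·φ(x)·ψ(y), where φ and ψ have zero mean under the marginals, so the joint still integrates to one and the induced covariance is ω·Var(X)·Var(Y). A minimal numerical check with illustrative shape parameters (ω chosen small enough to keep the density nonnegative):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import beta

# beta marginals for two study-specific risks (illustrative shape values)
a1, b1, a2, b2 = 2.0, 3.0, 3.0, 2.0
mu1, mu2 = a1 / (a1 + b1), a2 / (a2 + b2)
omega = 2.0   # association parameter; |omega*phi*psi| < 1 here, so density >= 0

def h(y, x):
    # Sarmanov density: product of marginals times a zero-mean correction term
    return beta.pdf(x, a1, b1) * beta.pdf(y, a2, b2) * (1.0 + omega * (x - mu1) * (y - mu2))

total, _ = dblquad(h, 0.0, 1.0, lambda x: 0.0, lambda x: 1.0)
cov, _ = dblquad(lambda y, x: (x - mu1) * (y - mu2) * h(y, x),
                 0.0, 1.0, lambda x: 0.0, lambda x: 1.0)
```

Both marginals here have variance 0.04, so the covariance should come out at ω·0.04·0.04 = 0.0032, matching the closed-form moments that make the likelihood tractable.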
Microstructural Modeling of Thermal Conductivity of High Burn-up Mixed Oxide Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melissa Teague; Michael Tonks; Stephen Novascone
2014-01-01
Predicting the thermal conductivity of oxide fuels as a function of burn-up and temperature is fundamental to the efficient and safe operation of nuclear reactors. However, modeling the thermal conductivity of fuel is greatly complicated by the radially inhomogeneous nature of irradiated fuel in both composition and microstructure. In this work, radially and temperature-dependent models for effective thermal conductivity were developed utilizing optical micrographs of high burn-up mixed oxide fuel. The micrographs were employed to create finite element meshes with the OOF2 software. The meshes were then used to calculate the effective thermal conductivity of the microstructures using the BISON fuel performance code. The new thermal conductivity models were used to calculate thermal profiles at end of life for the fuel pellets. These results were compared to thermal conductivity models from the literature, and comparison between the new finite element-based thermal conductivity model and the Duriez–Lucuta model was favorable.
Effect of electrode positions on the mixing characteristics of an electroosmotic micromixer.
Seo, H S; Kim, Y J
2014-08-01
In this study, an electrokinetic microchannel with a ring-type mixing chamber is introduced for fast mixing. The modeled micromixer that is used for the study of the electroosmotic effect takes two fluids from different inlets and combines them in a ring-type mixing chamber and, then, they are mixed by the electric fields at the electrodes. In order to compare the mixing performance in the modeled micromixer, we numerically investigated the flow characteristics with different positions of the electrodes in the mixing chamber using the commercial code, COMSOL. In addition, we discussed the concentration distributions of the dissolved substances in the flow fields and compared the mixing efficiency in the modeled micromixer with different electrode positions and operating conditions, such as the frequencies and electric potentials at the electrodes.
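The mixing efficiency compared across electrode positions in such studies is typically an intensity-of-segregation index computed from the concentration field at the outlet. The abstract does not give its exact definition, so the sketch below uses a common variant (1 minus the normalized concentration standard deviation) on toy outlet profiles, not COMSOL output:

```python
import numpy as np

def mixing_efficiency(c, c_mean=0.5):
    """Intensity-of-segregation index: 0 = fully segregated, 1 = fully mixed.
    c: normalized concentrations in [0, 1] sampled across an outlet cross-section."""
    sigma = np.sqrt(np.mean((c - c_mean) ** 2))
    sigma_max = np.sqrt(c_mean * (1.0 - c_mean))   # two segregated pure streams
    return 1.0 - sigma / sigma_max

segregated = np.array([0.0] * 50 + [1.0] * 50)     # side-by-side unmixed streams
uniform = np.full(100, 0.5)                        # perfectly mixed outlet
```

Sweeping electrode positions or drive frequencies in the simulation and evaluating this index on each resulting outlet profile gives the comparison the study reports.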
One-dimensional modelling of upper ocean mixing by turbulence due to wave orbital motion
NASA Astrophysics Data System (ADS)
Ghantous, M.; Babanin, A. V.
2014-02-01
Mixing of the upper ocean affects the sea surface temperature by bringing deeper, colder water to the surface. Because even small changes in the surface temperature can have a large impact on weather and climate, accurately determining the rate of mixing is of central importance for forecasting. Although there are several mixing mechanisms, one that has until recently been overlooked is the effect of turbulence generated by non-breaking, wind-generated surface waves. Lately there has been a lot of interest in introducing this mechanism into ocean mixing models, and real gains have been made in terms of increased fidelity to observational data. However, our knowledge of the mechanism is still incomplete. We indicate areas where we believe the existing parameterisations need refinement and propose an alternative one. We use two of the parameterisations to demonstrate the effect on the mixed layer of wave-induced turbulence by applying them to a one-dimensional mixing model and a stable temperature profile. Our modelling experiment suggests a strong effect on sea surface temperature due to non-breaking wave-induced turbulent mixing.
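The kind of one-dimensional column model used in such experiments reduces to vertical diffusion of temperature with a depth-dependent diffusivity, where the wave-induced term decays with depth. The sketch below uses an explicit conservative scheme with invented diffusivity and profile values (not the paper's parameterisations); the time step satisfies the stability condition dt·κ/dz² ≈ 0.05 < 0.5:

```python
import numpy as np

nz, dz, dt = 100, 1.0, 50.0                  # 100 m column, 1 m cells, 50 s steps
z = (np.arange(nz) + 0.5) * dz               # cell-centre depths
kappa = 1e-5 + 1e-3 * np.exp(-z / 10.0)      # background + wave-induced diffusivity (m^2/s)
T = 20.0 - 0.1 * z                           # initially stable temperature profile (deg C)
heat0 = T.sum()                              # column heat content (up to constants)

ki = 0.5 * (kappa[:-1] + kappa[1:])          # diffusivity at interior cell interfaces
for _ in range(int(86400 / dt)):             # integrate one day
    flux = -ki * np.diff(T) / dz             # downward heat flux at interfaces
    T[:-1] -= dt * flux / dz                 # cell above loses what flows down
    T[1:] += dt * flux / dz                  # cell below gains it (no-flux boundaries)
```

With no surface forcing the scheme conserves column heat exactly while cooling the surface, which is the signature the abstract attributes to wave-induced mixing: colder surface water without any change in total heat content.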
Pore-scale and continuum simulations of solute transport micromodel benchmark experiments
Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...
2014-06-18
Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set contained three experiments and varied a single parameter: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two were based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. The PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated from published nonlinear relations between transverse dispersion coefficients and the Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. 
The PN models completed the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and the underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.
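A nonlinear dependence of dispersion on the Peclet number, as reported above, is commonly characterized by fitting a power law D_T/Dm = a * Pe**b in log-log space. The sketch below illustrates that fit with invented data points, not the benchmark measurements.

```python
import math

# Hypothetical illustration: recover a power-law exponent from
# (Pe, D_T/Dm) pairs by ordinary least squares on the logarithms.
# The data below are made up for illustration only.
data = [(1, 1.1), (10, 1.8), (100, 6.0), (1000, 30.0)]
xs = [math.log(pe) for pe, _ in data]
ys = [math.log(d) for _, d in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))     # slope = exponent
a = math.exp(ybar - b * xbar)                # intercept = prefactor
print(f"fit: D_T/Dm ~ {a:.2f} * Pe^{b:.2f}")
```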
Kaneko, Masato; Tanigawa, Takahiko; Hashizume, Kensei; Kajikawa, Mariko; Tajiri, Masahiro; Mueck, Wolfgang
2013-01-01
This study was designed to confirm the appropriateness of the dose setting for a Japanese phase III study of rivaroxaban in patients with non-valvular atrial fibrillation (NVAF), which had been based on model simulation employing phase II study data. The previously developed mixed-effects pharmacokinetic/pharmacodynamic (PK-PD) model, which consisted of an oral one-compartment model parameterized in terms of clearance, volume and a first-order absorption rate, was rebuilt and optimized using the data for 597 subjects from the Japanese phase III study, J-ROCKET AF. A mixed-effects modeling technique in NONMEM was used to quantify both unexplained inter-individual variability and inter-occasion variability, which are random effect parameters. The final PK and PK-PD models were evaluated to identify influential covariates. The empirical Bayes estimates of AUC and C(max) from the final PK model were consistent with the simulated results from the Japanese phase II study. There was no clear relationship between individual estimated exposures and safety-related events, and the estimated exposure levels were consistent with the global phase III data. Therefore, it was concluded that the dose selected for the phase III study with Japanese NVAF patients by means of model simulation employing phase II study data had been appropriate from the PK-PD perspective.
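The PK backbone described above, an oral one-compartment model parameterized by clearance, volume, and a first-order absorption rate, has a standard closed-form concentration curve. The sketch below evaluates it with invented parameter values (not rivaroxaban estimates) and derives Cmax and AUC, the exposure metrics the study compares.

```python
import math

# Illustrative one-compartment oral-absorption model.  All parameter
# values (dose, V, CL, ka, F) are invented placeholders.
def conc(t, dose=20.0, V=50.0, CL=5.0, ka=1.0, F=1.0):
    ke = CL / V                       # elimination rate constant
    return (F * dose * ka / (V * (ka - ke))
            * (math.exp(-ke * t) - math.exp(-ka * t)))

dt = 0.1
ts = [i * dt for i in range(721)]     # 0-72 h grid
cs = [conc(t) for t in ts]
cmax = max(cs)
auc = sum((cs[i] + cs[i + 1]) / 2 * dt for i in range(len(cs) - 1))
print(f"Cmax = {cmax:.3f} mg/L, AUC = {auc:.2f} mg*h/L")
```

As a consistency check, the trapezoidal AUC should approach the analytic value F*dose/CL once the curve has decayed.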
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
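The downward bias of full ML variance estimation that motivates the paper can be seen in its simplest form: for a plain normal sample, the ML variance divides by n and has expectation (n-1)/n times the true variance, while the REML-style estimator divides by n-1 and is unbiased. The simulation below illustrates only this toy case, not the network meta-analysis setting.

```python
import random
random.seed(1)

# Toy illustration of ML-vs-REML bias: average both variance
# estimators over many replications of a small normal sample.
n, reps, sigma2 = 5, 20000, 4.0
ml = reml = 0.0
for _ in range(reps):
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    ml += ss / n          # ML: divide by n, biased low
    reml += ss / (n - 1)  # REML-style: divide by n-1, unbiased
print(f"ML mean estimate   {ml/reps:.2f} (true {sigma2})")
print(f"REML mean estimate {reml/reps:.2f} (true {sigma2})")
```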
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
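The GICC generalizes the classical intra-class correlation. As background, the sketch below computes the one-way ANOVA ICC(1) for n subjects measured k times each, using made-up numbers; the graphical and image variants above extend this idea to high-dimensional measurements.

```python
# Classical one-way intra-class correlation:
#   ICC(1) = (MSB - MSW) / (MSB + (k-1)*MSW)
# for n subjects with k repeated measurements each.  Data are invented.
data = {  # subject -> repeated measurements
    "s1": [10.0, 10.5], "s2": [20.0, 19.5], "s3": [30.0, 30.5],
}
n, k = len(data), 2
grand = sum(sum(v) for v in data.values()) / (n * k)
msb = k * sum((sum(v) / k - grand) ** 2 for v in data.values()) / (n - 1)
msw = (sum((x - sum(v) / k) ** 2 for v in data.values() for x in v)
       / (n * (k - 1)))
icc = (msb - msw) / (msb + (k - 1) * msw)
print(f"ICC(1) = {icc:.3f}")   # near 1: highly reproducible
```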
Ding, Jinliang; Chai, Tianyou; Wang, Hong
2011-03-01
This paper presents a novel offline modeling approach for product quality prediction of mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role and the establishment of its predictive model is a key issue for the plantwide optimization. For this purpose, a hybrid modeling approach of the mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both the PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using the real plant data and the comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
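The minimum-entropy idea above scores a candidate parameter set by the Shannon entropy of the modeling-error distribution: a peaked (low-entropy) error PDF is preferred to a spread-out one. The sketch below is a hedged illustration with a histogram-based entropy estimate and synthetic errors; bin counts and ranges are arbitrary choices, not the paper's.

```python
import math
import random
random.seed(0)

# Hedged sketch of minimum-entropy scoring: estimate the Shannon
# entropy of an error sample from a fixed-range histogram.
def error_entropy(errors, bins=20, lo=-3.0, hi=3.0):
    counts = [0] * bins
    w = (hi - lo) / bins
    for e in errors:
        counts[min(bins - 1, max(0, int((e - lo) / w)))] += 1
    n = len(errors)
    return -sum(c / n * math.log(c / n) for c in counts if c)

tight = [random.gauss(0, 0.2) for _ in range(5000)]  # good model
broad = [random.gauss(0, 1.0) for _ in range(5000)]  # poor model
print(error_entropy(tight), "<", error_entropy(broad))
```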
Wu, Binxin
2010-12-01
In this paper, 12 turbulence models for single-phase non-Newtonian fluid flow in a pipe are evaluated by comparing the frictional pressure drops obtained from computational fluid dynamics (CFD) with those from three friction factor correlations. The turbulence models studied are (1) three high-Reynolds-number k-ε models, (2) six low-Reynolds-number k-ε models, (3) two k-ω models, and (4) the Reynolds stress model. The simulation results indicate that the Chang-Hsieh-Chen version of the low-Reynolds-number k-ε model performs better than the other models in predicting the frictional pressure drops while the standard k-ω model has an acceptable accuracy and a low computing cost. In the model applications, CFD simulation of mixing in a full-scale anaerobic digester with pumped circulation is performed to propose an improvement in the effective mixing standards recommended by the U.S. EPA based on the effect of rheology on the flow fields. Characterization of the velocity gradient is conducted to quantify the growth or breakage of an assumed floc size. Placement of two discharge nozzles in the digester is analyzed to show that spacing two nozzles 180° apart with each one discharging at an angle of 45° off the wall is the most efficient. Moreover, the similarity rules of geometry and mixing energy are checked for scaling up the digester.
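The benchmark above compares CFD pressure drops against friction factor correlations. As a stand-in illustration of that workflow, the sketch below uses the Blasius correlation for turbulent smooth-pipe flow of a Newtonian fluid together with the Darcy-Weisbach equation; the paper's non-Newtonian correlations are more involved, and the pipe and fluid values here are invented.

```python
# Stand-in correlation sketch: Blasius friction factor
# f = 0.316 * Re**-0.25 (smooth pipe, ~4e3 < Re < 1e5, Newtonian),
# then Darcy-Weisbach pressure drop dp = f * (L/D) * rho*v**2/2.
def pressure_drop(rho=1000.0, v=1.0, D=0.05, L=10.0, mu=1e-3):
    Re = rho * v * D / mu
    assert 4e3 < Re < 1e5, "outside Blasius validity range"
    f = 0.316 * Re ** -0.25
    return f * (L / D) * 0.5 * rho * v ** 2   # Pa

print(f"dp = {pressure_drop():.0f} Pa over 10 m of pipe")
```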
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary event indicator as the response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
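The "explosion" step mentioned above splits each subject's follow-up at fixed cut points into piecewise-exponential records, each carrying a binary event indicator and a log time-at-risk offset for the Poisson likelihood. The sketch below shows that data transformation only; field names are illustrative and the model fitting itself is not shown.

```python
import math

# Sketch of exploding one subject's record into piecewise-exponential
# rows.  Each row: which baseline-hazard piece, an event indicator y,
# and log(time at risk) as the Poisson offset.
def explode(time, event, cuts):
    rows, prev = [], 0.0
    for j, c in enumerate(cuts):
        if time <= prev:
            break
        at_risk = min(time, c) - prev
        rows.append({"piece": j,
                     "y": int(bool(event) and time <= c),
                     "offset": math.log(at_risk)})
        prev = c
    return rows

for r in explode(time=3.5, event=1, cuts=[2.0, 4.0, 6.0]):
    print(r)
```

A subject failing at t = 3.5 with cuts at 2, 4, 6 thus contributes two rows: a censored row for piece 0 (2 units at risk) and an event row for piece 1 (1.5 units at risk).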
NASA Astrophysics Data System (ADS)
Allen, John M.; Elbasiouny, Sherif M.
2018-06-01
Objective. Computational models often require tradeoffs, such as balancing detail with efficiency; yet optimal balance should incorporate sound design features that do not bias the results of the specific scientific question under investigation. The present study examines how model design choices impact simulation results. Approach. We developed a rigorously-validated high-fidelity computational model of the spinal motoneuron pool to study three long-standing model design practices which have yet to be examined for their impact on motoneuron recruitment, firing rate, and force simulations. The practices examined were the use of: (1) generic cell models to simulate different motoneuron types, (2) discrete property ranges for different motoneuron types, and (3) biological homogeneity of cell properties within motoneuron types. Main results. Our results show that each of these practices accentuates conditions of motoneuron recruitment based on the size principle, and minimizes conditions of mixed and reversed recruitment orders, which have been observed in animal and human recordings. Specifically, strict motoneuron orderly size recruitment occurs, but in a compressed range, after which mixed and reverse motoneuron recruitment occurs due to the overlap in electrical properties of different motoneuron types. Additionally, these practices underestimate the motoneuron firing rates and force data simulated by existing models. Significance. Our results indicate that current modeling practices increase conditions of motoneuron recruitment based on the size principle, and decrease conditions of mixed and reversed recruitment order, which, in turn, impacts the predictions made by existing models on motoneuron recruitment, firing rate, and force. Additionally, mixed and reverse motoneuron recruitment generated higher muscle force than orderly size motoneuron recruitment in these simulations and represents one potential scheme to increase muscle efficiency. 
The examined model design practices, as well as the present results, are applicable to neuronal modeling throughout the nervous system.
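The second design practice above, discrete non-overlapping property ranges per motoneuron type, mechanically forces strict type-ordered recruitment, while overlapping ranges permit the mixed orders seen in recordings. The toy sketch below illustrates only that geometric point with invented threshold ranges; it is not the authors' biophysical model.

```python
import random
random.seed(42)

# Toy illustration: draw recruitment thresholds for three motoneuron
# "types" and recruit in order of increasing threshold.  Discrete
# (non-overlapping) ranges force strict type ordering; overlapping
# ranges allow mixed recruitment orders.  All numbers are invented.
def recruitment_order(ranges, n=20):
    cells = [(t, random.uniform(lo, hi))
             for t, (lo, hi) in enumerate(ranges) for _ in range(n)]
    return [t for t, thr in sorted(cells, key=lambda c: c[1])]

discrete = recruitment_order([(1.0, 2.0), (2.0, 3.0), (3.0, 4.0)])
overlap = recruitment_order([(1.0, 2.4), (1.8, 3.2), (2.6, 4.0)])
print("discrete ranges strictly ordered:", discrete == sorted(discrete))
print("overlapping ranges strictly ordered:", overlap == sorted(overlap))
```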
Ab Initio Modeling of Structure and Properties of Single and Mixed Alkali Silicate Glasses.
Baral, Khagendra; Li, Aize; Ching, Wai-Yim
2017-10-12
A density functional theory (DFT)-based ab initio molecular dynamics (AIMD) approach has been applied to simulate models of single and mixed alkali silicate glasses with two different molar concentrations of alkali oxides. The structural environments and spatial distributions of alkali ions in the 10 simulated models with 20% and 30% of Li, Na, K and equal proportions of Li-Na and Na-K are studied in detail for subtle variations among the models. Quantum mechanical calculations of electronic structures, interatomic bonding, and mechanical and optical properties are carried out for each of the models, and the results are compared with available experimental observations and other simulations. The calculated results are in good agreement with the experimental data. We have used the total bond order density (TBOD), a quantum mechanical metric, to characterize internal cohesion in these glass models. The mixed alkali effect (MAE) is visible in the bulk mechanical properties but not obvious in the other physical properties studied in this paper. We show that Li doping deviates from the expected trend because Li-O bonding is much stronger than Na-O or K-O bonding. The approach used in this study contrasts with current studies of alkali-doped silicate glasses based only on geometric characterizations.
Towards a Theory-Based Design Framework for an Effective E-Learning Computer Programming Course
ERIC Educational Resources Information Center
McGowan, Ian S.
2016-01-01
Built on Dabbagh (2005), this paper presents a four component theory-based design framework for an e-learning session in introductory computer programming. The framework, driven by a body of exemplars component, emphasizes the transformative interaction between the knowledge building community (KBC) pedagogical model, a mixed instructional…
Mixing with applications to inertial-confinement-fusion implosions
NASA Astrophysics Data System (ADS)
Rana, V.; Lim, H.; Melvin, J.; Glimm, J.; Cheng, B.; Sharp, D. H.
2017-01-01
Approximate one-dimensional (1D) as well as 2D and 3D simulations are playing an important supporting role in the design and analysis of future experiments at National Ignition Facility. This paper is mainly concerned with 1D simulations, used extensively in design and optimization. We couple a 1D buoyancy-drag mix model for the mixing zone edges with a 1D inertial confinement fusion simulation code. This analysis predicts that National Ignition Campaign (NIC) designs are located close to a performance cliff, so modeling errors, design features (fill tube and tent) and additional, unmodeled instabilities could lead to significant levels of mix. The performance cliff we identify is associated with multimode plastic ablator (CH) mix into the hot-spot deuterium and tritium (DT). The buoyancy-drag mix model is mode number independent and selects implicitly a range of maximum growth modes. Our main conclusion is that single effect instabilities are predicted not to lead to hot-spot mix, while combined mode mixing effects are predicted to affect hot-spot thermodynamics and possibly hot-spot mix. Combined with the stagnation Rayleigh-Taylor instability, we find the potential for mix effects in combination with the ice-to-gas DT boundary, numerical effects of Eulerian species CH concentration diffusion, and ablation-driven instabilities. With the help of a convenient package of plasma transport parameters developed here, we give an approximate determination of these quantities in the regime relevant to the NIC experiments, while ruling out a variety of mix possibilities. Plasma transport parameters affect the 1D buoyancy-drag mix model primarily through its phenomenological drag coefficient as well as the 1D hydro model to which the buoyancy-drag equation is coupled.
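Buoyancy-drag models of the general type coupled to the 1D code above evolve the mix-front width from a balance of buoyant acceleration and drag. One common form for the bubble-front width h is dv/dt = beta*A*g - Cd*v*|v|/h with dh/dt = v, where A is the Atwood number. The sketch below integrates that form with placeholder constants, which are not the NIC values and not the paper's calibrated coefficients.

```python
# Hedged sketch of a generic buoyancy-drag mix-width model.
# beta, Cd, A, g are invented placeholders; the asymptotic solution is
# the self-similar quadratic growth h ~ alpha*A*g*t**2.
def mix_width(A=0.5, g=9.81, beta=1.0, Cd=3.5, dt=1e-4, T=1.0):
    h, v, t = 1e-4, 0.0, 0.0     # seed width, front velocity, time
    while t < T:
        v += (beta * A * g - Cd * v * abs(v) / h) * dt
        h += v * dt
        t += dt
    return h

h_final = mix_width()
print(f"mix width after 1 s: {h_final:.3f} m")
```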
A physiologically-based pharmacokinetic (PBPK) model incorporating mixed enzyme inhibition was used to determine the mechanism of the metabolic interactions occurring during simultaneous inhalation exposures to the organic solvents chloroform and trichloroethylene (TCE).
V...
Model free simulations of a high speed reacting mixing layer
NASA Technical Reports Server (NTRS)
Steinberger, Craig J.
1992-01-01
The effects of compressibility, chemical reaction exothermicity and non-equilibrium chemical modeling in a combusting plane mixing layer were investigated by means of two-dimensional model free numerical simulations. It was shown that increased compressibility generally had a stabilizing effect, resulting in reduced mixing and chemical reaction conversion rate. The appearance of 'eddy shocklets' in the flow was observed at high convective Mach numbers. Reaction exothermicity was found to enhance mixing at the initial stages of the layer's growth, but had a stabilizing effect at later times. Calculations were performed for a constant-rate chemical kinetics model and an Arrhenius-type kinetics prototype. The Arrhenius model was found to cause a greater temperature increase due to reaction than the constant kinetics model. This had the same stabilizing effect as increasing the exothermicity of the reaction. Localized flame quenching was also observed when the Zeldovich number was relatively large.
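The kinetics comparison above can be illustrated in a zero-dimensional, well-mixed caricature: with Arrhenius kinetics k(T) = A*exp(-Ta/T), the heat released by reaction raises T and feeds back on the rate, producing a larger temperature rise than a constant-rate model. All parameter values below are invented and the sketch is not the paper's mixing-layer simulation.

```python
import math

# Toy exothermic reactor: fuel fraction Y burns at rate k*Y, releasing
# heat q per unit fuel.  Compare constant k0 against Arrhenius k(T).
def burn(arrhenius, k0=1.0, A=50.0, Ta=4.0, q=2.0, dt=1e-3, T0=1.0):
    Y, T = 1.0, T0
    for _ in range(2000):                    # integrate to t = 2
        k = A * math.exp(-Ta / T) if arrhenius else k0
        r = k * Y * dt
        Y -= r
        T += q * r                           # exothermic feedback
    return T

print("final T, constant-rate:", round(burn(False), 3))
print("final T, Arrhenius:    ", round(burn(True), 3))
```

Energy conservation bounds the final temperature by T0 + q, and the Arrhenius run approaches that bound because the rate accelerates as T rises.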
Fent, Kenneth W.; Gaines, Linda G. Trelles; Thomasen, Jennifer M.; Flack, Sheila L.; Ding, Kai; Herring, Amy H.; Whittaker, Stephen G.; Nylander-French, Leena A.
2009-01-01
We conducted a repeated exposure-assessment survey for task-based breathing-zone concentrations (BZCs) of monomeric and polymeric 1,6-hexamethylene diisocyanate (HDI) during spray painting on 47 automotive spray painters from North Carolina and Washington State. We report here the use of linear mixed modeling to identify the primary determinants of the measured BZCs. Both one-stage (N = 98 paint tasks) and two-stage (N = 198 paint tasks) filter sampling was used to measure concentrations of HDI, uretidone, biuret, and isocyanurate. The geometric mean (GM) level of isocyanurate (1410 μg m−3) was higher than all other analytes (i.e. GM < 7.85 μg m−3). The mixed models were unique to each analyte and included factors such as analyte-specific paint concentration, airflow in the paint booth, and sampler type. The effect of sampler type was corroborated by side-by-side one- and two-stage personal air sampling (N = 16 paint tasks). According to paired t-tests, significantly higher concentrations of HDI (P = 0.0363) and isocyanurate (P = 0.0035) were measured using one-stage samplers. Marginal R2 statistics were calculated for each model; significant fixed effects were able to describe 25, 52, 54, and 20% of the variability in BZCs of HDI, uretidone, biuret, and isocyanurate, respectively. Mixed models developed in this study characterize the processes governing individual polyisocyanate BZCs. In addition, the mixed models identify ways to reduce polyisocyanate BZCs and, hence, protect painters from potential adverse health effects. PMID:19622637
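Linear mixed models of repeated task-based measurements, as above, separate between-worker from within-worker variability. The pure-Python sketch below simulates log concentrations with a worker-level random intercept and recovers the two variance components with ANOVA-type moment estimators; all numbers are synthetic and the fixed-effect covariates of the real models are omitted.

```python
import random
random.seed(7)

# Synthetic random-intercept data: k painters, n tasks each.
k, n = 30, 4
sb2, sw2 = 0.5, 0.2              # true between/within variances
data = []
for _ in range(k):
    b = random.gauss(0.0, sb2 ** 0.5)          # painter effect
    data.append([b + random.gauss(0.0, sw2 ** 0.5) for _ in range(n)])

# ANOVA-type moment estimators of the variance components.
grand = sum(sum(row) for row in data) / (k * n)
msb = n * sum((sum(r) / n - grand) ** 2 for r in data) / (k - 1)
msw = sum((x - sum(r) / n) ** 2 for r in data for x in r) / (k * (n - 1))
print(f"within-worker variance  ~ {msw:.2f} (true {sw2})")
print(f"between-worker variance ~ {(msb - msw) / n:.2f} (true {sb2})")
```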
2017-11-07
This final rule updates the home health prospective payment system (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor, effective for home health episodes of care ending on or after January 1, 2018. This rule also: Updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the third year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between calendar year (CY) 2012 and CY 2014; and discusses our efforts to monitor the potential impacts of the rebasing adjustments that were implemented in CY 2014 through CY 2017. In addition, this rule finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model and to the Home Health Quality Reporting Program (HH QRP). We are not finalizing the implementation of the Home Health Groupings Model (HHGM) in this final rule.
Schmidt, James R; De Houwer, Jan; Rothermund, Klaus
2016-12-01
The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.
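The core mechanism described above, responses speeded by retrieval of matching episodes, can be caricatured in a few lines. The sketch below is schematic and invented, not the authors' Parallel Episodic Processing implementation: each trial stores an episode, and simulated response time shrinks with the number of matching stored episodes, yielding a practice curve.

```python
# Schematic episodic-retrieval sketch (names and numbers invented):
# RT = base_rt - gain * matches/(matches+1), so repeated stimuli are
# answered faster as matching memory traces accumulate.
def run_trials(stimuli, base_rt=600.0, gain=200.0):
    memory, rts = [], []
    for s in stimuli:
        matches = sum(1 for m in memory if m == s)
        rts.append(base_rt - gain * matches / (matches + 1))
        memory.append(s)
    return rts

rts = run_trials(["A"] * 5)
print([round(r) for r in rts])   # monotonically decreasing RTs
```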
Testing homogeneity in Weibull-regression models.
Bolfarine, Heleno; Valença, Dione M
2005-10-01
In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and, in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.
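The homogeneity question above asks whether a group-level random effect is present. The simulation sketch below illustrates what that effect looks like in data: Weibull survival times whose scale is multiplied by exp(b_g) with b_g ~ N(0, tau2), so a positive tau2 inflates the between-group spread of mean log-times. This is an invented illustration, not the paper's score test.

```python
import math
import random
random.seed(3)

# Between-group variance of mean log survival times under a Weibull
# model with (tau2 > 0) and without (tau2 = 0) a group random effect.
def group_spread(tau2, groups=40, n=25, shape=1.5):
    means = []
    for _ in range(groups):
        b = random.gauss(0.0, tau2 ** 0.5)          # group effect
        ts = [random.weibullvariate(math.exp(b), shape)
              for _ in range(n)]
        means.append(sum(math.log(t) for t in ts) / n)
    mu = sum(means) / len(means)
    return sum((m - mu) ** 2 for m in means) / (len(means) - 1)

hom = group_spread(0.0)   # homogeneous groups
het = group_spread(1.0)   # heterogeneous groups
print(f"spread: homogeneous {hom:.3f}, heterogeneous {het:.3f}")
```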
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ching, Ping Pui; Riemer, Nicole; West, Matthew
2016-05-27
Black carbon (BC) is usually mixed with other aerosol species within individual aerosol particles. This mixture, along with the particles' size and morphology, determines the particles' optical and cloud condensation nuclei properties, and hence black carbon's climate impacts. In this study the particle-resolved aerosol model PartMC-MOSAIC was used to quantify the importance of black carbon mixing state for predicting cloud microphysical quantities. Based on a set of about 100 cloud parcel simulations, a process-level analysis framework was developed to attribute the response in cloud microphysical properties to changes in the underlying aerosol population ("plume effect") and the cloud parcel cooling rate ("parcel effect"). It shows that the response of cloud droplet number concentration to changes in BC emissions depends on the BC mixing state. When the aerosol population contains mainly aged BC particles, an increase in BC emissions results in increasing cloud droplet number concentrations ("additive effect"). In contrast, when the aerosol population contains mainly fresh BC particles, they act as sinks for condensable gaseous species, resulting in a decrease in cloud droplet number concentration as BC emissions are increased ("competition effect"). Additionally, we quantified the error in cloud microphysical quantities when neglecting the information on BC mixing state, which is often done in aerosol models. The errors ranged from -12% to +45% for the cloud droplet number fraction, from 0% to +1022% for the nucleation-scavenged black carbon (BC) mass fraction, from -12% to +4% for the effective radius, and from -30% to +60% for the relative dispersion.
ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry
NASA Astrophysics Data System (ADS)
Wohltmann, I.; Rex, M.; Lehmann, R.
2009-04-01
We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS. Lagrangian models have some crucial advantages over Eulerian grid-box-based models, such as no numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design, and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and for long-term runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow, with a method similar to that used in the CLaMS model but with some modifications, such as a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species and 170 reactions and a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept, and some validation results. Comparison of model results with tracer data from flights of the ER-2 aircraft in the stratospheric polar vortex in 1999/2000, which resolve fine tracer filaments, shows that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.
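The Lagrangian transport step in a model like ATLAS integrates parcel trajectories through a prescribed wind field, so advection itself introduces no numerical diffusion of mixing ratios. The sketch below shows the standard fourth-order Runge-Kutta trajectory step on a 2-D solid-body-rotation test wind, which is a toy field, not reanalysis data or the ATLAS trajectory code.

```python
# RK4 trajectory integration of one parcel through a prescribed wind.
def wind(x, y):
    return -y, x                 # solid-body rotation, period 2*pi

def advect(x, y, dt=0.01, steps=628):   # ~one full revolution
    for _ in range(steps):
        k1 = wind(x, y)
        k2 = wind(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
        k3 = wind(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
        k4 = wind(x + dt * k3[0], y + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x, y = advect(1.0, 0.0)
print(f"after one revolution: ({x:.4f}, {y:.4f})")  # near (1, 0)
```

A useful check is that the parcel returns close to its start and its radius (an analogue of a conserved tracer) is preserved to high accuracy.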
Amiaud, Lionel; Fillion, Jean-Hugues; Dulieu, François; Momeni, Anouchah; Lemaire, Jean-Louis
2015-11-28
We study the adsorption and desorption of three isotopologues of molecular hydrogen mixed on 10 ML of porous amorphous water ice (ASW) deposited at 10 K. Thermally programmed desorption (TPD) of H2, D2 and HD adsorbed at 10 K has been performed with different mixtures. Various coverages of H2, HD and D2 have been explored, and a model taking into account all species adsorbed on the surface is presented in detail. The model we propose allows extraction of the parameters required to fully reproduce the desorption of H2, HD and D2 for various coverages and mixtures in the sub-monolayer regime. The model is based on a statistical description of the process in a grand-canonical ensemble where adsorbed molecules are described following a Fermi-Dirac distribution.
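As a minimal sketch of the statistical core of such a model, the occupation of an adsorption site with a given binding energy can be written as a Fermi-Dirac distribution. The binding energies and chemical potential below are illustrative placeholders, not values fitted to the TPD data of this study.

```python
import math

def fd_occupancy(energy, mu, T, k_B=8.617e-5):
    """Fermi-Dirac occupation of an adsorption site with binding
    energy `energy` (eV) at chemical potential `mu` (eV) and
    temperature `T` (K).  k_B is Boltzmann's constant in eV/K."""
    return 1.0 / (math.exp((energy - mu) / (k_B * T)) + 1.0)

# At 10 K the occupation switches sharply around mu: deep sites
# fill first, shallow sites stay empty (illustrative energies).
print(fd_occupancy(-0.05, -0.03, 10.0))  # deep site, close to 1
print(fd_occupancy(-0.01, -0.03, 10.0))  # shallow site, close to 0
```

Summing such occupancies over a site-energy distribution of the porous ice, at a chemical potential set by the total coverage, is the kind of grand-canonical bookkeeping the abstract describes.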
NASA Astrophysics Data System (ADS)
Liu, X.; Shi, Y.; Wu, M.; Zhang, K.
2017-12-01
Mixed-phase clouds frequently observed in the Arctic and mid-latitude storm tracks have substantial impacts on the surface energy budget, precipitation and climate. In this study, we first implement two empirical parameterizations (Niemand et al. 2012 and DeMott et al. 2015) of heterogeneous ice nucleation for mixed-phase clouds in the NCAR Community Atmosphere Model Version 5 (CAM5) and the DOE Accelerated Climate Model for Energy Version 1 (ACME1). Model-simulated ice nucleating particle (INP) concentrations based on Niemand et al. and DeMott et al. are compared with those from the default ice nucleation parameterization based on classical nucleation theory (CNT) in CAM5 and ACME, and with in situ observations. Significantly higher INP concentrations (by up to a factor of 5) are simulated with Niemand et al. than with DeMott et al. and CNT, especially over the dust source regions in both CAM5 and ACME. Interestingly, the ACME model simulates higher INP concentrations than CAM5, especially in the Polar regions. This is also the case when we nudge the two models' winds and temperature towards the same reanalysis, indicating more efficient transport of aerosols (dust) to the Polar regions in ACME. Next, we examine the responses of model-simulated cloud liquid water and ice water contents to the different INP concentrations from the three ice nucleation parameterizations (Niemand et al., DeMott et al., and CNT) in CAM5 and ACME. Changes in liquid water path (LWP) reach as much as 20% in the Arctic regions in ACME between the three parameterizations, while the LWP changes are smaller and limited in the Northern Hemispheric mid-latitudes in CAM5. Finally, the impacts on cloud radiative forcing and dust indirect effects on mixed-phase clouds are quantified with the three ice nucleation parameterizations in CAM5 and ACME.
Analysis and modeling of subgrid scalar mixing using numerical data
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.
A flavor symmetry model for bilarge leptonic mixing and the lepton masses
NASA Astrophysics Data System (ADS)
Ohlsson, Tommy; Seidl, Gerhart
2002-11-01
We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≃ π/4 - θ13.
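A quick numerical reading of the sum rule θ12 ≃ π/4 - θ13 shows why the model yields a solar angle that is large but below maximal: any nonzero reactor angle pulls θ12 below 45°. The θ13 value used here is an illustrative input, not a fit from the paper.

```python
import math

# Sum rule from the model: theta12 ~ pi/4 - theta13.
theta13 = math.radians(8.5)      # small reactor angle (assumed value)
theta12 = math.pi / 4 - theta13  # predicted solar angle

# A few degrees of reactor angle leave theta12 large (>30 deg)
# but clearly below maximal mixing (45 deg).
print(math.degrees(theta12))
```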
Wu, Yupan; Ren, Yukun; Jiang, Hongyuan
2017-01-01
We propose a 3D microfluidic mixer based on the alternating current electrothermal (ACET) flow. The ACET vortex is produced by 3D electrodes embedded in the sidewall of the microchannel and is used to stir the fluidic sample throughout the entire channel depth. An optimized geometrical structure of the proposed 3D micromixer device is obtained based on the enhanced theoretical model of ACET flow and natural convection. We quantitatively analyze the flow field driven by the ACET, and a pattern of electrothermal microvortex is visualized by the micro-particle imaging velocimetry. Then, the mixing experiment is conducted using dye solutions with varying solution conductivities. Mixing efficiency can exceed 90% for electrolytes with 0.2 S/m (1 S/m) when the flow rate is 0.364 μL/min (0.728 μL/min) and the imposed peak-to-peak voltage is 52.5 V (35 V). A critical analysis of our micromixer in comparison with different mixer designs using a comparative mixing index is also performed. The ACET micromixer embedded with sidewall 3D electrodes can achieve a highly effective mixing performance and can generate high throughput in the continuous-flow condition. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
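Mixing efficiency figures like the ">90%" quoted above are commonly computed from the spread of sampled concentrations relative to the fully unmixed state; the exact index used in the paper may differ, so the sketch below shows only a standard standard-deviation-based definition on made-up outlet samples.

```python
import statistics

def mixing_efficiency(concentrations, c_unmixed=(0.0, 1.0)):
    """Percent mixing efficiency 100*(1 - sigma/sigma0): sigma is the
    standard deviation of normalized concentrations sampled across the
    channel, sigma0 that of the fully unmixed state (two pure streams,
    sigma0 = 0.5 for 0/1 streams)."""
    sigma = statistics.pstdev(concentrations)
    sigma0 = statistics.pstdev(c_unmixed)
    return 100.0 * (1.0 - sigma / sigma0)

# Nearly uniform outlet samples -> efficiency well above 90%
samples = [0.48, 0.52, 0.50, 0.49, 0.51]
print(mixing_efficiency(samples))
```

A perfectly segregated outlet (alternating 0 and 1 samples) scores 0% under this index, and a perfectly uniform one scores 100%.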
NASA Astrophysics Data System (ADS)
He, C.; Liou, K. N.; Takano, Y.; Yang, P.; Li, Q.; Chen, F.
2017-12-01
A set of parameterizations is developed for spectral single-scattering properties of clean and black carbon (BC)-contaminated snow based on geometric-optics surface-wave (GOS) computations, which explicitly resolve BC-snow internal mixing and various snow grain shapes. GOS calculations show that, compared with nonspherical grains, volume-equivalent snow spheres show up to 20% larger asymmetry factors and hence stronger forward scattering, particularly at wavelengths <1 μm. In contrast, snow grain sizes have a rather small impact on the asymmetry factor at wavelengths <1 μm, whereas size effects are important at longer wavelengths. The snow asymmetry factor is parameterized as a function of effective size, aspect ratio, and shape factor, and shows excellent agreement with GOS calculations. According to GOS calculations, the single-scattering coalbedo of pure snow is predominantly affected by grain sizes, rather than grain shapes, with higher values for larger grains. The snow single-scattering coalbedo is parameterized in terms of the effective size that combines shape and size effects, with an accuracy of >99%. Based on GOS calculations, BC-snow internal mixing enhances the snow single-scattering coalbedo at wavelengths <1 μm, but it does not alter the snow asymmetry factor. The BC-induced enhancement ratio of the snow single-scattering coalbedo, independent of snow grain size and shape, is parameterized as a function of BC concentration with an accuracy of >99%. Overall, in addition to snow grain size, both BC-snow internal mixing and snow grain shape play critical roles in quantifying BC effects on snow optical properties. The present parameterizations can be conveniently applied to snow, land surface, and climate models including snowpack radiative transfer processes.
NASA Astrophysics Data System (ADS)
Schilling, Oliver S.; Gerber, Christoph; Partington, Daniel J.; Purtschert, Roland; Brennwald, Matthias S.; Kipfer, Rolf; Hunkeler, Daniel; Brunner, Philip
2017-12-01
To provide a sound understanding of the sources, pathways, and residence times of groundwater in alluvial river-aquifer systems, a combined multitracer and modeling experiment was carried out in an important alluvial drinking water wellfield in Switzerland. 222Rn, 3H/3He, atmospheric noble gases, and the novel 37Ar-method were used to quantify residence times and mixing ratios of water from different sources. With a half-life of 35.1 days, 37Ar made it possible to close a critical observational time gap between 222Rn and 3H/3He for residence times of weeks to months. Covering the entire range of residence times of groundwater in alluvial systems revealed that atmospheric noble gases and helium isotopes are tracers well suited for end-member mixing analysis to quantify the fractions of water from different sources in such systems. A comparison between the tracer-based mixing ratios and mixing ratios simulated with a fully integrated, physically based flow model showed that models calibrated only against hydraulic heads cannot reliably reproduce mixing ratios or residence times of alluvial river-aquifer systems. However, the tracer-based mixing ratios allowed the identification of an appropriate flow model parametrization. Consequently, for alluvial systems, we recommend combining multitracer studies that cover all relevant residence times with fully coupled, physically based flow modeling to better characterize the complex interactions of river-aquifer systems.
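End-member mixing analysis of the kind used here reduces to a small linear system: each tracer contributes one equation relating end-member signatures to the measured mixture, and the fractions must sum to one. The sketch below solves a three-end-member, two-tracer case; the end-member signatures and mixture values are hypothetical, not the Swiss wellfield data.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule (adequate for a
    small mixing calculation)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(det(Ai) / D)
    return xs

# Rows: tracer 1, tracer 2, mass balance.  Columns: three end-members
# (e.g. river water, regional groundwater, local recharge) with
# hypothetical tracer signatures.
A = [[1.0, 10.0, 4.0],   # tracer 1 concentration in each end-member
     [0.2,  5.0, 2.0],   # tracer 2 concentration in each end-member
     [1.0,  1.0, 1.0]]   # fractions sum to 1
b = [3.7, 1.7, 1.0]      # measured mixture tracers, and unit sum

fractions = solve3(A, b)
print(fractions)  # ~[0.5, 0.2, 0.3]
```

With more tracers than end-members the same system is solved in a least-squares sense, which is how uncertainty in the tracer data is usually absorbed.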
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three dimensionality.
Schifferdecker, Karen E; Adachi-Mejia, Anna M; Butcher, Rebecca L; O'Connor, Sharon; Li, Zhigang; Bazos, Dorothy A
2016-01-01
Action Learning Collaboratives (ALCs), whereby teams apply quality improvement (QI) tools and methods, have successfully improved patient care delivery and outcomes. We adapted and tested the ALC model as a community-based obesity prevention intervention focused on physical activity and healthy eating. The intervention used QI tools (e.g., progress monitoring) and team-based activities and was implemented in three communities through nine monthly meetings. To assess process and outcomes, we used a longitudinal repeated-measures and mixed-methods triangulation approach with a quasi-experimental design including objective measures at three time points. Most of the 97 participants were female (85.4%), White (93.8%), and non-Hispanic/Latino (95.9%). Average age was 52 years; 28.0% had annual household income of $20,000 or less; and mean body mass index was 35. Through mixed-effects models, we found some physical activity outcomes improved. Other outcomes did not significantly change. Although participants favorably viewed the QI tools, components of the QI process such as sharing goals and data on progress in teams and during meetings were limited. Participants' requests for more education or activities around physical activity and healthy eating, rather than progress monitoring and data sharing required for QI activities, challenged ALC model implementation. An ALC model for community-based obesity prevention may be more effective when applied to preexisting teams in community-based organizations. © 2015 Society for Public Health Education.
On the validity of effective formulations for transport through heterogeneous porous media
NASA Astrophysics Data System (ADS)
de Dreuzy, Jean-Raynald; Carrera, Jesus
2016-04-01
Geological heterogeneity enhances spreading of solutes and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through heterogeneous porous media (HPM), it should honor mean advection, mixing and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the multi-rate mass transfer (MRMT) model to reproduce mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not reproduce the local concentration structures, it induces smaller non-dispersive mixing than HPM. However, long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, in general non-dispersive mixing cannot.
NASA Astrophysics Data System (ADS)
Laminack, William; Gole, James
2015-12-01
A unique MEMS/NEMS approach is presented for the modeling of a detection platform for mixed-gas interactions. Mixed-gas analytes interact with nanostructured metal oxide island sites decorating a microporous silicon substrate. The Inverse Hard/Soft Acid/Base (IHSAB) concept is used to assess a diversity of conductometric responses for mixed-gas interactions as a function of these nanostructured metal oxides. The analyte conductometric responses are well represented using a combined diffusion/absorption-based model for multi-gas interactions, in which a newly developed response absorption isotherm, based on the Fermi distribution function, is applied. A further coupling of this model with the IHSAB concept describes the considerations involved in modeling multi-gas analyte-interface and analyte-analyte interactions. Taking into account the molecular electronic interaction of the analytes both with each other and with an extrinsic semiconductor interface, we demonstrate how the presence of one gas can enhance or diminish the reversible interaction of a second gas with the extrinsic semiconductor interface. These concepts demonstrate important considerations for array-based formats for multi-gas sensing and its applications.
Effect of exercise on patient specific abdominal aortic aneurysm flow topology and mixing.
Arzani, Amirhossein; Les, Andrea S; Dalman, Ronald L; Shadden, Shawn C
2014-02-01
Computational fluid dynamics modeling was used to investigate changes in blood transport topology between rest and exercise conditions in five patient-specific abdominal aortic aneurysm models. MRI was used to provide the vascular anatomy and necessary boundary conditions for simulating blood velocity and pressure fields inside each model. Finite-time Lyapunov exponent fields and associated Lagrangian coherent structures were computed from blood velocity data and were used to compare features of the transport topology between rest and exercise both mechanistically and qualitatively. Mix-norm and mix-variance measures based on the distribution of fresh blood throughout the aneurysm over time were implemented to quantitatively compare mixing between rest and exercise. Exercise conditions resulted in higher and more uniform mixing and reduced the overall residence time in all aneurysms. Separated regions of recirculating flow were commonly observed at rest; during exercise these regions were reduced or removed by attached, unidirectional flow, were replaced with regional chaotic and transiently turbulent mixing, or persisted and even extended. The main factor that dictated the change in flow topology from rest to exercise was the behavior of the jet of blood penetrating into the aneurysm during systole. Copyright © 2013 John Wiley & Sons, Ltd.
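The intuition behind a mix-variance measure is that a well-mixed aneurysm holds a nearly uniform fraction of fresh blood everywhere, so the spatial variance of that fraction tends toward zero. The sketch below applies this idea to a handful of hypothetical sub-region fractions; the actual paper computes the measure on fully resolved 3-D concentration fields.

```python
import statistics

def mix_variance(fractions):
    """Spatial variance of the fresh-fluid fraction across
    compartments; zero means perfectly mixed (every compartment
    holds the same fraction of fresh fluid)."""
    return statistics.pvariance(fractions)

# Hypothetical fresh-blood fractions in aneurysm sub-regions:
rest     = [0.9, 0.7, 0.2, 0.05]   # stagnant recirculation regions
exercise = [0.8, 0.75, 0.7, 0.65]  # more uniform penetration

print(mix_variance(rest) > mix_variance(exercise))  # → True
```

Tracking this quantity over successive cardiac cycles gives the kind of rest-versus-exercise mixing comparison described in the abstract.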
Mapping nighttime PM2.5 from VIIRS DNB using a linear mixed-effect model
NASA Astrophysics Data System (ADS)
Fu, D.; Xia, X.; Duan, M.; Zhang, X.; Li, X.; Wang, J.; Liu, J.
2018-04-01
Estimation of particulate matter with aerodynamic diameter less than 2.5 μm (PM2.5) from daytime satellite aerosol products is widely reported in the literature; however, remote sensing of nighttime surface PM2.5 from space is very limited. PM2.5 shows a distinct diurnal cycle, and PM2.5 concentration at 1:00 local standard time (LST) has a linear correlation coefficient (R) of 0.80 with daily-mean PM2.5. Therefore, estimation of nighttime PM2.5 is required for an improved understanding of the temporal variation of PM2.5 and its effects on air quality. Using data from the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) and hourly PM2.5 data at 35 stations in Beijing, a mixed-effect model is developed here to estimate nighttime PM2.5 from nighttime light radiance measurements, based on the assumption that the DNB-PM2.5 relationship is constant spatially but varies temporally. Cross-validation showed that the model developed using all stations predicts daily PM2.5 with mean determination coefficients (R2) of 0.87 ± 0.12, 0.83 ± 0.10, 0.87 ± 0.09 and 0.83 ± 0.10 in spring, summer, autumn and winter, respectively. Further analysis showed that the best model performance was achieved at urban stations, with an average cross-validation R2 of 0.92. At rural stations, the DNB light signal is weak and was likely smeared by lunar illuminance, which resulted in relatively poor estimation of PM2.5. The fixed and random parameters of the mixed-effect model at urban stations differed from those at suburban stations, which indicates that the assumption of the mixed-effect model should be carefully evaluated when used at a regional scale.
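The "constant spatially but varies temporally" assumption means the DNB-PM2.5 relation shares coefficients across stations within a night while letting them shift between nights. A standard trick for the random-intercept part of such a model is within-group demeaning: subtracting each night's means removes the night-specific intercept and isolates the shared slope. The sketch below does exactly that on tiny synthetic numbers (the real model also allows night-specific slopes and is fitted with proper mixed-effects machinery).

```python
from collections import defaultdict

def within_day_slope(data):
    """Pooled within-day OLS slope for PM25 = (a + u_day) + b*DNB + e.
    Demeaning by day removes the day-specific random intercept u_day,
    leaving an estimate of the shared slope b."""
    by_day = defaultdict(list)
    for day, dnb, pm in data:
        by_day[day].append((dnb, pm))
    sxy = sxx = 0.0
    for obs in by_day.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            sxy += (x - mx) * (y - my)
            sxx += (x - mx) ** 2
    return sxy / sxx

# Synthetic (day, DNB radiance, PM2.5) observations built with a true
# slope of 2 and different per-day intercepts; illustrative only.
data = [(1, 10, 25), (1, 20, 45), (2, 10, 30), (2, 20, 50)]
print(within_day_slope(data))  # → 2.0
```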
NASA Astrophysics Data System (ADS)
Lamer, K.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.; Clothiaux, E. E.
2017-12-01
An important aspect of evaluating Arctic cloud representation in a general circulation model (GCM) consists of using observational benchmarks which are as equivalent as possible to model output, in order to avoid methodological bias and focus on correctly diagnosing model dynamical and microphysical misrepresentations. However, current cloud observing systems are known to suffer from biases such as limited sensitivity and a stronger response to large or small hydrometeors. Fortunately, while these observational biases cannot be corrected, they are often well understood and can be reproduced in forward simulations. Here a ground-based millimeter-wavelength Doppler radar and micropulse lidar forward simulator able to interface with output from the Goddard Institute for Space Studies (GISS) ModelE GCM is presented. ModelE stratiform hydrometeor fraction, mixing ratio, mass-weighted fall speed and effective radius are forward simulated to vertically resolved profiles of radar reflectivity, Doppler velocity and spectrum width as well as lidar backscatter and depolarization ratio. These forward-simulated fields are then compared to Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) ground-based observations to assess cloud vertical structure (CVS). Model evaluation of Arctic mixed-phase clouds would also benefit from hydrometeor phase evaluation. While phase retrieval from synergetic observations often generates large uncertainties, the same retrieval algorithm can be applied to observed and forward-simulated radar-lidar fields, thereby producing retrieved hydrometeor properties with potentially the same uncertainties. Comparing hydrometeor properties retrieved in exactly the same way aims to produce the best apples-to-apples comparisons between GCM outputs and observations.
The use of a comprehensive ground-based forward simulator coupled with a hydrometeor classification retrieval algorithm provides a new perspective for GCM evaluation of Arctic mixed-phase clouds from the ground, where low-level supercooled liquid layers are more easily observed and where additional environmental properties such as cloud condensation nuclei are quantified. This should assist in choosing between several possible diagnostic ice nucleation schemes for ModelE stratiform clouds.
Mei, J.; Dong, P.; Kalnaus, S.; ...
2017-07-21
It has been well established that the fatigue damage process is load-path dependent under non-proportional multi-axial loading conditions. Most studies to date have focused on the interpretation of S-N based test data by constructing a path-dependent fatigue damage model. Our paper presents a two-parameter mixed-mode fatigue crack growth model which takes into account crack growth dependency on both the load path traversed and the maximum effective stress intensity attained in a stress intensity factor plane (e.g., the KI-KIII plane). Furthermore, by taking advantage of a path-dependent maximum range (PDMR) cycle definition (Dong et al., 2010; Wei and Dong, 2010), the two parameters are formulated by introducing a moment of load path (MLP) based equivalent stress intensity factor range (ΔKNP) and a maximum effective stress intensity parameter KMax incorporating an interaction term KI·KIII. To examine the effectiveness of the proposed model, two sets of crack growth rate test data are considered. The first set is obtained as a part of this study using 304 stainless steel disk specimens subjected to three combined non-proportional mode I and III loading conditions (i.e., with a phase angle of 0°, 90°, and 180°). The second set was obtained by Feng et al. (2007) using 1070 steel disk specimens subjected to similar types of non-proportional mixed-mode conditions. Once the proposed two-parameter non-proportional mixed-mode crack growth model is used, it is shown that a good correlation can be achieved for both sets of crack growth rate test data.
Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes
Nag, Ambarish; St. John, Peter C.; Crowley, Michael F.
2018-01-01
Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass—namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-Acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetyl kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production. PMID:29381705
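The PTA/ACK knockout result has a simple branch-point logic: acetate production drains carbon away from succinate, so removing either enzyme of the acetate branch reroutes that flux. The toy flux model below illustrates only that branch-point reasoning with made-up flux numbers; the paper's actual predictions (including the PEPCK double knockouts) come from flux balance analysis of the full stoichiometric network, which this sketch does not attempt.

```python
def succinate_yield(knockouts=frozenset()):
    """Toy branch-point model of mixed-acid fermentation: 100 units
    of PEP-derived flux split between a succinate branch and an
    acetate branch (via PTA then ACK).  Knocking out either acetate-
    branch enzyme reroutes its flux to succinate.  Numbers are
    illustrative, not measured or FBA-derived fluxes."""
    total = 100.0
    acetate_branch_open = not ({"PTA", "ACK"} & knockouts)
    # Wild type splits flux 60/40; without the acetate branch all
    # flux goes to succinate.
    return total if not acetate_branch_open else 0.6 * total

print(succinate_yield())                    # wild type → 60.0
print(succinate_yield(frozenset({"PTA"})))  # PTA knockout → 100.0
```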
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy. That is, the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.
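The analytical-redundancy idea reduces, at its simplest, to a residual test: compare each measurement with the estimator's prediction and flag a fault when the residual exceeds a threshold. The sketch below shows only this bare residual logic with a fixed illustrative threshold; the paper's contribution is precisely in making the estimator (and hence the residual statistics) robust to model uncertainty, which is not reproduced here.

```python
def detect_fault(measurements, estimates, threshold=3.0):
    """Residual-based fault test: flag samples where the discrepancy
    between a sensor measurement and the model-based estimate exceeds
    a threshold.  In practice the threshold is set from the
    estimator's error statistics; here it is a fixed illustrative
    value."""
    return [abs(m - e) > threshold
            for m, e in zip(measurements, estimates)]

sensor   = [10.1, 10.3, 18.9, 10.2]   # third sample is faulty
estimate = [10.0, 10.2, 10.1, 10.1]   # state-estimator predictions
print(detect_fault(sensor, estimate))  # → [False, False, True, False]
```

Modeling error inflates the residuals even without a fault, which is why a naively tight threshold produces the false alarms the abstract mentions.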
Modelling the Progression of Competitive Performance of an Academy's Soccer Teams.
Malcata, Rita M; Hopkins, Will G; Richardson, Scott
2012-01-01
Progression of a team's performance is a key issue in competitive sport, but there appears to have been no published research on team progression for periods longer than a season. In this study we report the game-score progression of three teams of a youth talent-development academy over five seasons using a novel analytic approach based on generalised mixed modelling. The teams consisted of players born in 1991, 1992 and 1993; they played totals of 115, 107 and 122 games in Asia and Europe between 2005 and 2010 against teams differing in age by up to 3 years. Game scores predicted by the mixed model were assumed to have an over-dispersed Poisson distribution. The fixed effects in the model estimated an annual linear progression for Aspire and for the other teams (grouped as a single opponent) with adjustment for home-ground advantage and for a linear effect of age difference between competing teams. A random effect allowed for different mean scores for Aspire and opposition teams. All effects were estimated as factors via log-transformation and presented as percent differences in scores. Inferences were based on the span of 90% confidence intervals in relation to thresholds for small factor effects of x/÷1.10 (+10%/-9%). Most effects were clear only when data for the three teams were combined. Older teams showed a small 27% increase in goals scored per year of age difference (90% confidence interval 13 to 42%). Aspire experienced a small home-ground advantage of 16% (-5 to 41%), whereas opposition teams experienced 31% (7 to 60%) on their own ground. After adjustment for these effects, the Aspire teams scored on average 1.5 goals per match, with little change in the five years of their existence, whereas their opponents' scores fell from 1.4 in their first year to 1.0 in their last. The difference in progression was trivial over one year (7%, -4 to 20%), small over two years (15%, -8 to 44%), but unclear over >2 years. 
In conclusion, the generalized mixed model has marginal utility for estimating progression of soccer scores, owing to the uncertainty arising from low game scores. The estimates are likely to be more precise and useful in sports with higher game scores. Key points: A generalized linear mixed model is the approach for tracking game scores, key performance indicators or other measures of performance based on counts in sports where changes within and/or between games/seasons have to be considered. Game scores in soccer could be useful to track performance progression of teams, but hundreds of games are needed. Fewer games will be needed for tracking performance represented by counts with high scores, such as game scores in rugby or key performance indicators based on frequent events or player actions in any team sport.
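The over-dispersed Poisson assumption for game scores can be illustrated with a gamma-Poisson (negative binomial) construction: a Poisson count whose rate is itself random, which pushes the variance above the mean. The mean and dispersion values below are illustrative, not the fitted soccer parameters.

```python
import math
import random

def overdispersed_poisson(mean, dispersion, rng):
    """Gamma-Poisson (negative binomial) draw: the Poisson rate is
    gamma-distributed with the given mean and shape (`dispersion`),
    so counts show variance above the mean, as assumed for the game
    scores in the mixed model."""
    rate = rng.gammavariate(dispersion, mean / dispersion)
    # Knuth's algorithm for a Poisson draw (fine for small rates).
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
scores = [overdispersed_poisson(1.5, 2.0, rng) for _ in range(2000)]
mean = sum(scores) / len(scores)
var = sum((s - mean) ** 2 for s in scores) / len(scores)
print(mean, var)  # sample variance exceeds the mean (over-dispersion)
```

With a mean of 1.5 and gamma shape 2, the theoretical variance is 1.5 + 1.125 = 2.625, which is the extra spread an ordinary Poisson model would miss.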
Modelling the Progression of Competitive Performance of an Academy’s Soccer Teams
Malcata, Rita M.; Hopkins, Will G; Richardson, Scott
2012-01-01
Progression of a team’s performance is a key issue in competitive sport, but there appears to have been no published research on team progression for periods longer than a season. In this study we report the game-score progression of three teams of a youth talent-development academy over five seasons using a novel analytic approach based on generalised mixed modelling. The teams consisted of players born in 1991, 1992 and 1993; they played totals of 115, 107 and 122 games in Asia and Europe between 2005 and 2010 against teams differing in age by up to 3 years. Game scores predicted by the mixed model were assumed to have an over-dispersed Poisson distribution. The fixed effects in the model estimated an annual linear progression for Aspire and for the other teams (grouped as a single opponent) with adjustment for home-ground advantage and for a linear effect of age difference between competing teams. A random effect allowed for different mean scores for Aspire and opposition teams. All effects were estimated as factors via log-transformation and presented as percent differences in scores. Inferences were based on the span of 90% confidence intervals in relation to thresholds for small factor effects of ×/÷1.10 (+10%/-9%). Most effects were clear only when data for the three teams were combined. Older teams showed a small 27% increase in goals scored per year of age difference (90% confidence interval 13 to 42%). Aspire experienced a small home-ground advantage of 16% (-5 to 41%), whereas opposition teams experienced 31% (7 to 60%) on their own ground. After adjustment for these effects, the Aspire teams scored on average 1.5 goals per match, with little change in the five years of their existence, whereas their opponents’ scores fell from 1.4 in their first year to 1.0 in their last. The difference in progression was trivial over one year (7%, -4 to 20%), small over two years (15%, -8 to 44%), but unclear over >2 years.
In conclusion, the generalized mixed model has marginal utility for estimating progression of soccer scores, owing to the uncertainty arising from low game scores. The estimates are likely to be more precise and useful in sports with higher game scores. Key points: A generalized linear mixed model is the approach for tracking game scores, key performance indicators or other measures of performance based on counts in sports where changes within and/or between games/seasons have to be considered. Game scores in soccer could be useful to track performance progression of teams, but hundreds of games are needed. Fewer games will be needed for tracking performance represented by counts with high scores, such as game scores in rugby or key performance indicators based on frequent events or player actions in any team sport. PMID:24149364
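The log-scale analysis described above treats effects as factors. A minimal sketch (purely illustrative, not the study's code) shows how the smallest-important-effect threshold of ×/÷1.10 maps to the +10%/-9% percent differences quoted:

```python
import math

def factor_to_percent(factor):
    """Express a multiplicative (factor) effect as a percent difference."""
    return (factor - 1.0) * 100.0

def percent_to_log_effect(percent):
    """Additive effect on the log scale for a given percent difference."""
    return math.log(1.0 + percent / 100.0)

# The smallest-important-effect thresholds of x1.10 and /1.10 quoted above:
print(round(factor_to_percent(1.10), 1))      # 10.0  (i.e. +10%)
print(round(factor_to_percent(1 / 1.10), 1))  # -9.1  (i.e. about -9%)
```

The asymmetry (+10% vs. about -9%) is exactly why effects from a log-link Poisson model are reported as ×/÷ factors rather than ± percentages.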
ERIC Educational Resources Information Center
Van Norman, Ethan R.; Christ, Theodore J.; Zopluoglu, Cengiz
2013-01-01
This study examined the effect of baseline estimation on the quality of trend estimates derived from Curriculum Based Measurement of Oral Reading (CBM-R) progress monitoring data. The authors used a linear mixed effects regression (LMER) model to simulate progress monitoring data for schedules ranging from 6-20 weeks for datasets with high and low…
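The simulation design summarised above can be sketched with a linear mixed-effects data generator; the structure (random intercepts and slopes over weeks) follows the abstract, but the function name and all parameter values are illustrative assumptions:

```python
import random

def simulate_cbmr(n_students, n_weeks, base=100.0, growth=1.5,
                  sd_intercept=10.0, sd_slope=0.4, sd_resid=8.0, seed=7):
    """Simulate weekly CBM-R scores from a linear mixed-effects structure:
    score = (base + u0) + (growth + u1) * week + e, where u0 and u1 are
    student-level random effects. All parameter values are illustrative."""
    rng = random.Random(seed)
    data = []
    for student in range(n_students):
        u0 = rng.gauss(0.0, sd_intercept)   # random intercept
        u1 = rng.gauss(0.0, sd_slope)       # random slope
        for week in range(n_weeks):
            e = rng.gauss(0.0, sd_resid)    # residual error
            data.append((student, week, (base + u0) + (growth + u1) * week + e))
    return data

data = simulate_cbmr(n_students=25, n_weeks=12)
print(len(data))  # 300 simulated probes (25 students x 12 weeks)
```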
NASA Astrophysics Data System (ADS)
Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy
2017-04-01
It has been shown that a mixed precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed precision variation of the OpenIFS model. The mixed precision variation of OpenIFS is analogous to the IFS variation used in Vana et al. We (1) present results for energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.
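A stdlib-only toy sketch of the double-vs-single precision trade-off at issue (not OpenIFS code): a small increment to a large total survives in double precision but is absorbed by rounding in single precision.

```python
import struct

def to_f32(x):
    """Round a Python float (IEEE double) to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Adding a small increment to a large accumulator: preserved in double
# precision, lost to rounding in single precision.
gain64 = (1e8 + 1.0) - 1e8                        # double precision
gain32 = to_f32(to_f32(1e8) + 1.0) - to_f32(1e8)  # single precision

print(gain64)  # 1.0
print(gain32)  # 0.0
```

This is the kind of rounding behaviour a mixed-precision model variant must demonstrably keep within acceptable bounds, which is why the quality comparison against the double-precision reference matters.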
Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis
Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas
2016-01-01
The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246
Gundersen, Kenneth; Kvaløy, Jan Terje; Eftestøl, Trygve; Kramer-Johansen, Jo
2015-10-15
For patients undergoing cardiopulmonary resuscitation (CPR) and being in a shockable rhythm, the coarseness of the electrocardiogram (ECG) signal is an indicator of the state of the patient. In the current work, we show how mixed effects stochastic differential equations (SDE) models, commonly used in pharmacokinetic and pharmacodynamic modelling, can be used to model the relationship between CPR quality measurements and ECG coarseness. This is a novel application of mixed effects SDE models to a setting quite different from previous applications of such models and where using such models nicely solves many of the challenges involved in analysing the available data. Copyright © 2015 John Wiley & Sons, Ltd.
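As a hedged illustration of the general idea (not the authors' ECG model), an Euler-Maruyama simulation of an Ornstein-Uhlenbeck SDE with a subject-specific, log-normally distributed rate parameter shows how random effects enter an SDE model; all names and values here are hypothetical:

```python
import math
import random

def euler_maruyama_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama discretisation of dX = theta*(mu - X) dt + sigma dW."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(42)
pop_theta, omega = 1.0, 0.2  # population rate; between-subject variability
paths = []
for _ in range(5):
    # Mixed effects: each subject draws its own log-normally distributed rate.
    theta_i = pop_theta * math.exp(omega * rng.gauss(0.0, 1.0))
    paths.append(euler_maruyama_ou(theta_i, 0.0, 0.3, 1.0, 0.01, 200, rng))

print(len(paths), len(paths[0]))  # 5 201
```

The population parameters (here pop_theta and omega) are what a mixed-effects SDE fit would estimate from all subjects jointly.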
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
Vučićević, Katarina; Jovanović, Marija; Golubović, Bojana; Kovačević, Sandra Vezmar; Miljković, Branislava; Martinović, Žarko; Prostran, Milica
2015-02-01
The present study aimed to establish a population pharmacokinetic model for phenobarbital (PB), examining and quantifying the magnitude of PB interactions with other antiepileptic drugs used concomitantly, and to demonstrate its use for individualization of the PB dosing regimen in adult epileptic patients. In total, 205 PB concentrations were obtained during routine clinical monitoring of 136 adult epilepsy patients. PB steady-state concentrations were measured by homogeneous enzyme immunoassay. Nonlinear mixed effects modelling (NONMEM) was applied for data analysis and evaluation of the final model. According to the final population model, a significant determinant of apparent PB clearance (CL/F) was the daily dose of concomitantly given valproic acid (VPA). The typical value of PB CL/F for the final model was estimated at 0.314 l/h. Based on the final model, co-therapy with a usual VPA dose of 1000 mg/day resulted in an average decrease in PB CL/F of about 25 %, while 2000 mg/day led to an average 50 % decrease in PB CL/F. The developed population PB model may be used to estimate individual CL/F for adult epileptic patients and could be applied to individualize the dosing regimen, taking into account the dose-dependent effect of concomitantly given VPA.
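The reported covariate effect can be written as a simple clearance function; the linear proportional form below is an assumption consistent with the quoted 25 % and 50 % decreases, not the published NONMEM control stream:

```python
def pb_clearance(vpa_dose_mg_per_day, typical_cl=0.314, fraction_per_1000mg=0.25):
    """Apparent phenobarbital clearance (l/h) under concomitant valproic acid.
    The linear proportional form is an assumption consistent with the reported
    ~25% (1000 mg/day) and ~50% (2000 mg/day) decreases in CL/F."""
    return typical_cl * (1.0 - fraction_per_1000mg * vpa_dose_mg_per_day / 1000.0)

print(pb_clearance(0))     # 0.314 l/h, no VPA
print(pb_clearance(1000))  # ~25% lower
print(pb_clearance(2000))  # ~50% lower
```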
Curriculum-Based Measurement of Oral Reading: Quality of Progress Monitoring Outcomes
ERIC Educational Resources Information Center
Christ, Theodore J.; Zopluoglu, Cengiz; Long, Jeffery D.; Monaghen, Barbara D.
2012-01-01
Curriculum-based measurement of oral reading (CBM-R) is frequently used to set student goals and monitor student progress. This study examined the quality of growth estimates derived from CBM-R progress monitoring data. The authors used a linear mixed effects regression (LMER) model to simulate progress monitoring data for multiple levels of…
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
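A minimal sketch of the two-stage idea, with a toy linear model standing in for the ODE solution (data and names illustrative): stage 1 fits each subject separately; stage 2 pools the individual estimates into population parameters.

```python
def ols_slope_intercept(ts, ys):
    """Least-squares line fit for one subject's time course."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    sxx = sum((t - tbar) ** 2 for t in ts)
    slope = sxy / sxx
    return ybar - slope * tbar, slope

# Stage 1: subject-specific fits; Stage 2: pool into population estimates.
subjects = {
    "s1": ([0, 1, 2, 3], [1.0, 1.9, 3.1, 4.0]),
    "s2": ([0, 1, 2, 3], [0.8, 2.1, 2.9, 4.2]),
}
fits = [ols_slope_intercept(t, y) for t, y in subjects.values()]
pop_intercept = sum(f[0] for f in fits) / len(fits)
pop_slope = sum(f[1] for f in fits) / len(fits)
print(round(pop_intercept, 2), round(pop_slope, 2))  # population intercept ~0.91, slope ~1.06
```

In the paper's setting the per-subject fit targets an ODE model rather than a line, which is where the pseudo-likelihood machinery earns its keep.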
Southern Ocean vertical iron fluxes: the ocean model effect
NASA Astrophysics Data System (ADS)
Schourup-Kristensen, V.; Haucke, J.; Losch, M. J.; Wolf-Gladrow, D.; Voelker, C. D.
2016-02-01
The Southern Ocean plays a key role in the climate system, but commonly used large-scale ocean general circulation biogeochemical models give different estimates of current and future Southern Ocean net primary and export production. The representation of the Southern Ocean iron sources plays an important role for the modeled biogeochemistry. Studies of the iron supply to the surface mixed layer have traditionally focused on the aeolian and sediment contributions, but recent work has highlighted the importance of the vertical supply from below. We have performed a model study in which the biogeochemical model REcoM2 was coupled to two different ocean models, the Finite Element Sea-ice Ocean Model (FESOM) and the MIT general circulation model (MITgcm) and analyzed the magnitude of the iron sources to the surface mixed layer from below in the two models. Our results revealed a remarkable difference in terms of mechanism and magnitude of transport. The mean iron supply from below in the Southern Ocean was on average four times higher in MITgcm than in FESOM and the dominant pathway was entrainment in MITgcm, whereas diffusion dominated in FESOM. Differences in the depth and seasonal amplitude of the mixed layer between the models affect the vertical iron profile, the relative position of the base of the mixed layer and the ferricline, and thereby also the iron fluxes. These differences contribute to differences in the phytoplankton composition in the two models, as well as in the timing of the onset of the spring bloom. The study shows that the choice of ocean model has a significant impact on the iron supply to the Southern Ocean mixed layer and thus on the modeled carbon cycle, with possible implications for model runs predicting the future carbon uptake in the region.
Determining the impact of cell mixing on signaling during development.
Uriu, Koichiro; Morelli, Luis G
2017-06-01
Cell movement and intercellular signaling occur simultaneously to organize morphogenesis during embryonic development. Cell movement can cause relative positional changes between neighboring cells. When intercellular signals are local, such cell mixing may affect signaling, changing the flow of information in developing tissues. Little is known about the effect of cell mixing on intercellular signaling in collective cellular behaviors and methods to quantify its impact are lacking. Here we discuss how to determine the impact of cell mixing on cell signaling drawing an example from vertebrate embryogenesis: the segmentation clock, a collective rhythm of interacting genetic oscillators. We argue that comparing cell mixing and signaling timescales is key to determining the influence of mixing. A signaling timescale can be estimated by combining theoretical models with cell signaling perturbation experiments. A mixing timescale can be obtained by analysis of cell trajectories from live imaging. After comparing cell movement analyses in different experimental settings, we highlight challenges in quantifying cell mixing from embryonic timelapse experiments, especially a reference frame problem due to embryonic motions and shape changes. We propose statistical observables characterizing cell mixing that do not depend on the choice of reference frames. Finally, we consider situations in which both cell mixing and signaling involve multiple timescales, precluding a direct comparison between single characteristic timescales. In such situations, physical models based on observables of cell mixing and signaling can simulate the flow of information in tissues and reveal the impact of observed cell mixing on signaling. © 2017 Japanese Society of Developmental Biologists.
Assessing and Upgrading Ocean Mixing for the Study of Climate Change
NASA Astrophysics Data System (ADS)
Howard, A. M.; Fells, J.; Lindo, F.; Tulsee, V.; Canuto, V.; Cheng, Y.; Dubovikov, M. S.; Leboissetier, A.
2016-12-01
Climate is critical. Climate variability affects us all; Climate Change is a burning issue. Droughts, floods, other extreme events, and Global Warming's effects on these and problems such as sea-level rise and ecosystem disruption threaten lives. Citizens must be informed to make decisions concerning climate such as "business as usual" vs. mitigating emissions to keep warming within bounds. Medgar Evers undergraduates aid NASA research while learning climate science and developing computer and math skills. To make useful predictions we must realistically model each component of the climate system, including the ocean, whose critical role includes transporting and storing heat and dissolved CO2. We need physically based parameterizations of key ocean processes that can't be put explicitly in a global climate model, e.g. vertical and lateral mixing. The NASA-GISS turbulence group uses theory to model mixing, including: 1) a comprehensive scheme for small scale vertical mixing, including convection and shear, internal waves and double-diffusion, and bottom tides; 2) a new parameterization for the lateral and vertical mixing by mesoscale eddies. For better understanding we write our own programs. To assess the modelling, MATLAB programs visualize and calculate statistics, including means, standard deviations and correlations, on NASA-GISS OGCM output with different mixing schemes and help us study drift from observations. We also try to upgrade the schemes, e.g. the bottom tidal mixing parameterizations' roughness, calculated from high resolution topographic data using Gaussian weighting functions with cut-offs. We study the effects of their parameters to improve them. A FORTRAN program extracts topography data subsets of manageable size for a MATLAB program, tested on idealized cases, to visualize and calculate roughness on.
Students are introduced to modeling a complex system, gain a deeper appreciation of climate science, and develop programming skills and familiarity with MATLAB, while furthering climate science by improving our mixing schemes. We are incorporating climate research into our college curriculum. The PI is both a member of the turbulence group at NASA-GISS and an associate professor at Medgar Evers College of CUNY, an urban minority serving institution in central Brooklyn. Supported by NSF Award AGS-1359293.
Trial type mixing substantially reduces the response set effect in the Stroop task.
Hasshim, Nabil; Parris, Benjamin A
2017-03-20
The response set effect refers to the finding that an irrelevant incongruent colour-word produces greater interference when it is one of the response options (referred to as a response set trial), compared to when it is not (a non-response set trial). Despite being a key effect for models of selective attention, the magnitude of the effect varies considerably across studies. We report two within-subjects experiments that tested the hypothesis that presentation format modulates the magnitude of the response set effect. Trial types (e.g. response set, non-response set, neutral) were either presented in separate blocks (pure) or in blocks containing trials from all conditions presented randomly (mixed). In the first experiment we show that the response set effect is substantially reduced in the mixed block context as a result of a decrease in RTs to response set trials. By demonstrating the modulation of the response set effect under conditions of trial type mixing we present evidence that is difficult for models of the effect based on strategic, top-down biasing of attention to explain. In a second experiment we tested a stimulus-driven account of the response set effect by manipulating the number of colour-words that make up the non-response set of distractors. The results show that the greater the number of non-response set colour concepts, the smaller the response set effect. Alternative accounts of the data and its implications for research debating the automaticity of reading are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Modeling indoor particulate exposures in inner city school classrooms
Gaffin, Jonathan M.; Petty, Carter R.; Hauptman, Marissa; Kang, Choong-Min; Wolfson, Jack M.; Awad, Yara Abu; Di, Qian; Lai, Peggy S.; Sheehan, William J.; Baxi, Sachin; Coull, Brent A.; Schwartz, Joel D.; Gold, Diane R.; Koutrakis, Petros; Phipatanakul, Wanda
2016-01-01
Outdoor air pollution penetrates buildings and contributes to total indoor exposures. We investigated the relationship of indoor to outdoor particulate matter in inner-city school classrooms. The School Inner City Asthma Study investigates the effect of classroom-based environmental exposures on students with asthma in the northeast United States. Mixed-effects linear models were used to determine the relationships between indoor PM2.5 and BC and their corresponding outdoor concentrations, and to develop a model for predicting exposures to these pollutants. The indoor-outdoor sulfur ratio was used as an infiltration factor of outdoor fine particles. Weeklong concentrations of PM2.5 and BC in 199 samples from 136 classrooms (30 school buildings) were compared to those measured at a central monitoring site averaged over the same timeframe. Mixed effects regression models found significant random intercept and slope effects, which indicate that: 1) there are important PM2.5 sources in classrooms; 2) the penetration of outdoor PM2.5 particles varies by school; and 3) the site-specific outside PM2.5 levels (inferred by the models) differ from those observed at the central monitor site. Similar results were found for BC, except for the lack of indoor sources. The fitted predictions from the sulfur-adjusted models were moderately predictive of observed indoor pollutant levels (out-of-sample correlations: PM2.5 r2 = 0.68; BC r2 = 0.61). Our results suggest that PM2.5 has important classroom sources, which vary by school. Furthermore, using these mixed effects models, classroom exposures can be accurately predicted for dates when central site measures are available but indoor measures are not available. PMID:27599884
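The prediction use-case described above can be sketched with hypothetical fixed effects plus school-specific random effects; all numbers and names here are illustrative, not the study's estimates:

```python
# Hypothetical fitted values: population fixed effects plus school-specific
# random intercept/slope deviations (illustrative only).
fixed = {"intercept": 2.0, "slope": 0.5}
school_effects = {
    "school_A": {"intercept": 0.8, "slope": 0.10},    # strong indoor sources
    "school_B": {"intercept": -0.3, "slope": -0.05},  # tighter building envelope
}

def predict_indoor_pm25(school, outdoor_pm25):
    """Predict classroom PM2.5 from a central-site (outdoor) measurement,
    using the school's random effects when available, else fixed effects only."""
    re = school_effects.get(school, {"intercept": 0.0, "slope": 0.0})
    return (fixed["intercept"] + re["intercept"]) + \
           (fixed["slope"] + re["slope"]) * outdoor_pm25

print(predict_indoor_pm25("school_A", 10.0))  # ~8.8 (2.8 + 0.6 x 10)
```

The random slope is what lets outdoor-particle penetration vary by school, as the abstract's second finding requires.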
Effective Stochastic Model for Reactive Transport
NASA Astrophysics Data System (ADS)
Tartakovsky, A. M.; Zheng, B.; Barajas-Solano, D. A.
2017-12-01
We propose an effective stochastic advection-diffusion-reaction (SADR) model. Unlike traditional advection-dispersion-reaction models, the SADR model describes mechanical and diffusive mixing as two separate processes. In the SADR model, the mechanical mixing is driven by random advective velocity with the variance given by the coefficient of mechanical dispersion. The diffusive mixing is modeled as a Fickian diffusion with the effective diffusion coefficient. Both coefficients are given in terms of Peclet number (Pe) and the coefficient of molecular diffusion. We use published experimental results to demonstrate that for transport and bimolecular reactions in porous media the SADR model is significantly more accurate than the traditional dispersion model, which overestimates the mass of the reaction product by as much as 25%.
Zhao, Xin; Han, Meng; Ding, Lili; Calin, Adrian Cantemir
2018-01-01
The accurate forecast of carbon dioxide emissions is critical for policy makers to take proper measures to establish a low carbon society. This paper discusses a hybrid of the mixed data sampling (MIDAS) regression model and BP (back propagation) neural network (MIDAS-BP model) to forecast carbon dioxide emissions. Such analysis uses mixed frequency data to study the effects of quarterly economic growth on annual carbon dioxide emissions. The forecasting ability of MIDAS-BP is remarkably better than that of MIDAS, ordinary least squares (OLS), polynomial distributed lags (PDL), autoregressive distributed lags (ADL), and auto-regressive moving average (ARMA) models. The MIDAS-BP model is suitable for forecasting carbon dioxide emissions over both the short and the longer term. This research is expected to influence the methodology for forecasting carbon dioxide emissions by improving the forecast accuracy. Empirical results show that economic growth has both negative and positive effects on carbon dioxide emissions that last for 15 quarters. Carbon dioxide emissions are also affected by their own past changes within 3 years. Therefore, there is a need for policy makers to explore an alternative way to develop the economy, especially applying new energy policies to establish a low carbon society.
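MIDAS regressions typically aggregate the high-frequency regressor with a parsimonious lag polynomial; a common choice is the exponential Almon weighting sketched below. This is a generic MIDAS ingredient, not the paper's fitted model, and the theta values are illustrative:

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights, normalised to sum to one. Two shape
    parameters let a low-frequency regression load on many high-frequency
    lags without estimating a free coefficient per lag."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(n_lags)]
    total = sum(raw)
    return [r / total for r in raw]

# 15 quarterly lags of economic growth feeding one annual emissions value,
# mirroring the 15-quarter effect horizon reported above (thetas illustrative).
weights = exp_almon_weights(15, 0.1, -0.05)
print([round(w, 3) for w in weights[:4]])
```

With theta2 < 0 the weights rise to an early peak and then decay, so recent quarters dominate while distant quarters still contribute.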
Restructuring in response to case mix reimbursement in nursing homes: A contingency approach
Zinn, Jacqueline; Feng, Zhanlian; Mor, Vincent; Intrator, Orna; Grabowski, David
2013-01-01
Background: Resident-based case mix reimbursement has become the dominant mechanism for publicly funded nursing home care. In 1998 skilled nursing facility reimbursement changed from cost-based to case mix adjusted payments under the Medicare Prospective Payment System for the costs of all skilled nursing facility care provided to Medicare recipients. In addition, as of 2004, 35 state Medicaid programs had implemented some form of case mix reimbursement. Purpose: The purpose of the study is to determine if the implementation of Medicare and Medicaid case mix reimbursement increased the administrative burden on nursing homes, as evidenced by increased levels of nurses in administrative functions. Methodology/Approach: The primary data for this study come from the Centers for Medicare and Medicaid Services Online Survey Certification and Reporting database from 1997 through 2004, a national nursing home database containing aggregated facility-level information, including staffing, organizational characteristics and resident conditions, on all Medicare/Medicaid certified nursing facilities in the country. We conducted multivariate regression analyses using a facility fixed-effects model to examine the effects of the implementation of Medicaid case mix reimbursement and Medicare Prospective Payment System on changes in the level of total administrative nurse staffing in nursing homes. Findings: Both Medicaid case mix reimbursement and Medicare Prospective Payment System increased the level of administrative nurse staffing, on average by 5.5% and 4.0% respectively. However, lack of evidence for a substitution effect suggests that any decline in direct care staffing after the introduction of case mix reimbursement is not attributable to a shift from clinical nursing resources to administrative functions. Practice Implications: Our findings indicate that the administrative burden posed by case mix reimbursement has resource implications for all freestanding facilities.
At the margin, the increased administrative burden imposed by case mix may become a factor influencing a range of decisions, including resident admission and staff hiring. PMID:18360162
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (d_IG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought. PMID:25926805
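Random-effects pooling of the kind used above is commonly computed with the DerSimonian-Laird estimator; here is a self-contained sketch with toy effect sizes (not the data from the 63 studies):

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method:
    estimate between-study variance tau^2 from Cochran's Q, then pool the
    study effects with weights 1/(v_i + tau^2)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)

# Toy standardized mean differences and their variances (illustrative only).
pooled = dersimonian_laird([0.6, 0.9, 0.8], [0.04, 0.06, 0.05])
print(round(pooled, 3))
```

When the observed heterogeneity Q does not exceed its degrees of freedom, tau^2 is truncated at zero and the estimate reduces to the fixed-effect pooled value.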
Text-Based Recall and Extra-Textual Generations Resulting from Simplified and Authentic Texts
ERIC Educational Resources Information Center
Crossley, Scott A.; McNamara, Danielle S.
2016-01-01
This study uses a moving windows self-paced reading task to assess text comprehension of beginning and intermediate-level simplified texts and authentic texts by L2 learners engaged in a text-retelling task. Linear mixed effects (LME) models revealed statistically significant main effects for reading proficiency and text level on the number of…
ERIC Educational Resources Information Center
Sperber, Nina R.; Bosworth, Hayden B.; Coffman, Cynthia J.; Lindquist, Jennifer H.; Oddone, Eugene Z.; Weinberger, Morris; Allen, Kelli D.
2013-01-01
We explored whether the effects of a telephone-based osteoarthritis (OA) self-management support intervention differed by race and health literacy. Participants included 515 veterans with hip and/or knee OA. Linear mixed models assessed differential effects of the intervention compared with health education (HE) and usual care (UC) on pain…
NASA Astrophysics Data System (ADS)
Osman, M. K.; Hocking, W. K.; Tarasick, D. W.
2016-06-01
Vertical diffusion and mixing of tracers in the upper troposphere and lower stratosphere (UTLS) are not uniform, but primarily occur due to patches of turbulence that are intermittent in time and space. The effective diffusivity of regions of patchy turbulence is related to statistical parameters describing the morphology of turbulent events, such as lifetime, number, width, depth and local diffusivity (i.e., diffusivity within the turbulent patch) of the patches. While this has been recognized in the literature, the primary focus has been on well-mixed layers, with few exceptions. In such cases the local diffusivity is irrelevant, but this is not true for weakly and partially mixed layers. Here, we use both theory and numerical simulations to consider the impact of intermediate and weakly mixed layers, in addition to well-mixed layers. Previous approaches have considered only one dimension (vertical), and only a small number of layers (often one at each time step), and have examined mixing of constituents. We consider a two-dimensional case, with multiple layers (10 and more, up to hundreds and even thousands), having well-defined, non-infinite, lengths and depths. We then provide new formulas to describe cases involving well-mixed layers which supersede earlier expressions. In addition, we look in detail at layers that are not well mixed, and, as an interesting variation on previous models, our procedure is based on tracking the dispersion of individual particles, which is quite different to the earlier approaches which looked at mixing of constituents. We develop an expression which allows determination of the degree of mixing, and show that layers used in some previous models were in fact not well mixed and so produced erroneous results. 
We then develop a generalized model based on two-dimensional random-walk theory employing Rayleigh distributions, which allows us to derive a universal formula for diffusion rates for multiple two-dimensional layers with general degrees of mixing. We show that it is the largest, most vigorous and least common turbulent layers that make the major contribution to global diffusion. Finally, we make estimates of global-scale diffusion coefficients in the lower stratosphere and upper troposphere. For the lower stratosphere, κ_eff ≈ 2×10⁻² m² s⁻¹, assuming no other processes contribute to large-scale diffusion.
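The particle-tracking idea described above can be illustrated with a minimal sketch (not the authors' code; all parameter values are assumptions): release particles at a point, let each take independent Gaussian steps, and recover an effective diffusivity from the spread via Var[x(t)] = 2Dt per coordinate.

```python
# Illustrative sketch of estimating an effective diffusivity from the
# dispersion of individual particles in a 2-D random walk.
import math
import random

def estimate_diffusivity(d_true=1e-2, dt=1.0, steps=200, n_particles=2000, seed=1):
    """Release particles at the origin, take Gaussian steps with variance
    2*D*dt per coordinate, and recover D from Var[x(t)] = 2*D*t."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * d_true * dt)   # step std. dev. per coordinate
    xs = [0.0] * n_particles
    ys = [0.0] * n_particles
    for _ in range(steps):
        for i in range(n_particles):
            xs[i] += rng.gauss(0.0, sigma)
            ys[i] += rng.gauss(0.0, sigma)
    t = steps * dt
    # pooled variance over both coordinates (mean displacement is zero)
    var = (sum(x * x for x in xs) + sum(y * y for y in ys)) / (2 * n_particles)
    return var / (2.0 * t)                 # effective diffusivity estimate
```

With a few thousand particles the estimate recovers the input diffusivity to within a few percent; extending this to patchy, partially mixed layers is where the statistical modelling in the paper comes in.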
Marketing for a Web-Based Master's Degree Program in Light of Marketing Mix Model
ERIC Educational Resources Information Center
Pan, Cheng-Chang
2012-01-01
The marketing mix model was applied with a focus on Web media to re-strategize a Web-based Master's program in a southern state university in U.S. The program's existing marketing strategy was examined using the four components of the model: product, price, place, and promotion, in hopes to repackage the program (product) to prospective students…
Mixed-sediment transport modelling in Scheldt estuary with a physics-based bottom friction law
NASA Astrophysics Data System (ADS)
Bi, Qilong; Toorman, Erik A.
2015-04-01
In this study, the main objective is to investigate the performance of a few new physics-based process models by implementing them in a numerical model for the simulation of flow and morphodynamics in the Western Scheldt estuary. In order to deal with the complexity of the research domain and improve prediction accuracy, a 2D depth-averaged model has been set up as realistically as possible, i.e., including two-way hydrodynamic-sediment transport coupling, mixed sand-mud sediment transport (bedload transport as well as suspended load in the water column) and a dynamic non-uniform bed composition. A newly developed bottom friction law, based on a generalised mixing-length (GML) theory, is implemented, with which the new bed shear stress closure is constructed as the superposition of the turbulent and the laminar contributions. It allows the simulation of all turbulence conditions (fully developed turbulence, from hydraulically rough to hydraulically smooth, transient and laminar), and the drying and wetting of intertidal flats can now be modelled without specifying an inundation threshold. The benefit is that intertidal morphodynamics can now be modelled in great detail for the first time. Erosion and deposition in these areas can now be estimated with much higher accuracy, as can their contribution to the overall net fluxes. Furthermore, Krone's deposition law has been adapted to sand-mud mixtures, and the critical stresses for deposition are computed from suspension capacity theory, instead of being tuned. The model has been calibrated, and results show considerable differences in sediment fluxes compared to a traditional approach; the analysis also reveals that concentration effects play a very important role. The new bottom friction law with concentration effects can considerably alter the total sediment flux in the estuary, not only in terms of magnitude but also in terms of erosion and deposition patterns.
The value of a statistical life: a meta-analysis with a mixed effects regression model.
Bellavance, François; Dionne, Georges; Lebeau, Martin
2009-03-01
The value of a statistical life (VSL) is a very controversial topic, but one which is essential to the optimization of governmental decisions. We see a great variability in the values obtained from different studies. The source of this variability needs to be understood, in order to offer public decision-makers better guidance in choosing a value and to set clearer guidelines for future research on the topic. This article presents a meta-analysis based on 39 observations obtained from 37 studies (from nine different countries) which all use a hedonic wage method to calculate the VSL. Our meta-analysis is innovative in that it is the first to use the mixed effects regression model [Raudenbush, S.W., 1994. Random effects models. In: Cooper, H., Hedges, L.V. (Eds.), The Handbook of Research Synthesis. Russel Sage Foundation, New York] to analyze studies on the value of a statistical life. We conclude that the variability found in the values studied stems in large part from differences in methodologies.
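The mixed-effects regression cited above (Raudenbush) partitions variability into within-study and between-study components. A simpler relative of that model, the DerSimonian-Laird random-effects estimator, can be sketched in a few lines (this is an illustration of the general idea, not the authors' exact model; all data values below are hypothetical):

```python
# Minimal sketch of random-effects pooling across studies
# (DerSimonian-Laird estimator).
def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes, allowing between-study variance tau^2."""
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    sw = sum(w)
    mean_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - mean_fe) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2
```

A positive tau² estimate is exactly the "variability across studies" that the meta-analysis attributes to methodological differences.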
NASA Astrophysics Data System (ADS)
Zheng, Haifei; Tang, Hao; Xu, Xingya; Li, Ming
2014-08-01
Four different secondary airflow angles for the turbine inter-guide-vane burners with trapped vortex cavity were designed. Comparative analysis between combustion performances influenced by the variation of secondary airflow angle was carried out by using numerical simulation method. The turbulence was modeled using the Scale-Adaptive Simulation (SAS) turbulence model. Four cases with different secondary jet-flow angles (-45°, 0°, 30°, 60°) were studied. It was observed that the case with secondary jet-flows at 60° angle directed upwards (1) has good mixing effect; (2) mixing effect is the best although the flow field distributions inside both of the cavity and the main flow passage for the four models are very similar; (3) has complete combustion and symmetric temperature distribution on the exit section of guide vane (X = 70 mm), with uniform temperature distribution, less temperature gradient, and shrank local high temperature regions in the notch located on the guide vane.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
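The state-estimation core of the approach above is a Kalman filter run over each individual's time series. As a hedged sketch (parameter values assumed, not taken from the paper): for a scalar linear SDE dx = -k·x dt + σ_w dW observed as y = x + e, the extended Kalman filter reduces to the ordinary Kalman filter below.

```python
# Illustrative scalar Kalman filter for a linear SDE observed with noise;
# the paper embeds this kind of filter inside a population likelihood.
import math

def kalman_filter(ys, k=0.5, dt=1.0, sigma_w=0.3, sigma_e=0.2, x0=1.0, p0=1.0):
    a = math.exp(-k * dt)                              # exact discrete transition
    q = sigma_w**2 * (1 - math.exp(-2 * k * dt)) / (2 * k)  # process-noise variance
    r = sigma_e**2                                     # measurement-noise variance
    x, p = x0, p0
    estimates = []
    for y in ys:
        x, p = a * x, a * a * p + q                    # predict
        gain = p / (p + r)                             # Kalman gain
        x = x + gain * (y - x)                         # update with innovation
        p = (1 - gain) * p
        estimates.append(x)
    return estimates, p
```

Separating the process-noise variance q from the measurement-noise variance r is what lets the method distinguish model uncertainty from measurement error.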
NASA Astrophysics Data System (ADS)
Farajtabar, Ali; Jaberi, Fatemeh; Gharib, Farrokh
2011-12-01
The solvatochromic properties of the free base and the protonated 5, 10, 15, 20-tetrakis(4-sulfonatophenyl)porphyrin (TPPS) were studied in pure water, methanol, ethanol (protic solvents), dimethylsulfoxide, DMSO, (non-protic solvent), and their corresponding aqueous-organic binary mixed solvents. The correlation of the empirical solvent polarity scale ( ET) values of TPPS with composition of the solvents was analyzed by the solvent exchange model of Bosch and Roses to clarify the preferential solvation of the probe dyes in the binary mixed solvents. The solvation shell composition and the synergistic effects in preferential solvation of the solute dyes were investigated in terms of both solvent-solvent and solute-solvent interactions and also, the local mole fraction of each solvent composition was calculated in cybotactic region of the probe. The effective mole fraction variation may provide significant physico-chemical insights in the microscopic and molecular level of interactions between TPPS species and the solvent components and therefore, can be used to interpret the solvent effect on kinetics and thermodynamics of TPPS. The obtained results from the preferential solvation and solvent-solvent interactions have been successfully applied to explain the variation of equilibrium behavior of protonation of TPPS occurring in aqueous organic mixed solvents of methanol, ethanol and DMSO.
Computational Analyses of Pressurization in Cryogenic Tanks
NASA Technical Reports Server (NTRS)
Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chun P.; Field, Robert E.; Ryan, Harry
2010-01-01
A comprehensive numerical framework utilizing multi-element unstructured CFD and rigorous real fluid property routines has been developed to carry out analyses of propellant tank and delivery systems at NASA SSC. Traditionally, CFD modeling of pressurization and mixing in cryogenic tanks has been difficult, primarily because the fluids in the tank co-exist in different sub-critical and supercritical states with largely varying properties that have to be accurately accounted for in order to predict the correct mixing and phase change between the ullage and the propellant. For example, during tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant, including heat transfer and phase change effects, and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. In our modeling framework, we incorporated two different approaches to real fluids modeling: (a) the first approach is based on the HBMS model developed by Hirschfelder, Buehler, McGee and Sutton, and (b) the second approach is based on a cubic equation of state developed by Soave, Redlich and Kwong (SRK). Both approaches cover fluid properties and property variation spanning sub-critical gas and liquid states as well as supercritical states. Both models were rigorously tested, and properties for common fluids such as oxygen, nitrogen, and hydrogen were compared against NIST data in both the sub-critical and supercritical regimes.
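To make the SRK equation of state concrete, here is a hedged sketch (assumed critical constants, not the NASA SSC code) that computes the compressibility factor Z by Newton iteration on the SRK cubic:

```python
# Compressibility factor Z from the Soave-Redlich-Kwong cubic EOS.
# Units: T in K, P in Pa, Pc in Pa; R in J/(mol K).
def srk_z(T, P, Tc, Pc, omega, R=8.314):
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - (T / Tc) ** 0.5)) ** 2
    a = 0.42748 * R**2 * Tc**2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # cubic: Z^3 - Z^2 + (A - B - B^2) Z - A*B = 0; seek the gas root near Z = 1
    z = 1.0
    for _ in range(50):                              # Newton iteration
        f = z**3 - z**2 + (A - B - B**2) * z - A * B
        df = 3 * z**2 - 2 * z + (A - B - B**2)
        z -= f / df
    return z
```

For nitrogen (Tc ≈ 126.2 K, Pc ≈ 3.396 MPa, ω ≈ 0.037) at 300 K and 1 bar the gas is nearly ideal, so Z should come out very close to 1; departures from unity grow in the dense and supercritical regimes the abstract describes.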
Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R
2018-01-01
This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided.
NASA Astrophysics Data System (ADS)
Hu, Hongliang; Xin, Zihua; Liu, Weijie
2006-09-01
The phase diagrams of mixed ferro-ferrimagnets composed of Prussian blue analogs of the form (A_xB_{1-x})_yC, consisting of spins S = 1, S = 5/2 and S = 3/2, are investigated using effective-field theory with correlations, based on the Ising model. The phase diagrams, which are related to experimental work on the molecule-based ferro-ferrimagnet (Ni(II)_xMn(II)_{1-x})_{1.5}[Cr(III)(CN)_6], are obtained. Magnetic properties such as the magnetization, critical temperature, compensation temperature, internal energy and specific heat are also calculated.
Semiparametric mixed-effects analysis of PK/PD models using differential equations.
Wang, Yi; Eskridge, Kent M; Zhang, Shunpu
2008-08-01
Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
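The model class above, dx/dt = A(t)x + B(t), can be integrated with any standard ODE scheme; the paper's contribution is estimating B(t) as a penalized spline from data. A minimal sketch (constant A and B here, purely for illustration, so the result can be checked against the closed form x(t) = (x0 + b/a)e^{at} - b/a):

```python
# Explicit Euler integration of dx/dt = A(t)*x + B(t).  In the paper B(t)
# is a nonparametric spline; any callable can be passed for a_fun / b_fun.
def integrate_linear_ode(x0, a_fun, b_fun, t_end, n_steps=10000):
    dt = t_end / n_steps
    x, t = x0, 0.0
    for _ in range(n_steps):
        x += dt * (a_fun(t) * x + b_fun(t))
        t += dt
    return x
```

Swapping the constant b_fun for a spline evaluated at t gives the semiparametric structural model without changing the integrator.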
Stochastic nonlinear mixed effects: a metformin case study.
Matzuka, Brett; Chittenden, Jason; Monteleone, Jonathan; Tran, Hien
2016-02-01
In nonlinear mixed effect (NLME) modeling, the intra-individual variability is a collection of errors due to assay sensitivity, dosing, sampling, as well as model misspecification. Utilizing stochastic differential equations (SDE) within the NLME framework allows the decoupling of the measurement errors from the model misspecification. This makes the SDE approach a novel tool for model refinement. Using Metformin clinical pharmacokinetic (PK) data, model development using SDEs in population PK modeling was carried out to study the dynamics of the absorption rate. A base model was constructed and then refined by using the system noise terms of the SDEs to track model parameters and model misspecification. This provides the unique advantage of making no underlying assumptions about the structural model for the absorption process while quantifying insufficiencies in the current model. This article focuses on implementing the extended Kalman filter and unscented Kalman filter in an NLME framework for parameter estimation and model development, comparing the methodologies, and illustrating their challenges and utility. The Kalman filter algorithms were successfully implemented in NLME models using MATLAB, with run time differences between the ODE and SDE methods comparable to the differences found by Kakhi for their stochastic deconvolution.
NASA Astrophysics Data System (ADS)
Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.
2008-12-01
The standard dual-component and two-member linear mixing model is often used to quantify water mixing of different sources. However, it is no longer applicable whenever actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which are therefore diluted in the leachates. A multicomponent, two-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on the fracture-matrix interaction. The methodology and formulations described here are applicable to many kinds of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.
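A hedged sketch of the kind of inversion involved (the parameterization and tracer values below are illustrative assumptions, not the paper's formulation): assume each observed concentration obeys c_obs = d·(f·cA + (1-f)·cB), where f is the mixing fraction of endmember A and d an unknown dilution factor, and recover both by least squares over several conservative components.

```python
# Recover mixing fraction f and dilution factor d from multiple
# conservative tracers, assuming c_obs = d * (f*cA + (1-f)*cB).
def fit_mixing_with_dilution(c_obs, c_a, c_b, n_grid=1000):
    best = None
    for i in range(n_grid + 1):
        f = i / n_grid
        mix = [f * a + (1 - f) * b for a, b in zip(c_a, c_b)]
        # for a fixed f, the best dilution d has a closed-form LSQ solution
        d = sum(o * m for o, m in zip(c_obs, mix)) / sum(m * m for m in mix)
        resid = sum((o - d * m) ** 2 for o, m in zip(c_obs, mix))
        if best is None or resid < best[0]:
            best = (resid, f, d)
    return best[1], best[2]
```

With two or more independent components the pair (f, d) is uniquely determined, which is why the multicomponent extension works where the classical two-member model fails under dilution.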
The effect of state medicaid case-mix payment on nursing home resident acuity.
Feng, Zhanlian; Grabowski, David C; Intrator, Orna; Mor, Vincent
2006-08-01
To examine the relationship between Medicaid case-mix payment and nursing home resident acuity. Longitudinal Minimum Data Set (MDS) resident assessments from 1999 to 2002 and Online Survey Certification and Reporting (OSCAR) data from 1996 to 2002, for all freestanding nursing homes in the 48 contiguous U.S. states. We used a facility fixed-effects model to examine the effect of introducing state case-mix payment on changes in nursing home case-mix acuity. Facility acuity was measured by aggregating the nursing case-mix index (NCMI) from the MDS using the Resource Utilization Group (Version III) resident classification system, separately for new admits and long-stay residents, and by an OSCAR-derived index combining a range of activity of daily living dependencies and special treatment measures. We followed facilities over the study period to create a longitudinal data file based on the MDS and OSCAR, respectively, and linked facilities with longitudinal data on state case-mix payment policies for the same period. Across three acuity measures and two data sources, we found that states shifting to case-mix payment increased nursing home acuity levels over the study period. Specifically, we observed a 2.5 percent increase in the average acuity of new admits and a 1.3 to 1.4 percent increase in the acuity of long-stay residents, following the introduction of case-mix payment. The adoption of case-mix payment increased access to care for higher acuity Medicaid residents.
NASA Technical Reports Server (NTRS)
Steinberger, Craig J.
1991-01-01
The effects of compressibility, chemical reaction exothermicity, and non-equilibrium chemical modeling in a reacting plane mixing layer were investigated by means of two dimensional direct numerical simulations. The chemical reaction was irreversible and second order of the type A + B yields Products + Heat. The general governing fluid equations of a compressible reacting flow field were solved by means of high order finite difference methods. Physical effects were then determined by examining the response of the mixing layer to variation of the relevant non-dimensionalized parameters. The simulations show that increased compressibility generally results in a suppressed mixing, and consequently a reduced chemical reaction conversion rate. Reaction heat release was found to enhance mixing at the initial stages of the layer growth, but had a stabilizing effect at later times. The increased stability manifested itself in the suppression or delay of the formation of large coherent structures within the flow. Calculations were performed for a constant rate chemical kinetics model and an Arrhenius type kinetic prototype. The choice of the model was shown to have an effect on the development of the flow. The Arrhenius model caused a greater temperature increase due to reaction than the constant kinetic model. This had the same effect as increasing the exothermicity of the reaction. Localized flame quenching was also observed when the Zeldovich number was relatively large.
Discrete bivariate population balance modelling of heteroaggregation processes.
Rollié, Sascha; Briesen, Heiko; Sundmacher, Kai
2009-08-15
Heteroaggregation in binary particle mixtures was simulated with a discrete population balance model in terms of two internal coordinates describing the particle properties. The considered particle species are of different size and zeta-potential. Property space is reduced with a semi-heuristic approach to enable an efficient solution. Aggregation rates are based on deterministic models for Brownian motion and stability, under consideration of DLVO interaction potentials. A charge-balance kernel is presented, relating the electrostatic surface potential to the property space by a simple charge balance. Parameter sensitivity with respect to the fractal dimension, aggregate size, hydrodynamic correction, ionic strength and absolute particle concentration was assessed. Results were compared to simulations with the literature kernel based on geometric coverage effects for clusters with heterogeneous surface properties. In both cases electrostatic phenomena, which dominate the aggregation process, show identical trends: impeded cluster-cluster aggregation at low particle mixing ratio (1:1), restabilisation at high mixing ratios (100:1) and formation of complex clusters for intermediate ratios (10:1). The particle mixing ratio controls the surface coverage extent of the larger particle species. Simulation results are compared to experimental flow cytometric data and show very satisfactory agreement.
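A much-simplified sketch of the discrete population balance used above (one internal coordinate and a constant kernel, whereas the paper's kernels are Brownian/DLVO-based over a two-dimensional property space):

```python
# One explicit Euler step of the discrete Smoluchowski coagulation equation.
# n[k] is the number density of clusters containing k+1 primary particles.
def smoluchowski_step(n, kernel, dt):
    size = len(n)
    dn = [0.0] * size
    for i in range(size):
        for j in range(size):
            rate = kernel * n[i] * n[j]
            dn[i] -= rate                      # cluster i consumed by aggregation
            if i + j + 1 < size:               # product has size (i+1)+(j+1)
                dn[i + j + 1] += 0.5 * rate    # 1/2 avoids double counting pairs
    return [ni + dt * dni for ni, dni in zip(n, dn)]
```

As long as the size grid is large enough that no mass leaves the truncated domain, total particle mass (the first moment) is conserved step by step, which is a standard sanity check for population balance solvers.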
Shear-flexible finite-element models of laminated composite plates and shells
NASA Technical Reports Server (NTRS)
Noor, A. K.; Mathers, M. D.
1975-01-01
Several finite-element models are applied to the linear static, stability, and vibration analysis of laminated composite plates and shells. The study is based on linear shallow-shell theory, with the effects of shear deformation, anisotropic material behavior, and bending-extensional coupling included. Both stiffness (displacement) and mixed finite-element models are considered. Discussion is focused on the effects of shear deformation and anisotropic material behavior on the accuracy and convergence of different finite-element models. Numerical studies are presented which show the effects of increasing the order of the approximating polynomials, adding internal degrees of freedom, and using derivatives of generalized displacements as nodal parameters.
On the validity of effective formulations for transport through heterogeneous porous media
NASA Astrophysics Data System (ADS)
de Dreuzy, J.-R.; Carrera, J.
2015-11-01
Geological heterogeneity enhances spreading of solutes, and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through Heterogeneous Porous Media (HPM), it should honor mean advection, mixing and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the Multi-Rate Mass Transfer (MRMT) model to reproduce mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not reproduce the local concentration structures, it induces smaller non-dispersive mixing than HPM. However, long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, non-dispersive mixing cannot.
System equivalent model mixing
NASA Astrophysics Data System (ADS)
Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis
2018-05-01
This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency-based models, of either numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.
Robins, Meridith T.; Lu, Julie
2016-01-01
The number of highly caffeinated products has increased dramatically in the past few years. Among these products, highly caffeinated energy drinks are the most heavily advertised and purchased, which has resulted in increased incidences of co-consumption of energy drinks with alcohol. Despite the growing number of adolescents and young adults reporting caffeine-mixed alcohol use, knowledge of the potential consequences associated with co-consumption has been limited to survey-based results and in-laboratory human behavioral testing. Here, we investigate the effect of repeated adolescent (post-natal days P35-61) exposure to caffeine-mixed alcohol in C57BL/6 mice on common drug-related behaviors such as locomotor sensitivity, drug reward and cross-sensitivity, and natural reward. To determine changes in neurological activity resulting from adolescent exposure, we monitored changes in expression of the transcription factor ΔFosB in the dopaminergic reward pathway as a sign of long-term increases in neuronal activity. Repeated adolescent exposure to caffeine-mixed alcohol induced significant locomotor sensitization, desensitized cocaine conditioned place preference, decreased cocaine locomotor cross-sensitivity, and increased natural reward consumption. We also observed increased accumulation of ΔFosB in the nucleus accumbens following repeated adolescent caffeine-mixed alcohol exposure compared to alcohol or caffeine alone. Using our exposure model, we found that repeated exposure to caffeine-mixed alcohol during adolescence causes unique behavioral and neurochemical effects not observed in mice exposed to caffeine or alcohol alone. Based on similar findings for different substances of abuse, it is possible that repeated exposure to caffeine-mixed alcohol during adolescence could potentially alter or escalate future substance abuse as a means to compensate for these behavioral and neurochemical alterations. PMID:27380261
NASA Astrophysics Data System (ADS)
González, S. J.; Pozzi, E. C. C.; Monti Hughes, A.; Provenzano, L.; Koivunoro, H.; Carando, D. G.; Thorp, S. I.; Casal, M. R.; Bortolussi, S.; Trivillin, V. A.; Garabalino, M. A.; Curotto, P.; Heber, E. M.; Santa Cruz, G. A.; Kankaanranta, L.; Joensuu, H.; Schwint, A. E.
2017-10-01
Boron neutron capture therapy (BNCT) is a treatment modality that combines different radiation qualities. Since the severity of biological damage following irradiation depends on the radiation type, a quantity different from absorbed dose is required to explain the effects observed in the clinical BNCT in terms of outcome compared with conventional photon radiation therapy. A new approach for calculating photon iso-effective doses in BNCT was introduced previously. The present work extends this model to include information from dose-response assessments in animal models and humans. Parameters of the model were determined for tumour and precancerous tissue using dose-response curves obtained from BNCT and photon studies performed in the hamster cheek pouch in vivo models of oral cancer and/or pre-cancer, and from head and neck cancer radiotherapy data with photons. To this end, suitable expressions of the dose-limiting Normal Tissue Complication and Tumour Control Probabilities for the reference radiation and for the mixed field BNCT radiation were developed. Pearson’s correlation coefficients and p-values showed that TCP and NTCP models agreed with experimental data (with r > 0.87 and p-values >0.57). The photon iso-effective dose model was applied retrospectively to evaluate the dosimetry in tumours and mucosa for head and neck cancer patients treated with BNCT in Finland. Photon iso-effective doses in tumour were lower than those obtained with the standard RBE-weighted model (between 10% to 45%). The results also suggested that the probabilities of tumour control derived from photon iso-effective doses are more adequate to explain the clinical responses than those obtained with the RBE-weighted values. The dosimetry in the mucosa revealed that the photon iso-effective doses were about 30% to 50% higher than the corresponding RBE-weighted values. 
While the RBE-weighted doses are unable to predict mucosa toxicity, predictions based on the proposed model are compatible with the observed clinical outcome. The extension of the photon iso-effective dose model has allowed, for the first time, the determination of the photon iso-effective dose for unacceptable complications in the dose-limiting normal tissue. Finally, the formalism developed in this work to compute photon-equivalent doses can be applied to other therapies that combine mixed radiation fields, such as hadron therapy.
González, S J; Pozzi, E C C; Monti Hughes, A; Provenzano, L; Koivunoro, H; Carando, D G; Thorp, S I; Casal, M R; Bortolussi, S; Trivillin, V A; Garabalino, M A; Curotto, P; Heber, E M; Santa Cruz, G A; Kankaanranta, L; Joensuu, H; Schwint, A E
2017-10-03
Boron neutron capture therapy (BNCT) is a treatment modality that combines different radiation qualities. Since the severity of biological damage following irradiation depends on the radiation type, a quantity different from absorbed dose is required to explain the effects observed in the clinical BNCT in terms of outcome compared with conventional photon radiation therapy. A new approach for calculating photon iso-effective doses in BNCT was introduced previously. The present work extends this model to include information from dose-response assessments in animal models and humans. Parameters of the model were determined for tumour and precancerous tissue using dose-response curves obtained from BNCT and photon studies performed in the hamster cheek pouch in vivo models of oral cancer and/or pre-cancer, and from head and neck cancer radiotherapy data with photons. To this end, suitable expressions of the dose-limiting Normal Tissue Complication and Tumour Control Probabilities for the reference radiation and for the mixed field BNCT radiation were developed. Pearson's correlation coefficients and p-values showed that TCP and NTCP models agreed with experimental data (with r > 0.87 and p-values >0.57). The photon iso-effective dose model was applied retrospectively to evaluate the dosimetry in tumours and mucosa for head and neck cancer patients treated with BNCT in Finland. Photon iso-effective doses in tumour were lower than those obtained with the standard RBE-weighted model (between 10% to 45%). The results also suggested that the probabilities of tumour control derived from photon iso-effective doses are more adequate to explain the clinical responses than those obtained with the RBE-weighted values. The dosimetry in the mucosa revealed that the photon iso-effective doses were about 30% to 50% higher than the corresponding RBE-weighted values. 
While the RBE-weighted doses are unable to predict mucosa toxicity, predictions based on the proposed model are compatible with the observed clinical outcome. The extension of the photon iso-effective dose model has allowed, for the first time, the determination of the photon iso-effective dose for unacceptable complications in the dose-limiting normal tissue. Finally, the formalism developed in this work to compute photon-equivalent doses can be applied to other therapies that combine mixed radiation fields, such as hadron therapy.
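The paper's fitted TCP/NTCP expressions are not given here, but the general shape of such dose-response curves can be sketched with a standard logistic model parameterized by the 50% response dose D50 and the normalized slope gamma50. All parameter values below are hypothetical, for illustration only, not the paper's fits.

```python
import math

def logistic_response(dose, d50, gamma50):
    """Sigmoid dose-response curve (logistic form) parameterized by the
    50% response dose d50 and the normalized slope gamma50 at that dose."""
    return 1.0 / (1.0 + math.exp(4.0 * gamma50 * (1.0 - dose / d50)))

# Hypothetical parameters for illustration only (not the paper's fits):
def tcp(d):   # tumour control probability
    return logistic_response(d, d50=50.0, gamma50=1.5)

def ntcp(d):  # normal tissue complication probability
    return logistic_response(d, d50=65.0, gamma50=2.0)

for d in (40.0, 50.0, 60.0, 70.0):
    print(f"D = {d:5.1f} Gy  TCP = {tcp(d):.3f}  NTCP = {ntcp(d):.3f}")
```

By construction the curve passes through 0.5 at D50, and a therapeutic window exists wherever TCP is high while NTCP remains low.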
Serious Game-Based and Nongame-Based Online Courses: Learning Experiences and Outcomes
ERIC Educational Resources Information Center
Hess, Taryn; Gunter, Glenda
2013-01-01
When combining the increasing use of online educational environments, the push to use serious video games and the lack of research on the effectiveness of online learning environments and video games, there is a clear need for further investigation into the use of serious video games in an online format. A mixed methods model was used to triangulate…
NASA Astrophysics Data System (ADS)
Auduson, Aaron E.
2018-07-01
One of the most common problems in the North Sea is the occurrence of salt (a solid) in the pores of Triassic sandstones. Many wells have failed due to interpretation errors based on conventional fluid substitution as described by the Gassmann equation. A way forward is to devise a means to model and characterize the salt-plugging scenarios. Modelling the effects of fluids and solids on rock velocity and density will ascertain the influence of pore-material type on seismic data. In this study, two different rock physics modelling approaches are adopted for solid-fluid substitution, namely the extended Gassmann theory and multi-mineral mixing modelling. Using the modified (new) Gassmann equation, solid-and-fluid substitutions were performed from gas or water filling in the hydrocarbon reservoirs to salt materials being the pore fill. Inverse substitutions were also performed from the salt-filled case to gas- and water-filled scenarios. The modelling results are very consistent: salt-plugged wells clearly show different elastic parameters when compared with gas- and water-bearing wells. While the Gassmann equation-based modelling was used to compute discrete effective bulk and shear moduli of the salt plugs, the algorithm based on mineral mixing (Hashin-Shtrikman bounds) can only predict elastic moduli within a narrow range. Thus, inasmuch as both of these methods can be used to model elastic parameters and characterize pore-fill scenarios, the new Gassmann-based algorithm, which predicts the elastic parameters more precisely, is recommended for forward seismic modelling and characterization of this reservoir and other reservoir types. This will significantly help in reducing seismic interpretation errors.
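The study's extended "new Gassmann" solid-substitution algorithm is not reproduced here, but the classic Gassmann fluid-substitution relation it builds on can be sketched as follows. The moduli and porosity are hypothetical sandstone values, not taken from the study.

```python
def gassmann_ksat(k_dry, k_min, k_fluid, phi):
    """Classic Gassmann fluid substitution: saturated-rock bulk modulus
    from the dry-frame modulus k_dry, mineral modulus k_min, pore-fluid
    modulus k_fluid, and porosity phi (moduli in consistent units)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fluid + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative moduli in GPa (hypothetical sandstone, not from the study):
k_dry, k_min, phi = 12.0, 37.0, 0.25
k_brine = gassmann_ksat(k_dry, k_min, k_fluid=2.5, phi=phi)
k_gas = gassmann_ksat(k_dry, k_min, k_fluid=0.04, phi=phi)
print(f"K_sat with brine: {k_brine:.2f} GPa, with gas: {k_gas:.2f} GPa")
```

A stiff pore fill (brine, and all the more a solid such as salt) raises the saturated modulus far more than a compressible gas does, which is the contrast the abstract exploits to separate salt-plugged from gas- and water-bearing wells.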
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, Rr2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If Rr2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of Rr2 away from 0 indicates variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects, so the model can be estimated using the dummy variable approach. We derive explicit formulas for Rr2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic meta-analysis example combining 13 studies on tuberculosis vaccine. PMID:23750070
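For the random-intercept special case, the idea behind Rr2 can be sketched with a method-of-moments estimate of the variance components: the proportion of conditional variance carried by the random intercept. This illustrative version is an assumption-laden stand-in; the paper's exact Rr2 formulas differ in general.

```python
import random
import statistics

def variance_components(groups):
    """One-way ANOVA (method-of-moments) estimates of the between-group
    and within-group variances for balanced data; 'groups' is a list of
    equal-length lists of observations."""
    m = len(groups)                # number of groups
    n = len(groups[0])             # observations per group
    grand = statistics.mean(x for g in groups for x in g)
    means = [statistics.mean(g) for g in groups]
    msb = n * sum((mu - grand) ** 2 for mu in means) / (m - 1)
    msw = sum((x - mu) ** 2 for g, mu in zip(groups, means) for x in g) / (m * (n - 1))
    sigma_u2 = max((msb - msw) / n, 0.0)   # random-intercept variance
    return sigma_u2, msw                    # msw estimates residual variance

def random_effects_r2(groups):
    """Illustrative Rr2 for the random-intercept model: the share of
    conditional variance attributed to the random effect."""
    su2, se2 = variance_components(groups)
    return su2 / (su2 + se2)

random.seed(1)
# Simulate 200 groups of 5 observations, y_ij = u_i + e_ij, with
# Var(u) = 4 and Var(e) = 1, so the true proportion is 0.8.
groups = []
for _ in range(200):
    u = random.gauss(0.0, 2.0)
    groups.append([u + random.gauss(0.0, 1.0) for _ in range(5)])
print(f"estimated proportion of variance from random effects: {random_effects_r2(groups):.2f}")
```

A value near 0 would suggest dropping the random intercept (plain regression suffices); a value near 1 suggests treating the groups as fixed dummies, matching the interpretation in the abstract.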
Uriu, Koichiro; Bhavna, Rajasekaran; Oates, Andrew C; Morelli, Luis G
2017-08-15
In development and disease, cells move as they exchange signals. One example is found in vertebrate development, during which the timing of segment formation is set by a 'segmentation clock', in which oscillating gene expression is synchronized across a population of cells by Delta-Notch signaling. Delta-Notch signaling requires local cell-cell contact, but in the zebrafish embryonic tailbud, oscillating cells move rapidly, exchanging neighbors. Previous theoretical studies proposed that this relative movement, or cell mixing, might alter signaling and thereby enhance synchronization. However, it remains unclear whether the mixing timescale in the tissue is in the right range for this effect, because a framework to reliably measure the mixing timescale and compare it with the signaling timescale has been lacking. Here, we develop such a framework by deriving a quantitative description of cell mixing that does not require an external reference frame, and by constructing a physical model of cell movement based on the data. Numerical simulations show that mixing with experimentally observed statistics enhances synchronization of coupled phase oscillators, suggesting that mixing in the tailbud is fast enough to affect the coherence of rhythmic gene expression. Our approach will find general application in analyzing the relative movements of communicating cells during development and disease. © 2017. Published by The Company of Biologists Ltd.
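A minimal toy sketch of the effect, assuming identical Kuramoto-style phase oscillators on a ring with random position swaps standing in for cell mixing. This is an illustrative setup under stated assumptions, not the paper's tailbud model or its measured mixing statistics.

```python
import math
import random

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r = 1 means full synchrony."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

def simulate(n=50, k=1.0, dt=0.1, steps=250, swaps_per_step=20, seed=3):
    """Identical phase oscillators coupled to their two ring neighbours;
    random position swaps each step stand in for cell mixing."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            left, right = phases[(i - 1) % n], phases[(i + 1) % n]
            drift = math.sin(left - phases[i]) + math.sin(right - phases[i])
            new.append(phases[i] + dt * k * drift)      # Euler step
        phases = new
        for _ in range(swaps_per_step):                 # neighbour exchange
            i, j = rng.randrange(n), rng.randrange(n)
            phases[i], phases[j] = phases[j], phases[i]
    return order_parameter(phases)

r_static = simulate(swaps_per_step=0)
r_mixed = simulate(swaps_per_step=20)
print(f"final order parameter without mixing: {r_static:.2f}, with mixing: {r_mixed:.2f}")
```

With only local coupling, synchrony spreads diffusively and slowly around the ring; frequent position exchange effectively couples each oscillator to the whole population, so the mixed run reaches a higher order parameter in the same time, in the spirit of the abstract's conclusion.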
Simulation of low clouds in the Southeast Pacific by the NCEP GFS: sensitivity to vertical mixing
NASA Astrophysics Data System (ADS)
Sun, R.; Moorthi, S.; Xiao, H.; Mechoso, C. R.
2010-12-01
The NCEP Global Forecast System (GFS) model has an important systematic error shared by many other models: stratocumuli are missed over the subtropical eastern oceans. It is shown that this error can be alleviated in the GFS by introducing a consideration of the low-level inversion and making two modifications in the model's representation of vertical mixing. The modifications consist of (a) the elimination of background vertical diffusion above the inversion and (b) the incorporation of a stability parameter based on the cloud-top entrainment instability (CTEI) criterion, which limits the strength of shallow convective mixing across the inversion. A control simulation and three experiments are performed in order to examine both the individual and combined effects of the modifications on the generation of stratocumulus clouds. Individually, both modifications result in enhanced cloudiness in the Southeast Pacific (SEP) region, although the cloudiness is still low compared to the ISCCP climatology. If the modifications are applied together, however, the total cloudiness produced in the southeast Pacific has realistic values. This nonlinearity arises as the effects of both modifications reinforce each other in reducing the leakage of moisture across the inversion. More moisture is trapped below the inversion than in the control run without modifications, which leads to an increase in cloud amount and cloud-top radiative cooling. A positive feedback, in which cloud-top radiative cooling enhances turbulent mixing in the planetary boundary layer, then establishes and maintains the stratocumulus cover. Although the amount of total cloudiness obtained with both modifications has realistic values, the relative contributions of low, middle, and high layers tend to differ from the observations.
These results demonstrate that it is possible to simulate realistic marine boundary clouds in large-scale models by implementing direct and physically based improvements in the model parameterizations.
The Mixed Effects Trend Vector Model
ERIC Educational Resources Information Center
de Rooij, Mark; Schouteden, Martijn
2012-01-01
Maximum likelihood estimation of mixed effect baseline category logit models for multinomial longitudinal data can be prohibitive due to the integral dimension of the random effects distribution. We propose to use multidimensional unfolding methodology to reduce the dimensionality of the problem. As a by-product, readily interpretable graphical…
NASA Astrophysics Data System (ADS)
Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan
2018-01-01
This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring statistical features of the noise power, an improved standard Kalman filtering algorithm is proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established; mixed noise is then handled by incorporating it into the measurement and state equations. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm are derived from the established model. An improved central difference Kalman filter is proposed for alternating current (AC) signal processing, which improves the sampling strategy and the treatment of colored noise. Real-time estimation and correction of noise are achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms have a good filtering effect on AC and DC signals with the mixed noise of an OCT. Furthermore, the proposed algorithm is able to correct noise in real time during the OCT filtering process.
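The paper's improved filter is not reproduced here, but the baseline it builds on, a standard scalar Kalman filter tracking a DC level in noise, can be sketched as follows. All noise variances and the signal level are illustrative assumptions.

```python
import random

def kalman_dc(measurements, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant DC level observed in
    noise.  State model: x_k = x_{k-1} + w_k with Var(w) = q;
    measurement: z_k = x_k + v_k with Var(v) = r.
    Returns the sequence of filtered estimates."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                   # predict: propagate state uncertainty
        gain = p / (p + r)          # Kalman gain
        x = x + gain * (z - x)      # update with the innovation z - x
        p = (1.0 - gain) * p
        out.append(x)
    return out

random.seed(0)
true_level = 2.0
zs = [true_level + random.gauss(0.0, 0.7) for _ in range(400)]
est = kalman_dc(zs, q=1e-4, r=0.49)
print(f"last raw sample: {zs[-1]:.3f}, filtered DC estimate: {est[-1]:.3f}")
```

With a small process variance q the filter behaves like a long exponential average, pulling the estimate tightly onto the underlying DC level while individual samples remain noisy.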
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rupšys, P.
A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
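The bivariate normal (Gaussian) copula density used to couple the diameter and height marginals can be written as phi2(x, y; rho) / (phi(x) phi(y)) with x = Phi^-1(u) and y = Phi^-1(v). A minimal sketch of that density, not the study's MAPLE implementation:

```python
import math
from statistics import NormalDist

def gaussian_copula_density(u, v, rho):
    """Bivariate Gaussian copula density c(u, v; rho) on (0, 1)^2:
    the ratio of the bivariate normal density with correlation rho to
    the product of the two standard normal marginal densities."""
    x = NormalDist().inv_cdf(u)   # Phi^-1(u)
    y = NormalDist().inv_cdf(v)   # Phi^-1(v)
    one = 1.0 - rho * rho
    expo = -(rho * rho * (x * x + y * y) - 2.0 * rho * x * y) / (2.0 * one)
    return math.exp(expo) / math.sqrt(one)

print(gaussian_copula_density(0.3, 0.7, 0.0))   # rho = 0: density 1 (independence)
print(gaussian_copula_density(0.9, 0.9, 0.8))   # concordant pair, rho > 0
```

At rho = 0 the copula density is identically 1, recovering independent marginals; a positive rho concentrates mass on concordant (large diameter, large height) pairs, which is what couples the two fitted conditional densities in the abstract.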
NASA Astrophysics Data System (ADS)
Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.
2016-12-01
Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed-effects models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximately 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm³ (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects, and thus the soil C model may be most valuable for sites in which field measurements of soil C exist.
Pillai, Goonaseelan Colin; Mentré, France; Steimer, Jean-Louis
2005-04-01
Few scientific contributions have made a significant impact unless there was a champion who had the vision to see the potential for their use in seemingly disparate areas, and who then drove active implementation. In this paper, we present a historical summary of the development of non-linear mixed effects (NLME) modeling up to the more recent extensions of this statistical methodology. The paper places strong emphasis on the pivotal role played by Lewis B. Sheiner (1940-2004), who used this statistical methodology to elucidate solutions to real problems identified in clinical practice and in medical research, and on how he drove implementation of the proposed solutions. A succinct overview of the evolution of the NLME modeling methodology is presented as well as ideas on how its expansion helped to provide guidance for a more scientific view of (model-based) drug development that reduces empiricism in favor of critical quantitative thinking and decision making.
Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine
2018-01-01
Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify the treatment responders and the non-responders. In the context of longitudinal cluster analyses, sample size and variability in the times of measurement are the main issues with current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline (hereafter, the extended-baseline method). The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random predictions, considering several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all clustering options of the extended-baseline method with the latent-class mixed model. The extended-baseline method with the two model-based algorithms was the most robust. With the non-parametric algorithms, it failed when the variance of the treatment effect differed between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when between-patient slope variability was high. Two real data sets, on a neurodegenerative disease and on obesity, illustrate the extended-baseline method and show how clustering may help to identify the marker(s) of treatment response. Applying the extended-baseline method in exploratory analysis, as a first stage before setting up stratified designs, can provide a better estimation of treatment effect in future clinical trials.
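The second step, clustering the random predictions from the mixed model, can be sketched with the simplest partitioning option: one-dimensional k-means on random-slope predictions. The patient slope values below are invented for illustration and stand in for step-1 output.

```python
import random
import statistics

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Lloyd's k-means on scalar values (here: random-slope predictions
    from step 1 of the method); returns cluster labels and centres."""
    rng = random.Random(seed)
    centres = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each value to its nearest centre, then recompute centres
        labels = [min(range(k), key=lambda c: abs(v - centres[c])) for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centres[c] = statistics.mean(members)
    return labels, centres

# Hypothetical random-slope predictions (treatment-time interaction) for
# 12 patients: responders decline under treatment, non-responders stay flat.
slopes = [-2.1, -1.8, -2.4, -1.9, -2.2, -2.0, 0.1, -0.2, 0.3, 0.0, 0.2, -0.1]
labels, centres = kmeans_1d(slopes, k=2)
print(labels, [round(c, 2) for c in centres])
```

The two recovered centres separate a declining-slope group (responders) from a near-zero-slope group (non-responders), which is the responder identification the abstract describes; the model-based alternatives replace k-means with a mixture model over the same predictions.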
Development of a Medicaid Behavioral Health Case-Mix Model
ERIC Educational Resources Information Center
Robst, John
2009-01-01
Many Medicaid programs have either fully or partially carved out mental health services. The evaluation of carve-out plans requires a case-mix model that accounts for differing health status across Medicaid managed care plans. This article develops a diagnosis-based case-mix adjustment system specific to Medicaid behavioral health care. Several…
NASA Astrophysics Data System (ADS)
Guillermo Nuñez Ramirez, Tonatiuh; Houweling, Sander; Marshall, Julia; Williams, Jason; Brailsford, Gordon; Schneising, Oliver; Heimann, Martin
2013-04-01
The atmospheric hydroxyl radical concentration (OH) varies due to changes in the incoming UV radiation, in the abundance of atmospheric species involved in the production, recycling and destruction of OH molecules, and due to climate variability. Variability in carbon monoxide emissions from biomass burning induced by the El Niño-Southern Oscillation is particularly important. Although the OH sink accounts for the oxidation of approximately 90% of atmospheric CH4, the effect of the variability in the distribution and strength of the OH sink on the interannual variability of the atmospheric methane (CH4) mixing ratio and stable carbon isotope composition (δ13C-CH4) has often been ignored. To show this effect we simulated the atmospheric signals of CH4 in a three-dimensional atmospheric transport model (TM3). ERA-Interim reanalysis data provided the atmospheric transport and temperature variability from 1990 to 2010. We performed simulations using time-dependent OH concentration estimates from an atmospheric chemistry transport model and an atmospheric chemistry climate model. The two models assume different sets of reactions and algorithms, which result in very different strengths and distributions of the OH concentration. Methane emissions were based on published bottom-up estimates including inventories, upscaled estimates and modeled fluxes. The simulations also included modeled concentrations of atomic chlorine (Cl) and excited oxygen atoms (O(1D)). The isotopic signals of the sources and the fractionation factors of the sinks were based on literature values; however, the isotopic signal from wetland and enteric fermentation processes followed a linear relationship with a map of C4 plant fraction. The same set of CH4 emissions and stratospheric reactants was used in all simulations. Two simulations were done per OH field: one in which the CH4 sources were allowed to vary interannually, and a second where the sources were climatological.
The simulated mixing ratios and isotopic compositions at global reference stations were used to construct more robust indicators such as global and zonal means and interhemispheric differences. We also compared the modeled CH4 mixing ratio to satellite observations, for the period 2003 to 2004 with SCIAMACHY and from 2009 to 2010 with GOSAT. The interannual variability of the different OH fields imprinted an interannual variation on the atmospheric CH4 mixing ratio with a magnitude of ±10 ppb, which is comparable to the effect of all sources combined. Meanwhile, its effect on the interannual variability of δ13C-CH4 was minor (< 10%). The interannual variability of the mixing ratio interhemispheric difference is dominated by the sources, because the OH sink is concentrated in the tropics and its interannual variability therefore affects both hemispheres. Although OH plays an important role in establishing an interhemispheric gradient of δ13C-CH4, the interannual variation of this gradient is negligibly affected by the choice of OH field. Overall, the study showed that the variability of the OH sink plays a significant role in the interannual variability of the atmospheric methane mixing ratio, and must be considered to improve our understanding of the recent trends in the global methane budget.
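The role of the OH sink in setting CH4 levels can be illustrated with a one-box model, dC/dt = E - kC, where k lumps the OH loss into a single rate. The lifetime and mixing ratios below are round illustrative numbers, not the study's TM3 setup.

```python
def one_box_ch4(emission, loss_rate, c0=1700.0, dt=0.05, years=60.0):
    """One-box atmospheric CH4 model: dC/dt = E - k*C, with C in ppb,
    E in ppb/yr and k = 1/lifetime in 1/yr (k lumps the OH sink).
    Integrated with forward Euler; returns the final mixing ratio."""
    c = c0
    for _ in range(int(years / dt)):
        c += dt * (emission - loss_rate * c)
    return c

# Hypothetical numbers for illustration: a 9-year lifetime and a source
# chosen to balance 1800 ppb at steady state.
k = 1.0 / 9.0
c_eq = one_box_ch4(emission=1800.0 * k, loss_rate=k)
c_hi_oh = one_box_ch4(emission=1800.0 * k, loss_rate=1.1 * k)  # 10% stronger OH sink
print(f"steady state: {c_eq:.0f} ppb; with a 10% stronger OH sink: {c_hi_oh:.0f} ppb")
```

Because the steady state is E/k, a 10% change in the OH sink shifts the equilibrium mixing ratio by roughly 10% of its value, which is why OH variability can rival the combined source variability in the abstract's ±10 ppb result.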
Weir, Christopher J.; Rubio, Noah; Rabinovich, Roberto; Pinnock, Hilary; Hanley, Janet; McCloughan, Lucy; Drost, Ellen M.; Mantoani, Leandro C.; MacNee, William; McKinstry, Brian
2016-01-01
Introduction: The Bland-Altman limits of agreement method is widely used to assess how well the measurements produced by two raters, devices or systems agree with each other. However, mixed effects versions of the method, which take into account multiple sources of variability, are less well described in the literature. We address the practical challenges of applying mixed effects limits of agreement to the comparison of several devices to measure respiratory rate in patients with chronic obstructive pulmonary disease (COPD). Methods: Respiratory rate was measured in 21 people with a range of severity of COPD. Participants were asked to perform eleven different activities representative of daily life during a laboratory-based standardised protocol of 57 minutes. A mixed effects limits of agreement method was used to assess the agreement of five commercially available monitors (Camera, Photoplethysmography (PPG), Impedance, Accelerometer, and Chest-band) with the current gold standard device for measuring respiratory rate. Results: Results produced using mixed effects limits of agreement were compared with results from a fixed effects method based on analysis of variance (ANOVA) and were found to be similar. The Accelerometer and Chest-band devices produced the narrowest limits of agreement (-8.63 to 4.27 and -9.99 to 6.80 breaths per minute, respectively), with mean biases of -2.18 and -1.60 breaths per minute. These devices also had the lowest within-participant and overall standard deviations (3.23 and 3.29 for Accelerometer and 4.17 and 4.28 for Chest-band, respectively). Conclusions: The mixed effects limits of agreement analysis enabled us to answer the question of which devices showed the strongest agreement with the gold standard device with respect to measuring respiratory rates.
In particular, the estimated within-participant and overall standard deviations of the differences, which are easily obtainable from the mixed effects model results, gave a clear indication that the Accelerometer and Chest-band devices performed best. PMID:27973556
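A minimal sketch of the variance-components idea behind mixed effects limits of agreement, assuming balanced data and a method-of-moments fit rather than the paper's full mixed model. The device-minus-gold-standard differences below are hypothetical.

```python
import math
import statistics

def limits_of_agreement(diffs_by_subject):
    """Bland-Altman limits with a per-subject random effect: bias is the
    grand mean difference; the overall SD combines between-subject and
    within-subject variance (method of moments, balanced data)."""
    m = len(diffs_by_subject)            # subjects
    n = len(diffs_by_subject[0])         # repeated differences per subject
    bias = statistics.mean(d for s in diffs_by_subject for d in s)
    means = [statistics.mean(s) for s in diffs_by_subject]
    msb = n * sum((mu - bias) ** 2 for mu in means) / (m - 1)
    msw = sum((d - mu) ** 2 for s, mu in zip(diffs_by_subject, means)
              for d in s) / (m * (n - 1))
    var_between = max((msb - msw) / n, 0.0)
    sd_total = math.sqrt(var_between + msw)
    return bias, bias - 1.96 * sd_total, bias + 1.96 * sd_total

# Hypothetical device-minus-gold-standard differences (breaths/min),
# 4 participants x 3 activities:
diffs = [[-3.0, -2.0, -4.0], [-1.0, 0.0, -2.0], [-5.0, -4.0, -6.0], [0.0, 1.0, -1.0]]
bias, lo, hi = limits_of_agreement(diffs)
print(f"bias {bias:.2f}, limits of agreement ({lo:.2f}, {hi:.2f})")
```

Splitting the variance this way is what lets the analysis report both within-participant and overall standard deviations, the two quantities the abstract uses to rank the devices.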
Solvency supervision based on a total balance sheet approach
NASA Astrophysics Data System (ADS)
Pitselis, Georgios
2009-11-01
In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied: the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered an internal industry equation (norm), which captures a systematic relation between a dependent variable (such as own funds) and independent variables (e.g. financial characteristics such as assets, provisions, etc.). Both methods are illustrated with two data sets.
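Quantile regression minimizes the asymmetric check (pinball) loss rather than squared error. A minimal sketch using the fact that the best constant fit under this loss is the empirical tau-quantile of the data; the observations are illustrative, not from the paper's data sets.

```python
def pinball_loss(tau, residual):
    """Check (pinball) loss of quantile regression: residuals above the
    fit are weighted by tau, residuals below by (1 - tau)."""
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def best_constant(tau, ys, grid):
    """Constant minimizing total pinball loss over a candidate grid;
    the minimizer is the empirical tau-quantile of the data."""
    return min(grid, key=lambda c: sum(pinball_loss(tau, y - c) for y in ys))

# Eleven hypothetical observations (e.g. a scaled own-funds measure):
ys = [3.0, 1.0, 4.0, 1.5, 9.0, 2.0, 6.0, 5.0, 3.5, 8.0, 10.0]
grid = [i / 10.0 for i in range(0, 121)]
print("median fit:", best_constant(0.5, ys, grid))
print("0.9-quantile fit:", best_constant(0.9, ys, grid))
```

Fitting a line under this loss at several values of tau traces out the conditional distribution, which is why quantile regression gives a more complete picture than the single conditional mean of least squares.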
Stakeholders' Views of South Korea's Higher Education Internationalization Policy
ERIC Educational Resources Information Center
Cho, Young Ha; Palmer, John D.
2013-01-01
The study investigated the stakeholders' perceptions of South Korea's higher education internationalization policy. Based on the research framework that defines four policy values--propriety, effectiveness, diversity, and engagement, the convergence model was employed with a concurrent mixed method sampling strategy to analyze the stakeholders'…
Effects of mixing states on the multiple-scattering properties of soot aerosols.
Cheng, Tianhai; Wu, Yu; Gu, Xingfa; Chen, Hao
2015-04-20
The radiative properties of soot aerosols are highly sensitive to the mixing states of black carbon particles and other aerosol components. Light absorption is enhanced by the mixing state of soot aerosols, but the effects of mixing states on the scattering properties of soot aerosols, especially the multiple-scattering properties, are still not completely quantified. This study focuses on the effects of the mixing state on the multiple scattering of soot aerosols using a vector radiative transfer model. Two types of soot aerosols with different mixing states, externally mixed and internally mixed soot aerosols, are studied. Upward radiance/polarization and hemispheric flux are studied with variable soot aerosol loadings for clear and haze scenarios. Our study showed dramatic changes in upward radiance/polarization due to the effects of the mixing state on the multiple scattering of soot aerosols. The relative difference in upward radiance due to the different mixing states can reach 16%, whereas the relative difference in upward polarization can reach 200%. The effects of the mixing state on the multiple-scattering properties of soot aerosols increase with increasing soot aerosol loading. The effects of the soot aerosol mixing state on upwelling hemispheric flux are much smaller than those on upward radiance/polarization, and increase with increasing solar zenith angle. The relative difference in upwelling hemispheric flux due to the different soot aerosol mixing states can reach 18% when the solar zenith angle is 75°. These findings should improve our understanding of the effects of mixing states on the optical properties of soot aerosols and their effects on climate. The mixing mechanism of soot aerosols is of critical importance in evaluating their climate effects and should be explicitly included in radiative forcing models and aerosol remote sensing.
Kim, Joo Wan; Seol, Du Jin; Choung, Jai Jun
2018-01-01
Aim: Kuseonwangdogo is a traditional Korean immunomodulatory polyherbal prescription. However, there are no systematic findings on its complex immunomodulatory effects in in vivo models. In this study, we observed the immunomodulatory effects of a Kuseonwangdogo-based mixed herbal formula aqueous extract (MHFe) in a cyclophosphamide- (CPA-) induced immunosuppression mouse model. Methods: In total, 60 male 6-week-old ICR mice (10 mice/group) were selected based on body weight 24 h after the second CPA treatment and used in this experiment. Twelve hours after the end of the last (fourth) oral administration of MHFe, the animals were sacrificed. Results: Following CPA treatment, a noticeable decrease was observed in body, thymus, spleen, and submandibular lymph node (LN) weights; white blood cell, red blood cell, and platelet numbers; hemoglobin and hematocrit concentrations; serum interferon-γ levels; splenic tumor necrosis factor-α, interleukin- (IL-) 1β, and IL-10 content; and peritoneal and splenic natural killer cell activities. Depletion of lymphoid cells in the thymic cortex and splenic white pulp, and submandibular LN-related atrophic changes, were also observed. However, these CPA-induced myelosuppressive signs were markedly and dose-dependently inhibited by the oral administration of 125, 250, and 500 mg/kg MHFe. Conclusion: MHFe is a promising, potent immunomodulatory therapeutic candidate for various immune disorders. PMID:29849713
A Lagrangian mixing frequency model for transported PDF modeling
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Zhao, Xinyu
2017-11-01
In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipation rates of the mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty of choosing optimal model constants when using conventional mixing frequency models. The model is implemented in combination with the interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and a turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
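The IEM model relaxes each particle's composition toward the ensemble mean at a rate set by the mixing frequency omega and model constant C_phi, so the scalar variance decays as exp(-C_phi * omega * t). A minimal particle-ensemble sketch with illustrative parameter values (the paper's contribution is how omega is obtained, which is not modeled here):

```python
import math
import random

def iem_step(phis, c_phi, omega, dt):
    """One interaction-by-exchange-with-the-mean (IEM) step: every
    particle's scalar relaxes toward the ensemble mean at rate
    C_phi * omega / 2, leaving the mean unchanged."""
    mean = sum(phis) / len(phis)
    decay = 0.5 * c_phi * omega * dt
    return [p - decay * (p - mean) for p in phis]

random.seed(4)
phis = [random.gauss(0.5, 0.2) for _ in range(2000)]   # initial scalar field
c_phi, omega, dt, steps = 2.0, 10.0, 1e-3, 200
m0 = sum(phis) / len(phis)
v0 = sum((p - m0) ** 2 for p in phis) / len(phis)
for _ in range(steps):
    phis = iem_step(phis, c_phi, omega, dt)
m1 = sum(phis) / len(phis)
v1 = sum((p - m1) ** 2 for p in phis) / len(phis)
print(f"mean {m0:.3f} -> {m1:.3f}; variance ratio {v1 / v0:.4f} "
      f"(theory exp(-C_phi*omega*t) = {math.exp(-c_phi * omega * steps * dt):.4f})")
```

Because the decay rate is proportional to omega, the quality of the mixing frequency directly controls the predicted scalar variance, which is why replacing the constant-parameter frequency with a dissipation-based Lagrangian estimate matters.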
An improved method for predicting the effects of flight on jet mixing noise
NASA Technical Reports Server (NTRS)
Stone, J. R.
1979-01-01
The NASA method (1976) for predicting the effects of flight on jet mixing noise was improved. The earlier method agreed reasonably well with experimental flight data for jet velocities up to about 520 m/sec (approximately 1700 ft/sec). The poorer agreement at high jet velocities appeared to be due primarily to the manner in which supersonic convection effects were formulated. The purely empirical supersonic convection formulation of the earlier method was replaced by one based on theoretical considerations. Other improvements of an empirical nature included were based on model-jet/free-jet simulated flight tests. The revised prediction method is presented and compared with experimental data obtained from the Bertin Aerotrain with a J85 engine, the DC-10 airplane with JT9D engines, and the DC-9 airplane with refanned JT8D engines. It is shown that the new method agrees better with the data base than a recently proposed SAE method.
Non-linear mixing effects on mass-47 CO2 clumped isotope thermometry: Patterns and implications.
Defliese, William F; Lohmann, Kyger C
2015-05-15
Mass-47 CO2 clumped isotope thermometry requires relatively large (~20 mg) samples of carbonate minerals due to detection limits and shot noise in gas source isotope ratio mass spectrometry (IRMS). However, it is unreasonable to assume that natural geologic materials are homogeneous on the scale required for sampling. We show that sample heterogeneities can cause offsets from equilibrium Δ47 values that are controlled solely by end member mixing and are independent of equilibrium temperatures. A numerical model was built to simulate and quantify the effects of end member mixing on Δ47. The model was run in multiple possible configurations to produce a dataset of mixing effects. We verified that the model accurately simulated real phenomena by comparing two artificial laboratory mixtures measured using IRMS to model output. Mixing effects were found to be dependent on end member isotopic composition in δ13C and δ18O values, and independent of end member Δ47 values. Both positive and negative offsets from equilibrium Δ47 can occur, and the sign is dependent on the interaction between end member isotopic compositions. The overall magnitude of mixing offsets is controlled by the amount of variability within a sample; the larger the disparity between end member compositions, the larger the mixing offset. Samples varying by less than 2‰ in both δ13C and δ18O values have mixing offsets below current IRMS detection limits. We recommend the use of isotopic subsampling for δ13C and δ18O values to determine sample heterogeneity, and to evaluate any potential mixing effects in samples suspected of being heterogeneous. Copyright © 2015 John Wiley & Sons, Ltd.
Karimi, Hamid Reza; Gao, Huijun
2008-07-01
A mixed H2/H∞ output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/H∞ performance for the closed-loop system of the second-order neutral linear system, is then developed directly instead of coupling the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/H∞ output-feedback control design using some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.
The salinity effect in a mixed layer ocean model
NASA Technical Reports Server (NTRS)
Miller, J. R.
1976-01-01
A model of the thermally mixed layer in the upper ocean as developed by Kraus and Turner and extended by Denman is further extended to investigate the effects of salinity. In the tropical and subtropical Atlantic Ocean rapid increases in salinity occur at the bottom of a uniformly mixed surface layer. The most significant effects produced by the inclusion of salinity are the reduction of the deepening rate and the corresponding change in the heating characteristics of the mixed layer. If the net surface heating is positive, but small, salinity effects must be included to determine whether the mixed layer temperature will increase or decrease. Precipitation over tropical oceans leads to the development of a shallow stable layer accompanied by a decrease in the temperature and salinity at the sea surface.
Genomic Model with Correlation Between Additive and Dominance Effects.
Xiang, Tao; Christensen, Ole Fredslund; Vitezica, Zulma Gladis; Legarra, Andres
2018-05-09
Dominance genetic effects are rarely included in pedigree-based genetic evaluation. With the availability of single nucleotide polymorphism markers and the development of genomic evaluation, estimates of dominance genetic effects have become feasible using genomic best linear unbiased prediction (GBLUP). Usually, studies involving additive and dominance genetic effects ignore possible relationships between them. It has been often suggested that the magnitude of functional additive and dominance effects at the quantitative trait loci are related, but there is no existing GBLUP-like approach accounting for such correlation. Wellmann and Bennewitz showed two ways of considering directional relationships between additive and dominance effects, which they estimated in a Bayesian framework. However, these relationships cannot be fitted at the level of individuals instead of loci in a mixed model and are not compatible with standard animal or plant breeding software. This comes from a fundamental ambiguity in assigning the reference allele at a given locus. We show that, if there has been selection, assigning the most frequent as the reference allele orients the correlation between functional additive and dominance effects. As a consequence, the most frequent reference allele is expected to have a positive value. We also demonstrate that selection creates negative covariance between genotypic additive and dominance genetic values. For parameter estimation, it is possible to use a combined additive and dominance relationship matrix computed from marker genotypes, and to use standard restricted maximum likelihood (REML) algorithms based on an equivalent model. Through a simulation study, we show that such correlations can easily be estimated by mixed model software and accuracy of prediction for genetic values is slightly improved if such correlations are used in GBLUP. 
However, a model assuming uncorrelated effects and fitting orthogonal breeding values and dominant deviations performed similarly for prediction. Copyright © 2018, Genetics.
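As a rough illustration of the GBLUP-style machinery this abstract refers to, the sketch below builds genomic additive and dominance relationship matrices from a 0/1/2 genotype matrix. The codings used (a VanRaden-style additive matrix and a heterozygote-minus-expectation dominance matrix) are one common convention, not the paper's exact combined-matrix formulation, and the simulated genotypes are arbitrary.

```python
import numpy as np

# Hedged sketch: genomic additive (G_add) and dominance (G_dom) relationship
# matrices from marker genotypes coded 0/1/2. This illustrates the kind of
# marker-based relationship matrices used in GBLUP; it is NOT the paper's
# combined additive-dominance formulation.

rng = np.random.default_rng(1)
n_ind, n_snp = 20, 500
M = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # genotypes 0/1/2

p = M.mean(axis=0) / 2.0              # observed allele frequencies per SNP
Z = M - 2.0 * p                       # centered additive covariates

# One common dominance coding: heterozygosity indicator minus its
# Hardy-Weinberg expectation 2pq.
het = (M == 1).astype(float)
W = het - 2.0 * p * (1.0 - p)

G_add = Z @ Z.T / np.sum(2.0 * p * (1.0 - p))
G_dom = W @ W.T / np.sum((2.0 * p * (1.0 - p)) ** 2)

print(G_add.shape, np.allclose(G_add, G_add.T))
```

These matrices would then enter a REML mixed-model analysis as the covariance structures of breeding values and dominance deviations; the correlation between effects discussed in the abstract requires the authors' equivalent-model extension on top of this.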
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
An aero-engine is a complex mechanical-electronic system, and in reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. Until now, only the two-parameter and three-parameter Weibull distribution models have been widely used. Because of the diversity of engine failure modes, a single Weibull distribution model carries a large error; by contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model so that the reliability estimation result is more accurate, greatly improving the precision of the mixed distribution reliability model. All of this is advantageous for popularizing the mixed Weibull distribution model in engineering applications.
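A two-component mixed Weibull reliability function of the kind described here is simply a weighted sum of Weibull survival functions, one per failure mode. The weights and the shape/scale parameters below are illustrative, not estimated from engine data.

```python
import numpy as np

# Sketch of a two-component mixed Weibull reliability model,
#   R(t) = w1*exp(-(t/eta1)**beta1) + w2*exp(-(t/eta2)**beta2),
# representing two failure modes (e.g. early failures plus wear-out).
# All parameter values are assumed for illustration.

def weibull_reliability(t, beta, eta):
    """Survival function of a single Weibull failure mode."""
    return np.exp(-(t / eta) ** beta)

def mixed_reliability(t, weights, betas, etas):
    """Weighted mixture of Weibull survival functions; weights sum to 1."""
    return sum(w * weibull_reliability(t, b, e)
               for w, b, e in zip(weights, betas, etas))

weights = (0.3, 0.7)       # failure-mode proportions (the "weight coefficients")
betas = (0.8, 3.5)         # shape parameters: <1 early failures, >1 wear-out
etas = (200.0, 1500.0)     # characteristic lives [h]

t = 1000.0
R = mixed_reliability(t, weights, betas, etas)
print(round(R, 4))
```

The mixture reliability always lies between the component reliabilities, which is what lets a mixed model capture a failure population that neither single Weibull fits well.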
Zhou, Lan; Yang, Jin-Bo; Liu, Dan; Liu, Zhan; Chen, Ying; Gao, Bo
2008-06-01
To analyze the possible damage to the remaining tooth and composite restorations when various mixing ratios of base cements are used. The elastic modulus and Poisson's ratio of the glass-ionomer Vitrebond and the self-cured calcium hydroxide Dycal were tested at mixing ratios of 1:1, 3:4, and 4:3. Micro-CT was used to scan the first mandibular molar, and a three-dimensional finite element model of the first permanent mandibular molar with a class I cavity was established. The stress in the tooth structure, composite, and base cement under physical load was analyzed for the different mixing ratios of base cement. The elastic modulus of the base cement differed among the mixing ratios, and the differences were statistically significant. The magnitude and location of stress in the restored tooth did not differ when the mixing ratios of Vitrebond and Dycal were changed. The peak stress and its spreading area were greater in the model with Dycal than in that with Vitrebond. Changing the mixing ratio of the base cement can partially influence its mechanical characteristics, but makes no difference to the magnitude and location of stress in the restored tooth. During the treatment of deep caries, a base cement whose elastic modulus is close to that of dentin and the restoration should be chosen to avoid fracture of the tooth or restoration.
OPC modeling by genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.
2005-05-01
Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. For model-based OPC, a lithographic model to predict critical dimensions after lithographic processing is needed. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of the continuous (optical and resist models) and the discrete (kernel numbers) sets, traditional numerical optimization methods may have difficulty handling the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. This way, good regression results were obtained with different sets of optical proximity effect data.
Werner, David; Ghosh, Upal; Luthy, Richard G
2006-07-01
The sorption kinetics and concentration of polychlorinated biphenyls (PCBs) in historically polluted sediment is modeled to assess a remediation strategy based on in situ PCB sequestration by mixing with activated carbon (AC). We extend our evaluation of a model based on intraparticle diffusion by including a biomimetic semipermeable membrane device (SPMD) and a first-order degradation rate for the aqueous phase. The model predictions are compared with the previously reported experimental PCB concentrations in the bulk water phase and in SPMDs. The simulated scenarios comprise a marine and a freshwater sediment, four PCB congeners, two AC grain sizes, four doses of AC, and comparison with laboratory experiments for up to 540 days of AC amendment slowly mixed with sediment. The model qualitatively reproduces the observed shifts in the PCB distribution during repartitioning after AC amendment but systematically overestimates the overall effect of the treatment in reducing aqueous and SPMD concentrations of PCBs by a factor of 2-6. For our AC application in sediment, competitive sorption of the various solutes apparently requires a reduction by a factor of 16 of the literature values for the AC-water partitioning coefficient measured in pure aqueous systems. With this correction, model results and measurements agree within a factor of 3. We also discuss the impact of the nonlinearity of the AC sorption isotherm and first-order degradation in the aqueous phase. Regular mixing of the sediment accelerates the benefit of the proposed amendment substantially. But according to our scenario, after AC amendment is homogeneously mixed into the sediment and then left undisturbed, aqueous PCB concentrations tend toward the same reduction after approximately 5 or more years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Eugene Y.; Hansen, Brad M. S., E-mail: eyc@mail.utexas.edu, E-mail: hansen@astro.ucla.edu
The spectral distribution of field white dwarfs shows a feature called the 'non-DA gap'. As defined by Bergeron et al., this is a temperature range (5100-6100 K) where relatively few non-DA stars are found, even though such stars are abundant on either side of the gap. It is usually viewed as an indication that a significant fraction of white dwarfs switch their atmospheric compositions back and forth between hydrogen-rich and helium-rich as they cool. In this Letter, we present a Monte Carlo model of the Galactic disk white dwarf population, based on the spectral evolution model of Chen and Hansen. We find that the non-DA gap emerges naturally, even though our model only allows white dwarf atmospheres to evolve monotonically from hydrogen-rich to helium-rich through convective mixing. We conclude by discussing the effects of convective mixing on the white dwarf luminosity function and the use thereof for cosmochronology.
ERIC Educational Resources Information Center
Güven Yildirim, Ezgi; Köklükaya, Ayse Nesibe
2018-01-01
The purposes of this study were first to investigate the effects of the project-based learning (PBL) method and project exhibition event on the success of physics teacher candidates, and second, to reveal the experiment group students' views toward this learning method and project exhibition. The research model called explanatory mixed method, in…
ERIC Educational Resources Information Center
Li, Zhi; Feng, Hui-Hsien; Saricaoglu, Aysel
2017-01-01
This classroom-based study employs a mixed-methods approach to exploring both short-term and long-term effects of Criterion feedback on ESL students' development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students in both intermediate-high and advanced-low levels reduce errors in eight…
Olivier, Pieter I.; van Aarde, Rudi J.
2017-01-01
The peninsula effect predicts that the number of species should decline from the base of a peninsula to the tip. However, evidence for the peninsula effect is ambiguous, as different analytical methods, study taxa, and variations in local habitat or regional climatic conditions influence conclusions on its presence. We address this uncertainty by using two analytical methods to investigate the peninsula effect in three taxa that occupy different trophic levels: trees, millipedes, and birds. We surveyed 81 tree quadrants, 102 millipede transects, and 152 bird points within 150 km of coastal dune forest that resemble a habitat peninsula along the northeast coast of South Africa. We then used spatial (trend surface analyses) and non-spatial regressions (generalized linear mixed models) to test for the presence of the peninsula effect in each of the three taxa. We also used linear mixed models to test if climate (temperature and precipitation) and/or local habitat conditions (water availability associated with topography and landscape structural variables) could explain gradients in species richness. Non-spatial models suggest that the peninsula effect was present in all three taxa. However, spatial models indicated that only bird species richness declined from the peninsula base to the peninsula tip. Millipede species richness increased near the centre of the peninsula, while tree species richness increased near the tip. Local habitat conditions explained species richness patterns of birds and trees, but not of millipedes, regardless of model type. Our study highlights the idiosyncrasies associated with the peninsula effect—conclusions on the presence of the peninsula effect depend on the analytical methods used and the taxon studied. The peninsula effect might therefore be better suited to describe a species richness pattern where the number of species decline from a broader habitat base to a narrow tip, rather than a process that drives species richness. 
PMID:28376096
Transport theory and the WKB approximation for interplanetary MHD fluctuations
NASA Technical Reports Server (NTRS)
Matthaeus, William H.; Zhou, YE; Zank, G. P.; Oughton, S.
1994-01-01
An alternative approach, based on a multiple scale analysis, is presented in order to reconcile the traditional Wentzel-Kramers-Brillouin (WKB) approach to the modeling of interplanetary fluctuations in a mildly inhomogeneous large-scale flow with a more recently developed transport theory. This enables us to compare directly, at a formal level, the inherent structure of the two models. In the case of noninteracting, incompressible (Alfvén) waves, the principal difference between the two models is the presence of leading-order couplings (called 'mixing effects') in the non-WKB turbulence model which are absent in a WKB development. Within the context of linearized MHD, two cases have been identified for which the leading order non-WKB 'mixing term' does not vanish at zero wavelength. For these cases the WKB expansion is divergent, whereas the multiple-scale theory is well behaved. We have thus established that the WKB results are contained within the multiple-scale theory, but leading order mixing effects, which are likely to have important observational consequences, can never be recovered in the WKB style expansion. Properties of the higher-order terms in each expansion are also discussed, leading to the conclusion that the non-WKB hierarchy may be applicable even when the scale separation parameter is not small.
Prediction of reaction knockouts to maximize succinate production by Actinobacillus succinogenes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nag, Ambarish; St. John, Peter C.; Crowley, Michael F.
Succinate is a precursor of multiple commodity chemicals and bio-based succinate production is an active area of industrial bioengineering research. One of the most important microbial strains for bio-based production of succinate is the capnophilic gram-negative bacterium Actinobacillus succinogenes, which naturally produces succinate by a mixed-acid fermentative pathway. To engineer A. succinogenes to improve succinate yields during mixed acid fermentation, it is important to have a detailed understanding of the metabolic flux distribution in A. succinogenes when grown in suitable media. To this end, we have developed a detailed stoichiometric model of the A. succinogenes central metabolism that includes the biosynthetic pathways for the main components of biomass, namely glycogen, amino acids, DNA, RNA, lipids and UDP-N-acetyl-α-D-glucosamine. We have validated our model by comparing model predictions generated via flux balance analysis with experimental results on mixed acid fermentation. Moreover, we have used the model to predict single and double reaction knockouts to maximize succinate production while maintaining growth viability. According to our model, succinate production can be maximized by knocking out either of the reactions catalyzed by the PTA (phosphate acetyltransferase) and ACK (acetate kinase) enzymes, whereas the double knockouts of PEPCK (phosphoenolpyruvate carboxykinase) and PTA or PEPCK and ACK enzymes are the most effective in increasing succinate production.
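The knockout-prediction workflow described above can be illustrated with a toy flux balance analysis: a linear program over S·v = 0 with flux bounds, where a knockout clamps a reaction's bounds to zero. The three-metabolite network below is a made-up caricature of mixed-acid fermentation, not the paper's detailed A. succinogenes model; it merely shows why removing an ATP-yielding acetate branch can redirect carbon to succinate while growth remains viable.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA) sketch of knockout prediction. The network
# is an invented caricature with metabolites A, ATP, NADH: an acetate branch
# ("pta_ack") yields extra ATP, while the succinate branch reoxidizes NADH.

# Reaction columns: uptake, pta_ack (acetate), suc (succinate), biomass, nadh_sink
S = np.array([
    [1, -1, -1, -1,  0],   # A
    [0,  2,  1, -2,  0],   # ATP
    [1,  0, -1,  0, -1],   # NADH
], dtype=float)
c = np.array([0, 0, 0, -1, 0], dtype=float)   # linprog minimizes, so -biomass

def fba(knockouts=()):
    bounds = [(0, 10), (0, None), (0, None), (0, None), (0, None)]
    for k in knockouts:            # knockout = clamp the reaction flux to zero
        bounds[k] = (0, 0)
    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
    return res.x

wt = fba()
ko = fba(knockouts=[1])            # knock out the acetate (PTA/ACK-like) branch
print("wild-type succinate:", round(wt[2], 2), "biomass:", round(wt[3], 2))
print("knockout  succinate:", round(ko[2], 2), "biomass:", round(ko[3], 2))
```

In this toy model the wild-type growth optimum routes all carbon through the ATP-rich acetate branch and secretes no succinate; with that branch removed, the optimizer must use the succinate branch for ATP, so succinate flux rises while growth drops but stays positive, mirroring the qualitative result in the abstract.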
APPLICATION OF STABLE ISOTOPE TECHNIQUES TO AIR POLLUTION RESEARCH
Stable isotope techniques provide a robust, yet under-utilized tool for examining pollutant effects on plant growth and ecosystem function. Here, we survey a range of mixing model, physiological and system level applications for documenting pollutant effects. Mixing model examp...
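The simplest mixing-model application alluded to here is two-end-member linear mass balance on delta values. A minimal sketch, with purely illustrative δ15N numbers:

```python
# Minimal two-end-member stable isotope mixing model sketch: given isotope
# signatures (delta values) of two sources and of a mixture, recover the
# source fractions by linear mass balance. All numbers are illustrative.

def two_source_mixing(delta_mix, delta_a, delta_b):
    """Return fraction of source A: delta_mix = f*delta_a + (1-f)*delta_b."""
    if delta_a == delta_b:
        raise ValueError("sources are isotopically indistinguishable")
    return (delta_mix - delta_b) / (delta_a - delta_b)

# Hypothetical example: a foliage d15N of -2.0 between two nitrogen sources,
# fertilizer at +0.5 and a pollutant NOx source at -6.0 (assumed values).
f_a = two_source_mixing(-2.0, 0.5, -6.0)
print(round(f_a, 3), round(1 - f_a, 3))
```

Real applications extend this to multiple isotopes and more than two sources, where the mass balance becomes an (often underdetermined) linear system, but the two-source case conveys the core idea.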
Martinez, Josue G; Bohn, Kirsten M; Carroll, Raymond J; Morris, Jeffrey S
2013-06-01
We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible.
Heavy quarkonium hybrids: Spectrum, decay, and mixing
NASA Astrophysics Data System (ADS)
Oncala, Ruben; Soto, Joan
2017-07-01
We present a largely model-independent analysis of the lighter heavy quarkonium hybrids based on the strong coupling regime of potential nonrelativistic QCD. We calculate the spectrum at leading order, including the mixing of static hybrid states. We use potentials that fulfill the required short and long distance theoretical constraints and fit the available lattice data well. We argue that the decay width to the lower lying heavy quarkonia can be reliably estimated in some cases and provide results for a selected set of decays. We also consider the mixing with heavy quarkonium states. We establish the form of the mixing potential at O(1/mQ), mQ being the mass of the heavy quarks, and work out its short and long distance constraints. The weak coupling regime of potential nonrelativistic QCD and the effective string theory of QCD are used for that goal. We show that the mixing effects may indeed be important and produce large spin symmetry violations. Most of the isospin zero XYZ states fit well in our spectrum, either as a hybrid or standard quarkonium candidate.
Lucero, Julie; Wallerstein, Nina; Duran, Bonnie; Alegria, Margarita; Greene-Moton, Ella; Israel, Barbara; Kastelic, Sarah; Magarati, Maya; Oetzel, John; Pearson, Cynthia; Schulz, Amy; Villegas, Malia; White Hat, Emily R.
2017-01-01
This article describes a mixed methods study of community-based participatory research (CBPR) partnership practices and the links between these practices and changes in health status and disparities outcomes. Directed by a CBPR conceptual model and grounded in indigenous-transformative theory, our nation-wide, cross-site study showcases the value of a mixed methods approach for better understanding the complexity of CBPR partnerships across diverse community and research contexts. The article then provides examples of how an iterative, integrated approach to our mixed methods analysis yielded enriched understandings of two key constructs of the model: trust and governance. Implications and lessons learned while using mixed methods to study CBPR are provided. PMID:29230152
An Investigation of Item Fit Statistics for Mixed IRT Models
ERIC Educational Resources Information Center
Chon, Kyong Hee
2009-01-01
The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…
Development of Efficient Real-Fluid Model in Simulating Liquid Rocket Injector Flows
NASA Technical Reports Server (NTRS)
Cheng, Gary; Farmer, Richard
2003-01-01
The characteristics of propellant mixing near the injector have a profound effect on the liquid rocket engine performance. However, the flow features near the injector of liquid rocket engines are extremely complicated, for example supercritical-pressure spray, turbulent mixing, and chemical reactions are present. Previously, a homogeneous spray approach with a real-fluid property model was developed to account for the compressibility and evaporation effects such that thermodynamic properties of a mixture at a wide range of pressures and temperatures can be properly calculated, including liquid-phase, gas-phase, two-phase, and dense fluid regions. The developed homogeneous spray model demonstrated good success in simulating uni-element shear coaxial injector spray combustion flows. However, the real-fluid model suffered a computational deficiency when applied to a pressure-based computational fluid dynamics (CFD) code. The deficiency is caused by the pressure and enthalpy being the independent variables in the solution procedure of a pressure-based code, whereas the real-fluid model utilizes density and temperature as independent variables. The objective of the present research work is to improve the computational efficiency of the real-fluid property model in computing thermal properties. The proposed approach is called an efficient real-fluid model, and the improvement of computational efficiency is achieved by using a combination of a liquid species and a gaseous species to represent a real-fluid species.
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
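The pressure-based normalization step described above can be sketched with Saint-Robert's law, r = a * p**n: a burn rate measured at one chamber pressure is scaled to a common reference pressure so that rates from different motors become comparable. The exponent and the rate/pressure numbers below are assumed for illustration, not the report's values.

```python
# Hedged sketch of normalizing propellant burn rates to a common chamber
# pressure using Saint-Robert's law, r = a * p**n. The paper's pressure-based
# method derives burn rate from measured chamber pressure; the pressure
# exponent n used here is an assumed, typical composite-propellant value.

N_EXP = 0.35  # pressure exponent (assumed for illustration)

def normalize_burn_rate(rate, pressure, ref_pressure, n=N_EXP):
    """Scale a burn rate measured at `pressure` to `ref_pressure`."""
    return rate * (ref_pressure / pressure) ** n

# Two motors from the same mix, fired at slightly different mean pressures
# (illustrative numbers: rates in in/s, pressures in MPa):
r1 = normalize_burn_rate(0.420, 6.35, 6.89)
r2 = normalize_burn_rate(0.432, 7.10, 6.89)
within_mix_spread = abs(r1 - r2)
print(round(r1, 4), round(r2, 4))
```

Normalizing both rates to the same reference pressure removes the pressure-driven part of the difference, so the remaining spread is a cleaner estimate of within-mix burn rate variability, which is the quantity the designed experiment tracked.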
Turbulent reacting flow computations including turbulence-chemistry interactions
NASA Technical Reports Server (NTRS)
Narayan, J. R.; Girimaji, S. S.
1992-01-01
A two-equation (k-epsilon) turbulence model has been extended to be applicable for compressible reacting flows. A compressibility correction model based on modeling the dilatational terms in the Reynolds stress equations has been used. A turbulence-chemistry interaction model is outlined. In this model, the effects of temperature and species mass concentrations fluctuations on the species mass production rates are decoupled. The effect of temperature fluctuations is modeled via a moment model, and the effect of concentration fluctuations is included using an assumed beta-pdf model. Preliminary results obtained using this model are presented. A two-dimensional reacting mixing layer has been used as a test case. Computations are carried out using the Navier-Stokes solver SPARK using a finite rate chemistry model for hydrogen-air combustion.
Computation of turbulent high speed mixing layers using a two-equation turbulence model
NASA Technical Reports Server (NTRS)
Narayan, J. R.; Sekar, B.
1991-01-01
A two-equation turbulence model was extended to be applicable for compressible flows. A compressibility correction based on modelling the dilatational terms in the Reynolds stress equations was included in the model. The model is used in conjunction with the SPARK code for the computation of high speed mixing layers. The observed trend of decreasing growth rate with increasing convective Mach number in compressible mixing layers is well predicted by the model. The predictions agree well with the experimental data and the results from a compressible Reynolds stress model. The present model appears to be well suited for the study of compressible free shear flows. Preliminary results obtained for the reacting mixing layers are included.
Farnam, Alirza; Farhang, Sara; Bakhshipour, Abbas; Niknam, Elnaz
2011-12-01
Patients with mixed anxiety and depressive disorder suffer from sub-threshold depressive and anxiety symptoms and their negative impact upon quality of life. This study evaluates their personality dimensions and the possible effect on treatment outcome. The diagnosis of mixed anxiety and depressive disorder was based on a structured clinical interview in 80 patients. The NEO inventory measured five personality dimensions. The depression, anxiety and stress scale (DASS) was used to measure the severity of illness before and after treatment. Neuroticism, disagreeableness and introversion traits were significantly more expressed among these patients compared to the normal population. A significant decrease in the scores of depression, anxiety and stress was observed in all patients receiving the treatment. The normalized T-scores of the five personality dimensions could not predict the degree of response to treatment. This study describes the personality characteristics of patients with mixed anxiety and depressive disorder and finds the beneficial effects of treatment to be independent of personality dimensions. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; D'Costa, Joseph F.
1991-01-01
This paper describes the evaluation of mixed implicit-explicit finite element formulations for hyperbolic heat conduction problems involving non-Fourier effects. In particular, mixed implicit-explicit formulations employing the alpha method proposed by Hughes et al. (1987, 1990) are described for the numerical simulation of hyperbolic heat conduction models involving time-dependent relaxation effects. Existing analytical approaches for the modeling/analysis of such models involve complex mathematical formulations for obtaining closed-form solutions, while certain numerical formulations suffer from severely oscillatory solution behavior (which often disguises the true response) in the vicinity of the thermal disturbances, which propagate with finite velocities. In view of these factors, the alpha method is evaluated to assess the control of the amount of numerical dissipation for predicting the transient propagating thermal disturbances. Numerical test models are presented, and pertinent conclusions are drawn for the mixed-time integration simulation of hyperbolic heat conduction models involving non-Fourier effects.
Real medical benefit assessed by indirect comparison.
Falissard, Bruno; Zylberman, Myriam; Cucherat, Michel; Izard, Valérie; Meyer, François
2009-01-01
Frequently, in data packages submitted for Marketing Approval to the CHMP, there is a lack of relevant head-to-head comparisons of medicinal products that could enable national authorities responsible for the approval of reimbursement to assess the Added Therapeutic Value (ASMR) of new clinical entities or line extensions of existing therapies. Indirect or mixed treatment comparisons (MTC) are methods stemming from the field of meta-analysis that have been designed to tackle this problem. Adjusted indirect comparisons, meta-regressions, mixed models and Bayesian network analyses pool the results of randomised controlled trials (RCTs), enabling a quantitative synthesis. The REAL procedure, recently developed by the HAS (French National Authority for Health), is a mixture of an MTC and an effect model based on expert opinions. It is intended to translate the efficacy observed in the trials into the effectiveness expected in day-to-day clinical practice in France.
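The simplest of these techniques, the Bucher adjusted indirect comparison, contrasts two treatments through a common comparator arm. A minimal sketch with hypothetical log odds ratios; none of the numbers come from an actual submission:

```python
import math

# Bucher adjusted indirect comparison through a common comparator (placebo).
# Hypothetical inputs: log odds ratios vs placebo and their standard errors.
d_a_pbo, se_a = -0.50, 0.15   # drug A vs placebo
d_b_pbo, se_b = -0.30, 0.20   # drug B vs placebo

# Indirect A-vs-B contrast: difference of the two placebo-anchored effects
d_a_b = d_a_pbo - d_b_pbo
# The two trials are independent, so the variances add
se_a_b = math.sqrt(se_a ** 2 + se_b ** 2)
ci_95 = (d_a_b - 1.96 * se_a_b, d_a_b + 1.96 * se_a_b)
```

Anchoring both effects to the shared comparator preserves within-trial randomisation; naively contrasting the single active arms would not.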
Su, Hui; Kondratko, Piotr; Chuang, Shun L
2006-05-29
We investigate variable optical delay of a microwave modulated optical beam in semiconductor optical amplifier/absorber waveguides with population oscillation (PO) and nearly degenerate four-wave-mixing (NDFWM) effects. An optical delay variable between 0 and 160 ps with a 1.0 GHz bandwidth is achieved in an InGaAsP/InP semiconductor optical amplifier (SOA) and shown to be electrically and optically controllable. An analytical model of optical delay is developed and found to agree well with the experimental data. Based on this model, we obtain design criteria to optimize the delay-bandwidth product of the optical delay in semiconductor optical amplifiers and absorbers.
Jiang, Qi; Zeng, Huidan; Liu, Zhao; Ren, Jing; Chen, Guorong; Wang, Zhaofeng; Sun, Luyi; Zhao, Donghui
2013-09-28
Sodium borophosphate glasses exhibit an intriguing mixed network former effect, with the nonlinear compositional dependence of their glass transition temperature as one of the most typical examples. In this paper, we establish a widely applicable topological constraint model of sodium borophosphate mixed network former glasses to explain the relationship between the internal structure and the nonlinear changes of the glass transition temperature. The application of the network topology is discussed in detail in terms of a unified methodology for the quantitative distribution of each coordinated boron and phosphorus unit and the dependence of the glass transition temperature on atomic constraints. An accurate prediction of the composition scaling of the glass transition temperature was obtained based on the topological constraint model.
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] apart from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for a combination of 13 studies on tuberculosis vaccine.
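For the random intercept special case, the coefficient reduces to the share of the outcome variance attributable to the random intercepts. A minimal sketch using simulated data and method-of-moments variance components; the paper's estimator is model-based, so this is only an illustration of the quantity being estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 200, 10
u = rng.normal(0.0, 2.0, n_groups)                         # random intercepts, var = 4
y = u[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))   # residual var = 1

# One-way ANOVA variance components (method of moments)
within = y.var(axis=1, ddof=1).mean()    # estimates sigma_e^2
between = y.mean(axis=1).var(ddof=1)     # estimates sigma_u^2 + sigma_e^2 / n_per
var_e = within
var_u = max(between - var_e / n_per, 0.0)

# Proportion of variance explained by the random intercepts, here ~ 4/(4+1)
r2_re = var_u / (var_u + var_e)
```

A value near 0 would suggest dropping the random effects (plain regression); a value near 1 would suggest treating the groups as free fixed effects via dummy variables, as the abstract describes.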
NASA Astrophysics Data System (ADS)
Salaris, M.; Cassisi, S.; Schiavon, R. P.; Pietrinferni, A.
2018-04-01
Red giants in the updated APOGEE-Kepler catalogue, with estimates of mass, chemical composition, surface gravity and effective temperature, have recently challenged stellar models computed under the standard assumption of solar calibrated mixing length. In this work, we critically reanalyse this sample of red giants, adopting our own stellar model calculations. Contrary to previous results, we find that the disagreement between the Teff scale of red giants and models with solar calibrated mixing length disappears when considering our models and the APOGEE-Kepler stars with scaled solar metal distribution. However, a discrepancy shows up when α-enhanced stars are included in the sample. We have found that assuming mass, chemical composition and effective temperature scale of the APOGEE-Kepler catalogue, stellar models generally underpredict the change of temperature of red giants caused by α-element enhancements at fixed [Fe/H]. A second important conclusion is that the choice of the outer boundary conditions employed in model calculations is critical. Effective temperature differences (metallicity dependent) between models with solar calibrated mixing length and observations appear for some choices of the boundary conditions, but this is not a general result.
Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buck, Edgar C.; Jerden, James L.; Ebert, William L.
The primary purpose of this report is to describe the strategy for coupling three process level models to produce an integrated Used Fuel Degradation Model (FDM). The FDM, which is based on fundamental chemical and physical principles, provides direct calculation of radionuclide source terms for use in repository performance assessments. The G-value for H2O2 production (Gcond) to be used in the Mixed Potential Model (MPM) (H2O2 is the only radiolytic product presently included but others will be added as appropriate) needs to account for intermediate spur reactions. The effects of these intermediate reactions on [H2O2] are accounted for in the Radiolysis Model (RM). This report details methods for applying RM calculations that encompass the effects of these fast interactions on [H2O2] as the solution composition evolves during successive MPM iterations and then represent the steady-state [H2O2] in terms of an “effective instantaneous or conditional” generation value (Gcond). It is anticipated that the value of Gcond will change slowly as the reaction progresses through several iterations of the MPM as changes in the nature of the fuel surface occur. The Gcond values will be calculated with the RM either after several iterations or when concentrations of key reactants reach threshold values determined from previous sensitivity runs. Sensitivity runs with the RM indicate significant changes in G-value can occur over narrow composition ranges. The objective of the mixed potential model (MPM) is to calculate the used fuel degradation rates for a wide range of disposal environments to provide the source term radionuclide release rates for generic repository concepts. The fuel degradation rate is calculated for chemical and oxidative dissolution mechanisms using mixed potential theory to account for all relevant redox reactions at the fuel surface, including those involving oxidants produced by solution radiolysis and provided by the radiolysis model (RM).
The RM calculates the concentration of species generated at any specific time and location from the surface of the fuel. Several options being considered for coupling the RM and MPM are described in the report. Different options have advantages and disadvantages based on the extent of coding that would be required and the ease of use of the final product.
Simulation of Longitudinal Exposure Data with Variance-Covariance Structures Based on Mixed Models
Longitudinal data are important in exposure and risk assessments, especially for pollutants with long half-lives in the human body and where chronic exposures to current levels in the environment raise concerns for human health effects. It is usually difficult and expensive to ob...
Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M
2017-04-01
Parameter uncertainty in value sets of multiattribute utility-based instruments (MAUIs) has received little attention previously. This false precision leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty of MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults ( n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set) with the standard error of the estimated mean utilities based on multiple imputation and the standard error using the conventional approach of using MAUI (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring uncertainty of the predicted health utilities derived from MAUIs could lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
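The pooling step of multiple imputation follows Rubin's rules: the total variance combines the average within-imputation variance with the between-imputation variance. A sketch with illustrative per-imputation numbers, chosen only to mirror the magnitudes quoted in the abstract (not the CWF data):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 20   # number of imputed value sets

# Hypothetical: each imputation draws a value set from its posterior and
# re-scores the same EQ-5D responses, giving one mean utility per imputation.
true_mean, between_sd, within_se = 0.827, 0.010, 0.003
means = true_mean + rng.normal(0.0, between_sd, m)   # per-imputation estimates
ses = np.full(m, within_se)                          # per-imputation std. errors

# Rubin's rules: pooled estimate, within/between variance, total SE
q_bar = means.mean()
w = (ses ** 2).mean()            # within-imputation variance
b = means.var(ddof=1)            # between-imputation variance
total_se = np.sqrt(w + (1 + 1 / m) * b)
```

The conventional approach corresponds to reporting only `within_se`; the gap between it and `total_se` is the underestimation the abstract describes.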
Dynamic prediction in functional concurrent regression with an application to child growth.
Leroux, Andrew; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William
2018-04-15
In many studies, it is of interest to predict the future trajectory of subjects based on their historical data, referred to as dynamic prediction. Mixed effects models have traditionally been used for dynamic prediction. However, the commonly used random intercept and slope model is often not sufficiently flexible for modeling subject-specific trajectories. In addition, there may be useful exposures/predictors of interest that are measured concurrently with the outcome, complicating dynamic prediction. To address these problems, we propose a dynamic functional concurrent regression model to handle the case where both the functional response and the functional predictors are irregularly measured. Currently, such a model cannot be fit by existing software. We apply the model to dynamically predict children's length conditional on prior length, weight, and baseline covariates. Inference on model parameters and subject-specific trajectories is conducted using the mixed effects representation of the proposed model. An extensive simulation study shows that the dynamic functional regression model provides more accurate estimation and inference than existing methods. Methods are supported by fast, flexible, open source software that uses heavily tested smoothing techniques. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have been developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open source software R, but they can only be accessed through a command line interface (using scripts). On the other hand, most practical researchers rely very much on menu-based or Graphical User Interfaces (GUI). We develop, using the Shiny framework, a standard pull-down menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to fit and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM and HGLM give very similar results.
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
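The two-part structure can be sketched in an intercept-only form, without the small-area random effects: one part models the probability of a positive value, the other models the positive values on the log scale, and a log-normal back-transform combines them. All numbers below are simulated:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
p_true = 0.6              # P(nonzero)
mu, sigma = 1.0, 0.5      # log-scale parameters of the positive part

nonzero = rng.random(n) < p_true
y = np.where(nonzero, rng.lognormal(mu, sigma, n), 0.0)

# Part 1: probability of a strictly positive value
p_hat = (y > 0).mean()
# Part 2: model on the log scale, fitted to the positive values only
logy = np.log(y[y > 0])
mu_hat, s2_hat = logy.mean(), logy.var(ddof=1)

# Back-transformed mean, assuming log-normality: E[y] = p * exp(mu + s2/2)
mean_hat = p_hat * np.exp(mu_hat + s2_hat / 2)
```

In the paper's setting each part additionally carries area-level random effects (a generalized linear mixed model for part 1, a linear mixed model for part 2); the back-transform step is where the skewness of the nonzero values enters the small area estimate.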
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods of randomizing clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power.
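The adjusted-versus-unadjusted contrast at the heart of this simulation can be sketched without fitting full individual-level mixed models: aggregating to cluster means reduces the adjusted analysis to ordinary least squares on the clusters. The setup below (effect sizes, covariate prevalences, noise scale) is illustrative, not the authors' design:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim, n_clusters = 500, 20
delta, beta = 0.5, 1.0   # treatment effect and cluster-level covariate effect

unadj, adj = [], []
for _ in range(n_sim):
    trt = np.repeat([0, 1], n_clusters // 2)
    # Imbalanced binary cluster-level covariate: more prevalent under treatment
    x = (rng.random(n_clusters) < np.where(trt == 1, 0.7, 0.3)).astype(float)
    # Cluster-mean outcomes: treatment + covariate + cluster random effect
    y = delta * trt + beta * x + rng.normal(0.0, 0.3, n_clusters)
    # Unadjusted: simple difference in treatment-arm means
    unadj.append(y[trt == 1].mean() - y[trt == 0].mean())
    # Adjusted: OLS of cluster means on treatment and covariate
    X = np.column_stack([np.ones(n_clusters), trt, x])
    adj.append(np.linalg.lstsq(X, y, rcond=None)[0][1])

bias_unadj = np.mean(unadj) - delta   # absorbs beta * (0.7 - 0.3)
bias_adj = np.mean(adj) - delta       # roughly zero
```

The unadjusted estimator absorbs the covariate effect in proportion to the imbalance, which is the mechanism behind the large parameter biases reported in the abstract; including the covariate removes it.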
ERIC Educational Resources Information Center
Livingstone, Holly A.; Day, Arla L.
2005-01-01
Despite the popularity of the concept of emotional intelligence(EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…
The effect of under-ice melt ponds on their surroundings in the Arctic
NASA Astrophysics Data System (ADS)
Feltham, D. L.; Smith, N.; Flocco, D.
2016-12-01
In the summer months, melt water from the surface of the Arctic sea ice can percolate down through the ice and flow out of its base. This water is relatively warm and fresh compared to the ocean water beneath it, and so it floats between the ice and the oceanic mixed layer, forming pools of melt water called under-ice melt ponds. Sheets of ice, known as false bottoms, can subsequently form via double diffusion processes at the under-ice melt pond interface with the ocean, trapping the pond against the ice and completely isolating it from the ocean below. This has an insulating effect on the parent sea ice above the trapped pond, altering its rate of basal ablation. A one-dimensional, thermodynamic model of Arctic sea ice has been adapted to study the evolution of under-ice melt ponds and false bottoms over time. Comparing simulations of sea ice evolution with and without an under-ice melt pond provides a measure of how an under-ice melt pond affects the mass balance of the sea ice above it. Sensitivity studies testing the response of the model to a range of uncertain parameters have been performed, revealing some interesting implications of under-ice ponds during their life cycle. By changing the rate of basal ablation of the parent sea ice, and so the flux of fresh water and salt into the ocean, under-ice melt ponds affect the properties of the mixed layer beneath the sea ice. Our model of under-ice melt pond refreezing has been coupled to a simple oceanic mixed layer model to determine the effect on mixed layer depth, salinity and temperature.
Simulating mixed-phase Arctic stratus clouds: sensitivity to ice initiation mechanisms
NASA Astrophysics Data System (ADS)
Sednev, I.; Menon, S.; McFarquhar, G.
2008-06-01
The importance of Arctic mixed-phase clouds for radiation and the Arctic climate is well known. However, the development of mixed-phase cloud parameterizations for use in large-scale models is limited by a lack of both related observations and numerical studies using multidimensional models with advanced microphysics that provide the basis for understanding the relative importance of the different microphysical processes that take place in mixed-phase clouds. To improve the representation of mixed-phase cloud processes in the GISS GCM we use the GISS single-column model coupled to a bin-resolved microphysics (BRM) scheme that was specially designed to simulate mixed-phase clouds and aerosol-cloud interactions. Using this model with the microphysical measurements obtained from the DOE ARM Mixed-Phase Arctic Cloud Experiment (MPACE) campaign in October 2004 at the North Slope of Alaska, we investigate the effect of ice initiation processes and the Bergeron-Findeisen process (BFP) on the glaciation time and longevity of single-layer stratiform mixed-phase clouds. We focus on observations taken during 9th-10th October, which indicated the presence of a single-layer mixed-phase cloud. We performed several sets of 12-h simulations to examine model sensitivity to different ice initiation mechanisms and evaluate model output (hydrometeor concentrations, contents, effective radii, precipitation fluxes, and radar reflectivity) against measurements from the MPACE Intensive Observing Period. Overall, the model qualitatively simulates the ice crystal concentration and hydrometeor contents, but it fails to predict quantitatively the effective radii of ice particles and their vertical profiles. In particular, the ice effective radii are overestimated by at least 50%. However, when the same definition as used for the observations was applied, the simulated and observed effective radii were more comparable.
We find that, for the single-layer stratiform mixed-phase clouds simulated, the process of ice-phase initiation due to freezing of supercooled water in both saturated and undersaturated (with respect to water) environments is as important as primary ice crystal origination from water vapor. We also find that the BFP is the process mainly responsible for the glaciation rates of the simulated clouds. These glaciation rates cannot be adequately represented by a water-ice saturation adjustment scheme that depends only on temperature and on liquid and solid hydrometeor contents, as is widely used in bulk microphysics schemes; they are better represented by processes that also account for supersaturation changes as the hydrometeors grow.
Simulating mixed-phase Arctic stratus clouds: sensitivity to ice initiation mechanisms
NASA Astrophysics Data System (ADS)
Sednev, I.; Menon, S.; McFarquhar, G.
2009-07-01
The importance of Arctic mixed-phase clouds for radiation and the Arctic climate is well known. However, the development of mixed-phase cloud parameterizations for use in large-scale models is limited by a lack of both related observations and numerical studies using multidimensional models with advanced microphysics that provide the basis for understanding the relative importance of the different microphysical processes that take place in mixed-phase clouds. To improve the representation of mixed-phase cloud processes in the GISS GCM we use the GISS single-column model coupled to a bin-resolved microphysics (BRM) scheme that was specially designed to simulate mixed-phase clouds and aerosol-cloud interactions. Using this model with the microphysical measurements obtained from the DOE ARM Mixed-Phase Arctic Cloud Experiment (MPACE) campaign in October 2004 at the North Slope of Alaska, we investigate the effect of ice initiation processes and the Bergeron-Findeisen process (BFP) on the glaciation time and longevity of single-layer stratiform mixed-phase clouds. We focus on observations taken during 9-10 October, which indicated the presence of a single-layer mixed-phase cloud. We performed several sets of 12-h simulations to examine model sensitivity to different ice initiation mechanisms and evaluate model output (hydrometeor concentrations, contents, effective radii, precipitation fluxes, and radar reflectivity) against measurements from the MPACE Intensive Observing Period. Overall, the model qualitatively simulates the ice crystal concentration and hydrometeor contents, but it fails to predict quantitatively the effective radii of ice particles and their vertical profiles. In particular, the ice effective radii are overestimated by at least 50%. However, when the same definition as used for the observations was applied, the simulated and observed effective radii were more comparable.
We find that, for the single-layer stratiform mixed-phase clouds simulated, the process of ice-phase initiation due to freezing of supercooled water in both saturated and subsaturated (with respect to water) environments is as important as primary ice crystal origination from water vapor. We also find that the BFP is the process mainly responsible for the glaciation rates of the simulated clouds. These glaciation rates cannot be adequately represented by a water-ice saturation adjustment scheme that depends only on temperature and on liquid and solid hydrometeor contents, as is widely used in bulk microphysics schemes; they are better represented by processes that also account for supersaturation changes as the hydrometeors grow.
Verheggen, Bram G; Westerhout, Kirsten Y; Schreder, Carl H; Augustin, Matthias
2015-01-01
Allergoids are chemically modified allergen extracts administered to reduce allergenicity and to maintain immunogenicity. Oralair® (the 5-grass tablet) is a sublingual native grass allergen tablet for pre- and co-seasonal treatment. Based on a literature review, meta-analysis, and cost-effectiveness analysis the relative effects and costs of the 5-grass tablet versus a mix of subcutaneous allergoid compounds for grass pollen allergic rhinoconjunctivitis were assessed. A Markov model with a time horizon of nine years was used to assess the costs and effects of three-year immunotherapy treatment. Relative efficacy expressed as standardized mean differences was estimated using an indirect comparison on symptom scores extracted from available clinical trials. The Rhinitis Symptom Utility Index (RSUI) was applied as a proxy to estimate utility values for symptom scores. Drug acquisition and other medical costs were derived from published sources as well as estimates for resource use, immunotherapy persistence, and occurrence of asthma. The analysis was executed from the German payer's perspective, which includes payments of the Statutory Health Insurance (SHI) and additional payments by insurants. Comprehensive deterministic and probabilistic sensitivity analyses and different scenarios were performed to test the uncertainty concerning the incremental model outcomes. The applied model predicted a cost-utility ratio of the 5-grass tablet versus a market mix of injectable allergoid products of € 12,593 per QALY in the base case analysis. Predicted incremental costs and QALYs were € 458 (95% confidence interval, CI: € 220; € 739) and 0.036 (95% CI: 0.002; 0.078), respectively. Compared to the allergoid mix the probability of the 5-grass tablet being the most cost-effective treatment option was predicted to be 76% at a willingness-to-pay threshold of € 20,000. 
The results were most sensitive to changes in efficacy estimates, duration of the pollen season, and immunotherapy persistence rates. This analysis suggests the sublingual native 5-grass tablet to be cost-effective relative to a mix of subcutaneous allergoid compounds. The robustness of these statements has been confirmed in extensive sensitivity and scenario analyses.
Diaz, Francisco J
2016-10-15
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.
Sundell, Knut; Ferrer-Wreder, Laura; Fraser, Mark W
2014-06-01
The spread of evidence-based practice throughout the world has resulted in the wide adoption of empirically supported interventions (ESIs) and a growing number of controlled trials of imported and culturally adapted ESIs. This article is informed by outcome research on family-based interventions, including programs listed in the American Blueprints Model and Promising Programs. Evidence from these controlled trials is mixed and, because it comprises both successful and unsuccessful replications of ESIs, it provides clues for the translation of promising programs in the future. At least four explanations appear plausible for the mixed results in replication trials. One has to do with methodological differences across trials. A second deals with ambiguities in the cultural adaptation process. A third explanation is that ESIs in failed replications have not been adequately implemented. A fourth source of variation derives from unanticipated contextual influences that might alter the effects of ESIs when they are transported to other cultures and countries. This article describes a model that allows for the differential examination of adaptations of interventions in new cultural contexts. © The Author(s) 2012.
Influence of non-homogeneous mixing on final epidemic size in a meta-population model.
Cui, Jingan; Zhang, Yanan; Feng, Zhilan
2018-06-18
In meta-population models for infectious diseases, the basic reproduction number [Formula: see text] can be as much as 70% larger under preferential mixing than under homogeneous mixing [J.W. Glasser, Z. Feng, S.B. Omer, P.J. Smith, and L.E. Rodewald, The effect of heterogeneity in uptake of the measles, mumps, and rubella vaccine on the potential for outbreaks of measles: A modelling study, Lancet ID 16 (2016), pp. 599-605. doi: 10.1016/S1473-3099(16)00004-9]. This suggests that realistic mixing can be an important factor to consider in order for models to provide a reliable assessment of intervention strategies. The influence of mixing is more significant when the population is highly heterogeneous. In this paper, another quantity, the final epidemic size ([Formula: see text]) of an outbreak, is considered to examine the influence of mixing and population heterogeneity. A final size relation is derived for a meta-population model accounting for general mixing. The results show that [Formula: see text] can be influenced by the pattern of mixing in a significant way. Another interesting finding is that heterogeneity in various sub-population characteristics may have opposite effects on [Formula: see text] and [Formula: see text].
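In the homogeneous-mixing limit the final size relation collapses to the classical scalar equation z = 1 - exp(-R0 z); the meta-population relation in the paper replaces the scalar by sub-population attack rates coupled through the mixing matrix. A sketch of the scalar case, solved by fixed-point iteration:

```python
import math

def final_size(r0, tol=1e-12, max_iter=10_000):
    """Attack rate z solving z = 1 - exp(-r0 * z) by fixed-point iteration."""
    z = 0.5  # positive starting value
    for _ in range(max_iter):
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

For R0 > 1 the iteration started at a positive value converges to the unique positive root, and a larger R0 (as under preferential mixing) yields a larger final size; for R0 <= 1 it decays to zero, i.e. no major outbreak.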
Mixing Study in a Multi-dimensional Motion Mixer
NASA Astrophysics Data System (ADS)
Shah, R.; Manickam, S. S.; Tomei, J.; Bergman, T. L.; Chaudhuri, B.
2009-06-01
Mixing is an important but poorly understood aspect of petrochemical, food, ceramics, fertilizer and pharmaceutical processing and manufacturing. Deliberate mixing of granular solids is an essential operation in the production of industrial powder products usually constituted from different ingredients. Knowledge of particle flow and mixing in a blender is critical to optimizing design and operation. Since the performance of the product depends on blend homogeneity, the consequences of variability can be detrimental. A common approach to powder mixing is to use a tumbling blender, which is essentially a hollow vessel attached horizontally to a rotating shaft. This single-axis rotary blender is one of the most common batch mixers in industry, and also finds use in a myriad of applications as dryers, kilns, coaters, mills and granulators. In most rotary mixers, radial convection is faster than axial dispersive transport. This slow dispersive process hinders mixing performance in many blending, drying and coating applications. A double cone mixer is designed and fabricated which rotates around two axes, making axial mixing competitive with its radial counterpart. A Discrete Element Method (DEM) based numerical model is developed to simulate the granular flow within the mixer. Digitally recorded mixing states from experiments are used to fine-tune the numerical model. Discrete pocket samplers are also used in the experiments to quantify the characteristics of mixing. A parametric study of the effects of vessel speed and relative rotational speed (between the two axes of rotation) on granular mixing is carried out by experiments and numerical simulation. Incorporation of dual-axis rotation enhances axial mixing by 60 to 85% in comparison to single-axis rotation.
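Pocket-sampler measurements of the kind described are commonly summarized with a mixing index. The sketch below uses the Lacey index, a standard choice for binary blends, though the abstract does not state which index this study used; the sample data are hypothetical:

```python
import random
import statistics

def lacey_index(samples, p, n):
    """Lacey mixing index from per-pocket fractions of a tracer component.

    samples : measured tracer fractions in discrete pocket samples
    p       : overall tracer fraction in the whole charge
    n       : number of particles per pocket sample
    Returns M: 0 = fully segregated, 1 = perfectly random mixture.
    """
    s2 = statistics.pvariance(samples)   # observed variance between pockets
    s0 = p * (1.0 - p)                   # fully segregated limit
    sr = p * (1.0 - p) / n               # fully random (well-mixed) limit
    return (s0 - s2) / (s0 - sr)

random.seed(1)
# Hypothetical pockets: 30 samples of 100 particles from a random 50/50 blend
p, n = 0.5, 100
samples = [sum(random.random() < p for _ in range(n)) / n for _ in range(30)]
print(round(lacey_index(samples, p, n), 2))  # near 1 for a well-mixed blend
```

A completely segregated charge (pockets of pure tracer or pure bulk) gives M = 0, so the index tracks the progress of mixing between the two theoretical limits.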
BEYOND MIXING-LENGTH THEORY: A STEP TOWARD 321D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnett, W. David; Meakin, Casey; Viallet, Maxime
2015-08-10
We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier–Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier–Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier–Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated.
2013-01-01
Background When mathematical modelling is applied in many different application areas, a common task is the estimation of states and parameters based on measurements. In this kind of inference, uncertainties in the times when the measurements were taken are often neglected, but especially in applications from the life sciences, such errors can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are not by themselves ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduces mixed effects into the model. This is a typical setting of clinical studies.
We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative Maximum Likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimates. In the application to the data from the clinical study, the MTU-PF shows a performance similar to that of the standard particle filter with respect to the quality of the estimated parameters, but in addition, the MTU-PF proves to be less prone to degeneracy than the standard particle filter. PMID:23331521
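The MTU-PF itself is specified in the article; the sketch below is only a toy bootstrap particle filter in its spirit, in which each particle re-draws the measurement time from a jitter distribution before evaluating the likelihood. The decay model, noise levels, and tuning constants are all hypothetical:

```python
import math
import random

random.seed(0)

def concentration(k, t):
    """Toy one-compartment decay curve c(t) = exp(-k t) (hypothetical model)."""
    return math.exp(-k * t)

# Synthetic data: true k = 0.3; the *true* sampling times are jittered
# around the nominal ones, mimicking uncertain measurement times.
true_k, sigma_y, sigma_t = 0.3, 0.05, 0.1
times = [0.5 * i for i in range(1, 21)]
data = [concentration(true_k, t + random.gauss(0, sigma_t))
        + random.gauss(0, sigma_y) for t in times]

# Bootstrap particle filter over the parameter k. Each particle re-draws
# the measurement time, a crude nod to the MTU idea (not the authors' MTU-PF).
N = 2000
particles = [random.uniform(0.01, 1.0) for _ in range(N)]
for t, y in zip(times, data):
    weights = [math.exp(-(y - concentration(k, t + random.gauss(0, sigma_t))) ** 2
                        / (2 * sigma_y ** 2)) for k in particles]
    resampled = random.choices(particles, weights=weights, k=N)
    # small roughening noise keeps the parameter particles from collapsing
    particles = [max(1e-3, k + random.gauss(0, 0.005)) for k in resampled]

k_hat = sum(particles) / N
print(round(k_hat, 2))  # close to the true k = 0.3
```

The roughening step is the simplest guard against the degeneracy problem the abstract mentions; the real MTU-PF addresses it with an adaptive stepsize choice instead.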
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first principle models and solving large system of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time-scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An Ordinary Linear Least Squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
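Model ranking by information criteria, as used above, can be illustrated with Gaussian AIC = 2k - 2 ln L computed from the residual sum of squares. The toy crown-width data and the OLS-only comparison below are hypothetical simplifications (no random effects or variance functions are fitted):

```python
import math

def gaussian_aic(rss, n, k):
    """AIC = 2k - 2 ln L for a Gaussian model with k estimated parameters
    (count includes the residual variance), using the MLE log-likelihood."""
    loglik = -0.5 * n * (math.log(2 * math.pi) + math.log(rss / n) + 1.0)
    return 2 * k - 2 * loglik

# Hypothetical crown-width (m) vs diameter (cm) data, for illustration only
d = [10, 12, 15, 18, 20, 24, 27, 30]
cw = [2.1, 2.4, 2.9, 3.3, 3.6, 4.4, 4.8, 5.3]
n = len(d)

# Candidate 1: intercept-only model
mean_cw = sum(cw) / n
rss1 = sum((y - mean_cw) ** 2 for y in cw)

# Candidate 2: simple linear regression cw = a + b*d, fitted by OLS
mean_d = sum(d) / n
b = (sum((x - mean_d) * (y - mean_cw) for x, y in zip(d, cw))
     / sum((x - mean_d) ** 2 for x in d))
a = mean_cw - b * mean_d
rss2 = sum((y - (a + b * x)) ** 2 for x, y in zip(d, cw))

# Lower AIC wins; here the slope model is strongly preferred
print(gaussian_aic(rss1, n, 2), ">", gaussian_aic(rss2, n, 3))
```

The same comparison logic extends to LME candidates, where the likelihood additionally integrates over the plot-level random effects.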
Modeling of the Wegener Bergeron Findeisen process—implications for aerosol indirect effects
NASA Astrophysics Data System (ADS)
Storelvmo, T.; Kristjánsson, J. E.; Lohmann, U.; Iversen, T.; Kirkevåg, A.; Seland, Ø.
2008-10-01
A new parameterization of the Wegener-Bergeron-Findeisen (WBF) process has been developed, and implemented in the general circulation model CAM-Oslo. The new parameterization scheme has important implications for the process of phase transition in mixed-phase clouds. The new treatment of the WBF process replaces a previous formulation, in which the onset of the WBF effect depended on a threshold value of the mixing ratio of cloud ice. As no observational guidance for such a threshold value exists, the previous treatment added uncertainty to estimates of aerosol effects on mixed-phase clouds. The new scheme takes subgrid variability into account when simulating the WBF process, allowing for smoother phase transitions in mixed-phase clouds compared to the previous approach. The new parameterization yields a model state which gives reasonable agreement with observed quantities, allowing for calculations of aerosol effects on mixed-phase clouds involving a reduced number of tunable parameters. Furthermore, we find a significant sensitivity to perturbations in ice nuclei concentrations with the new parameterization, which leads to a reversal of the traditional cloud lifetime effect.
Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, R; Gallagher, B; Neville, J
Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
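The DBMM's transition machinery is not detailed in the abstract; as a hedged stand-in, the sketch below estimates a hard-assignment role-transition matrix from observed role sequences (the real model works with soft mixed memberships rather than single role labels per node):

```python
def transition_matrix(sequences, n_roles):
    """Row-stochastic role-transition matrix from hard role sequences.

    A hedged stand-in for DBMM-style transition modelling: the real model
    tracks soft mixed memberships, not one role label per node per step.
    """
    counts = [[0] * n_roles for _ in range(n_roles)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):      # consecutive (t, t+1) role pairs
            counts[a][b] += 1
    rows = []
    for row in counts:
        total = sum(row)
        rows.append([c / total if total else 1.0 / n_roles for c in row])
    return rows

# Hypothetical role sequences for three nodes over five snapshots
seqs = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [2, 2, 0, 0, 1]]
T = transition_matrix(seqs, 3)
for row in T:
    print([round(x, 2) for x in row])  # each row sums to 1
```

Once estimated, such a matrix supports exactly the uses listed in the abstract: projecting future role distributions and flagging observed transitions with low probability as unusual.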
NASA Astrophysics Data System (ADS)
Burik, P.; Pesek, L.; Kejzlar, P.; Andrsova, Z.; Zubko, P.
2017-01-01
The main idea of this work is to use a physical model to prepare a virtual material with required properties. The model is based on the relationship between microstructure and mechanical properties. The macroscopic (global) mechanical properties of steel are highly dependent upon microstructure, the crystallographic orientation of grains, the distribution of each phase present, etc. In multiphase materials we need to know the local mechanical properties of each phase separately. At the grain-size scale, the local mechanical properties are responsible for the behavior. Nanomechanical testing using depth sensing indentation (DSI) provides a straightforward solution for quantitatively characterizing each of the phases in a microstructure because it is a very powerful technique for the characterization of materials in small volumes. The aims of this experimental investigation are: (i) to test how the mixing rule works for local mechanical properties (indentation hardness HIT) at the microstructure scale, using the DSI technique on steel sheets with different microstructures; (ii) to compare measured global properties with properties obtained by the mixing rule; (iii) to analyze the effect of the crystallographic orientations of grains on the mixing rule.
Grefenstette, John J; Brown, Shawn T; Rosenfeld, Roni; DePasse, Jay; Stone, Nathan T B; Cooley, Phillip C; Wheaton, William D; Fyshe, Alona; Galloway, David D; Sriram, Anuroop; Guclu, Hasan; Abraham, Thomas; Burke, Donald S
2013-10-08
Mathematical and computational models provide valuable tools that help public health planners to evaluate competing health interventions, especially for novel circumstances that cannot be examined through observational or controlled studies, such as pandemic influenza. The spread of diseases like influenza depends on the mixing patterns within the population, and these mixing patterns depend in part on local factors including the spatial distribution and age structure of the population, the distribution of size and composition of households, employment status and commuting patterns of adults, and the size and age structure of schools. Finally, public health planners must take into account the health behavior patterns of the population, patterns that often vary according to socioeconomic factors such as race, household income, and education levels. FRED (a Framework for Reconstructing Epidemic Dynamics) is a freely available open-source agent-based modeling system based closely on models used in previously published studies of pandemic influenza. This version of FRED uses open-access census-based synthetic populations that capture the demographic and geographic heterogeneities of the population, including realistic household, school, and workplace social networks. FRED epidemic models are currently available for every state and county in the United States, and for selected international locations. State and county public health planners can use FRED to explore the effects of possible influenza epidemics in specific geographic regions of interest and to help evaluate the effect of interventions such as vaccination programs and school closure policies. FRED is available under a free open source license in order to contribute to the development of better modeling tools and to encourage open discussion of modeling tools being used to evaluate public health policies. We also welcome participation by other researchers in the further development of FRED.
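FRED itself is a large open-source system with census-based synthetic populations; the following is only a toy agent-based SIR on a random contact network, illustrating the kind of contact-driven dynamics such models simulate. Population size, contact count, infectious period, and transmission probability are all hypothetical:

```python
import random

random.seed(42)

# Toy agent-based SIR on a random directed contact network; a minimal
# sketch of contact-driven spread, far simpler than FRED's synthetic
# populations with households, schools, and workplaces.
N, k, p_transmit, days_infectious = 500, 8, 0.06, 5
contacts = {i: random.sample([j for j in range(N) if j != i], k)
            for i in range(N)}
state = {i: "S" for i in range(N)}            # S, I or R
days_left = {}
for case in random.sample(range(N), 5):       # seed a handful of infections
    state[case] = "I"
    days_left[case] = days_infectious

for day in range(200):
    infected = [i for i in range(N) if state[i] == "I"]
    if not infected:
        break
    for i in infected:
        for j in contacts[i]:                 # daily contacts of agent i
            if state[j] == "S" and random.random() < p_transmit:
                state[j] = "I"
                days_left[j] = days_infectious
        days_left[i] -= 1
        if days_left[i] == 0:
            state[i] = "R"

attack_rate = sum(1 for s in state.values() if s == "R") / N
print(round(attack_rate, 2))  # fraction of the population ever infected
```

Intervention studies of the kind FRED supports amount to re-running such a simulation with modified contact lists (school closure) or a reduced susceptible pool (vaccination) and comparing attack rates.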
NASA Astrophysics Data System (ADS)
Zhu, S.; Sartelet, K. N.; Seigneur, C.
2015-06-01
The Size-Composition Resolved Aerosol Model (SCRAM) for simulating the dynamics of externally mixed atmospheric particles is presented. This new model classifies aerosols by both composition and size, based on a comprehensive combination of all chemical species and their mass-fraction sections. All three main processes involved in aerosol dynamics (coagulation, condensation/evaporation and nucleation) are included. The model is first validated by comparison with a reference solution and with results of simulations using internally mixed particles. The degree of mixing of particles is investigated in a box model simulation using data representative of air pollution in Greater Paris. The relative influence on the mixing state of the different aerosol processes (condensation/evaporation, coagulation) and of the algorithm used to model condensation/evaporation (bulk equilibrium, dynamic) is studied.
The Effect of State Medicaid Case-Mix Payment on Nursing Home Resident Acuity
Feng, Zhanlian; Grabowski, David C; Intrator, Orna; Mor, Vincent
2006-01-01
Objective To examine the relationship between Medicaid case-mix payment and nursing home resident acuity. Data Sources Longitudinal Minimum Data Set (MDS) resident assessments from 1999 to 2002 and Online Survey Certification and Reporting (OSCAR) data from 1996 to 2002, for all freestanding nursing homes in the 48 contiguous U.S. states. Study Design We used a facility fixed-effects model to examine the effect of introducing state case-mix payment on changes in nursing home case-mix acuity. Facility acuity was measured by aggregating the nursing case-mix index (NCMI) from the MDS using the Resource Utilization Group (Version III) resident classification system, separately for new admits and long-stay residents, and by an OSCAR-derived index combining a range of activity of daily living dependencies and special treatment measures. Data Collection/Extraction Methods We followed facilities over the study period to create a longitudinal data file based on the MDS and OSCAR, respectively, and linked facilities with longitudinal data on state case-mix payment policies for the same period. Principal Findings Across three acuity measures and two data sources, we found that states shifting to case-mix payment increased nursing home acuity levels over the study period. Specifically, we observed a 2.5 percent increase in the average acuity of new admits and a 1.3 to 1.4 percent increase in the acuity of long-stay residents, following the introduction of case-mix payment. Conclusions The adoption of case-mix payment increased access to care for higher acuity Medicaid residents. PMID:16899009
Stellar evolution with turbulent diffusion. I. A new formalism of mixing.
NASA Astrophysics Data System (ADS)
Deng, L.; Bressan, A.; Chiosi, C.
1996-09-01
In this paper we present a new formulation of diffusive mixing in stellar interiors aimed at casting light on the kind of mixing that should take place in the so-called overshoot regions surrounding fully convective zones. Key points of the analysis are the inclusion of the concept of the scale length most effective for mixing, by means of which the diffusion coefficient is formulated, and the inclusion of intermittence and stirring, two properties of turbulence known from laboratory fluid dynamics. The formalism is applied to follow the evolution of a 20 Msun star with composition Z=0.008 and Y=0.25. Depending on the value of the diffusion coefficient holding in the overshoot region, the evolutionary behaviour of the test stars ranges from the case of virtually no mixing (semiconvective-like structures) to that of full mixing (standard overshoot models). Indeed, the efficiency of mixing in this region drives the extension of the intermediate fully convective shell developing at the onset of shell H-burning, and in turn the path in the HR Diagram (HRD). Models with low efficiency of mixing burn helium in the core at high effective temperatures, models with intermediate efficiency perform extended loops in the HRD, and models with high efficiency spend the whole core He-burning phase at low effective temperatures. In order to cast light on this important point of stellar structure, we test whether or not a convective layer can develop in the region of the H-burning shell. More precisely, we examine whether the Schwarzschild or the Ledoux criterion ought to be adopted in this region. Furthermore, we test the response of stellar models to the kind of mixing supposed to occur in the H-burning shell regions. Finally, comparing the time scale of thermal dissipation to the evolutionary time scale, we conclude that no mixing should occur in this region.
The models with intermediate efficiency of mixing and no mixing at all in the shell H-burning regions are of particular interest as they possess at the same time evolutionary characteristics that are separately typical of models calculated with different schemes of mixing. In other words, the new models share the same properties of models with standard overshoot, namely a wider main sequence band, higher luminosity, and longer lifetimes than classical models, but they also possess extended loops that are the main signature of the classical (semiconvective) description of convection at the border of the core.
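Treating overshoot mixing as diffusion can be illustrated with a 1-D explicit finite-difference scheme acting on a composition profile. This is a generic sketch of diffusive mixing with zero-flux boundaries, not the authors' formalism, and the grid size and diffusion coefficient are arbitrary:

```python
# Toy 1-D diffusive mixing of a composition step with zero-flux
# boundaries, integrated with an explicit (FTCS) scheme; a generic
# sketch in the spirit of diffusive overshoot mixing, all values arbitrary.
nz, dt, dx, D = 100, 0.4, 1.0, 0.5
assert D * dt / dx ** 2 <= 0.5                # explicit stability limit
X = [1.0 if i < nz // 2 else 0.0 for i in range(nz)]  # step in composition

total0 = sum(X)
for _ in range(2000):
    flux = [0.0] * (nz + 1)                   # flux[0] = flux[nz] = 0
    for i in range(1, nz):
        flux[i] = -D * (X[i] - X[i - 1]) / dx
    # conservative update: cell change equals the flux divergence
    X = [X[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(nz)]

print(round(X[0], 2), round(X[-1], 2))        # profile relaxing toward 0.5
```

Because the update is written in flux form, the total composition is conserved to rounding error, which is the basic sanity check any such mixing scheme must pass; varying D then plays the role of varying the mixing efficiency discussed above.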
NASA Technical Reports Server (NTRS)
Whorton, M. S.
1998-01-01
Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of the difficulty of obtaining accurate models for flexible space structures. Achieving sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and to tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading off nominal performance against robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine the maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines the parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and the improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
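The core of the algorithm evaluated above is standard preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner. Below is a minimal dense sketch on a tiny symmetric positive-definite system; production mixed-model software instead applies the same iteration matrix-free, "iterating on data", so the coefficient matrix here is only a stand-in:

```python
import math

def pcg_diag(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner for a
    symmetric positive-definite system A x = b. Dense, for illustration."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x with x = 0
    m_inv = [1.0 / A[i][i] for i in range(n)]  # the diagonal preconditioner
    z = [m_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if math.sqrt(sum(ri * ri for ri in r)) < tol:
            break
        z = [m_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Tiny SPD stand-in for a coefficient matrix of mixed-model equations
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg_diag(A, b)
print([round(v, 4) for v in x])
```

The memory trade-off noted in the abstract is visible even here: PCG carries the extra vectors z and p alongside x and r, which is what a successive overrelaxation sweep avoids.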
The effect of dense gas dynamics on loss in ORC transonic turbines
NASA Astrophysics Data System (ADS)
Durá Galiana, FJ; Wheeler, APS; Ong, J.; Ventura, CA de M.
2017-03-01
This paper describes a number of recent investigations into the effect of dense gas dynamics on ORC transonic turbine performance. We describe a combination of experimental, analytical and computational studies which are used to determine how, in particular, trailing-edge loss changes with the choice of working fluid. A Ludwieg tube transient wind-tunnel is used to simulate a supersonic base flow which mimics an ORC turbine vane trailing-edge flow. Experimental measurements of wake profiles and trailing-edge base pressure with different working fluids are used to validate high-order CFD simulations. In order to capture the correct mixing in the base region, Large-Eddy Simulations (LES) are performed and verified against the experimental data by comparing LES runs with different spatial and temporal resolutions. RANS and Detached-Eddy Simulation (DES) results are also compared with the experimental data. The effect of the different modelling methods and working fluids on mixed-out loss is then determined. Current results indicate that LES gives the closest agreement with the experimental results, and dense gas effects are consistently predicted to increase loss.
Mixing-controlled reactive transport on travel times in heterogeneous media
NASA Astrophysics Data System (ADS)
Luo, J.; Cirpka, O.
2008-05-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters, including mixing-related quantities such as dispersivities and kinetic mass-transfer coefficients. In most applications, breakthrough curves of conservative and reactive compounds are measured at only a few locations, and models are calibrated by matching these breakthrough curves, which is an ill-posed inverse problem. By contrast, travel-time based transport models avoid costly aquifer characterization. By considering breakthrough curves measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the travel-time based framework, the breakthrough curve of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct travel-time value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of travel times, which also determines the weights associated with each streamtube. Key issues in using the travel-time based framework include the description of mixing mechanisms and the estimation of the travel-time distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach to determining the travel-time distribution, given a breakthrough curve integrated over an observation plane and estimated mixing parameters.
The latter approach is superior to fitting parametric models in cases where the true travel-time distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom in the combinations of mixing parameters and travel-time distributions that fit conservative breakthrough curves and describe their tailing. Reactive transport cases with a bimolecular instantaneous irreversible reaction and a dual Michaelis-Menten problem demonstrate that the mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated from local breakthrough curves.
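The streamtube picture described above can be sketched as a weighted average of advective-dispersive breakthrough curves over a discrete travel-time distribution. The bimodal distribution and Peclet number below are hypothetical, and the single-term step-input ADE solution is a common approximation rather than the paper's exact formulation:

```python
import math

def btc_ade(t, tau, pe):
    """Approximate breakthrough of one advective-dispersive streamtube with
    mean travel time tau and Peclet number pe, for a step input."""
    if t <= 0:
        return 0.0
    arg = math.sqrt(pe * tau / (4.0 * t)) * (1.0 - t / tau)
    return 0.5 * math.erfc(arg)

def ensemble_btc(t, taus, weights, pe):
    """Weighted average over non-interacting streamtubes: spreading comes
    from the spread of taus, mixing from dispersion (pe) within each tube."""
    return sum(w * btc_ade(t, tau, pe) for tau, w in zip(taus, weights))

# Hypothetical bimodal travel-time distribution (fast and slow streamtubes)
taus = [1.0, 1.2, 3.0, 3.5]
weights = [0.3, 0.3, 0.2, 0.2]
pe = 50.0
for t in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(t, round(ensemble_btc(t, taus, weights, pe), 3))
```

With a bimodal set of taus the flux-averaged curve shows two risers even though each individual streamtube breakthrough is smooth, which is exactly the situation where the nonparametric travel-time estimate outperforms a single parametric family.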
Genomic selection for slaughter age in pigs using the Cox frailty model.
Santos, V S; Martins Filho, S; Resende, M D V; Azevedo, C F; Lopes, P S; Guimarães, S E F; Glória, L S; Silva, F F
2015-10-19
The aim of this study was to compare genomic selection methodologies using a linear mixed model and the Cox survival model. We used data from an F2 population of pigs, in which the response variable was the time in days from birth to the culling of the animal and the covariates were 238 markers [237 single nucleotide polymorphism (SNP) plus the halothane gene]. The data were corrected for fixed effects, and the accuracy of the method was determined based on the correlation of the ranks of predicted genomic breeding values (GBVs) in both models with the corrected phenotypic values. The analysis was repeated with a subset of SNP markers with largest absolute effects. The results were in agreement with the GBV prediction and the estimation of marker effects for both models for uncensored data and for normality. However, when considering censored data, the Cox model with a normal random effect (S1) was more appropriate. Since there was no agreement between the linear mixed model and the imputed data (L2) for the prediction of genomic values and the estimation of marker effects, the model S1 was considered superior as it took into account the latent variable and the censored data. Marker selection increased correlations between the ranks of predicted GBVs by the linear and Cox frailty models and the corrected phenotypic values, and 120 markers were required to increase the predictive ability for the characteristic analyzed.
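The linear-mixed-model side of such comparisons often reduces to ridge-type estimation of marker effects, beta = (X'X + lambda*I)^(-1) X'y. The two-marker example below is a hypothetical illustration of that machinery only; the Cox frailty model the study favours for censored data is not sketched:

```python
# Hedged sketch of ridge-type (SNP-BLUP-like) estimation of marker
# effects on two hypothetical markers; genotypes, phenotypes, and the
# shrinkage parameter are all illustrative values.
X = [[0, 1], [1, 2], [2, 0], [2, 2], [0, 0]]   # genotype codes 0/1/2
y = [90.0, 100.0, 110.0, 120.0, 85.0]          # corrected phenotypes
lam = 1.0
m = len(X)

ybar = sum(y) / m
yc = [v - ybar for v in y]                     # center the phenotypes

# 2x2 ridge normal equations (X'X + lambda*I) beta = X'yc
xtx = [[sum(X[r][i] * X[r][j] for r in range(m)) + (lam if i == j else 0.0)
        for j in range(2)] for i in range(2)]
xty = [sum(X[r][i] * yc[r] for r in range(m)) for i in range(2)]

# Closed-form 2x2 solve via Cramer's rule
det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
beta = [(xtx[1][1] * xty[0] - xtx[0][1] * xty[1]) / det,
        (xtx[0][0] * xty[1] - xtx[0][1] * xty[0]) / det]
gbv = [sum(X[r][i] * beta[i] for i in range(2)) for r in range(m)]
print([round(v, 2) for v in beta])  # estimated marker effects
```

Ranking animals by the resulting GBVs and correlating those ranks with corrected phenotypes is the accuracy measure the study uses to compare the linear and Cox frailty approaches.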
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xiaohu; Shi, Di; Wang, Zhiwei
Shunt FACTS devices, such as the Static Var Compensator (SVC), are capable of providing local reactive power compensation. They are widely used in the network to reduce real power loss and improve the voltage profile. This paper proposes a planning model based on mixed integer conic programming (MICP) to optimally allocate SVCs in the transmission network considering load uncertainty. The load uncertainties are represented by a number of scenarios. Reformulation and linearization techniques are utilized to transform the original non-convex model into a convex second order cone programming (SOCP) model. Numerical case studies based on the IEEE 30-bus system demonstrate the effectiveness of the proposed planning model.
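The "reformulation and linearization" step in models of this kind typically replaces the product of a binary siting decision and a continuous variable with linear envelope constraints. A minimal sketch, with an invented bound Y and variable names that are assumptions rather than the paper's notation:

```python
# Standard big-M envelope for z = x * y with x binary and 0 <= y <= Y.
# If all four inequalities hold, z behaves like the (non-convex) product.
def envelope_ok(x, y, z, Y=10.0):
    """Check the linear envelope for z = x*y, x in {0,1}, 0 <= y <= Y."""
    tol = 1e-9
    return (z <= Y * x + tol          # forces z = 0 when x = 0
            and z <= y + tol          # z never exceeds y
            and z >= y - Y * (1 - x) - tol   # forces z = y when x = 1
            and z >= -tol)

# Brute-force check that z = x*y always satisfies the envelope
for x in (0, 1):
    for y in (0.0, 2.5, 7.3, 10.0):
        assert envelope_ok(x, y, x * y)
# ...and that a point violating z = x*y is rejected
assert not envelope_ok(0, 5.0, 5.0)
print("envelope holds")
```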
The use of mixed effects ANCOVA to characterize vehicle emission profiles
DOT National Transportation Integrated Search
2000-09-01
A mixed effects analysis of covariance model to characterize mileage dependent emissions profiles for any given group of vehicles having a common model design is used in this paper. These types of evaluations are used by the U.S. Environmental Protec...
Fabian Uzoh; William W. Oliver
2006-01-01
A height increment model is developed and evaluated for individual trees of ponderosa pine throughout the species' range in the western United States. The data set used in this study came from long-term permanent research plots in even-aged, pure stands, both planted and of natural origin. The database consists of six levels-of-growing-stock studies supplemented by initial...
An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1973-01-01
The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Kitagawa, Shuji; Fujiwara, Megumi; Okinaka, Yuta; Yutani, Reiko; Teraoka, Reiko
2015-01-01
White petrolatum is a mixture of solid and liquid hydrocarbons, and its structure can be affected by shear stress, which might in turn alter its rheological properties. In this study, we used polarization microscopy to investigate how different mixing methods affect the structure of white petrolatum. We used two different mixing methods: mixing with a rotation/revolution mixer and mixing with an ointment slab and an ointment spatula. The extent of the fragmentation and dispersal of the solid portion of white petrolatum depended on the mixing conditions. Next, we examined the changes in the structure of a salicylic acid ointment, in which white petrolatum was used as a base, induced by mixing and found that the salicylic acid solids within the ointment were also dispersed. In addition to these structural changes, the viscosity and thixotropic behavior of both test substances also decreased in a mixing condition-dependent manner. The reductions in these parameters were most marked after mixing with a rotation/revolution mixer, and similar results were obtained for spreadability. We also investigated the effects of the mixing procedure on the skin accumulation and permeation of salicylic acid, which were increased by approximately three-fold after mixing. Little difference in skin accumulation or permeation was detected between the two mixing methods. These findings indicate that mixing procedures themselves affect the utility and physiological effects of white petrolatum-based ointments. Therefore, these effects should be considered when mixing is required for the clinical use of petrolatum-based ointments.
Computing eddy-driven effective diffusivity using Lagrangian particles
Wolfram, Phillip J.; Ringler, Todd D.
2017-08-14
A novel method to derive effective diffusivity from Lagrangian particle trajectory data sets is developed and then analyzed relative to particle-derived meridional diffusivity for eddy-driven mixing in an idealized circumpolar current. Quantitative standard dispersion- and transport-based mixing diagnostics are defined, compared and contrasted to motivate the computation and use of effective diffusivity derived from Lagrangian particles. We compute the effective diffusivity by first performing scalar transport on Lagrangian control areas using stored trajectories computed from online Lagrangian In-situ Global High-performance particle Tracking (LIGHT) using the Model for Prediction Across Scales Ocean (MPAS-O). Furthermore, the Lagrangian scalar transport scheme is compared against an Eulerian scalar transport scheme. Spatially-variable effective diffusivities are computed from resulting time-varying cumulative concentrations that vary as a function of cumulative area. The transport-based Eulerian and Lagrangian effective diffusivity diagnostics are found to be qualitatively consistent with the dispersion-based diffusivity. All diffusivity estimates show a region of increased subsurface diffusivity within the core of an idealized circumpolar current and results are within a factor of two of each other. The Eulerian and Lagrangian effective diffusivities are most similar; smaller and more spatially diffused values are obtained with the dispersion-based diffusivity computed with particle clusters.
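A minimal sketch of the dispersion-based diffusivity diagnostic this abstract compares against, with synthetic random-walk "trajectories" standing in for LIGHT/MPAS-O particle output. All parameters are invented:

```python
# Estimate meridional diffusivity from particle dispersion:
# for a diffusive process, Var[y(t)] = 2 * kappa * t.
import numpy as np

rng = np.random.default_rng(0)
kappa_true = 2.0                      # imposed diffusivity (arbitrary units)
dt, nsteps, npart = 0.1, 1000, 2000

# Synthetic random-walk trajectories in place of stored particle tracks
steps = rng.normal(0.0, np.sqrt(2 * kappa_true * dt), size=(nsteps, npart))
y = np.cumsum(steps, axis=0)          # meridional displacement per particle

t_final = dt * nsteps
kappa_est = y[-1].var() / (2 * t_final)   # dispersion-based estimate
print(round(kappa_est, 2))            # close to the imposed value of 2.0
```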
Janssen, Dirk P
2012-03-01
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F1 and F2) has become the standard. A simple and elegant solution using mixed-model analysis has been available for 15 years, and recent improvements in statistical software have made mixed-model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
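For contrast with the mixed-model approach the article advocates, here is a simulated example of the two classical aggregates (F1: by-participant means; F2: by-item means) on fully crossed items-by-participants data. All parameter values are invented:

```python
# Simulate reaction times with crossed participant and item random effects,
# then form the F1 (by-participant) and F2 (by-item) difference scores.
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_item = 20, 30
subj_re = rng.normal(0, 50, n_subj)      # participant random effect (ms)
item_re = rng.normal(0, 30, n_item)      # item random effect (ms)
cond_effect = 25.0                        # true fixed condition effect (ms)

# Condition A, then condition B = A + effect + fresh trial noise
rt_a = (600 + subj_re[:, None] + item_re[None, :]
        + rng.normal(0, 40, (n_subj, n_item)))
rt_b = rt_a + cond_effect + rng.normal(0, 40, (n_subj, n_item))

f1_diff = (rt_b - rt_a).mean(axis=1)   # one difference score per participant
f2_diff = (rt_b - rt_a).mean(axis=0)   # one difference score per item
print(round(f1_diff.mean(), 1), round(f2_diff.mean(), 1))
```

In a balanced design both grand means coincide; the F1/F2 problem is that neither aggregate alone accounts for both sources of random variation, which is what a crossed-random-effects mixed model does.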
McClintock, B.T.; White, Gary C.; Burnham, K.P.; Pryde, M.A.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
In recent years, the mark-resight method for estimating abundance when the number of marked individuals is known has become increasingly popular. By using field-readable bands that may be resighted from a distance, these techniques can be applied to many species, and are particularly useful for relatively small, closed populations. However, due to the different assumptions and general rigidity of the available estimators, researchers must often commit to a particular model without rigorous quantitative justification for model selection based on the data. Here we introduce a nonlinear logit-normal mixed effects model addressing this need for a more generalized framework. Similar to models available for mark-recapture studies, the estimator allows a wide variety of sampling conditions to be parameterized efficiently under a robust sampling design. Resighting rates may be modeled simply or with more complexity by including fixed temporal and random individual heterogeneity effects. Using information theory, the model(s) best supported by the data may be selected from the candidate models proposed. Under this generalized framework, we hope the uncertainty associated with mark-resight model selection will be reduced substantially. We compare our model to other mark-resight abundance estimators when applied to mainland New Zealand robin (Petroica australis) data recently collected in Eglinton Valley, Fiordland National Park and summarize its performance in simulation experiments.
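The logit-normal individual heterogeneity at the core of the estimator above can be sketched in a few lines: each marked individual's resighting probability is the logistic transform of a mean logit rate plus a normal random effect. The parameter values are illustrative assumptions, not estimates from the robin study:

```python
# Simulate mark-resight data with logit-normal individual heterogeneity.
import numpy as np

rng = np.random.default_rng(2)
n_marked, n_occasions = 100, 8
mu, sigma = 0.5, 1.0                     # mean logit rate, heterogeneity sd

eps = rng.normal(0, sigma, n_marked)     # random individual effects
p = 1 / (1 + np.exp(-(mu + eps)))        # individual resighting probabilities

# Resighting histories across the sampling occasions of a robust design
sightings = rng.binomial(1, p[:, None], (n_marked, n_occasions))
print(round(sightings.mean(), 2))        # overall resighting rate
```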
Qiu, Hao; Versieren, Liske; Rangel, Georgina Guzman; Smolders, Erik
2016-01-19
Soil contamination with copper (Cu) is often associated with zinc (Zn), and the biological response to such mixed contamination is complex. Here, we investigated Cu and Zn mixture toxicity to Hordeum vulgare in three different soils, the premise being that the observed interactions are mainly due to effects on bioavailability. The toxic effect of Cu and Zn mixtures on seedling root elongation was more than additive (i.e., synergism) in soils with high and medium cation-exchange capacity (CEC) but less than additive (antagonism) in a low-CEC soil. This was found when we expressed the dose as the conventional total soil concentration. In contrast, antagonism was found in all soils when we expressed the dose as free-ion activities in soil solution, indicating that there is metal-ion competition for binding to the plant roots. Neither a concentration addition nor an independent action model explained mixture effects, irrespective of the dose expressions. In contrast, a multimetal BLM model and a WHAM-Ftox model successfully explained the mixture effects across all soils and showed that bioavailability factors mainly explain the interactions in soils. The WHAM-Ftox model is a promising tool for the risk assessment of mixed-metal contamination in soils.
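The two reference models this abstract tests, concentration addition and independent action, can be written down in a few lines. The concentrations and EC50s below are invented for illustration:

```python
# Concentration addition (CA): sum of toxic units; the mixture sits at its
# EC50 when the toxic units sum to 1.
def concentration_addition(conc, ec50):
    return sum(c / e for c, e in zip(conc, ec50))

# Independent action (IA): combined effect from independent single-metal
# effects (each effect expressed as a fraction between 0 and 1).
def independent_action(effects):
    prod = 1.0
    for e in effects:
        prod *= (1.0 - e)
    return 1.0 - prod

tu = concentration_addition([5.0, 20.0], [10.0, 40.0])   # made-up Cu, Zn doses
e_mix = independent_action([0.2, 0.3])
print(tu, round(e_mix, 2))   # 1.0 toxic units; combined effect 0.44
```

Synergism or antagonism is then diagnosed as the observed mixture effect falling above or below these additivity predictions.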
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation
NASA Technical Reports Server (NTRS)
Holt, James B.; Ruf, Joe
1999-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.
Paying for Primary Care: The Factors Associated with Physician Self-selection into Payment Models.
Rudoler, David; Deber, Raisa; Barnsley, Janet; Glazier, Richard H; Dass, Adrian Rohit; Laporte, Audrey
2015-09-01
To determine the factors associated with primary care physician self-selection into different payment models, we used a panel of eight waves of administrative data for all primary care physicians who practiced in Ontario between 2003/2004 and 2010/2011. We used a mixed effects logistic regression model to estimate physicians' choice of three alternative payment models: fee for service, enhanced fee for service, and blended capitation. We found that primary care physicians self-selected into payment models based on existing practice characteristics. Physicians with more complex patient populations were less likely to switch into capitation-based payment models where higher levels of effort were not financially rewarded. These findings suggested that investigations aimed at assessing the impact of different primary care reimbursement models on outcomes, including costs and access, should first account for potential selection effects. Copyright © 2015 John Wiley & Sons, Ltd.
J. Breidenbach; E. Kublin; R. McGaughey; H.-E. Andersen; S. Reutebuch
2008-01-01
For this study, hierarchical data sets--in that several sample plots are located within a stand--were analyzed for study sites in the USA and Germany. The German data had an additional hierarchy as the stands are located within four distinct public forests. Fixed-effects models and mixed-effects models with a random intercept on the stand level were fit to each data...
Kitagawa, Shuji; Yutani, Reiko; Kodani, Rhu-Ichi; Teraoka, Reiko
2016-01-01
Most steroidal ointments contain propylene glycol (PG) and surfactants, which improve the solubility of corticosteroids in white petrolatum. Surfactants aid the uniform dispersal of PG within white petrolatum. Since the surfactants used in generic ointments are usually different from those used in brand name ointments, we investigated the effects of surfactants on the rheological properties of three brand name ointments and six equivalent generic ointments. We detected marked differences in hardness, adhesiveness, and spreadability among the ointments. Further examinations of model ointments consisting of white petrolatum, PG, and surfactants revealed that the abovementioned properties, especially hardness and adhesiveness, were markedly affected by the surfactants. Since steroidal ointments are often admixed with moisturizing creams prior to use, we investigated the mixing compatibility of the ointments with heparinoid cream and how this was affected by their surfactants. We found that the ointments containing glyceryl monostearate demonstrated good mixing compatibility, whereas those containing non-ionic surfactants with polyoxyethylene chains exhibited phase separation. These results were also consistent with the findings for the model ointments, which indicates that the mixing compatibility of steroidal ointments with heparinoid cream is determined by the emulsifying capacity of the surfactants in their oily bases.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
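One of the two sampling strategies named above, proper orthogonal decomposition, amounts to an SVD of a snapshot matrix followed by an energy-based truncation. The sketch below uses a synthetic low-rank snapshot matrix rather than an actual porous-media solve; all sizes are invented:

```python
# POD sketch: extract a low-dimensional basis from solution snapshots.
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_snap, rank = 200, 40, 3

# Synthetic snapshots that live in a 3-dimensional subspace plus tiny noise
modes = rng.normal(size=(n_dof, rank))
coeffs = rng.normal(size=(rank, n_snap))
snapshots = modes @ coeffs + 1e-6 * rng.normal(size=(n_dof, n_snap))

# POD basis = leading left singular vectors, truncated by cumulative energy
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1 - 1e-6)) + 1   # modes for ~all the energy
basis = U[:, :r]
print(r)   # recovers the underlying dimension, 3
```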
Models of Plumes: Their Flow, Their Geometric Spreading, and Their Mixing with Interplume Flow
NASA Technical Reports Server (NTRS)
Suess, Steven T.
1998-01-01
There are two types of plume flow models: (1) 1D models using ad hoc spreading functions, f(r); (2) magnetohydrodynamic (MHD) models. 1D models can be multifluid, time dependent, and can incorporate very general descriptions of the energetics. They confirm empirical results that plume flow is slow relative to requirements for high-speed wind. But no published 1D model incorporates the rapid local spreading at the base (f(r)), which has an important effect on mass flux. The one published MHD model is isothermal, but confirms that if β = 8πp/|B|² <
NASA Technical Reports Server (NTRS)
Fleming, Eric L.; Jackman, Charles H.; Stolarski, Richard S.; Considine, David B.
1998-01-01
We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry assessment model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects of gravity waves and equatorial Kelvin waves. We will present an overview of the new algorithm, and show various model-data comparisons of long-lived tracers as part of the model validation. We will also show how the new algorithm gives substantially better agreement with observations compared to our previous model transport. The new model captures much of the qualitative structure and seasonal variability observed in methane, water vapor, and total ozone. These include: isolation of the tropics and the winter polar vortex, the well-mixed surf-zone region of the winter sub-tropics and mid-latitudes, and the propagation of seasonal signals in the tropical lower stratosphere. Model simulations of carbon-14 and strontium-90 compare fairly well with observations in reproducing the peak in mixing ratio at 20-25 km, and the decrease with altitude in mixing ratio above 25 km. We also ran time-dependent simulations of SF6 from which the model mean age of air values were derived. The oldest air (5.5 to 6 years) occurred in the high-latitude upper stratosphere during fall and early winter of both hemispheres, and in the southern hemisphere lower stratosphere during late winter and early spring. The latitudinal gradient of the mean ages also compares well with ER-2 aircraft observations in the lower stratosphere.
Differential expression analysis for RNAseq using Poisson mixed models
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny
2017-01-01
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
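A toy illustration of why a latent Gaussian random effect on the Poisson rate is useful here: it produces over-dispersed counts (variance well above the mean), which a plain Poisson model cannot capture. The parameters are arbitrary, and this is a simulation sketch, not the MACAU inference algorithm:

```python
# Poisson-lognormal simulation: counts conditional on a lognormal rate.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
log_mu, sigma = 1.0, 0.8

eta = log_mu + rng.normal(0, sigma, n)    # latent Gaussian random effect
counts = rng.poisson(np.exp(eta))         # Poisson given the individual rate

print(round(counts.mean(), 2), round(counts.var(), 2))
# For a plain Poisson these would match; here the variance is far larger.
```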
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, M.; Zhang, D.; Wang, Z.; Wang, Y.
2017-12-01
Mixed-phase clouds are persistently observed over the Arctic, and the phase partitioning between cloud liquid and ice hydrometeors in mixed-phase clouds has important impacts on the surface energy budget and Arctic climate. In this study, we test the NCAR Community Atmosphere Model Version 5 (CAM5) with the single-column and weather forecast configurations and evaluate the model performance against observation data from the DOE Atmospheric Radiation Measurement (ARM) Program's M-PACE field campaign in October 2004 and long-term ground-based multi-sensor remote sensing measurements. Like most global climate models, we find that CAM5 also poorly simulates the phase partitioning in mixed-phase clouds by significantly underestimating the cloud liquid water content. Assuming pocket structures in the distribution of cloud liquid and ice in mixed-phase clouds, as suggested by in situ observations, provides a plausible solution to improve the model performance by reducing the Wegener-Bergeron-Findeisen (WBF) process rate. In this study, the modification of the WBF process in the CAM5 model has been achieved by applying a stochastic perturbation to the time scale of the WBF process relevant to both ice and snow to account for the heterogeneous mixture of cloud liquid and ice. Our results show that this modification of the WBF process improves the modeled phase partitioning in mixed-phase clouds. The seasonal variation of mixed-phase cloud properties is also better reproduced in the model in comparison with the long-term ground-based remote sensing observations. Furthermore, the phase partitioning is insensitive to the reassignment time step of perturbations.
Prouty, Nancy G.; Swarzenski, Peter W.; Fackrell, Joseph; Johannesson, Karen H.; Palmore, C. Diane
2017-01-01
Study region: The groundwater-influenced coastal waters along the arid Kona coast of the Big Island, Hawai’i. Study focus: A salinity- and phase partitioning-based mixing experiment was constructed using contrasting groundwater endmembers along the arid Kona coast of the Big Island, Hawai’i, and local open seawater to better understand biogeochemical and physicochemical processes that influence the fate of submarine groundwater discharge (SGD)-derived nutrients and trace elements. New hydrological insights for the region: Treated wastewater effluent was the main source of nutrient enrichment downstream at the Honokōhau Harbor site. Conservative mixing for some constituents, such as nitrate + nitrite, illustrates the effectiveness of physical mixing in maintaining oceanic concentrations in the colloid (0.02–0.45 μm) and truly dissolved (
Unofficial Road Building in the Brazilian Amazon: Dilemmas and Models for Road Governance
NASA Technical Reports Server (NTRS)
Perz, Stephen G.; Overdevest, Christine; Caldas, Marcellus M.; Walker, Robert T.; Arima, Eugenio Y.
2007-01-01
Unofficial roads form dense networks in landscapes, generating a litany of negative ecological outcomes, but unofficial roads in frontier areas are also instrumental in local livelihoods and community development. This trade-off poses dilemmas for the governance of unofficial roads. Unofficial road building in frontier areas of the Brazilian Amazon illustrates the challenges of 'road governance.' Both state-based and community based governance models exhibit important liabilities for governing unofficial roads. Whereas state-based governance has experienced difficulties in adapting to specific local contexts and interacting effectively with local interest groups, community-based governance has a mixed record owing to social inequalities and conflicts among local interest groups. A state-community hybrid model may offer more effective governance of unofficial road building by combining the oversight capacity of the state with locally grounded community management via participatory decision-making.
Assessment of eight HPV vaccination programs implemented in lowest income countries.
Ladner, Joël; Besson, Marie-Hélène; Hampshire, Rachel; Tapert, Lisa; Chirenje, Mike; Saba, Joseph
2012-05-23
Cervical cancer, though preventable, continues to be the third most common cancer in women worldwide, especially in the lowest-income countries. Prophylactic HPV vaccination should help to reduce the morbidity and mortality associated with cervical cancer. The purpose of the study was to describe the results of and key concerns in eight HPV vaccination programs conducted in seven lowest-income countries through the Gardasil Access Program (GAP). The GAP provides free HPV vaccine to organizations and institutions in lowest-income countries. The HPV vaccination programs were entirely developed, implemented and managed by local institutions. Institutions submitted application forms describing institution characteristics, target population, and communication and delivery strategies. After completion of the vaccination campaign (3 doses), institutions provided a final project report with data on doses administered and vaccination models. Two indicators were calculated: program vaccination coverage and adherence. Qualitative data were also collected in the following areas: government and community involvement; communication and sensitization; training and logistics resources; and challenges. A total of eight programs were implemented in seven countries. The eight programs initially targeted a total of 87,580 girls, of whom 76,983 received the full 3-dose vaccine course, with a mean program vaccination coverage of 87.8%; the mean adherence between the first and third doses of vaccine was 90.9%. Three programs used school-based delivery models, 2 used health facility-based models, and 3 used mixed models that included schools and health facilities. Models that included school-based vaccination were most effective at reaching girls aged 9-13 years. Mixed models comprising school and health facility-based vaccination had better overall performance compared with models using just one of the methods.
Increased rates of program coverage and adherence were positively correlated with the number of vaccination sites. Qualitative insights from the school-based models highlighted the high level of coordination and logistics needed to facilitate vaccine administration, and showed that conducting vaccinations within the academic year lowered the risk of girls being lost to follow-up. Mixed models that incorporate both schools and health facilities appear to be the most effective at delivering HPV vaccine. This study provides lessons for the development of public health programs and policies as countries move toward national decision-making on HPV vaccination.
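The two indicators defined in this abstract are simple ratios; the snippet below computes the coverage indicator for the aggregate numbers the abstract reports (87,580 girls targeted, 76,983 fully vaccinated). Note the abstract's 87.8% is a per-program mean, so the aggregate figure differs slightly:

```python
# Program vaccination coverage: fully vaccinated girls / girls targeted.
def vaccination_coverage(fully_vaccinated, targeted):
    return fully_vaccinated / targeted

cov = vaccination_coverage(76_983, 87_580)
print(f"{cov:.1%}")   # aggregate coverage across the eight programs
```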
Assessment of eight HPV vaccination programs implemented in lowest income countries
2012-01-01
Background Cervix cancer, preventable, continues to be the third most common cancer in women worldwide, especially in lowest income countries. Prophylactic HPV vaccination should help to reduce the morbidity and mortality associated with cervical cancer. The purpose of the study was to describe the results of and key concerns in eight HPV vaccination programs conducted in seven lowest income countries through the Gardasil Access Program (GAP). Methods The GAP provides free HPV vaccine to organizations and institutions in lowest income countries. The HPV vaccination programs were entirely developed, implemented and managed by local institutions. Institutions submitted application forms with institution characteristics, target population, communication delivery strategies. After completion of the vaccination campaign (3 doses), institutions provided a final project report with data on doses administered and vaccination models. Two indicators were calculated, the program vaccination coverage and adherence. Qualitative data were also collected in the following areas: government and community involvement; communication, and sensitization; training and logistics resources, and challenges. Results A total of eight programs were implemented in seven countries. The eight programs initially targeted a total of 87,580 girls, of which 76,983 received the full 3-dose vaccine course, with mean program vaccination coverage of 87.8%; the mean adherence between the first and third doses of vaccine was 90.9%. Three programs used school-based delivery models, 2 used health facility-based models, and 3 used mixed models that included schools and health facilities. Models that included school-based vaccination were most effective at reaching girls aged 9-13 years. Mixed models comprising school and health facility-based vaccination had better overall performance compared with models using just one of the methods. 
Increased rates of program coverage and adherence were positively correlated with the number of vaccination sites. Key qualitative insights from the school-based models included the high level of coordination and logistics support needed for vaccine administration, and the fact that conducting vaccinations within a single academic year lowered the risk of girls being lost to follow-up. Conclusion Mixed models that incorporate both schools and health facilities appear to be the most effective at delivering HPV vaccine. This study provides lessons for the development of public health programs and policies as countries go forward in national decision-making for HPV vaccination. PMID:22621342
Theresa B. Jain; Mike A. Battaglia; Han-Sup Han; Russell T. Graham; Christopher R. Keyes; Jeremy S. Fried; Jonathan E. Sandquist
2014-01-01
Implementing fuel treatments in every place where it could be beneficial to do so is impractical and not cost effective under any plausible specification of objectives. Only some of the many possible kinds of treatments will be effective in any particular stand and there are some stands that seem to defy effective treatment. In many more, effective treatment costs far...
Lu, Tao; Wang, Min; Liu, Guangying; Dong, Guang-Hui; Qian, Feng
2016-01-01
It is well known that there is a strong relationship between HIV viral load and CD4 cell counts in AIDS studies. However, the relationship between them changes during the course of treatment and may vary among individuals. During treatment, some individuals may experience terminal events such as death. Because the terminal event may be related to the individual's viral load measurements, the terminal mechanism is non-ignorable. Furthermore, there exist competing risks from multiple types of events, such as AIDS-related death and death from other causes. Most joint models for the analysis of longitudinal-survival data developed in the literature have focused on constant coefficients and assume symmetric distributions for the endpoints, which is inadequate for investigating the time-varying relationship between HIV viral load and CD4 cell counts in practice. We develop a mixed-effects varying-coefficient model with a skewed distribution, coupled with a cause-specific varying-coefficient hazard model with random effects, to capture the varying relationship between the two endpoints in longitudinal data with competing-risks survival outcomes. A fully Bayesian inference procedure is established to estimate the parameters in the joint model. The proposed method is applied to a multicenter AIDS cohort study. Various scenario-based potential models that account for partial data features are compared. Some interesting findings are presented.
Mid-depth temperature maximum in an estuarine lake
NASA Astrophysics Data System (ADS)
Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.
2018-03-01
The mid-depth temperature maximum (TeM) was measured in the estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to the case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer that is sharp enough for the temperature increase with depth not to cause convective mixing or double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors that we identify as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediment heat exchange. In addition to these, we formulate a mechanism of temperature-maximum ‘pumping’, resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature-maximum magnitude. Based on the LAKE model results, we quantify the contributions of the above mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as: a small mixed-layer depth (roughly ≲ 2 m), transparent water, a daytime maximum of wind and cloudless weather. We exemplify the effect of mixed-layer depth on the TeM with a set of selected lakes.
Gries, Katharine S; Regier, Dean A; Ramsey, Scott D; Patrick, Donald L
2017-06-01
To develop a statistical model generating utility estimates for prostate cancer-specific health states, using preference weights derived from the perspectives of prostate cancer patients, men at risk for prostate cancer, and society. Utility estimates were calculated using standard gamble (SG) methodology. Study participants valued 18 prostate-specific health states defined by five attributes: sexual function, urinary function, bowel function, pain, and emotional well-being. The appropriateness of each candidate model (linear regression, mixed effects, or generalized estimating equation) for generating prostate cancer utility estimates was determined by paired t-tests comparing observed and predicted values. Corrected standard SG utility estimates accounting for loss aversion were calculated based on prospect theory. A total of 132 study participants assigned values to the health states (n = 40 men at risk for prostate cancer; n = 43 men with prostate cancer; n = 49 general population). In total, 792 valuations were elicited (six health states for each of the 132 participants). The most appropriate model for the classification system was a mixed-effects model; correlations between the mean observed and predicted utility estimates were greater than 0.80 for each perspective. Developing a health-state classification system with preference weights for three different perspectives demonstrates the relative importance of main effects between populations. The predicted values for men with prostate cancer support the hypothesis that patients experiencing the disease state assign higher utility estimates to health states, and that there is a difference between valuations made by patients and by the general population.
Analysis of longitudinal diffusion-weighted images in healthy and pathological aging: An ADNI study.
Kruggel, Frithjof; Masaki, Fumitaro; Solodkin, Ana
2017-02-15
The widely used framework of voxel-based morphometry for analyzing neuroimages is extended here to model longitudinal imaging data by exchanging the linear model for a linear mixed-effects model. The new approach is employed to analyze a large longitudinal sample of 756 diffusion-weighted images acquired in 177 subjects of the Alzheimer's Disease Neuroimaging Initiative (ADNI). While sample- and group-level results from both approaches are equivalent, the mixed-effects model yields information at the single-subject level. Interestingly, at the individual level the relevant parameter describes specific neurobiological differences associated with aging. In addition, our approach highlights white matter areas that reliably discriminate between patients with Alzheimer's disease and healthy controls with a predictive power of 0.99; these include the hippocampal alveus, the para-hippocampal white matter, the white matter of the posterior cingulate, and the optic tracts. In this context, the classifier notably places a sub-population of patients with mild cognitive impairment in the pathological domain. Our classifier offers promising features for an accessible biomarker that predicts the risk of conversion to Alzheimer's disease. Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf. Significance statement: This study assesses neuro-degenerative processes in the brain's white matter as revealed by diffusion-weighted imaging, in order to discriminate healthy from pathological aging in a large sample of elderly subjects.
The analysis of time-series examinations in a linear mixed effects model allowed the discrimination of population-based aging processes from individual determinants. We demonstrate that a simple classifier based on white matter imaging data is able to predict the conversion to Alzheimer's disease with a high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The debonding of a skin/stringer specimen subjected to tension was studied using three-dimensional volume element modeling and computational fracture mechanics. Mixed-mode strain energy release rates were calculated from finite element results using the virtual crack closure technique. The simulations revealed an increase in total energy release rate in the immediate vicinity of the free edges of the specimen. Correlation of the computed mixed-mode strain energy release rates along the delamination front contour with a two-dimensional mixed-mode interlaminar fracture criterion suggested that, in spite of peak total energy release rates at the free edge, the delamination would not advance at the edges first. The qualitative prediction of the shape of the delamination front was confirmed by X-ray photographs of a specimen taken during testing. The good correlation between prediction based on analysis and experiment demonstrated the efficiency of a mixed-mode failure analysis for the investigation of skin/stiffener separation due to delamination in the adherends. The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is also demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherends.
The numerical modelling of mixing phenomena of nanofluids in passive micromixers
NASA Astrophysics Data System (ADS)
Milotin, R.; Lelea, D.
2018-01-01
The paper deals with rapid mixing phenomena in micro-mixing devices with four tangential injections and a converging tube, considering nanoparticles with water as the base fluid. Several parameters, such as the Reynolds number (Re = 6 - 284) and the fluid temperature, are considered in order to optimize the process and obtain fundamental insight into the mixing phenomena. The set of partial differential equations is based on conservation of momentum and species. The commercial software package Ansys Fluent, based on a finite-volume method, is used to solve the differential equations. The results reveal that the mixing index and the mixing process are strongly dependent on both the Reynolds number and the heat flux. Moreover, beyond a certain Reynolds number, flow instabilities are generated that intensify the mixing process due to the tangential injection of the fluids.
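The abstract above does not define its mixing index; a common convention in the micromixer literature (an assumption here, not necessarily the authors' exact definition) measures mixing by the standard deviation of a normalized concentration field relative to its fully segregated maximum:

```python
import math

def mixing_index(concentrations):
    """Mixing index MI = 1 - sigma/sigma_max for a normalized scalar field.

    `concentrations` holds sampled mass fractions in [0, 1] across the
    mixer outlet. sigma_max is the standard deviation of a fully
    segregated field with the same mean, sqrt(cbar * (1 - cbar)).
    MI = 1 means perfect mixing, MI = 0 means complete segregation.
    """
    n = len(concentrations)
    cbar = sum(concentrations) / n
    var = sum((c - cbar) ** 2 for c in concentrations) / n
    sigma_max = math.sqrt(cbar * (1.0 - cbar))
    return 1.0 - math.sqrt(var) / sigma_max

# Perfectly mixed outlet: every sample at the mean concentration.
print(mixing_index([0.5, 0.5, 0.5, 0.5]))  # 1.0
# Fully segregated outlet: half pure fluid A, half pure fluid B.
print(mixing_index([1.0, 1.0, 0.0, 0.0]))  # 0.0
```

In a CFD post-processing step the samples would be taken over a cross-section of the converging tube, typically area-weighted.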
Using generalized additive (mixed) models to analyze single case designs.
Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J
2014-04-01
This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
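The dispersion and quasi-AIC computations mentioned above are standard for quasi-Poisson-type analyses; the sketch below shows the generic formulas (with made-up counts and an intercept-only fit, not the authors' R syntax): the overdispersion estimate is Pearson chi-square over residual degrees of freedom, and QAIC divides the -2 log-likelihood by that estimate before adding the parameter penalty.

```python
import math

def poisson_loglik(y, mu):
    """Poisson log-likelihood of counts y under fitted means mu."""
    return sum(yi * math.log(mi) - mi - math.lgamma(yi + 1)
               for yi, mi in zip(y, mu))

def dispersion(y, mu, n_params):
    """Overdispersion estimate c-hat = Pearson chi-square / residual df."""
    chi2 = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
    return chi2 / (len(y) - n_params)

def qaic(y, mu, n_params, c_hat):
    """Quasi-AIC: -2 log-likelihood scaled by c-hat, plus the usual penalty."""
    return -2.0 * poisson_loglik(y, mu) / c_hat + 2.0 * n_params

# Hypothetical session counts from a single case; intercept-only fit.
y = [9, 1, 8, 2, 7, 1]
mu = [sum(y) / len(y)] * len(y)
c_hat = dispersion(y, mu, n_params=1)
print(c_hat > 1.0)                                   # True: overdispersed
print(qaic(y, mu, 1, c_hat) < qaic(y, mu, 1, 1.0))   # True
```

With c-hat well above 1, the scaled criterion penalizes apparent fit that is really just extra-Poisson noise, which is the motivation for the quasibinomial/quasi-Poisson models the article discusses.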
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-treatment (baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size, or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations, combined with generalized linear mixed-effects models, that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with generalized linear mixed-effects models are that (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing-at-random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. The proposed designs and sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
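For context, the simple pre-post (ANCOVA) design that the paper uses as its baseline has a standard normal-approximation sample-size formula; the sketch below uses generic textbook quantities (treatment effect delta, outcome SD sigma, baseline-outcome correlation rho), not the paper's generalized mixed-model procedure:

```python
import math
from statistics import NormalDist

def ancova_n_per_group(delta, sigma, rho, alpha=0.05, power=0.80):
    """Per-group n for a two-arm trial analysed by baseline-adjusted ANCOVA.

    Adjusting for a baseline correlated rho with the outcome deflates
    the residual variance by (1 - rho**2):
        n = 2 * (z_{1-alpha/2} + z_{1-power})**2 * sigma**2 * (1 - rho**2) / delta**2
    """
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    n = 2 * (za + zb) ** 2 * sigma ** 2 * (1 - rho ** 2) / delta ** 2
    return math.ceil(n)

# A moderately correlated baseline (rho = 0.5) cuts the unadjusted
# requirement by 25%:
print(ancova_n_per_group(delta=0.5, sigma=1.0, rho=0.5))  # 48
print(ancova_n_per_group(delta=0.5, sigma=1.0, rho=0.0))  # 63
```

The paper's point is that adding repeated pre- and post-randomization measures per subject can push the required n below even the adjusted figure, while the likelihood machinery absorbs missing visits under the missing-at-random assumption.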
Modelling rainfall amounts using mixed-gamma model for Kuantan district
NASA Astrophysics Data System (ADS)
Zakaria, Roslinazairimah; Moslim, Nor Hafizah
2017-05-01
Efficient design of flood mitigation and construction of crop growth models depend on a good understanding of the rainfall process and its characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. Formulae for the mean and variance are derived for the sums of two and three independent mixed-gamma variables, respectively. First, the gamma distribution is used to model the nonzero rainfall amounts, and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae for the mean and variance of the sums of two and three independent mixed-gamma variables are tested using monthly rainfall amounts from rainfall stations within the Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the descriptive statistics of the observed sums of rainfall amounts are not significantly different, at the 5% significance level, from the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
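The moment formulae in question follow directly from conditioning on whether a period is wet: if X is zero with probability 1 - p and Gamma(shape, scale) with probability p, then E[X] = p * shape * scale and E[X^2] = p * (shape * scale^2 + (shape * scale)^2), and for independent variables the means and variances of a sum simply add. A sketch with made-up parameter values (not the Kuantan estimates), checked against simulation:

```python
import random

def mixed_gamma_moments(p, shape, scale):
    """Mean and variance of X where X = 0 with probability 1 - p and
    X ~ Gamma(shape, scale) with probability p (the wet probability)."""
    mean = p * shape * scale
    second = p * (shape * scale ** 2 + (shape * scale) ** 2)  # E[X^2]
    return mean, second - mean ** 2

def sum_moments(components):
    """Moments of a sum of independent mixed-gamma variables:
    under independence, means and variances add."""
    means, variances = zip(*(mixed_gamma_moments(*c) for c in components))
    return sum(means), sum(variances)

# Two hypothetical stations: (wet probability, shape, scale in mm).
stations = [(0.7, 2.0, 50.0), (0.5, 1.5, 80.0)]
mean, var = sum_moments(stations)
print(mean, var)  # roughly 130 mm and 14000 mm^2

# Monte Carlo check of the derived formulae.
rng = random.Random(0)
draws = [sum(rng.gammavariate(k, th) if rng.random() < p else 0.0
             for p, k, th in stations) for _ in range(50_000)]
print(abs(sum(draws) / len(draws) - mean) < 5.0)  # True
```

The same two functions extend unchanged to three or more stations, which is the generalization the abstract points to.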
Toward topology-based characterization of small-scale mixing in compressible turbulence
NASA Astrophysics Data System (ADS)
Suman, Sawan; Girimaji, Sharath
2011-11-01
Turbulent mixing rate at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field which in turn is dependent upon the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing efficiency metric that is dependent upon the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach by employing mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed reasonably captures the numerical values. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.
The analysis and modelling of dilatational terms in compressible turbulence
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.
1991-01-01
It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
Mixed formulation for seismic analysis of composite steel-concrete frame structures
NASA Astrophysics Data System (ADS)
Ayoub, Ashraf Salah Eldin
This study presents a new finite element model for the nonlinear analysis of structures made up of steel and concrete under monotonic and cyclic loads. The new formulation is based on a two-field mixed formulation, in which both forces and deformations are simultaneously approximated within the element through independent interpolation functions. The main advantages of the model are its accuracy in global and local response with very few elements, while maintaining rapid numerical convergence and robustness even under severe cyclic loading. Overall, four elements were developed based on the new formulation: an element that describes the behavior of anchored reinforcing bars, an element that describes the behavior of composite steel-concrete beams with deformable shear connectors, an element that describes the behavior of reinforced concrete beam-columns with bond-slip, and an element that describes the behavior of pretensioned or posttensioned, bonded or unbonded prestressed concrete structures. The models use fiber discretization of beam sections to describe nonlinear material response. The transfer of forces between steel and concrete is described with bond elements, which are modeled with distributed spring elements. The nonlinear behavior of the composite element derives entirely from the constitutive laws of the steel, concrete and bond elements. Two additional elements are used for the prestressed concrete models: a friction element that models the effect of friction between the tendon and the duct during the posttensioning operation, and an anchorage element that describes the behavior of the prestressing tendon anchorage in posttensioned structures. Two algorithms for the numerical implementation of the proposed model are presented: an algorithm that enforces stress continuity at element boundaries, and an algorithm in which stress continuity is relaxed locally inside the element. The stability of both algorithms is discussed.
Comparisons with standard displacement-based models and earlier flexibility-based models are presented through numerical studies, which demonstrate the superiority of the mixed model over both. Correlation studies of the proposed model with experimental results of structural specimens show the accuracy of the model and its numerical robustness even under severe cyclic loading conditions.
2014-09-01
very short time period and in this research, we model and study the effects of this rainfall on Taiwan's coastal oceans as a result of river discharge. We do this through the use of a river discharge… [table-of-contents fragments: "Effects of Footprint Shape on the Bulk Mixing Model"; "Effects of the Horizontal Extent of the Bulk Mixing Model"]
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
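The truncated power basis makes the continuity side conditions in such splines concrete: a degree-d basis 1, t, ..., t^d augmented with (t - k)_+^d for each knot k yields piecewise polynomials that are automatically continuous with d - 1 continuous derivatives at every knot. The sketch below is a generic fixed-knot construction for the fixed-effects design, not the authors' SAS/S-plus reparameterization:

```python
def truncated_power_basis(t, knots, degree):
    """Design row for a fixed-knot regression spline of the given degree.

    Columns: 1, t, ..., t**degree, then (t - k)_+**degree per knot k.
    The truncated powers join the polynomial pieces with degree - 1
    continuous derivatives, so no explicit side conditions are needed.
    """
    row = [t ** j for j in range(degree + 1)]
    row += [max(t - k, 0.0) ** degree for k in knots]
    return row

def spline_value(coefs, t, knots, degree):
    """Evaluate the spline with the given basis coefficients at t."""
    return sum(c * b
               for c, b in zip(coefs, truncated_power_basis(t, knots, degree)))

# Quadratic spline with one knot at t = 2: basis [1, t, t^2, (t-2)_+^2].
coefs = [1.0, -0.5, 0.25, 0.8]
h = 1e-6
left = spline_value(coefs, 2.0 - h, [2.0], 2)
right = spline_value(coefs, 2.0 + h, [2.0], 2)
print(abs(left - right) < 1e-5)  # True: value is continuous across the knot
# The first derivative is continuous too; only the second derivative
# jumps (by 2 * 0.8) at the knot.
```

In the mixed-model setting the same rows would populate the fixed-effects matrix X (and, for subject-specific curves, the random-effects matrix Z), with varying polynomial order handled by dropping or adding columns per segment as the paper describes.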
Modeling 3D Facial Shape from DNA
Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.
2014-01-01
Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127
Using Mixed-Effects Structural Equation Models to Study Student Academic Development.
ERIC Educational Resources Information Center
Pike, Gary R.
1992-01-01
A study at the University of Tennessee Knoxville used mixed-effects structural equation models incorporating latent variables as an alternative to conventional methods of analyzing college students' (n=722) first-year-to-senior academic gains. Results indicate, contrary to previous analysis, that coursework and student characteristics interact to…
NASA Astrophysics Data System (ADS)
Zhang, Donghao; Matsuura, Haruki; Asada, Akiko
2017-04-01
Some automobile factories have segmented mixed-model production lines into shorter sub-lines according to part group, such as engine, trim, and powertrain. The effects of splitting a line into sub-lines have been reported from the standpoints of worker motivation, productivity improvement, and autonomy based on risk spreading. However, the possibility of shortening the line length by altering the product sequence using sub-lines has not been addressed. The purpose of the present paper is to determine the conditions under which sub-lines reduce the line length and the degree to which the line length may be shortened. The line lengths of a non-split line and a line split into sub-lines are compared using three methods for determining the working area: the standard closed boundary, the optimized open boundary, and real-life constant-length stations. The results are discussed by analyzing the upper and lower bounds of the line length. Based on these results, a procedure for deciding whether or not to split a production line is proposed.
Budget model can aid group practice planning.
Bender, A D
1991-12-01
A medical practice can enhance its planning by developing a budgetary model to test effects of planning assumptions on its profitability and cash requirements. A model focusing on patient visits, payment mix, patient mix, and fee and payment schedules can help assess effects of proposed decisions. A planning model is not a substitute for planning but should complement a plan that includes mission, goals, values, strategic issues, and different outcomes.
ERIC Educational Resources Information Center
Rast, Philippe; Hofer, Scott M.; Sparks, Catharine
2012-01-01
A mixed-effects location scale model was used to model and explain individual differences in within-person variability of negative and positive affect across 7 days (N=178) within a measurement burst design. The data come from undergraduate university students and are pooled from a study that was repeated in two consecutive years. Individual…
Heterogeneous reactions in aircraft gas turbine engines
NASA Astrophysics Data System (ADS)
Brown, R. C.; Miake-Lye, R. C.; Lukachko, S. P.; Waitz, I. A.
2002-05-01
One-dimensional flow models and unity-probability heterogeneous rate parameters are used to estimate the maximum effect of heterogeneous reactions on trace species evolution in aircraft gas turbines. The analysis includes reactions on soot particulates and on turbine/nozzle material surfaces. Results for a representative advanced subsonic engine indicate that the net change in reactant mixing ratios due to heterogeneous reactions is <10⁻⁶ for O2, CO2, and H2O, and <10⁻¹⁰ for minor combustion products such as SO2 and NO2. The change in the mixing ratios relative to their initial values is <0.01%. Since these estimates are based on heterogeneous reaction probabilities of unity, the actual changes will be even lower. Thus, heterogeneous chemistry within the engine cannot explain the high conversion of SO2 to SO3 that some wake models require to explain the observed levels of volatile aerosols. Furthermore, turbine heterogeneous processes will not affect exhaust NOx or NOy levels.
Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan
2017-01-01
Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).
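The "auxiliary mixed Poisson" device used above rests on a standard identity: an exponential (or piecewise-exponential) survival log-likelihood d*log(lam) - lam*T equals, up to a term free of the hazard lam, the log-likelihood of a Poisson count d with mean lam*T, so survival models can be fitted with Poisson mixed-model software. A small check of the identity (a generic illustration; the paper's actual models add shared random effects and trial-level structure):

```python
import math

def exp_survival_loglik(d, T, lam):
    """Log-likelihood of an exponential survival time: event indicator d,
    follow-up time T, constant hazard lam."""
    return d * math.log(lam) - lam * T

def poisson_loglik(d, mu):
    """Poisson log-likelihood of the event count d with mean mu."""
    return d * math.log(mu) - mu - math.lgamma(d + 1)

# The two differ only by d*log(T) - log(d!), which does not involve lam,
# so maximizing one over lam is equivalent to maximizing the other.
d, T = 1, 2.5
gap1 = exp_survival_loglik(d, T, 0.5) - poisson_loglik(d, 0.5 * T)
gap2 = exp_survival_loglik(d, T, 2.0) - poisson_loglik(d, 2.0 * T)
print(abs(gap1 - gap2) < 1e-12)  # True: the gap is hazard-independent
```

Splitting follow-up into intervals with interval-specific hazards turns this into the piecewise-exponential case, and adding patient- and trial-level random effects to the Poisson linear predictor yields the mixed Poisson surrogacy models compared in the paper.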
A method of minimum volume simplex analysis constrained unmixing for hyperspectral image
NASA Astrophysics Data System (ADS)
Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao
2017-07-01
The signal recorded from a given pixel by a low-resolution hyperspectral remote sensor, leaving aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier research area in remote sensing. Geometry-based unmixing algorithms have become popular because hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most algorithms rely on the pure-pixel assumption, and since the non-linear mixing model is complex, it is hard to obtain optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions, so the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm on simulated and real hyperspectral data; the results indicate that the proposed method obtains the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
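The abundance constraints described in this abstract (nonnegativity plus sum-to-one on the fractions) can be illustrated with the standard fully constrained least-squares (FCLS) baseline. This is a generic sketch, not the authors' MVSAC algorithm, and the endmember matrix and pixel spectrum below are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E, y, delta=1e3):
    """Fully constrained least-squares abundances for one pixel.
    E: (bands, endmembers) endmember matrix; y: (bands,) pixel spectrum.
    Nonnegativity comes from NNLS; sum-to-one is enforced by appending
    a heavily weighted constraint row (weight delta)."""
    _, p = E.shape
    E_aug = np.vstack([E, delta * np.ones((1, p))])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

# toy pixel: a 60/40 linear mix of two hypothetical endmember spectra
E = np.array([[0.2, 0.8],
              [0.5, 0.3],
              [0.9, 0.1]])
y = E @ np.array([0.6, 0.4])
a_est = fcls_abundances(E, y)   # recovers abundances close to [0.6, 0.4]
```

With a noise-free mixture the estimated fractions are nonnegative, sum to one, and match the true proportions.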
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by asymmetric distributions to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
NASA Astrophysics Data System (ADS)
Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel
2013-06-01
To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-user market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decisions in the face of uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method, which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.
NASA Astrophysics Data System (ADS)
Parsakhoo, Zahra; Shao, Yaping
2017-04-01
Near-surface turbulent mixing has a considerable effect on surface fluxes, cloud formation and convection in the atmospheric boundary layer (ABL). Its quantification is, however, a modeling and computational challenge, since the small eddies are not fully resolved directly in Eulerian models. We have developed a Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land-surface heterogeneity in the atmospheric boundary layer, based on the Ito Stochastic Differential Equation (SDE) for air parcels (particles). Due to the complexity of mixing in the ABL, we find that a linear Ito SDE cannot represent convection properly. Three strategies have been tested to solve the problem: 1) to make the deterministic term in the Ito equation non-linear; 2) to make the random term in the Ito equation fractional; and 3) to modify the Ito equation by including Levy flights. We focus on the third strategy and interpret mixing as an interaction between at least two stochastic processes with different Lagrangian time scales. Work on the model is in progress to include collisions among particles with different characteristics and to apply the 3D model to real cases. One application of the model is emphasized: land-surface patterns are generated and then coupled with a Large Eddy Simulation (LES).
Bi-phasic trends in mercury concentrations in blood of Wisconsin common loons during 1992–2010
Meyer, Michael W.; Rasmussen, Paul W.; Watras, Carl J.; Fevold, Brick M.; Kenow, Kevin P.
2011-01-01
Wisconsin Department of Natural Resources (WDNR) assessed the ecological risk of mercury (Hg) in aquatic systems by monitoring common loon (Gavia immer) population dynamics and blood Hg concentrations. We report temporal trends in blood Hg concentrations based on 334 samples collected from adults recaptured in subsequent years (resampled 2-9 times) and on 421 blood samples from chicks collected at lakes resampled 2-8 times during 1992-2010. Temporal trends were identified with generalized additive mixed-effects models (GAMMs) and mixed-effects models to account for the potential lack of independence among observations from the same loon or the same lake. Trend analyses indicated that Hg concentrations in the blood of Wisconsin loons declined over the period 1992-2000 and increased during 2002-2010, but not to the levels observed in the early 1990s. The best-fitting linear mixed-effects model included separate trends for the two time periods. The estimated trend in Hg concentration among the adult loon population during 1992-2000 was -2.6% per year and the estimated trend during 2002-2010 was +1.8% per year; chick blood Hg concentrations decreased by 6.5% per year during 1992-2000 but increased by 1.8% per year during 2002-2010. This bi-phasic pattern is similar to trends observed for concentrations of methylmercury (MeHg) and SO4 in lake water of a well-studied seepage lake (Little Rock Lake, Vilas County) within our study area. A cause-effect relationship between these independent trends is hypothesized.
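The two-phase trend estimates quoted in this abstract (percent change per year) correspond to log-linear slopes. A minimal sketch of that arithmetic on noise-free synthetic data follows; it omits the random-effects structure the authors used to handle repeated sampling of the same loons and lakes, and the simulated series are illustrative, not the WDNR data.

```python
import numpy as np

def annual_trend(years, conc):
    """Log-linear trend: the slope of ln(concentration) vs. year,
    expressed as percent change per year."""
    slope, _ = np.polyfit(years, np.log(conc), 1)
    return 100 * (np.exp(slope) - 1)

# synthetic biphasic series mimicking the reported adult loon trends
y1 = np.arange(1992, 2001)
y2 = np.arange(2002, 2011)
c1 = 1.0 * 0.974 ** (y1 - 1992)   # 2.6% annual decline
c2 = 0.8 * 1.018 ** (y2 - 2002)   # 1.8% annual increase

print(round(annual_trend(y1, c1), 1))   # → -2.6
print(round(annual_trend(y2, c2), 1))   # → 1.8
```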
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
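The preference for a negative binomial over a Poisson model in the abstract above reflects overdispersion in daily seizure counts. A hedged sketch of why a patient-level random effect produces exactly that: a gamma-distributed frailty mixed with a Poisson gives marginal variance mu + alpha*mu², above the Poisson value mu. The rates below are arbitrary, and the autocorrelation and covariate structure of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_seizure_counts(n_patients, n_days, mu=0.5, alpha=1.0):
    """Gamma-Poisson (negative binomial) daily counts: each patient
    gets a gamma frailty with mean 1 and variance alpha, so the
    marginal variance is mu + alpha * mu**2 instead of the Poisson mu."""
    frailty = rng.gamma(1.0 / alpha, alpha, size=(n_patients, 1))
    return rng.poisson(mu * frailty, size=(n_patients, n_days))

counts = simulate_seizure_counts(20000, 30)
m, v = counts.mean(), counts.var()   # v exceeds m: overdispersion
```

Because the frailty is shared across a patient's days, the simulated counts are also correlated within patient, which is the mixed-effects aspect of the model.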
An S4 model inspired from self-complementary neutrino mixing
NASA Astrophysics Data System (ADS)
Zhang, Xinyi
2018-03-01
We build an S4 model for neutrino masses and mixings based on the self-complementary (SC) neutrino mixing pattern. The SC mixing is constructed from the self-complementarity relation plus δ_CP = -π/2. We elaborately construct the model at a percent level of accuracy to reproduce the structure given by the SC mixing. After performing a numerical study on the model's parameter space, we find that in the case of normal ordering, the model can give predictions on the observables that are compatible with their 3σ ranges, and give predictions for not-yet-observed quantities such as the lightest neutrino mass m_1 ∈ [0.003, 0.010] eV and the Dirac CP-violating phase δ_CP ∈ [256.72°, 283.33°].
NASA Astrophysics Data System (ADS)
Miyake, Yasufumi; Boned, Christian; Baylaucq, Antoine; Bessières, David; Zéberg-Mikkelsen, Claus K.; Galliéro, Guillaume; Ushiki, Hideharu
2007-07-01
In order to study the influence of stereoisomeric effects on the dynamic viscosity, an extensive experimental study of the viscosity of the binary system composed of the two stereoisomeric molecular forms of decalin, cis and trans, has been carried out for five different mixtures at three temperatures (303.15, 323.15 and 343.15) K and six isobars up to 100 MPa with a falling-body viscometer (a total of 90 points). The experimental relative uncertainty is estimated to be 2%. The variations of dynamic viscosity versus composition are discussed with respect to their behavior due to stereoisomerism. Four different models with a physical and theoretical background are studied in order to investigate how they take the stereoisomeric effect into account through their required model parameters. The evaluated models are based on the hard-sphere scheme, the concepts of the free-volume and the friction theory, and a model derived from molecular dynamics. Overall, a satisfactory representation of the viscosity of this binary system is found for the different models within the considered (T, p) range, taking into account their simplicity. All the models are able to distinguish between the two stereoisomeric decalin compounds. Further, based on the analysis of the model parameters performed on the pure compounds, it has been found that the use of simple mixing rules without introducing any binary interaction parameters is sufficient to predict the viscosity of cis + trans-decalin mixtures with the same accuracy relative to the experimental values as obtained for the pure compounds. In addition to these models, a semi-empirical self-referencing model and the simple mixing laws of Grunberg-Nissan and Katti-Chaudhri are also applied in the representation of the viscosity behavior of these systems.
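The Grunberg-Nissan mixing law mentioned in the final sentence is simple enough to state in a few lines: ln η_mix = x1 ln η1 + x2 ln η2 + x1 x2 G12, where G12 is the binary interaction parameter (zero in the no-interaction case the authors found sufficient). The viscosity values in the example are hypothetical placeholders, not the measured decalin data.

```python
import math

def grunberg_nissan(x1, eta1, eta2, g12=0.0):
    """Grunberg-Nissan rule:
    ln(eta_mix) = x1*ln(eta1) + x2*ln(eta2) + x1*x2*g12."""
    x2 = 1.0 - x1
    return math.exp(x1 * math.log(eta1) + x2 * math.log(eta2) + x1 * x2 * g12)

# hypothetical pure-component viscosities (mPa s) at one (T, p) state
eta_cis, eta_trans = 3.0, 2.0
mix = [grunberg_nissan(x, eta_cis, eta_trans) for x in (0.0, 0.5, 1.0)]
# with g12 = 0 the equimolar value is the geometric mean, sqrt(6) ≈ 2.449
```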
The operating room case-mix problem under uncertainty and nurses capacity constraints.
Yahia, Zakaria; Eltawil, Amr B; Harraz, Nermine A
2016-12-01
Surgery is one of the key functions in hospitals; it generates significant revenue and admissions. In this paper we address the decision of choosing a case-mix for a surgery department. The objective of this study is to generate an optimal case-mix plan of surgery patients under uncertain surgery operations, including uncertainty in surgery durations, length of stay, surgery demand and the availability of nurses. In order to obtain an optimal case-mix plan, a stochastic optimization model is proposed and the sample average approximation method is applied. The proposed model is used to determine the number of surgery cases to be served weekly, the amount of operating room time dedicated to each specialty and the number of ward beds dedicated to each specialty. The optimal case-mix selection criterion is based upon a weighted score taking into account both the waiting list and the historical demand of each patient category. The score aims to maximize the service level of the operating rooms by increasing the total number of surgery cases that can be served. A computational experiment is presented to demonstrate the performance of the proposed method. The results show that the stochastic model solution outperforms the expected value problem solution. Additional analysis is conducted to study the effect of varying the number of ORs and nurse capacity on the overall ORs' performance.
Comparing colon cancer outcomes: The impact of low hospital case volume and case-mix adjustment.
Fischer, C; Lingsma, H F; van Leersum, N; Tollenaar, R A E M; Wouters, M W; Steyerberg, E W
2015-08-01
When comparing performance across hospitals it is essential to consider the noise caused by low hospital case volume and to perform adequate case-mix adjustment. We aimed to quantify the role of noise and case-mix adjustment on standardized postoperative mortality and anastomotic leakage (AL) rates. We studied 13,120 patients who underwent colon cancer resection in 85 Dutch hospitals. We addressed differences between hospitals in postoperative mortality and AL, using fixed-effects (ignoring noise) and random-effects (incorporating noise) logistic regression models with general and additional, disease-specific, case-mix adjustment. Adding disease-specific variables improved the performance of the case-mix adjustment models for postoperative mortality (c-statistic increased from 0.77 to 0.81). The overall variation in standardized mortality ratios was similar, but some individual hospitals changed considerably. For the standardized AL rates the performance of the adjustment models was poor (c-statistic 0.59 and 0.60) and the overall variation was small. Most of the observed variation between hospitals was actually noise. Noise had a larger effect on hospital performance than extended case-mix adjustment, although some individual hospital outcome rates were affected by more detailed case-mix adjustment. To compare outcomes between hospitals it is crucial to account for noise due to low hospital case volume with a random effects model. Copyright © 2015 Elsevier Ltd. All rights reserved.
The effect of wind mixing on the vertical distribution of buoyant plastic debris
NASA Astrophysics Data System (ADS)
Kukulka, T.; Proskurowski, G.; Morét-Ferguson, S.; Meyer, D. W.; Law, K. L.
2012-04-01
Micro-plastic marine debris is widely distributed in vast regions of the subtropical gyres and has emerged as a major open ocean pollutant. The fate and transport of plastic marine debris is governed by poorly understood geophysical processes, such as ocean mixing within the surface boundary layer. Based on profile observations and a one-dimensional column model, we demonstrate that plastic debris is vertically distributed within the upper water column due to wind-driven mixing. These results suggest that total oceanic plastics concentrations are significantly underestimated by traditional surface measurements, requiring a reinterpretation of existing plastic marine debris data sets. A geophysical approach must be taken in order to properly quantify and manage this form of marine pollution.
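The undercounting argument in this abstract can be made concrete with the kind of one-dimensional balance such column models rest on: buoyant rise at speed w_b against turbulent diffusivity A0 yields an exponential concentration profile with e-folding depth A0/w_b, so a shallow surface tow misses a calculable fraction of the inventory. The profile form is the standard result for this balance; the parameter values below are illustrative, not the authors' numbers.

```python
import math

def submerged_fraction(wb, a0, tow_depth=0.25):
    """Fraction of the water-column plastic inventory missed by a
    surface net tow, for an exponential profile C(z) = C0*exp(-z/Lz)
    with e-folding depth Lz = a0/wb (rise speed wb, diffusivity a0)."""
    lz = a0 / wb
    sampled = 1.0 - math.exp(-tow_depth / lz)   # fraction in the top tow_depth
    return 1.0 - sampled

# illustrative: Lz = 0.25 m and a 0.25 m tow miss e^-1 ≈ 37% of the total
missed = submerged_fraction(wb=0.01, a0=0.0025)
```

Stronger winds (larger a0) deepen the profile and increase the missed fraction, which is the mechanism behind the reinterpretation of surface-only datasets.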
Effects of imperfect mixing on low-density polyethylene reactor dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, C.M.; Dihora, J.O.; Ray, W.H.
1998-07-01
Earlier work considered the effect of feed conditions and controller configuration on the runaway behavior of LDPE autoclave reactors assuming a perfectly mixed reactor. This study provides additional insight on the dynamics of such reactors by using an imperfectly mixed reactor model and bifurcation analysis to show the changes in the stability region when there is imperfect macroscale mixing. The presence of imperfect mixing substantially increases the range of stable operation of the reactor and makes the process much easier to control than for a perfectly mixed reactor. The results of model analysis and simulations are used to identify some of the conditions that lead to unstable reactor behavior and to suggest ways to avoid reactor runaway or reactor extinction during grade transitions and other process operation disturbances.
MARTINEZ, Josue G.; BOHN, Kirsten M.; CARROLL, Raymond J.
2013-01-01
We describe a new approach to analyze chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random effect spectrograms at the bat level to account for correlation among chirps from the same bat, and to assess relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustical signals, to relate aspects of the signals to various predictors, while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and can be done using spectrograms defined on a relative time scale for signals of variable length in settings where the idea of defining correspondence across signals based on relative position is sensible. PMID:23997376
NASA Astrophysics Data System (ADS)
Amaral, J. T.; Becker, V. M.
2018-05-01
We investigate ρ vector meson production in ep collisions at HERA with leading neutrons in the dipole formalism. The interaction of the dipole and the pion is described in a mixed-space approach, in which the dipole-pion scattering amplitude is given by the Marquet-Peschanski-Soyez saturation model, which is based on the traveling wave solutions of the nonlinear Balitsky-Kovchegov equation. We estimate the magnitude of the absorption effects and compare our results with a previous analysis of the same process in full coordinate space. In contrast with this approach, the present study leads to absorption K factors in the range of those predicted by previous theoretical studies on semi-inclusive processes.
NASA Astrophysics Data System (ADS)
Marazuela, M. A.; Vázquez-Suñé, E.; Custodio, E.; Palma, T.; García-Gil, A.; Ayora, C.
2018-06-01
Salt flat brines are a major source of minerals and especially lithium. Moreover, valuable wetlands with delicate ecologies are also commonly present at the margins of salt flats. Therefore, the efficient and sustainable exploitation of the brines they contain requires detailed knowledge about the hydrogeology of the system. A critical issue is the freshwater-brine mixing zone, which develops as a result of the mass balance between the recharged freshwater and the evaporating brine. The complex processes occurring in salt flats require a three-dimensional (3D) approach to assess the mixing zone geometry. In this study, a 3D map of the mixing zone in a salt flat is presented, using the Salar de Atacama as an example. This mapping procedure is proposed as the basis of computationally efficient three-dimensional numerical models, provided that the hydraulic heads of freshwater and mixed waters are corrected based on their density variations to convert them into brine heads. After this correction, the locations of lagoons and wetlands that are characteristic of the marginal zones of the salt flats coincide with the regional minimum water (brine) heads. The different morphologies of the mixing zone resulting from this 3D mapping have been interpreted using a two-dimensional (2D) flow and transport numerical model of an idealized cross-section of the mixing zone. The result of the model shows a slope of the mixing zone that is similar to that obtained by 3D mapping and lower than in previous models. To explain this geometry, the 2D model was used to evaluate the effects of heterogeneity in the mixing zone geometry. The higher the permeability of the upper aquifer is, the lower the slope and the shallower the mixing zone become. This occurs because most of the freshwater lateral recharge flows through the upper aquifer due to its much higher transmissivity, thus reducing the freshwater head. 
The presence of a few meters of highly permeable materials in the upper part of these hydrogeological systems, such as alluvial fans or karstified evaporites that are frequently associated with the salt flats, is enough to greatly modify the geometry of the saline interface.
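The head correction described in this abstract can be sketched with the standard point-water-to-equivalent-head conversion used in variable-density hydrogeology. The formula below is the usual textbook form; the density values are illustrative, and this is not necessarily the exact correction the authors applied.

```python
def brine_head(h_point, rho_point, z, rho_brine=1230.0):
    """Convert a point-water head h_point (m) measured at elevation z (m)
    in water of density rho_point (kg/m3) into an equivalent brine head,
    so freshwater, mixed-water and brine levels become comparable."""
    return (rho_point / rho_brine) * h_point \
        - ((rho_point - rho_brine) / rho_brine) * z

# a freshwater level of 10 m (measured at datum z = 0) maps to a lower
# equivalent brine head because the brine is ~23% denser (hypothetical values)
h = brine_head(10.0, 1000.0, 0.0)
```

A brine measurement is unchanged by the conversion, which is a quick sanity check on the formula.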
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and, using simulations, we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and to large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
Effect of shroud geometry on the effectiveness of a short mixing stack gas eductor model
NASA Astrophysics Data System (ADS)
Kavalis, A. E.
1983-06-01
An existing apparatus for testing models of gas eductor systems using high temperature primary flow was modified to provide improved control and performance over a wide range of gas temperatures and flow rates. Secondary flow pumping, temperature and pressure data were recorded for two gas eductor system models. The first, previously tested under hot flow conditions, consists of a primary plate with four tilted-angled nozzles and a slotted, shrouded mixing stack with two diffuser rings (overall L/D = 1.5). A portable pyrometer with a surface probe was used for the second model in order to identify any hot spots at the external surface of the mixing stack, shroud and diffuser rings. The second model is shown to have almost the same mixing and pumping performance as the first, but to exhibit much lower shroud and diffuser surface temperatures.
Jayachandrababu, Krishna C; Verploegh, Ross J; Leisen, Johannes; Nieuwendaal, Ryan C; Sholl, David S; Nair, Sankar
2016-06-15
Mixed-linker zeolitic imidazolate frameworks (ZIFs) are nanoporous materials that exhibit continuous and controllable tunability of properties like effective pore size, hydrophobicity, and organophilicity. The structure of mixed-linker ZIFs has been studied on macroscopic scales using gravimetric and spectroscopic techniques. However, it has so far not been possible to obtain information on unit-cell-level linker distribution, an understanding of which is key to predicting and controlling their adsorption and diffusion properties. We demonstrate the use of (1)H combined rotation and multiple pulse spectroscopy (CRAMPS) NMR spin exchange measurements in combination with computational modeling to elucidate potential structures of mixed-linker ZIFs, particularly the ZIF-8-90 series. All of the compositions studied have structures that have linkers mixed at a unit-cell level as opposed to separated or highly clustered phases within the same crystal. Direct experimental observations of linker mixing were accomplished by measuring the proton spin exchange behavior between functional groups on the linkers. The data were then fitted to a kinetic spin exchange model using proton positions from candidate mixed-linker ZIF structures that were generated computationally using the short-range order (SRO) parameter as a measure of the ordering, clustering, or randomization of the linkers. The present method offers the advantages of sensitivity without requiring isotope enrichment, a straightforward NMR pulse sequence, and an analysis framework that allows one to relate spin diffusion behavior to proposed atomic positions. We find that structures close to equimolar composition of the two linkers show a greater tendency for linker clustering than what would be predicted based on random models. Using computational modeling we have also shown how the window-type distribution in experimentally synthesized mixed-linker ZIF-8-90 materials varies as a function of their composition.
The structural information thus obtained can be further used for predicting, screening, or understanding the tunable adsorption and diffusion behavior of mixed-linker ZIFs, for which the knowledge of linker distributions in the framework is expected to be important.
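The short-range order (SRO) parameter used above to classify candidate structures can be illustrated on a toy one-dimensional ring of two linker types. This Warren-Cowley-style definition (0 for random mixing, negative for alternation, positive for clustering) is a generic sketch, not the actual ZIF lattice geometry used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sro_parameter(linkers):
    """Warren-Cowley-type short-range order on a 1D ring of 0/1 linkers:
    alpha = 1 - P(unlike neighbor) / (2*x0*x1).
    alpha = 0: random mixing; alpha < 0: alternation; alpha > 0: clustering."""
    x1 = linkers.mean()
    x0 = 1.0 - x1
    unlike = (linkers != np.roll(linkers, 1)).mean()
    return 1.0 - unlike / (2.0 * x0 * x1)

random_ring = rng.integers(0, 2, 100_000)    # alpha ≈ 0
alternating_ring = np.tile([0, 1], 50_000)   # alpha = -1 (perfect alternation)
```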
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; Nawaz, Kashif; Baxter, Van D.
Heat pump water heater (HPWH) systems introduce new challenges for design and modeling tools, because they require a vapor compression system balanced with a water storage tank. In addition, a wrapped-tank condenser coil has strong coupling with a stratified water tank, which makes HPWH simulation a transient process. To tackle these challenges and deliver an effective, hardware-based HPWH equipment design tool, a quasi-steady-state HPWH model was developed based on the DOE/ORNL Heat Pump Design Model (HPDM). Two new component models were added via this study. One is a one-dimensional stratified water tank model, an improvement to the open-source EnergyPlus water tank model, which introduces a calibration factor to account for the bulk mixing effect due to water draws, circulations, etc. The other is a wrapped-tank condenser coil model, using a segment-to-segment modeling approach. The HPWH system model was validated against available experimental data, and was then used for parametric simulations to determine the effects of various design factors.
Shen, Bo; Nawaz, Kashif; Baxter, Van D.; ...
2017-10-31
Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E
2017-11-10
A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data, which uses a wearable accelerometer device to measure daily movement and energy expenditure information. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
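The association device in the model above, correlated subject-level random effects shared across outcome types, can be sketched by simulation: correlated normal random effects feed a continuous and a count response, inducing a cross-outcome association even though the error processes are independent. All parameter values here are arbitrary, and the quasi-likelihood estimation step is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_joint(n_subj=2000, n_obs=5, corr=0.8):
    """Continuous and count outcomes linked only through correlated
    subject-level random effects (the association mechanism)."""
    cov = [[1.0, corr], [corr, 1.0]]
    b = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)
    u_cont = np.repeat(b[:, 0], n_obs)
    u_count = np.repeat(b[:, 1], n_obs)
    y_cont = 2.0 + u_cont + rng.normal(0.0, 0.5, u_cont.size)
    y_count = rng.poisson(np.exp(0.5 + 0.3 * u_count))
    return y_cont, y_count

y_cont, y_count = simulate_joint()
r = np.corrcoef(y_cont, y_count)[0, 1]   # positive cross-outcome correlation
```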
Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey
2014-09-01
Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road network information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R2 = 0.653), with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.
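The leave-one-station-out validation used to compare these models can be sketched generically; the following uses ordinary least squares as a stand-in for the LUR/BME predictors, and all names are invented:

```python
import numpy as np

def leave_one_station_out_cv(X, y, stations):
    """Leave-one-station-out cross-validation of a linear model: refit with
    each station's observations held out, predict them, then score overall.
    Illustrative sketch only; OLS stands in for the actual models."""
    y = np.asarray(y, dtype=float)
    stations = np.asarray(stations)
    Xd = np.column_stack([np.ones(len(y)), X])     # add an intercept column
    preds = np.empty(len(y))
    for s in np.unique(stations):
        hold = stations == s
        beta, *_ = np.linalg.lstsq(Xd[~hold], y[~hold], rcond=None)
        preds[hold] = Xd[hold] @ beta
    rmse = np.sqrt(np.mean((y - preds) ** 2))
    r2 = 1.0 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)
    return preds, r2, rmse
```

Scoring on held-out stations rather than held-out observations penalizes models that cannot generalize spatially, which is the point of the scheme.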
Liu, Chuan-Fen; Sales, Anne E; Sharp, Nancy D; Fishman, Paul; Sloan, Kevin L; Todd-Stenberg, Jeff; Nichol, W Paul; Rosen, Amy K; Loveland, Susan
2003-01-01
Objective: To compare the rankings for health care utilization performance measures at the facility level in a Veterans Health Administration (VHA) health care delivery network using pharmacy- and diagnosis-based case-mix adjustment measures. Data Sources/Study Setting: The study included veterans who used inpatient or outpatient services in Veterans Integrated Service Network (VISN) 20 during fiscal year 1998 (October 1997 to September 1998; N = 126,076). Utilization and pharmacy data were extracted from VHA national databases and the VISN 20 data warehouse. Study Design: We estimated concurrent regression models using pharmacy or diagnosis information in the base year (FY1998) to predict health service utilization in the same year. Utilization measures included bed days of care for inpatient care and provider visits for outpatient care. Principal Findings: Rankings of predicted utilization measures across facilities vary by case-mix adjustment measure. There is greater consistency within the diagnosis-based models than between the diagnosis- and pharmacy-based models. The eight facilities were ranked differently by the diagnosis- and pharmacy-based models. Conclusions: Choice of case-mix adjustment measure affects rankings of facilities on performance measures, raising concerns about the validity of profiling practices. Differences in rankings may reflect differences in comparability of data capture across facilities between pharmacy and diagnosis data sources, and unstable estimates due to small numbers of patients in a facility. PMID:14596393
On the validity of travel-time based nonlinear bioreactive transport models in steady-state flow.
Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A
2015-01-01
Travel-time based models simplify the description of reactive transport by replacing the spatial coordinates with the groundwater travel time, posing a quasi one-dimensional (1-D) problem and potentially rendering the determination of multidimensional parameter fields unnecessary. While the approach is exact for strictly advective transport in steady-state flow if the reactive properties of the porous medium are uniform, its validity is unclear when local-scale mixing affects the reactive behavior. We compare a two-dimensional (2-D), spatially explicit, bioreactive, advective-dispersive transport model, considered as "virtual truth", with three 1-D travel-time based models which differ in the conceptualization of longitudinal dispersion: (i) neglecting dispersive mixing altogether, (ii) introducing a local-scale longitudinal dispersivity constant in time and space, and (iii) using an effective longitudinal dispersivity that increases linearly with distance. The reactive system considers biodegradation of dissolved organic carbon, which is introduced into a hydraulically heterogeneous domain together with oxygen and nitrate. Aerobic and denitrifying bacteria use the energy of the microbial transformations for growth. We analyze six scenarios differing in the variance of log-hydraulic conductivity and in the inflow boundary conditions (constant versus time-varying concentration). The concentrations of the 1-D models are mapped to the 2-D domain by means of the kinematic (for case i), and mean groundwater age (for cases ii & iii), respectively. The comparison between concentrations of the "virtual truth" and the 1-D approaches indicates extremely good agreement when using an effective, linearly increasing longitudinal dispersivity in the majority of the scenarios, while the other two 1-D approaches reproduce at least the concentration tendencies well. At late times, all 1-D models give valid approximations of two-dimensional transport. 
We conclude that the conceptualization of nonlinear bioreactive transport in complex multidimensional domains by quasi 1-D travel-time models is valid for steady-state flow fields if the reactants are introduced over a wide cross-section, flow is at quasi steady state, and dispersive mixing is adequately parametrized. Copyright © 2015 Elsevier B.V. All rights reserved.
Millerón, M; López de Heredia, U; Lorenzo, Z; Alonso, J; Dounavi, A; Gil, L; Nanos, N
2013-03-01
Spatial discordance between primary and effective dispersal in plant populations indicates that postdispersal processes erase the seed rain signal in recruitment patterns. Five different models were used to test the spatial concordance of the primary and effective dispersal patterns in a European beech (Fagus sylvatica) population from central Spain. An ecological method was based on classical inverse modelling (SSS), using the number of seed/seedlings as input data. Genetic models were based on direct kernel fitting of mother-to-offspring distances estimated by a parentage analysis or were spatially explicit models based on the genotype frequencies of offspring (competing sources model and Moran-Clark's Model). A fully integrated mixed model was based on inverse modelling, but used the number of genotypes as input data (gene shadow model). The potential sources of error and limitations of each seed dispersal estimation method are discussed. The mean dispersal distances for seeds and saplings estimated with these five methods were higher than those obtained by previous estimations for European beech forests. All the methods show strong discordance between primary and effective dispersal kernel parameters, and for dispersal directionality. While seed rain was released mostly under the canopy, saplings were established far from mother trees. This discordant pattern may be the result of the action of secondary dispersal by animals or density-dependent effects; that is, the Janzen-Connell effect. © 2013 Blackwell Publishing Ltd.
Modeling of hot-mix asphalt compaction : a thermodynamics-based compressible viscoelastic model
DOT National Transportation Integrated Search
2010-12-01
Compaction is the process of reducing the volume of hot-mix asphalt (HMA) by the application of external forces. As a result of compaction, the volume of air voids decreases, aggregate interlock increases, and interparticle friction increases. The qu...
NASA Astrophysics Data System (ADS)
Peck, Jaron Joshua
Water is used for cooling in thermoelectric power plants, and power generation currently withdraws more water than any other sector in the U.S. Reducing water use from power generation will help to alleviate water stress in at-risk areas, where droughts have the potential to strain water resources. The amount of water used for power varies depending on many climatic aspects as well as plant operation factors. This work presents a model that quantifies water use for power generation in two regions representing different generation fuel portfolios, California and Utah. The analysis of the California Independent System Operator introduces the methods of water-energy modeling by creating an overall water use factor, in volume of water per unit of energy produced, based on the fuel generation mix of the area. The idea of water monitoring based on energy used by a building or region is explored using live fuel-mix data, for the purposes of increasing public awareness of the water associated with personal energy use and helping to promote greater energy efficiency. The Utah case study explores the effects that more renewable, and less water-intensive, forms of energy will have on the overall water use from power generation for the state. Using a similar model to that of the California case study, total water savings are quantified based on power reduction scenarios involving increased use of renewable energy. The plausibility of implementing more renewable energy into Utah's power grid is also discussed. Data resolution, as well as dispatch methods, economics, and solar variability, introduces some uncertainty into the analysis.
Peristaltic transport and mixing of cytosol through the whole body of Physarum plasmodium.
Iima, Makoto; Nakagaki, Toshiyuki
2012-09-01
We study how the net transport and mixing of chemicals occur in a relatively large amoeba, the true slime mold Physarum polycephalum. The shuttle streaming of the amoeba is characterized by a rhythmic flow of the order of 1 μm/s in which the protoplasm streams back and forth. To explain the experimentally observed transport of chemicals, we formulate a simplified model to consider the mechanism by which net transport can be induced by shuttle (or periodic) motion inside the amoeba. This model is independent of the details of the fluid properties, as it is based only on the law of mass conservation. Even in such a simplified model, we demonstrate that sectional oscillations play an important role in net transport, and we discuss the effects of the sectional boundary motion on net transport in the microorganism.
Enthalpy of Mixing in Al–Tb Liquid
Zhou, Shihuai; Tackes, Carl; Napolitano, Ralph
2017-06-21
The liquid-phase enthalpy of mixing for Al-Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. Mixing enthalpy is determined relative to the unmixed pure (Al and Tb) components. The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Finally, based on our measurements, three different semi-empirical solution models are offered for the excess free energy of the liquid, including regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of mixing enthalpy.
Dynamic route guidance strategy in a two-route pedestrian-vehicle mixed traffic flow system
NASA Astrophysics Data System (ADS)
Liu, Mianfang; Xiong, Shengwu; Li, Bixiang
2016-05-01
With the rapid development of transportation, traffic questions have become a major issue for social, economic and environmental reasons. During serious emergencies in particular, it is very important to alleviate road traffic congestion and improve the efficiency of evacuation to reduce casualties, and addressing these problems has been a major task for the responsible agencies in recent decades. Advanced route guidance strategies have been developed for homogeneous traffic flows, or to reduce traffic congestion and enhance road capacity in a symmetric two-route scenario. However, feedback strategies have rarely been considered for pedestrian-vehicle mixed traffic flows with variable velocities and sizes in an asymmetric multi-route traffic system, which is a common situation in many developing countries. In this study, we propose a weighted road occupancy feedback strategy (WROFS) for pedestrian-vehicle mixed traffic flows, which considers the system equilibrium to ease traffic congestion. To simulate the behavior of mixed traffic objects more realistically, a refined and dynamic cellular automaton model (RDPV_CA model) is adopted as the update mechanism for the pedestrian-vehicle mixed traffic flow. Moreover, a bounded rational threshold control is introduced into the feedback strategy to avoid the negative effects of delayed information. Compared with two previously proposed strategies, simulation results for a pedestrian-vehicle traffic flow scenario demonstrate that the proposed strategy with a bounded rational threshold is more effective and achieves system equilibrium and stability.
Multivariate statistical approach to estimate mixing proportions for unknown end members
Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.
2012-01-01
A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
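The end-member mixing step described above reduces, for each mixed-water sample, to a constrained least-squares problem: find nonnegative proportions that sum to one and reproduce the sample's hydrochemistry from the end-member compositions. A minimal sketch, not the authors' optimization; the sum-to-one constraint is enforced by row augmentation, and all names are invented:

```python
import numpy as np
from scipy.optimize import nnls

def mixing_proportions(sample, end_members, w=1e3):
    """Estimate nonnegative mixing proportions (summing to ~1) for one mixed
    sample. Rows of `end_members` are end-member compositions; `sample` is the
    observed concentration vector. Sketch via weight-augmented NNLS."""
    end_members = np.asarray(end_members, dtype=float)
    # Append a heavily weighted row encoding sum(p) = 1
    A = np.vstack([end_members.T, w * np.ones(end_members.shape[0])])
    b = np.concatenate([np.asarray(sample, dtype=float), [w]])
    p, _ = nnls(A, b)
    return p
```

The weight w trades off how strictly the proportions sum to one against fit to the concentration data; the paper's optimization also estimates the end-member compositions themselves, which this sketch takes as given.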
Statistical modelling of growth using a mixed model with orthogonal polynomials.
Suchocki, T; Szyda, J
2011-02-01
In statistical modelling, the effects of single-nucleotide polymorphisms (SNPs) are often regarded as time-independent. However, for traits recorded repeatedly, it is very interesting to investigate the behaviour of gene effects over time. In the analysis, simulated data from the 13th QTL-MAS Workshop (Wageningen, The Netherlands, April 2009) was used and the major goal was the modelling of genetic effects as time-dependent. For this purpose, a mixed model which describes each effect using the third-order Legendre orthogonal polynomials, in order to account for the correlation between consecutive measurements, is fitted. In this model, SNPs are modelled as fixed, while the environment is modelled as random effects. The maximum likelihood estimates of model parameters are obtained by the expectation-maximisation (EM) algorithm and the significance of the additive SNP effects is based on the likelihood ratio test, with p-values corrected for multiple testing. For each significant SNP, the percentage of the total variance contributed by this SNP is calculated. Moreover, by using a model which simultaneously incorporates effects of all of the SNPs, the prediction of future yields is conducted. As a result, 179 from the total of 453 SNPs covering 16 out of 18 true quantitative trait loci (QTL) were selected. The correlation between predicted and true breeding values was 0.73 for the data set with all SNPs and 0.84 for the data set with selected SNPs. In conclusion, we showed that a longitudinal approach allows for estimating changes of the variance contributed by each SNP over time and demonstrated that, for prediction, the pre-selection of SNPs plays an important role.
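The third-order Legendre basis used above to model time-dependent effects can be built directly with NumPy. The sketch below covers only the design-matrix step, not the full mixed model or its EM estimation; names are invented:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(t, order=3):
    """Design matrix of Legendre polynomials P_0..P_order evaluated at the
    observed times, rescaled to [-1, 1] (the polynomials' natural domain).
    Sketch of the basis for time-dependent SNP effects."""
    t = np.asarray(t, dtype=float)
    x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    # np.eye(...)[k] is the unit coefficient vector selecting P_k
    return np.column_stack([legendre.legval(x, np.eye(order + 1)[k])
                            for k in range(order + 1)])
```

A time-dependent SNP effect is then beta(t) = Phi(t) c for a coefficient vector c estimated within the mixed model, so the variance contributed by the SNP can change over time.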
Ledrich, Julie; Gana, Kamel
2013-12-01
The aim of this study was to examine the intricate relationship between some personality traits (i.e., attributional style, perceived control over consequences, self-esteem) and depressive mood in a nonclinical sample (N = 334). Structural equation modelling was used to estimate five competing models: two vulnerability models describing the effects of personality traits on depressive mood, one scar model describing the effects of depression on personality traits, a mixed model describing the effects of attributional style and perceived control over consequences on depressive mood, which in turn affects self-esteem, and a reciprocal model which is a non-recursive version of the mixed model that specifies bidirectional effects between depressive mood and self-esteem. The best-fitting model was the mixed model. Moreover, we observed a significant negative effect of depression on self-esteem, but no effect in the opposite direction. These findings provide supporting arguments against the continuum model of the relationship between self-esteem and depression, and lend substantial support to the scar model, which claims that depressive mood damages and erodes self-esteem. In addition, the 'depressogenic' nature of the pessimistic attributional style and the 'antidepressant' nature of perceived control over consequences plead in favour of the vulnerability model. © 2012 The British Psychological Society.
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Constrained minimization problems for the reproduction number in meta-population models.
Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N
2018-02-14
The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ) reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Rv), which consists of partial derivatives of Rv with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions are obtained in the case of two sub-populations, and bounds for the optimal solutions are derived for more than two sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that, for general mixing schemes, both R0 and Rv are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
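Under proportionate mixing the next-generation matrix is rank one, so the effective reproduction number has a simple closed form. The sketch below computes Rv as the spectral radius of that matrix; tau is a lumped transmissibility/duration parameter introduced for this illustration, and the names are invented:

```python
import numpy as np

def effective_R(a, n, p, tau=1.0):
    """Effective reproduction number under proportionate mixing.
    NGM entry K_ij = tau * (1 - p_i) * a_i * a_j * n_j / sum_k(a_k * n_k),
    where a_i is contact activity, n_i group size, p_i proportion immune.
    Rv is the spectral radius of K. Illustrative sketch only."""
    a, n, p = (np.asarray(v, dtype=float) for v in (a, n, p))
    A = np.sum(a * n)                               # total contact activity
    K = tau * np.outer((1.0 - p) * a, a * n) / A    # rank-one NGM
    return float(max(abs(np.linalg.eigvals(K))))
```

Because K is rank one, the spectral radius equals its trace, tau * sum_i a_i^2 n_i (1 - p_i) / A, which is what gradient-based vaccination-allocation arguments exploit.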
Yang, Li; Li, Shanshan; Liu, Jixiao; Cheng, Jingmeng
2018-02-01
To explore and utilize the advantages of droplet-based microfluidics, the hydrodynamics and mixing process within droplets traveling through the T junction channel and convergent-divergent sinusoidal microchannels are studied by numerical simulations and experiments, respectively. In the T junction channel, the mixing efficiency is significantly influenced by the twirling effect, which controls the initial distribution of the mixture during the droplet formation stage. The internal recirculating flow can therefore create a convection mechanism, thus improving mixing. The twirling effect is noticeably influenced by the velocity of the continuous phase; in the sinusoidal channel, the Dean vortices and droplet deformation are induced by centrifugal force and an alternating velocity gradient, thus enhancing the mixing efficiency. The best mixing occurred when the droplet size is comparable with the channel width. Finally, we propose a unique optimized structure, which includes a T junction inlet joined to a sinusoidal channel. In this structure, the mixing of fluids in the droplets follows two routes: one is the twirling effect and symmetric recirculation flow in the straight channel; the other is the asymmetric recirculation and droplet deformation in the winding, variable cross-section channel. Among the three structures, the optimized structure has the best mixing efficiency at the shortest mixing time (0.25 ms). The combination of the twirling effect, variable cross-section effect, and Dean vortices greatly intensifies the chaotic flow. This study provides insight into the mixing process and may benefit the design and operation of droplet-based microfluidics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling Intrajunction Dispersion at a Well-Mixed Tidal River Junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfram, Phillip J.; Fringer, Oliver B.; Monsen, Nancy E.
2016-08-01
In this paper, the relative importance of small-scale, intrajunction flow features such as shear layers, separation zones, and secondary flows on dispersion in a well-mixed tidal river junction is explored. A fully nonlinear, nonhydrostatic, and unstructured three-dimensional (3D) model is used to resolve supertidal dispersion via scalar transport at a well-mixed tidal river junction. Mass transport simulated in the junction is compared against predictions using a simple node-channel model to quantify the effects of small-scale, 3D intrajunction flow features on mixing and dispersion. The effects of three-dimensionality are demonstrated by quantifying the difference between two-dimensional (2D) and 3D model results. An intermediate 3D model that does not resolve the secondary circulation or the recirculating flow at the junction is also compared to the 3D model to quantify the relative sensitivity of mixing on intrajunction flow features. Resolution of complex flow features simulated by the full 3D model is not always necessary because mixing is primarily governed by bulk flow splitting due to the confluence–diffluence cycle. Finally, results in 3D are comparable to the 2D case for many flow pathways simulated, suggesting that 2D modeling may be reasonable for nonstratified and predominantly hydrostatic flows through relatively straight junctions, but not necessarily for the full junction network.
A green vehicle routing problem with customer satisfaction criteria
NASA Astrophysics Data System (ADS)
Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.
2016-12-01
This paper develops an MILP model, named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model, in addition to the traditional objective of the VRP, both pollution and customers' satisfaction are taken into account. The model also provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, some new factors evaluate the greenness of each decision based on three criteria. The model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval so that customers' satisfaction can be incorporated alongside the other linear objectives. We present a mixed integer linear programming formulation for the S-GVRP. This model enriches managerial insights by providing trade-offs between customers' satisfaction, total costs and emission levels. Finally, we provide a numerical study showing the applicability of the model.
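A PLF of the kind used to linearize the fuzzy satisfaction interval can be illustrated with a soft time window: satisfaction is 1 inside the preferred window, falls off linearly toward hard limits, and is 0 beyond them. The breakpoints below are hypothetical, not taken from the paper:

```python
import numpy as np

def satisfaction(arrival, e_hard=6.0, e_soft=8.0, l_soft=12.0, l_hard=14.0):
    """Piecewise-linear satisfaction level for an arrival time: 1.0 inside
    [e_soft, l_soft], linear ramps down to 0 at the hard limits, 0 outside.
    Hypothetical breakpoints; sketch of a PLF used in MILP linearization."""
    xs = [e_hard, e_soft, l_soft, l_hard]
    ys = [0.0, 1.0, 1.0, 0.0]
    return float(np.interp(arrival, xs, ys))
```

In the MILP itself, each linear segment becomes a set of linear constraints with binary or SOS2 variables selecting the active segment, so the nonlinear satisfaction curve coexists with the other linear objectives.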
Logistic Mixed Models to Investigate Implicit and Explicit Belief Tracking.
Lages, Martin; Scheel, Anne
2016-01-01
We investigated the proposition of a two-systems Theory of Mind in adults' belief tracking. A sample of N = 45 participants predicted the choice of one of two opponent players after observing several rounds in an animated card game. Three matches of this card game were played, and initial gaze direction on target and subsequent choice predictions were recorded for each belief task and participant. We conducted logistic regressions with mixed effects on the binary data and developed Bayesian logistic mixed models to infer implicit and explicit mentalizing in true belief and false belief tasks. Although logistic regressions with mixed effects predicted the data well, a Bayesian logistic mixed model with latent task- and subject-specific parameters gave a better account of the data. As expected, explicit choice predictions suggested a clear understanding of true and false beliefs (TB/FB). Surprisingly, however, model parameters for initial gaze direction also indicated belief tracking. We discuss why task-specific parameters for initial gaze direction differ from those for choice predictions yet still reflect second-order perspective taking.
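As a rough illustration of a logistic model with subject-specific intercepts, the following penalized-likelihood sketch treats the random effects as ridge-penalized parameters. It is a crude stand-in for the Bayesian logistic mixed models used in the study, and all names and settings are invented:

```python
import numpy as np

def fit_logistic_mixed(y, X, groups, s2_b=1.0, lr=0.5, n_iter=2000):
    """Gradient-ascent fit of a logistic model with subject-specific
    intercepts b_i, penalized by b_i**2 / (2 * s2_b) (a Gaussian prior in
    disguise). Sketch only, not a full Bayesian mixed model."""
    y = np.asarray(y, dtype=float)
    groups = np.asarray(groups)
    ids, gidx = np.unique(groups, return_inverse=True)
    ni = np.bincount(gidx).astype(float)        # observations per subject
    beta = np.zeros(X.shape[1])
    b = np.zeros(len(ids))
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta + b[gidx])))
        r = y - p                               # Bernoulli score residuals
        beta += lr * (X.T @ r) / len(y)
        b += lr * (np.bincount(gidx, weights=r) - b / s2_b) / ni
    return beta, b
```

Replacing the point-estimate penalty with a full posterior over the subject- and task-specific parameters is what distinguishes the Bayesian treatment in the paper from this sketch.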
NASA Astrophysics Data System (ADS)
Pisman, T. I.; Galayda, Ya. V.
The paper presents an experimental and mathematical model of interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) and algae (Chlorella vulgaris and Scenedesmus quadricauda) in the producer-consumer aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the consumer component feeding on the mixed algal culture of the producer component. It has been found that metabolites of the alga Scenedesmus produce an adverse effect on the reproduction of the ciliates P. caudatum. Taking this effect into account, the results of investigation of the mathematical model were in qualitative agreement with the experimental results. In the producer-consumer biotic cycle it was shown that coexistence is impossible in the mixed algal culture of the producer component and in the mixed culture of invertebrates of the consumer component: the ciliates P. caudatum are driven out by the rotifers Brachionus plicatilis.
Ramirez, Adriana G; Tracci, Margaret C; Stukenborg, George J; Turrentine, Florence E; Kozower, Benjamin D; Jones, R Scott
2016-01-01
Background: The Hospital Value-Based Purchasing Program measures value of care provided by participating Medicare hospitals while creating financial incentives for quality improvement and fostering increased transparency. Limited information is available comparing hospital performance across healthcare business models. Study Design: 2015 hospital Value-Based Purchasing Program results were used to examine hospital performance by business model. General linear modeling assessed differences in mean total performance score, hospital case mix index, and differences after adjustment for differences in hospital case mix index. Results: Of 3089 hospitals with Total Performance Scores (TPS), categories of representative healthcare business models included 104 Physician-owned Surgical Hospitals (POSH), 111 University HealthSystem Consortium (UHC), 14 US News & World Report Honor Roll (USNWR) Hospitals, 33 Kaiser Permanente, and 124 Pioneer Accountable Care Organization affiliated hospitals. Estimated mean TPS for POSH (64.4, 95% CI 61.83, 66.38) and Kaiser (60.79, 95% CI 56.56, 65.03) were significantly higher compared to all remaining hospitals, while UHC members (36.8, 95% CI 34.51, 39.17) performed below the mean (p < 0.0001). Significant differences in mean hospital case mix index included POSH (mean = 2.32, p < 0.0001), USNWR honorees (mean = 2.24, p = 0.0140) and UHC members (mean = 1.99, p < 0.0001), while Kaiser Permanente hospitals had a lower case mix value (mean = 1.54, p < 0.0001). Re-estimation of TPS did not change the original results after adjustment for differences in hospital case mix index. Conclusions: The Hospital Value-Based Purchasing Program revealed superior hospital performance associated with business model. Closer inspection of high-value hospitals may guide value improvement and policy-making decisions for all Medicare Value-Based Purchasing Program Hospitals. PMID:27502368
Lewis Jordon; Richard F. Daniels; Alexander Clark; Rechun He
2005-01-01
Earlywood and latewood microfibril angle (MFA) was determined at 1-millimeter intervals from disks at 1.4 meters, then at 3-meter intervals to a height of 13.7 meters, from 18 loblolly pine (Pinus taeda L.) trees grown in southeastern Texas. A modified three-parameter logistic function with mixed effects is used for modeling earlywood and latewood...
ERIC Educational Resources Information Center
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
NASA Astrophysics Data System (ADS)
Sadeghi, Morteza; Ghanbarian, Behzad; Horton, Robert
2018-02-01
Thermal conductivity is an essential component in multiphysics models and coupled simulation of heat transfer, fluid flow, and solute transport in porous media. In the literature, various empirical, semiempirical, and physical models were developed for thermal conductivity and its estimation in partially saturated soils. Recently, Ghanbarian and Daigle (GD) proposed a theoretical model, using the percolation-based effective-medium approximation, whose parameters are physically meaningful. The original GD model implicitly formulates thermal conductivity λ as a function of volumetric water content θ. For the sake of computational efficiency in numerical calculations, in this study, we derive an explicit λ(θ) form of the GD model. We also demonstrate that some well-known empirical models, e.g., Chung-Horton, widely applied in the HYDRUS model, as well as mixing models are special cases of the GD model under specific circumstances. Comparison with experiments indicates that the GD model can accurately estimate soil thermal conductivity.
NASA Astrophysics Data System (ADS)
Robati, Masoud
This doctoral program focuses on evaluating and improving the rutting resistance of micro-surfacing mixtures. Many research problems related to the rutting resistance of micro-surfacing mixtures still require further study. The main objective of this Ph.D. program is to experimentally and analytically study and improve the rutting resistance of micro-surfacing mixtures. During this Ph.D. program, the major aspects related to the rutting resistance of micro-surfacing mixtures are investigated and presented as follows: 1) evaluation of a modification of current micro-surfacing mix design procedures: on the basis of this effort, a new mix design procedure is proposed for type III micro-surfacing mixtures as rut-fill materials on the road surface. Unlike the current mix design guidelines and specifications, the new procedure is capable of selecting the optimum mix proportions for micro-surfacing mixtures; 2) evaluation of test methods and selection of aggregate grading for type III application of micro-surfacing: within this study, a new specification for selection of aggregate grading for type III application of micro-surfacing is proposed; 3) evaluation of repeatability and reproducibility of micro-surfacing mixture design tests: in this study, limits for repeatability and reproducibility of micro-surfacing mix design tests are presented; 4) a new conceptual model for the filler stiffening effect on asphalt mastic of micro-surfacing: a new model is proposed that can establish limits for minimum and maximum filler concentrations in the micro-surfacing mixture based only on the filler's key physical and chemical properties; 5) incorporation of reclaimed asphalt pavement and post-fabrication asphalt shingles in micro-surfacing mixtures: the effectiveness of the newly developed mix design procedure for micro-surfacing mixtures is further validated using recycled materials.
The results establish limits on the amounts of RAP and RAS that can be used in micro-surfacing mixtures; 6) new colored micro-surfacing formulations with improved durability and performance: a significant improvement of around 45% in the rutting resistance of colored and conventional micro-surfacing mixtures is achieved by employing a low penetration grade bitumen polymer-modified asphalt emulsion stabilized using nanoparticles.
Laser Self-Mixing Fiber Bragg Grating Sensor for Acoustic Emission Measurement.
Liu, Bin; Ruan, Yuxi; Yu, Yanguang; Xi, Jiangtao; Guo, Qinghua; Tong, Jun; Rajan, Ginu
2018-06-16
Fiber Bragg grating (FBG) is considered a good candidate for acoustic emission (AE) measurement. The sensing and measurement in traditional FBG-based AE systems are based on the variation in laser intensity induced by the Bragg wavelength shift. This paper presents a sensing system by combining self-mixing interference (SMI) in a laser diode and FBG for AE measurement, aiming to form a new compact and cost-effective sensing system. The measurement model of the overall system was derived. The performance of the presented system was investigated from both aspects of theory and experiment. The results show that the proposed system is able to measure AE events with high resolution and over a wide dynamic frequency range.
NASA Astrophysics Data System (ADS)
Yuan, Manman; Wang, Weiping; Luo, Xiong; Li, Lixiang; Kurths, Jürgen; Wang, Xiao
2018-03-01
This paper is concerned with the exponential lag function projective synchronization of memristive multidirectional associative memory neural networks (MMAMNNs). First, we propose a new model of MMAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying discrete delays and distributed time delays. Second, we design two kinds of hybrid controllers. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the controllers are carefully designed to confirm the process of different types of synchronization in the MMAMNNs. Third, sufficient criteria guaranteeing the synchronization of the system are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.
A mixed-effects height-diameter model for cottonwood in the Mississippi Delta
Curtis L. VanderSchaaf; H. Christoph Stuhlinger
2012-01-01
Eastern cottonwood (Populus deltoides Bartr. ex Marsh.) has been artificially regenerated throughout the Mississippi Delta region because of its fast growth and is being considered for biofuel production.This paper presents a mixed-effects height-diameter model for cottonwood in the Mississippi Delta region. After obtaining height-diameter...
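A typical height-diameter form with a stand-level random effect can be sketched as below. This is a minimal illustration under assumed parameter names and values, not the fitted cottonwood model from the paper; the 1.37 m offset is the standard breast-height convention.

```python
import numpy as np

def height_dbh(dbh, a, b, c, u_stand=0.0):
    """Chapman-Richards-type height-diameter curve: predicted total height
    (m) from diameter at breast height dbh (cm), anchored at 1.37 m.
    In the mixed-effects version, the random effect u_stand perturbs the
    asymptote a for each stand or plot."""
    return 1.37 + (a + u_stand) * (1.0 - np.exp(-b * dbh)) ** c
```

Fitting would estimate (a, b, c) as fixed effects and the variance of `u_stand` from plot-grouped data, then predict calibrated heights per stand.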
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
Some effects of swirl on turbulent mixing and combustion
NASA Technical Reports Server (NTRS)
Rubel, A.
1972-01-01
A general formulation of some effects of swirl on turbulent mixing is given. The basis for the analysis is that momentum transport is enhanced by turbulence resulting from rotational instability of the fluid field. An appropriate form for the turbulent eddy viscosity is obtained by mixing length type arguments. The result takes the form of a corrective factor that is a function of the swirl and acts to increase the eddy viscosity. The factor is based upon the initial mixing conditions implying that the rotational turbulence decays in a manner similar to that of free shear turbulence. Existing experimental data for free jet combustion are adequately matched by using the modifying factor to relate the effects of swirl on eddy viscosity. The model is extended and applied to the supersonic combustion of a ring jet of hydrogen injected into a constant area annular air stream. The computations demonstrate that swirling the flow could: (1) reduce the burning length by one half, (2) result in more uniform burning across the annulus width, and (3) open the possibility of optimization of the combustion characteristics by locating the fuel jet between the inner wall and center of the annulus width.
Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei
2017-11-01
A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation has often been overlooked in previous studies over the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250 Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.
2017-10-01
perturbations in the energetic material to study their effects on the blast wave formation. The last case also makes use of the same PBX, however, the...configuration, Case A: Spore cloud located on the top of the charge at an angle 45 degree, Case B: Spore cloud located at an angle 45 degree from the charge...theoretical validation. The first is the Sedov case where the pressure decay and blast wave front are validated based on analytical solutions. In this test
Pourghassem, Hossein
2012-01-01
Material detection is a vital need in dual-energy X-ray luggage inspection systems used in airport security and at strategic places. In this paper, a novel material detection algorithm based on statistically trainable models using the 2-dimensional power density function (PDF) of three material categories in dual-energy X-ray images is proposed. In this algorithm, the PDF of each material category, as a statistical model, is estimated from transmission measurement values of low- and high-energy X-ray images by Gaussian Mixture Models (GMM). The material label of each pixel of an object is determined from the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum voting scheme over an image neighborhood as a post-processing stage. Using background removal and denoising stages, the high- and low-energy X-ray images are enhanced in a pre-processing procedure. To improve the discrimination capability of the proposed material detection algorithm, the details of the low- and high-energy X-ray images are added to a constructed color image in which three colors (orange, blue and green) represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual-energy X-ray luggage inspection system. The obtained results show that the proposed algorithm detects metallic, organic and mixed materials with acceptable accuracy.
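The per-pixel decision rule described above can be sketched as follows: each material category gets a density model over (low-energy, high-energy) transmission pairs, a pixel takes the label of the highest-density category, and a majority vote over its neighborhood cleans the label map. The means and covariances below are invented for illustration (single Gaussian components); the actual system estimates full Gaussian mixtures from training data.

```python
import numpy as np
from collections import Counter

def gauss2d(x, mean, cov):
    """Density of a 2-D Gaussian at point x (one mixture component)."""
    d = x - mean
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / (
        2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

# Hypothetical per-category models over (low, high) transmission values.
MODELS = {
    "organic":  (np.array([0.70, 0.80]), 0.01 * np.eye(2)),
    "metallic": (np.array([0.20, 0.40]), 0.01 * np.eye(2)),
    "mixed":    (np.array([0.45, 0.60]), 0.02 * np.eye(2)),
}

def classify_pixel(x):
    """Label a pixel by the category whose model density is highest."""
    return max(MODELS, key=lambda k: gauss2d(x, *MODELS[k]))

def majority_vote(neighborhood_labels):
    """Post-processing: relabel a pixel by the majority label around it."""
    return Counter(neighborhood_labels).most_common(1)[0][0]
```

With fitted GMMs, `gauss2d` would be replaced by a weighted sum of component densities per category; the decision and voting logic stay the same.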
Isotopic modeling of the sub-cloud evaporation effect in precipitation.
Salamalikis, V; Argiriou, A A; Dotsika, E
2016-02-15
In dry and warm environments sub-cloud evaporation influences the falling raindrops, modifying their final stable isotopic content. During their descent from the cloud base towards the ground surface, through the unsaturated atmosphere, hydrometeors are subjected to evaporation, and the kinetic fractionation results in less depleted or enriched isotopic signatures compared to the initial isotopic composition of the raindrops at cloud base. Nowadays the development of Generalized Climate Models (GCMs) that include isotopic content calculation modules is of great interest for the isotopic tracing of the global hydrological cycle. Therefore the accurate description of the underlying processes affecting stable isotopic content can improve the performance of iso-GCMs. The aim of this study is to model the sub-cloud evaporation effect using a) mixing and b) numerical isotope evaporation models. The isotope-mixing evaporation model simulates the isotopic enrichment (the difference between the ground and the cloud base isotopic composition of raindrops) in terms of raindrop size, ambient temperature and relative humidity (RH) at ground level. The isotopic enrichment (Δδ) varies linearly with the evaporated mass fraction of the raindrop, resulting in higher values in drier atmospheres and for smaller raindrops. The relationship between Δδ and RH is described by a 'heat capacity' model providing high correlation coefficients for both isotopes (R(2)>80%), indicating that RH is an ideal indicator of the sub-cloud evaporation effect. The vertical distribution of stable isotopes in falling raindrops is also investigated using a numerical isotope-evaporation model. The temperature and humidity dependence of the vertical isotopic variation is clearly described by the numerical isotopic model, which shows an increase in the isotopic values with increasing temperature and decreasing RH.
At an almost saturated atmosphere (RH = 95%) sub-cloud evaporation is negligible and the isotopic composition hardly changes even at high temperatures, while under drier and warmer conditions the enrichment of (18)O reaches up to 20‰, depending on the raindrop size and the initial meteorological conditions. Copyright © 2015 Elsevier B.V. All rights reserved.
Development of a new continuous process for mixing of complex non-Newtonian fluids
NASA Astrophysics Data System (ADS)
Migliozzi, Simona; Mazzei, Luca; Sochon, Bob; Angeli, Panagiota; Thames Multiphase Team; Coral Project Collaboration
2017-11-01
Design of new continuous mixing operations poses many challenges, especially when dealing with highly viscous non-Newtonian fluids. Knowledge of complex rheological behaviour of the working mixture is crucial for development of an efficient process. In this work, we investigate the mixing performance of two different static mixers and the effects of the mixture rheology on the manufacturing of novel non-aqueous-based oral care products using experimental and computational fluid dynamic methods. The two liquid phases employed, i.e. a carbomer suspension in polyethylene glycol and glycerol, start to form a gel when they mix. We studied the structure evolution of the liquid mixture using time-resolved rheometry and we obtained viscosity rheograms at different phase ratios from pressure drop measurements in a customized mini-channel. The numerical results and rheological model were validated with experimental measurements carried out in a specifically designed setup. EPSRS-CORAL.
NASA Astrophysics Data System (ADS)
Khan, Noor Saeed; Gul, Taza; Khan, Muhammad Altaf; Bonyah, Ebenezer; Islam, Saeed
Mixed convection in gravity-driven non-Newtonian nanofluid film (Casson and Williamson) flow containing both nanoparticles and gyrotactic microorganisms along a convectively heated vertical surface is investigated. The actively controlled nanofluid model boundary conditions are used to explore the liquid film flow. The study presents an analytical approach to the bioconvection of non-Newtonian thin film nanofluids based on the physical mechanisms responsible for the nanoparticles and the base fluid, such as Brownian motion and thermophoresis. Both fluids show almost the same behavior with respect to all the pertinent parameters, except for the effect of the Schmidt number on the microorganism density function, where the effect is opposite. Ordinary differential equations together with the boundary conditions are obtained through similarity variables from the governing equations of the problem, and these are solved by HAM (Homotopy Analysis Method). The solution is illustrated through graphs showing the influence of all the parameters. The study is relevant to novel microbial fuel cell technologies combining the nanofluid with bioconvection phenomena.
Designing a podiatry service to meet the needs of the population: a service simulation.
Campbell, Jackie A
2007-02-01
A model of a podiatry service has been developed which takes into consideration the effect of changing access criteria, skill mix and staffing levels (among others) given fixed local staffing budgets and the foot-health characteristics of the local community. A spreadsheet-based deterministic model was chosen to allow maximum transparency of programming. This work models a podiatry service in England, but could be adapted for other settings and, with some modification, for other community-based services. This model enables individual services to see the effect on outcome parameters such as number of patients treated, number discharged and size of waiting lists of various service configurations, given their individual local data profile. The process of designing the model has also had spin-off benefits for the participants in making explicit many of the implicit rules used in managing their services.
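The deterministic spreadsheet logic described can be sketched as a toy weekly simulation. Everything here is hypothetical and for illustration only: the parameter names, update rules, and numbers are assumptions, not the published model, which additionally encodes access criteria, skill mix, and local foot-health profiles.

```python
def simulate_service(weeks, referrals_per_week, caseload_cap, discharge_rate,
                     waiting=0, caseload=0.0):
    """Toy deterministic service model: each week, referrals join the
    waiting list, patients are admitted to the active caseload up to a
    capacity cap, and a fixed fraction of the caseload is discharged."""
    for _ in range(weeks):
        waiting += referrals_per_week                  # new referrals arrive
        admitted = min(waiting, max(caseload_cap - caseload, 0))
        waiting -= admitted
        caseload += admitted
        caseload -= discharge_rate * caseload          # discharges free capacity
    return waiting, caseload
```

Varying `caseload_cap` (staffing) or `discharge_rate` (access/discharge criteria) then shows their effect on waiting-list size, mirroring the kind of what-if analysis the service model supports.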
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, as the best-fitting methods have the most barriers to application in terms of data and software requirements. PMID:25853472
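The model evaluation above relies on the finite-sample corrected AIC. For reference, a minimal implementation of that standard criterion (the formula is textbook-standard; this is not code from the study):

```python
def aicc(log_likelihood, k, n):
    """Finite-sample corrected Akaike Information Criterion:
    AICc = -2 ln(L) + 2k + 2k(k + 1) / (n - k - 1),
    where k is the number of estimated parameters and n the sample size."""
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + (2.0 * k * (k + 1.0)) / (n - k - 1.0)
```

Lower AICc indicates a better fit after penalizing parameter count; the correction term matters here because parameterizations such as indicator variables add one parameter per sample tree, inflating k relative to n.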
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
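The broken-line linear (BLL) ascending form that fitted best can be sketched as follows. This is a minimal illustration; the parameter values in the example are assumed (with the breakpoint placed near the reported 16.5% estimate), and the real analysis fits this mean function inside a nonlinear mixed model with block random effects and heteroskedastic residuals.

```python
import numpy as np

def bll_ascending(x, plateau, breakpoint, slope):
    """Broken-line linear ascending dose-response: the response (e.g. G:F)
    rises linearly with dose x up to the breakpoint, then stays flat at
    the plateau."""
    x = np.asarray(x, dtype=float)
    return np.where(x < breakpoint, plateau - slope * (breakpoint - x), plateau)
```

In practice the three parameters (plus variance components) would be estimated by maximum likelihood, e.g. with PROC NLMIXED as in the paper or `scipy.optimize` for a fixed-effects-only sketch, often from a grid of starting values to aid convergence.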
Significance of the model considering mixed grain-size for inverse analysis of turbidites
NASA Astrophysics Data System (ADS)
Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.
2016-12-01
A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has been important in sedimentological research. For instance, there are various inverse analyses to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Though numerical models of mixed grain-size particles exist, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method that optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with a multi-point start, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the optimization starts from conditions deviated from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution.
The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on the model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
NASA Technical Reports Server (NTRS)
Boulet, C.; Ma, Q.
2016-01-01
Line mixing effects have been calculated in the ν1 parallel band of self-broadened NH3. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first order mixing parameters are discussed as well as some extensions of both theory and experiment.
Modelling the effect of environmental factors on resource allocation in mixed plants systems
NASA Astrophysics Data System (ADS)
Gayler, Sebastian; Priesack, Eckart
2010-05-01
In most cases, plant growth is determined by competition with neighbours for the local resources light, water and nutrients, and by defence against herbivores and pathogens. Consequently, it is important for a plant to grow fast without neglecting defence. However, the plant-internal substrates and energy required to support maintenance, growth and defence are limited, and the total demand for these processes cannot be met in most cases. Therefore, allocation of carbohydrates to growth-related primary metabolism or to defence-related secondary metabolism can be seen as a trade-off between the plant's demand for being competitive against neighbours and for being more resistant against pathogens. A modelling approach is presented which can be used to simulate competition for light, water and nutrients between plant individuals in mixed canopies. The balance of resource allocation between growth processes and synthesis of secondary compounds is modelled by a concept originating from different plant defence hypotheses. The model is used to analyse the impact of environmental factors such as soil water and nitrogen availability, planting density and atmospheric concentration of CO2 on the growth of plant individuals within mixed canopies and on variations in the concentration of carbon-based secondary metabolites in plant tissues.
RWPV bioreactor mass transport: earth-based and in microgravity
NASA Technical Reports Server (NTRS)
Begley, Cynthia M.; Kleis, Stanley J.
2002-01-01
Mass transport and mixing of perfused scalar quantities in the NASA Rotating Wall Perfused Vessel bioreactor are studied using numerical models of the flow field and scalar concentration field. Operating conditions typical of both microgravity and ground-based cell cultures are studied to determine the expected vessel performance for both flight and ground-based control experiments. Results are presented for the transport of oxygen with cell densities and consumption rates typical of colon cancer cells cultured in the RWPV. The transport and mixing characteristics are first investigated with a step change in the perfusion inlet concentration by computing the time histories of the time to exceed 10% inlet concentration. The effects of a uniform cell utilization rate are then investigated with time histories of the outlet concentration, volume average concentration, and volume fraction starved. It is found that the operating conditions used in microgravity produce results that are quite different than those for ground-based conditions. Mixing times for microgravity conditions are significantly shorter than those for ground-based operation. Increasing the differential rotation rates (microgravity) increases the mixing and transport, while increasing the mean rotation rate (ground-based) suppresses both. Increasing perfusion rates enhances mass transport for both microgravity and ground-based cases; however, for the present range of operating conditions, above 5-10 cc/min there are diminishing returns as much of the inlet fluid is transported directly to the perfusion exit. The results show that exit concentration is not a good indicator of the concentration distributions in the vessel. In microgravity conditions, the NASA RWPV bioreactor with the viscous pump has been shown to provide an environment that is well mixed. Even when operated near the theoretical minimum perfusion rates, only a small fraction of the volume provides less than the required oxygen levels. 
2002 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Campbell, James R.; Ge, Cui; Wang, Jun; Welton, Ellsworth J.; Bucholtz, Anthony; Hyer, Edward J.; Reid, Elizabeth A.; Chew, Boon Ning; Liew, Soo-Chin; Salinas, Santo V.;
2015-01-01
This work describes some of the most extensive ground-based observations of the aerosol profile collected in Southeast Asia to date, highlighting the challenges in simulating these observations with a mesoscale perspective. An 84-h WRF Model coupled with chemistry (WRF-Chem) mesoscale simulation of smoke particle transport at Kuching, Malaysia, in the southern Maritime Continent of Southeast Asia is evaluated relative to a unique collection of continuous ground-based lidar, sun photometer, and 4-h radiosonde profiling. The period was marked by relatively dry conditions, allowing smoke layers transported to the site unperturbed by wet deposition to be common regionally. The model depiction is reasonable overall. Core thermodynamics, including land-sea-breeze structure, are well resolved. Total model smoke extinction and, by proxy, mass concentration are low relative to observation. Smoke emissions source products are likely low because of undersampling of fires in infrared sun-synchronous satellite products, which is exacerbated regionally by endemic low-level cloud cover. Differences are identified between the model mass profile and the lidar profile, particularly during periods of afternoon convective mixing. A static smoke mass injection height parameterized for this study potentially influences this result. The model does not resolve the convective mixing of aerosol particles into the lower free troposphere or the enhancement of near-surface extinction from nighttime cooling and hygroscopic effects.
A mixed model framework for teratology studies.
Braeken, Johan; Tuerlinckx, Francis
2009-10-01
A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.
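The copula idea in this abstract is that dependence between binary anomaly indicators can be modeled separately from their marginal prevalences. A minimal numpy sketch using a Gaussian copula for two binary anomalies (prevalences, correlation, and sample size are illustrative; the paper embeds copulas in a finite-mixture mixed model with covariates):

```python
import numpy as np
from statistics import NormalDist

def correlated_anomalies(p, rho, n, rng):
    """Sample two binary anomaly indicators with marginal prevalences p[0], p[1]
    whose association is induced by a Gaussian copula: threshold a correlated
    bivariate normal at the normal quantiles of 1 - p."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    cut = np.array([NormalDist().inv_cdf(1.0 - pi) for pi in p])
    return (z > cut).astype(int)   # 1 = anomaly present

rng = np.random.default_rng(3)
y = correlated_anomalies([0.15, 0.10], 0.6, 200_000, rng)
```

The marginal prevalences are preserved exactly by the thresholds, while the latent correlation produces a positive association between the two anomalies.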
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
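The marginal likelihood of such frailty models is obtained by integrating the random effects out, which the paper does with (adaptive or nonadaptive) Gauss-Hermite quadrature. A minimal numpy sketch for a Weibull proportional-hazards model with a single normal random intercept per cluster (data and parameter values are illustrative, and this is plain rather than adaptive quadrature; the paper's implementation is in Stata):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Weibull PH with a normal random intercept per cluster:
# h_ij(t) = lam * gam * t**(gam-1) * exp(x_ij * beta + b_i),  b_i ~ N(0, sig^2)

def cluster_loglik(b, t, d, x, beta, lam, gam):
    """Conditional log-likelihood of one cluster given its random effect b
    (d = event indicator, 1 = event, 0 = censored)."""
    eta = x * beta + b
    log_h = np.log(lam * gam) + (gam - 1) * np.log(t) + eta
    log_S = -lam * t**gam * np.exp(eta)          # log survival function
    return np.sum(d * log_h + log_S)

def marginal_loglik(t, d, x, beta, lam, gam, sig, n_nodes=15):
    """Integrate the random effect out with Gauss-Hermite quadrature:
    int f(b) N(b; 0, sig^2) db ~ (1/sqrt(pi)) * sum_k w_k f(sqrt(2)*sig*z_k)."""
    z, w = hermgauss(n_nodes)                    # nodes/weights for exp(-z^2)
    vals = [cluster_loglik(np.sqrt(2.0) * sig * zk, t, d, x, beta, lam, gam)
            for zk in z]
    return np.log(np.sum(w * np.exp(vals)) / np.sqrt(np.pi))

# toy cluster: three subjects, two events and one censored observation
t = np.array([1.2, 0.7, 2.5]); d = np.array([1, 1, 0]); x = np.array([0.0, 1.0, 1.0])
ll = marginal_loglik(t, d, x, beta=0.5, lam=0.8, gam=1.3, sig=0.6)
```

Maximum likelihood estimation would then maximize the sum of such cluster contributions over the parameters.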
Traveltime-based descriptions of transport and mixing in heterogeneous domains
NASA Astrophysics Data System (ADS)
Luo, Jian; Cirpka, Olaf A.
2008-09-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass transfer coefficients. In most applications, breakthrough curves (BTCs) of conservative and reactive compounds are measured at only a few locations and spatially explicit models are calibrated by matching these BTCs. A common difficulty in such applications is that the individual BTCs differ too strongly to justify the assumption of spatial homogeneity, whereas the number of observation points is too small to identify the spatial distribution of the decisive parameters. The key objective of the current study is to characterize physical transport by the analysis of conservative tracer BTCs and predict the macroscopic BTCs of compounds that react upon mixing from the interpretation of conservative tracer BTCs and reactive parameters determined in the laboratory. We do this in the framework of traveltime-based transport models which do not require spatially explicit, costly aquifer characterization. By considering BTCs of a conservative tracer measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the traveltime-based framework, the BTC of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct traveltime value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of traveltimes, which also determines the weights associated with each stream tube. 
Key issues in using the traveltime-based framework include the description of mixing mechanisms and the estimation of the traveltime distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach for determining the traveltime distribution, given a BTC integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases wherein the true traveltime distribution exhibits multiple peaks or long tails. It is demonstrated that multiple combinations of mixing parameters and traveltime distributions can fit the conservative BTCs and describe the tailing. A reactive transport case of a dual Michaelis-Menten problem demonstrates that the reactive mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local BTCs.
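The core construction of the traveltime-based framework, a flux-averaged BTC as a traveltime-weighted superposition of advective-dispersive streamtube BTCs, can be sketched as follows (the lognormal traveltime distribution, Peclet number, and leading-term Ogata-Banks step solution are illustrative choices, not the paper's calibrated quantities):

```python
import numpy as np
from math import erfc

def local_btc(t, tau, Pe):
    """Step-input BTC within one advective-dispersive streamtube with mean
    traveltime tau (leading Ogata-Banks term, dimensionless time th = t/tau)."""
    th = t / tau
    if th <= 0:
        return 0.0
    return 0.5 * erfc(np.sqrt(Pe / (4.0 * th)) * (1.0 - th))

# spreading: a discrete lognormal traveltime distribution over streamtubes
taus = np.linspace(0.2, 5.0, 200)
pdf = np.exp(-(np.log(taus)) ** 2 / (2 * 0.4 ** 2)) / taus
w = pdf / pdf.sum()                      # streamtube weights

def macro_btc(t, Pe=50.0):
    """Flux-averaged BTC = traveltime-weighted sum of streamtube BTCs;
    Pe controls mixing within tubes, the tau-distribution controls spreading."""
    return float(np.sum(w * np.array([local_btc(t, tau, Pe) for tau in taus])))

times = np.linspace(0.01, 10.0, 60)
btc = np.array([macro_btc(t) for t in times])
```

Raising Pe narrows each streamtube's breakthrough (less mixing) while leaving the overall spreading, set by the traveltime weights, unchanged, which is exactly the mixing/spreading distinction the abstract draws.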
Wang, Yuanjia; Chen, Huaihou
2012-01-01
Summary: We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801
Wang, Yuanjia; Chen, Huaihou
2012-12-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Galloway, A. W. E.; Eisenlord, M. E.; Brett, M. T.
2016-02-01
Stable isotope (SI) based mixing models are the most common approach used to infer resource pathways in consumers. However, SI based analyses are often underdetermined, and consumer SI fractionation is usually unknown. The use of fatty acid (FA) tracers in mixing models offers an alternative approach that can resolve the underdetermined constraint. A limitation of both methods is the considerable uncertainty about consumer 'trophic modification' (TM) of dietary FA or SI, which occurs as consumers transform dietary resources into tissues. We tested the utility of SI and FA approaches for inferring the diets of the marine benthic isopod (Idotea wosnesenskii) fed various marine macroalgae in controlled feeding trials. Our analyses quantified how the accuracy and precision of Bayesian mixing models were influenced by the choice of algorithm (SIAR vs MixSIR), fractionation (assumed or known), and whether the model was under- or overdetermined (seven sources with two vs 26 tracers) for cases where isopods were fed an exclusive diet of one of the seven different macroalgae. Using the conventional approach (i.e., 2 SI with assumed TM) resulted in average model outputs, i.e., the contribution from the exclusive resource = 0.20 ± 0.23 (0.00-0.79), mean ± SD (95% credible interval), that only differed slightly from the prior assumption. Using the FA based approach with known TM greatly improved model performance, i.e., the contribution from the exclusive resource = 0.91 ± 0.10 (0.58-0.99). The choice of algorithm only made a difference when fractionation was known and the model was overdetermined (FA approach). In this case SIAR and MixSIR had outputs of 0.86 ± 0.11 (0.48-0.96) and 0.96 ± 0.05 (0.79-1.00), respectively. This analysis shows that the choice of dietary tracers and the assumption of consumer trophic modification greatly influence the performance of mixing model dietary reconstructions, and ultimately our understanding of what resources actually support aquatic consumers.
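The under- versus overdetermined distinction can be illustrated with a linear mass-balance mixing model: with 2 tracers and 7 sources there are infinitely many exact solutions, while 26 tracers pin the proportions down. A toy numpy sketch with random tracer signatures and trophic modification assumed fully corrected (nothing here is the SIAR/MixSIR machinery, just the linear-algebra core):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources = 7
p_true = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])   # exclusive diet

# tracer signatures of each source (rows = tracers, cols = sources)
S_iso = rng.normal(0, 5, size=(2, n_sources))    # 2 stable isotopes
S_fa  = rng.normal(0, 5, size=(26, n_sources))   # 26 fatty-acid tracers

y_iso = S_iso @ p_true     # consumer signature under each tracer set
y_fa  = S_fa  @ p_true

# underdetermined (2 equations, 7 unknowns): lstsq returns the minimum-norm
# solution, one of infinitely many that reproduce the data exactly
p_iso = np.linalg.lstsq(S_iso, y_iso, rcond=None)[0]
# overdetermined (26 equations, 7 unknowns): unique least-squares solution
p_fa  = np.linalg.lstsq(S_fa, y_fa, rcond=None)[0]
```

The isotope solution fits the consumer signature perfectly yet need not resemble the true diet, whereas the fatty-acid system recovers the exclusive diet, mirroring the accuracy gap reported in the abstract.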
Modeling the elastic energy of alloys: Potential pitfalls of continuum treatments.
Baskaran, Arvind; Ratsch, Christian; Smereka, Peter
2015-12-01
Some issues that arise when modeling elastic energy for binary alloys are discussed within the context of a Keating model and density-functional calculations. The Keating model is a simplified atomistic formulation based on modeling elastic interactions of a binary alloy with harmonic springs whose equilibrium length is species dependent. It is demonstrated that the continuum limit for the strain field is given by the usual equations of linear elasticity for alloys and that these correctly capture the coarse-grained behavior of the displacement field. In addition, it is established that the Euler-Lagrange equation of the continuum limit of the elastic energy yields the same strain field equation. This is the same energy functional that is often used to model elastic effects in binary alloys. However, a direct calculation of the elastic energy of the atomistic model reveals that the continuum expression for the elastic energy is both qualitatively and quantitatively incorrect, because it does not take atomistic-scale compositional nonuniformity into account. Importantly, this result also shows that finely mixed alloys tend to have more elastic energy than segregated systems, which is the exact opposite of the prediction made by some continuum theories. It is also shown that for strained thin films the traditionally used effective misfit for alloys systematically underestimates the strain energy. In some models, this drawback is handled by including an elastic contribution to the enthalpy of mixing, which is characterized in terms of the continuum concentration. The direct calculation with the atomistic model reveals that this approach suffers serious difficulties: the elastic contribution to the enthalpy of mixing is nonisotropic and scale dependent. It is also shown that such effects are present in density-functional theory calculations for the Si-Ge system. 
This work demonstrates that it is critical to include the microscopic arrangements in any elastic model to achieve even qualitatively correct behavior.
USDA-ARS?s Scientific Manuscript database
This study was designed to determine if the present USDA ARS Spray Nozzle models based on water plus non-ionic surfactant spray solutions could be used to estimate spray droplet size data for different spray formulations through use of experimentally determined correction factors or if full spray fo...
ERIC Educational Resources Information Center
Yakubova, Gulnoza; Hughes, Elizabeth M.; Hornberger, Erin
2015-01-01
The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with…
Comparison of Screen Sizes When Using Video Prompting to Teach Adolescents with Autism
ERIC Educational Resources Information Center
Bennett, Kyle D.; Gutierrez, Anibal, Jr.; Loughrey, Tara O.
2016-01-01
Recently, researchers have compared the effectiveness of video-based instruction (VBI), particularly video modeling, when using smaller versus larger screen sizes with positive, but mixed results. Using an adapted alternating treatments design, we compared two different screen sizes (i.e., iPhone 5 versus iPad 2) using video prompting as the VBI…
Converting isotope ratios to diet composition - the use of mixing models - June 2010
One application of stable isotope analysis is to reconstruct diet composition based on isotopic mass balance. The isotopic value of a consumer’s tissue reflects the isotopic values of its food sources proportional to their dietary contributions. Isotopic mixing models are used ...
Schlenker, R; Shaughnessy, P; Yslas, I
1983-01-01
The considerably higher cost per patient day in hospital-based compared with freestanding nursing homes is well known. In this study, data from a random sample of 1,843 patients from 78 freestanding and hospital-based nursing homes in Colorado were used to explore the extent to which this higher cost can be explained by differences in case mix and quality of care. These differences were found to be associated with approximately 40% of the difference in cost, with case mix accounting for the majority of this effect. Although these findings are based on data from one state, they strongly suggest that Medicare and Medicaid nursing home policies should take case mix into account in reimbursing hospital-based and freestanding nursing homes.
Thermomagnetic phenomena in the mixed state of high temperature superconductors
NASA Technical Reports Server (NTRS)
Meilikhov, E. Z.
1995-01-01
Galvano- and thermomagnetic phenomena in high temperature superconductors, based on kinetic coefficients, are discussed, along with a connection between the electric field and the heat flow in the superconductor mixed state. The relationship that determines the transport coefficients of high temperature superconductors in the mixed state based on Seebeck and Nernst effects is developed. It is shown that this relationship is true for the whole transition region of the resistive mixed state of a superconductor. Peltier, Ettingshausen and Righi-Leduc effects associated with heat conductivity as related to high temperature superconductors are also addressed.
Study of the 190Hg Nucleus: Testing the Existence of U(5) Symmetry
NASA Astrophysics Data System (ADS)
Jahangiri Tazekand, Z.; Mohseni, M.; Mohammadi, M. A.; Sabri, H.
2018-06-01
In this paper, we have considered the energy spectra, quadrupole transition probabilities, energy surface, charge radii, and quadrupole moment of the 190Hg nucleus to describe the interplay between phase transitions and configuration mixing of intruder excitations. To this aim, we have used four different formalisms: (i) the interacting boson model (IBM) including configuration mixing, (ii) Z(5) critical symmetry, (iii) a U(6)-based transitional Hamiltonian, and (iv) a transitional IBM Hamiltonian in both IBM-1 and IBM-2 versions based on the affine SU(1,1) Lie algebra. Results show the advantages of configuration mixing and transitional Hamiltonians, in particular the IBM-2 formalism, in reproducing the experimental counterparts when the weight of spherical symmetry is increased.
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism; the proposed method showed the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual and incorporate experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
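The thresholding idea, flagging probes whose normalized ratio falls outside a sample-specific interval, can be roughly sketched with a robust median/MAD plug-in (the paper instead derives tolerance intervals from the fitted mixed model; the factor k = 3 and the toy ratios are illustrative):

```python
import numpy as np

def flag_altered(ratios, k=3.0):
    """Flag probes whose normalized ratio deviates from the sample's typical
    level by more than k robust standard deviations, with the level and scale
    estimated from that same sample (median and MAD, so that a genuinely
    altered probe does not inflate the threshold)."""
    ratios = np.asarray(ratios, float)
    med = np.median(ratios)
    mad = np.median(np.abs(ratios - med))
    s = 1.4826 * mad                    # MAD -> sd under normality
    return np.abs(ratios - med) > k * s

# one test sample: seven normal-copy probes and one heterozygous deletion (~0.5)
sample = np.array([1.0, 0.98, 1.03, 0.99, 1.01, 0.5, 1.02, 1.01])
altered = flag_altered(sample, k=3.0)
```

Because the interval is computed per sample, a noisier sample automatically gets a wider threshold, which is the behavior the tolerance-interval construction is designed to give.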
Jingjing Liang; J. Buongiorno; R.A. Monserud
2005-01-01
A growth model for uneven-aged mixed-conifer stands in California was developed with data from 205 permanent plots. The model predicts the number of softwood and hardwood trees in nineteen diameter classes, based on equations for diameter growth rates, mortality and recruitment. The model gave unbiased predictions of the expected number of trees by diameter class and...
Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel
2018-02-27
Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices; and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by many genome-wide association study (GWAS) software. To address the aforementioned limitations, we developed a new R package lme4qtl as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl .
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
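A piecewise linear growth curve with one knot reduces to a linear-spline design matrix: intercept, pre-knot slope, and change in slope after the knot. A minimal fixed-effects sketch with synthetic noise-free scores (the paper fits mixed-effects versions with per-student random effects; the knot location and slopes here are made up):

```python
import numpy as np

def piecewise_design(t, knot):
    """Design matrix for a one-knot piecewise linear (linear spline) trend:
    columns are intercept, time, and the hinge max(t - knot, 0)."""
    return np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])

# synthetic trajectory: growth of 2 units/yr up to year 3, then 1 unit/yr
t = np.linspace(0.0, 9.0, 19)
y = 10 + 2 * t - 1.0 * np.maximum(t - 3.0, 0.0)
coef, *_ = np.linalg.lstsq(piecewise_design(t, 3.0), y, rcond=None)
# coef ~ [10, 2, -1]; the post-knot slope is coef[1] + coef[2] = 1
```

In the mixed-effects version, each student would get random deviations on these coefficients, which is what lets the model compare growth trajectories across groups.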
NASA Astrophysics Data System (ADS)
Pisman, T. I.
2009-07-01
The paper presents an experimental and mathematical model of interactions between invertebrates (the ciliates Paramecium caudatum and the rotifers Brachionus plicatilis) in the "producer-consumer" aquatic biotic cycle with spatially separated components. The model describes the dynamics of the mixed culture of ciliates and rotifers in the "consumer" component, feeding on the mixed algal culture of the "producer" component. It has been found that metabolites of the algae Scenedesmus produce an adverse effect on the reproduction of the ciliates P. caudatum. When this effect is taken into account, the results of the mathematical model are in qualitative agreement with the experimental results. In the "producer-consumer" biotic cycle, it was shown that coexistence is impossible in the mixed culture of invertebrates of the "consumer" component: the ciliates P. caudatum are driven out by the rotifers B. plicatilis.
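The competitive exclusion found in the "consumer" component can be illustrated with a standard chemostat model of two consumers sharing one resource (Monod uptake, yield 1). All parameter values are ours, chosen only to make the exclusion visible, not taken from the paper's model:

```python
import numpy as np

def chemostat(u, k, m, rin=1.0, d=0.3, n0=0.05, r0=1.0, T=500.0, dt=0.01):
    """Euler integration of two consumers n[0], n[1] competing for one
    resource r: dn_i/dt = n_i*(u_i*r/(k_i+r) - m_i),
    dr/dt = d*(rin - r) - sum_i u_i*r/(k_i+r)*n_i."""
    n = np.array([n0, n0]); r = r0
    for _ in range(int(T / dt)):
        growth = u * r / (k + r)                 # per-capita uptake/growth
        dn = n * (growth - m)
        dr = d * (rin - r) - np.sum(growth * n)
        n = np.maximum(n + dt * dn, 0.0)
        r = max(r + dt * dr, 0.0)
    return n, r

u = np.array([1.0, 1.0]); k = np.array([0.2, 0.5]); m = np.array([0.3, 0.3])
n_final, r_final = chemostat(u, k, m)
# the species with the lower break-even resource level R* = m*k/(u - m)
# excludes the other, mirroring the exclusion in the mixed invertebrate culture
```

Here species 0 has R* = 0.06/0.7 ≈ 0.086 versus ≈ 0.214 for species 1, so the resource is driven down to a level at which only species 0 can persist.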
Bravo, Felipe; Hann, D.W.; Maguire, Douglas A.
2001-01-01
Mixed conifer and hardwood stands in southwestern Oregon were studied to explore the hypothesis that competition effects on individual-tree growth and survival will differ according to the species comprising the competition measure. Likewise, it was hypothesized that competition measures should extrapolate best if crown-based surrogates are given preference over diameter-based (basal area based) surrogates. Diameter growth and probability of survival were modeled for individual Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) trees growing in pure stands. Alternative models expressing one-sided and two-sided competition as a function of either basal area or crown structure were then applied to other plots in which Douglas-fir was mixed with other conifers and (or) hardwood species. Crown-based variables outperformed basal area based variables as surrogates for one-sided competition in both diameter growth and survival probability, regardless of species composition. In contrast, two-sided competition was best represented by total basal area of competing trees. Surrogates reflecting differences in crown morphology among species relate more closely to the mechanics of competition for light and, hence, facilitate extrapolation to species combinations for which no observations are available.
Mathematical models to characterize early epidemic growth: A Review
Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile
2016-01-01
There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-15 Ebola epidemic in West Africa. PMID:27451336
Mathematical models to characterize early epidemic growth: A review
NASA Astrophysics Data System (ADS)
Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile
2016-09-01
There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-2015 Ebola epidemic in West Africa.
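The sub-exponential versus exponential growth profiles discussed in this review are often summarized by the generalized-growth model C'(t) = r·C(t)^p, where p = 1 gives exponential growth and 0 ≤ p < 1 gives polynomial (sub-exponential) growth. A small sketch of its closed-form solution (parameter values illustrative):

```python
import numpy as np

def ggm(r, p, c0, t):
    """Closed-form solution of the generalized-growth model C'(t) = r*C^p:
    C(t) = c0*exp(r*t) when p = 1, and
    C(t) = ((1-p)*r*t + c0**(1-p))**(1/(1-p)) when 0 <= p < 1."""
    t = np.asarray(t, float)
    if p == 1.0:
        return c0 * np.exp(r * t)
    return ((1 - p) * r * t + c0 ** (1 - p)) ** (1.0 / (1 - p))

t = np.linspace(0, 20, 201)
exp_growth = ggm(0.2, 1.0, 1.0, t)     # exponential: C = exp(0.2 t)
sub_exp    = ggm(0.2, 0.5, 1.0, t)     # sub-exponential: C = (0.1 t + 1)^2
```

Fitting p to early outbreak case counts is one simple way to discriminate between the growth regimes the review surveys.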
Characterization of Viscoelastic Materials Through an Active Mixer by Direct-Ink Writing
NASA Astrophysics Data System (ADS)
Drake, Eric
The goal of this thesis is two-fold: first, to determine the mixing effectiveness of an active mixer attachment to a three-dimensional (3D) printer by characterizing actively-mixed, three-dimensionally printed silicone elastomers; second, to understand the mechanical properties of a printed lattice structure with varying geometry and composition. Ober et al. define mixing effectiveness as a measurable quantity characterized by two key variables: (i) a dimensionless impeller parameter (O) that depends on mixer geometry as well as the Peclet number (Pe) and (ii) a coefficient of variation (COV) that describes mixer effectiveness based upon image intensity. The first objective utilizes tungsten tracer particles distributed throughout a batch of Dow Corning SE1700 (two-part silicone) - ink "A". Ink "B" is made from pure SE1700. Using the in-situ active mixer, inks "A" and "B" coalesce to form a hybrid ink just before extrusion. Two samples of varying mixer speeds and composition ratios are printed and analyzed by microcomputed tomography (MicroCT). A continuous stirred tank reactor (CSTR) model is applied to better understand mixing behavior. Results are then compared with computer models to verify the hypothesis. The data suggest good mixing for the sample with the higher impeller speed. A Radial Distribution Function (RDF) macro is used to provide further qualitative analysis of mixing efficiency. The second objective of this thesis utilized three-dimensionally printed samples of varying geometry and composition to ascertain mechanical properties. Samples were printed using SE1700 provided by Lawrence Livermore National Laboratory with a face-centered tetragonal (FCT) structure. Hardness testing is conducted using a Shore OO durometer guided by a computer-controlled, three-axis translation stage to provide precise movements. Data are collected across an 'x-y' plane of the specimen. 
To explain the data, a simply supported beam model is applied to a single unit cell, which yields basic structural behavior per cell. Characterizing the sample as a whole requires a more rigorous approach, and non-trivial complexities due to the varying geometries and compositions exist. The data demonstrate a uniform change in hardness as a function of position. Additionally, the data indicate periodicities in the lattice structure.
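The continuous stirred tank reactor (CSTR) model applied in the first objective has a simple closed-form step response, which makes it a convenient baseline for mixing behavior. A sketch with made-up chamber volume and feed rate (not values from the thesis):

```python
import numpy as np

def cstr_step(q, v, c_in, t):
    """Outlet concentration of an ideal CSTR after a step change of the inlet
    from 0 to c_in: C(t) = c_in*(1 - exp(-t/tau)), residence time tau = V/Q."""
    tau = v / q
    return c_in * (1.0 - np.exp(-np.asarray(t, float) / tau))

# illustrative numbers: 2 mL mixing chamber fed at 0.5 mL/s (tau = 4 s)
t = np.linspace(0, 30, 301)
c = cstr_step(q=0.5, v=2.0, c_in=1.0, t=t)
# ~95% of the inlet value is reached after about 3 residence times (12 s here)
```

Deviations of measured tracer distributions from this ideal first-order response are one way to quantify how far a real active mixer is from perfectly mixed.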
Transient modeling in simulation of hospital operations for emergency response.
Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li
2006-01-01
Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its application to an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research analyzes the effects of priority-based routing of patients within the hospital on patient waiting times under various patient mixes. The model routes patients based on the severity of their injuries and queues patients requiring critical care according to their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.
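Regressing exponential-model parameters against simulation output can be sketched as a least-squares fit of a transient of the form W(t) = a + b·exp(-c·t): for each candidate decay rate c the coefficients (a, b) are a linear problem, so c can simply be scanned on a grid. The waiting-time data below are synthetic, not from the paper's simulation experiments:

```python
import numpy as np

def fit_transient(t, w, c_grid=np.arange(0.01, 2.0, 0.005)):
    """Least-squares fit of W(t) = a + b*exp(-c*t).  For each c on the grid,
    (a, b) is solved exactly by linear least squares; the best (a, b, c)
    triple over the grid is returned."""
    best = (np.inf, None)
    for c in c_grid:
        Xc = np.column_stack([np.ones_like(t), np.exp(-c * t)])
        ab, *_ = np.linalg.lstsq(Xc, w, rcond=None)
        sse = np.sum((w - Xc @ ab) ** 2)
        if sse < best[0]:
            best = (sse, (ab[0], ab[1], c))
    return best[1]

# synthetic transient: waiting time surges to 90 min, decays toward 20 min
t = np.linspace(0, 24, 49)                       # hours after the event
w = 20.0 + 70.0 * np.exp(-0.25 * t)
a, b, c = fit_transient(t, w)
```

With fitted parameters in hand, the transient can be evaluated at any time, which is what enables the real-time capacity estimates the abstract describes.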
Effects of mixing on resolved and unresolved scales on stratospheric age of air
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Garny, Hella; Plöger, Felix; Jöckel, Patrick; Cai, Duy
2017-06-01
Mean age of air (AoA) is a widely used metric to describe transport along the Brewer-Dobson circulation. We seek to untangle the effects of different processes on the simulation of AoA, using the chemistry-climate model EMAC (ECHAM/MESSy Atmospheric Chemistry) and the Chemical Lagrangian Model of the Stratosphere (CLaMS). Here, the effects of residual transport and two-way mixing on AoA are calculated. To do so, we calculate the residual circulation transit time (RCTT); the difference between AoA and RCTT is defined as aging by mixing. However, as diffusion is also included in this difference, we further use a method to directly calculate aging by mixing on resolved scales. Comparing these two methods of calculating aging by mixing allows the effect of unresolved mixing (which we term aging by diffusion in the following) to be separated in EMAC and CLaMS. We find that diffusion makes air older, but its contribution to AoA plays a minor role (of order 10 %) in all simulations. However, due to the different advection schemes of the two models, aging by diffusion has a larger effect on AoA and mixing efficiency in EMAC than in CLaMS. Regarding trends, in CLaMS the AoA trend is negative throughout the stratosphere except in the Northern Hemisphere middle stratosphere, consistent with observations. This slight positive trend is reproduced neither in a free-running nor in a nudged simulation with EMAC; in both simulations the AoA trend is negative throughout the stratosphere. Trends in AoA are mainly driven by the contributions of RCTT and aging by mixing, whereas the contribution of aging by diffusion plays a minor role.
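The paper's definition of aging by mixing is a simple per-level difference between two transport timescales. A minimal sketch (the profile values are invented, not model output):

```python
def aging_by_mixing(aoa, rctt):
    """Aging by mixing as defined in the paper: mean age of air (AoA)
    minus the residual circulation transit time (RCTT) at each level.
    Positive values mean two-way mixing makes the air older."""
    return [a - r for a, r in zip(aoa, rctt)]

aoa = [4.8, 4.2, 3.1]    # illustrative AoA profile (years)
rctt = [3.5, 3.2, 2.6]   # illustrative RCTT profile (years)
print(aging_by_mixing(aoa, rctt))   # mixing contribution at each level
```

Subtracting the directly calculated resolved-scale aging by mixing from this difference would then isolate the unresolved part, i.e. what the authors call aging by diffusion.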
Experiment Analysis and Modelling of Compaction Behaviour of Ag60Cu30Sn10 Mixed Metal Powders
NASA Astrophysics Data System (ADS)
Zhou, Mengcheng; Huang, Shangyu; Liu, Wei; Lei, Yu; Yan, Shiwei
2018-03-01
A novel process method combining powder compaction and sintering was employed to fabricate thin sheets of cadmium-free silver-based filler metals, and the compaction densification behaviour of Ag60Cu30Sn10 mixed metal powders was investigated experimentally. Based on the equivalent density method, a density-dependent Drucker-Prager Cap (DPC) model was introduced to model the powder compaction behaviour. Various experimental procedures were completed to determine the model parameters, and the friction coefficients in lubricated and unlubricated dies were determined experimentally. The determined material parameters were validated by experiments and by numerical simulation of the powder compaction process using a user subroutine (USDFLD) in ABAQUS/Standard. The good agreement between the simulated and experimental results indicates that the determined model parameters are able to describe the compaction behaviour of the multicomponent mixed metal powders and can be used further for process-optimization simulations.
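A density-dependent DPC model needs its parameters re-evaluated as the powder densifies; a field-variable user subroutine typically interpolates calibrated values against the current relative density. The sketch below uses an invented calibration table (cohesion and friction angle only) purely to illustrate the lookup; these are not the paper's measured values.

```python
import bisect

# Hypothetical calibration: relative density -> (cohesion d [MPa],
# internal friction angle beta [deg]) from die-compaction experiments.
CALIB = [(0.60, 0.5, 55.0),
         (0.75, 2.0, 62.0),
         (0.90, 6.5, 68.0)]

def dpc_params(rel_density):
    """Linearly interpolate density-dependent DPC parameters, mirroring
    what a USDFLD-style routine does at each solution increment."""
    rhos = [row[0] for row in CALIB]
    if rel_density <= rhos[0]:      # clamp below the calibrated range
        return CALIB[0][1:]
    if rel_density >= rhos[-1]:     # clamp above the calibrated range
        return CALIB[-1][1:]
    i = bisect.bisect_left(rhos, rel_density)
    r0, d0, b0 = CALIB[i - 1]
    r1, d1, b1 = CALIB[i]
    w = (rel_density - r0) / (r1 - r0)
    return (d0 + w * (d1 - d0), b0 + w * (b1 - b0))

print(dpc_params(0.825))   # midway between the last two calibration rows
```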
Item Response Theory Models for Wording Effects in Mixed-Format Scales
ERIC Educational Resources Information Center
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu
2015-01-01
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effects in mixed-format scales and used bi-factor item response theory (IRT) models to…
Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar
2015-06-01
Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax), obtained by non-compartmental analysis, to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure; moreover, it prevents the assessment of variability. The objectives of the current investigation were therefore (a) to demonstrate the feasibility of applying nonlinear mixed-effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy of summary measures of systemic exposure for each method. Simulation scenarios were evaluated which mimic toxicology protocols in rodents. To ensure that differences in pharmacokinetic properties were accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed-effects modelling. Exposure levels were expressed as the area under the concentration-versus-time curve (AUC), peak concentration (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of the parameter estimates. Higher accuracy and precision were observed for the model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed-effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
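The non-compartmental summary measures compared in the study are straightforward to compute from a sampled concentration-time profile. A minimal sketch (linear trapezoidal AUC, observed Cmax, and TAT with linear interpolation at threshold crossings); the sampling schedule and concentrations are invented.

```python
def exposure_metrics(times, conc, threshold):
    """Non-compartmental exposure summaries: AUC by the linear
    trapezoidal rule, observed Cmax, and time above a predefined
    concentration threshold (TAT)."""
    auc, tat = 0.0, 0.0
    for i in range(len(times) - 1):
        t0, t1 = times[i], times[i + 1]
        c0, c1 = conc[i], conc[i + 1]
        auc += (t1 - t0) * (c0 + c1) / 2.0
        if c0 >= threshold and c1 >= threshold:
            tat += t1 - t0                       # whole segment above
        elif (c0 >= threshold) != (c1 >= threshold):
            # fraction of the segment above the threshold (linear)
            tat += (t1 - t0) * (max(c0, c1) - threshold) / abs(c1 - c0)
    return auc, max(conc), tat

times = [0.0, 1.0, 2.0, 4.0]     # h, invented sampling schedule
conc = [0.0, 4.0, 2.0, 0.0]      # mg/L, invented profile
print(exposure_metrics(times, conc, 1.0))   # (7.0, 4.0, 2.75)
```

With sparse sampling, such point estimates are exactly where the bias the authors describe creeps in; a mixed-effects model pools information across animals instead of relying on each individual's observed samples.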
Dynamic Roughness Ratio-Based Framework for Modeling Mixed Mode of Droplet Evaporation.
Gunjan, Madhu Ranjan; Raj, Rishi
2017-07-18
The spatiotemporal evolution of an evaporating sessile droplet and its effect on droplet lifetime are crucial to various disciplines of science and technology. Although experimental investigations suggest three distinct modes through which a droplet evaporates, namely the constant contact radius (CCR) mode, the constant contact angle (CCA) mode, and the mixed mode, only the CCR and CCA modes have been modeled reasonably well. Here we use experiments with water droplets on flat and micropillared silicon substrates to characterize the mixed mode. Our visualizations show that a perfect CCA mode after the initial CCR mode is an idealization on a flat silicon substrate: the receding contact line undergoes intermittent but recurring pinning (CCR mode) as it encounters fresh contaminants on the surface. The resulting increase in roughness lowers the contact angle of the droplet during these intermittent CCR modes until the next depinning event, which is followed by the CCA mode of evaporation. The airborne contaminants in our experiments are mostly loosely adhered to the surface and travel along with the receding contact line. The resulting gradual increase in the apparent roughness, and hence in the extent of the CCR mode over the CCA mode, forces an appreciable decrease in the contact angle observed during the mixed mode of evaporation. Unlike loosely adhered airborne contaminants on flat samples, micropillars act as fixed roughness features; the apparent roughness fluctuates about its mean value as the contact line recedes between pillars. Evaporation on these surfaces exhibits stick-jump motion, with a short-duration mixed mode toward the end when the droplet size becomes comparable to the pillar spacing. We incorporate this dynamic roughness into a classical evaporation model to accurately predict the droplet evolution throughout the three modes, for both flat and micropillared silicon surfaces. We believe that this framework can also be extended to model the evaporation of nanofluids and the coffee-ring effect, among others.
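One way to see why a growing apparent roughness lowers the observed angle is the classical Wenzel relation, cos θ_app = r·cos θ. This is only an illustrative sketch of the dynamic-roughness idea, not the authors' full evaporation model, and the angle and roughness values are assumed.

```python
import math

def apparent_contact_angle(theta_deg, roughness):
    """Wenzel relation: cos(theta_app) = r * cos(theta). For an
    intrinsic angle below 90 deg (e.g. water on silicon), a larger
    apparent roughness r lowers the apparent contact angle."""
    c = max(-1.0, min(1.0, roughness * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

flat = apparent_contact_angle(60.0, 1.0)     # smooth surface: ~60 deg
pinned = apparent_contact_angle(60.0, 1.3)   # higher apparent roughness
print(flat > pinned)   # True: roughening lowers the observed angle
```

Letting the roughness argument evolve in time, rising during pinning events on flat substrates or oscillating about a mean between pillars, would mimic the dynamic-roughness behavior the paper builds into its evaporation model.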