Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented in the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite being three-dimensional.
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
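The convex-combination view of linear mixing lends itself to a short numerical sketch. The endmember spectra and mixed pixel below are hypothetical, and the sum-to-one constraint is imposed by a weighted augmentation of the least-squares system (a full unmixing would also enforce nonnegative abundances, e.g. via NNLS):

```python
import numpy as np

# Hypothetical endmember spectra (4 bands, 3 endmembers); columns are endmembers.
E = np.array([[0.10, 0.80, 0.30],
              [0.20, 0.70, 0.40],
              [0.60, 0.30, 0.50],
              [0.90, 0.10, 0.20]])

# A mixed pixel: a known convex combination of the endmembers.
a_true = np.array([0.5, 0.3, 0.2])
pixel = E @ a_true

# Solve for abundances with a sum-to-one constraint by augmenting the
# system with a heavily weighted row encoding a1 + a2 + a3 = 1.
w = 1e3
A = np.vstack([E, w * np.ones((1, 3))])
b = np.append(pixel, w * 1.0)
a_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(a_hat, 3))  # recovers the true abundances [0.5, 0.3, 0.2]
```

Because every point inside the convex closure of the endmembers is such a combination, the recovered abundances are unique whenever the endmember spectra are linearly independent.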
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...
ERIC Educational Resources Information Center
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
Linear Mixed Models: GUM and Beyond
NASA Astrophysics Data System (ADS)
Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens
2014-04-01
In Annex H.5, the Guide to the Evaluation of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general, linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as means for gaining more insight into the measurement process. We also comment on computational issues and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in calibration of accelerometers.
Generalized linear mixed models with varying coefficients for longitudinal data.
Zhang, Daowen
2004-03-01
The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong
2016-01-01
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
Extended Mixed-Effects Item Response Models with the MH-RM Algorithm
ERIC Educational Resources Information Center
Chalmers, R. Philip
2015-01-01
A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide a deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L
2014-01-01
Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models", and "multilevel generalized linear model", and the search was refined to the Science Technology research domain. Papers reporting methodological considerations without application, and those not involving clinical medicine or not written in English, were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel design, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Much of the information needed to assess the GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the characteristics of the analysis, estimation method, validation, and selection of the model.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and the completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
D.b.h./crown diameter relationships in mixed Appalachian hardwood stands
Neil I. Lamson; Neil I. Lamson
1987-01-01
Linear regression formulae for predicting crown diameter as a function of stem diameter are presented for nine species found in 50- to 80-year-old mixed hardwood stands in north-central West Virginia. Generally, crown diameter was closely related to tolerance; more tolerant species had larger crowns.
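A fit of the same linear form (crown diameter as a linear function of stem diameter) can be sketched as follows; the paired measurements are hypothetical, not Lamson's published values:

```python
import numpy as np

# Hypothetical paired measurements for one species: d.b.h. (inches)
# and crown diameter (feet); the published formulae are linear in d.b.h.
dbh   = np.array([ 6,  8, 10, 12, 14, 16, 18, 20])
crown = np.array([12, 15, 19, 22, 26, 29, 33, 36])

slope, intercept = np.polyfit(dbh, crown, 1)  # ordinary least squares, degree 1

def predict(d):
    """Predicted crown diameter (ft) for a given d.b.h. (in)."""
    return slope * d + intercept

print(round(predict(11.0), 1))  # predicted crown diameter for an 11-inch stem
```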
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have been developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore, statistical models for independent responses, such as the Generalized Linear Model (GLM) and Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effects models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based Graphical User Interfaces (GUIs). We develop, using the Shiny framework, a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to run and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM, and HGLM give very similar results.
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
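The shared-random-parameter idea, one latent subject effect linking the response model and the missingness model, can be illustrated with a toy simulation (all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy shared-random-parameter model: one latent subject effect b drives
# both the binary primary response and the number of missed visits.
n = 2000
b = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-(-0.5 + b)))              # response model (logistic in b)
y = rng.binomial(1, p)
missed = rng.poisson(np.exp(-1.0 + 0.8 * b))   # missingness model (Poisson in b)

# Because both depend on b, missingness is informative about the response:
# subjects who miss more visits also tend to have higher response rates.
print(np.corrcoef(missed, y)[0, 1])  # clearly positive correlation
```

Ignoring this dependence (e.g., fitting the response model alone on the observed data) is exactly what biases naive analyses, which motivates the linked-model approach of the paper.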
Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.
2011-01-01
The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
Characterizing entanglement with global and marginal entropic measures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; Illuminati, Fabrizio; De Siena, Silvio
2003-12-01
We qualify the entanglement of arbitrary mixed states of bipartite quantum systems by comparing global and marginal mixednesses quantified by different entropic measures. For systems of two qubits we discriminate the class of maximally entangled states with fixed marginal mixednesses, and determine an analytical upper bound relating the entanglement of formation to the marginal linear entropies. This result partially generalizes to mixed states the quantification of entanglement with marginal mixednesses holding for pure states. We identify a class of entangled states that, for fixed marginals, are globally more mixed than product states when measured by the linear entropy. Such states cannot be discriminated by the majorization criterion.
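The global-versus-marginal comparison can be made concrete with the normalized linear entropy; the sketch below evaluates it for a Bell state, the extreme case in which the global state is pure while both marginals are maximally mixed:

```python
import numpy as np

def linear_entropy(rho, d):
    # Normalized linear entropy S_L = d/(d-1) * (1 - Tr(rho^2)),
    # equal to 0 for pure states and 1 for the maximally mixed state.
    return d / (d - 1) * (1 - np.trace(rho @ rho).real)

# Bell state |phi+> = (|00> + |11>)/sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial trace over the second qubit gives the marginal state of the first.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Global state is pure (S_L ~ 0); the marginal is maximally mixed (S_L ~ 1).
print(linear_entropy(rho, 4), linear_entropy(rho_A, 2))
```

The gap between the two values (marginal mixedness exceeding global mixedness) is precisely the entanglement signature the paper exploits.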
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical analysis adopted in randomized controlled trials is an analysis of covariance (ANCOVA) using a pre-defined pair of "pre-post" data, in which pre-treatment (baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are that (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T
2012-10-01
In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models which properly model data under various distributions. In addition we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500 000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software. francois.guillaume@jouy.inra.fr Supplementary data are available at Bioinformatics online.
Competing regression models for longitudinal data.
Alencar, Airlane P; Singer, Julio M; Rocha, Francisco Marcelo M
2012-03-01
The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent-in the corresponding growth phase-both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations
Shek, Daniel T. L.; Ma, Cecilia M. S.
2011-01-01
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263
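The random-intercept decomposition at the heart of an LMM can be illustrated without any statistics package. The sketch below is a plain-numpy stand-in with invented data, using the classical balanced-ANOVA method-of-moments estimator rather than the REML fit that SPSS MIXED would produce:

```python
import numpy as np

def random_intercept_anova(y):
    # Method-of-moments variance components for the balanced one-way
    # random-intercept model y_ij = mu + b_i + e_ij, y shaped (groups, reps).
    m, n = y.shape
    group_means = y.mean(axis=1)
    grand = y.mean()
    msb = n * ((group_means - grand) ** 2).sum() / (m - 1)         # between
    msw = ((y - group_means[:, None]) ** 2).sum() / (m * (n - 1))  # within
    sigma2_e = msw
    sigma2_b = max((msb - msw) / n, 0.0)
    icc = sigma2_b / (sigma2_b + sigma2_e)  # correlation of repeated measures
    return sigma2_b, sigma2_e, icc

rng = np.random.default_rng(1)
m, n = 2000, 6                                    # 2000 subjects, 6 waves
b = rng.normal(0.0, 2.0, size=(m, 1))             # true sigma2_b = 4
y = 10.0 + b + rng.normal(0.0, 1.0, size=(m, n))  # true sigma2_e = 1
s2b, s2e, icc = random_intercept_anova(y)
```

The nonzero ICC recovered here is precisely the within-subject dependence that a GLM treating all observations as independent would ignore.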
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results under varied settings are presented, and our method is applied to the KIRBY21 test-retest dataset.
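The GICC extends scalar test-retest ICCs to high-dimensional measurements. A rough numpy stand-in for the underlying idea (not the authors' probit-mixed-model estimator): compare within-subject variability to total variability, traced over all coordinates:

```python
import numpy as np

def i2c2_like(scans):
    # scans: (subjects, replicates, voxels). Returns
    # 1 - trace(within-subject cov) / trace(total cov); values near 1 mean
    # the measurement is dominated by stable between-subject signal.
    n_sub, n_rep, _ = scans.shape
    subj_means = scans.mean(axis=1, keepdims=True)
    within_ss = ((scans - subj_means) ** 2).sum()
    tr_within = within_ss / (n_sub * (n_rep - 1))
    flat = scans.reshape(n_sub * n_rep, -1)
    tr_total = ((flat - flat.mean(axis=0)) ** 2).sum() / (flat.shape[0] - 1)
    return 1.0 - tr_within / tr_total

rng = np.random.default_rng(3)
signal = rng.normal(0.0, 2.0, size=(150, 1, 50))            # stable per subject
reproducible = signal + rng.normal(0.0, 1.0, (150, 2, 50))  # test-retest pair
noise_only = rng.normal(0.0, 1.0, (150, 2, 50))             # nothing stable
```

With a strong subject-specific signal the index approaches the signal's share of total variance; with pure noise it is near zero.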
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
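GLLA's derivative estimates come from projecting time-delay-embedded windows of the series onto a local polynomial basis. A compact sketch of that step (the embedding dimension and sampling step here are invented; the fourth-order variant discussed in the paper simply extends `order`):

```python
import math
import numpy as np

def glla_derivatives(x, dt, embed=5, order=2):
    # GLLA-style estimates: each window of `embed` samples is projected onto
    # a Taylor basis, so column k of the result estimates the k-th derivative
    # at the window center.
    n = len(x) - embed + 1
    X = np.stack([x[i:i + embed] for i in range(n)])     # embedded windows
    offsets = (np.arange(embed) - (embed - 1) / 2) * dt  # centered times
    L = np.stack([offsets ** k / math.factorial(k)
                  for k in range(order + 1)], axis=1)
    W = L @ np.linalg.inv(L.T @ L)
    return X @ W                                         # value, d1, d2, ...
```

On a sampled sine wave the first-derivative column tracks the cosine closely, which is the sanity check one would run before feeding such estimates into a second-stage ODE fit.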
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vu, Cung Khac; Nihei, Kurt Toshimi; Johnson, Paul A.
A system and method of characterizing properties of a medium from a non-linear interaction include generating, by first and second acoustic sources disposed on a surface of the medium along a first line, first and second acoustic waves. The first and second acoustic sources are controllable such that trajectories of the first and second acoustic waves intersect in a mixing zone within the medium. The method further includes receiving, by a receiver positioned in a plane containing the first and second acoustic sources, a third acoustic wave generated by a non-linear mixing process from the first and second acoustic waves in the mixing zone; and creating a first two-dimensional image of non-linear properties, or a first ratio of compressional velocity and shear velocity, or both, of the medium in a first plane generally perpendicular to the surface and containing the first line, based on the received third acoustic wave.
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and the number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase-advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
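The D-criterion at work: for a logistic dose-response component like the transition models above, the FIM is a weighted sum of outer products of the design vectors, and log det(FIM) ranks candidate designs. A toy numpy sketch (intercept, slope, and dose levels are invented; the paper's FIM is for the full Markov mixed-effect model, not this two-parameter reduction):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_fim(doses, beta0, beta1):
    # FIM for P(awake) = sigmoid(beta0 + beta1 * dose), one obs per dose:
    # sum over doses of p * (1 - p) * x x', with x = (1, dose).
    fim = np.zeros((2, 2))
    for d in doses:
        p = sigmoid(beta0 + beta1 * d)
        x = np.array([1.0, d])
        fim += p * (1 - p) * np.outer(x, x)
    return fim

def d_criterion(doses, beta0=-1.0, beta1=0.1):
    sign, logdet = np.linalg.slogdet(logistic_fim(doses, beta0, beta1))
    return logdet if sign > 0 else -np.inf

spread = d_criterion([0.0, 20.0])      # two distinct dose levels
clustered = d_criterion([10.0, 10.0])  # one repeated level: singular FIM
```

A design that repeats a single dose cannot identify both parameters (singular FIM, criterion of negative infinity), while spreading the doses yields a finite, larger criterion.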
Xiao, Qingtai; Xu, Jianxin; Wang, Hua
2016-08-16
A new index, the estimate of the error variance, was proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, an F-test was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index was then generalized and applied to a multiphase macro-mixing process by top blowing in a stirred tank. Moreover, a general quantifying model was introduced for demonstrating the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is very difficult to recognize.
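The light-uniformity F-test can be imitated with a one-way ANOVA over image regions. A simplified numpy stand-in (not the authors' model; the strip split, image size, and luminance values are invented):

```python
import numpy as np

def uniformity_f(image, blocks=4):
    # One-way ANOVA F statistic comparing mean luminance across vertical
    # strips; F near 1 is consistent with spatially uniform lighting.
    strips = [s.ravel() for s in np.array_split(image, blocks, axis=1)]
    k = len(strips)
    n = sum(s.size for s in strips)
    grand = np.concatenate(strips).mean()
    ssb = sum(s.size * (s.mean() - grand) ** 2 for s in strips)
    ssw = sum(((s - s.mean()) ** 2).sum() for s in strips)
    return (ssb / (k - 1)) / (ssw / (n - k))

rng = np.random.default_rng(2)
uniform = rng.normal(100.0, 5.0, size=(64, 64))            # even lighting
gradient = uniform + np.linspace(0.0, 50.0, 64)[None, :]   # lit from one side
```

A lateral luminance gradient inflates the between-strip variance and hence the F statistic by orders of magnitude, which is the cue for detecting a directional light source.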
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems is equivalent to general quadratic programs. This equivalence enables us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
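For reference, the SCAD penalty of Fan and Li (2001) targeted by MIPGO is piecewise: linear near zero, quadratic in a middle band, and constant beyond it. A direct numpy transcription of the standard formula (default a = 3.7 is the value commonly recommended in that literature):

```python
import numpy as np

def scad(t, lam=1.0, a=3.7):
    # SCAD penalty: lam*|t| for |t| <= lam; a concave quadratic for
    # lam < |t| <= a*lam; constant lam^2*(a+1)/2 for |t| > a*lam.
    t = np.abs(np.asarray(t, dtype=float))
    tail = lam ** 2 * (a + 1) / 2.0
    mid = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return np.where(t <= lam, lam * t, np.where(t <= a * lam, mid, tail))
```

The flat tail is what makes the penalty folded concave (and the problem nonconvex): large coefficients incur no additional shrinkage, unlike the lasso.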
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having closed-form expression of likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
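The closed-form likelihood claimed above is easy to verify for the beta-binomial building block: its pmf is a ratio of beta functions, which log-gammas express stably. A generic stdlib transcription (not the authors' code; n and the beta parameters are invented):

```python
import math

def log_betabinom_pmf(k, n, a, b):
    # log of C(n, k) * B(k + a, n - k + b) / B(a, b)
    def lbeta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    lchoose = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return lchoose + lbeta(k + a, n - k + b) - lbeta(a, b)

# Sanity checks: the pmf sums to 1 and has mean n * a / (a + b).
total = sum(math.exp(log_betabinom_pmf(k, 10, 2.0, 3.0)) for k in range(11))
mean = sum(k * math.exp(log_betabinom_pmf(k, 10, 2.0, 3.0)) for k in range(11))
```

No link function or transformation appears anywhere: the likelihood is written directly on the probability scale, which is the feature the marginal model exploits.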
Typical Werner states satisfying all linear Bell inequalities with dichotomic measurements
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing
2018-04-01
Quantum entanglement as a special resource inspires various distinct applications in quantum information processing. Unfortunately, it is NP-hard to detect general quantum entanglement using Bell testing. Our goal is to investigate quantum entanglement with white noise, which appears frequently in experiments and quantum simulations. Surprisingly, for almost all multipartite generalized Greenberger-Horne-Zeilinger states there are entangled noisy states that satisfy all linear Bell inequalities consisting of full correlations with dichotomic inputs and outputs of each local observer. This result shows generic undetectability of mixed entangled states, in contrast to Gisin's theorem for pure bipartite entangled states in terms of Bell nonlocality. We further provide an accessible method to exhibit a nontrivial set of noisy entangled states with a small number of parties satisfying all general linear Bell inequalities. These results imply a typical incompleteness of linear Bell tests in explaining entanglement.
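The threshold behavior described here is easiest to see in the simplest case: the two-qubit Werner state violates the CHSH inequality only for visibility p > 1/sqrt(2), although it remains entangled below that. A small numpy check at the singlet-optimal measurement settings:

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I4 = np.eye(4, dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)

def chsh_value(p):
    # CHSH value of the Werner state p*|psi-><psi-| + (1-p)*I/4 at the
    # settings optimal for the singlet; analytically S = 2*sqrt(2)*p.
    rho = p * np.outer(psi, psi.conj()) + (1 - p) * I4 / 4
    B0 = -(Z + X) / np.sqrt(2)
    B1 = (X - Z) / np.sqrt(2)
    E = lambda A, B: np.trace(rho @ np.kron(A, B)).real
    return E(Z, B0) + E(Z, B1) + E(X, B0) - E(X, B1)
```

At p = 0.8 the local bound of 2 is violated; at p = 0.6 it is not, even though the state is still entangled there, illustrating the gap between entanglement and Bell-testable nonlocality.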
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. We demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
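The proposed scheme alternates linear mixing with occasional Pulay (DIIS) extrapolations over a residual history. A scalar toy version on the fixed point of cos(x) (the mixing parameter, period, and history length are arbitrary choices; real implementations mix density or potential vectors, not scalars):

```python
import numpy as np

def periodic_pulay(g, x0, beta=0.5, period=3, hist=3, iters=60):
    # Fixed-point solver sketch: linear mixing x <- x + beta*r each step,
    # with a Pulay/DIIS extrapolation over the history every `period` steps.
    xs, rs = [], []
    x = x0
    for it in range(1, iters + 1):
        r = g(x) - x
        xs.append(x); rs.append(r)
        xs, rs = xs[-hist:], rs[-hist:]
        if it % period == 0 and len(rs) > 1:
            m = len(rs)
            R = np.array(rs).reshape(m, -1)
            # minimize ||sum c_i r_i|| subject to sum c_i = 1 (Lagrange system)
            A = np.zeros((m + 1, m + 1))
            A[:m, :m] = R @ R.T
            A[:m, m] = A[m, :m] = 1.0
            rhs = np.zeros(m + 1); rhs[m] = 1.0
            c = np.linalg.lstsq(A, rhs, rcond=None)[0][:m]
            x = float(c @ (np.array(xs) + beta * np.array(rs)))
        else:
            x = x + beta * r
    return x

root = periodic_pulay(np.cos, 0.0)
```

The occasional extrapolation reuses the residual history that plain linear mixing ignores, which is the source of the speedup the paper reports at scale.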
Wang, Yuanjia; Chen, Huaihou
2012-12-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
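The computational shortcut above hinges on a fact worth sketching: after spectral decomposition, the test statistic is a ratio of weighted chi-squares, so its null distribution can be simulated from standard normals alone, with no model refitting. A generic numpy sketch (eigenvalues are invented; independent numerator and denominator draws are assumed, as holds when the two quadratic forms act on orthogonal subspaces):

```python
import numpy as np

def ratio_null_pvalue(stat, eig_num, eig_den, draws=200_000, seed=0):
    # Monte Carlo p-value for a statistic distributed under the null as
    # (z' diag(eig_num) z) / (w' diag(eig_den) w), z and w standard normal.
    rng = np.random.default_rng(seed)
    zn = rng.standard_normal((draws, len(eig_num)))
    zd = rng.standard_normal((draws, len(eig_den)))
    null = (zn ** 2 @ np.asarray(eig_num)) / (zd ** 2 @ np.asarray(eig_den))
    return (null >= stat).mean()
```

Each draw costs a handful of squared normals rather than a full spline refit, which is why this replaces the bootstrap at genome-wide scale.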
Real longitudinal data analysis for real people: building a good enough mixed model.
Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E
2010-02-20
Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice to build mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models. The discussion also highlights the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
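The centering-and-scaling advice is easy to demonstrate numerically: a predictor with a large mean makes the fixed-effects design matrix ill-conditioned, and standardizing fixes it (the numbers below are invented for illustration):

```python
import numpy as np

# Raw predictor with a large mean, e.g. calendar year or age in days.
x = np.linspace(1000.0, 1010.0, 50)
raw = np.column_stack([np.ones_like(x), x])                    # [1, x]
std = np.column_stack([np.ones_like(x), (x - x.mean()) / x.std()])

cond_raw = np.linalg.cond(raw)  # intercept and x columns nearly collinear
cond_std = np.linalg.cond(std)  # centered column is orthogonal to intercept
```

Centering makes the predictor column orthogonal to the intercept, collapsing the condition number by several orders of magnitude, which is exactly the convergence and numerical-accuracy gain the authors describe.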
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computations of turbine engine hot section components. The general framework of variational formulation and solution algorithms are discussed which were derived from the mixed three-field Hu-Washizu principle. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. Algorithmic description of the mixed iterative method includes variations for the quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution, and its details are presented. The numerically integrated isoparametric elements implemented in the framework are discussed. Methods to filter certain parts of the strain and project the element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
An, Shengli; Zhang, Yanhong; Chen, Zheng
2012-12-01
To analyze binary classification repeated measurement data with generalized estimating equations (GEE) and generalized linear mixed models (GLMMs) using SPSS 19.0. GEE and GLMM models were tested on a binary classification repeated measurement data sample using SPSS 19.0. Compared with SAS, SPSS 19.0 allowed convenient analysis of categorical repeated measurement data using GEE and GLMMs.
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
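At the single-study level, the quantity the quadrivariate model pools is the difference in sensitivities (and analogously specificities) between the two tests. A bare-bones Wald sketch (an independent-proportions approximation: it ignores the within-study correlation between tests that the quadrivariate GLMM is there to capture, and all counts are invented):

```python
import numpy as np

def sens_diff_ci(tp1, fn1, tp2, fn2, z=1.96):
    # Wald CI for the difference in sensitivities of two diagnostic tests
    # evaluated against a common gold standard on diseased subjects.
    s1 = tp1 / (tp1 + fn1)
    s2 = tp2 / (tp2 + fn2)
    se = np.sqrt(s1 * (1 - s1) / (tp1 + fn1) + s2 * (1 - s2) / (tp2 + fn2))
    d = s1 - s2
    return d, (d - z * se, d + z * se)
```

In a real comparison the two tests are applied to the same subjects, so the difference's variance has a covariance term; modeling that dependence across studies is precisely what motivates the quadrivariate random-effects structure.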
Dharani, S; Rakkiyappan, R; Cao, Jinde; Alsaedi, Ahmed
2017-08-01
This paper explores the problem of synchronization of a class of generalized reaction-diffusion neural networks with mixed time-varying delays. The mixed time-varying delays under consideration comprise both discrete and distributed delays. Due to the development and merits of digital controllers, sampled-data control is a natural choice to establish synchronization in continuous-time systems. Using a newly introduced integral inequality, less conservative synchronization criteria that assure the global asymptotic synchronization of the considered generalized reaction-diffusion neural network with mixed delays are established in terms of linear matrix inequalities (LMIs). The obtained easy-to-test LMI-based synchronization criteria depend on the delay bounds in addition to the reaction-diffusion terms, which is more practicable. Upon solving these LMIs with the Matlab LMI control toolbox, a desired sampled-data controller gain can be acquired without difficulty. Finally, numerical examples are exploited to demonstrate the validity of the derived LMI-based synchronization criteria.
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for assessing the relative importance of genetic or heritable component from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to analysis of twin survival data. Due to limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical-likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by the survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin
2017-02-04
The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplondinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, findings highlight the importance for future research to utilize accurate localization systems, different species, validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benevolent Ideology and Women's Economic Decision-Making: When Sexism Is Hurting Men's Wallet.
Silvestre, Aude; Sarlet, Marie; Huart, Johanne; Dardenne, Benoit
2016-01-01
Can ideology, as a widespread "expectation creator," impact economic decisions? In two studies we investigated the influence of the Benevolent Sexism (BS) ideology (which dictates that men should provide for passive and nurtured women) on women's economic decision-making. In Study 1, using a Dictator Game in which women decided how to share amounts of money with men, results of a Generalized Linear Mixed Model analysis showed that higher endorsement of BS and contextual expectations of benevolence were associated with making more very unequal offers. Similarly, in an Ultimatum Game in which women received monetary offers from men, Study 2's Generalized Linear Mixed Model results revealed that BS led women to reject more very unequal offers. If women's endorsement of BS ideology and expectations of benevolence prove contrary to reality, they may strike back at men. These findings show that BS ideology creates expectations that shape male-female relationships in a way that could be prejudicial to men.
Lobréaux, Stéphane; Melodelima, Christelle
2015-02-01
We tested the use of Generalized Linear Mixed Models to detect associations between genetic loci and environmental variables, taking into account the population structure of sampled individuals. We used a simulation approach to generate datasets under demographically and selectively explicit models. These datasets were used to analyze and optimize the capacity of GLMMs to detect the association between markers and selective coefficients as environmental data, in terms of false and true positive rates. Different sampling strategies were tested, maximizing the number of populations sampled, sites sampled per population, or individuals sampled per site, and the effect of different selective intensities on the efficiency of the method was determined. Finally, we applied these models to an Arabidopsis thaliana SNP dataset from different accessions, looking for loci associated with spring minimal temperature. We identified 25 regions that exhibit unusual correlations with the climatic variable and contain genes with functions related to temperature stress. Copyright © 2014 Elsevier Inc. All rights reserved.
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume that risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis in which the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including simplicity of the likelihood function, no need to specify a link function, and a closed-form expression of the distribution functions for study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.
Diaz, Francisco J
2016-10-15
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.
USDA-ARS?s Scientific Manuscript database
An analytical and statistical method has been developed to measure the ultrasound-enhanced bioscouring performance of milligram quantities of endo- and exo-polygalacturonase enzymes obtained from Rhizopus oryzae fungi. UV-Vis spectrophotometric data and a general linear mixed models procedure indic...
Using generalized additive (mixed) models to analyze single case designs.
Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J
2014-04-01
This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
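The article describes computing dispersion for quasibinomial models of overdispersed single-case data. As a rough numeric illustration (hypothetical session counts and fitted probabilities, not the article's data or its R syntax), the Pearson dispersion statistic can be computed as:

```python
import numpy as np

# Hypothetical single-case data: successes y out of n trials per session,
# with fitted probabilities p_hat from some binomial model.
y = np.array([3, 5, 2, 8, 9, 7, 10, 9])
n = np.array([10, 10, 10, 10, 10, 10, 10, 10])
p_hat = np.array([0.35, 0.40, 0.30, 0.75, 0.80, 0.78, 0.85, 0.82])
n_params = 2  # parameters in the fitted model (intercept + phase effect)

# Pearson dispersion: sum of squared Pearson residuals over residual df.
pearson_resid = (y - n * p_hat) / np.sqrt(n * p_hat * (1 - p_hat))
dispersion = np.sum(pearson_resid**2) / (len(y) - n_params)
print(f"dispersion = {dispersion:.2f}")  # values well above 1 suggest overdispersion
```

A quasibinomial fit simply multiplies the binomial standard errors by the square root of this statistic, which is the adjustment the authors use before computing quasi-AIC.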
Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming
2016-01-01
Complete [30]. Proposition 4.1 satisfies the first criterion. For the second criterion, we will use the Traveling Salesman Problem (TSP), which has been... A branch and cut algorithm for the symmetric generalized traveling salesman problem, Operations Research 45 (1997) 378–394. [33] J. Silberholz, B. Golden, The generalized traveling salesman problem: A new genetic algorithm approach, Extended Horizons: Advances in Computing, Optimization, and...
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate among the methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
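The GRAMMAR-style decorrelation strategy mentioned above can be sketched as a two-stage computation: remove the polygenic (family) component implied by a kinship matrix, then test each SNP on the decorrelated data. The toy Python sketch below uses simulated genotypes, a hypothetical block kinship matrix, and variance components assumed known, so it illustrates the idea rather than the GRAMMAR software:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_snps = 200, 50

# Hypothetical kinship matrix: block structure mimicking 4-member families.
K = np.kron(np.eye(n // 4), np.full((4, 4), 0.5)) + 0.5 * np.eye(n)
snps = rng.binomial(2, 0.3, size=(n, n_snps)).astype(float)
g = rng.multivariate_normal(np.zeros(n), 0.4 * K)   # polygenic effect
y = 0.8 * snps[:, 0] + g + rng.normal(0, 1, n)      # SNP 0 is causal

# Decorrelate: whiten phenotype and genotypes by the phenotypic covariance,
# then score each SNP with simple regression on the whitened data.
V = 0.4 * K + 1.0 * np.eye(n)      # variance components assumed known here
L = np.linalg.cholesky(V)
y_w = np.linalg.solve(L, y)        # whitened phenotype
X_w = np.linalg.solve(L, snps)     # whitened genotypes

scores = np.array([np.corrcoef(X_w[:, j], y_w)[0, 1] ** 2 for j in range(n_snps)])
print("top-scoring SNP index:", int(np.argmax(scores)))
```

In practice the variance components are first estimated by fitting the null mixed model, which is the expensive step that the two-stage strategy amortizes over all SNPs.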
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
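The two-part logic described in this abstract can be illustrated without covariates or area effects. The sketch below, on simulated data and assuming a lognormal positive part, combines the probability of a nonzero value with the back-transformed lognormal mean; it is a minimal caricature, not the proposed small area estimator:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Hypothetical semicontinuous variable: zero with probability 0.6,
# otherwise a skewed lognormal value on the positive real line.
is_positive = rng.random(n) < 0.4
y = np.where(is_positive, rng.lognormal(mean=1.0, sigma=0.8, size=n), 0.0)

# Part 1: model the probability of a strictly positive value.
p_hat = np.mean(y > 0)

# Part 2: model the positive part on the logarithmic scale.
logs = np.log(y[y > 0])
mu_hat, s2_hat = logs.mean(), logs.var(ddof=1)

# Back-transformed overall mean: P(y > 0) * E[y | y > 0] under lognormality.
mean_hat = p_hat * np.exp(mu_hat + s2_hat / 2.0)
print(f"two-part mean = {mean_hat:.3f}, sample mean = {y.mean():.3f}")
```

The exp(mu + s2/2) correction is the standard lognormal back-transformation; omitting the s2/2 term would bias the estimate downward.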
NASA Astrophysics Data System (ADS)
Afshari, Saied; Hejazi, S. Hossein; Kantzas, Apostolos
2018-05-01
Miscible displacement of fluids in porous media is often characterized by the scaling of the mixing zone length with displacement time. Depending on the viscosity contrast of fluids, the scaling law varies between the square root relationship, a sign for dispersive transport regime during stable displacement, and the linear relationship, which represents the viscous fingering regime during an unstable displacement. The presence of heterogeneities in a porous medium significantly affects the scaling behavior of the mixing length as it interacts with the viscosity contrast to control the mixing of fluids in the pore space. In this study, the dynamics of the flow and transport during both unit and adverse viscosity ratio miscible displacements are investigated in heterogeneous packings of circular grains using pore-scale numerical simulations. The pore-scale heterogeneity level is characterized by the variations of the grain diameter and velocity field. The growth of mixing length is employed to identify the nature of the miscible transport regime at different viscosity ratios and heterogeneity levels. It is shown that as the viscosity ratio increases to higher adverse values, the scaling law of mixing length gradually shifts from dispersive to fingering nature up to a certain viscosity ratio and remains almost the same afterwards. In heterogeneous media, the mixing length scaling law is observed to be generally governed by the variations of the velocity field rather than the grain size. Furthermore, the normalization of mixing length temporal plots with respect to the governing parameters of viscosity ratio, heterogeneity, medium length, and medium aspect ratio is performed. The results indicate that mixing length scales exponentially with log-viscosity ratio and grain size standard deviation while the impact of aspect ratio is insignificant. 
For stable flows, mixing length scales with the square root of medium length, whereas it changes linearly with length during unstable flows. This scaling procedure allows us to describe the temporal variation of mixing length using a generalized curve for various combinations of the flow conditions and porous medium properties.
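The scaling laws discussed above can be recovered from a mixing-length time series by fitting a slope on log-log axes. A small synthetic Python example (illustrative data, not the study's pore-scale simulations):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mixing-length time series: dispersive growth L ~ t^0.5
# with small multiplicative noise.
t = np.linspace(1.0, 100.0, 60)
L_mix = 2.0 * np.sqrt(t) * np.exp(rng.normal(0, 0.02, t.size))

# The scaling exponent is the slope of log L against log t:
# ~0.5 indicates dispersive transport, ~1.0 indicates viscous fingering.
slope, _ = np.polyfit(np.log(t), np.log(L_mix), 1)
print(f"fitted scaling exponent: {slope:.2f}")
```

Intermediate exponents between 0.5 and 1 are what one expects as the viscosity ratio or heterogeneity level pushes the displacement from the dispersive toward the fingering regime.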
Impulsive synchronization of stochastic reaction-diffusion neural networks with mixed time delays.
Sheng, Yin; Zeng, Zhigang
2018-07-01
This paper discusses impulsive synchronization of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and hybrid time delays. By virtue of inequality techniques, theories of stochastic analysis, linear matrix inequalities, and the contradiction method, sufficient criteria are proposed to ensure exponential synchronization of the addressed stochastic reaction-diffusion neural networks with mixed time delays via a designed impulsive controller. Compared with some recent studies, the neural network models herein are more general, some restrictions are relaxed, and the obtained conditions enhance and generalize some published ones. Finally, two numerical simulations are performed to substantiate the validity and merits of the developed theoretical analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Murakami, Akira
2016-01-01
This article introduces two sophisticated statistical modeling techniques that allow researchers to analyze systematicity, individual variation, and nonlinearity in second language (L2) development. Generalized linear mixed-effects models can be used to quantify individual variation and examine systematic effects simultaneously, and generalized…
Wang, Xulong; Philip, Vivek M.; Ananda, Guruprasad; White, Charles C.; Malhotra, Ankit; Michalski, Paul J.; Karuturi, Krishna R. Murthy; Chintalapudi, Sumana R.; Acklin, Casey; Sasner, Michael; Bennett, David A.; De Jager, Philip L.; Howell, Gareth R.; Carter, Gregory W.
2018-01-01
Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost, whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized LMM (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary, and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo sampling and maximal likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer’s Disease Sequencing Project. This study contains 570 individuals from 111 families, each with Alzheimer’s disease diagnosed at one of four confidence levels. Using Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer’s disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The coded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with Alzheimer’s disease-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed-model approach in a Bayesian framework for association studies. PMID:29507048
40 CFR 60.667 - Chemicals affected by subpart NNN.
Code of Federal Regulations, 2010 CFR
2010-07-01
... alcohols, ethoxylated, mixed Linear alcohols, ethoxylated, and sulfated, sodium salt, mixed Linear alcohols, sulfated, sodium salt, mixed Linear alkylbenzene 123-01-3 Magnesium acetate 142-72-3 Maleic anhydride 108...
40 CFR 60.667 - Chemicals affected by subpart NNN.
Code of Federal Regulations, 2011 CFR
2011-07-01
... alcohols, ethoxylated, mixed Linear alcohols, ethoxylated, and sulfated, sodium salt, mixed Linear alcohols, sulfated, sodium salt, mixed Linear alkylbenzene 123-01-3 Magnesium acetate 142-72-3 Maleic anhydride 108...
Mutation-selection equilibrium in games with mixed strategies.
Tarnita, Corina E; Antal, Tibor; Nowak, Martin A
2009-11-07
We develop a new method for studying stochastic evolutionary game dynamics of mixed strategies. We consider the general situation: there are n pure strategies whose interactions are described by an n × n payoff matrix. Players can use mixed strategies, which are given by the vector (p(1), ..., p(n)). Each entry specifies the probability of using the corresponding pure strategy. The sum over all entries is one. Therefore, a mixed strategy is a point in the simplex S(n). We study evolutionary dynamics in a well-mixed population of finite size. Individuals reproduce proportional to payoff. We consider the case of weak selection, which means the payoff from the game is only a small contribution to overall fitness. Reproduction can be subject to mutation; a mutant adopts a randomly chosen mixed strategy. We calculate the average abundance of every mixed strategy in the stationary distribution of the mutation-selection process. We find the crucial conditions that specify whether a strategy is favored or opposed by selection. One condition holds for low mutation rate, another for high mutation rate. The result for any mutation rate is a linear combination of those two. As a specific example we study the Hawk-Dove game. We prove general statements about the relationship between games with pure and with mixed strategies.
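For the Hawk-Dove example, the expected payoff of one mixed strategy against another is just the bilinear form p·A·q. A short numeric sketch with illustrative benefit and cost values (not taken from the paper) shows the classical property that at the evolutionarily stable mixed strategy, which plays Hawk with probability b/c, all strategies earn equal payoff:

```python
import numpy as np

# Hawk-Dove payoff matrix with illustrative benefit b = 4 and cost c = 6;
# rows/columns are ordered (Hawk, Dove).
b, c = 4.0, 6.0
A = np.array([[(b - c) / 2.0, b],
              [0.0, b / 2.0]])

def payoff(p, q):
    """Expected payoff of mixed strategy p played against mixed strategy q."""
    return p @ A @ q

# The evolutionarily stable mixed strategy plays Hawk with probability b/c.
p_star = np.array([b / c, 1.0 - b / c])
# At the ESS, every strategy earns the same payoff against p_star:
for p in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), p_star]:
    print(f"payoff vs ESS: {payoff(p, p_star):.3f}")
```

This payoff equality is exactly why selection near the ESS is neutral to first order, which is the regime where the paper's weak-selection analysis applies.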
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
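The preconditioned conjugate gradient method itself is standard and easy to demonstrate on a generic sparse symmetric positive definite system. The sketch below uses SciPy with a random toy system and a simple Jacobi (diagonal) preconditioner; it is not the paper's three-step mixed model implementation:

```python
import numpy as np
from scipy.sparse import diags, random as sprandom
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)

# Hypothetical sparse SPD system standing in for mixed model equations.
n = 500
B = sprandom(n, n, density=0.01, random_state=0)
A = (B @ B.T + diags(np.full(n, 10.0))).tocsr()   # symmetric positive definite
b = rng.normal(size=n)

# Jacobi (diagonal) preconditioner: a cheap stand-in for the block
# preconditioners used in large breeding-value evaluations.
d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: d_inv * v)

x, info = cg(A, b, M=M)
print("converged:", info == 0, "residual norm:", np.linalg.norm(A @ x - b))
```

Because iteration-on-data methods never form A explicitly, in a real evaluation the `matvec` of A would itself be replaced by a pass over the records, which is where the paper's three-step reordering pays off.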
The Linear Mixing Approximation for Planetary Ices
NASA Astrophysics Data System (ADS)
Bethkenhagen, M.; Meyer, E. R.; Hamel, S.; Nettelmann, N.; French, M.; Scheibe, L.; Ticknor, C.; Collins, L. A.; Kress, J. D.; Fortney, J. J.; Redmer, R.
2017-12-01
We investigate the validity of the widely used linear mixing approximation for the equations of state (EOS) of planetary ices, which are thought to dominate the interior of the ice giant planets Uranus and Neptune. For that purpose we perform density functional theory molecular dynamics simulations using the VASP code.[1] In particular, we compute 1:1 binary mixtures of water, ammonia, and methane, as well as their 2:1:4 ternary mixture at pressure-temperature conditions typical for the interior of Uranus and Neptune.[2,3] In addition, a new ab initio EOS for methane is presented. The linear mixing approximation is verified for the conditions present inside Uranus ranging up to 10 Mbar based on the comprehensive EOS data set. We also calculate the diffusion coefficients for the ternary mixture along different Uranus interior profiles and compare them to the values of the pure compounds. We find that deviations of the linear mixing approximation from the real mixture are generally small; for the EOS they fall within about 4% uncertainty while the diffusion coefficients deviate up to 20% . The EOS of planetary ices are applied to adiabatic models of Uranus. It turns out that a deep interior of almost pure ices is consistent with the gravity field data, in which case the planet becomes rather cold (T core ˜ 4000 K). [1] G. Kresse and J. Hafner, Physical Review B 47, 558 (1993). [2] R. Redmer, T.R. Mattsson, N. Nettelmann and M. French, Icarus 211, 798 (2011). [3] N. Nettelmann, K. Wang, J. J. Fortney, S. Hamel, S. Yellamilli, M. Bethkenhagen and R. Redmer, Icarus 275, 107 (2016).
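The linear mixing approximation tested in this work can be stated compactly: at a common pressure-temperature point, the specific volume of the mixture is the mass-fraction-weighted sum of the pure-component specific volumes. An illustrative sketch for the 2:1:4 water-ammonia-methane mixture, using made-up densities rather than the ab initio EOS values:

```python
import numpy as np

# Hypothetical pure-component densities (g/cm^3) of water, ammonia, and
# methane at one common pressure-temperature point; illustrative values only.
rho_pure = np.array([2.5, 1.9, 1.6])

# 2:1:4 ternary mixture by particle number, converted to mass fractions
# using the molar masses of H2O, NH3, and CH4 (g/mol).
counts = np.array([2.0, 1.0, 4.0])
molar_mass = np.array([18.015, 17.031, 16.043])
mass_frac = counts * molar_mass / np.sum(counts * molar_mass)

# Linear mixing (additive volume) approximation: specific volumes add.
v_mix = np.sum(mass_frac / rho_pure)   # specific volume of the mixture
rho_mix = 1.0 / v_mix
print(f"linear-mixing density: {rho_mix:.3f} g/cm^3")
```

Comparing such linearly mixed densities against a direct simulation of the real mixture at the same conditions is exactly the consistency check the abstract reports, with deviations of about 4% for the EOS.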
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
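The BLUP-ridge equivalence noted in this abstract is easy to verify numerically: the BLUP of Gaussian random effects equals the ridge estimator with penalty λ = σ_e²/σ_u². A toy demonstration with simulated data and no fixed effects (for simplicity; the paper's setting also includes them):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 10

# Hypothetical design matrix Z of SNP codes and phenotype y = Z u + e,
# with random effects u ~ N(0, sigma2_u I) and errors e ~ N(0, sigma2_e I).
Z = rng.normal(size=(n, m))
u = rng.normal(0, 0.5, size=m)
y = Z @ u + rng.normal(0, 1.0, size=n)

sigma2_e, sigma2_u = 1.0, 0.25
lam = sigma2_e / sigma2_u      # ridge penalty implied by the variance ratio

# BLUP via the marginal covariance: u_hat = sigma2_u Z' V^{-1} y,
# where V = sigma2_u Z Z' + sigma2_e I.
V = sigma2_u * Z @ Z.T + sigma2_e * np.eye(n)
u_blup = sigma2_u * Z.T @ np.linalg.solve(V, y)

# Ridge regression with penalty lam: (Z'Z + lam I)^{-1} Z' y.
u_ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
print("max |BLUP - ridge| =", np.max(np.abs(u_blup - u_ridge)))
```

The two solutions agree to machine precision, which is the matrix identity behind the paper's shortcut: estimating the variance components of the GLMM fixes the ridge penalty without cross-validation.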
NASA Astrophysics Data System (ADS)
Hapugoda, J. C.; Sooriyarachchi, M. R.
2017-09-01
Survival time of patients with a disease and the incidence of that disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that rarely occurring diseases could have shorter survival times, or vice versa. Due to this fact, joint modelling of these two variables will provide more interesting and certainly improved results than modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model, to jointly model survival and count data. As Artificial Neural Networks (ANN) have become a most powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients of Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
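A GRNN prediction is essentially a Gaussian-kernel-weighted average of training targets (Nadaraya-Watson kernel regression), which makes a compact sketch possible. Illustrative Python with synthetic data and a hypothetical bandwidth, not the Dengue dataset:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical training data: one predictor, noisy nonlinear response.
x_train = rng.uniform(0, 4, 80)
y_train = np.sin(x_train) + rng.normal(0, 0.1, 80)

def grnn_predict(x_new, x_train, y_train, sigma=0.3):
    """GRNN prediction: a Gaussian-kernel-weighted average of the training
    targets (equivalent to Nadaraya-Watson kernel regression)."""
    x_new = np.atleast_1d(x_new)
    d2 = (x_new[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

x_test = np.linspace(0.2, 3.8, 50)
y_pred = grnn_predict(x_test, x_train, y_train)
rmse = np.sqrt(np.mean((y_pred - np.sin(x_test)) ** 2))
print(f"RMSE against the true curve: {rmse:.3f}")
```

The single smoothing parameter sigma is the only quantity to tune, which is one reason GRNNs are attractive for small applied comparisons like the one in this study.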
Logit-normal mixed model for Indian Monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-03-01
Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
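The logit-normal mixed model referred to here has a simple generative form: a logistic regression whose intercept varies randomly by weather station. A small simulation sketch with illustrative coefficients and a standardized stand-in for the ENSO covariate (not the NCDC data):

```python
import numpy as np

rng = np.random.default_rng(11)
n_stations, n_days = 30, 200

# Hypothetical logit-normal mixed model for P(extreme rainfall on a day):
# logit(p_ij) = beta0 + beta1 * enso_j + b_i,  with b_i ~ N(0, sigma_b^2).
beta0, beta1, sigma_b = -2.0, 0.6, 0.8
b = rng.normal(0, sigma_b, n_stations)        # random station intercepts
enso = rng.normal(0, 1, n_days)               # standardized ENSO-like index

eta = beta0 + beta1 * enso[None, :] + b[:, None]   # stations x days
p = 1.0 / (1.0 + np.exp(-eta))
extreme = rng.random((n_stations, n_days)) < p

# The marginal (population-averaged) rate exceeds logistic(beta0) because
# the random intercept enters on the logit scale.
print(f"simulated extreme-day rate: {extreme.mean():.3f}")
print(f"logistic(beta0):            {1 / (1 + np.exp(-beta0)):.3f}")
```

That gap between the marginal rate and the fixed-effect-only rate is exactly why GLMM estimation methods that integrate over the random effects are needed rather than plain logistic regression.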
Amplitudes for multiphoton quantum processes in linear optics
NASA Astrophysics Data System (ADS)
Urías, Jesús
2011-07-01
The prominent role that linear optical networks have acquired in the engineering of photon states calls for physically intuitive and automatic methods to compute the probability amplitudes for the multiphoton quantum processes occurring in linear optics. A version of Wick's theorem for the expectation value, on any vector state, of a product of linear operators is proved. We use it to extract the combinatorics of any multiphoton quantum process in linear optics. The result is presented as a concise rule for writing down directly explicit formulae for the probability amplitude of any multiphoton process in linear optics. The rule achieves a considerable simplification and provides intuitive physical insight about quantum multiphoton processes. The methodology is applied to the generation of high-photon-number entangled states by interferometrically mixing coherent light with spontaneously down-converted light.
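In the linear-optics literature, multiphoton transition amplitudes through a network are commonly expressed via matrix permanents of submatrices of the network's unitary; this is a related standard formulation, not necessarily the paper's specific Wick-theorem rule. A small sketch using Ryser's inclusion-exclusion formula, applied to the Hong-Ou-Mandel beam-splitter example:

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    total = 0.0 + 0.0j
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # Product over rows of the row-sums restricted to the column subset.
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# 50:50 beam splitter acting on two photons, one per input port: the
# amplitude for one photon per output port is the permanent of the unitary.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(permanent(U))  # ≈ 0: the coincidence amplitude vanishes (Hong-Ou-Mandel)
```

The vanishing permanent reproduces the Hong-Ou-Mandel effect, the simplest instance of the multiphoton interference combinatorics the article systematizes.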
Nguyen, N H; Whatmore, P; Miller, A; Knibb, W
2016-02-01
The main aim of this study was to estimate the heritability for four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish on 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating the simple Pearson correlation of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale, from 0.01 to 0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). The estimated genetic correlations of body and carcass traits with the other deformity measures were not significant due to their relatively high standard errors.
Our results showed that there are prospects for genetic selection to reduce deformities in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives, and selection index of practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
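The transformation between the observed (0/1) scale and the liability scale discussed in the abstract is commonly done with the Dempster-Lerner formula, h²_liability = h²_observed · p(1 − p)/z², where p is the incidence and z the standard normal density at the liability threshold. A sketch with illustrative numbers (not the study's estimates):

```python
from scipy.stats import norm

def observed_to_liability(h2_obs, prevalence):
    """Dempster-Lerner transformation of heritability from the observed
    (0/1) scale to the underlying continuous liability scale."""
    t = norm.ppf(1.0 - prevalence)   # liability threshold for this incidence
    z = norm.pdf(t)                  # standard normal density at the threshold
    return h2_obs * prevalence * (1.0 - prevalence) / z**2

# Illustrative numbers (not from the kingfish study): a deformity with
# 10% incidence and an observed-scale heritability of 0.10.
print(f"liability-scale h2: {observed_to_liability(0.10, 0.10):.3f}")
```

Because the scaling factor p(1 − p)/z² exceeds one for rare binary traits, liability-scale heritabilities are systematically larger than observed-scale ones, which is consistent with the two ranges reported above.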
Evaluating a Policing Strategy Intended to Disrupt an Illicit Street-Level Drug Market
ERIC Educational Resources Information Center
Corsaro, Nicholas; Brunson, Rod K.; McGarrell, Edmund F.
2010-01-01
The authors examined a strategic policing initiative that was implemented in a high-crime Nashville, Tennessee, neighborhood by utilizing a mixed-methodological evaluation approach in order to provide (a) a descriptive process assessment of program fidelity; (b) an interrupted time-series analysis relying upon generalized linear models; (c)…
USDA-ARS?s Scientific Manuscript database
Emerald ash borer, Agrilus planipennis Fairmaire, an insect native to central Asia, was first detected in southeast Michigan in 2002, and has since killed millions of ash trees, Fraxinus spp., throughout eastern North America. Here, we use generalized linear mixed models to predict the presence or a...
Spatial Assessment of Model Errors from Four Regression Techniques
Lianjun Zhang; Jeffrey H. Gove
2005-01-01
Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...
Effective Teaching Results in Increased Science Achievement for All Students
ERIC Educational Resources Information Center
Johnson, Carla C.; Kahle, Jane Butler; Fargo, Jamison D.
2007-01-01
This study of teacher effectiveness and student achievement in science demonstrated that effective teachers positively impact student learning. A general linear mixed model was used to assess change in student scores on the Discovery Inquiry Test as a function of time, race, teacher effectiveness, gender, and impact of teacher effectiveness in…
Assessing the Impact of Community Marriage Policies on County Divorce Rates
ERIC Educational Resources Information Center
Birch, Paul James; Weed, Stan E.; Olsen, Joseph
2004-01-01
Community marriage initiatives (CMIs) are designed to strengthen marriage and increase marital stability by addressing relevant laws, policies, and cultural factors. We examined a specific CMI designed to lower divorce rates by establishing a shared public commitment among clergy to strengthen marriage. A mixed-effects general linear model was…
Linear mixed model for heritability estimation that explicitly addresses environmental variation.
Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S
2016-07-05
The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
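The environmental random effect described above can be sketched numerically. The following is a minimal illustration, not the paper's FaST-LMM implementation: it builds a Gaussian radial basis function covariance matrix from spatial coordinates; the coordinates and length scale here are assumptions for the sketch.

```python
import numpy as np

def rbf_covariance(coords, length_scale=1.0):
    """Gaussian radial basis function covariance built from spatial
    coordinates, as a proxy for shared environmental effects."""
    # Squared Euclidean distances between all pairs of locations
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

# Hypothetical sampling locations for 50 individuals (illustrative only)
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 10.0, size=(50, 2))
K_env = rbf_covariance(coords, length_scale=2.0)
# K_env would enter the LMM as the covariance of the environmental random
# effect, alongside a genomic relatedness matrix; the variance components
# of both effects would then be estimated, e.g. by REML.
```

The resulting matrix is symmetric positive semi-definite with unit diagonal, as required of a valid random-effect covariance.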
Trends in high-risk sexual behaviors among general population groups in China: a systematic review.
Cai, Rui; Richardus, Jan Hendrik; Looman, Caspar W N; de Vlas, Sake J
2013-01-01
The objective of this review was to investigate whether Chinese population groups that do not belong to classical high risk groups show an increasing trend of engaging in high-risk sexual behaviors. We systematically searched the English and Chinese literature on sexual risk behaviors published between January 1980 and March 2012 in PubMed and the China National Knowledge Infrastructure (CNKI). We included observational studies that focused on population groups other than commercial sex workers (CSWs) and their clients, and men who have sex with men (MSM) and quantitatively reported one of the following indicators of recent high-risk sexual behavior: premarital sex, commercial sex, multiple sex partners, condom use or sexually transmitted infections (STIs). We used a generalized linear mixed model to examine the time trend in engaging in high-risk sexual behaviors. We included 174 observational studies involving 932,931 participants: 55 studies reported on floating populations, 73 on college students and 46 on other groups (i.e. out-of-school youth, rural residents, and subjects from gynecological or obstetric clinics and premarital check-up centers). From the generalized linear mixed model, no significant trends in engaging in high-risk sexual behaviors were identified in the three population groups. Sexual risk behaviors among certain general population groups have not increased substantially. These groups are therefore unlikely to incite an STI/HIV epidemic among the general Chinese population. Because the studied population groups are not necessarily representative of the general population, the outcomes found may not reflect those of the general population.
Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger
2017-09-01
The coefficient of determination R² quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R² for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R² that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
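The variance-partitioning idea behind this R² can be illustrated for the Poisson case. Below is a minimal sketch using the lognormal approximation ln(1 + 1/λ) for the observation-level (distribution-specific) variance on the latent scale; the variance components plugged in are illustrative values, not estimates from real data.

```python
import numpy as np

def r2_glmm_poisson(var_fixed, var_random, beta0):
    """Marginal and conditional R^2 for a Poisson GLMM with log link,
    using the lognormal approximation for the observation-level variance."""
    lam = np.exp(beta0)                  # expected count at the intercept
    var_dist = np.log(1.0 + 1.0 / lam)   # distribution-specific variance (latent scale)
    total = var_fixed + var_random + var_dist
    r2_marginal = var_fixed / total                     # fixed effects only
    r2_conditional = (var_fixed + var_random) / total   # fixed + random effects
    return r2_marginal, r2_conditional

# Illustrative variance components (assumptions for this sketch)
r2_marginal, r2_conditional = r2_glmm_poisson(var_fixed=0.4, var_random=0.2, beta0=1.0)
```

By construction the conditional R² is at least as large as the marginal R², since it credits the random effects as well as the fixed effects.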
NASA Astrophysics Data System (ADS)
Zhang, Chuan; Wang, Xingyuan; Luo, Chao; Li, Junqiu; Wang, Chunpeng
2018-03-01
In this paper, we focus on the robust outer synchronization problem between two nonlinear complex networks with parametric disturbances and mixed time-varying delays. Firstly, a general complex network model is proposed. Besides the nonlinear couplings, the network model in this paper can possess parametric disturbances, internal time-varying delay, discrete time-varying delay and distributed time-varying delay. Then, according to the robust control strategy, linear matrix inequality and Lyapunov stability theory, several outer synchronization protocols are strictly derived. Simple linear matrix controllers are designed to drive the response network to synchronize with the drive network. Additionally, our results can be applied to complex networks without parametric disturbances. Finally, by utilizing the delayed Lorenz chaotic system as the dynamics of all nodes, simulation examples are given to demonstrate the effectiveness of our theoretical results.
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
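The objective that the QCMDO-to-QUBO mapping ultimately targets can be made concrete with a tiny brute-force solver. This is illustrative only: an adiabatic quantum optimizer addresses the same objective physically, and the 2×2 matrix below is a made-up instance, not one derived from a PDE constraint.

```python
import numpy as np
from itertools import product

def solve_qubo_brute(Q):
    """Minimize x^T Q x over binary vectors x by exhaustive search.
    Practical only for tiny instances; an AQO targets the same objective."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in product([0, 1], repeat=n):
        x = np.array(bits)
        val = float(x @ Q @ x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Made-up 2-variable instance: minimum value -1.0 at x = (0, 1) or (1, 0)
Q = np.array([[-1.0, 2.0],
              [0.0, -1.0]])
x_opt, val_opt = solve_qubo_brute(Q)
```

Exhaustive search scales as 2^n, which is precisely why the constructive mapping keeps the QUBO size tied to the number of discrete controls rather than the continuous variables.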
Coupé, Christophe
2018-01-01
As statistical approaches are increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships.
Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298
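The overdispersion problem that motivates moving beyond the Poisson family can be shown with simulated counts (illustrative parameters, not the article's data): a negative binomial distribution with variance = mean + mean²/k has a variance-to-mean ratio of 1 + mean/k, far above the Poisson value of 1 when k is small.

```python
import numpy as np

# Simulated strongly overdispersed counts, loosely analogous to phoneme
# inventory sizes (parameters are assumptions for this sketch).
rng = np.random.default_rng(42)
mean, k = 30.0, 2.0
# numpy's negative binomial uses (n, p); p = k / (k + mean) gives the
# desired mean and variance mean + mean^2 / k
counts = rng.negative_binomial(n=k, p=k / (k + mean), size=20000)
overdispersion = counts.var() / counts.mean()  # theoretical value: 1 + 30/2 = 16
```

A Poisson model would force this ratio to 1, badly understating the spread; GAMLSS-style families with explicit dispersion (and shape) parameters accommodate it directly.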
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children's Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
A Parameter Subset Selection Algorithm for Mixed-Effects Models
Schmidt, Kathleen L.; Smith, Ralph C.
2016-01-01
Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
ERIC Educational Resources Information Center
Pratt, Charlotte; Webber, Larry S.; Baggett, Chris D.; Ward, Dianne; Pate, Russell R.; Murray, David; Lohman, Timothy; Lytle, Leslie; Elder, John P.
2008-01-01
This study describes the relationships between sedentary activity and body composition in 1,458 sixth-grade girls from 36 middle schools across the United States. Multivariate associations between sedentary activity and body composition were examined with regression analyses using general linear mixed models. Mean age, body mass index, and…
ERIC Educational Resources Information Center
Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio
2016-01-01
Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Development of Robotics Applications in a Solid Propellant Mixing Laboratory
1988-06-01
Implementation of robotic hardware and software into a laboratory environment requires a carefully structured series of phases which examines, in... strategy. The general methodology utilized in this project is discussed in Appendix A. The proposed laboratory robotics development program was structured... Accessibility - Potential modifications - Safety precautions; e) Robot Transport - Slider mechanisms - Linear tracks - Gantry configuration - Mobility; f...
Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz
2013-01-01
The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...
Highly accurate symplectic element based on two variational principles
NASA Astrophysics Data System (ADS)
Qing, Guanghui; Tian, Jia
2018-02-01
To satisfy the stability requirements of numerical results, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their building process is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the stress results of NCSE8 are nearly as accurate as those of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may become even more advantageous for fracture problems due to its better accuracy of stresses.
Generalization of mixed multiscale finite element methods with applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C S
Many science and engineering problems exhibit scale disparity and high contrast. The small scale features cannot be omitted in the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some types of model reduction techniques are required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of coarse spaces. Two enrichment approaches are proposed: one is based on generalized multiscale finite element methods (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, which is called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability, convergence analysis, and exhaustive numerical experiments are carried out to validate the proposed enrichment approaches.
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, B.; Polizzi, E.
2013-05-01
The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H({ψ})ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with the occupied eigenvectors. Using a series of numerical examples and the density functional theory-Kohn/Sham model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
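The traditional linear-mixing SCF scheme that this work compares against can be sketched on a toy problem. The 2×2 density-dependent Hamiltonian and the damping parameter below are assumptions for the sketch, not the paper's Kohn-Sham model.

```python
import numpy as np

# Toy density-dependent 2x2 Hamiltonian (an assumption for this sketch)
def H(rho):
    return np.array([[1.0 + 0.3 * rho[0], 0.2],
                     [0.2, 2.0 + 0.3 * rho[1]]])

rho = np.array([1.0, 0.0])  # initial density guess
beta = 0.5                  # linear mixing (damping) parameter
for _ in range(200):
    w, v = np.linalg.eigh(H(rho))              # diagonalize at current density
    rho_new = v[:, 0] ** 2                     # occupy the lowest eigenvector
    rho = (1.0 - beta) * rho + beta * rho_new  # damped fixed-point update
```

At convergence the density reproduces itself through the eigenproblem, i.e., rho equals the squared lowest eigenvector of H(rho); the non-linear FEAST approach of the abstract targets this same fixed point without the outer mixing loop.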
Suppression of chaos at slow variables by rapidly mixing fast dynamics
NASA Astrophysics Data System (ADS)
Abramov, R.
2012-04-01
One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that the linear slow-fast coupling with the total energy conservation property promotes the suppression of chaos at the slow variables through the rapid mixing at the fast variables, both theoretically and through numerical simulations. A suitable mathematical framework is developed, connecting the slow dynamics on the tangent subspaces to the infinite-time linear response of the mean state to a constant external forcing at the fast variables. Additionally, it is shown that the uncoupled dynamics for the slow variables may remain chaotic while the complete multiscale system loses chaos and becomes completely predictable at the slow variables through increasing chaos and turbulence at the fast variables. This result contradicts the common sense intuition, where, naturally, one would think that coupling a slow weakly chaotic system with another much faster and much stronger mixing system would result in general increase of chaos at the slow variables.
Finite-time mixed outer synchronization of complex networks with coupling time-varying delay.
He, Ping; Ma, Shu-Hua; Fan, Tao
2012-12-01
This article is concerned with the problem of finite-time mixed outer synchronization (FMOS) of complex networks with coupling time-varying delay. FMOS is a recently developed generalized synchronization concept, i.e., one in which different state variables of the corresponding nodes can evolve into finite-time complete synchronization, finite-time anti-synchronization, and even amplitude finite-time death simultaneously for an appropriate choice of the controller gain matrix. Some novel stability criteria for the synchronization between drive and response complex networks with coupling time-varying delay are derived using the Lyapunov stability theory and linear matrix inequalities. A simple linear state feedback synchronization controller is designed as a result. Numerical simulations for two coupled networks of modified Chua's circuits are then provided to demonstrate the effectiveness and feasibility of the proposed control and synchronization schemes, and the results are compared with previous schemes for accuracy.
Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling
Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.
2010-01-01
Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses of data from partnered parents having a child with a chronic condition on their child's adaptation to the condition and on the family's general functioning and management of the condition are provided. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.
2014-01-01
Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate bias-variance trade-offs beyond those offered by pre-existing methodologies for incorporating data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786
Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...
2015-11-12
Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piecewise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
NASA Technical Reports Server (NTRS)
Merenyi, E.; Miller, J. S.; Singer, R. B.
1992-01-01
The linear mixing model approach was successfully applied to data sets of various natures. In these sets, the measured radiance could be assumed to be a linear combination of radiance contributions. The present work is an attempt to analyze a spectral image of Mars with linear mixing modeling.
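The linear mixing model described above can be sketched in a few lines. This is an illustrative two-endmember case with made-up spectra (the function name and all numbers are ours, not from the study); with a sum-to-one abundance constraint, the least-squares abundance has a closed form.

```python
# Two-endmember linear unmixing with a sum-to-one abundance constraint.
# The measured radiance m is modeled as m = a*e1 + (1 - a)*e2 + noise;
# minimizing the squared residual over the scalar abundance a gives the
# closed form a = ((m - e2) . (e1 - e2)) / ||e1 - e2||^2.
def unmix_two_endmembers(m, e1, e2):
    d = [x - y for x, y in zip(e1, e2)]   # e1 - e2
    r = [x - y for x, y in zip(m, e2)]    # m - e2
    a = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    return max(0.0, min(1.0, a))          # clip to the physical range [0, 1]

# Synthetic check: a noiseless 60/40 mixture is recovered exactly.
e1 = [0.10, 0.30, 0.50, 0.70]   # hypothetical "bright" endmember spectrum
e2 = [0.05, 0.10, 0.15, 0.20]   # hypothetical "dark" endmember spectrum
m = [0.6 * x + 0.4 * y for x, y in zip(e1, e2)]
print(unmix_two_endmembers(m, e1, e2))
```

With more endmembers the same idea becomes a constrained least-squares problem per pixel.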
Predicted Hematologic and Plasma Volume Responses Following Rapid Ascent to Progressive Altitudes
2014-06-01
...of these changes, and define baseline demographics and physiologic descriptors that are important in predicting these changes. The overall impact of... Using general linear mixed models... accomplished using a comprehensive relational database containing individual ascent profiles, demographics, and physiologic subject descriptors as well as...
Building out a Measurement Model to Incorporate Complexities of Testing in the Language Domain
ERIC Educational Resources Information Center
Wilson, Mark; Moore, Stephen
2011-01-01
This paper provides a summary of a novel and integrated way to think about the item response models (most often used in measurement applications in social science areas such as psychology, education, and especially testing of various kinds) from the viewpoint of the statistical theory of generalized linear and nonlinear mixed models. In addition,…
ERIC Educational Resources Information Center
Lazar, Ann A.; Zerbe, Gary O.
2011-01-01
Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…
Evolutionary dynamics of general group interactions in structured populations
NASA Astrophysics Data System (ADS)
Li, Aming; Broom, Mark; Du, Jinming; Wang, Long
2016-02-01
The evolution of populations is influenced by many factors, and the simple classical models have been developed in a number of important ways. Both population structure and multiplayer interactions have been shown to significantly affect the evolution of important properties, such as the level of cooperation or of aggressive behavior. Here we combine these two key factors and develop the evolutionary dynamics of general group interactions in structured populations represented by regular graphs. The traditional linear and threshold public goods games are adopted as models to address the dynamics. We show that for linear group interactions, population structure can favor the evolution of cooperation compared to the well-mixed case, and we see that the more neighbors there are, the harder it is for cooperators to persist in structured populations. We further show that threshold group interactions could lead to the emergence of cooperation even in well-mixed populations. Here population structure sometimes inhibits cooperation for the threshold public goods game, where, depending on the benefit-to-cost ratio, the outcome is bistability or a monomorphic population of defectors or cooperators. Our results suggest, counterintuitively, that structured populations are not always beneficial for the evolution of cooperation for nonlinear group interactions.
The Bayesian group lasso for confounded spatial data
Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.
2017-01-01
Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients, and it can yield higher and less variable predictive accuracy on out-of-sample data when compared to the standard SGLMM.
Phylogenetic mixtures and linear invariants for equal input models.
Casanellas, Marta; Steel, Mike
2017-04-01
The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
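As a concrete sketch of the equal input model, assuming the standard continuous-time parameterization with overall rate mu and stationary distribution pi, the transition probabilities have the closed form P_ij(t) = e^(-mu t) delta_ij + (1 - e^(-mu t)) pi_j. The helper below (our naming) builds this matrix, and the prints check its basic Markov properties.

```python
import math

def equal_input_P(t, pi, mu=1.0):
    """Transition matrix of the equal input model at time t:
    P[i][j] = exp(-mu*t) * delta_ij + (1 - exp(-mu*t)) * pi[j]."""
    decay = math.exp(-mu * t)
    n = len(pi)
    return [[decay * (1.0 if i == j else 0.0) + (1.0 - decay) * pi[j]
             for j in range(n)] for i in range(n)]

# With uniform pi over four states this reduces to the Jukes-Cantor model;
# with arbitrary pi it is the Felsenstein 1981 model.
pi = [0.1, 0.2, 0.3, 0.4]
P = equal_input_P(0.5, pi)
print(all(abs(sum(row) - 1.0) < 1e-12 for row in P))           # rows sum to 1
print(all(abs(sum(pi[i] * P[i][j] for i in range(4)) - pi[j]) < 1e-12
          for j in range(4)))                                  # pi is stationary
```

The same formula extends to any (finite) number of states, which is the generalization the abstract emphasizes.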
Negative base encoding in optical linear algebra processors
NASA Technical Reports Server (NTRS)
Perlee, C.; Casasent, D.
1986-01-01
In the digital multiplication by analog convolution algorithm, the bits of two encoded numbers are convolved to form the product of the two numbers in mixed binary representation; this output can be easily converted to binary. Attention is presently given to negative base encoding, treating base -2 initially, and then showing that the negative base system can be readily extended to any radix. In general, negative base encoding in optical linear algebra processors represents a more efficient technique than either sign magnitude or 2's complement encoding, when the additions of digitally encoded products are performed in parallel.
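Negative base encoding itself is easy to demonstrate. The sketch below (function names ours) converts integers to and from base -2; note that negative numbers need no sign digit, which is part of the encoding's appeal.

```python
def to_negabase(n, base=-2):
    """Encode an integer in a negative base (default -2) as a digit string.
    Repeated division, forcing each remainder into 0..|base|-1."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, base)
        if r < 0:          # Python's divmod can give a negative remainder here
            r -= base      # shift the remainder up by |base| ...
            n += 1         # ... and compensate in the quotient
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabase(s, base=-2):
    """Decode a digit string back to an integer."""
    return sum(int(d) * base ** i for i, d in enumerate(reversed(s)))

print(to_negabase(6))            # "11010": 16 - 8 - 2 = 6
print(to_negabase(-3))           # "1101": -8 + 4 + 1 = -3, no sign digit needed
print(from_negabase("11010"))    # 6
```

The same loop works for any negative radix, matching the abstract's claim that base -2 extends to arbitrary negative bases.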
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Mixed, charge and heat noises in thermoelectric nanosystems
NASA Astrophysics Data System (ADS)
Crépieux, Adeline; Michelini, Fabienne
2015-01-01
Mixed, charge and heat current fluctuations as well as thermoelectric differential conductances are considered for non-interacting nanosystems connected to reservoirs. Using the Landauer-Büttiker formalism, we derive general expressions for these quantities and consider their possible relationships in the entire ranges of temperature, voltage and coupling to the environment or reservoirs. We introduce a dimensionless quantity given by the ratio between the product of mixed noises and the product of charge and heat noises, distinguishing between the auto-ratio defined in the same reservoir and the cross-ratio between distinct reservoirs. From the linear response regime to the high-voltage regime, we further specify the analytical expressions of differential conductances, noises and ratios of noises, and examine their behavior in two concrete nanosystems: a quantum point contact in an ohmic environment and a single energy level quantum dot connected to reservoirs. In the linear response regime, we find that these ratios are equal to each other and are simply related to the figure of merit. They can be expressed in terms of differential conductances with the help of the fluctuation-dissipation theorem. In the non-linear regime, these ratios radically distinguish between themselves as the auto-ratio remains bounded by one, while the cross-ratio exhibits rich and complex behaviors. In the quantum dot nanosystem, we moreover demonstrate that the thermoelectric efficiency can be expressed as a ratio of noises in the non-linear Schottky regime. In the intermediate voltage regime, the cross-ratio changes sign and diverges, which evidences a change of sign in the heat cross-noise.
NASA Astrophysics Data System (ADS)
Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan
2018-04-01
Owing to the limitation of the spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. First, the fraction value of each class is obtained by spectral unmixing. Second, the linear subpixel features are pre-determined based on the hyperspectral characteristics; the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined by using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.
NASA Astrophysics Data System (ADS)
Schröder, Jörg; Viebahn, Nils; Wriggers, Peter; Auricchio, Ferdinando; Steeger, Karl
2017-09-01
In this work we investigate different mixed finite element formulations for the detection of critical loads for the possible occurrence of bifurcation and limit points. In detail, three- and two-field formulations for incompressible and quasi-incompressible materials are analyzed. In order to apply various penalty functions for the volume dilatation in displacement/pressure mixed elements, we propose a new consistent scheme capturing the nonlinearities of the penalty constraints. It is shown that for all mixed formulations which can be reduced to a generalized displacement scheme, a straightforward stability analysis is possible. However, problems based on the classical saddle-point structure require a different analysis based on the change of the signature of the underlying matrix system. The basis of these investigations is the work of Auricchio et al. (Comput Methods Appl Mech Eng 194:1075-1092, 2005; Comput Mech 52:1153-1167, 2013).
Atomic Schroedinger cat-like states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enriquez-Flores, Marco; Rosas-Ortiz, Oscar; Departamento de Fisica, Cinvestav, A.P. 14-740, Mexico D.F. 07000
2010-10-11
After a short overview of the basic mathematical structure of quantum mechanics we analyze the Schroedinger's antinomic example of a living and dead cat mixed in equal parts. Superpositions of Glauber kets are shown to approximate such macroscopic states. Then, two-level atomic states are used to construct mesoscopic kittens as appropriate linear combinations of angular momentum eigenkets for j = 1/2. Some general comments close the present contribution.
Solution of the Generalized Noah's Ark Problem.
Billionnet, Alain
2013-01-01
The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point to move from a nonlinear program in binary variables to a mixed-integer linear program, is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate potentialities of the approach. 
Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all cases, the average optimality guarantee varied from 0% to 1.20%.
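The crucial linearization step, approximating ln by the lower envelope of tangents, can be illustrated directly: since the logarithm is concave, every tangent lies above the curve, so the pointwise minimum of finitely many tangents is a piecewise-linear overestimator, which is why the resulting mixed-integer linear program yields an upper bound on the optimum. A minimal sketch with tangent points of our own choosing:

```python
import math

def log_upper_envelope(x, tangent_points):
    """Pointwise minimum of tangents to ln(.) taken at the given points.
    The tangent at t is ln(t) + (x - t)/t; since ln is concave, every
    tangent lies above the curve, so this minimum over-approximates ln(x)."""
    return min(math.log(t) + (x - t) / t for t in tangent_points)

# Survival probabilities live in (0, 1], so place tangent points there.
pts = [0.05, 0.1, 0.2, 0.4, 0.7, 1.0]
for x in (0.07, 0.3, 0.9):
    gap = log_upper_envelope(x, pts) - math.log(x)
    print(x, round(gap, 4))   # gap >= 0: the relaxation gives an upper bound
```

Adding tangent points can only tighten the envelope, which is how the gap between the near-optimal solution and the upper bound is controlled.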
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, compare the data analytic results from three regression…
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log(sub 10) transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
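A minimal sketch of the transformation described above; the floor value of 1e-10 and the linear rescaling are our illustrative assumptions, not necessarily the authors' exact choices:

```python
import math

PC_FLOOR = 1e-10   # assumed practical lower bound: the "effective zero"

def scale_pc(pc):
    """Map a collision probability to [0, 1]: take log10, then rescale so
    that Pc = 1 maps to 1.0 and Pc at or below the floor maps to 0.0
    (the zero-inflated mass of the Beta model)."""
    pc = max(pc, PC_FLOOR)
    lo = math.log10(PC_FLOOR)
    return (math.log10(pc) - lo) / (0.0 - lo)

print(scale_pc(0.0))    # floored to the effective zero -> 0.0
print(scale_pc(1e-5))   # midpoint of the log10 range
print(scale_pc(1.0))    # upper end of the scale -> 1.0
```

Values that land exactly at 0.0 would then be modeled by the zero-inflation component, and the interior values by the Beta part of the mixture.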
Mixed problems for the Korteweg-de Vries equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faminskii, A V
1999-06-30
Results are established concerning the non-local solubility and well-posedness in various function spaces of the mixed problem for the Korteweg-de Vries equation u_t + u_xxx + au_x + uu_x = f(t,x) in the half-strip (0,T)×(-∞,0). Some a priori estimates of the solutions are obtained using a special solution J(t,x) of the linearized KdV equation of boundary potential type. Properties of J are studied which differ essentially as x → +∞ or x → -∞. Application of this boundary potential enables us in particular to prove the existence of generalized solutions with non-regular boundary values.
A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.
Ferrari, Alberto; Comelli, Mario
2016-12-01
In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of several methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression, and generalized linear mixed models (GLMMs). We report on a simulation study evaluating the power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers, and we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of its assumptions when used to model proportion data. We conclude by providing directions for behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
An R2 statistic for fixed effects in the linear mixed model.
Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver
2008-12-20
Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
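In the simplest case, a 1-1 mapping between an F statistic (numerator df nu1, denominator df nu2) and an R² takes the familiar form R² = nu1 F / (nu1 F + nu2). A sketch under that assumption, also illustrating the abstract's ethnicity/blood-pressure point that a tiny p-value can coexist with a negligible R²:

```python
def r2_from_f(f_stat, df_num, df_den):
    """R^2 as a 1-1 monotone function of the F statistic for the
    fixed effects: R^2 = (df_num * F) / (df_num * F + df_den)."""
    return (df_num * f_stat) / (df_num * f_stat + df_den)

# The same F statistic implies a tiny R^2 in a huge study but a large one
# in a small study: statistical significance is not scientific importance.
print(round(r2_from_f(15.0, 1, 5000), 4))   # large denominator df: R^2 ~ 0.003
print(round(r2_from_f(15.0, 1, 20), 4))     # small denominator df: R^2 ~ 0.43
```

In the mixed-model setting the appropriate F statistic and denominator degrees of freedom come from the fitted model itself; only the final mapping is shown here.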
General functioning predicts reward and punishment learning in schizophrenia.
Somlai, Zsuzsanna; Moustafa, Ahmed A; Kéri, Szabolcs; Myers, Catherine E; Gluck, Mark A
2011-04-01
Previous studies investigating feedback-driven reinforcement learning in patients with schizophrenia have provided mixed results. In this study, we explored the clinical predictors of reward and punishment learning using a probabilistic classification learning task. Patients with schizophrenia (n=40) performed similarly to healthy controls (n=30) on the classification learning task. However, more severe negative and general symptoms were associated with lower reward-learning performance, whereas poorer general psychosocial functioning was correlated with both lower reward- and punishment-learning performances. Multiple linear regression analyses indicated that general psychosocial functioning was the only significant predictor of reinforcement learning performance when education, antipsychotic dose, and positive, negative and general symptoms were included in the analysis. These results suggest a close relationship between reinforcement learning and general psychosocial functioning in schizophrenia. Published by Elsevier B.V.
Firth, Joseph; Stubbs, Brendon; Vancampfort, Davy; Firth, Josh A; Large, Matthew; Rosenbaum, Simon; Hallgren, Mats; Ward, Philip B; Sarris, Jerome; Yung, Alison R
2018-06-06
Handgrip strength may provide an easily-administered marker of cognitive functional status. However, further population-scale research examining relationships between grip strength and cognitive performance across multiple domains is needed. Additionally, the relationship between grip strength and cognitive functioning in people with schizophrenia, who frequently experience cognitive deficits, has yet to be explored. Baseline data from the UK Biobank (2007-2010) were analyzed, including 475397 individuals from the general population and 1162 individuals with schizophrenia. Linear mixed models and generalized linear mixed models were used to assess the relationship between grip strength and 5 cognitive domains (visual memory, reaction time, reasoning, prospective memory, and number memory), controlling for age, gender, bodyweight, education, and geographical region. In the general population, maximal grip strength was positively and significantly related to visual memory (coefficient [coeff] = -0.1601, standard error [SE] = 0.003), reaction time (coeff = -0.0346, SE = 0.0004), reasoning (coeff = 0.2304, SE = 0.0079), number memory (coeff = 0.1616, SE = 0.0092), and prospective memory (coeff = 0.3486, SE = 0.0092: all P < .001). In the schizophrenia sample, grip strength was strongly related to visual memory (coeff = -0.155, SE = 0.042, P < .001) and reaction time (coeff = -0.049, SE = 0.009, P < .001), while prospective memory approached statistical significance (coeff = 0.233, SE = 0.132, P = .078), and no statistically significant association was found with number memory and reasoning (P > .1). Grip strength is significantly associated with cognitive functioning in the general population and individuals with schizophrenia, particularly for working memory and processing speed.
Future research should establish directionality, examine if grip strength also predicts functional and physical health outcomes in schizophrenia, and determine whether interventions which improve muscular strength impact on cognitive and real-world functioning.
A comparison of bilingual education and generalist teachers' approaches to scientific biliteracy
NASA Astrophysics Data System (ADS)
Garza, Esther
The purpose of this study was to determine if educators were capitalizing on bilingual learners' use of their biliterate abilities to acquire scientific meaning and discourse that would formulate a scientific biliterate identity. Mixed methods were used to explore teachers' use of biliteracy and Funds of Knowledge (Moll, L., Amanti, C., Neff, D., & Gonzalez, N., 1992; Gonzales, Moll, & Amanti, 2005) from the students' Latino heritage while conducting science inquiry. The research study explored four constructs that conceptualize scientific biliteracy: science literacy, science biliteracy, reading comprehension strategies, and students' cultural backgrounds. A total of 156 4th-5th grade bilingual and general education teachers in South Texas were surveyed using the Teacher Scientific Biliteracy Inventory (TSBI), and five teachers' science lessons were observed. Qualitative findings revealed that a variety of scientific biliteracy instructional strategies were frequently used in both bilingual and general education classrooms. The language used to deliver this instruction varied. A general linear model revealed that classroom assignment, bilingual or general education, had a significant effect on a teacher's instructional approach to employing scientific biliteracy. A simple linear regression found that the TSBI accounted for 17% of the variance on 4th grade reading benchmarks. Mixed methods results indicated that teachers were utilizing scientific biliteracy strategies in English, Spanish, or both languages. Household items and science experimentation at home were encouraged by teachers to incorporate the students' cultural backgrounds. Finally, science inquiry was conducted through a universal approach to science learning versus a multicultural approach to science learning.
Shear-flexible finite-element models of laminated composite plates and shells
NASA Technical Reports Server (NTRS)
Noor, A. K.; Mathers, M. D.
1975-01-01
Several finite-element models are applied to the linear static, stability, and vibration analysis of laminated composite plates and shells. The study is based on linear shallow-shell theory, with the effects of shear deformation, anisotropic material behavior, and bending-extensional coupling included. Both stiffness (displacement) and mixed finite-element models are considered. Discussion is focused on the effects of shear deformation and anisotropic material behavior on the accuracy and convergence of different finite-element models. Numerical studies are presented which show the effects of increasing the order of the approximating polynomials, adding internal degrees of freedom, and using derivatives of generalized displacements as nodal parameters.
NASA Technical Reports Server (NTRS)
Grody, N. C.
1973-01-01
Linear and nonlinear responses of a magnetoplasma resulting from inhomogeneity in the background plasma density are studied. The plasma response to an impulse electric field was measured and the results are compared with the theory of an inhomogeneous cold plasma. Impulse responses were recorded for the different plasma densities, static magnetic fields, and neutral pressures and generally appeared as modulated, damped oscillations. The frequency spectra of the waveforms consisted of two separated resonance peaks. For weak excitation, the results correlate with the linear theory of a cold, inhomogeneous, cylindrical magnetoplasma. The damping mechanism is identified with that of phase mixing due to inhomogeneity in plasma density. With increasing excitation voltage, the nonlinear impulse responses display stronger damping and a small increase in the frequency of oscillation.
Controller Synthesis for Periodically Forced Chaotic Systems
NASA Astrophysics Data System (ADS)
Basso, Michele; Genesio, Roberto; Giovanardi, Lorenzo
Delayed feedback controllers are an appealing tool for stabilization of periodic orbits in chaotic systems. Despite their conceptual simplicity, specific and reliable design procedures are difficult to obtain, partly because of their inherent infinite-dimensional structure. This chapter considers the use of finite-dimensional linear time-invariant controllers for stabilization of periodic solutions in a general class of sinusoidally forced nonlinear systems. For such controllers, which can be interpreted as rational approximations of the delayed ones, we provide a computationally attractive synthesis technique based on Linear Matrix Inequalities (LMIs), combining results on absolute stability of nonlinear systems and robustness of uncertain linear systems. The resulting controllers prove effective for chaos suppression in electronic circuits and systems, as shown by two application examples.
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data
Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-01-01
Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
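The inter-eye correlation issue can be illustrated with a small simulation (a hedged sketch, not the authors' SAS code): each subject contributes a shared random effect to both eyes, and a naive standard error that treats the 2n eyes as independent misestimates the uncertainty of the between-eye difference. All parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_b, sigma_e, delta = 500, 1.0, 0.5, 0.15   # subjects, between/within SD, true CNV effect

subject = rng.normal(0.0, sigma_b, n)                  # shared between-eye (subject) effect
cnv = subject + delta + rng.normal(0.0, sigma_e, n)    # eyes with CNV
fellow = subject + rng.normal(0.0, sigma_e, n)         # unaffected fellow eyes

# Naive approach: treat all 2n eyes as independent observations
naive_se = np.sqrt(cnv.var(ddof=1) / n + fellow.var(ddof=1) / n)
# Correlation-aware approach: work with the within-subject difference,
# as a mixed effects model effectively does for this paired contrast
paired_se = (cnv - fellow).std(ddof=1) / np.sqrt(n)
```

Because the shared subject effect cancels in the within-subject difference, the correlation-aware interval is narrower for this contrast, matching the direction of the result reported in the abstract.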
A novel formulation for unsteady counterflow flames using a thermal-conductivity-weighted coordinate
NASA Astrophysics Data System (ADS)
Weiss, Adam D.; Vera, Marcos; Liñán, Amable; Sánchez, Antonio L.; Williams, Forman A.
2018-01-01
A general formulation is given for the description of reacting mixing layers in stagnation-type flows subject to both time-varying strain and pressure. The salient feature of the formulation is the introduction of a thermal-conductivity-weighted transverse coordinate that leads to a compact transport operator that facilitates numerical integration and theoretical analysis. For steady counterflow mixing layers, the associated transverse mass flux is shown to be effectively linear in terms of the new coordinate, so that the conservation equations for energy and chemical species uncouple from the mass and momentum conservation equations, thereby greatly simplifying the solution. Comparisons are shown with computations of diffusion flames with infinitely fast reaction using both the classic Howarth-Dorodnitzyn density-weighted coordinate and the new thermal-conductivity-weighted coordinate, illustrating the advantages of the latter. Also, as an illustrative application of the formulation to the computation of unsteady counterflows, the flame response to harmonically varying strain is examined in the linear limit.
Analysis and generation of groundwater concentration time series
NASA Astrophysics Data System (ADS)
Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae
2018-01-01
Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude-modulated autoregressive noise of order one with a time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction-by-exchange-with-the-mean (IEM) mixing model is a special case consisting of a linear regression with constant coefficients.
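The generation scheme can be sketched as a time-dependent trend plus amplitude-modulated AR(1) fluctuations. This is a simplified illustration: the paper's algorithm also includes linear regression terms coupling the fluctuations and their increments, which are omitted here, and all functional forms below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
t = np.arange(T)
trend = 1.0 * np.exp(-t / 800.0)                 # hypothetical decaying mean concentration
phi = 0.9 - 0.3 * t / T                          # time-varying AR(1) coefficient
amp = 0.1 * (1.0 + np.sin(2 * np.pi * t / T))    # time-varying fluctuation amplitude

z = np.zeros(T)
for k in range(1, T):                            # AR(1) recursion with drifting parameter
    z[k] = phi[k] * z[k - 1] + rng.normal()
series = trend + amp * z / z.std()               # trend plus modulated, normalized fluctuations
```

The constant-coefficient limit of such a scheme corresponds to the IEM-type special case mentioned in the abstract.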
Neutron response of GafChromic® EBT2 film
NASA Astrophysics Data System (ADS)
Hsiao, Ming-Chen; Liu, Yuan-Hao; Chen, Wei-Lin; Jiang, Shiang-Huei
2013-03-01
Neutron and gamma-ray mixed field dosimetry remains one of the most challenging topics in radiation dosimetry studies. However, the requirement for accurate mixed field dosimetry is increasing because of the considerable interest in high-energy radiotherapy machines, medical ion beams, and BNCT epithermal neutron beams. Therefore, this study investigated the GafChromic® EBT2 film. The linearity, reproducibility, energy dependence, and homogeneity of the film were tested in a 60Co medical beam, a 6-MV LINAC, and a 10-MV LINAC. The linearity and self-developing effect of the film irradiated in an epithermal neutron beam were also examined. These basic detector characteristics showed that EBT2 film can be effectively applied in mixed field dosimetry. A general detector response model was developed to determine the neutron relative effectiveness (RE) values. The RE value of fast neutrons varies with the neutron spectrum. By contrast, the RE value of thermal neutrons was determined to be a constant: only 32.5% of that for gamma rays. No synergy effect was observed in this study. The lithium-6 capture reaction dominates the neutron response in the thermal neutron energy range, and the recoil hydrogen dose becomes the dominant component in the fast neutron energy region. Based on this study, the application of the EBT2 film in neutron and gamma-ray mixed fields is feasible.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
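The continuity-constrained spline idea can be sketched in a few lines using a truncated-power basis with a single fixed knot and linear segments. This is a minimal fixed-effects illustration on simulated data; the paper's method additionally handles varying polynomial order, random effects, and the reparameterization to an implicitly constrained linear mixed model.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
# Simulated longitudinal response with a slope change (kink) at t = 5
y = np.where(t < 5, 0.5 * t, 2.5 + 1.5 * (t - 5)) + rng.normal(0, 0.1, t.size)

knot = 5.0
# Truncated-power basis: the (t - knot)_+ term lets the slope change at the knot
# while continuity at the join holds by construction
X = np.column_stack([np.ones_like(t), t, np.clip(t - knot, 0, None)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
slope_left, slope_right = beta[1], beta[1] + beta[2]
```

In the mixed-model setting, columns of such a basis are simply placed in the fixed-effects and/or random-effects design matrices, which is what makes the approach easy to program in standard software.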
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions; we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
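A minimal simulation of the logit-normal mixed model (invented site/day structure and parameter values, not the Indian precipitation data) shows the key feature the random effects capture: extra-binomial variation in rainfall-occurrence counts across locations.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_days = 200, 30
beta0, sigma_u = -1.0, 0.8                        # fixed intercept, random-effect SD (assumed)

u = rng.normal(0.0, sigma_u, n_sites)             # site-level random effects
p = 1.0 / (1.0 + np.exp(-(beta0 + u)))            # logit-normal rain probabilities
wet_days = rng.binomial(n_days, p)                # wet-day counts per site

# Ratio of observed count variance to the plain-binomial variance:
# values above 1 indicate the overdispersion the random effects model
overdispersion = wet_days.var() / (n_days * p.mean() * (1 - p.mean()))
```

A plain binomial GLM would force this ratio toward 1; the logit-normal random effect is what lets the model reproduce the excess site-to-site variability.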
A note about high blood pressure in childhood
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena; Simão, Carla
2017-06-01
In medical, behavioral, and social sciences it is common to obtain binary outcomes. The present work collects information in which some of the outcomes are binary variables (1 = 'yes', 0 = 'no'). In [14], a preliminary study of caregivers' perception of pediatric hypertension was introduced. An experimental questionnaire was designed to be answered by the caregivers of routine pediatric consultation attendees at Santa Maria Hospital (HSM). The collected data were analyzed statistically: a descriptive analysis and a predictive model were performed, and significant relations between some socio-demographic variables and the assessed knowledge were obtained. The analysis in [14] used only part of the questionnaire's information. The present article completes that statistical approach, estimating models for the relevant remaining questions of the questionnaire using Generalized Linear Models (GLM). To exploit the binary nature of the outcomes, we intend to extend this approach using Generalized Linear Mixed Models (GLMM), but that work is still ongoing.
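As a sketch of the GLM step for a binary outcome, logistic regression can be fit by iteratively reweighted least squares (IRLS, i.e., Fisher scoring). The data below are simulated with invented covariates and coefficients, not the HSM questionnaire data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
age = rng.uniform(20, 60, n)                      # hypothetical caregiver age
x = np.column_stack([np.ones(n), (age - 40) / 10])
true_beta = np.array([0.2, 0.9])                  # assumed true intercept and age effect
p = 1 / (1 + np.exp(-x @ true_beta))
y = rng.binomial(1, p)                            # binary outcome: 1 = 'yes', 0 = 'no'

beta = np.zeros(2)
for _ in range(25):                               # IRLS for the logistic GLM
    eta = x @ beta
    mu = 1 / (1 + np.exp(-eta))
    w = mu * (1 - mu)                             # GLM working weights
    z = eta + (y - mu) / w                        # working response
    beta = np.linalg.solve(x.T @ (w[:, None] * x), x.T @ (w * z))
```

A GLMM extension would add, e.g., a consultation-level random intercept to the linear predictor, which requires integrating over the random effects rather than plain IRLS.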
NASA Astrophysics Data System (ADS)
Grum-Grzhimailo, A. N.; Cubaynes, D.; Heinecke, E.; Hoffmann, P.; Zimmermann, P.; Meyer, M.
2010-10-01
The generalized geometrical model for photoionization from polarized atoms is extended to include mixing of configurations in the initial atomic and/or the final photoion states. The theoretical results for angle-resolved linear and circular magnetic dichroism are in good agreement with new high-resolution photoelectron data for 3p-1 photoionization of potassium atoms polarized in the K 3p64s 2S1/2 ground state by laser optical pumping.
Summation by parts, projections, and stability
NASA Technical Reports Server (NTRS)
Olsson, Pelle
1993-01-01
We have derived stability results for high-order finite difference approximations of mixed hyperbolic-parabolic initial-boundary value problems (IBVP). The results are obtained using summation by parts and a new way of representing general linear boundary conditions as an orthogonal projection. By slightly rearranging the analytic equations, we can prove strict stability for hyperbolic-parabolic IBVP. Furthermore, we generalize our technique so as to yield strict stability on curvilinear non-smooth domains in two space dimensions. Finally, we show how to incorporate inhomogeneous boundary data while retaining strict stability. Using the same procedure one can prove strict stability in higher dimensions as well.
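The summation-by-parts idea can be made concrete with the standard second-order SBP first-derivative operator D = H^{-1} Q, where H is a diagonal norm (quadrature) matrix and Q + Q^T equals the boundary matrix B, so that the discrete operator mimics integration by parts exactly. This is a generic textbook construction, not the paper's specific operators or projections.

```python
import numpy as np

n, h = 11, 0.1
# Diagonal norm (quadrature) matrix: trapezoidal weights
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
# Almost-antisymmetric difference matrix with boundary closure
Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)                         # SBP first-derivative operator

# Discrete integration by parts: since H D = Q and Q + Q^T = B,
# (u, D v)_H + (D u, v)_H = u_N v_N - u_0 v_0 holds exactly
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
```

Energy estimates for the semi-discrete problem then follow by the same steps as in the continuous analysis, which is the mechanism behind the strict stability results described above.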
Ma, Qiuyun; Jiao, Yan; Ren, Yiping
2017-01-01
In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total, 3,275 individuals were collected during six years (2008, 2011-2015). One generalized linear model, two simple linear models, and nine linear mixed effect models that applied region and/or year effects to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed effect model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178~0.0308, and the estimated exponent b was 2.917, with 95% confidence interval 2.731~2.945. Estimates for a and b with the random effects in intercept and coefficient from region and year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on the parameters a and b, although the effects from years were much larger than those from regions. Except for Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to reference years of 1960, 1986, 2005, 2007, 2008~2009 and 2010 revealed that the body shape of Yellow Croaker has become thinner in recent years. Furthermore, relative condition factors varied among months, years, regions, and lengths. The values of a and the relative condition factors decreased as environmental pollution worsened; length-weight relationships could therefore serve as an indicator of environmental quality. Results from this study provide a basic description of the current condition of Yellow Croaker along the north coast of China.
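The underlying length-weight model W = a L^b is usually fit on the log scale, where it becomes linear. Below is a minimal pooled fit on simulated data that uses the study's reported estimates as assumed true values; the mixed-effect versions in the paper additionally let log a and b vary randomly by region and year.

```python
import numpy as np

rng = np.random.default_rng(5)
L = rng.uniform(8, 25, 300)                      # hypothetical lengths (cm)
a_true, b_true = 0.0192, 2.917                   # point estimates reported in the study
# Multiplicative (lognormal) error, so the model is linear in logs
W = a_true * L**b_true * np.exp(rng.normal(0, 0.05, L.size))

# log W = log a + b log L  ->  ordinary least squares on the log scale
b_hat, log_a_hat = np.polyfit(np.log(L), np.log(W), 1)
a_hat = np.exp(log_a_hat)
```

Adding random region/year shifts to the intercept (log a) and slope (b) of this log-linear regression yields exactly the class of linear mixed effect models compared in the study.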
Mosing, Martina; Waldmann, Andreas D.; MacFarlane, Paul; Iff, Samuel; Auer, Ulrike; Bohm, Stephan H.; Bettschart-Wolfensberger, Regula; Bardell, David
2016-01-01
This study evaluated the breathing pattern and distribution of ventilation in horses prior to and following recovery from general anaesthesia using electrical impedance tomography (EIT). Six horses were anaesthetised for 6 hours in dorsal recumbency. Arterial blood gas and EIT measurements were performed 24 hours before (baseline) and 1, 2, 3, 4, 5 and 6 hours after horses stood following anaesthesia. At each time point 4 representative spontaneous breaths were analysed. The percentage of the total breath length during which impedance remained greater than 50% of the maximum inspiratory impedance change (breath holding), the fraction of total tidal ventilation within each of four stacked regions of interest (ROI) (distribution of ventilation) and the filling time and inflation period of seven ROI evenly distributed over the dorso-ventral height of the lungs were calculated. Mixed effects multi-linear regression and linear regression were used and significance was set at p<0.05. All horses demonstrated inspiratory breath holding until 5 hours after standing. No change from baseline was seen for the distribution of ventilation during inspiration. Filling time and inflation period were more rapid and shorter in ventral ROI, and slower and longer in the most dorsal ROI, compared to baseline. Breath holding was significantly correlated with PaCO2 in both the univariate and multivariate mixed effects regressions. Following recovery from anaesthesia, horses showed inspiratory breath holding during which gas redistributed from ventral into dorsal regions of the lungs. This suggests auto-recruitment of lung tissue which would have been dependent and likely atelectatic during anaesthesia. PMID:27331910
Paddison, Charlotte; Elliott, Marc; Parker, Richard; Staetsky, Laura; Lyratzopoulos, Georgios; Campbell, John L
2012-01-01
Objectives Uncertainties exist about when and how best to adjust performance measures for case mix. Our aims are to quantify the impact of case-mix adjustment on practice-level scores in a national survey of patient experience, to identify why and when it may be useful to adjust for case mix, and to discuss unresolved policy issues regarding the use of case-mix adjustment in performance measurement in health care. Design/setting Secondary analysis of the 2009 English General Practice Patient Survey. Responses from 2 163 456 patients registered with 8267 primary care practices. Linear mixed effects models were used with practice included as a random effect and five case-mix variables (gender, age, race/ethnicity, deprivation, and self-reported health) as fixed effects. Main outcome measures Primary outcome was the impact of case-mix adjustment on practice-level means (adjusted minus unadjusted) and changes in practice percentile ranks for questions measuring patient experience in three domains of primary care: access; interpersonal care; anticipatory care planning, and overall satisfaction with primary care services. Results Depending on the survey measure selected, case-mix adjustment changed the rank of between 0.4% and 29.8% of practices by more than 10 percentile points. Adjusting for case-mix resulted in large increases in score for a small number of practices and small decreases in score for a larger number of practices. Practices with younger patients, more ethnic minority patients and patients living in more socio-economically deprived areas were more likely to gain from case-mix adjustment. Age and race/ethnicity were the most influential adjustors. 
Conclusions While its effect is modest for most practices, case-mix adjustment corrects significant underestimation of scores for a small proportion of practices serving vulnerable patients and may reduce the risk that providers would ‘cream-skim’ by not enrolling patients from vulnerable socio-demographic groups. PMID:22626735
The long-solved problem of the best-fit straight line: application to isotopic mixing lines
NASA Astrophysics Data System (ADS)
Wehr, Richard; Saleska, Scott R.
2017-01-01
It has been almost 50 years since York published an exact and general solution for the best-fit straight line to independent points with normally distributed errors in both x and y. York's solution is highly cited in the geophysical literature but almost unknown outside of it, so that there has been no ebb in the tide of books and papers wrestling with the problem. Much of the post-1969 literature on straight-line fitting has sown confusion not merely by its content but by its very existence. The optimal least-squares fit is already known; the problem is already solved. Here we introduce the non-specialist reader to York's solution and demonstrate its application in the interesting case of the isotopic mixing line, an analytical tool widely used to determine the isotopic signature of trace gas sources for the study of biogeochemical cycles. The most commonly known linear regression methods - ordinary least-squares regression (OLS), geometric mean regression (GMR), and orthogonal distance regression (ODR) - have each been recommended as the best method for fitting isotopic mixing lines. In fact, OLS, GMR, and ODR are all special cases of York's solution that are valid only under particular measurement conditions, and those conditions do not hold in general for isotopic mixing lines. Using Monte Carlo simulations, we quantify the biases in OLS, GMR, and ODR under various conditions and show that York's general - and convenient - solution is always the least biased.
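York's iterative solution is compact enough to sketch directly. The version below follows the iteration for the uncorrelated-error case (correlation r_i = 0 between the x and y errors); the full solution includes the error-correlation terms, which are omitted here.

```python
import numpy as np

def york_fit(x, y, sx, sy, n_iter=50):
    """Best-fit straight line with normally distributed errors in both x and y
    (York's iterative solution, uncorrelated errors assumed)."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2             # weights from the stated errors
    b = np.polyfit(x, y, 1)[0]                    # OLS slope as starting value
    for _ in range(n_iter):
        W = wx * wy / (wx + b**2 * wy)            # combined point weights
        xbar, ybar = np.sum(W * x) / np.sum(W), np.sum(W * y) / np.sum(W)
        U, V = x - xbar, y - ybar
        beta = W * (U / wy + b * V / wx)
        b = np.sum(W * beta * V) / np.sum(W * beta * U)
    return b, ybar - b * xbar                     # slope, intercept

x = np.arange(10.0)
y = 2.0 * x + 1.0                                 # exact mixing line for the sanity check
slope, intercept = york_fit(x, y, np.full(10, 0.1), np.full(10, 0.1))
```

With errors only in y the weights reduce to the OLS case, and with errors only in x to the x-on-y regression; OLS, GMR, and ODR all correspond to particular assumed weight structures, which is why they are special cases of this solution.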
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-03-15
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written in the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
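The quantile-quantile comparison behind the goodness-of-fit procedure can be sketched numerically: simulated right-skewed durations look non-normal on the raw scale, and the correlation between sample quantiles and theoretical normal quantiles summarizes what the QQ plot shows. This is an illustrative sketch only; the Cox mixed-model fitting itself, done in R in the paper, is not reproduced here.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(6)
dur = rng.lognormal(mean=-1.0, sigma=0.7, size=500)   # skewed, nonnegative durations

probs = (np.arange(1, dur.size + 1) - 0.5) / dur.size
q_norm = np.array([NormalDist().inv_cdf(p) for p in probs])

def qq_corr(sample):
    """Correlation between standardized sample quantiles and normal quantiles
    (closer to 1 means closer to normality)."""
    s = np.sort(sample)
    return np.corrcoef((s - s.mean()) / s.std(), q_norm)[0, 1]

raw_fit, log_fit = qq_corr(dur), qq_corr(np.log(dur))
```

Here the raw durations fail the normality check that a linear mixed model implicitly assumes, which is the situation where the semiparametric Cox approach is preferable.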
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
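The stability trade-off motivating the mixed implicit-explicit approach can be seen on a one-dimensional heat equation: an explicit Euler step beyond the critical time step (here h^2 / (2 alpha) for the standard second-difference Laplacian) blows up, while backward Euler remains stable at the same step size. This is a generic illustration, not the paper's partitioned algorithm or its quadrilateral-element estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
h = 1.0 / (n + 1)                                 # interior grid spacing, Dirichlet ends
alpha = 1.0
dt_crit = h * h / (2 * alpha)                     # explicit (Euler) stability limit
dt = 4 * dt_crit                                  # deliberately exceeds the limit

# 1D heat equation du/dt = alpha * u_xx, zero boundary values
lap = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / h**2
u0 = rng.random(n)                                # rough initial temperature profile

u_exp, u_imp = u0.copy(), u0.copy()
A = np.eye(n) - dt * alpha * lap                  # backward-Euler system matrix
for _ in range(200):
    u_exp = u_exp + dt * alpha * (lap @ u_exp)    # explicit: unstable at this dt
    u_imp = np.linalg.solve(A, u_imp)             # implicit: unconditionally stable
```

A mixed method exploits this by stepping stiff (e.g., thermal) partitions implicitly while cheaper explicit steps handle the rest, with the critical-step estimate deciding the partition.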
Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.
2017-01-01
Dispersal can impact population dynamics and geographic variation, and thus, genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information criteria metrics or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life history characteristics (greater sage-grouse; eastern foxsnake). We determined that in our simulated scenarios, AIC and BIC were the best model selection indices and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model, with confidence intervals that did not overlap with zero across the entire model set. When we controlled for geographic distance, variables not in the underlying dispersal models (i.e., nontrue variables) typically had coefficients overlapping zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.
Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei
2017-09-25
It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multiple domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18, and 36 months. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using samples from the posterior distributions of the parameter estimates and random effects, and the predictive accuracy was evaluated based on the root mean square error (RMSE), absolute bias (AB), and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained, particularly for non-motor scores (RMSE and AB: 2.89 and 2.20), compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% of observed values falling within the 95% credible intervals. Multivariate generalized linear mixed models hold promise for predicting clinical progression of individual outcomes in PD. The data were obtained from Dr. Xuemei Huang's NIH grant R01 NS060722 , part of the NINDS PD Biomarker Program (PDBP). All data were entered within 24 h of collection into the Data Management Repository (DMR), which is publicly available ( https://pdbp.ninds.nih.gov/data-management ).
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
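The intra-class correlation problem can be quantified with a simulation (invented parameter values, not the Golgi-stain data): with m neurons sampled per animal and intra-class correlation rho, the simple-linear-model variance of the grand mean is too small by roughly the design effect 1 + (m - 1) rho, which is exactly the downward bias in standard deviations the authors describe.

```python
import numpy as np

rng = np.random.default_rng(8)
n_mice, m = 8, 20                                 # m neurons nested within each animal
sigma_a, sigma_e = 1.0, 1.0                       # between-animal, within-animal SD
rho = sigma_a**2 / (sigma_a**2 + sigma_e**2)      # intra-class correlation (0.5 here)

means, naive_var = [], []
for _ in range(2000):
    # Shared animal effect plus neuron-level noise
    y = rng.normal(0, sigma_a, (n_mice, 1)) + rng.normal(0, sigma_e, (n_mice, m))
    means.append(y.mean())
    naive_var.append(y.var(ddof=1) / y.size)      # treats all neurons as independent

deff = np.var(means) / np.mean(naive_var)         # empirical design effect, ~ 1 + (m-1)*rho
```

A mixed effects model recovers the correct (larger) standard error by estimating the between-animal variance component instead of pooling all neurons as independent observations.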
Therapy preferences of patients with lung and colon cancer: a discrete choice experiment.
Schmidt, Katharina; Damm, Kathrin; Vogel, Arndt; Golpon, Heiko; Manns, Michael P; Welte, Tobias; Graf von der Schulenburg, J-Matthias
2017-01-01
There is increasing interest in studies that examine patient preferences to measure health-related outcomes. Understanding patients' preferences can improve the treatment process and is particularly relevant for oncology. In this study, we aimed to identify the subgroup-specific treatment preferences of German patients with lung cancer (LC) or colorectal cancer (CRC). Six discrete choice experiment (DCE) attributes were established on the basis of a systematic literature review and qualitative interviews. The DCE analyses comprised a generalized linear mixed-effects model and a latent class mixed logit model. The study cohort comprised 310 patients (194 with LC, 108 with CRC, 8 with both types of cancer) with a median age of 63 (SD = 10.66) years. The generalized linear mixed-effects model showed a significant (P < 0.05) degree of association for all of the tested attributes. "Strongly increased life expectancy" was the attribute given the greatest weight by all patient groups. Using latent class mixed logit model analysis, we identified three classes of patients. Patients who were better informed tended to prefer a more balanced relationship between length of life and health-related quality of life (HRQoL) than those who were less informed. Class 2 (LC patients with low HRQoL who had undergone surgery) gave a very strong weighting to increased length of life. We deduced from Class 3 patients that those with a relatively good life expectancy (CRC compared with LC) gave a greater weight to moderate effects on HRQoL than to a longer life. Overall survival was the most important attribute of therapy for patients with LC or CRC. Differences in treatment preferences between subgroups should be considered in regard to treatment and the development of guidelines. Patients' preferences were not affected by sex or age, but were affected by cancer type, HRQoL, surgery status, and the main source of information on the disease.
Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin
2017-01-01
Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated urban-rural differences in the social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups in the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to deal with this hierarchical data structure. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalence in the four residence groups - urban, second city, suburban, and town/rural - was 45.8%, 46.9%, 53.7%, and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect (p<0.0001) and that residence groups had significant interactions with gender, age group, education level, and employment status (p<0.05). Multiple logistic regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data.
Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening and the associations were affected by living areas such as urban and rural regions. PMID:28952708
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. 
Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
Integrability and Linear Stability of Nonlinear Waves
NASA Astrophysics Data System (ADS)
Degasperis, Antonio; Lombardo, Sara; Sommacal, Matteo
2018-03-01
It is well known that the linear stability of solutions of 1+1-dimensional partial differential equations which are integrable can be very efficiently investigated by means of spectral methods. We present here a direct construction of the eigenmodes of the linearized equation which makes use only of the associated Lax pair, with no reference to spectral data and boundary conditions. This local construction is given in the general N×N matrix scheme so as to be applicable to a large class of integrable equations, including the multicomponent nonlinear Schrödinger system and the multiwave resonant interaction system. The analytical and numerical computations involved in this general approach are detailed as an example for N=3 for the particular system of two coupled nonlinear Schrödinger equations in the defocusing, focusing and mixed regimes. The instabilities of the continuous wave solutions are fully discussed in the entire parameter space of their amplitudes and wave numbers. By defining and computing the spectrum in the complex plane of the spectral variable, the eigenfrequencies are explicitly expressed. According to their topological properties, the complete classification of these spectra in the parameter space is presented and graphically displayed. The continuous wave solutions are linearly unstable for a generic choice of the coupling constants.
NASA Astrophysics Data System (ADS)
Abramov, R. V.
2011-12-01
Chaotic multiscale dynamical systems are common in many areas of science, one example being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables and, therefore, impacts the uncertainty and predictability of the slow dynamics. Here we demonstrate, both theoretically and through numerical simulations, that linear slow-fast coupling with the total energy conservation property promotes the suppression of chaos at the slow variables through rapid mixing at the fast variables. A suitable mathematical framework is developed, connecting the slow dynamics on the tangent subspaces to the infinite-time linear response of the mean state to a constant external forcing at the fast variables. Additionally, it is shown that the uncoupled dynamics for the slow variables may remain chaotic while the complete multiscale system loses chaos and becomes completely predictable at the slow variables through increasing chaos and turbulence at the fast variables. This result contradicts the common-sense intuition that coupling a slow, weakly chaotic system to a much faster and more strongly chaotic system would generally increase chaos at the slow variables.
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single-cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power were tested based on Monte-Carlo simulations. We found that starting with about 15-20 participants in the control sample Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single-case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
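The modified t-test of Crawford and colleagues referred to above has a simple closed form: the case score is compared with the control mean using the control SD inflated by sqrt((n+1)/n), with n-1 degrees of freedom. A minimal sketch (the scores are hypothetical):

```python
import math
import statistics

def modified_t(case_score, controls):
    # Crawford & Howell (1998) modified t-test: compares a single case
    # with a small control sample; degrees of freedom = n - 1.
    n = len(controls)
    m = statistics.mean(controls)
    s = statistics.stdev(controls)  # sample SD (n - 1 denominator)
    t = (case_score - m) / (s * math.sqrt((n + 1) / n))
    return t, n - 1

t, df = modified_t(70.0, [100, 98, 102, 101, 99, 100, 97, 103])
print(df)        # 7
print(t < -2.0)  # True: the case falls far below the control mean
```

The equivalence noted in the abstract means the same t and df can be recovered from an ordinary regression with a dummy predictor coding case versus control.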
Terluin, Berend; de Boer, Michiel R; de Vet, Henrica C W
2016-01-01
The network approach to psychopathology conceives mental disorders as sets of symptoms causally impacting on each other. The strengths of the connections between symptoms are key elements in the description of those symptom networks. Typically, the connections are analysed as linear associations (i.e., correlations or regression coefficients). However, there is insufficient awareness of the fact that differences in variance may account for differences in connection strength. Differences in variance frequently occur when subgroups are based on skewed data. An illustrative example is a study published in PLoS One (2013;8(3):e59559) that aimed to test the hypothesis that the development of psychopathology through "staging" was characterized by increasing connection strength between mental states. Three mental states (negative affect, positive affect, and paranoia) were studied in severity subgroups of a general population sample. The connection strength was found to increase with increasing severity in six of nine models. However, the method used (linear mixed modelling) is not suitable for skewed data. We reanalysed the data using inverse Gaussian generalized linear mixed modelling, a method suited for positively skewed data (such as symptoms in the general population). The distribution of positive affect was normal, but the distributions of negative affect and paranoia were heavily skewed. The variance of the skewed variables increased with increasing severity. Reanalysis of the data did not confirm increasing connection strength, except for one of nine models. Reanalysis of the data did not provide convincing evidence in support of staging as characterized by increasing connection strength between mental states. Network researchers should be aware that differences in connection strength between symptoms may be caused by differences in variances, in which case they should not be interpreted as differences in impact of one symptom on another symptom.
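The point that variance differences can masquerade as connection-strength differences follows from the identity slope = r * sd_y / sd_x: rescaling one variable's spread changes the regression coefficient even when the correlation is untouched. A small illustration with made-up data:

```python
import statistics

def slope(x, y):
    # OLS slope = cov(x, y) / var(x) = r * sd_y / sd_x.
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / statistics.variance(x)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 1.9, 3.2, 3.8, 5.0]    # roughly y = x
y_scaled = [v * 3 for v in y]    # identical correlation, three times the SD
print(round(slope(x, y_scaled) / slope(x, y), 3))  # 3.0: the slope triples
```

A subgroup with a skewed, higher-variance symptom distribution will therefore show stronger "connections" than a low-variance subgroup even if the underlying coupling is identical.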
Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert
2012-01-01
Purpose To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R2. Hypothesis testing of coefficients over the patient population was performed. Results Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost ~ 0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost ~ FDGpre^0.93, p<0.001). Univariate mixture model fits of FDGpre improved R2 from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent. PMID:22682748
NASA Astrophysics Data System (ADS)
Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.
2017-09-01
This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL: line balancing and model sequencing. In previous studies, many researchers considered these problems separately, and only a few studied them simultaneously, and then only for one-sided lines. In this study, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is generated by considering a variable launching interval and an assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the solver CPLEX. Experimental results indicate that integrating the problems of model sequencing and line balancing helps to minimise the proposed objective function.
Towards enhancing and delaying disturbances in free shear flows
NASA Technical Reports Server (NTRS)
Criminale, W. O.; Jackson, T. L.; Lasseigne, D. G.
1994-01-01
The family of shear flows comprising the jet, wake, and the mixing layer are subjected to perturbations in an inviscid incompressible fluid. By modeling the basic mean flows as parallel with piecewise linear variations for the velocities, complete and general solutions to the linearized equations of motion can be obtained in closed form as functions of all space variables and time when posed as an initial value problem. The results show that there is a continuous as well as the discrete spectrum that is more familiar in stability theory and therefore there can be both algebraic and exponential growth of disturbances in time. These bases make it feasible to consider control of such flows. To this end, the possibility of enhancing the disturbances in the mixing layer and delaying the onset in the jet and wake is investigated. It is found that growth of perturbations can be delayed to a considerable degree for the jet and the wake but, by comparison, cannot be enhanced in the mixing layer. By using moving coordinates, a method for demonstrating the predominant early and long time behavior of disturbances in these flows is given for continuous velocity profiles. It is shown that the early time transients are always algebraic whereas the asymptotic limit is that of an exponential normal mode. Numerical treatment of the new governing equations confirm the conclusions reached by use of the piecewise linear basic models. Although not pursued here, feedback mechanisms designed for control of the flow could be devised using the results of this work.
Invasion of cooperators in lattice populations: linear and non-linear public good games.
Vásárhelyi, Zsóka; Scheuring, István
2013-08-01
A generalized version of the N-person volunteer's dilemma (NVD) Game has been suggested recently for illustrating the problem of N-person social dilemmas. Using standard replicator dynamics it can be shown that coexistence of cooperators and defectors is typical in this model. However, the question of how a rare mutant cooperator could invade a population of defectors is still open. Here we examined the dynamics of individual based stochastic models of the NVD. We analyze the dynamics in well-mixed and viscous populations. We show in both cases that coexistence between cooperators and defectors is possible; moreover, spatial aggregation of types in viscous populations can easily lead to pure cooperation. Furthermore we analyze the invasion of cooperators in populations consisting predominantly of defectors. In accordance with analytical results, in deterministic systems, we found the invasion of cooperators successful in the well-mixed case only if their initial concentration was higher than a critical threshold, defined by the replicator dynamics of the NVD. In the viscous case, however, not the initial concentration but the initial number determines the success of invasion. We show that even a single mutant cooperator can invade with a high probability, because the local density of aggregated cooperators exceeds the threshold defined by the game. Comparing the results to models using different benefit functions (linear or sigmoid), we show that the role of the benefit function is much more important in the well-mixed than in the viscous case. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.
Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin
2017-02-01
The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.
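The multinomial logit core of the approach assigns severity probabilities via a softmax over per-outcome linear (or, in the proposed extension, nonlinear) predictors. A minimal sketch of the probability computation, with hypothetical utility values rather than fitted coefficients:

```python
import math

def mnl_probs(utilities):
    # Multinomial logit: P(k) = exp(V_k) / sum_j exp(V_j).
    m = max(utilities)  # subtract the max to stabilize the exponentials
    exps = [math.exp(v - m) for v in utilities]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical predictor values for one crash observation, ordered as
# [fatal, injury, property damage only]:
probs = mnl_probs([-2.0, 0.5, 1.5])
print(round(sum(probs), 6))     # 1.0: probabilities sum to one
print(probs.index(max(probs)))  # 2: property damage only is most likely
```

The paper's generalization replaces each linear utility V_k with a nonlinear function of the contributing factors; the softmax step is unchanged.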
A mixed formulation for interlaminar stresses in dropped-ply laminates
NASA Technical Reports Server (NTRS)
Harrison, Peter N.; Johnson, Eric R.
1993-01-01
A structural model is developed for the linear elastic response of structures consisting of multiple layers of varying thickness such as laminated composites containing internal ply drop-offs. The assumption of generalized plane deformation is used to reduce the solution domain to two dimensions while still allowing some out-of-plane deformation. The Hellinger-Reissner variational principle is applied to a layerwise assumed stress distribution with the resulting governing equations solved using finite differences.
Marín, Linda; Perfecto, Ivette
2013-04-01
Spiders are a very diverse group of invertebrate predators found in agroecosystems and natural systems. However, spider distribution, abundance, and eventually their ecological function in ecosystems can be influenced by abiotic and biotic factors such as agricultural intensification and dominant ants. Here we explore the influence of both agricultural intensification and the dominant arboreal ant Azteca instabilis on the spider community in coffee agroecosystems in southern Mexico. To measure the influence of the arboreal ant Azteca instabilis (F. Smith) on the spider community inhabiting the coffee layer of coffee agroecosystems, spiders were collected from coffee plants that were and were not patrolled by the ant in sites differing in agricultural intensification. For 2008, generalized linear mixed models showed that spider diversity was affected positively by agricultural intensification but not by the ant. However, results suggested that some spider species were associated with A. instabilis. Therefore, in 2009 we concentrated our research on the effect of A. instabilis on spider diversity and composition. For 2009, generalized linear mixed models show that spider richness and abundance per plant were significantly higher in the presence of A. instabilis. In addition, analyses of visual counts of insects and sticky traps data show that more resources were present in plants patrolled by the ant. The positive effect of A. instabilis on spiders seems to be caused by at least two mechanisms: high abundance of insects and protection against predators.
Advanced Statistical Analyses to Reduce Inconsistency of Bond Strength Data.
Minamino, T; Mine, A; Shintani, A; Higashi, M; Kawaguchi-Uemura, A; Kabetani, T; Hagino, R; Imai, D; Tajiri, Y; Matsumoto, M; Yatani, H
2017-11-01
This study was designed to clarify the interrelationship of factors that affect the value of microtensile bond strength (µTBS), focusing on nondestructive testing by which information about the specimens can be stored and quantified. µTBS test specimens were prepared from 10 noncarious human molars. Six factors of the µTBS test specimens were evaluated: presence of voids at the interface, X-ray absorption coefficient of the resin, X-ray absorption coefficient of the dentin, length of the dentin part, size of the adhesion area, and individual differences between teeth. All specimens were observed nondestructively by optical coherence tomography and micro-computed tomography before µTBS testing. After µTBS testing, the effect of these factors on the µTBS data was analyzed with the general linear model, the linear mixed effects regression model, and the nonlinear regression model with 95% confidence intervals. By the general linear model, a significant effect of individual differences between teeth was observed (P < 0.001). A significantly positive correlation was shown between µTBS and length of the dentin part (P < 0.001); however, there was no significant nonlinearity (P = 0.157). Moreover, a significantly negative correlation was observed between µTBS and size of the adhesion area (P = 0.001), with significant nonlinearity (P = 0.014). No correlation was observed between µTBS and the X-ray absorption coefficient of the resin (P = 0.147), and there was no significant nonlinearity (P = 0.089). Additionally, a significantly positive correlation was observed between µTBS and the X-ray absorption coefficient of the dentin (P = 0.022), with significant nonlinearity (P = 0.036). A significant difference was also observed between the presence and absence of voids by linear mixed effects regression analysis. Our results showed correlations between various parameters of tooth specimens and µTBS data.
To evaluate the performance of the adhesive more precisely, the effect of tooth variability and a method to reduce variation in bond strength values should also be considered.
Grams, Vanessa; Wellmann, Robin; Preuß, Siegfried; Grashorn, Michael A; Kjaer, Jörgen B; Bessei, Werner; Bennewitz, Jörn
2015-09-30
Feather pecking (FP) in laying hens is a well-known and multi-factorial behaviour with a genetic background. In a selection experiment, two lines were developed for 11 generations for high (HFP) and low (LFP) feather pecking, respectively. Starting with the second generation of selection, there was a constant difference in mean number of FP bouts between both lines. We used the data from this experiment to perform a quantitative genetic analysis and to map selection signatures. Pedigree and phenotypic data were available for the last six generations of both lines. Univariate quantitative genetic analyses were conducted using mixed linear and generalized mixed linear models assuming a Poisson distribution. Selection signatures were mapped using 33,228 single nucleotide polymorphisms (SNPs) genotyped on 41 HFP and 34 LFP individuals of generation 11. For each SNP, we estimated Wright's fixation index (FST). We tested the null hypothesis that FST is driven purely by genetic drift against the alternative hypothesis that it is driven by genetic drift and selection. The mixed linear model failed to analyze the LFP data because of the large number of 0s in the observation vector. The Poisson model fitted the data well and revealed a small but continuous genetic trend in both lines. Most of the 17 genome-wide significant SNPs were located on chromosomes 3 and 4. Thirteen clusters with at least two significant SNPs within an interval of 3 Mb maximum were identified. Two clusters were mapped on chromosomes 3, 4, 8 and 19. Of the 17 genome-wide significant SNPs, 12 were located within the identified clusters. This indicates a non-random distribution of significant SNPs and points to the presence of selection sweeps. Data on FP should be analysed using generalised linear mixed models assuming a Poisson distribution, especially if the number of FP bouts is small and the distribution is heavily peaked at 0. 
The FST-based approach was suitable to map selection signatures that need to be confirmed by linkage or association mapping.
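For a biallelic SNP, the Wright's FST used in the mapping above can be computed from subpopulation allele frequencies as (HT - HS)/HT, where H = 2p(1 - p) is the expected heterozygosity. A minimal sketch (the allele frequencies are illustrative, not taken from the HFP/LFP lines):

```python
def fst(p_subpops):
    # Wright's fixation index for a biallelic locus:
    # F_ST = (H_T - H_S) / H_T, with H = 2p(1 - p).
    p_bar = sum(p_subpops) / len(p_subpops)
    h_t = 2 * p_bar * (1 - p_bar)  # heterozygosity expected from the pooled frequency
    h_s = sum(2 * p * (1 - p) for p in p_subpops) / len(p_subpops)
    return (h_t - h_s) / h_t

print(round(fst([0.2, 0.8]), 3))  # 0.36: strongly diverged subpopulations
print(fst([0.5, 0.5]))            # 0.0: no differentiation
```

Under drift alone, FST values across loci follow a predictable distribution; outlier loci, as tested in the abstract, point to selection.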
[Cost variation in care groups?]
Mohnen, S M; Molema, C C M; Steenbeek, W; van den Berg, M J; de Bruin, S R; Baan, C A; Struijs, J N
2017-01-01
Is the simple mean of the costs per diabetes patient a suitable tool with which to compare care groups? Do the total costs of care per diabetes patient really give the best insight into care group performance? Cross-sectional, multi-level study. The 2009 insurance claims of 104,544 diabetes patients managed by care groups in the Netherlands were analysed. The data were obtained from Vektis care information centre. For each care group we determined the mean costs per patient of all the curative care and diabetes-specific hospital care using the simple mean method, then repeated it using the 'generalized linear mixed model'. We also calculated for which proportion the differences found could be attributed to the care groups themselves. The mean costs of the total curative care per patient were €3,092 - €6,546; there were no significant differences between care groups. The mixed model method resulted in less variation (€2,884 - €3,511), and there were a few significant differences. We found a similar result for diabetes-specific hospital care and the ranking position of the care groups proved to be dependent on the method used. The care group effect was limited, although it was greater in the diabetes-specific hospital costs than in the total costs of curative care (6.7% vs. 0.4%). The method used to benchmark care groups carries considerable weight. Simply stated, determining the mean costs of care (still often done) leads to an overestimation of the differences between care groups. The generalized linear mixed model is more accurate and yields better comparisons. However, the fact remains that 'total costs of care' is a faulty indicator since care groups have little impact on them. A more informative indicator is 'costs of diabetes-specific hospital care' as these costs are more influenced by care groups.
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
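The BIC-based choice between Poisson and negative binomial count models can be sketched on simulated data. Assumptions: synthetic negative binomial counts stand in for the SeizureTracker diaries, and the NB fit uses the method of moments rather than full maximum likelihood; the point is only that the extra dispersion parameter wins under overdispersion.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for diary data: daily counts from a negative
# binomial with size r and mean mu, so the variance mu + mu**2/r exceeds
# the mean and a Poisson model is misspecified.
r_true, mu_true, n = 2.0, 3.0, 2000
counts = rng.negative_binomial(r_true, r_true / (r_true + mu_true), size=n)

lgam_k1 = np.array([math.lgamma(k + 1.0) for k in counts])
mu = counts.mean()
var = counts.var(ddof=1)

# Poisson (1 parameter): the MLE of the rate is the sample mean.
ll_pois = float(np.sum(counts * math.log(mu) - mu - lgam_k1))

# Negative binomial (2 parameters), fitted by the method of moments
# to keep the sketch short.
r = mu * mu / (var - mu)
p = r / (r + mu)
ll_nb = float(np.sum(np.array([math.lgamma(k + r) for k in counts])
                     - math.lgamma(r) - lgam_k1)
              + n * r * math.log(p) + counts.sum() * math.log(1.0 - p))

# BIC = (#params) * ln(n) - 2 * logL; lower is better.
bic_pois = 1 * math.log(n) - 2.0 * ll_pois
bic_nb = 2 * math.log(n) - 2.0 * ll_nb
print(bic_pois > bic_nb)  # → True: the dispersion parameter earns its keep
```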
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2015-10-01
In this paper, a new spectral-unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected from the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data for that zone. This variance is compared to a threshold value, and the appropriate linear or linear-quadratic spectral unmixing technique is applied in the considered zone to independently unmix the hyperspectral and multispectral data, using a corresponding linear/linear-quadratic NMF-based approach. The spectral and spatial information thus extracted from the hyperspectral and multispectral images, respectively, are then recombined in the considered zone according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach against linear/linear-quadratic approaches from the literature applied to the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed-pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Overall, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the literature methods considered.
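The per-zone linear branch of such a scheme can be sketched in a few lines. This is a hedged illustration, not the paper's NMF algorithm: a hypothetical zone with three endmembers is unmixed by nonnegative least squares (one ingredient of an NMF abundance update), and the DSM-variance gate that would select the linear-quadratic branch is only described in a comment.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Hypothetical zone with 3 endmembers over 50 bands. A flat zone (low DSM
# elevation variance) gets the linear mixing model sketched here; a high-
# variance zone would instead get the linear-quadratic branch (not shown).
n_bands, n_end = 50, 3
E = np.abs(rng.normal(1.0, 0.3, size=(n_bands, n_end)))   # endmember spectra
a_true = np.array([0.6, 0.3, 0.1])                        # true abundances
pixel = E @ a_true + rng.normal(0.0, 1e-3, size=n_bands)  # small sensor noise

# Nonnegative least squares stands in for one NMF abundance update.
a_est, _ = nnls(E, pixel)
print(a_est)  # close to [0.6, 0.3, 0.1]
```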
Extant or Absent: Formation Water in New York State Drinking Water Wells
NASA Astrophysics Data System (ADS)
Christian, K.; Lautz, L. K.
2013-12-01
The current moratorium on hydraulic fracturing in New York State (NYS) provides an opportunity to collect baseline shallow groundwater quality data pre-hydraulic fracturing, which is essential for determining the natural variability of groundwater chemistry and to evaluate future claims of impaired groundwater quality if hydraulic fracturing occurs in the State. Concerns regarding the future environmental impact of shale gas extraction in NYS include potential shallow groundwater contamination due to migration of methane or formation water from shale gas extraction sites. Treatment, storage and disposal of saline flowback fluids after gas extraction could also be a source of water contamination. In this study, we combine southern NYS shallow groundwater chemistry data from Project Shale-Water Interaction Forensic Tools (SWIFT, n=60), the National Uranium Resource Evaluation program (NURE, n=684), and the USGS 305(b) Ambient Groundwater Quality Monitoring program (USGS, n=89) to examine evidence of formation water mixing with groundwater using the methodology of Warner et al. (2012). Groundwater characterized as low salinity (<20 mg/L Cl-) accounted for 72% of samples and 28% of samples had high salinity (>20 mg/L Cl-). A plot of bromide versus chloride shows high salinity groundwater samples with Br/Cl ratios >0.0001 fall on the mixing line between low salinity groundwater and Appalachian Basin formation water. Based on the observed linear relationship between bromide and chloride, it appears there is up to 1% formation water mixing with shallow groundwater in the region. The presence of formation water in shallow groundwater would indicate the existence of natural migratory pathways between deep formation wells and shallow groundwater aquifers. A plot of sodium versus chloride also illustrates a linear trend for Type D waters (R^2= 0.776), but the relationship is weaker than that for bromide versus chloride (R^2= 0.924). 
Similar linear relationships are not observed between other ions and chloride, including Mg, Ca, and Sr. If high salinity groundwater samples from NYS contain small percentages of formation water, we expect linear relationships between chloride and these other, generally conservative ions. The absence of these linear relationships suggests high salinity could be associated with contamination by landfill leachate, septic effluent, road salt, or other potential sources of elevated salt. Future work needs to determine if mixing of shallow groundwater with other potential sources of salinity, such as road deicers, can explain the observed linear relationships. Strontium isotopes from shallow groundwater samples will also be compared to those for NY formation water.
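The "up to 1% formation water" figure follows from simple two-endmember conservative mixing, which can be made explicit. The endmember chloride concentrations below are illustrative assumptions, not measured NYS or Appalachian Basin compositions.

```python
# Two-endmember conservative mixing: a sample's chloride concentration is a
# linear blend of fresh shallow groundwater and formation brine.
cl_fresh = 10.0       # mg/L Cl- in dilute shallow groundwater (assumed)
cl_brine = 60000.0    # mg/L Cl- in formation water (assumed)

def brine_fraction(cl_sample):
    """Fraction f of formation water implied by Cl-, solving
    cl_sample = f * cl_brine + (1 - f) * cl_fresh for f."""
    return (cl_sample - cl_fresh) / (cl_brine - cl_fresh)

# A sample at ~610 mg/L Cl- implies roughly 1% formation water:
f = brine_fraction(610.0)
print(round(f, 3))  # → 0.01
```

The same linear form explains why conservative ion pairs (Br vs. Cl, Na vs. Cl) should plot on straight lines if a single brine endmember is mixing in.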
CHAMP: a locally adaptive unmixing-based hyperspectral anomaly detection algorithm
NASA Astrophysics Data System (ADS)
Crist, Eric P.; Thelen, Brian J.; Carrara, David A.
1998-10-01
Anomaly detection offers a means by which to identify potentially important objects in a scene without prior knowledge of their spectral signatures. As such, this approach is less sensitive to variations in target class composition, atmospheric and illumination conditions, and sensor gain settings than a spectral matched filter or similar algorithm would be. The best existing anomaly detectors generally fall into one of two categories: those based on local Gaussian statistics, and those based on linear mixing models. Unmixing-based approaches better represent the real distribution of data in a scene, but are typically derived and applied on a global or scene-wide basis. Locally adaptive approaches allow detection of more subtle anomalies by accommodating the spatial non-homogeneity of background classes in a typical scene, but provide a poorer representation of the true underlying background distribution. The CHAMP algorithm combines the best attributes of both approaches, applying a linear-mixing-model approach in a spatially adaptive manner. The algorithm itself, and test results on simulated and actual hyperspectral image data, are presented in this paper.
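The "Gaussian statistics" family of detectors mentioned above can be illustrated with a global RX-style score. This is a hedged sketch, not CHAMP: it is global rather than locally adaptive and uses no unmixing; the score is just the Mahalanobis distance of each pixel to the background statistics, on a synthetic 10-band scene with one implanted anomaly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 10-band scene: 500 background pixels from one Gaussian,
# plus a single implanted anomaly shifted in every band.
n_pix, n_bands = 500, 10
scene = rng.normal(0.0, 1.0, size=(n_pix, n_bands))
scene[123] += 8.0                      # the anomalous pixel

# RX score: Mahalanobis distance to the (global) background statistics.
mu = scene.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(scene, rowvar=False))
diffs = scene - mu
scores = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
print(int(np.argmax(scores)))  # → 123
```

A locally adaptive variant would recompute `mu` and the covariance from a sliding window of neighboring pixels instead of the whole scene.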
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data is traditionally transformed so that linear mixed model (LMM) based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using the natural logarithm transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate which includes fixed effects using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the ICC based on LMM even when negative binomial data was logarithm, and square root transformed. A second comparison that targeted a wider range of ICC values showed that the mean of estimated ICC closely approximated the true ICC.
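The variance-components definition of the ICC underlying both the LMM and the negative binomial versions can be demonstrated on simulated grouped data. This sketch uses Gaussian data and the classical one-way ANOVA estimator; it does not implement the paper's negative binomial ICC, only the ratio ICC = s2_b / (s2_b + s2_w) that both share.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated grouped measurements: 200 groups (e.g. pens), 10 replicates
# each, with between-group variance 2 and within-group variance 3,
# so the true ICC is 2 / (2 + 3) = 0.4.
n_groups, n_rep = 200, 10
s2_b, s2_w = 2.0, 3.0
b = rng.normal(0.0, np.sqrt(s2_b), size=(n_groups, 1))
y = b + rng.normal(0.0, np.sqrt(s2_w), size=(n_groups, n_rep))

# One-way random-effects ANOVA estimator of the ICC.
group_means = y.mean(axis=1)
msw = y.var(axis=1, ddof=1).mean()            # within-group mean square
msb = n_rep * group_means.var(ddof=1)         # between-group mean square
icc = (msb - msw) / (msb + (n_rep - 1) * msw)
print(round(icc, 2))  # near 0.4
```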
Neural Population Coding of Multiple Stimuli
Ma, Wei Ji
2015-01-01
In natural scenes, objects generally appear together with other objects. Yet, theoretical studies of neural population coding typically focus on the encoding of single objects in isolation. Experimental studies suggest that neural responses to multiple objects are well described by linear or nonlinear combinations of the responses to constituent objects, a phenomenon we call stimulus mixing. Here, we present a theoretical analysis of the consequences of common forms of stimulus mixing observed in cortical responses. We show that some of these mixing rules can severely compromise the brain's ability to decode the individual objects. This cost is usually greater than the cost incurred by even large reductions in the gain or large increases in neural variability, explaining why the benefits of attention can be understood primarily in terms of a stimulus selection, or demixing, mechanism rather than purely as a gain increase or noise reduction mechanism. The cost of stimulus mixing becomes even higher when the number of encoded objects increases, suggesting a novel mechanism that might contribute to set size effects observed in myriad psychophysical tasks. We further show that a specific form of neural correlation and heterogeneity in stimulus mixing among the neurons can partially alleviate the harmful effects of stimulus mixing. Finally, we derive simple conditions that must be satisfied for unharmful mixing of stimuli. PMID:25740513
The long-solved problem of the best-fit straight line: Application to isotopic mixing lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wehr, Richard; Saleska, Scott R.
2017-01-03
It has been almost 50 years since York published an exact and general solution for the best-fit straight line to independent points with normally distributed errors in both x and y. York's solution is highly cited in the geophysical literature but almost unknown outside of it, so that there has been no ebb in the tide of books and papers wrestling with the problem. Much of the post-1969 literature on straight-line fitting has sown confusion not merely by its content but by its very existence. The optimal least-squares fit is already known; the problem is already solved. Here we introduce the non-specialist reader to York's solution and demonstrate its application in the interesting case of the isotopic mixing line, an analytical tool widely used to determine the isotopic signature of trace gas sources for the study of biogeochemical cycles. The most commonly known linear regression methods – ordinary least-squares regression (OLS), geometric mean regression (GMR), and orthogonal distance regression (ODR) – have each been recommended as the best method for fitting isotopic mixing lines. In fact, OLS, GMR, and ODR are all special cases of York's solution that are valid only under particular measurement conditions, and those conditions do not hold in general for isotopic mixing lines. Here, using Monte Carlo simulations, we quantify the biases in OLS, GMR, and ODR under various conditions and show that York's general – and convenient – solution is always the least biased.
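York's solution is short enough to write out in full. The sketch below follows the standard iterative formulation for errors in both coordinates; it assumes uniform, uncorrelated x and y errors (so all per-point weights are equal) and synthetic data on a known line, which is an illustration rather than an isotopic data set.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "mixing line" data: y = 1 + 2x with equal normal errors in
# both coordinates.
n, sx, sy = 100, 0.05, 0.05
x_true = np.linspace(0.0, 5.0, n)
x = x_true + rng.normal(0.0, sx, n)
y = 1.0 + 2.0 * x_true + rng.normal(0.0, sy, n)

wx = np.full(n, 1.0 / sx**2)     # weights = 1 / error variance
wy = np.full(n, 1.0 / sy**2)
r = 0.0                          # x-y error correlation

b = np.polyfit(x, y, 1)[0]       # OLS slope as the starting value
for _ in range(50):
    # Combined weight of each point given the current slope b.
    W = wx * wy / (wx + b * b * wy - 2.0 * b * r * np.sqrt(wx * wy))
    xbar = np.sum(W * x) / np.sum(W)
    ybar = np.sum(W * y) / np.sum(W)
    U, V = x - xbar, y - ybar
    beta = W * (U / wy + b * V / wx - (b * U + V) * r / np.sqrt(wx * wy))
    b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
    b, converged = b_new, abs(b_new - b) < 1e-12
    if converged:
        break
a_int = ybar - b * xbar          # intercept through the weighted centroid
print(b, a_int)  # slope near 2, intercept near 1
```

With equal weights and r = 0 this reduces to orthogonal distance regression, and with wx → ∞ it reduces to OLS, which is exactly the "special cases" point the abstract makes.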
Sea surface temperature anomalies, planetary waves, and air-sea feedback in the middle latitudes
NASA Technical Reports Server (NTRS)
Frankignoul, C.
1985-01-01
Current analytical models for large-scale air-sea interactions in the middle latitudes are reviewed in terms of known sea-surface temperature (SST) anomalies. The scales and strength of different atmospheric forcing mechanisms are discussed, along with the damping and feedback processes controlling the evolution of the SST. Difficulties with effective SST modeling are described in terms of the techniques and results of case studies, numerical simulations of mixed-layer variability and statistical modeling. The relationship between SST and diabatic heating anomalies is considered and a linear model is developed for the response of the stationary atmosphere to the air-sea feedback. The results obtained with linear wave models are compared with the linear model results. Finally, sample data are presented from experiments with general circulation models into which specific SST anomaly data for the middle latitudes were introduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ambra, P.; Vassilevski, P. S.
2014-05-30
Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve the robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a-priori knowledge of, or assumptions on, the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed at obtaining a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as "the compatible weighted matching". In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition, and with the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method in which the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and by the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
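The inner linear solve described above can be illustrated with plain conjugate gradients. This is a generic sketch, not the paper's generalized CG for EIT: it solves a random symmetric positive-definite system, and the `max_iter` cap plays the role of the iteration limit the paper uses to control errors from the ill-conditioned sensitivity matrix.

```python
import numpy as np

rng = np.random.default_rng(6)

# A random SPD test system standing in for the normalized sensitivity
# equations at one inverse step.
n = 30
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # SPD by construction
b = rng.normal(size=n)

def conjugate_gradients(A, b, max_iter, tol=1e-10):
    """Plain CG; capping max_iter acts as regularization."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = conjugate_gradients(A, b, max_iter=n)
print(np.linalg.norm(A @ x - b))  # small residual
```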
NASA Astrophysics Data System (ADS)
Najarbashi, G.; Mirzaei, S.
2016-03-01
Multi-mode entangled coherent states are important resources for linear optics quantum computation and teleportation. Here we introduce the generalized balanced N-mode coherent states, which can be recast in the multi-qudit case. The necessary and sufficient condition for bi-separability of such balanced N-mode coherent states is found. We particularly focus on pure and mixed multi-qubit and multi-qutrit like states and examine the degree of bipartite as well as tripartite entanglement using the concurrence measure. Unlike the N-qubit case, it is shown that there are qutrit states violating the monogamy inequality. Using parity, displacement operators and beam splitters, we propose a scheme for generating balanced N-mode entangled coherent states for an even number of terms in the superposition.
Wei Wu; Charles Hall; Lianjun Zhang
2006-01-01
We predicted the spatial pattern of hourly probability of cloud cover in the Luquillo Experimental Forest (LEF) in north-eastern Puerto Rico using four different models. The probability of cloud cover (defined as "the percentage of the area covered by clouds in each pixel on the map" in this paper) at any hour and any place is a function of three topographic variables...
KMgene: a unified R package for gene-based association analysis for complex traits.
Yan, Qi; Fang, Zhou; Chen, Wei; Stegle, Oliver
2018-02-09
In this report, we introduce an R package KMgene for performing gene-based association tests for familial, multivariate or longitudinal traits using kernel machine (KM) regression under a generalized linear mixed model (GLMM) framework. Extensive simulations were performed to evaluate the validity of the approaches implemented in KMgene. http://cran.r-project.org/web/packages/KMgene. qi.yan@chp.edu or wei.chen@chp.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
Karimi, Hamid Reza; Gao, Huijun
2008-07-01
A mixed H2/H∞ output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/H∞ performance for the closed-loop system of the second-order neutral linear system, is then developed directly, instead of coupling the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/H∞ output-feedback control design, which uses some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and handles studies with discrete data only with difficulty. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in R-package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
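The core identity being exploited, FIM = E[(d log L / dθ)²], can be checked by Monte Carlo on a toy model with an analytic answer. This sketch is far simpler than the paper's NLMEM setting (no random effects, no MCMC over latent variables): a single Poisson observation with rate lam, for which the expected Fisher information is 1/lam.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo evaluation of the expected Fisher information for a single
# Poisson observation: I(lam) = E[score^2] = 1/lam.
lam = 4.0
k = rng.poisson(lam, size=200_000)
score = k / lam - 1.0        # d/dlam of log pmf = k/lam - 1
fim_mc = np.mean(score**2)
print(fim_mc)  # near 1/lam = 0.25
```

In the NLMEM case the score itself is an integral over random effects, which is where the MCMC step of the paper's method comes in.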
Vu, Cung Khac; Nihei, Kurt; Johnson, Paul A; Guyer, Robert; Ten Cate, James A; Le Bas, Pierre-Yves; Larmat, Carene S
2014-12-30
A system and a method for investigating rock formations includes generating, by a first acoustic source, a first acoustic signal comprising a first plurality of pulses, each pulse including a first modulated signal at a central frequency; and generating, by a second acoustic source, a second acoustic signal comprising a second plurality of pulses. A receiver arranged within the borehole receives a detected signal including a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within the intersection volume. The method also includes processing the received signal to extract the signal generated by the non-linear mixing process over noise, or over signals generated by a linear interaction process, or both.
Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction
NASA Astrophysics Data System (ADS)
Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.
1996-11-01
Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hampered by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the models' predictions for the mean values of chemical species?
Mutual Exclusion of Urea and Trimethylamine N-oxide from Amino Acids in Mixed Solvent Environment
NASA Astrophysics Data System (ADS)
Ganguly, Pritam; Hajari, Timir; Shea, Joan-Emma; van der Vegt, Nico F. A.
2015-03-01
We study the solvation thermodynamics of individual amino acids in mixed urea and trimethylamine N-oxide (TMAO) solutions using molecular dynamics simulations and the Kirkwood-Buff theory. Our results on the preferential interactions between the amino acids and the cosolvents (urea and TMAO) show a mutual exclusion of both the cosolvents from the amino acid surface in the mixed cosolvent condition which is followed by an increase in the cosolvent-cosolvent aggregation away from the amino acid surface. The effects of the mixed cosolvents on the association of the amino acids and the preferential solvation of the amino acids by water are found to be highly non-linear in terms of the effects of the individual cosolvents. A similar result has been found for the association of the protein backbone, mimicked by triglycine. Our results have been confirmed by different TMAO force-fields and the mutual exclusions of the cosolvents from the amino acids are found to be independent of the choice of the strength of the TMAO-water interactions. Based on our data, a general mechanism can potentially be proposed for the effects of the mixed cosolvents on the preferential solvations of the solutes including the case of cononsolvency.
Stochastic Mixing Model with Power Law Decay of Variance
NASA Technical Reports Server (NTRS)
Fedotov, S.; Ihme, M.; Pitsch, H.
2003-01-01
Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason the LLN enters our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean: it converges to the mean value μ, while the variance σ_c²(t) decays approximately as t^(-1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model that is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive an integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate γ_n, which we model in a first step as a deterministic function. In a second step, we generalize γ_n to a stochastic variable, taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
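The sample-mean analogy can be checked numerically: the ensemble variance of a running mean of iid draws decays like t^(-1). This is a generic LLN demonstration, not the paper's pdf-equation model.

```python
import numpy as np

rng = np.random.default_rng(8)

# 2000 independent realizations of a running sample mean of iid N(0, 1)
# draws; the ensemble variance at time t should scale as 1/t.
n_ens, n_t = 2000, 1000
draws = rng.normal(0.0, 1.0, size=(n_ens, n_t))
running_means = np.cumsum(draws, axis=1) / np.arange(1, n_t + 1)
var_t = running_means.var(axis=0)    # ensemble variance at each time t

# Log-log slope of var(t) vs t (ignoring the first few noisy points).
t = np.arange(1, n_t + 1)
slope = np.polyfit(np.log(t[9:]), np.log(var_t[9:]), 1)[0]
print(slope)  # near -1
```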
Long time stability of small-amplitude Breathers in a mixed FPU-KG model
NASA Astrophysics Data System (ADS)
Paleari, Simone; Penati, Tiziano
2016-12-01
In the limit of small couplings in the nearest neighbor interaction, and small total energy, we apply the resonant normal form result of a previous paper of ours to a finite but arbitrarily large mixed Fermi-Pasta-Ulam Klein-Gordon chain, i.e., with both linear and nonlinear terms in both the on-site and interaction potential, with periodic boundary conditions. An existence and orbital stability result for Breathers of such a normal form, which turns out to be a generalized discrete nonlinear Schrödinger model with exponentially decaying all neighbor interactions, is first proved. Exploiting such a result as an intermediate step, a long time stability theorem for the true Breathers of the KG and FPU-KG models, in the anti-continuous limit, is proven.
Discrete transparent boundary conditions for the mixed KDV-BBM equation
NASA Astrophysics Data System (ADS)
Besse, Christophe; Noble, Pascal; Sanchez, David
2017-09-01
In this paper, we consider artificial boundary conditions for the linearized mixed Korteweg-de Vries (KDV) and Benjamin-Bona-Mahony (BBM) equation, which models water waves in the small amplitude, large wavelength regime. Continuous (respectively discrete) artificial boundary conditions involve non-local operators in time, which in turn requires computing time convolutions and inverting the Laplace transform of an analytic function (respectively the Z-transform of a holomorphic function). In this paper, we propose a new, stable and fairly general strategy to carry out this crucial step in the design of transparent boundary conditions. For large-time simulations, we also introduce a methodology based on the asymptotic expansion of the coefficients involved in exact direct transparent boundary conditions. We illustrate the accuracy of our methods for Gaussian and wave packet initial data.
Scalable algorithms for three-field mixed finite element coupled poromechanics
NASA Astrophysics Data System (ADS)
Castelletto, Nicola; White, Joshua A.; Ferronato, Massimiliano
2016-12-01
We introduce a class of block preconditioners for accelerating the iterative solution of coupled poromechanics equations based on a three-field formulation. The use of a displacement/velocity/pressure mixed finite-element method combined with a first order backward difference formula for the approximation of time derivatives produces a sequence of linear systems with a 3 × 3 unsymmetric and indefinite block matrix. The preconditioners are obtained by approximating the two-level Schur complement with the aid of physically-based arguments that can be also generalized in a purely algebraic approach. A theoretical and experimental analysis is presented that provides evidence of the robustness, efficiency and scalability of the proposed algorithm. The performance is also assessed for a real-world challenging consolidation experiment of a shallow formation.
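The block elimination that such preconditioners approximate can be shown exactly on a smaller example. This sketch solves a 2x2 saddle-point system [[A, B], [B.T, 0]] via its Schur complement, a simplified stand-in for the paper's 3x3 displacement/velocity/pressure block: a preconditioner would replace the exact inverses below with cheap approximations.

```python
import numpy as np

rng = np.random.default_rng(9)

# Random saddle-point system: SPD leading block A, full-column-rank B.
n, m = 20, 5
M0 = rng.normal(size=(n, n))
A = M0 @ M0.T + n * np.eye(n)
B = rng.normal(size=(n, m))
f = rng.normal(size=n)
g = rng.normal(size=m)

# Block elimination: from A u + B p = f and B.T u = g, eliminate u to get
# S p = g - B.T A^{-1} f with Schur complement S = -B.T A^{-1} B.
Ainv_B = np.linalg.solve(A, B)
Ainv_f = np.linalg.solve(A, f)
S = -B.T @ Ainv_B
p = np.linalg.solve(S, g - B.T @ Ainv_f)
u = Ainv_f - Ainv_B @ p

# Verify against the monolithic solve.
K = np.block([[A, B], [B.T, np.zeros((m, m))]])
ref = np.linalg.solve(K, np.concatenate([f, g]))
print(np.linalg.norm(np.concatenate([u, p]) - ref))  # near machine precision
```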
Time and frequency domain analysis of sampled data controllers via mixed operation equations
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1981-01-01
Specification of the mathematical equations required to define the dynamic response of a linear continuous plant, subject to sampled-data control, is complicated by the fact that the digital components of the control system cannot be modeled via linear ordinary differential equations. This complication can be overcome by introducing two new mathematical operations, namely the operations of zero-order hold and digital delay. It is shown that by direct utilization of these operations, a set of linear mixed operation equations can be written and used to define the dynamic response characteristics of the controlled system. It is also shown how these linear mixed operation equations lead, in an automatable manner, directly to a set of finite difference equations which are in a format compatible with follow-on time and frequency domain analysis methods.
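The zero-order-hold operation is what turns the continuous plant into the finite difference equations mentioned above, and the standard discretization formula can be sketched directly. The plant matrices below are an arbitrary illustrative example; the formula Bd = A^{-1}(Ad - I)B assumes A is invertible.

```python
import numpy as np
from scipy.linalg import expm

# ZOH discretization of a continuous LTI plant xdot = A x + B u: with u
# held constant over a sample period h, x[k+1] = Ad x[k] + Bd u[k], where
# Ad = expm(A h) and (for invertible A) Bd = A^{-1} (Ad - I) B.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example plant matrix
B = np.array([[0.0], [1.0]])
h = 0.1                                     # sample period

Ad = expm(A * h)
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B
print(Ad)
print(Bd)
```

Bd equals the integral of expm(A s) B over one sample period, which is exactly the accumulated effect of the held input.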
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
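For readers unfamiliar with the problem class, a MILP couples a linear objective and linear constraints with integrality requirements on some variables. The toy 0-1 knapsack below is solved by brute-force enumeration purely to illustrate the formulation; the report's solvers use branch-and-bound and symmetry-exploiting techniques, not enumeration.

```python
import itertools

# Toy MILP: maximize 5*x1 + 4*x2 + 3*x3
#           subject to 2*x1 + 3*x2 + 1*x3 <= 5,  x in {0,1}^3.
values = (5, 4, 3)
weights = (2, 3, 1)
capacity = 5

best_val, best_x = -1, None
for x in itertools.product((0, 1), repeat=3):
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
        val = sum(v * xi for v, xi in zip(values, x))
        if val > best_val:
            best_val, best_x = val, x
```

Note that if two items had identical value and weight, any optimal solution could be permuted into another one — exactly the kind of symmetry that blows up naive branch-and-bound and that this work addresses.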
Gries, Katharine S; Regier, Dean A; Ramsey, Scott D; Patrick, Donald L
2017-06-01
To develop a statistical model generating utility estimates for prostate cancer-specific health states, using preference weights derived from the perspectives of prostate cancer patients, men at risk for prostate cancer, and society. Utility estimates were calculated using standard gamble (SG) methodology. Study participants valued 18 prostate-specific health states defined by five attributes: sexual function, urinary function, bowel function, pain, and emotional well-being. The appropriateness of each candidate model (linear regression, mixed effects, or generalized estimating equation) for generating prostate cancer utility estimates was determined by paired t-tests comparing observed and predicted values. Mixed-corrected standard SG utility estimates, which account for loss aversion, were calculated based on prospect theory. 132 study participants assigned values to the health states (n = 40 men at risk for prostate cancer; n = 43 men with prostate cancer; n = 49 general population). In total, 792 valuations were elicited (six health states for each of the 132 participants). The most appropriate model for the classification system was a mixed effects model; correlations between the mean observed and predicted utility estimates were greater than 0.80 for each perspective. Developing a health-state classification system with preference weights for three different perspectives demonstrates the relative importance of main effects between populations. The predicted values for men with prostate cancer support the hypothesis that patients experiencing the disease state assign higher utility estimates to health states, and that valuations made by patients differ from those of the general population.
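The model-adequacy check described above can be sketched numerically: a paired t-statistic between observed and predicted utility estimates, plus the observed-predicted correlation (the study reports r > 0.80 for each perspective). The data below are synthetic illustrations, not the study's valuations.

```python
import numpy as np

# Synthetic "observed" utilities and model "predictions" with a close fit.
rng = np.random.default_rng(1)
observed = rng.uniform(0.3, 0.9, size=40)
predicted = observed + rng.normal(0.0, 0.02, size=40)

# Paired t-statistic: small |t| means no systematic observed-predicted gap.
d = observed - predicted
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Pearson correlation between observed and predicted mean utilities.
r = np.corrcoef(observed, predicted)[0, 1]
```

A well-fitting model yields |t| far below the critical value and r well above 0.80, which is the pattern the study used to select the mixed effects model.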
Bayesian reconstruction of projection reconstruction NMR (PR-NMR).
Yoon, Ji Won
2014-11-01
Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using a peak-by-peak reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm that replaces the simple linear model with a linear mixed model to reconstruct closely spaced NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.
Adaptive convex combination approach for the identification of improper quaternion processes.
Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P
2014-01-01
Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
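The collaborative idea — two adaptive filters whose outputs are blended by an adapted convex mixing parameter — can be sketched in a real-valued setting. This is a simplification: the paper works in the quaternion domain with QLMS/WL-QLMS and a fractional tap length, whereas the sketch below mixes a fast and a slow real-valued LMS filter; the unknown system and all step sizes are made-up.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 2000, 4
w_true = np.array([0.5, -0.3, 0.2, 0.1])   # hypothetical unknown system
x = rng.standard_normal(N)

w1 = np.zeros(L)                 # fast LMS filter
w2 = np.zeros(L)                 # slow LMS filter
mu1, mu2, mu_a = 0.05, 0.005, 10.0
a = 0.0                          # mixing parameter, lambda = sigmoid(a)
sq_errs = []
for n in range(L, N):
    u = x[n - L + 1 : n + 1][::-1]           # [x[n], ..., x[n-L+1]]
    d = w_true @ u + 0.01 * rng.standard_normal()
    lam = 1.0 / (1.0 + np.exp(-a))
    y1, y2 = w1 @ u, w2 @ u
    y = lam * y1 + (1.0 - lam) * y2          # convex combination output
    e, e1, e2 = d - y, d - y1, d - y2
    w1 += mu1 * e1 * u                       # independent LMS updates
    w2 += mu2 * e2 * u
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)  # adapt the mixture
    a = np.clip(a, -4.0, 4.0)                # common safeguard on 'a'
    sq_errs.append(e * e)

mse_tail = float(np.mean(sq_errs[-200:]))
```

Monitoring lam over time plays the role that the paper assigns to the convex mixing parameter: it drifts toward whichever sub-filter currently models the data better.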
An Overview of Longitudinal Data Analysis Methods for Neurological Research
Locascio, Joseph J.; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
Rajeswaran, Jeevanantham; Blackstone, Eugene H
2017-02-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A multiphase non-linear mixed-effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and the risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation, using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients.
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
ERIC Educational Resources Information Center
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
Mixed H∞ and passive control for linear switched systems via hybrid control approach
NASA Astrophysics Data System (ADS)
Zheng, Qunxian; Ling, Youzhu; Wei, Lisheng; Zhang, Hongbin
2018-03-01
This paper investigates the mixed H∞ and passive control problem for linear switched systems based on a hybrid control strategy. To solve this problem, first, a new performance index is proposed. This performance index can be viewed as a mixed weighted H∞ and passivity performance. Then, hybrid controllers are used to stabilise the switched systems. The hybrid controllers consist of dynamic output-feedback controllers for every subsystem and state updating controllers at the switching instants. The design of the state updating controllers depends not only on the pre-switching and post-switching subsystems, but also on the measurable output signal. The hybrid controllers proposed in this paper include some existing ones as special cases. Combining the multiple Lyapunov functions approach with the average dwell time technique, new sufficient conditions are obtained. Under the new conditions, the closed-loop linear switched systems are globally uniformly asymptotically stable with a mixed H∞ and passivity performance index. Moreover, the desired hybrid controllers can be constructed by solving a set of linear matrix inequalities. Finally, a numerical example and a practical example are given.
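The multiple-Lyapunov-function/average-dwell-time machinery can be illustrated numerically (this is not the paper's LMI controller synthesis): solve a Lyapunov equation for each stable mode by vectorization, then form the standard conservative average dwell-time bound τ* = ln(μ)/λ0 from the Lyapunov matrices. The mode matrices are arbitrary stable examples.

```python
import numpy as np

def lyap(A, Q):
    """Solve A.T @ P + P @ A = -Q by Kronecker vectorization."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

# Two stable subsystem (mode) matrices of a switched linear system.
A1 = np.array([[-1.0, 1.0], [0.0, -2.0]])
A2 = np.array([[-2.0, 0.0], [1.0, -1.0]])
P1 = lyap(A1, np.eye(2))
P2 = lyap(A2, np.eye(2))

eigs = np.concatenate([np.linalg.eigvalsh(P1), np.linalg.eigvalsh(P2)])
mu = eigs.max() / eigs.min()     # worst growth of V at a switch (conservative)
lam0 = 1.0 / eigs.max()          # slowest guaranteed decay rate (Q = I)
tau_star = np.log(mu) / lam0     # average dwell-time bound for stability
```

Switching slower on average than tau_star guarantees the multiple Lyapunov functions decrease overall, the same mechanism the paper's sufficient conditions exploit for the closed loop.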
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound, like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can be a result of the accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred, and the progression pathways, is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks that is tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is an NP-complete problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.
Planetary Ices and the Linear Mixing Approximation
Bethkenhagen, M.; Meyer, Edmund Richard; Hamel, S.; ...
2017-10-10
Here, the validity of the widely used linear mixing approximation (LMA) for the equations of state (EOSs) of planetary ices is investigated at pressure-temperature conditions typical for the interiors of Uranus and Neptune. The basis of this study is ab initio data ranging up to 1000 GPa and 20,000 K, calculated via density functional theory molecular dynamics simulations. In particular, we determine a new EOS for methane and EOS data for the 1:1 binary mixtures of methane, ammonia, and water, as well as their 2:1:4 ternary mixture. Additionally, the self-diffusion coefficients in the ternary mixture are calculated along three different Uranus interior profiles and compared to the values of the pure compounds. We find that deviations of the LMA from the results of the real mixture are generally small; for the thermal EOSs they amount to 4% or less. The diffusion coefficients in the mixture agree with those of the pure compounds within 20% or better. Finally, a new adiabatic model of Uranus with an inner layer of almost pure ices is developed. The model is consistent with the gravity field data and results in a rather cold interior ($T_{\mathrm{core}} \sim 4000$ K).
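The LMA itself is a one-line rule: a mixture property at fixed (P, T) is approximated by the mass-weighted sum of the pure end-member values. The sketch below applies it to the 2:1:4 methane:ammonia:water molar ratio with made-up pure specific volumes, and fabricates a "true" mixture value 3% away simply to mirror the few-percent deviations reported.

```python
import numpy as np

# CH4, NH3, H2O molar masses [g/mol] and the 2:1:4 ternary molar ratio.
molar_masses = np.array([16.04, 17.03, 18.02])
moles = np.array([2.0, 1.0, 4.0])
mass = moles * molar_masses
w = mass / mass.sum()                      # mass fractions

# Hypothetical pure-compound specific volumes at some fixed (P, T) [cm^3/g].
v_pure = np.array([0.95, 0.70, 0.55])
v_lma = w @ v_pure                         # linear mixing approximation

# Stand-in "real mixture" value 3% below the LMA, like the small
# deviations (<= 4% for thermal EOSs) the study reports.
v_mix_true = 0.97 * v_lma
rel_err = abs(v_lma - v_mix_true) / v_mix_true
```

The same weighted-sum construction, applied to EOS tables instead of single numbers, is what the ab initio data in the paper are benchmarking.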
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
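The positive semidefiniteness requirement on a kernel is easy to check empirically. The sketch below builds a Gaussian (RBF) kernel matrix over toy feature vectors — one illustrative choice among the many kernels the review covers — and verifies symmetry and nonnegative eigenvalues.

```python
import numpy as np

# Toy "genomic" feature vectors: 10 subjects, 5 markers each.
rng = np.random.default_rng(3)
X = rng.standard_normal((10, 5))

# Gaussian (RBF) kernel: similarity decays with squared distance.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / (2 * 1.0 ** 2))     # bandwidth 1 (assumed)

# A valid kernel matrix is symmetric with all eigenvalues >= 0 (PSD).
eigvals = np.linalg.eigvalsh(K)
```

A PSD kernel matrix like K can be plugged directly into the methods the review lists — e.g. as the covariance of a random effect in a (generalized) linear mixed model or as the Gram matrix of a support vector machine.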
Small area estimation of proportions with different levels of auxiliary data.
Chandra, Hukum; Kumar, Sushil; Aditya, Kaustav
2018-03-01
Binary data are often of interest in many small areas of applications. The use of standard small area estimation methods based on linear mixed models becomes problematic for such data. An empirical plug-in predictor (EPP) under a unit-level generalized linear mixed model with logit link function is often used for the estimation of a small area proportion. However, this EPP requires the availability of unit-level population information for auxiliary data that may not be always accessible. As a consequence, in many practical situations, this EPP approach cannot be applied. Based on the level of auxiliary information available, different small area predictors for estimation of proportions are proposed. Analytic and bootstrap approaches to estimating the mean squared error of the proposed small area predictors are also developed. Monte Carlo simulations based on both simulated and real data show that the proposed small area predictors work well for generating the small area estimates of proportions and represent a practical alternative to the above approach. The developed predictor is applied to generate estimates of the proportions of indebted farm households at district-level using debt investment survey data from India. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
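The EPP described above can be sketched for one area: with estimated fixed effects and a predicted area random effect under the unit-level logistic mixed model, the area proportion is the average of the inverse-logit over the area's population units. The coefficients, random effect, and covariates below are hypothetical.

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)
beta = np.array([-0.5, 1.2])   # assumed estimated fixed effects
u_d = 0.3                      # assumed predicted random effect for area d

# Unit-level auxiliary data for the area's 500 population units
# (intercept + one covariate) -- exactly what may be unavailable in practice.
X_pop = np.column_stack([np.ones(500), rng.standard_normal(500)])

# Empirical plug-in predictor of the area proportion.
p_epp = expit(X_pop @ beta + u_d).mean()
```

The paper's alternative predictors replace X_pop with whatever coarser (e.g. area-level) auxiliary information is actually available.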
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
Elastic properties and optical absorption studies of mixed alkali borogermanate glasses
NASA Astrophysics Data System (ADS)
Taqiullah, S. M.; Ahmmad, Shaik Kareem; Samee, M. A.; Rahman, Syed
2018-05-01
For the first time, the mixed alkali effect (MAE) has been investigated in the glass system xNa2O-(30-x)Li2O-40B2O3-30GeO2 (0≤x≤30 mol%) through density and optical absorption studies. The present glasses were prepared by the melt-quench technique. The density of the present glasses varies non-linearly, exhibiting the mixed alkali effect. Using the density data, the elastic moduli, namely Young's, bulk, and shear moduli, show a strong linear dependence as a function of the compositional parameter. From the absorption edge studies, the values of the optical band gap energies for all transitions have been evaluated. It was established that the type of electronic transition in the present glass system is indirect allowed. The indirect optical band gap exhibits non-linear behavior with the compositional parameter, showing the mixed alkali effect.
NASA Astrophysics Data System (ADS)
Mapes, B. E.; Kelly, P.; Song, S.; Hu, I. K.; Kuang, Z.
2015-12-01
An economical 10-layer global primitive equation solver is driven by time-independent forcing terms, derived from a training process, to produce a realistic eddying basic state with a tracer q trained to act like water vapor mixing ratio. Within this basic state, linearized anomaly moist physics in the column are applied in the form of a 20x20 matrix. The control matrix was derived from the results of Kuang (2010, 2012), who fitted a linear response function from a cloud resolving model in a state of deep convecting equilibrium. By editing this matrix in physical space and eigenspace, scaling and clipping its action, and optionally adding terms for processes that do not conserve moist static energy (radiation, surface fluxes), we can decompose and explain the model's diverse moist process coupled variability. Rectified effects of this variability on the general circulation and climate, even in strictly zero-mean centered anomaly physics cases, are also sometimes surprising.
Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research
Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Zheng, D. Diane
2017-01-01
Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research. PMID:24467557
Alfvén wave interactions in the solar wind
NASA Astrophysics Data System (ADS)
Webb, G. M.; McKenzie, J. F.; Hu, Q.; le Roux, J. A.; Zank, G. P.
2012-11-01
Alfvén wave mixing (interaction) equations used in locally incompressible turbulence transport equations in the solar wind are analyzed from the perspective of linear wave theory. The connection between the wave mixing equations and non-WKB Alfvén wave driven wind theories is delineated. We discuss the physical wave energy equation and the canonical wave energy equation for non-WKB Alfvén waves and the WKB limit. Variational principles and conservation laws for the linear wave mixing equations for the Heinemann and Olbert non-WKB wind model are obtained. The connection with wave mixing equations used in locally incompressible turbulence transport in the solar wind is discussed.
Phase mixing versus nonlinear advection in drift-kinetic plasma turbulence
NASA Astrophysics Data System (ADS)
Schekochihin, A. A.; Parker, J. T.; Highcock, E. G.; Dellar, P. J.; Dorland, W.; Hammett, G. W.
2016-04-01
A scaling theory of long-wavelength electrostatic turbulence in a magnetised, weakly collisional plasma (e.g. drift-wave turbulence driven by ion temperature gradients) is proposed, with account taken both of the nonlinear advection of the perturbed particle distribution by fluctuating flows and of its phase mixing, which is caused by the streaming of the particles along the mean magnetic field and, in a linear problem, would lead to Landau damping. It is found that it is possible to construct a consistent theory in which very little free energy leaks into high velocity moments of the distribution function, rendering the turbulent cascade in the energetically relevant part of the wavenumber space essentially fluid-like. The velocity-space spectra of free energy expressed in terms of Hermite-moment orders are steep power laws and so the free-energy content of the phase space does not diverge at infinitesimal collisionality (while it does for a linear problem); collisional heating due to long-wavelength perturbations vanishes in this limit (also in contrast with the linear problem, in which it occurs at the finite rate equal to the Landau damping rate). The ability of the free energy to stay in the low velocity moments of the distribution function is facilitated by the `anti-phase-mixing' effect, whose presence in the nonlinear system is due to the stochastic version of the plasma echo (the advecting velocity couples the phase-mixing and anti-phase-mixing perturbations). The partitioning of the wavenumber space between the (energetically dominant) region where this is the case and the region where linear phase mixing wins its competition with nonlinear advection is governed by the `critical balance' between linear and nonlinear time scales (which for high Hermite moments splits into two thresholds, one demarcating the wavenumber region where phase mixing predominates, the other where plasma echo does).
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
USDA-ARS?s Scientific Manuscript database
The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...
Fuel-air mixing and combustion in a two-dimensional Wankel engine
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Schock, H. J.; Ramos, J. I.
1987-01-01
A two-equation turbulence model, an algebraic grid generalization method, and an approximate factorization time-linearized numerical technique are used to study the effects of mixture stratification at the intake port and gaseous fuel injection on the flow field and fuel-air mixing in a two-dimensional rotary engine model. The fuel distribution in the combustion chamber is found to be a function of the air-fuel mixture fluctuations at the intake port. It is shown that the fuel is advected by the flow field induced by the rotor and is concentrated near the leading apex during the intake stroke, while during compression, the fuel concentration is highest near the trailing apex and is lowest near the rotor. It is also found that the fuel concentration near the trailing apex and rotor is small except at high injection velocities.
Hyperentanglement purification using imperfect spatial entanglement.
Wang, Tie-Jun; Mi, Si-Chen; Wang, Chuan
2017-02-06
Because interactions between the photons and the environment leave entangled photon pairs in less-entangled or even mixed states, the security and efficiency of quantum communication decrease. We present an efficient hyperentanglement purification protocol that distills nonlocal high-fidelity hyperentangled Bell states in both polarization and spatial-mode degrees of freedom from ensembles of two-photon systems in mixed states using linear optics. Here, we consider the influence of photon loss in the channel, which is generally ignored in conventional entanglement purification and hyperentanglement purification (HEP) schemes. Compared with previous HEP schemes, our scheme decreases the requirement for nonlocal resources by employing high-dimensional mode-check measurement, and leads to a higher fidelity, especially in the range where conventional HEP schemes become invalid but our scheme still works.
NASA Astrophysics Data System (ADS)
Cho, Junhan
2014-03-01
Here we show how to control molecular interactions by mixing AB and AC diblock copolymers, where one copolymer exhibits an upper order-disorder transition and the other a lower disorder-order transition. Linear ABC triblock copolymers possessing both barotropic and baroplastic pairs are also taken into account. A recently developed random-phase approximation (RPA) theory and the self-consistent field theory (SCFT) for general compressible mixtures are used to analyze stability criteria and morphologies for the given systems. It is demonstrated that the copolymer systems can yield a variety of phase behaviors in their temperature and pressure dependence upon proper mixing conditions and compositions, which is caused by the delicate force fields generated in the systems. We acknowledge the financial support from the National Research Foundation of Korea and the Center for Photofunctional Energy Materials.
Guo, P; Huang, G H
2009-01-01
In this study, an inexact fuzzy chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is proposed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing inexact two-stage programming and mixed-integer linear programming techniques by incorporating uncertainties expressed as multiple uncertainties of intervals and dual probability distributions within a general optimization framework. The developed method can provide an effective linkage between the predefined environmental policies and the associated economic implications. Four special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it provides a linkage to predefined policies that have to be respected when a modeling effort is undertaken; secondly, it is useful for tackling uncertainties presented as intervals, probabilities, fuzzy sets and their incorporation; thirdly, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period, multi-level, and multi-option context; fourthly, the penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised solid waste-generation rates are violated. In a companion paper, the developed method is applied to a real case for the long-term planning of waste management in the City of Regina, Canada.
Kemmitt, G; Valverde-Garcia, P; Hufnagl, A; Bacci, L; Zotz, A
2015-04-01
The impact of the fungicides mancozeb, myclobutanil, and meptyldinocap on populations of Typhlodromus pyri Scheuten was evaluated under field conditions, when applied following the good agricultural practices recommended for their use. Two complementary statistical models were used to analyze the population reduction compared to the control: a linear mixed model to estimate the mean effect of the fungicide, and a generalized linear mixed model (proportional odds mixed model) to estimate the cumulative probability for those effects being equal or less than a specific IOBC class (International Organization for Biological and Integrated Control of Noxious Animal and Plants). Findings from 27 field experiments in a range of different vine-growing regions in Europe indicated that the use of mancozeb, myclobutanil, and meptyldinocap caused minimal impact on naturally occurring populations of T. pyri. Both statistical models confirmed that although adverse effects on T. pyri can occur under certain conditions after several applications of any of the three fungicides studied, the probability of the effects occurring is low and they will not persist. These methods demonstrated how data from a series of trials could be used to evaluate the variability of the effects caused by the chemical rather than relying on the worst-case findings from a single trial. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Investigation of micromixing by acoustically oscillated sharp-edges
Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco
2016-01-01
Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel. PMID:27158292
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects models is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830
General-Purpose Software For Computer Graphics
NASA Technical Reports Server (NTRS)
Rogers, Joseph E.
1992-01-01
NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications which provides the means to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.
NASA Astrophysics Data System (ADS)
Schemel, Laurence E.; Cox, Marisa H.; Runkel, Robert L.; Kimball, Briant A.
2006-08-01
The acidic discharge from Cement Creek, containing elevated concentrations of dissolved metals and sulphate, mixed with the circumneutral-pH Animas River over a several hundred metre reach (mixing zone) near Silverton, CO, during this study. Differences in concentrations of Ca, Mg, Si, Sr, and SO4(2-) between the creek and the river were sufficiently large for these analytes to be used as natural tracers in the mixing zone. In addition, a sodium chloride (NaCl) tracer was injected into Cement Creek, which provided a Cl- reference tracer in the mixing zone. Conservative transport of the dissolved metals and sulphate through the mixing zone was verified by mass balances and by linear mixing plots relative to the injected reference tracer. At each of seven sites in the mixing zone, five samples were collected at evenly spaced increments of the observed across-channel gradients, as determined by specific conductance. This created sets of samples that adequately covered the ranges of mixtures (mixing ratios, in terms of the fraction of Animas River water, %AR). Concentrations measured in each mixing zone sample and in the upstream Animas River and Cement Creek were used to compute %AR for the reference and natural tracers. Values of %AR from natural tracers generally showed good agreement with values from the reference tracer, but variability in discharge and end-member concentrations and analytical errors contributed to unexpected outlier values for both injected and natural tracers. The median value (MV) %AR (calculated from all of the tracers) reduced scatter in the mixing plots for the dissolved metals, indicating that the MV estimate reduced the effects of various potential errors that could affect any tracer.
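The %AR calculation above is standard two-end-member mixing, with the median across tracers giving the MV estimate. A small sketch with hypothetical end-member and sample concentrations (not the study's data):

```python
from statistics import median

def mixing_fraction(c_mix, c_ar, c_cc):
    """Fraction of Animas River water (%AR) in a two-end-member mixture,
    from a conservative tracer: c_mix = f*c_ar + (1-f)*c_cc."""
    return 100.0 * (c_mix - c_cc) / (c_ar - c_cc)

# hypothetical end-member concentrations: tracer -> (Animas River, Cement Creek)
end_members = {
    "Ca":  (40.0, 150.0),
    "SO4": (60.0, 500.0),
    "Cl":  (1.0,  30.0),   # injected NaCl reference tracer
}
sample = {"Ca": 73.0, "SO4": 192.0, "Cl": 9.7}  # one mixing-zone sample

per_tracer = [mixing_fraction(sample[t], *end_members[t]) for t in end_members]
mv_ar = median(per_tracer)   # median-value (MV) %AR across all tracers
print(round(mv_ar, 1))
```

Taking the median across tracers, as the study does, damps the effect of an outlier in any single tracer.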
NASA Astrophysics Data System (ADS)
Divine, D. V.; Godtliebsen, F.; Rue, H.
2012-01-01
The paper proposes an approach to assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximation of the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For a particular case of a linear accumulation model and absolutely dated tie points an analytical solution is found suggesting the Beta-distributed probability density on age estimates along the length of a proxy archive. In a general situation of uncertainties in the ages of the tie points the proposed method employs MCMC simulations of age-depth profiles yielding empirical confidence intervals on the constructed piecewise linear best guess timescale. It is suggested that the approach can be further extended to a more general case of a time-varying expected accumulation between the tie points. The approach is illustrated by using two ice and two lake/marine sediment cores representing the typical examples of paleoproxy archives with age models based on tie points of mixed origin.
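The Beta-distributed age result for a gamma-process accumulation model between absolutely dated tie points can be checked numerically. A sketch under the stated assumptions (i.i.d. Gamma increments; the parameter values are hypothetical): the fraction of total accumulation reached after k of n increments is Beta(k·alpha, (n−k)·alpha), whose mean is k/n.

```python
import numpy as np

rng = np.random.default_rng(3)

# layer increments between two absolutely dated tie points modelled as
# i.i.d. Gamma random variables; the elapsed fraction at an intermediate
# depth is then Beta-distributed
n_steps, k, alpha, n_sim = 50, 20, 2.0, 20_000
inc = rng.gamma(alpha, 1.0, (n_sim, n_steps))
cum = np.cumsum(inc, axis=1)
frac = cum[:, k - 1] / cum[:, -1]   # elapsed fraction after k of n increments

mean_theory = (k * alpha) / (n_steps * alpha)   # Beta mean = k/n
print(round(frac.mean(), 3), mean_theory)
```

The simulated mean matches the analytical Beta mean, illustrating why absolutely dated tie points pin down the age uncertainty in between.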
Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism
NASA Astrophysics Data System (ADS)
Aurell, Erik
2018-06-01
The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive, in a simple way, estimates that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.
Missing Data in Clinical Studies: Issues and Methods
Ibrahim, Joseph G.; Chu, Haitao; Chen, Ming-Hui
2012-01-01
Missing data are a prevailing problem in any type of data analyses. A participant variable is considered missing if the value of the variable (outcome or covariate) for the participant is not observed. In this article, various issues in analyzing studies with missing data are discussed. Particularly, we focus on missing response and/or covariate data for studies with discrete, continuous, or time-to-event end points in which generalized linear models, models for longitudinal data such as generalized linear mixed effects models, or Cox regression models are used. We discuss various classifications of missing data that may arise in a study and demonstrate in several situations that the commonly used method of throwing out all participants with any missing data may lead to incorrect results and conclusions. The methods described are applied to data from an Eastern Cooperative Oncology Group phase II clinical trial of liver cancer and a phase III clinical trial of advanced non–small-cell lung cancer. Although the main area of application discussed here is cancer, the issues and methods we discuss apply to any type of study. PMID:22649133
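The warning above about discarding all participants with missing data can be made concrete with a small simulation (hypothetical data, not from the cited trials): when missingness depends on a covariate related to the outcome, the complete-case mean is biased.

```python
import random

random.seed(42)

# outcome y depends on covariate x; y is then made missing with high
# probability when x is large, so complete cases under-represent large x
n = 10_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [2 * x + random.gauss(0, 1) for x in xs]
observed = [y for x, y in zip(xs, ys) if x <= 0 or random.random() >= 0.8]

full_mean = sum(ys) / len(ys)                 # mean before any deletion
cc_mean = sum(observed) / len(observed)       # "throw out the missing" mean
print(round(full_mean, 2), round(cc_mean, 2))
```

The complete-case estimate is far below the full-data mean, because the deleted participants are systematically those with large x and hence large y.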
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
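The design-based Monte Carlo idea above can be sketched in a few lines, assuming a permuted block design and a plain difference-in-means statistic in place of the model residuals the paper works with (all names and parameters below are illustrative):

```python
import random

def permuted_block_assignments(n, block_size, rng):
    """Two-arm assignment sequence generated by permuted blocks."""
    seq = []
    while len(seq) < n:
        block = [0] * (block_size // 2) + [1] * (block_size // 2)
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

def mc_randomization_test(outcomes, assignment, n_mc=2000, block_size=4, seed=1):
    """Design-based Monte Carlo randomization test: regenerate the
    randomization sequence n_mc times under the same procedure and
    compare the observed statistic to its randomization distribution."""
    rng = random.Random(seed)

    def stat(assign):
        t = [y for y, a in zip(outcomes, assign) if a == 1]
        c = [y for y, a in zip(outcomes, assign) if a == 0]
        return abs(sum(t) / len(t) - sum(c) / len(c))

    observed = stat(assignment)
    hits = sum(
        stat(permuted_block_assignments(len(outcomes), block_size, rng)) >= observed
        for _ in range(n_mc)
    )
    return (hits + 1) / (n_mc + 1)   # Monte Carlo p-value

r = random.Random(7)
assignment = permuted_block_assignments(40, 4, r)
outcomes = [r.gauss(0, 1) + 0.8 * a for a in assignment]  # simulated effect
p_value = mc_randomization_test(outcomes, assignment)
print(p_value)
```

Because reference sequences are regenerated with the same procedure used in the trial, the test automatically incorporates the design, which is the point the paper makes.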
Neurodevelopment in Early Childhood Affected by Prenatal Lead Exposure and Iron Intake.
Shah-Kulkarni, Surabhi; Ha, Mina; Kim, Byung-Mi; Kim, Eunjeong; Hong, Yun-Chul; Park, Hyesook; Kim, Yangho; Kim, Bung-Nyun; Chang, Namsoo; Oh, Se-Young; Kim, Young Ju; Lee, Boeun; Ha, Eun-Hee
2016-01-01
No safe threshold level of lead exposure in children has been recognized. Also, information on the shielding effect of maternal dietary iron intake during pregnancy against the adverse effects of prenatal lead exposure on children's postnatal neurocognitive development is very limited. We examined the association of prenatal lead exposure and neurodevelopment in children at 6, 12, 24, and 36 months and the protective action of maternal dietary iron intake against the impact of lead exposure. The study participants comprise 965 pregnant women and their offspring, drawn from the total participants enrolled in the Mothers and Children's Environmental Health study, a prospective birth cohort study. Generalized linear model and linear mixed model analyses were performed to analyze the effect of prenatal lead exposure and mothers' dietary iron intake on children's cognitive development at 6, 12, 24, and 36 months. Maternal late-pregnancy lead was marginally associated with deficits in the mental development index (MDI) of children at 6 months. Mothers having less than the 75th percentile of dietary iron intake during pregnancy showed a significant increase in the harmful effect of late-pregnancy lead exposure on MDI at 6 months. Linear mixed model analyses showed a significant detrimental effect of prenatal lead exposure in late pregnancy on cognitive development up to 36 months in children of mothers having less dietary iron intake during pregnancy. Thus, our findings underscore the importance of reducing prenatal lead exposure and ensuring adequate maternal iron intake for better neurodevelopment in children.
Vivas, M; Silveira, S F; Viana, A P; Amaral, A T; Cardoso, D L; Pereira, M G
2014-07-02
Diallel crossing methods provide information regarding the performance of genitors between themselves and their hybrid combinations. However, with a large number of parents, the number of hybrid combinations that can be obtained and evaluated becomes limiting. One option regarding the number of parents involved is the adoption of circulant diallels. However, information is lacking regarding diallel analysis using mixed models. This study aimed to evaluate the efficacy of the linear mixed model method to estimate, for the variable resistance to foliar fungal diseases, components of general and specific combining ability in a circulant diallel with different s values. Subsequently, 50 diallels were simulated for each s value, and the correlations and estimates of the combining abilities of the different diallel combinations were analyzed. The circulant diallel method using mixed modeling was effective in the classification of genitors regarding their combining abilities relative to the complete diallels. The number of crosses per genitor (s) composing the circulant diallel and the estimated heritability affect the combining ability estimates. With three crosses per parent, it is possible to obtain good concordance (correlation above 0.8) between the combining ability estimates.
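The circulant design itself is easy to construct: each parent is crossed with the next s parents in circular order, so only p·s crosses are needed instead of the full diallel's p·(p−1)/2 or more. A small sketch (parent names hypothetical; choosing s below p/2 avoids reciprocal duplicates):

```python
def circulant_diallel(parents, s):
    """Crosses for a circulant diallel: parent i is crossed with the
    next s parents in circular order, giving p*s crosses in total and
    each parent taking part in 2*s crosses."""
    p = len(parents)
    return [(parents[i], parents[(i + j) % p])
            for i in range(p) for j in range(1, s + 1)]

parents = ["P1", "P2", "P3", "P4", "P5", "P6", "P7"]
crosses = circulant_diallel(parents, s=3)
print(len(crosses))   # 7 parents x s=3 -> 21 crosses
```

With s = 3, as in the abstract's "three crosses per parent" finding, each genitor appears in six crosses, which is what makes combining-ability estimation feasible at a fraction of the full diallel's cost.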
NASA Technical Reports Server (NTRS)
Banse, Karl; Yong, Marina
1990-01-01
As a proxy for satellite CZCS observations and concurrent measurements of primary production rates, data from 138 stations occupied seasonally during 1967-1968 in the offshore eastern tropical Pacific were analyzed in terms of six temporal groups and four current regimes. Multiple linear regressions on column production Pt show that simulated satellite pigment is generally weakly correlated, and sometimes uncorrelated, with Pt, and that incident irradiance, sea surface temperature, nitrate, transparency, and depths of the mixed layer or nitracline assume little or no importance. After a proxy for the light-saturated chlorophyll-specific photosynthetic rate P(max) is added, the coefficient of determination ranges from 0.55 to 0.91 (median of 0.85) for the 10 cases. In stepwise multiple linear regressions the P(max) proxy is the best predictor for Pt.
Attitude dynamics simulation subroutines for systems of hinge-connected rigid bodies
NASA Technical Reports Server (NTRS)
Fleischer, G. E.; Likins, P. W.
1974-01-01
Several computer subroutines are designed to provide the solution to minimum-dimension sets of discrete-coordinate equations of motion for systems consisting of an arbitrary number of hinge-connected rigid bodies assembled in a tree topology. In particular, these routines may be applied to: (1) the case of completely unrestricted hinge rotations, (2) the totally linearized case (all system rotations are small), and (3) the mixed, or partially linearized, case. The use of the programs in each case is demonstrated using a five-body spacecraft and attitude control system configuration. The ability of the subroutines to accommodate prescribed motions of system bodies is also demonstrated. Complete listings and user instructions are included for these routines (written in FORTRAN V) which are intended as multi- and general-purpose tools in the simulation of spacecraft and other complex electromechanical systems.
ERIC Educational Resources Information Center
McCluskey, James J.
1997-01-01
A study of 160 undergraduate journalism students trained to design projects (stacks) using HyperCard on Macintosh computers determined that right-brain dominant subjects outperformed left-brain and mixed-brain dominant subjects, whereas left-brain dominant subjects outperformed mixed-brain dominant subjects in several areas. Recommends future…
Perturbation theory for cosmologies with nonlinear structure
NASA Astrophysics Data System (ADS)
Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy
2017-11-01
The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.
Evaluation of goal kicking performance in international rugby union matches.
Quarrie, Kenneth L; Hopkins, Will G
2015-03-01
Goal kicking is an important element in rugby but has been the subject of minimal research. To develop and apply a method to describe the on-field pattern of goal kicking and rank the goal kicking performance of players in international rugby union matches. Longitudinal observational study. A generalized linear mixed model was used to analyze goal-kicking performance in a sample of 582 international rugby matches played from 2002 to 2011. The model adjusted for kick distance, kick angle, a rating of the importance of each kick, and venue-related conditions. Overall, 72% of the 6769 kick attempts were successful. Forty-five percent of points scored during the matches resulted from goal kicks, and in 5.7% of the matches the result of the match hinged on the outcome of a kick attempt. There was an extremely large decrease in success with increasing distance (odds ratio for 2 SD distance 0.06, 90% confidence interval 0.05-0.07) and a small decrease with increasingly acute angle away from the mid-line of the goal posts (odds ratio for 2 SD angle 0.44, 0.39-0.49). Differences between players were typically small (odds ratio for 2 between-player SD 0.53, 0.45-0.65). The generalized linear mixed model with its random-effect solutions provides a tool for ranking the performance of goal kickers in rugby. This modelling approach could be applied to other performance indicators in rugby and in other sports in which discrete outcomes are measured repeatedly on players or teams. Copyright © 2015. Published by Elsevier Ltd.
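The reported odds ratios can be turned back into success probabilities with simple arithmetic, using the overall 72% success rate as a baseline (illustrative back-of-envelope use of the abstract's numbers, not the full fitted model):

```python
def shift_probability(p_base, odds_ratio):
    """Apply an odds ratio to a baseline success probability."""
    odds = p_base / (1 - p_base)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# overall success was 72%; OR for a 2 SD increase in distance was 0.06,
# OR for a 2 SD more acute angle was 0.44
p_far = shift_probability(0.72, 0.06)
p_angled = shift_probability(0.72, 0.44)
print(round(p_far, 2), round(p_angled, 2))
```

A 2 SD longer kick drops the baseline 72% success to roughly 13%, while a 2 SD more acute angle drops it only to about 53%, making concrete the "extremely large" versus "small" effect sizes the abstract describes.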
Emergency department length of stay for ethanol intoxication encounters.
Klein, Lauren R; Driver, Brian E; Miner, James R; Martel, Marc L; Cole, Jon B
2017-12-08
Emergency Department (ED) encounters for ethanol intoxication are becoming increasingly common. The purpose of this study was to explore factors associated with ED length of stay (LOS) for ethanol intoxication encounters. This was a multi-center, retrospective, observational study of patients presenting to the ED for ethanol intoxication. Data were abstracted from the electronic medical record. To explore factors associated with ED LOS, we created a mixed-effects generalized linear model. We identified 18,664 eligible patients from 6 different EDs during the study period (2012-2016). The median age was 37 years, 69% were male, and the median ethanol concentration was 213 mg/dL. Median LOS was 348 min (range 43-1658). Using a mixed-effects generalized linear model, independent variables associated with a significant increase in ED LOS included use of parenteral sedation (beta=0.30, increase in LOS=34%), laboratory testing (beta=0.21, increase in LOS=23%), as well as the hour of arrival to the ED, such that patients arriving to the ED during evening hours (between 18:00 and midnight) had up to an 86% increase in LOS. Variables not significantly associated with an increase in LOS included age, gender, ethanol concentration, psychiatric disposition, using the ED frequently for ethanol intoxication, CT use, and daily ED volume. Variables such as diagnostic testing, treatments, and hour of arrival may influence ED LOS in patients with acute ethanol intoxication. Identification and further exploration of these factors may assist in developing hospital and community based improvements to modify LOS in this population. Copyright © 2017 Elsevier Inc. All rights reserved.
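The percent increases quoted alongside the beta coefficients follow from the log link presumably used in the mixed-effects generalized linear model: a coefficient beta corresponds to a (exp(beta) − 1) × 100 percent change in expected LOS.

```python
import math

def pct_change(beta):
    """Percent change in expected LOS per unit predictor under a log link."""
    return (math.exp(beta) - 1) * 100

print(round(pct_change(0.21)))  # laboratory testing: ~23% longer LOS
print(round(pct_change(0.30)))  # parenteral sedation: ~35% from the rounded
                                # coefficient (the abstract reports 34%,
                                # presumably from the unrounded estimate)
```

The lab-testing figure reproduces the abstract's 23% exactly; the sedation figure is within a point of the reported 34%, consistent with rounding of the published coefficient.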
Koom, Woong Sub; Choi, Mi Yeon; Lee, Jeongshim; Park, Eun Jung; Kim, Ju Hye; Kim, Sun-Hyun; Kim, Yong Bae
2016-06-01
The purpose of this study was to evaluate the efficacy of art therapy to control fatigue in cancer patients during a course of radiotherapy and its impact on quality of life (QoL). Fifty cancer patients receiving radiotherapy received weekly art therapy sessions using famous painting appreciation. Fatigue and QoL were assessed using the Brief Fatigue Inventory (BFI) Scale and the Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) at baseline before starting radiotherapy, every week for 4 weeks during radiotherapy, and at the end of radiotherapy. Mean changes of scores over time were analyzed using a generalized linear mixed model. Of the 50 patients, 34 (68%) participated in 4 sessions of art therapy. Generalized linear mixed models testing for the effect of time on mean score changes showed no significant changes in scores from baseline for the BFI and FACIT-F. The mean BFI score and FACIT-F total score changed from 3.1 to 2.7 and from 110.7 to 109.2, respectively. Art therapy based on the appreciation of famous paintings led to increases in self-esteem by increasing self-realization and forming social relationships. Fatigue and QoL did not deteriorate during radiotherapy in cancer patients receiving art therapy. Despite the single-arm design and small number of participants in this pilot study, it provides a strong initial demonstration that art therapy based on famous-painting appreciation is worthy of further study for fatigue and QoL improvement. Further, it can play an important role in routine practice in cancer patients during radiotherapy.
Role of diversity in ICA and IVA: theory and applications
NASA Astrophysics Data System (ADS)
Adalı, Tülay
2016-05-01
Independent component analysis (ICA) has been the most popular approach for solving the blind source separation problem. Starting from a simple linear mixing model and the assumption of statistical independence, ICA can recover a set of linearly-mixed sources to within a scaling and permutation ambiguity. It has been successfully applied to numerous data analysis problems in areas as diverse as biomedicine, communications, finance, geophysics, and remote sensing. ICA can be achieved using different types of diversity—statistical property—and can be posed to simultaneously account for multiple types of diversity such as higher-order-statistics, sample dependence, non-circularity, and nonstationarity. A recent generalization of ICA, independent vector analysis (IVA), generalizes ICA to multiple data sets and adds the use of one more type of diversity, statistical dependence across the data sets, for jointly achieving independent decomposition of multiple data sets. With the addition of each new diversity type, identification of a broader class of signals becomes possible, and in the case of IVA, this includes sources that are independent and identically distributed Gaussians. We review the fundamentals and properties of ICA and IVA when multiple types of diversity are taken into account, and then ask the question whether diversity plays an important role in practical applications as well. Examples from various domains are presented to demonstrate that in many scenarios it might be worthwhile to jointly account for multiple statistical properties. This paper is submitted in conjunction with the talk delivered for the "Unsupervised Learning and ICA Pioneer Award" at the 2016 SPIE Conference on Sensing and Analysis Technologies for Biomedical and Cognitive Applications.
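A minimal numerical sketch of the ICA model described above, with a hypothetical 2x2 mixing matrix and a simple kurtosis contrast standing in for the richer diversity measures the paper discusses: after whitening, only a rotation ambiguity remains in two dimensions, and maximizing non-Gaussianity over that rotation recovers the sources up to the scaling/permutation/sign ambiguity noted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# two independent, non-Gaussian sources: sub-Gaussian (uniform) and
# super-Gaussian (Laplace)
n = 5000
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[0.8, 0.3], [0.4, 0.9]])   # mixing matrix (unknown in practice)
X = A @ S                                # observed linear mixtures

# whiten the mixtures (zero mean, identity covariance)
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = np.diag(d ** -0.5) @ E.T @ Xc

# search the remaining rotation for maximal total |excess kurtosis|
kurt = lambda y: np.mean(y ** 4) - 3
def rotate(a):
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])
angles = np.linspace(0, np.pi / 2, 180)
best = max(angles, key=lambda a: sum(abs(kurt(y)) for y in rotate(a) @ Z))
Y = rotate(best) @ Z   # recovered sources, up to scaling/permutation/sign

# each recovered component should match one true source almost perfectly
C = np.abs(np.corrcoef(np.vstack([S, Y]))[:2, 2:])
print(np.round(C, 2))
```

Note that an i.i.d. Gaussian source has zero excess kurtosis, so this contrast would fail for two Gaussians, which is exactly the identifiability gap that IVA's cross-data-set dependence closes.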
Diegelmann, Mona; Jansen, Carl-Philipp; Wahl, Hans-Werner; Schilling, Oliver K; Schnabel, Eva-Luisa; Hauer, Klaus
2018-06-01
Physical activity (PA) may counteract depressive symptoms in nursing home (NH) residents considering biological, psychological, and person-environment transactional pathways. Empirical results, however, have remained inconsistent. Addressing potential shortcomings of previous research, we examined the effect of a whole-ecology PA intervention program on NH residents' depressive symptoms using generalized linear mixed-models (GLMMs). We used longitudinal data from residents of two German NHs who were included without any pre-selection regarding physical and mental functioning (n = 163, M age = 83.1, 53-100 years; 72% female) and assessed on four occasions each three months apart. Residents willing to participate received a 12-week PA training program. Afterwards, the training was implemented in weekly activity schedules by NH staff. We ran GLMMs to account for the highly skewed depressive symptoms outcome measure (12-item Geriatric Depression Scale-Residential) by using gamma distribution. Exercising (n = 78) and non-exercising residents (n = 85) showed a comparable level of depressive symptoms at pretest. For exercising residents, depressive symptoms stabilized from pretest through posttest and follow-up, whereas for non-exercising residents depressive symptoms increased between pretest and posttest and increased further at follow-up. Implementing an innovative PA intervention appears to be a promising approach to prevent the increase of NH residents' depressive symptoms. At the data-analytical level, GLMMs seem to be a promising tool for intervention research at large, because all longitudinally available data points and non-normality of outcome data can be considered.
Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari
2017-09-01
Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages has limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations to the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.
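The two-step logic above can be illustrated schematically (a deliberate simplification, not the authors' estimator): a subject-level summary of the missingness stands in for the predicted random effect of a logistic mixed model, and the outcome model then adjusts for it. All quantities below are simulated and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate subjects whose random effect b drives both the outcome and
# the propensity of missingness (a nonignorable mechanism)
n_subj, n_visits = 200, 6
b = rng.normal(0, 1, n_subj)
x = rng.normal(0, 1, (n_subj, n_visits))
y = 1.0 + 0.5 * x + b[:, None] + rng.normal(0, 1, (n_subj, n_visits))
p_miss = 1 / (1 + np.exp(1.0 - 1.5 * b))        # logistic(-1 + 1.5*b)
miss = rng.random((n_subj, n_visits)) < p_miss[:, None]

# Step 1: analyse the missingness and predict a subject-level effect
# (a shrunken empirical logit stands in for the mixed-model prediction)
m = miss.sum(axis=1)
b_hat = np.log((m + 0.5) / (n_visits - m + 0.5))

# Step 2: fit the outcome model adjusting for the predicted effect
obs = ~miss
b_full = np.repeat(b_hat[:, None], n_visits, axis=1)
X = np.column_stack([np.ones(obs.sum()), x[obs], b_full[obs]])
beta, *_ = np.linalg.lstsq(X, y[obs], rcond=None)
print(np.round(beta, 2))  # intercept, covariate effect (true value 0.5), b_hat term
```

The appeal the paper emphasizes is computational: each step is an off-the-shelf model fit, avoiding the joint likelihood of the shared random effects model.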
NASA Astrophysics Data System (ADS)
Wang, Min
2017-06-01
This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. We then establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. We also generalize the corresponding results for GMVIP(F, φ, K) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.
NASA Astrophysics Data System (ADS)
Terrano, Daniel; Tsuper, Ilona; Maraschky, Adam; Holland, Nolan; Streletzky, Kiril
Temperature-sensitive nanoparticles were generated from a construct (H20F) of three chains of elastin-like polypeptides (ELP) linked to a negatively charged foldon domain. This ELP system was mixed at different ratios with linear ELP chains (H40L), which lack the foldon domain. The mixed system is soluble at room temperature and, at a transition temperature (Tt), forms swollen micelles with the hydrophobic linear chains hidden inside. The system was studied using depolarized dynamic light scattering (DDLS) and static light scattering (SLS) to determine the size, shape, and internal structure of the mixed micelles. Mixed micelles with equal parts H20F and H40L showed a constant apparent hydrodynamic radius of 40-45 nm over the concentration window from 25:25 to 60:60 μM (1:1 ratio). At a fixed 50 μM concentration of H20F, varying the H40L concentration from 5 to 80 μM resulted in linear growth of the hydrodynamic radius from about 11 to about 62 nm, along with a 1000-fold increase in the VH signal. A possible simple model explaining the growth of the swollen micelles is considered. The VH signal can indicate elongation of the particle geometry or could result from anisotropic properties of the micelle core. SLS was used to determine the molecular weight and radius of gyration of the micelles, to help identify their structure and morphology and the tangible cause of the VH signal.
NASA Astrophysics Data System (ADS)
Zhao, H.; Hao, Y.; Liu, X.; Hou, M.; Zhao, X.
2018-04-01
Hyperspectral remote sensing is a completely non-invasive technology for the measurement of cultural relics, and has been successfully applied to the identification and analysis of pigments in Chinese historical paintings. Although mixed pigments are very common in Chinese historical paintings, their quantitative analysis remains unsolved. In this research, we took two typical mineral pigments, vermilion and stone yellow, as examples, prepared precisely mixed samples from them, and measured their spectra in the laboratory. Both the fully constrained least squares (FCLS) method and derivative of ratio spectroscopy (DRS) were applied to the mixed spectra. Experimental results showed that the mixed spectra of vermilion and stone yellow had strong nonlinear mixing characteristics, but at some bands linear unmixing could also achieve satisfactory results. DRS using strongly linear bands can reach much higher accuracy than FCLS using all bands.
Approximating a nonlinear advanced-delayed equation from acoustics
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2016-10-01
We approximate the solution of a particular non-linear mixed-type functional differential equation from physiology, the mucosal wave model of vocal oscillation during phonation. The equation models a superficial wave propagating through the tissues. The numerical scheme is adapted from the work presented in [1, 2, 3], using the homotopy analysis method (HAM) to solve the non-linear mixed-type equation under study.
We investigated the use of output from Bayesian stable isotope mixing models as constraints for a linear inverse food web model of a temperate intertidal seagrass system in the Marennes-Oléron Bay, France. Linear inverse modeling (LIM) is a technique that estimates a complete net...
Improving the Power of GWAS and Avoiding Confounding from Population Stratification with PC-Select
Tucker, George; Price, Alkes L.; Berger, Bonnie
2014-01-01
Using a reduced subset of SNPs in a linear mixed model can improve power for genome-wide association studies, yet this can result in insufficient correction for population stratification. We propose a hybrid approach using principal components that does not inflate statistics in the presence of population stratification and improves power over standard linear mixed models. PMID:24788602
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical, to minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used to represent motion between two coordinate frames.) Single-precision and double-precision floating point arithmetic are available, in addition to the standard double-precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point and one for integer; the actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
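The quaternion product the package defines as an infix operator can be illustrated outside Ada. Below is a minimal Python sketch using the scalar-first Hamilton convention; the abstract does not specify the package's component ordering, so that choice is an assumption.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored scalar-first as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

p = np.array([1.0, 2.0, 3.0, 4.0])
q = np.array([0.5, -1.0, 2.0, 0.0])
pq = qmul(p, q)
print(pq)
```

Two quick sanity checks on the formula: the identity quaternion [1, 0, 0, 0] acts as a multiplicative identity, and the norm of a product equals the product of the norms.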
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, Brendan; Polizzi, Eric
2013-03-01
The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e. H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.
Optimization of light quality from color mixing light-emitting diode systems for general lighting
NASA Astrophysics Data System (ADS)
Thorseth, Anders
2012-03-01
Given the problem of metamerism inherent in color mixing in light-emitting diode (LED) systems with more than three distinct colors, a method has been developed for optimizing the spectral output of a multicolor LED system with regard to standardized light quality parameters. The composite spectral power distribution from the LEDs is simulated using spectral radiometric measurements of single commercially available LEDs at varying input power, to account for the efficiency droop and other non-linear effects in electrical power vs. light output. The method uses electrical input powers as input parameters in a randomized steepest descent optimization. The resulting spectral power distributions are evaluated with regard to light quality using the standard characteristics: CIE color rendering index, correlated color temperature, and chromaticity distance. The results indicate Pareto optimal boundaries for each system, mapping the capabilities of the simulated lighting systems with regard to the light quality characteristics.
Wildhaber, M.L.; Holan, S.H.; Bryan, J.L.; Gladish, D.W.; Ellersieck, M.
2011-01-01
In 2003, the US Army Corps of Engineers initiated the Pallid Sturgeon Population Assessment Program (PSPAP) to monitor pallid sturgeon and the fish community of the Missouri River. The power analysis of PSPAP presented here was conducted to guide sampling design and effort decisions. The PSPAP sampling design has a nested structure, with multiple gear subsamples within a river bend. Power analyses were based on a normal linear mixed model, using a mixed cell means approach, with variance estimates from the original data. It was found that, at current effort levels, at least 20 years of monitoring for pallid sturgeon and 10 years for shovelnose sturgeon are needed to detect a 5% annual decline. Modified bootstrap simulations suggest that power estimates from the original data are conservative due to excessive zero fish counts. In general, the approach presented is applicable to a wide array of animal monitoring programs.
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
Spectroscopic studies on Solvatochromism of mixed-chelate copper(II) complexes using MLR technique
NASA Astrophysics Data System (ADS)
Golchoubian, Hamid; Moayyedi, Golasa; Fazilati, Hakimeh
2012-01-01
Mixed-chelate copper(II) complexes with the general formula [Cu(acac)(diamine)]X, where acac = acetylacetonate ion, diamine = N,N-dimethyl-N'-benzyl-1,2-diaminoethane, and X = BPh4-, PF6-, ClO4-, or BF4-, have been prepared. The complexes were characterized on the basis of elemental analysis, molar conductance, and UV-vis and IR spectroscopies. The complexes are solvatochromic, and their solvatochromism was investigated by visible spectroscopy. All complexes demonstrated positive solvatochromism, and among them [Cu(acac)(diamine)]BPh4·H2O showed the highest Δνmax value. To explore the mechanism of interaction between solvent molecules and the complexes, solvent parameters such as DN, AN, α, and β were examined using the multiple linear regression (MLR) method. The statistical results suggest that the DN parameter of the solvent makes the dominant contribution to the shift of the d-d absorption band of the complexes.
Mean-trajectory approximation for electronic and vibrational-electronic nonlinear spectroscopy
NASA Astrophysics Data System (ADS)
Loring, Roger F.
2017-04-01
Mean-trajectory approximations permit the calculation of nonlinear vibrational spectra from semiclassically quantized trajectories on a single electronically adiabatic potential surface. By describing electronic degrees of freedom with classical phase-space variables and subjecting these to semiclassical quantization, mean-trajectory approximations may be extended to compute both nonlinear electronic spectra and vibrational-electronic spectra. A general mean-trajectory approximation for both electronic and nuclear degrees of freedom is presented, and the results for purely electronic and for vibrational-electronic four-wave mixing experiments are quantitatively assessed for harmonic surfaces with linear electronic-nuclear coupling.
Viscoelastic stability in a single-screw channel flow
NASA Astrophysics Data System (ADS)
Agbessi, Y.; Bu, L. X.; Béreaux, Y.; Charmeau, J.-Y.
2018-05-01
In this work, we perform a linear stability analysis on pressure and drag flows of an Upper Convected Maxwell viscoelastic fluid. We use the well-recognised method of expanding the disturbances in Chebyshev polynomials and solve the resulting generalized eigenvalue problem with a spectral collocation method. Both the level of elasticity and the back-pressure are varied. In a second stage, recent analytic solutions of viscoelastic fluid flows in slowly varying sections [1] are used to extend this stability analysis to flows in a compression or a diverging section of a single-screw channel, for example a wave mixing screw.
Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S
2016-01-01
Maternal-offspring gene interactions, also known as maternal-fetal genotype (MFG) incompatibilities, are neglected in complex disease and quantitative trait studies. They are implicated in diseases with onset from birth to adulthood, but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model in which maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show that they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which considers only the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.
Epidemiological Survey of Dyslipidemia in Civil Aviators in China from 2006 to 2011
Zhao, Rongfu; Xiao, Dan; Fan, Xiaoying; Ge, Zesong; Wang, Linsheng; Yan, Tiecheng; Wang, Jianzhi; Wei, Qixin; Zhao, Yan
2014-01-01
Aim. This study aimed to analyze blood lipid levels and the temporal trend and age distribution of dyslipidemia in civil aviators in China. Methods. A total of 305 Chinese aviators were randomly selected and followed up from 2006 to 2011. Their total cholesterol (TC), triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) levels were evaluated annually. Mean values for each parameter by year were compared using a linear mixed-effects model. The temporal trends of borderline-high, high, and low status for each index, and of overall borderline-high lipid levels, hyperlipidemia, and dyslipidemia by year, were tested using a generalized linear mixed model. Results. The aviators' TC (F = 4.33, P < 0.01), HDL-C (F = 23.25, P < 0.01), and LDL-C (F = 6.13, P < 0.01) values differed across years. The prevalence of dyslipidemia (F = 5.53, P < 0.01), borderline-high lipid levels (F = 6.52, P < 0.01), and hyperlipidemia (F = 3.90, P < 0.01) also differed across years. The prevalence rates for hyperlipidemia and dyslipidemia were highest in the 41–50-year-old and 31–40-year-old groups. Conclusions. Civil aviators in China had high rates of dyslipidemia and borderline-high lipid levels and presented with dyslipidemia at younger ages than other Chinese populations. PMID:24693285
High linearity current-commutating passive mixer employing a simple resistor bias
NASA Astrophysics Data System (ADS)
Rongjiang, Liu; Guiliang, Guo; Yuepeng, Yan
2013-03-01
A high-linearity current-commutating passive mixer, comprising a mixing cell and a transimpedance amplifier (TIA), is introduced. It employs the resistor in the TIA to reduce the source and gate voltages of the mixing cell, so that optimum linearity and maximally symmetric switching operation are obtained at the same time. The mixer is implemented in a 0.25 μm CMOS process. Tests show that it achieves an input third-order intercept point of 13.32 dBm, a conversion gain of 5.52 dB, and a single-sideband noise figure of 20 dB.
Shape functions for velocity interpolation in general hexahedral cells
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2002-01-01
Numerical methods for grids with irregular cells require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element (CVMFE) methods, vector shape functions approximate velocities and vector test functions enforce a discrete form of Darcy's law. In this paper, a new vector shape function is developed for use with irregular, hexahedral cells (trilinear images of cubes). It interpolates velocities and fluxes quadratically, because as shown here, the usual Piola-transformed shape functions, which interpolate linearly, cannot match uniform flow on general hexahedral cells. Truncation-error estimates for the shape function are demonstrated. CVMFE simulations of uniform and non-uniform flow with irregular meshes show first- and second-order convergence of fluxes in the L2 norm in the presence and absence of singularities, respectively.
Mössler, Karin; Gold, Christian; Aßmus, Jörg; Schumacher, Karin; Calvet, Claudine; Reimer, Silke; Iversen, Gun; Schmid, Wolfgang
2017-09-21
This study examined whether the therapeutic relationship in music therapy with children with Autism Spectrum Disorder predicts generalized changes in social skills. Participants (4-7 years, N = 48) were assessed at baseline and at 5 and 12 months. The therapeutic relationship, as observed from session videos, and generalized change in social skills, as judged by independent blinded assessors and parents, were evaluated using standardized tools (Assessment of the Quality of Relationship; ADOS; SRS). Linear mixed-effects models showed significant interaction effects between the therapeutic relationship and several outcomes at 5 and 12 months. We found the music therapeutic relationship to be an important predictor of the development of social skills, as well as of communication and language specifically.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat-to-protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day FPR records from the first three lactations of Iranian buffaloes, collected on 523 dairy herds between 1996 and 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records using the non-linear mixed model procedure (PROC NLMIXED) in SAS, and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations, respectively, whereas the Wood, Dhanoa and Sikka mixed models provided the best fit in third-parity buffaloes. Evaluation of first-, second- and third-lactation features showed that all models, except the Dijkstra model in the third lactation, under-predicted the test day at which daily FPR was at its minimum, while the minimum FPR itself was over-predicted by all equations. Overall, non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
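Of the curves compared, Wood's model is the classic one and happens to linearize under a log transform. The sketch below fits it by ordinary least squares on the log scale; the data are synthetic, not the buffalo records, and the fit omits the NLMIXED-style random effects the study used.

```python
import numpy as np

def fit_wood(t, y):
    """Fit Wood's curve y = a * t**b * exp(-c*t) by log-linearization:
    log y = log(a) + b*log(t) - c*t is linear in (log a, b, c)."""
    X = np.column_stack([np.ones_like(t), np.log(t), -t])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    log_a, b, c = coef
    return np.exp(log_a), b, c

t = np.arange(1.0, 11.0)                   # ten monthly test days
a_true, b_true, c_true = 1.2, 0.3, 0.05
y = a_true * t**b_true * np.exp(-c_true * t)
a, b, c = fit_wood(t, y)
print(a, b, c)                             # recovers 1.2, 0.3, 0.05
```

On noise-free data the log-linear fit recovers the generating parameters exactly; with real test-day records, the log transform also changes the error structure, which is one reason the study fitted the curves by non-linear mixed models instead.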
Beyond generalized Proca theories
NASA Astrophysics Data System (ADS)
Heisenberg, Lavinia; Kase, Ryotaro; Tsujikawa, Shinji
2016-09-01
We consider higher-order derivative interactions beyond second-order generalized Proca theories that propagate only the three desired polarizations of a massive vector field besides the two tensor polarizations from gravity. These new interactions follow the similar construction criteria to those arising in the extension of scalar-tensor Horndeski theories to Gleyzes-Langlois-Piazza-Vernizzi (GLPV) theories. On the isotropic cosmological background, we show the existence of a constraint with a vanishing Hamiltonian that removes the would-be Ostrogradski ghost. We study the behavior of linear perturbations on top of the isotropic cosmological background in the presence of a matter perfect fluid and find the same number of propagating degrees of freedom as in generalized Proca theories (two tensor polarizations, two transverse vector modes, and two scalar modes). Moreover, we obtain the conditions for the avoidance of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations. We observe key differences in the scalar sound speed, which is mixed with the matter sound speed outside the domain of generalized Proca theories.
NASA Astrophysics Data System (ADS)
Huang, Wen Deng; Chen, Guang De; Yuan, Zhao Lin; Yang, Chuang Hua; Ye, Hong Gang; Wu, Ye Long
2016-02-01
Theoretical investigations of the interface optical phonons, electron-phonon couplings, and ternary mixing effects in zinc-blende spherical quantum dots are carried out using the dielectric continuum model and the modified random-element isodisplacement model. The dispersion curves, electron-phonon coupling strengths, and ternary mixing effects for interface optical phonons in a single zinc-blende GaN/AlxGa1-xN spherical quantum dot are calculated and discussed in detail. The numerical results show that there are three branches of interface optical phonons: one in the low-frequency region and two in the high-frequency region. Interface optical phonons with small quantum number l contribute most to the electron-phonon interactions. Ternary mixing also has an important influence on the interface optical phonon properties in a single zinc-blende GaN/AlxGa1-xN quantum dot. As the Al content increases, the interface optical phonon frequencies change linearly and the electron-phonon coupling strengths change non-linearly in the high-frequency region, whereas in the low-frequency region the frequencies change non-linearly and the coupling strengths change linearly.
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in the growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best-fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model best captured the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students without disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers with learning disabilities. The results underscore the need for further research on how to appropriately model students' mathematics trajectories and for policy attention to mathematics achievement gaps. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
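The fixed-effects skeleton of a piecewise linear growth model can be sketched with ordinary least squares. This is a minimal illustration on a synthetic trajectory with the knot placed arbitrarily at year 4; the study's actual models add per-student random effects on top of this structure.

```python
import numpy as np

def fit_piecewise_linear(t, y, knot):
    """Least-squares fit of y = b0 + b1*t + b2*max(t - knot, 0):
    the slope is b1 before the knot and b1 + b2 after it."""
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

t = np.linspace(0.0, 9.0, 50)              # years since kindergarten
y = 10.0 + 4.0 * t - 2.5 * np.maximum(t - 4.0, 0.0)
b0, b1, b2 = fit_piecewise_linear(t, y, knot=4.0)
print(b0, b1, b2)                          # ~ 10.0, 4.0, -2.5
```

The hinge term max(t - knot, 0) is what lets a single linear model express two growth rates, which is the sense in which the piecewise specification can outperform a single line or a quadratic.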
Hall-Aspland, S A; Hall, A P; Rogers, T L
2005-03-01
Mixing models are used to determine diets when the number of prey items is greater than one; however, a limitation of the linear mixing method is the lack of a unique solution when the number of potential sources is greater than the number (n) of isotopic signatures + 1. Using the IsoSource program, all possible combinations of source contributions (0-100%) in preselected small increments can be examined and a range of values produced for each sample analysed. We propose instead the use of a Moore-Penrose (M-P) pseudoinverse, which involves only the inverse of a 2x2 matrix. This is easily generalized to the case of a single isotope with (p) prey sources and produces a specific solution. The Antarctic leopard seal (Hydrurga leptonyx) was used as a model species to test this method. This seal is an opportunistic predator, which preys on a wide range of species including seals, penguins, fish and krill. The M-P method was used to determine the contribution to diet of each of the four prey types based on blood and fur samples collected over three consecutive austral summers. The advantage of the M-P method is the production of a vector of fractions f for each predator isotopic value, allowing us to identify the relative variation in dietary proportions. Comparison of the calculated fractions from this method with 'means' from IsoSource gave confidence in the new approach for the case of a single isotope, nitrogen.
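A sketch of the M-P solution for one isotope and four prey sources follows; the δ15N signatures below are made-up illustrative numbers, not the leopard seal data.

```python
import numpy as np

def mp_diet_fractions(source_d15N, mixture_d15N):
    """Minimum-norm source fractions from a single isotope.

    Two equations (isotope mass balance and sum-to-one) in p unknowns;
    the Moore-Penrose pseudoinverse A.T @ inv(A @ A.T) selects one
    specific solution, and A @ A.T is only a 2x2 matrix.
    """
    A = np.vstack([source_d15N, np.ones_like(source_d15N)])
    b = np.array([mixture_d15N, 1.0])
    return np.linalg.pinv(A) @ b

# hypothetical d15N values for four prey types (seal, penguin, fish, krill)
sources = np.array([16.0, 12.0, 10.0, 6.0])
f = mp_diet_fractions(sources, mixture_d15N=11.0)
print(f, f.sum())
```

Because only the 2x2 matrix A Aᵀ is inverted, the method scales to any number of sources; note that the minimum-norm solution satisfies the mass balance exactly but is not constrained to be nonnegative.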
Application of Design Methodologies for Feedback Compensation Associated with Linear Systems
NASA Technical Reports Server (NTRS)
Smith, Monty J.
1996-01-01
The work that follows is concerned with the application of design methodologies for feedback compensation associated with linear systems. In general, the intent is to provide a well-behaved closed-loop system in terms of stability and robustness (internal signals remain bounded under a certain amount of uncertainty) and simultaneously achieve an acceptable level of performance. The approach here has been to convert the closed-loop system and control synthesis problem into the interpolation setting; the interpolation formulation then serves as the mathematical representation of the design process. Lifting techniques have been used to solve the corresponding interpolation and control synthesis problems. Several applications using this multiobjective design methodology are included to show the effectiveness of these techniques. In particular, the mixed H2/H-infinity performance criterion and its associated algorithm have been applied to several examples, including an F-18 HARV (High Angle of Attack Research Vehicle), for sensitivity performance.
Resolvent positive linear operators exhibit the reduction phenomenon
Altenberg, Lee
2012-01-01
The spectral bound, s(αA + βV), of a combination of a resolvent positive linear operator A and an operator of multiplication V, was shown by Kato to be convex in β. Kato's result is shown here to imply, through an elementary "dual convexity" lemma, that s(αA + βV) is also convex in α > 0, and notably, ∂s(αA + βV)/∂α ≤ s(A). Diffusions typically have s(A) ≤ 0, so that for diffusions with spatially heterogeneous growth or decay rates, greater mixing reduces growth. Models of the evolution of dispersal in particular have found this result when A is a Laplacian or second-order elliptic operator, or a nonlocal diffusion operator, implying selection for reduced dispersal. These cases are shown here to be part of a single, broadly general, "reduction" phenomenon. PMID:22357763
Polar versus Cartesian velocity models for maneuvering target tracking with IMM
NASA Astrophysics Data System (ADS)
Laneuville, Dann
This paper compares various model sets in different IMM filters for the maneuvering target tracking problem. The aim is to see whether we can improve the tracking performance of what is certainly the most widely used model set in the literature for this problem: a Nearly Constant Velocity model and a Nearly Coordinated Turn model. Our new challenger set consists of a mixed Cartesian position and polar velocity state vector to describe the uniform motion segments, augmented with the turn rate to obtain the second model for the maneuvering segments. This paper also gives a general procedure to discretize, up to second order, any non-linear continuous-time model with linear diffusion. Comparative simulations on an air defence scenario with a 2D radar show that this new approach significantly improves the tracking performance in this case.
Electric-field-driven electron-transfer in mixed-valence molecules.
Blair, Enrique P; Corcelli, Steven A; Lent, Craig S
2016-07-07
Molecular quantum-dot cellular automata is a computing paradigm in which digital information is encoded by the charge configuration of a mixed-valence molecule. General-purpose computing can be achieved by arranging these compounds on a substrate and exploiting intermolecular Coulombic coupling. The operation of such a device relies on nonequilibrium electron transfer (ET), whereby the time-varying electric field of one molecule induces an ET event in a neighboring molecule. The magnitude of the electric fields can be quite large because of close spatial proximity, and the induced ET rate is a measure of the nonequilibrium response of the molecule. We calculate the electric-field-driven ET rate for a model mixed-valence compound. The mixed-valence molecule is regarded as a two-state electronic system coupled to a molecular vibrational mode, which is, in turn, coupled to a thermal environment. Both the electronic and vibrational degrees-of-freedom are treated quantum mechanically, and the dissipative vibrational-bath interaction is modeled with the Lindblad equation. This approach captures both tunneling and nonadiabatic dynamics. Relationships between microscopic molecular properties and the driven ET rate are explored for two time-dependent applied fields: an abruptly switched field and a linearly ramped field. In both cases, the driven ET rate is only weakly temperature dependent. When the model is applied using parameters appropriate to a specific mixed-valence molecule, diferrocenylacetylene, terahertz-range ET transfer rates are predicted.
NASA Astrophysics Data System (ADS)
Li, Tanda; Bedding, Timothy R.; Huber, Daniel; Ball, Warrick H.; Stello, Dennis; Murphy, Simon J.; Bland-Hawthorn, Joss
2018-03-01
Stellar models rely on a number of free parameters. High-quality observations of eclipsing binary stars observed by Kepler offer a great opportunity to calibrate model parameters for evolved stars. Our study focuses on six Kepler red giants with the goal of calibrating the mixing-length parameter of convection as well as the asteroseismic surface term in models. We introduce a new method to improve the identification of oscillation modes that exploits theoretical frequencies to guide the mode identification (`peak-bagging') stage of the data analysis. Our results indicate that the convective mixing-length parameter (α) is ≈14 per cent larger for red giants than for the Sun, in agreement with recent results from modelling the APOGEE stars. We found that the asteroseismic surface term (i.e. the frequency offset between the observed and predicted modes) correlates with stellar parameters (Teff, log g) and the mixing-length parameter. This frequency offset generally decreases as giants evolve. The two coefficients a₋₁ and a₃ for the inverse and cubic terms that have been used to describe the surface term correction are found to correlate linearly. The effect of the surface term is also seen in the p-g mixed modes; however, established methods for correcting the effect are not able to properly correct the g-dominated modes in late evolved stars.
Linear signal noise summer accurately determines and controls S/N ratio
NASA Technical Reports Server (NTRS)
Sundry, J. L.
1966-01-01
Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
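The idea of linearly summing signal and noise in an accurately known ratio can be sketched numerically: scale the noise so the summed output hits a target S/N power ratio exactly on the given samples. The 50 Hz sine and white noise below are illustrative stand-ins:

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Linearly sum signal and noise after scaling the noise so the mix
    has the requested signal-to-noise power ratio (in dB)."""
    p_sig = np.mean(signal**2)
    p_noise = np.mean(noise**2)
    target_p_noise = p_sig / 10**(snr_db / 10)
    scale = np.sqrt(target_p_noise / p_noise)
    return signal + scale * noise, scale

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10_000, endpoint=False)
sig = np.sin(2 * np.pi * 50 * t)                 # test signal
noise = rng.standard_normal(t.size)              # white noise source
mixed, k = mix_at_snr(sig, noise, snr_db=10.0)   # 10 dB S/N mix
```

Because the scale factor is computed from the same samples, the achieved ratio matches the requested one to floating-point precision, mirroring the accuracy-and-stability claim.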
Fabian C.C. Uzoh; William W. Oliver
2008-01-01
A diameter increment model is developed and evaluated for individual trees of ponderosa pine throughout the species' range in the United States using a multilevel linear mixed model. Stochastic variability is broken down among period, locale, plot, tree, and within-tree components. Covariates act at the tree and stand levels, such as breast-height diameter, density, site index...
The effect of dropout on the efficiency of D-optimal designs of linear mixed models.
Ortega-Azurduy, S A; Tan, F E S; Berger, M P F
2008-06-30
Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
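A rough sketch of how an expected information matrix can be computed under monotone dropout for a random-intercept model with intercept-and-slope fixed effects. The variance components and retention probabilities are assumed for illustration; the D-criterion is then the determinant of the returned matrix, and the D-optimal candidate design is the one maximizing it:

```python
import numpy as np

def expected_info(times, retain):
    """Expected fixed-effects information matrix for a random-intercept model
    (intercept + slope) under monotone dropout; retain[j] = P(subject still in
    the study at times[j]). Variance components are assumed for illustration."""
    sigma2_b, sigma2_e = 1.0, 1.0
    k = len(times)
    # P(the j-th measurement is the last one observed), from monotone retention
    p_last = [retain[j] - (retain[j + 1] if j + 1 < k else 0.0) for j in range(k)]
    M = np.zeros((2, 2))
    for j, p in enumerate(p_last):
        if p <= 0:
            continue
        t = np.asarray(times[: j + 1], dtype=float)
        X = np.column_stack([np.ones(j + 1), t])          # intercept + slope
        V = sigma2_e * np.eye(j + 1) + sigma2_b           # compound-symmetry covariance
        M += p * (X.T @ np.linalg.solve(V, X))
    return M

retain = [1.0, 0.8, 0.6]                                  # monotone retention profile
M_mid = expected_info([0.0, 0.5, 1.0], retain)            # candidate design 1
M_late = expected_info([0.0, 0.7, 1.0], retain)           # candidate design 2
d_mid, d_late = np.linalg.det(M_mid), np.linalg.det(M_late)
```

Shifting the middle time point and recomparing determinants is exactly the kind of displacement-of-design-points effect the abstract describes.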
Theoretical studies of solar oscillations
NASA Technical Reports Server (NTRS)
Goldreich, P.
1980-01-01
Possible sources for the excitation of the solar 5 minute oscillations were investigated and a linear non-adiabatic stability code was applied to a preliminary study of the solar g-modes with periods near 160 minutes. Although no definitive conclusions concerning the excitation of these modes were reached, the excitation of the 5 minute oscillations by turbulent stresses in the convection zone remains a viable possibility. Theoretical calculations do not offer much support for the identification of the 160 minute global solar oscillation (reported by several independent observers) as a solar g-mode. A significant advance was made in attempting to reconcile mixing-length theory with the results of the calculations of linearly unstable normal modes. Calculations show that in a convective envelope prepared according to mixing length theory, the only linearly unstable modes are those which correspond to the turbulent eddies which are the basic element of the heuristic mixing length theory.
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
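The homogeneity being generalized can be made concrete: a random-intercept linear mixed model forces the same correlation on every pair of repeated measures (compound symmetry), which is precisely what a D-vine copula relaxes. A small sketch with assumed variance components:

```python
import numpy as np

def lmm_implied_corr(n_times, sigma2_b, sigma2_e):
    """Correlation matrix of repeated measures implied by a random-intercept
    linear mixed model: every pair of time points shares one correlation
    sigma2_b / (sigma2_b + sigma2_e) -- the homogeneity a D-vine relaxes."""
    V = sigma2_b * np.ones((n_times, n_times)) + sigma2_e * np.eye(n_times)
    d = np.sqrt(np.diag(V))
    return V / np.outer(d, d)

R = lmm_implied_corr(4, sigma2_b=2.0, sigma2_e=1.0)   # all off-diagonals equal 2/3
```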
Fictitious Domain Methods for Fracture Models in Elasticity.
NASA Astrophysics Data System (ADS)
Court, S.; Bodart, O.; Cayol, V.; Koko, J.
2014-12-01
As surface displacements depend nonlinearly on source location and shape, simplifying assumptions are generally required to reduce computation time when inverting geodetic data. We present a generic finite element method designed for pressurized or sheared cracks inside a linear elastic medium. A fictitious domain method is used to take the crack into account independently of the mesh. Besides the possibility of considering heterogeneous media, the approach permits the evolution of the crack through time or, more generally, through iterations: the goal is to change as little as possible when the crack geometry is modified. In particular, no re-meshing is required (the boundary conditions at the level of the crack are imposed by Lagrange multipliers), leading to a gain in computation time and resources with respect to classic finite element methods. This method is also robust with respect to the geometry, since we expect to observe the same behavior whatever the shape and position of the crack. We present numerical experiments which highlight the accuracy of our method (using convergence curves), the optimality of errors, and the robustness with respect to the geometry (with computation of errors on some quantities for all kinds of geometric configurations). We also provide 2D benchmark tests. The method is then applied to Piton de la Fournaise volcano, considering a pressurized crack inside a 3-dimensional domain, and the corresponding computation time and accuracy are compared with results from a mixed boundary element method. In order to determine the crack's geometrical characteristics and pressure, inversions are performed combining fictitious domain computations with a near-neighborhood algorithm. Performances are compared with those obtained combining a mixed boundary element method with the same inversion algorithm.
Nuthmann, Antje; Einhäuser, Wolfgang; Schütz, Immo
2017-01-01
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some, but not all, models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
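A stripped-down sketch of the approach: parcellate the scene into grid cells, build a central-bias predictor and a saliency predictor per cell, and fit a logistic model for whether a cell was fixated. For brevity this uses plain fixed-effects IRLS rather than a full GLMM (the by-subject and by-item random effects are omitted), and all data are simulated with assumed effect sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

# A-priori 8x6 parcellation of an 800x600 scene: one row per grid cell
gx, gy = np.meshgrid(np.arange(8), np.arange(6))
cx, cy = (gx.ravel() + 0.5) * 100.0, (gy.ravel() + 0.5) * 100.0
center_dist = np.hypot(cx - 400.0, cy - 300.0)
center_dist = (center_dist - center_dist.mean()) / center_dist.std()

# Simulated data for 50 scenes: per-cell saliency plus a true central bias
cells_dist = np.tile(center_dist, 50)
salience = rng.standard_normal(cells_dist.size)          # stand-in saliency values
eta = 0.5 - 1.0 * cells_dist + 0.8 * salience            # assumed true effects
fixated = (rng.random(eta.size) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

def irls_logistic(X, y, n_iter=25):
    """Plain fixed-effects logistic regression via iteratively reweighted
    least squares (the GLMM's random effects are omitted in this sketch)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        z = X @ beta + (y - p) / W
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

X = np.column_stack([np.ones_like(cells_dist), cells_dist, salience])
beta = irls_logistic(X, fixated)   # beta[1]: central bias, beta[2]: saliency effect
```

The negative coefficient on the center-distance predictor is the explicit central-bias term; the saliency coefficient is the "above and beyond" effect the abstract describes.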
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.
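Rao's efficient score test only requires fitting the null model, which is one reason it is attractive in this setting. Below is a minimal fixed-effects sketch (without the paper's functional-basis expansion of the genotype data), applied to simulated genotypes under the null; sample size, allele frequency, and covariate effects are assumed for illustration:

```python
import numpy as np

def fit_logistic(Z, y, n_iter=50):
    """Null-model logistic regression fit by IRLS."""
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Z @ beta))
        W = p * (1.0 - p)
        z = Z @ beta + (y - p) / W
        beta = np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (W * z))
    return beta

def rao_score_test(y, Z, G):
    """Rao's efficient score test of H0: no genetic effects, with nuisance
    covariates Z. Only the null model is fitted."""
    beta0 = fit_logistic(Z, y)
    p = 1.0 / (1.0 + np.exp(-Z @ beta0))
    W = p * (1.0 - p)
    U = G.T @ (y - p)                                    # score for the tested block
    GWZ = G.T @ (W[:, None] * Z)
    # Efficient information: information of G minus its projection onto Z
    I_eff = G.T @ (W[:, None] * G) - GWZ @ np.linalg.solve(Z.T @ (W[:, None] * Z), GWZ.T)
    return float(U @ np.linalg.solve(I_eff, U))          # ~ chi^2(G.shape[1]) under H0

rng = np.random.default_rng(2)
n = 500
Z = np.column_stack([np.ones(n), rng.standard_normal(n)])   # intercept + covariate
G = rng.binomial(2, 0.3, size=(n, 3)).astype(float)         # 3 variants, MAF 0.3 (assumed)
eta0 = Z @ np.array([0.2, 0.5])                              # H0 true: G has no effect
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta0))).astype(float)
stat = rao_score_test(y, Z, G)
```

Comparing `stat` to a chi-square quantile with 3 degrees of freedom gives the test decision.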
Generalized Functional Linear Models for Gene-based Case-Control Association Studies
Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao
2014-01-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683
Koom, Woong Sub; Choi, Mi Yeon; Lee, Jeongshim; Park, Eun Jung; Kim, Ju Hye; Kim, Sun-Hyun; Kim, Yong Bae
2016-01-01
Purpose: The purpose of this study was to evaluate the efficacy of art therapy to control fatigue in cancer patients during the course of radiotherapy and its impact on quality of life (QoL). Materials and Methods: Fifty cancer patients undergoing radiotherapy received weekly art therapy sessions based on famous painting appreciation. Fatigue and QoL were assessed using the Brief Fatigue Inventory (BFI) Scale and the Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) at baseline before starting radiotherapy, every week for 4 weeks during radiotherapy, and at the end of radiotherapy. Mean changes of scores over time were analyzed using a generalized linear mixed model. Results: Of the 50 patients, 34 (68%) participated in 4 sessions of art therapy. Generalized linear mixed models testing for the effect of time on mean score changes showed no significant changes in scores from baseline for the BFI and FACIT-F. The mean BFI score and FACIT-F total score changed from 3.1 to 2.7 and from 110.7 to 109.2, respectively. Art therapy based on the appreciation of famous paintings led to increases in self-esteem by increasing self-realization and forming social relationships. Conclusion: Fatigue and QoL in cancer patients receiving art therapy did not deteriorate during the period of radiotherapy. Despite the single-arm pilot design and small number of participants, this study provides a strong initial demonstration that art therapy based on the appreciation of famous paintings is worthy of further study for improving fatigue and QoL. Further, it may come to play an important role in routine practice for cancer patients during radiotherapy. PMID:27306778
Adolescent Loss-of-Control Eating and Weight Loss Maintenance After Bariatric Surgery.
Goldschmidt, Andrea B; Khoury, Jane; Jenkins, Todd M; Bond, Dale S; Thomas, J Graham; Utzinger, Linsey M; Zeller, Meg H; Inge, Thomas H; Mitchell, James E
2018-01-01
Loss-of-control (LOC) eating is common in adults undergoing bariatric surgery and is associated with poorer weight outcomes. Its long-term course in adolescent bariatric surgery patients and associations with weight outcomes are unclear. Adolescents (n = 234; age range = 13-19 years) undergoing bariatric surgery across 5 US sites were assessed for postsurgery follow-up at 6 months and 1, 2, 3, and 4 years. Descriptive statistics and generalized linear mixed models were used to describe the prevalence of LOC eating episodes involving objectively large amounts of food and continuous eating, respectively. Generalized linear mixed models investigated the association of any LOC eating with short- and long-term BMI changes. At baseline, objectively large LOC eating was reported by 15.4% of adolescents, and continuous LOC eating by 27.8% of adolescents. Both forms of LOC eating were significantly lower at all postsurgical time points relative to presurgery (range = 0.5%-14.5%; Ps < .05). However, both behaviors gradually increased from 6-month to 4-year follow-up (Ps < .05). Presurgical LOC eating was not related to percent BMI change over follow-up (P = .79). However, LOC eating at 1-, 2-, and 3-year follow-up was associated with lower percent BMI change from baseline at the next consecutive assessment (Ps < .05). Although presurgical LOC eating was not related to relative weight loss after surgery, postoperative LOC eating may adversely affect long-term weight outcomes. Rates of LOC eating decreased from presurgery to 6-months postsurgery but increased thereafter. Therefore, this behavior may warrant additional empirical and clinical attention. Copyright © 2018 by the American Academy of Pediatrics.
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the difference in effects between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of the design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
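The central claim, that within-subject contrasts are properly tested against the intrasubject variance component only, can be seen in a small simulation: the per-subject condition difference cancels the subject random effect exactly, so its standard error is much smaller than a calculation that mixes in intersubject variance would suggest. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_trials = 20, 100
sigma_subj, sigma_trial = 2.0, 1.0                 # inter- vs intra-subject SDs (assumed)

subj_eff = rng.normal(0.0, sigma_subj, n_subj)[:, None]    # subject random effect
cond_a = subj_eff + rng.normal(0.0, sigma_trial, (n_subj, n_trials))
cond_b = subj_eff + 0.3 + rng.normal(0.0, sigma_trial, (n_subj, n_trials))

# Per-subject condition difference: the subject random effect cancels exactly,
# so only the intrasubject variance component remains in this contrast
diff = cond_b.mean(axis=1) - cond_a.mean(axis=1)
se_within = diff.std(ddof=1) / np.sqrt(n_subj)     # correct SE for the task contrast

# A naive SE treating the two condition means as independent groups
# inflates the error with the (large) intersubject variance
se_naive = np.sqrt(cond_a.mean(axis=1).var(ddof=1) / n_subj
                   + cond_b.mean(axis=1).var(ddof=1) / n_subj)
```

The gap between `se_within` and `se_naive` is the statistical-power gain from separating the variance components correctly.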
On conforming mixed finite element methods for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D; Nicolaides, R. A.; Peterson, J. S.
1982-01-01
The application of conforming mixed finite element methods to obtain approximate solutions of linearized Navier-Stokes equations is examined. Attention is given to the convergence rates of various finite element approximations of the pressure and the velocity field. The optimality of the convergence rates are addressed in terms of comparisons of the approximation convergence to a smooth solution in relation to the best approximation available for the finite element space used. Consideration is also devoted to techniques for efficient use of a Gaussian elimination algorithm to obtain a solution to a system of linear algebraic equations derived by finite element discretizations of linear partial differential equations.
General Slowing and Education Mediate Task Switching Performance Across the Life-Span
Moretti, Luca; Semenza, Carlo; Vallesi, Antonino
2018-01-01
Objective: This study considered the potential role of both protective factors (cognitive reserve, CR) and adverse ones (general slowing) in modulating cognitive flexibility in the adult life-span. Method: Ninety-eight individuals performed a task-switching (TS) paradigm in which we adopted a manipulation concerning the timing between the cue and the target. Working memory demands were minimized by using transparent cues. Additionally, indices of cognitive integrity, depression, processing speed and different CR dimensions were collected and used in linear models accounting for TS performance under the different time constraints. Results: The main results showed similar mixing costs and higher switching costs in older adults, with an overall age-dependent effect of general slowing on these costs. The link between processing speed and TS performance was attenuated when participants had more time to prepare. Among the different CR indices, formal education only was associated with reduced switch costs under time pressure. Discussion: Even though CR is often operationalized as a unitary construct, the present research confirms the benefits of using tools designed to distinguish between different CR dimensions. Furthermore, our results provide empirical support for the assumption that the influence of processing speed on executive performance depends on time constraints. Finally, it is suggested that whether age differences appear in terms of switch or mixing costs depends on working memory demands (which were low in our tasks with transparent cues). PMID:29780341
Numerically pricing American options under the generalized mixed fractional Brownian motion model
NASA Astrophysics Data System (ADS)
Chen, Wenting; Yan, Bowen; Lian, Guanghua; Zhang, Ying
2016-06-01
In this paper, we introduce a robust numerical method, based on the upwind scheme, for the pricing of American puts under the generalized mixed fractional Brownian motion (GMFBM) model. By using portfolio analysis and applying the Wick-Itô formula, a partial differential equation (PDE) governing the prices of vanilla options under the GMFBM is successfully derived for the first time. Based on this, we formulate the pricing of American puts under the current model as a linear complementarity problem (LCP). Unlike the classical Black-Scholes (B-S) model or the generalized B-S model discussed in Cen and Le (2011), the newly obtained LCP under the GMFBM model is difficult to solve accurately because of the numerical instability which results from the degeneration of the governing PDE as time approaches zero. To overcome this difficulty, a numerical approach based on the upwind scheme is adopted. It is shown that the coefficient matrix of the current method is an M-matrix, which ensures its stability in the maximum-norm sense. Remarkably, we have managed to provide a sharp theoretical error estimate for the current method, which is further verified numerically. The results of various numerical experiments also suggest that this new approach is quite accurate, and can be easily extended to price other types of financial derivatives with an American-style exercise feature under the GMFBM model.
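To make the LCP formulation concrete, here is a sketch for the classical Black-Scholes special case rather than the GMFBM model, using a central-difference implicit scheme instead of the paper's upwind scheme. The early-exercise constraint turns each implicit time step into a complementarity problem, solved here with projected SOR; all market parameters are illustrative:

```python
import numpy as np

def american_put_psor(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, M=100, N=50):
    """American put under classical Black-Scholes, posed as an LCP and solved
    with projected SOR at each fully implicit time step."""
    S_max = 3.0 * K
    dt = T / N
    S = np.linspace(0.0, S_max, M + 1)
    payoff = np.maximum(K - S, 0.0)
    i = np.arange(1, M)
    # Fully implicit central-difference coefficients (tridiagonal system per step)
    a = 0.5 * dt * (sigma**2 * i**2 - r * i)
    b = 1.0 + dt * (sigma**2 * i**2 + r)
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)
    V = payoff.copy()
    omega = 1.2                                    # SOR relaxation factor
    for _ in range(N):
        rhs = V[1:M].copy()
        V[0], V[M] = K, 0.0                        # put boundary conditions
        for _ in range(300):                       # projected SOR sweeps
            err = 0.0
            for j in range(1, M):
                gs = (rhs[j - 1] + a[j - 1] * V[j - 1] + c[j - 1] * V[j + 1]) / b[j - 1]
                new = max(V[j] + omega * (gs - V[j]), payoff[j])   # V >= payoff
                err += (new - V[j]) ** 2
                V[j] = new
            if err < 1e-12:
                break
    return float(np.interp(S0, S, V))

price = american_put_psor()   # roughly 6.1 for these parameters on a finer grid
```

Replacing the central differences in S with one-sided (upwind) differences and the generator with the GMFBM one would recover the structure the paper analyzes.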
Nonlinear excitation of the ablative Rayleigh-Taylor instability for all wave numbers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H.; Betti, R.; Gopalaswamy, V.
Small-scale perturbations in the ablative Rayleigh-Taylor instability (ARTI) are often neglected because they are linearly stable when their wavelength is shorter than a linear cutoff. Using 2D and 3D numerical simulations, it is shown that linearly stable modes of any wavelength can be destabilized. This instability regime requires finite amplitude initial perturbations and linearly stable ARTI modes are more easily destabilized in 3D than in 2D. In conclusion, it is shown that for conditions found in laser fusion targets, short wavelength ARTI modes are more efficient at driving mixing of ablated material throughout the target since the nonlinear bubble density increases with the wave number and small-scale bubbles carry a larger mass flux of mixed material.
Nonlinear excitation of the ablative Rayleigh-Taylor instability for all wave numbers
Zhang, H.; Betti, R.; Gopalaswamy, V.; ...
2018-01-16
Small-scale perturbations in the ablative Rayleigh-Taylor instability (ARTI) are often neglected because they are linearly stable when their wavelength is shorter than a linear cutoff. Using 2D and 3D numerical simulations, it is shown that linearly stable modes of any wavelength can be destabilized. This instability regime requires finite amplitude initial perturbations and linearly stable ARTI modes are more easily destabilized in 3D than in 2D. In conclusion, it is shown that for conditions found in laser fusion targets, short wavelength ARTI modes are more efficient at driving mixing of ablated material throughout the target since the nonlinear bubble density increases with the wave number and small-scale bubbles carry a larger mass flux of mixed material.
The roll-up and merging of coherent structures in shallow mixing layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, M. Y., E-mail: celmy@connect.ust.hk; Ghidaoui, M. S.; Kolyshkin, A. A.
2016-09-15
The current study seeks a fundamental explanation to the development of two-dimensional coherent structures (2DCSs) in shallow mixing layers. A nonlinear numerical model based on the depth-averaged shallow water equations is used to investigate the temporal evolution of shallow mixing layers, where the mapping from temporal to spatial results is made using the velocity at the center of the mixing layers. The flow is periodic in the streamwise direction. Transmissive boundary conditions are used at the cross-stream boundaries to prevent reflections. Numerical results are compared to linear stability analysis, mean-field theory, and secondary stability analysis. Results suggest that the onset and development of 2DCS in shallow mixing layers are the result of a sequence of instabilities governed by linear theory, mean-field theory, and secondary stability theory. The linear instability of the shearing velocity gradient gives the onset of 2DCS. When the perturbations reach a certain amplitude, the flow field of the perturbations changes from a wavy shape to a vortical (2DCS) structure because of nonlinearity. The development of the vortical 2DCS does not appear to follow weakly nonlinear theory; instead, it follows mean-field theory. After the formation of 2DCS, separate 2DCSs merge to form larger 2DCS. In this way, 2DCSs grow and shallow mixing layers develop and grow in scale. The merging of 2DCS in shallow mixing layers is shown to be caused by the secondary instability of the 2DCS. Eventually 2DCSs are dissipated by bed friction. The sequence of instabilities can cause the upscaling of the turbulent kinetic energy in shallow mixing layers.
Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.
Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul
2015-01-01
Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationships and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city, by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan, at 26.2 ppb and 24.2 ppb, respectively. Seven out of 13 cities showed better fits for the spline model compared with the linear model, supporting a non-linear relationship between O3 concentration and mortality. All of the 7 cities showed J- or U-shaped associations suggesting the existence of thresholds. The range of city-specific thresholds was from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We have observed a non-linear concentration-response relationship with thresholds between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
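A linear-threshold ("hockey-stick") model of the kind used for the city-specific thresholds can be fitted by profiling the threshold over a candidate grid. The sketch below uses least squares on simulated data rather than the covariate-adjusted Poisson regression used in the study; the true threshold and slope are assumed for illustration:

```python
import numpy as np

def fit_linear_threshold(x, y, tau_grid):
    """Fit y = b0 + b1 * max(x - tau, 0): flat below the threshold tau and
    linear above it, profiling tau over a candidate grid by least squares."""
    best = None
    for tau in tau_grid:
        X = np.column_stack([np.ones_like(x), np.maximum(x - tau, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[2]:
            best = (tau, beta, rss)
    return best

rng = np.random.default_rng(4)
ozone = rng.uniform(0.0, 60.0, 400)                        # daily mean O3, ppb (simulated)
# Assumed truth for the sketch: no excess risk below 30 ppb, linear increase above
risk = 10.0 + 0.2 * np.maximum(ozone - 30.0, 0.0) + rng.normal(0.0, 0.5, 400)
tau_hat, beta_hat, _ = fit_linear_threshold(ozone, risk, np.arange(5.0, 55.0, 1.0))
```

The profiled `tau_hat` plays the role of a city-specific threshold estimate; a J- or U-shaped fit would call for the spline model instead.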
Des Roches, Carrie A; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David; Kiran, Swathi
2016-12-01
The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type.
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we establish the strong convergence of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems
Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...
2012-01-01
Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
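To make the direct/iterative distinction concrete, here is a minimal pure-Python sketch of the conjugate gradient method, the prototypical Krylov iteration of the kind Belos provides. This toy routine is for illustration only and is not the Belos API; in practice one calls the library on Trilinos matrix and vector objects.

```python
def cg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive-definite matrix A
    (list of lists) and right-hand side b (list). Returns the solution x.
    Illustrative sketch; real solvers work matrix-free on sparse data."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A@x (x starts at 0)
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:    # converged: residual norm below tol
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

A direct solver would instead factor A once (e.g. LU or Cholesky) and back-substitute, which is what the codes behind Amesos2's common interface do.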
Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.
2014-01-01
This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Four axioms alone guarantee the optimality in terms of the error estimators. Compared to the state of the art in the current literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
Ozone response to emission reductions in the southeastern United States
NASA Astrophysics Data System (ADS)
Blanchard, Charles L.; Hidy, George M.
2018-06-01
Ozone (O3) formation in the southeastern US is studied in relation to nitrogen oxide (NOx) emissions using long-term (1990s-2015) surface measurements of the Southeastern Aerosol Research and Characterization (SEARCH) network, U.S. Environmental Protection Agency (EPA) O3 measurements, and EPA Clean Air Status and Trends Network (CASTNET) nitrate deposition data. Annual fourth-highest daily peak 8 h O3 mixing ratios at EPA monitoring sites in Georgia, Alabama, and Mississippi exhibit statistically significant (p < 0.0001) linear correlations with annual NOx emissions in those states between 1996 and 2015. The annual fourth-highest daily peak 8 h O3 mixing ratios declined toward values of ˜ 45-50 ppbv and monthly O3 maxima decreased at rates averaging ˜ 1-1.5 ppbv yr-1. Mean annual total oxidized nitrogen (NOy) mixing ratios at SEARCH sites declined in proportion to NOx emission reductions. CASTNET data show declining wet and dry nitrate deposition since the late 1990s, with total (wet plus dry) nitrate deposition fluxes decreasing linearly in proportion to reductions of NOx emissions by ˜ 60 % in Alabama and Georgia. Annual nitrate deposition rates at Georgia and Alabama CASTNET sites correspond to 30 % of Georgia emission rates and 36 % of Alabama emission rates, respectively. The fraction of NOx emissions lost to deposition has not changed. SEARCH and CASTNET sites exhibit downward trends in mean annual nitric acid (HNO3) concentrations. Observed relationships of O3 to NOz (NOy-NOx) support past model predictions of increases in cycling of NO and increasing responsiveness of O3 to NOx. The study data provide a long-term record that can be used to examine the accuracy of process relationships embedded in modeling efforts. 
Quantifying observed O3 trends and relating them to reductions in ambient NOy species concentrations offers key insights into processes of general relevance to air quality management and provides important information supporting strategies for reducing O3 mixing ratios.
Christiansen, Lars B; Cerin, Ester; Badland, Hannah; Kerr, Jacqueline; Davey, Rachel; Troelsen, Jens; van Dyck, Delfien; Mitáš, Josef; Schofield, Grant; Sugiyama, Takemi; Salvo, Deborah; Sarmiento, Olga L; Reis, Rodrigo; Adams, Marc; Frank, Larry; Sallis, James F
2016-12-01
Mounting evidence documents the importance of urban form for active travel, but international studies could strengthen the evidence. The aim of the study was to document the strength, shape, and generalizability of relations of objectively measured built environment variables with transport-related walking and cycling. This cross-sectional study maximized variation of environments and demographics by including multiple countries and by selecting adult participants living in neighborhoods based on higher and lower classifications of objectively measured walkability and socioeconomic status. Analyses were conducted on 12,181 adults aged 18-66 years, drawn from 14 cities across 10 countries worldwide. Frequency of transport-related walking and cycling over the last seven days was assessed by questionnaire and four objectively measured built environment variables were calculated. Associations of built environment variables with transport-related walking and cycling variables were estimated using generalized additive mixed models, and were tested for curvilinearity and study site moderation. We found positive associations of walking for transport with all the environmental attributes, but the relationship was linear only for land use mix; it was curvilinear for residential density, intersection density, and the number of parks. Our findings suggest that there may be optimum values in these attributes, beyond which higher densities or number of parks could have minor or even negative impact. Cycling for transport was associated linearly with residential density, intersection density (only for any cycling), and land use mix, but not with the number of parks. Across 14 diverse cities and countries, living in more densely populated areas, having a well-connected street network, more diverse land uses, and having more parks were positively associated with transport-related walking and/or cycling.
Except for land-use-mix, all built environment variables had curvilinear relationships with walking, with a plateau in the relationship at higher levels of the scales.
Penalized nonparametric scalar-on-function regression via principal coordinates
Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu
2016-01-01
A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
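As an illustration of the core computation (not the authors' implementation), the sketch below builds the leading principal coordinate from a distance matrix by classical multidimensional scaling (double-centering plus power iteration for the top eigenpair) and then fits a ridge coefficient on that single coordinate. The function names are hypothetical.

```python
def leading_principal_coordinate(D, n_iter=200):
    """First principal coordinate from a distance matrix D via classical
    MDS: double-center -D^2/2, then power-iterate for the top eigenpair."""
    n = len(D)
    # Double-centering: B = -0.5 * J (D*D) J, with J = I - (1/n) 11'
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(D2[i]) / n for i in range(n)]
    grand = sum(row) / n
    B = [[-0.5 * (D2[i][j] - row[i] - row[j] + grand) for j in range(n)]
         for i in range(n)]
    v = [1.0] + [0.0] * (n - 1)        # arbitrary start vector
    lam = 0.0
    for _ in range(n_iter):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(wi * wi for wi in w) ** 0.5
        v = [wi / lam for wi in w]
    # Principal coordinate = sqrt(eigenvalue) * unit eigenvector
    return [lam ** 0.5 * vi for vi in v]

def ridge_scalar(z, y, lam):
    """Ridge coefficient for a single centered predictor z: z'y/(z'z + lam)."""
    return sum(zi * yi for zi, yi in zip(z, y)) / (sum(zi * zi for zi in z) + lam)
```

The paper's method generalizes this idea: multiple leading coordinates from a distance chosen for the functional predictors (e.g. dynamic time warping), with the ridge penalty and its tuning handled by generalized additive modeling machinery.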
Generalization of the swelling method to measure the intrinsic curvature of lipids
NASA Astrophysics Data System (ADS)
Barragán Vidal, I. A.; Müller, M.
2017-12-01
Via computer simulation of a coarse-grained model of two-component lipid bilayers, we compare two methods of measuring the intrinsic curvatures of the constituting monolayers. The first one is a generalization of the swelling method that, in addition to the assumption that the spontaneous curvature linearly depends on the composition of the lipid mixture, incorporates contributions from its elastic energy. The second method measures the effective curvature-composition coupling between the apposing leaflets of bilayer structures (planar bilayers or cylindrical tethers) to extract the spontaneous curvature. Our findings demonstrate that both methods yield consistent results. However, we highlight that the two-leaflet structure inherent to the latter method has the advantage of allowing measurements for mixed lipid systems up to their critical point of demixing as well as in the regime of high concentration (of either species).
Dual energy CT: How to best blend both energies in one fused image?
NASA Astrophysics Data System (ADS)
Eusemann, Christian; Holmes, David R., III; Schmidt, Bernhard; Flohr, Thomas G.; Robb, Richard; McCollough, Cynthia; Hough, David M.; Huprich, James E.; Wittmer, Michael; Siddiki, Hasan; Fletcher, Joel G.
2008-03-01
In x-ray based imaging, attenuation depends on the type of tissue scanned and the average energy level of the x-ray beam, which can be adjusted via the x-ray tube potential. Conventional computed tomography (CT) imaging uses a single kV value, usually 120kV. Dual energy CT uses two different tube potentials (e.g. 80kV & 140kV) to obtain two image datasets with different attenuation characteristics. This difference in attenuation levels allows for classification of the composition of the tissues. In addition, the different energies significantly influence the contrast resolution and noise characteristics of the two image datasets. 80kV images provide greater contrast resolution than 140kV, but are limited because of increased noise. While dual-energy CT may provide useful clinical information, the question arises as to how to best realize and visualize this benefit. In conventional single energy CT, patient image data is presented to the physicians using well understood organ specific window and level settings. Instead of viewing two data series (one for each tube potential), the images are most often fused into a single image dataset using a linear mixing of the data with a 70% 140kV and a 30% 80kV mixing ratio, as available on one commercial system. This ratio provides a reasonable representation of the anatomy/pathology; however, due to the linear nature of the blending, the advantages of each dataset (contrast or sharpness) are partially offset by its drawbacks (blurring or noise). This project evaluated a variety of organ specific linear and non-linear mixing algorithms to optimize the blending of the low and high kV information for display in a way that combines the benefits (contrast and sharpness) of both energies in a single image.
A blinded review analysis by subspecialty abdominal radiologists found that, unique, tunable, non-linear mixing algorithms that we developed outperformed linear, fixed mixing for a variety of different organs and pathologies of interest.
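The 70%/30% linear mix described above, and one hypothetical nonlinear alternative, can be sketched per pixel value as follows. The logistic weighting is an illustrative stand-in for the idea of tunable nonlinear mixing, not one of the algorithms evaluated in the study; all parameter values are assumptions.

```python
import math

def linear_blend(hu_140, hu_80, w80=0.3):
    """Fixed linear mix: 70% of the 140 kV value plus 30% of the 80 kV
    value (the commercial default ratio cited in the abstract)."""
    return (1.0 - w80) * hu_140 + w80 * hu_80

def sigmoid_blend(hu_140, hu_80, center=150.0, width=50.0):
    """Hypothetical nonlinear mix: weight the contrast-rich 80 kV image
    more heavily in high-attenuation (e.g. iodine-enhanced) pixels,
    using a logistic weight on the 140 kV value. center/width are
    illustrative tuning parameters."""
    w80 = 1.0 / (1.0 + math.exp(-(hu_140 - center) / width))
    return (1.0 - w80) * hu_140 + w80 * hu_80
```

The nonlinear version keeps the quieter 140 kV data dominant in soft tissue while letting the 80 kV contrast through where enhancement is strong, which is the kind of organ-specific trade-off the project evaluated.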
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
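The sample-size point can be illustrated with the standard design-effect calculation (a generic sketch, not the authors' simulation code): large clusters and small intraclass correlation coefficients yield few required clusters, which is exactly the regime where chance covariate imbalance across arms is most likely.

```python
import math

def clusters_needed(n_flat, m, icc):
    """Clusters per arm for a cluster randomized trial: inflate the
    sample size n_flat from an individually randomized design by the
    design effect 1 + (m - 1)*icc, then divide by cluster size m."""
    design_effect = 1.0 + (m - 1) * icc
    return math.ceil(n_flat * design_effect / m)
```

For example, with 128 participants per arm needed under individual randomization, clusters of 50 with ICC 0.01 require only a handful of clusters per arm, while clusters of 5 with ICC 0.2 require dozens; the former design has far less scope for randomization to balance a cluster-level covariate.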
Hilbert complexes of nonlinear elasticity
NASA Astrophysics Data System (ADS)
Angoshtari, Arzhang; Yavari, Arash
2016-12-01
We introduce some Hilbert complexes involving second-order tensors on flat compact manifolds with boundary that describe the kinematics and the kinetics of motion in nonlinear elasticity. We then use the general framework of Hilbert complexes to write Hodge-type and Helmholtz-type orthogonal decompositions for second-order tensors. As some applications of these decompositions in nonlinear elasticity, we study the strain compatibility equations of linear and nonlinear elasticity in the presence of Dirichlet boundary conditions and the existence of stress functions on non-contractible bodies. As an application of these Hilbert complexes in computational mechanics, we briefly discuss the derivation of a new class of mixed finite element methods for nonlinear elasticity.
Longitudinal associations between stressors and work ability in hospital workers.
Carmen Martinez, Maria; da Silva Alexandre, Tiago; Dias de Oliveira Latorre, Maria do Rosario; Marina Fischer, Frida
This study sought to assess associations between work stressors and work ability in a cohort (2009-2012) of 498 hospital workers. Time-dependent variables associated with the Work Ability Index (WAI) were evaluated using general linear mixed models. Analyses included effects of individual and work characteristics. Except for work demands, the work stressors (job control, social support, effort-reward imbalance, overcommitment and work-related activities that cause pain/injury) were associated with WAI (p < 0.050) at intercept and in the time interaction. Daytime work and morning shift work were associated with decreased WAI (p < 0.010). Work stressors negatively affected work ability over time independently of other variables.
Onset of dissolution-driven instabilities in fluids with nonmonotonic density profile
NASA Astrophysics Data System (ADS)
Jafari Raad, Seyed Mostafa; Hassanzadeh, Hassan
2015-11-01
Analog systems have recently been used in several experiments in the context of convective mixing of CO2. We generalize the nonmonotonic density dependence of the growth of instabilities and provide a scaling relation for the onset of instability. The results of linear stability analysis and direct numerical simulations show that these fluids do not resemble the dynamics of CO2-water convective instabilities. A typical analog system, such as water-propylene glycol, is found to be less unstable than CO2-water. These results provide a basis for further research and proper selection of analog systems and are essential to the interpretation of experiments.
Ferguson, Ian D; Weiser, Peter; Torok, Kathryn S
2015-01-01
Herein we report successful treatment of an adolescent Caucasian female with severe progressive localized scleroderma (mixed subtype, including generalized morphea and linear scleroderma of the trunk/limb) using infliximab and leflunomide. The patient demonstrated improvement after the first 9 months of therapy based on her clinical examination, objective measures, and patient and parent global assessments. Infliximab is a potential treatment option for pediatric localized scleroderma patients who have progression of disease or who are unable to tolerate the side effect profile of more standard systemic therapy. Larger longitudinal studies or case series are needed to confirm and further investigate infliximab's role in localized scleroderma.
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models.
Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
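The three accuracy metrics used above (Pearson correlation, RMSE, and bias) can be sketched in a few lines. This is a generic illustration of the metric definitions, not the study's analysis code.

```python
def accuracy_metrics(predicted, measured):
    """Pearson correlation, RMSE, and mean bias between predicted and
    criterion (measured) energy expenditure, in the same units (METs)."""
    n = len(measured)
    mp = sum(predicted) / n
    mm = sum(measured) / n
    cov = sum((p - mp) * (m - mm) for p, m in zip(predicted, measured))
    var_p = sum((p - mp) ** 2 for p in predicted)
    var_m = sum((m - mm) ** 2 for m in measured)
    r = cov / (var_p * var_m) ** 0.5
    rmse = (sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n) ** 0.5
    bias = mp - mm
    return r, rmse, bias
```

Note that correlation and RMSE capture different failures: a model that is perfectly correlated but systematically offset shows r = 1 with a nonzero RMSE and bias, which is why the study reports all three.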
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanna, T.; Vijayajayanthi, M.; Lakshmanan, M.
The bright soliton solutions of the mixed coupled nonlinear Schroedinger equations with two components (2-CNLS) with linear self- and cross-coupling terms have been obtained by identifying a transformation that transforms the corresponding equation to the integrable mixed 2-CNLS equations. The study on the collision dynamics of bright solitons shows that there exists periodic energy switching, due to the coupling terms. This periodic energy switching can be controlled by the new type of shape changing collisions of bright solitons arising in a mixed 2-CNLS system, characterized by intensity redistribution, amplitude dependent phase shift, and relative separation distance. We also point out that this system exhibits large periodic intensity switching even with very small linear self-coupling strengths.
NASA Technical Reports Server (NTRS)
Ramsey, Michael S.; Christensen, Philip R.
1992-01-01
Accurate interpretation of thermal infrared data depends upon the understanding and removal of complicating effects. These effects may include physical mixing of various mineralogies and particle sizes, atmospheric absorption and emission, surficial coatings, geometry effects, and differential surface temperatures. The focus is the examination of the linear spectral mixing of individual mineral or endmember spectra. Linear addition of spectra, for particles larger than the wavelength, allows for a straightforward method of deconvolving the observed spectra, predicting a volume percent of each endmember. The 'forward analysis' of linear mixing (comparing the spectra of physical mixtures to numerical mixtures) has received much attention. The reverse approach of un-mixing thermal emission spectra was examined with remotely sensed data, but no laboratory verification exists. Understanding of the effects of spectral mixing on high resolution laboratory spectra allows for the extrapolation to lower resolution, and often more complicated, remotely gathered data. Thermal Infrared Multispectral Scanner (TIMS) data for Meteor Crater, Arizona were acquired in Sep. 1987. The spectral un-mixing of these data gives a unique test of the laboratory results. Meteor Crater (1.2 km in diameter and 180 m deep) is located in north-central Arizona, west of Canyon Diablo. The arid environment, paucity of vegetation, and low relief make the region ideal for remote data acquisition. Within the horizontal sedimentary sequence that forms the upper Colorado Plateau, the oldest unit sampled by the impact crater was the Permian Coconino Sandstone. A thin bed of the Toroweap Formation, also of Permian age, conformably overlies the Coconino. Above the Toroweap lies the Permian Kaibab Limestone which, in turn, is covered by a thin veneer of the Moenkopi Formation. The Moenkopi is Triassic in age and has two distinct sub-units in the vicinity of the crater.
The lower Wupatki member is a fine-grained sandstone, while the upper Moqui member is a fissile siltstone. Ejecta from these units are preserved as inverted stratigraphy up to 2 crater radii from the rim. The mineralogical contrast between the units, relative lack of post-emplacement erosion and ejecta mixing provide a unique site to apply the un-mixing model. Selection of the aforementioned units as endmembers reveals distinct patterns in the ejecta of the crater.
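The "reverse" un-mixing step can be illustrated with the simplest possible case: a least-squares abundance estimate for a two-endmember linear mixture, one value per spectral band. This is an illustrative sketch with hypothetical names, not the deconvolution code used in the study, which handles many endmembers simultaneously.

```python
def unmix_two_endmembers(mixed, end1, end2):
    """Least-squares abundance f of endmember 1 in a linear mixture
    mixed ~= f*end1 + (1 - f)*end2, given one emissivity value per band.
    Closed form: project (mixed - end2) onto (end1 - end2)."""
    num = sum((m - b) * (a - b) for m, a, b in zip(mixed, end1, end2))
    den = sum((a - b) ** 2 for a, b in zip(end1, end2))
    return num / den
```

With more endmembers (e.g. Coconino, Kaibab, and Moenkopi spectra at Meteor Crater) the same idea becomes a constrained multivariate least-squares problem, with abundances summing to one.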
NASA Astrophysics Data System (ADS)
Boyd, Thomas J.; Barham, Bethany P.; Hall, Gregory J.; Osburn, Christopher L.
2010-09-01
Ultrafiltered and low molecular weight dissolved organic matter (UDOM and LMW-DOM, respectively) fluorescence was studied under simulated estuarine mixing using samples collected from Delaware, Chesapeake, and San Francisco Bays (USA) transects. UDOM was concentrated by tangential flow ultrafiltration (TFF) from the marine (>33 PSU), mid-estuarine (˜16 PSU), and freshwater (<1 PSU) members. TFF permeates (<1 kDa) from the three members were used to create artificial salinity transects ranging from ˜0 to ˜36, with 4 PSU increments. UDOM from the end- or mid-members was added in equal amounts to each salinity-mix. Three-dimensional fluorescence excitation-emission matrix (EEMs) spectra were generated for each end-member permeate and UDOM through the full estuarine mixing transect. Fluorescence components such as proteinaceous, terrigenous, and marine derived humic peaks, and certain fluorescent ratios were noticeably altered by simulated estuarine mixing, suggesting that LMW DOM and UDOM undergo physicochemical alteration as they move to or from the freshwater, mid-estuarine, or coastal ocean members. LMW fluorescence components fit a decreasing linear mixing model from mid salinities to the ocean end-member, but were more highly fluorescent than mixing alone would predict in lower salinities (<8). Significant shifts were also seen in UDOM peak emission wavelengths with blue-shifting toward the ocean end-member. Humic-type components in UDOM generally showed lower fluorescent intensities at low salinities, higher at mid-salinities, and lower again toward the ocean end-member. T (believed to be proteinaceous) and N (labile organic matter) peaks behaved similarly to each other, but not to B peak fluorescence, which showed virtually no variation in permeate or UDOM mixes with salinity. PCA and PARAFAC models showed similar results suggesting trends could be modeled for DOM end- and mid-member sources. 
Changes in fluorescence properties due to estuarine mixing may be important when using CDOM as a proxy for DOM cycling in coastal systems.
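The conservative mixing line against which the fluorescence data were compared can be sketched as a two-end-member dilution model; departures from this line at low salinity are what the study interprets as physicochemical alteration. The function below is a generic illustration with hypothetical parameter names, not the study's PCA/PARAFAC analysis.

```python
def conservative_mixing(salinity, c_fresh, c_ocean, s_ocean=36.0):
    """Concentration (or fluorescence intensity) expected from purely
    physical mixing of freshwater and ocean end-members at a given
    salinity. Values above this line indicate in-estuary production;
    values below indicate removal."""
    frac_ocean = salinity / s_ocean
    return (1.0 - frac_ocean) * c_fresh + frac_ocean * c_ocean
```

For example, a fluorescence component measured at salinity 4 that exceeds the value this line predicts behaves like the low-salinity LMW components described above, which were more fluorescent than mixing alone would predict.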
NASA Technical Reports Server (NTRS)
Schlesinger, Robert E.
1990-01-01
Results are presented from a linear Lagrangian entraining parcel model of an overshooting thunderstorm cloud top. The model, which is similar to that of Adler and Mack (1986), gives exact analytic solutions for vertical velocity and temperature by representing mixing linearly, with Rayleigh damping, rather than nonlinearly. Model results are presented for various combinations of stratospheric lapse rate, drag intensity, and mixing strength. The results are compared to those of Adler and Mack.
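The kind of closed-form behavior such a linearized model yields can be sketched as follows: with Rayleigh damping of rate k, a buoyancy oscillation at the stratification (Brunt-Väisälä) frequency N decays exponentially, and the overshoot distance follows by integrating the vertical velocity until it first changes sign. The parameter values and the zero-initial-buoyancy simplification are illustrative assumptions, not Adler and Mack's actual formulation.

```python
import math

def parcel_w(t, w0, N, k):
    """Vertical velocity of a parcel entering the stratosphere at speed
    w0, oscillating at frequency N with Rayleigh damping of rate k
    standing in for mixing: w(t) = w0 * exp(-k t) * cos(N t)
    (initial buoyancy anomaly taken as zero)."""
    return w0 * math.exp(-k * t) * math.cos(N * t)

def overshoot_height(w0, N, k, dt=1e-3):
    """Overshoot distance: integrate w numerically until the parcel
    first decelerates through zero (its highest point)."""
    z, t = 0.0, 0.0
    while parcel_w(t, w0, N, k) > 0.0:
        z += parcel_w(t, w0, N, k) * dt
        t += dt
    return z
```

With no damping the overshoot reduces to the textbook value w0/N; stronger damping (more vigorous mixing) shortens the overshoot, which is the qualitative trade-off the parcel model explores.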
Magezi, David A
2015-01-01
Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation
Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang
2013-01-01
We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in the L²- and H¹-norm for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semidiscrete and fully discrete schemes. PMID:23864831
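The linearized Crank-Nicolson idea can be illustrated on the simplest possible problem: one time step of the heat equation u_t = u_xx with homogeneous Dirichlet boundaries, where the implicit tridiagonal system is solved by the Thomas algorithm. This is a generic sketch, not the paper's mixed EFK scheme (which couples u and w = −Δu and linearizes the nonlinear term).

```python
def crank_nicolson_step(u, dt, dx):
    """One Crank-Nicolson step for u_t = u_xx with u[0] = u[-1] = 0:
    (I - (r/2) L) u_new = (I + (r/2) L) u_old, where L is the standard
    second-difference operator and r = dt/dx^2."""
    r = dt / dx ** 2
    n = len(u) - 2                         # number of interior points
    # Right-hand side: explicit half-step (I + (r/2) L) u_old.
    rhs = [u[i] + 0.5 * r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
           for i in range(1, n + 1)]
    # Thomas algorithm for the tridiagonal system: diagonal 1 + r,
    # off-diagonals -r/2 (constant coefficients).
    a, diag, c = -0.5 * r, 1.0 + r, -0.5 * r
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c / diag
    dp[0] = rhs[0] / diag
    for i in range(1, n):
        denom = diag - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (rhs[i] - a * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return [0.0] + x + [0.0]               # reattach boundary values
```

The scheme is second-order accurate in time and unconditionally stable; a sine mode is an exact eigenvector of the discrete operator, so one step multiplies it by the known Crank-Nicolson amplification factor.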
Linear stability analysis of particle-laden hypopycnal plumes
NASA Astrophysics Data System (ADS)
Farenzena, Bruno Avila; Silvestrini, Jorge Hugo
2017-12-01
Gravity-driven riverine outflows are responsible for carrying sediments to the coastal waters. The turbulent mixing in these flows is associated with shear and gravitational instabilities such as Kelvin-Helmholtz, Holmboe, and Rayleigh-Taylor. Results from temporal linear stability analysis of a two-layer stratified flow are presented, investigating the effects of settling particles and mixing-region thickness on flow stability in the presence of ambient shear. The particles are considered suspended in the transport fluid, and their sedimentation is modeled with a constant settling velocity. Three scenarios, regarding the mixing region thickness, were identified: a poorly mixed environment, a strongly mixed environment, and an intermediate scenario. In the first, Kelvin-Helmholtz and settling convection modes are the two fastest growing modes, depending on the particles' settling velocity and the total Richardson number. The second scenario presents a modified Rayleigh-Taylor instability, which is the dominant mode. The third case can have Kelvin-Helmholtz, settling convection, and modified Rayleigh-Taylor modes as the fastest growing mode depending on the combination of parameters.
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed-integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., the interval set; the combined interval and ellipsoidal set; the combined interval and polyhedral set) and new ones (i.e., the adjustable box; pure ellipsoidal; pure polyhedral; and combined interval, ellipsoidal, and polyhedral sets), are studied, and their geometric relationships are discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by these different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications to refinery production planning and batch process scheduling problems are presented. PMID:21935263
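For the simplest uncertainty set, the interval (box) set, the robust counterpart of a linear constraint has a closed form: with nonnegative variables, every uncertain coefficient sits at its upper bound. A minimal sketch on an invented two-variable LP:

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: maximize 3*x1 + 2*x2  s.t.  a1*x1 + a2*x2 <= 10,  x >= 0,
# with nominal a = (2, 1) and interval uncertainty a_j in [a_j - d_j, a_j + d_j].
a_nom = np.array([2.0, 1.0])
delta = np.array([0.5, 0.25])

# Nominal LP (linprog minimizes, so negate the objective).
nominal = linprog(c=[-3, -2], A_ub=[a_nom], b_ub=[10], bounds=[(0, None)] * 2)

# Box (interval-set) robust counterpart: for x >= 0 the worst case puts
# every coefficient at its upper bound, sum_j (a_j + d_j) x_j <= b.
robust = linprog(c=[-3, -2], A_ub=[a_nom + delta], b_ub=[10], bounds=[(0, None)] * 2)
```

The robust optimum is never better than the nominal one; the price of robustness here is the drop from 20 to 16 in the objective. Ellipsoidal and polyhedral sets lead to conic or auxiliary-variable reformulations instead.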
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other, simpler models: the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, the exponential-piston flow model and the dispersive model, give better fits than the other, simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a smaller number of fitting parameters, the new models gave practically the same fits as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents of the particular water components.
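The exponential-piston flow model (EPM) can be checked numerically: assuming the standard EPM parameterization, the transit-time density is zero during the piston-flow delay and exponential afterwards, and its mean recovers the turnover time τ. A minimal sketch:

```python
import numpy as np

def epm_density(t, tau, eta):
    """Transit-time density of the exponential-piston flow model (EPM):
    pure piston flow up to t0 = tau*(1 - 1/eta), exponential tail beyond.
    eta is the ratio of total volume to the exponential-flow volume."""
    t = np.asarray(t, dtype=float)
    g = (eta / tau) * np.exp(-eta * t / tau + eta - 1.0)
    return np.where(t >= tau * (1.0 - 1.0 / eta), g, 0.0)

def trapz(y, x):
    # Simple trapezoidal rule, so the sketch needs only core NumPy.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

tau, eta = 10.0, 1.5          # turnover time (e.g. years) and EPM ratio
t = np.linspace(0.0, 200.0, 200001)
g = epm_density(t, tau, eta)

area = trapz(g, t)            # the density integrates to ~1
mean = trapz(t * g, t)        # and its mean recovers the turnover time tau
```

In practice this density is convolved with the tracer input history and the parameters (τ, η) are fitted to the observed output concentrations.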
Novoderezhkin, Vladimir I.; Dekker, Jan P.; van Grondelle, Rienk
2007-01-01
We propose an exciton model for the Photosystem II reaction center (RC) based on a quantitative simultaneous fit of the absorption, linear dichroism, circular dichroism, steady-state fluorescence, triplet-minus-singlet, and Stark spectra together with the spectra of pheophytin-modified RCs, and so-called RC5 complexes that lack one of the peripheral chlorophylls. In this model, the excited state manifold includes a primary charge-transfer (CT) state that is supposed to be strongly mixed with the pure exciton states. We generalize the exciton theory of Stark spectra by 1), taking into account the coupling to a CT state (whose static dipole cannot be treated as a small parameter in contrast to usual excited states); and 2), expressing the line shape functions in terms of the modified Redfield approach (the same as used for modeling of the linear responses). This allows a consistent modeling of the whole set of experimental data using a unified physical picture. We show that the fluorescence and Stark spectra are extremely sensitive to the assignment of the primary CT state, its energy, and coupling to the excited states. The best fit of the data is obtained supposing that the initial charge separation occurs within the special-pair PD1PD2. Additionally, the scheme with primary electron transfer from the accessory chlorophyll to pheophytin gave a reasonable quantitative fit. We show that the effectiveness of these two pathways is strongly dependent on the realization of the energetic disorder. Supposing a mixed scheme of primary charge separation with a disorder-controlled competition of the two channels, we can explain the coexistence of fast sub-ps and slow ps components of the Phe-anion formation as revealed by different ultrafast spectroscopic techniques. PMID:17526589
Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects
Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.
2015-01-01
The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations which can distort the estimated exposure-response curve particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system holding certain pivotal quantities such as the information matrix to be constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post model fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
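For the ordinary linear model the one-case-deleted logic has an exact closed form, which is the starting point such GLMM diagnostics generalize. This sketch on invented data computes Cook's distance from the hat matrix and plants one influential observation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
y[0] += 5.0                      # plant one influential observation

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y         # OLS fit
resid = y - X @ beta
s2 = resid @ resid / (n - p)     # residual variance estimate

h = np.diag(X @ XtX_inv @ X.T)   # leverages (hat-matrix diagonal)
# Closed-form Cook's distance: D_i = e_i^2 * h_i / (p * s^2 * (1 - h_i)^2),
# exactly the change in fitted values from deleting case i.
cooks = resid**2 * h / (s2 * p * (1 - h) ** 2)
```

For a GLMM no such closed form exists, which is why the paper's one-step approximation, including the effect of deletion on the variance components, is needed.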
Exact solutions of a hierarchy of mixing speeds models
NASA Astrophysics Data System (ADS)
Cornille, H.; Platkowski, T.
1992-07-01
This paper presents several new aspects of discrete kinetic theory (DKT). First a hierarchy of d-dimensional (d=1,2,3) models is proposed with (2d+3) velocities and three moduli speeds: 0, 2, and a third one that can be arbitrary. It is assumed that the particles at rest have an internal energy which, for microscopic collisions, supplies for the loss of the kinetic energy. In a more general way than usual, collisions are allowed that mix particles with different speeds. Second, for the (1+1)-dimensional restriction of the systems of PDE for these models which have two independent quadratic collision terms we construct different exact solutions. The usual types of exact solutions are studied: periodic solutions and shock wave solutions obtained from the standard linearization of the scalar Riccati equations called Riccatian shock waves. Then other types of solutions of the coupled Riccati equations are found called non-Riccatian shock waves and they are compared with the previous ones. The main new result is that, between the upstream and downstream states, these new solutions are not necessarily monotonous. Further, for the shock problem, a two-dimensional dynamical system of ODE is solved numerically with limit values corresponding to the upstream and downstream states. As a by-product of this study two new linearizations for the Riccati coupled equations with two functions are proposed.
Cosmic non-TEM radiation and synthetic feed array sensor system in ASIC mixed signal technology
NASA Astrophysics Data System (ADS)
Centureli, F.; Scotti, G.; Tommasino, P.; Trifiletti, A.; Romano, F.; Cimmino, R.; Saitto, A.
2014-08-01
The paper deals with the opportunity to introduce "Not strictly TEM waves" Synthetic detection Method (NTSM), consisting in a Three Axis Digital Beam Processing (3ADBP), to enhance the performances of radio telescope and sensor systems. Current Radio Telescopes generally use the classic 3D "TEM waves" approximation Detection Method, which consists in a linear tomography process (Single or Dual axis beam forming processing) neglecting the small z component. The Synthetic FEED ARRAY three axis Sensor SYSTEM is an innovative technique using a synthetic detection of the generic "NOT strictly TEM Waves radiation coming from the Cosmo, which processes longitudinal component of Angular Momentum too. Than the simultaneous extraction from radiation of both the linear and quadratic information component, may reduce the complexity to reconstruct the Early Universe in the different requested scales. This next order approximation detection of the observed cosmologic processes, may improve the efficacy of the statistical numerical model used to elaborate the same information acquired. The present work focuses on detection of such waves at carrier frequencies in the bands ranging from LF to MMW. The work shows in further detail the new generation of on line programmable and reconfigurable Mixed Signal ASIC technology that made possible the innovative Synthetic Sensor. Furthermore the paper shows the ability of such technique to increase the Radio Telescope Array Antenna performances.
Hodge, Melissa G; Hovinga, Mary; Shepherd, John A; Egleston, Brian; Gabriel, Kelley; Van Horn, Linda; Robson, Alan; Snetselaar, Linda; Stevens, Victor K; Jung, Seungyoun; Dorgan, Joanne
2015-02-01
This study prospectively investigates associations between youth moderate-to-vigorous-intensity physical activity (MVPA) and body composition in young adult women using data from the Dietary Intervention Study in Children (DISC) and the DISC06 Follow-Up Study. MVPA was assessed by questionnaire on 5 occasions between the ages of 8 and 18 years and at age 25-29 years in 215 DISC female participants. Using whole body dual-energy x-ray absorptiometry (DXA), overall adiposity and body fat distribution were assessed at age 25-29 years by percent body fat (%fat) and android-to-gynoid (A:G) fat ratio, respectively. Linear mixed effects models and generalized linear latent and mixed models were used to assess associations of youth MVPA with both outcomes. Young adult MVPA, adjusted for other young adult characteristics, was significantly inversely associated with young adult %fat (%fat decreased from 37.4% in the lowest MVPA quartile to 32.8% in the highest; p-trend = 0.02). Adjusted for youth and young adult characteristics including young adult MVPA, youth MVPA also was significantly inversely associated with young adult %fat (β = -0.40 per 10 MET-hrs/wk, p = 0.02). No significant associations between MVPA and A:G fat ratio were observed. Results suggest that youth and young adult MVPA are important independent predictors of adiposity in young women.
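The linear mixed (random-intercept) machinery behind such repeated-measures analyses can be sketched with invented data. Assuming the variance components are known (in practice they are estimated by ML/REML), the fixed effects follow from generalized least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per = 100, 5                 # 100 subjects, 5 repeated measures each
beta_true = np.array([1.0, 2.0])         # invented intercept and slope
sigma_b, sigma_e = 1.0, 1.0              # random-intercept and residual SDs

g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=g.size)              # e.g. an activity covariate
b = rng.normal(scale=sigma_b, size=n_groups)
y = beta_true[0] + beta_true[1] * x + b[g] + rng.normal(scale=sigma_e, size=g.size)

# Marginal covariance V = sigma_b^2 * Z Z' + sigma_e^2 * I, Z = group indicators.
X = np.column_stack([np.ones_like(x), x])
Z = np.zeros((g.size, n_groups))
Z[np.arange(g.size), g] = 1.0
V = sigma_b**2 * Z @ Z.T + sigma_e**2 * np.eye(g.size)

# GLS estimate of the fixed effects: (X'V^-1 X)^-1 X'V^-1 y.
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
```

The random intercept b accounts for within-subject correlation across the repeated measurements, exactly the role it plays in the DISC analyses.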
Effect of academic status on outcomes of surgery for rectal cancer.
Cagino, Kristen; Altieri, Maria S; Yang, Jie; Nie, Lizhou; Talamini, Mark; Spaniolas, Konstantinos; Denoya, Paula; Pryor, Aurora
2018-06-01
The purpose of our study was to investigate surgical outcomes following advanced colorectal procedures at academic versus community institutions. The SPARCS database was used to identify patients undergoing Abdominoperineal resection (APR) and Low Anterior Resection between 2009 and 2014. Linear mixed models and generalized linear mixed models were used to compare outcomes. Laparoscopic versus open procedures, surgery type, volume status, and stoma formation between academic and community facilities were compared. Higher percentages of laparoscopic surgeries (58.68 vs. 41.32%, p value < 0.0001), more APR surgeries (64.60 vs. 35.40%, p value < 0.0001), more high volume hospitals (69.46 vs. 30.54%, p value < 0.0001), and less stoma formation (48.00 vs. 52.00%, p value < 0.0001) were associated with academic centers. After adjusting for confounding factors, academic facilities were more likely to perform APR surgeries (OR 1.35, 95% CI 1.04-1.74, p value = 0.0235). Minorities and Medicaid patients were more likely to receive care at an academic facility. Stoma formation, open surgery, and APR were associated with longer LOS and higher rate of ED visit and 30-day readmission. Laparoscopy and APR are more commonly performed at academic than community facilities. Age, sex, race, and socioeconomic status affect the facility at which and the type of surgery patients receive, thereby influencing surgical outcomes.
Van Ael, Evy; De Cooman, Ward; Blust, Ronny; Bervoets, Lieven
2015-01-01
Large datasets from total and dissolved metal concentrations in Flemish (Belgium) fresh water systems and the associated macroinvertebrate-based biotic index MMIF (Multimetric Macroinvertebrate Index Flanders) were used to estimate critical metal concentrations for good ecological water quality, as imposed by the European Water Framework Directive (2000). The contribution of different stressors (metals and water characteristics) to the MMIF were studied by constructing generalized linear mixed effect models. Comparison between estimated critical concentrations and the European and Flemish EQS, shows that the EQS for As, Cd, Cu and Zn seem to be sufficient to reach a good ecological quality status as expressed by the invertebrate-based biotic index. In contrast, the EQS for Cr, Hg and Pb are higher than the estimated critical concentrations, which suggests that when environmental concentrations are at the same level as the EQS a good quality status might not be reached. The construction of mixed models that included metal concentrations in their structure did not lead to a significant outcome. However, mixed models showed the primary importance of water characteristics (oxygen level, temperature, ammonium concentration and conductivity) for the MMIF.
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced users in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. It solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
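The difference between a plain LP and its pure-integer counterpart, the distinction ALPS handles, can be shown on a toy problem (invented here, not from the ALPS documentation) using SciPy's solvers:

```python
import numpy as np
from scipy.optimize import linprog, milp, LinearConstraint, Bounds

# Toy problem: maximize x + y subject to 2x + 2y <= 5, x, y >= 0,
# first as a plain LP, then as a pure-integer program.
c = np.array([-1.0, -1.0])                  # both solvers minimize, so negate

lp = linprog(c, A_ub=[[2.0, 2.0]], b_ub=[5.0], bounds=[(0, None)] * 2)

mip = milp(c=c,
           constraints=LinearConstraint([[2.0, 2.0]], ub=[5.0]),
           integrality=np.ones(2),          # require both variables integer
           bounds=Bounds(0, np.inf))
```

The LP relaxation attains 2.5 at a fractional vertex, while the integer optimum is 2; the gap between the two is what branch-and-bound style integer techniques must close.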
Electric-field-driven electron-transfer in mixed-valence molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blair, Enrique P., E-mail: enrique-blair@baylor.edu; Corcelli, Steven A., E-mail: scorcell@nd.edu; Lent, Craig S., E-mail: lent@nd.edu
2016-07-07
Molecular quantum-dot cellular automata is a computing paradigm in which digital information is encoded by the charge configuration of a mixed-valence molecule. General-purpose computing can be achieved by arranging these compounds on a substrate and exploiting intermolecular Coulombic coupling. The operation of such a device relies on nonequilibrium electron transfer (ET), whereby the time-varying electric field of one molecule induces an ET event in a neighboring molecule. The magnitude of the electric fields can be quite large because of close spatial proximity, and the induced ET rate is a measure of the nonequilibrium response of the molecule. We calculate the electric-field-driven ET rate for a model mixed-valence compound. The mixed-valence molecule is regarded as a two-state electronic system coupled to a molecular vibrational mode, which is, in turn, coupled to a thermal environment. Both the electronic and vibrational degrees-of-freedom are treated quantum mechanically, and the dissipative vibrational-bath interaction is modeled with the Lindblad equation. This approach captures both tunneling and nonadiabatic dynamics. Relationships between microscopic molecular properties and the driven ET rate are explored for two time-dependent applied fields: an abruptly switched field and a linearly ramped field. In both cases, the driven ET rate is only weakly temperature dependent. When the model is applied using parameters appropriate to a specific mixed-valence molecule, diferrocenylacetylene, terahertz-range ET rates are predicted.
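A stripped-down version of this setup, a two-state system with invented bias, tunneling, and dissipation parameters (no explicit vibrational mode), can be propagated with the Lindblad equation directly. The sketch integrates dρ/dt = -i[H,ρ] + LρL† - ½{L†L,ρ} with fixed-step RK4:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # transfers charge: state 0 -> state 1

# Two-state mixed-valence model (arbitrary units, invented parameters):
# bias 0.5*sz (e.g. from an abruptly switched field) plus tunneling 0.3*sx.
H = 0.5 * sz + 0.3 * sx
L = np.sqrt(0.2) * sm                            # dissipative environment coupling

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    Ld = L.conj().T
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return comm + diss

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # charge starts on site 0
dt = 0.01
for _ in range(5000):                            # evolve to t = 50 with RK4
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
```

The evolution keeps the trace of ρ at one while the dissipator relaxes the charge toward site 1; the driven ET rate in the paper is extracted from exactly this kind of population transfer.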
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.
Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. Identical regression models are fitted in the classical framework and in the Bayesian framework with both non-informative and informative priors, using the South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with informative prior, the GHS datasets for the years 2011 to 2013 are used to construct priors for the 2014 model.
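The core idea, letting earlier survey years inform the analysis of a later year, is easiest to see in the conjugate normal-normal case rather than a full GLMM. A hypothetical sketch with invented numbers, where the 2011-2013 data supply an informative prior for a single 2014 quantity:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in: a single scalar quantity measured each year
# (invented values, not GHS data). Earlier years form the informative prior.
past = rng.normal(loc=0.30, scale=0.05, size=300)       # "2011-2013" pooled
prior_mu = past.mean()
prior_var = past.var(ddof=1) / past.size                # prior on the mean

data_2014 = rng.normal(loc=0.35, scale=0.05, size=100)  # "2014" survey
lik_mu = data_2014.mean()
lik_var = data_2014.var(ddof=1) / data_2014.size

# Conjugate normal-normal update: the posterior mean is the
# precision-weighted average of prior and likelihood.
post_var = 1.0 / (1.0 / prior_var + 1.0 / lik_var)
post_mu = post_var * (prior_mu / prior_var + lik_mu / lik_var)
```

The posterior mean always lands between the prior and the new data, and the posterior variance is smaller than either, which is the benefit the paper seeks from informative priors; the actual analysis does this inside a Bayesian GLMM rather than a scalar update.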
Flavor non-universal gauge interactions and anomalies in B-meson decays
NASA Astrophysics Data System (ADS)
Tang, Yong; Wu, Yue-Liang
2018-02-01
Motivated by flavor non-universality and anomalies in semi-leptonic B-meson decays, we present a general and systematic discussion of how to construct anomaly-free U(1)′ gauge theories based on an extended standard model with only three right-handed neutrinos. If all standard model fermions are vector-like under this new gauge symmetry, the most general family non-universal charge assignments, (a,b,c) for the three generations of quarks and (d,e,f) for the leptons, need satisfy just one condition to be anomaly-free: 3(a+b+c) = -(d+e+f). Any assignment can be written as a linear combination of five independent anomaly-free solutions. We also illustrate how such models can generally lead to flavor-changing interactions and easily resolve the anomalies in B-meson decays. Probes with B_s-B̄_s mixing, decays into τ±, and dilepton and dijet searches at colliders are also discussed. Supported by the Grant-in-Aid for Innovative Areas (16H06490)
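The single anomaly-cancellation condition quoted in the abstract is easy to check mechanically. The sketch below encodes it and verifies two familiar anomaly-free assignments (B−L and Lμ−Lτ, standard examples chosen here for illustration):

```python
from fractions import Fraction

def anomaly_free(quark_charges, lepton_charges):
    """The paper's condition for vector-like U(1)' charges:
    3(a + b + c) = -(d + e + f), with (a,b,c) per quark generation
    and (d,e,f) per lepton generation."""
    a, b, c = quark_charges
    d, e, f = lepton_charges
    return 3 * (a + b + c) == -(d + e + f)

# B - L: every quark carries 1/3, every lepton carries -1 (family universal).
b_minus_l = anomaly_free((Fraction(1, 3),) * 3, (-1, -1, -1))

# L_mu - L_tau: quarks uncharged, second and third lepton families opposite.
lmu_ltau = anomaly_free((0, 0, 0), (0, 1, -1))
```

Family non-universal solutions like Lμ−Lτ are precisely the kind that can generate the flavor-dependent couplings invoked for the B-decay anomalies.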
A general numerical analysis program for the superconducting quasiparticle mixer
NASA Technical Reports Server (NTRS)
Hicks, R. G.; Feldman, M. J.; Kerr, A. R.
1986-01-01
A user-oriented computer program SISCAP (SIS Computer Analysis Program) for analyzing SIS mixers is described. The program allows arbitrary impedance terminations to be specified at all LO harmonics and sideband frequencies. It is therefore able to treat a much more general class of SIS mixers than the widely used three-frequency analysis, for which the harmonics are assumed to be short-circuited. An additional program, GETCHI, provides the necessary input data to program SISCAP. The SISCAP program performs a nonlinear analysis to determine the SIS junction voltage waveform produced by the local oscillator. The quantum theory of mixing is used in its most general form, treating the large signal properties of the mixer in the time domain. A small signal linear analysis is then used to find the conversion loss and port impedances. The noise analysis includes thermal noise from the termination resistances and shot noise from the periodic LO current. Quantum noise is not considered. Many aspects of the program have been adequately verified and found accurate.
7 CFR 29.3155 - Mixed (M Group).
Code of Federal Regulations, 2013 CFR
2013-01-01
... Light Mixed. General quality of X3, C3, B3, T3, medium to tissuey body, light general color, under 20..., medium to tissuey body, light general color under 20 percent greenish, and 20 percent injury tolerance. M5F Low Light Mixed. General quality of X5, C5, B5, T5, medium to tissuey body, light general color...
7 CFR 29.3155 - Mixed (M Group).
Code of Federal Regulations, 2014 CFR
2014-01-01
... Light Mixed. General quality of X3, C3, B3, T3, medium to tissuey body, light general color, under 20..., medium to tissuey body, light general color under 20 percent greenish, and 20 percent injury tolerance. M5F Low Light Mixed. General quality of X5, C5, B5, T5, medium to tissuey body, light general color...
7 CFR 29.3155 - Mixed (M Group).
Code of Federal Regulations, 2012 CFR
2012-01-01
... Light Mixed. General quality of X3, C3, B3, T3, medium to tissuey body, light general color, under 20..., medium to tissuey body, light general color under 20 percent greenish, and 20 percent injury tolerance. M5F Low Light Mixed. General quality of X5, C5, B5, T5, medium to tissuey body, light general color...
Modelling the Progression of Competitive Performance of an Academy's Soccer Teams.
Malcata, Rita M; Hopkins, Will G; Richardson, Scott
2012-01-01
Progression of a team's performance is a key issue in competitive sport, but there appears to have been no published research on team progression for periods longer than a season. In this study we report the game-score progression of three teams of a youth talent-development academy over five seasons using a novel analytic approach based on generalised mixed modelling. The teams consisted of players born in 1991, 1992 and 1993; they played totals of 115, 107 and 122 games in Asia and Europe between 2005 and 2010 against teams differing in age by up to 3 years. Game scores predicted by the mixed model were assumed to have an over-dispersed Poisson distribution. The fixed effects in the model estimated an annual linear progression for Aspire and for the other teams (grouped as a single opponent) with adjustment for home-ground advantage and for a linear effect of age difference between competing teams. A random effect allowed for different mean scores for Aspire and opposition teams. All effects were estimated as factors via log-transformation and presented as percent differences in scores. Inferences were based on the span of 90% confidence intervals in relation to thresholds for small factor effects of x/÷1.10 (+10%/-9%). Most effects were clear only when data for the three teams were combined. Older teams showed a small 27% increase in goals scored per year of age difference (90% confidence interval 13 to 42%). Aspire experienced a small home-ground advantage of 16% (-5 to 41%), whereas opposition teams experienced 31% (7 to 60%) on their own ground. After adjustment for these effects, the Aspire teams scored on average 1.5 goals per match, with little change in the five years of their existence, whereas their opponents' scores fell from 1.4 in their first year to 1.0 in their last. The difference in progression was trivial over one year (7%, -4 to 20%), small over two years (15%, -8 to 44%), but unclear over >2 years.
In conclusion, the generalized mixed model has marginal utility for estimating progression of soccer scores, owing to the uncertainty arising from low game scores. The estimates are likely to be more precise and useful in sports with higher game scores. Key points: A generalized linear mixed model is the approach for tracking game scores, key performance indicators or other measures of performance based on counts in sports where changes within and/or between games/seasons have to be considered. Game scores in soccer could be useful to track performance progression of teams, but hundreds of games are needed. Fewer games will be needed for tracking performance represented by counts with high scores, such as game scores in rugby or key performance indicators based on frequent events or player actions in any team sport.
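The study's log-linear Poisson structure, multiplicative effects on expected goals, recovered as percent differences, can be sketched on simulated scores. This is a plain Poisson regression fitted by iteratively reweighted least squares (no random effects or over-dispersion, and all effect sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600
home = rng.integers(0, 2, n)                   # home-ground indicator
age_diff = rng.normal(scale=1.0, size=n)       # age difference in years

# Log-linear model for goals: baseline 1.5 goals, +16% at home,
# +27% per year of age advantage (invented values echoing the abstract).
log_mu = np.log(1.5) + np.log(1.16) * home + np.log(1.27) * age_diff
goals = rng.poisson(np.exp(log_mu))

# Poisson regression by IRLS (Fisher scoring with log link).
X = np.column_stack([np.ones(n), home, age_diff])
beta = np.zeros(3)
for _ in range(50):
    mu = np.exp(X @ beta)
    W = mu                                    # Poisson: Var(y) = mu
    z = X @ beta + (goals - mu) / mu          # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

percent_effects = 100 * (np.exp(beta) - 1)    # effects as percent differences
```

Exponentiating a coefficient gives the factor effect, hence the paper's percent-difference reporting; the real analysis adds random effects and over-dispersion on top of this.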
Identification of Genetic Loci Associated with Quality Traits in Almond via Association Mapping
Font i Forcada, Carolina; Oraguzie, Nnadozie; Reyes-Chin-Wo, Sebastian; Espiau, Maria Teresa; Socias i Company, Rafael; Fernández i Martí, Angel
2015-01-01
To design an appropriate association study, we need to understand population structure and the structure of linkage disequilibrium within and among populations as well as in different regions of the genome in an organism. In this study, we have used a total of 98 almond accessions, from five continents located and maintained at the Centro de Investigación y Tecnología Agroalimentaria de Aragón (CITA; Spain), and 40 microsatellite markers. Population structure analysis performed in ‘Structure’ grouped the accessions into two principal groups; the Mediterranean (Western-Europe) and the non-Mediterranean, with K = 3, being the best fit for our data. There was a strong subpopulation structure with linkage disequilibrium decaying with increasing genetic distance resulting in lower levels of linkage disequilibrium between more distant markers. A significant impact of population structure on linkage disequilibrium in the almond cultivar groups was observed. The mean r2 value for all intra-chromosomal loci pairs was 0.040, whereas, the r2 for the inter-chromosomal loci pairs was 0.036. For analysis of association between the markers and phenotypic traits, five models comprising both general linear models and mixed linear models were selected to test the marker trait associations. The mixed linear model (MLM) approach using co-ancestry values from population structure and kinship estimates (K model) as covariates identified a maximum of 16 significant associations for chemical traits and 12 for physical traits. This study reports for the first time the use of association mapping for determining marker-locus trait associations in a world-wide almond germplasm collection. It is likely that association mapping will have the most immediate and largest impact on the tier of crops such as almond with the greatest economic value. PMID:26111146
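The pairwise r² statistic underlying the linkage-disequilibrium analysis is just the squared correlation between marker genotypes. A minimal sketch on invented biallelic markers, where one pair is linked and another is independent:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200                                       # number of accessions

# Hypothetical biallelic markers coded 0/1 (invented, not the CITA data):
# m2 mostly copies m1 (tightly linked); m3 is independent of m1.
m1 = rng.integers(0, 2, n)
m2 = np.where(rng.random(n) < 0.9, m1, rng.integers(0, 2, n))
m3 = rng.integers(0, 2, n)

def ld_r2(x, y):
    """Pairwise linkage-disequilibrium statistic: squared correlation r^2."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

linked = ld_r2(m1, m2)
unlinked = ld_r2(m1, m3)
```

The decay of such r² values with genetic distance, high for nearby loci, near zero for distant or inter-chromosomal pairs, is what sets the marker density an association study needs.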
Blood biomarkers in male and female participants after an Ironman-distance triathlon.
Danielsson, Tom; Carlsson, Jörg; Schreyer, Hendrik; Ahnesjö, Jonas; Ten Siethoff, Lasse; Ragnarsson, Thony; Tugetam, Åsa; Bergman, Patrick
2017-01-01
While overall physical activity is clearly associated with a better short-term and long-term health, prolonged strenuous physical activity may result in a rise in acute levels of blood-biomarkers used in clinical practice for diagnosis of various conditions or diseases. In this study, we explored the acute effects of a full Ironman-distance triathlon on biomarkers related to heart-, liver-, kidney- and skeletal muscle damage immediately post-race and after one week's rest. We also examined if sex, age, finishing time and body composition influenced the post-race values of the biomarkers. A sample of 30 subjects was recruited (50% women) to the study. The subjects were evaluated for body composition and blood samples were taken at three occasions, before the race (T1), immediately after (T2) and one week after the race (T3). Linear regression models were fitted to analyse the independent contribution of sex and finishing time controlled for weight, body fat percentage and age, on the biomarkers at the termination of the race (T2). Linear mixed models were fitted to examine if the biomarkers differed between the sexes over time (T1-T3). Being male was a significant predictor of higher post-race (T2) levels of myoglobin, CK, and creatinine levels and body weight was negatively associated with myoglobin. In general, the models were unable to explain the variation of the dependent variables. In the linear mixed models, an interaction between time (T1-T3) and sex was seen for myoglobin and creatinine, in which women had a less pronounced response to the race. Overall women appear to tolerate the effects of prolonged strenuous physical activity better than men as illustrated by their lower values of the biomarkers both post-race as well as during recovery.
Turbulence closure for mixing length theories
NASA Astrophysics Data System (ADS)
Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.
2018-05-01
We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.
Stimulus sensitive gel with radioisotope and methods of making
Weller, Richard E.; Lind, Michael A.; Fisher, Darrell R.; Gutowska, Anna; Campbell, Allison A.
2005-03-22
The present invention is a thermally reversible stimulus-sensitive gel or gelling copolymer radioisotope carrier that is a linear random copolymer of a [meth-]acrylamide derivative and a hydrophilic comonomer, wherein the linear random copolymer is in the form of a plurality of linear chains having a plurality of molecular weights greater than or equal to a minimum gelling molecular weight cutoff. Addition of a biodegradable backbone and/or a therapeutic agent imparts further utility. The method of the present invention for making a thermally reversible stimulus-sensitive gelling copolymer radionuclide carrier has the steps of: (a) mixing a stimulus-sensitive reversible gelling copolymer with an aqueous solvent as a stimulus-sensitive reversible gelling solution; and (b) mixing a radioisotope with said stimulus-sensitive reversible gelling solution as said radioisotope carrier. The gel is enhanced by either combining it with a biodegradable backbone and/or a therapeutic agent in a gelling solution made by mixing the copolymer with an aqueous solvent.
Stimulus sensitive gel with radioisotope and methods of making
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weller, Richard E; Lind, Michael A; Fisher, Darrell R
2001-10-02
The present invention is a thermally reversible stimulus-sensitive gel or gelling copolymer radioisotope carrier that is a linear random copolymer of a [meth]acrylamide derivative and a hydrophilic comonomer, wherein the linear random copolymer is in the form of a plurality of linear chains having a plurality of molecular weights greater than or equal to a minimum gelling molecular weight cutoff. Addition of a biodegradable backbone and/or a therapeutic agent imparts further utility. The method of the present invention for making a thermally reversible stimulus-sensitive gelling copolymer radionuclide carrier has the steps of: (a) mixing a stimulus-sensitive reversible gelling copolymer with an aqueous solvent as a stimulus-sensitive reversible gelling solution; and (b) mixing a radioisotope with said stimulus-sensitive reversible gelling solution as said radioisotope carrier. The gel is enhanced by either combining it with a biodegradable backbone and/or a therapeutic agent in a gelling solution made by mixing the copolymer with an aqueous solvent.
Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA
NASA Astrophysics Data System (ADS)
Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng
2011-12-01
The impact of a high-pressure water-jet on targets of different materials produces different reflected mixed sounds. In order to reconstruct the reflected sound distribution on the linear detecting line accurately and to separate the environmental noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed: the environmental noise was simulated by band-limited white noise, the reflected sound signal was simulated by a pulse signal, and the attenuation produced by transmission over different distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from these simulated signals, then whitened and separated by ICA. The final results verified that environmental noise separation and reconstruction of the detecting-line sound distribution can be achieved effectively.
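The whitening-then-separation pipeline described above can be sketched with a minimal FastICA implementation. The pulse and band-limited-noise sources, the two-microphone mixing matrix, and all parameter values below are illustrative stand-ins, not the study's actual signals:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh contrast); X has shape (n_signals, n_samples)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X        # whiten: E[Z Z^T] = I
    W = rng.standard_normal((X.shape[0], X.shape[0]))
    for _ in range(n_iter):
        th = np.tanh(W @ Z)
        # Fixed-point update: W+ = E[g(WZ) Z^T] - diag(E[g'(WZ)]) W
        W = (th @ Z.T) / Z.shape[1] - np.diag((1 - th**2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                               # symmetric re-orthogonalization
    return W @ Z                                 # sources, up to order/sign/scale

# Simulated detecting-line scenario: a pulse-like reflected signal plus
# band-limited noise, received with different attenuations at two microphones.
t = np.linspace(0.0, 1.0, 4000)
pulse = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 300 * t)
noise = np.convolve(np.random.default_rng(1).standard_normal(t.size),
                    np.ones(25) / 25, mode="same")   # crude band-limiting
S = np.vstack([pulse, noise])
A = np.array([[1.0, 0.6], [0.4, 1.0]])               # attenuation (mixing) matrix
S_hat = fastica(A @ S)
```

The recovered rows of `S_hat` match the true sources only up to permutation, sign, and scale, which is the well-known indeterminacy of ICA.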
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for model errors. To deal with missingness, we employ an informative missing data model. Joint models are developed that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
BIODEGRADATION PROBABILITY PROGRAM (BIODEG)
The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...
CFD simulation of vertical linear motion mixing in anaerobic digester tanks.
Meroney, Robert N; Sheker, Robert E
2014-09-01
Computational fluid dynamics (CFD) was used to simulate the mixing characteristics of a small circular anaerobic digester tank (diameter 6 m) equipped sequentially with 13 different plunger-type vertical linear motion mixers and two different internal draft-tube mixers. Rates of mixing of step injection of tracers were calculated, from which active volume (AV) and hydraulic retention time (HRT) could be derived. Washout characteristics were compared to analytic formulae to estimate any presence of partial mixing, dead volume, short-circuiting, or piston flow. Active volumes were also estimated based on tank regions that exceeded minimum velocity criteria. The mixers were ranked based on an ad hoc criterion related to the ratio of AV to unit power (UP), or AV/UP. The best plunger mixers were found to behave about the same as the conventional draft-tube mixers of similar UP.
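As a toy version of the washout analysis above: for an ideally mixed tank, a step-injected tracer decays exponentially with time constant V_active/Q, so a log-linear fit to the washout curve recovers the active volume, and comparing it to the geometric volume exposes dead volume. All numbers below are invented for illustration, not taken from the study:

```python
import numpy as np

Q = 2.0                 # flow rate, m^3/h (made-up value)
V_geom = 170.0          # geometric tank volume, m^3 (made-up value)
V_active_true = 140.0   # truly mixed ("active") volume, m^3
t = np.linspace(0.0, 400.0, 200)
C = np.exp(-Q * t / V_active_true)          # ideal-CSTR step-tracer washout

slope = np.polyfit(t, np.log(C), 1)[0]      # slope = -Q / V_active
V_active_est = -Q / slope
active_fraction = V_active_est / V_geom     # ~0.82, i.e. ~18% dead volume
```

With real CFD or tracer data the washout is noisy and only approximately exponential; departures from the fitted line are what signal short-circuiting or piston flow.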
NASA Technical Reports Server (NTRS)
Usry, J. W.; Witte, W. G.; Whitlock, C. H.; Gurganus, E. A.
1979-01-01
Experimental measurements were made of upwelled spectral signatures of various concentrations of industrial waste products mixed with water in a large water tank. Radiance and reflectance spectra for a biosolid waste product (sludge) mixed with conditioned tap water and natural river water are reported. Results of these experiments indicate that reflectance increases with increasing concentration of the sludge at practically all wavelengths, for concentrations of total suspended solids up to 117 ppm in conditioned tap water and 171 ppm in natural river water. Significant variations in the spectra were observed and may be useful in defining spectral characteristics for this waste product. No significant differences in spectral shape were apparent between the two experiments. Reflectance values, however, were generally greater in natural river water for wavelengths greater than 540 nm. Reflectance may be considered to increase linearly with concentration of total suspended solids from 5 to 171 ppm at all wavelengths without introducing errors larger than 10 percent.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2001-01-01
Independent Component Analysis (ICA) is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has recently been used for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e., an exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, ICA performs a rotation of the classical PCA (or EOF) solution. Unlike other rotation techniques (RT), this rotation uses no localization criterion; only the global generalization of decorrelation to statistical independence is used. This rotation of the PCA solution appears able to overcome the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.
The role of multi-target policy instruments in agri-environmental policy mixes.
Schader, Christian; Lampkin, Nicholas; Muller, Adrian; Stolze, Matthias
2014-12-01
The Tinbergen Rule has been used to criticise multi-target policy instruments for being inefficient. The aim of this paper is to clarify the role of multi-target policy instruments using the case of agri-environmental policy. Employing an analytical linear optimisation model, this paper demonstrates that there is no general contradiction between multi-target policy instruments and the Tinbergen Rule, if multi-target policy instruments are embedded in a policy-mix with a sufficient number of targeted instruments. We show that the relation between cost-effectiveness of the instruments, related to all policy targets, is the key determinant for an economically sound choice of policy instruments. If economies of scope with respect to achieving policy targets are realised, a higher cost-effectiveness of multi-target policy instruments can be achieved. Using the example of organic farming support policy, we discuss several reasons why economies of scope could be realised by multi-target agri-environmental policy instruments. Copyright © 2014 Elsevier Ltd. All rights reserved.
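The paper's point about the Tinbergen Rule can be illustrated with a tiny numerical example (all numbers hypothetical): with at least as many instruments as targets, a multi-target instrument that enjoys economies of scope lowers the cost of reaching both targets compared with targeted instruments alone:

```python
import numpy as np

# Two policy targets (say, biodiversity and water quality) and three
# instruments: instruments 0 and 1 each address one target; instrument 2
# (e.g. organic farming support) contributes to both at once.
A = np.array([[1.0, 0.0, 0.7],    # target attainment per unit of each instrument
              [0.0, 1.0, 0.8]])
c = np.array([1.0, 1.0, 1.0])     # unit costs (illustrative)
t = np.array([1.0, 1.0])          # required target levels

# Mix A: targeted instruments only.
x_targeted = np.linalg.solve(A[:, :2], t)       # one unit of each
cost_targeted = c[:2] @ x_targeted              # = 2.0

# Mix B: one unit of the multi-target instrument, topped up with targeted ones.
topup = t - A[:, 2] * 1.0                       # remaining gap per target
cost_mix = c[2] * 1.0 + c[:2] @ topup           # = 1.5
```

The multi-target instrument here delivers 0.7 + 0.8 = 1.5 units of target attainment per unit of cost, so including it is cheaper; there is no conflict with the Tinbergen Rule because three instruments serve two targets.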
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data for juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the scope of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model that describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly both on the exploration of the data, to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests, and on the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets, using generalised additive mixed models for the first time.
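The core statistical point, that a straight line cannot represent sigmoid growth, is easy to demonstrate numerically. The logistic curve and noise level below are invented for illustration (not real Calliphora vicina data), and a simple polynomial stands in for the paper's additive smoothers:

```python
import numpy as np

# Simulated larval length following a logistic growth curve.
rng = np.random.default_rng(0)
age = np.linspace(0.0, 10.0, 80)                  # days
length = 16.0 / (1.0 + np.exp(-(age - 4.0))) + rng.normal(0.0, 0.3, age.size)

def rss(deg):
    """Residual sum of squares of a polynomial fit of the given degree."""
    resid = length - np.polyval(np.polyfit(age, length, deg), age)
    return float(resid @ resid)

rss_linear, rss_cubic = rss(1), rss(3)
# The straight line leaves large structured residuals around the sigmoid;
# even a modest non-linear smoother captures the shape far better.
```

Plotting the residuals of the linear fit against age would show the systematic curvature that, in an age-estimation setting, translates directly into biased PMI(min) values.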
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
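A minimal sketch of the central idea, solving for linearly entering parameters analytically while Monte Carlo sampling the non-linear ones, on an invented exponential-decay model (none of this is the paper's actual data, priors, or implementation):

```python
import numpy as np

# Toy mixed linear/non-linear inversion: d = a * exp(-x / tau) + noise,
# where the amplitude a enters linearly and tau non-linearly.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 5.0, 60)
a_true, tau_true, sigma = 2.0, 1.5, 0.05
d = a_true * np.exp(-x / tau_true) + rng.normal(0.0, sigma, x.size)

def profiled_misfit(tau):
    """Residual sum of squares after solving for the linear amplitude exactly."""
    g = np.exp(-x / tau)
    a_hat = (g @ d) / (g @ g)            # analytic least-squares solution for a
    r = d - a_hat * g
    return r @ r

tau = 1.0
m = profiled_misfit(tau)
samples = []
for _ in range(4000):                    # Metropolis random walk on tau alone
    prop = tau + rng.normal(0.0, 0.1)
    if prop > 0:
        m_p = profiled_misfit(prop)
        if np.log(rng.random()) < (m - m_p) / (2.0 * sigma**2):
            tau, m = prop, m_p
    samples.append(tau)
tau_post = float(np.mean(samples[1000:]))
```

Because the linear parameter is eliminated in closed form at every proposal, the sampler explores only the non-linear dimension, which is what makes this hybrid strategy efficient for large mixed problems.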
Basin-scale observations of isoprene and monoterpenes in the Arctic and Atlantic Oceans
NASA Astrophysics Data System (ADS)
Carpenter, L.; Hackenberg, S.; Andrews, S.; Minaeian, J.; Chance, R.; Arnold, S.; Spracklen, D. V.; Walker, H.; Brewin, R. J.; Tarran, G.; Tilstone, G.; Small, A.; Bouman, H. A.
2016-12-01
We report surface ocean concentrations, atmospheric mixing ratios and calculated sea-to-air fluxes of isoprene and six monoterpenes (α- and β-pinene, myrcene, Δ3-carene, ocimene, and limonene) spanning approximately 130 degrees of latitude (80°N to 50°S) in the Arctic and Atlantic Oceans. Oceanic isoprene concentrations showed covariance with a number of concurrently monitored biological parameters, and these relationships were dependent on sea surface temperatures. Parameterisations of isoprene seawater concentrations based on linear regression analyses of these relationships perform well for Arctic and Atlantic data. Levels of all monoterpenes were generally low, with oceanic concentrations ranging from below the detection limit of <1 pmol L-1 to 5 pmol L-1. In air, monoterpene mixing ratios varied from below the detection limit (~1 pptv) to 5 pptv, after careful filtering for ship-related contamination. Unlike in previous studies, no clear trends or relationships of the monoterpenes with biological data were found. Limonene showed generally the highest levels in water (up to 84 pmol L-1 in the Atlantic Ocean) and air; however, this was attributed mostly to shipborne contamination. We calculate global sea-air fluxes of isoprene and monoterpenes based on these data and compare them to previous estimates.
Su, Min; Boots, Mike
2017-03-07
Understanding the drivers of parasite evolution, and in particular of disease virulence, remains a major focus of evolutionary theory. Here, we examine the role of resource quality, and in particular of spatial environmental heterogeneity in the distribution of these resources, in the evolution of virulence. There may be direct effects of resources on host susceptibility and pathogenicity alongside effects on reproduction that indirectly impact host-parasite population dynamics. Therefore, we assume that high resource quality may lead to both increased host reproduction and/or increased disease resistance. In completely mixed populations there is no effect of resource quality on the outcome of disease evolution. However, when there are local interactions, higher resource quality generally selects for higher virulence/transmission under both linear and saturating transmission-virulence trade-off assumptions. The exception is that in castrators (i.e., infected hosts have no reproduction), higher virulence is selected for at both low and high resource quality when infection is a mix of local and global. Heterogeneity in the distribution of environmental resources only has an effect on the outcome in castrators, where random distributions generally select for higher virulence. Overall, our results further underline the importance of considering spatial structure in order to understand evolutionary processes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kanda, L Leann; Abdulhay, Amir; Erickson, Caitlin
2017-05-01
Individual animal personalities interact with environmental conditions to generate differences in behavior, a phenomenon of growing interest for understanding the effects of environmental enrichment on captive animals. Wheels are common environmental enrichment for laboratory rodents, but studies conflict on how this influences behavior, and interaction of wheels with individual personalities has rarely been examined. We examined whether wheel access altered personality profiles in adult Siberian dwarf hamsters. We assayed animals in a tunnel maze twice for baseline personality, then again at two and at seven weeks after the experimental group was provisioned with wheels in their home cages. Linear mixed model selection was used to assess changes in behavior over time and across environmental gradient of wheel exposure. While animals showed consistent inter-individual differences in activity, activity personality did not change upon exposure to a wheel. Boldness also varies among individuals, and there is evidence for female boldness scores converging after wheel exposure, that is, opposite shifts in behavior by high and low boldness individuals, although sample size is too small for the mixed model results to be robust. In general, Siberian dwarf hamsters appear to show low behavioral plasticity, particularly in general activity, in response to running wheels. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Khuseynov, Dmitry; Blackstone, Christopher C.; Culberson, Lori M.; Sanov, Andrei
2014-09-01
We present a model for laboratory-frame photoelectron angular distributions in direct photodetachment from (in principle) any molecular orbital using linearly polarized light. A transparent mathematical approach is used to generalize the Cooper-Zare central-potential model to anionic states of any mixed character. In the limit of atomic-anion photodetachment, the model reproduces the Cooper-Zare formula. In the case of an initial orbital described as a superposition of s and p-type functions, the model yields the previously obtained s-p mixing formula. The formalism is further advanced using the Hanstorp approximation, whereas the relative scaling of the partial-wave cross-sections is assumed to follow the Wigner threshold law. The resulting model describes the energy dependence of photoelectron anisotropy for any atomic, molecular, or cluster anions, usually without requiring a direct calculation of the transition dipole matrix elements. As a benchmark case, we apply the p-d variant of the model to the experimental results for NO- photodetachment and show that the observed anisotropy trend is described well using physically meaningful values of the model parameters. Overall, the presented formalism delivers insight into the photodetachment process and affords a new quantitative strategy for analyzing the photoelectron angular distributions and characterizing mixed-character molecular orbitals using photoelectron imaging spectroscopy of negative ions.
Khuseynov, Dmitry; Blackstone, Christopher C; Culberson, Lori M; Sanov, Andrei
2014-09-28
We present a model for laboratory-frame photoelectron angular distributions in direct photodetachment from (in principle) any molecular orbital using linearly polarized light. A transparent mathematical approach is used to generalize the Cooper-Zare central-potential model to anionic states of any mixed character. In the limit of atomic-anion photodetachment, the model reproduces the Cooper-Zare formula. In the case of an initial orbital described as a superposition of s and p-type functions, the model yields the previously obtained s-p mixing formula. The formalism is further advanced using the Hanstorp approximation, whereas the relative scaling of the partial-wave cross-sections is assumed to follow the Wigner threshold law. The resulting model describes the energy dependence of photoelectron anisotropy for any atomic, molecular, or cluster anions, usually without requiring a direct calculation of the transition dipole matrix elements. As a benchmark case, we apply the p-d variant of the model to the experimental results for NO(-) photodetachment and show that the observed anisotropy trend is described well using physically meaningful values of the model parameters. Overall, the presented formalism delivers insight into the photodetachment process and affords a new quantitative strategy for analyzing the photoelectron angular distributions and characterizing mixed-character molecular orbitals using photoelectron imaging spectroscopy of negative ions.
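The Cooper-Zare formula with the Hanstorp substitution, taking the ratio of the (l+1)- and (l−1)-wave radial amplitudes to grow linearly with electron energy, can be coded directly. The parameter values below are arbitrary, chosen only to exhibit the threshold and high-energy limits for p-orbital (l = 1) detachment, not fitted to the NO- data:

```python
import numpy as np

def beta_cooper_zare(eps, l, A, cos_delta=1.0):
    """Cooper-Zare photoelectron anisotropy for detachment from an orbital of
    angular momentum l, with the Hanstorp approximation chi = A * eps for the
    ratio of the two allowed partial-wave radial amplitudes."""
    chi = A * eps
    num = (l * (l - 1) + (l + 1) * (l + 2) * chi**2
           - 6 * l * (l + 1) * chi * cos_delta)
    den = (2 * l + 1) * (l + (l + 1) * chi**2)
    return num / den

# p-orbital detachment (l = 1): beta = 0 at threshold (only the s-wave
# survives, per the Wigner law), dips negative at intermediate energy,
# and tends to the pure d-wave limit beta -> 1 at high energy.
beta_threshold = beta_cooper_zare(0.0, l=1, A=1.0)
beta_mid = beta_cooper_zare(1.0, l=1, A=1.0)        # chi = 1 gives beta = -2/3
beta_high = beta_cooper_zare(1e6, l=1, A=1.0)
```

Sweeping `eps` over the photoelectron energy range and adjusting `A` and `cos_delta` is the kind of fit against measured anisotropies that the abstracts describe, with the mixed-character and p-d extensions modifying the numerator and denominator weights.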
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
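The stated functional form, linear in non-dimensional surface position and logarithmic in Strouhal number, amounts to an ordinary least-squares fit at each observer angle. The coefficients, variable ranges, and noise level below are synthetic stand-ins, not NASA measurements:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
h_d = rng.uniform(0.5, 3.0, n)             # surface standoff / nozzle diameter
x_d = rng.uniform(2.0, 12.0, n)            # surface length / nozzle diameter
St = 10.0 ** rng.uniform(-1.0, 1.0, n)     # Strouhal number
c_true = np.array([2.0, -1.5, 0.3, -4.0])  # c0 + c1*h/D + c2*x/D + c3*log10(St)
X = np.column_stack([np.ones(n), h_d, x_d, np.log10(St)])
delta_spl = X @ c_true + rng.normal(0.0, 0.2, n)   # "measured" surface effect, dB

coef, *_ = np.linalg.lstsq(X, delta_spl, rcond=None)
```

Repeating such a fit independently at each observer angle and interpolating the coefficient vectors between angles reproduces the structure of the model described in the abstract.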
Review: Game theory of public goods in one-shot social dilemmas without assortment.
Archetti, Marco; Scheuring, István
2012-04-21
We review the theory of public goods in biology. In the N-person prisoner's dilemma, where the public good is a linear function of the individual contributions, cooperation requires some form of assortment, for example due to kin discrimination, population viscosity or repeated interactions. In most social species ranging from bacteria to humans, however, public goods are usually a non-linear function of the contributions, which makes cooperation possible without assortment. More specifically, a polymorphic state can be stable in which cooperators and non-cooperators coexist. The existence of mixed equilibria in public goods games is a fundamental result in the study of cooperation that has been overlooked so far, because of the disproportionate attention given to the two- and N-person prisoner's dilemma. Methods and results from games with pairwise interactions or linear benefits cannot, in general, be extended to the analysis of public goods. Game theory helps explain the production of public goods in one-shot, N-person interactions without assortment; it leads to predictions that can be easily tested and allows a prescriptive approach to cooperation. Copyright © 2011 Elsevier Ltd. All rights reserved.
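The existence of a stable mixed equilibrium under non-linear (here sigmoid) benefits can be checked numerically. The group size, cost, and benefit function below are arbitrary illustrations of the class of games the review discusses:

```python
import numpy as np
from math import comb

N, c = 10, 0.1
b = lambda k: 1.0 / (1.0 + np.exp(-(k - 5.0)))   # sigmoid benefit of k contributors

def delta_payoff(x):
    """Expected gain from contributing when each of the other N-1 players
    contributes independently with probability x."""
    return sum(comb(N - 1, k) * x**k * (1.0 - x)**(N - 1 - k) * (b(k + 1) - b(k))
               for k in range(N)) - c

xs = np.linspace(0.001, 0.999, 999)
d = np.array([delta_payoff(x) for x in xs])
roots = np.flatnonzero(np.diff(np.sign(d)))      # interior equilibria
# Negative at both ends with a positive hump in between: the lower root is an
# invasion barrier, the upper root a stable mix of cooperators and defectors.
```

With a linear benefit function, b(k+1) − b(k) is constant, so `delta_payoff` never changes sign and no interior equilibrium exists, which is exactly the contrast the review draws with the N-person prisoner's dilemma.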
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt
A system and a method are provided for generating a three-dimensional image of the compressional velocity VP, shear velocity VS and velocity ratio VP/VS of a rock formation. A first acoustic signal includes a first plurality of pulses. A second acoustic signal from a second source includes a second plurality of pulses. A detected signal returning to the borehole includes a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within an intersection volume. The received signal is processed to extract the signal over noise and/or signals resulting from linear interaction, and the three-dimensional image is generated.
Qudit-Basis Universal Quantum Computation Using χ^{(2)} Interactions.
Niu, Murphy Yuezhen; Chuang, Isaac L; Shapiro, Jeffrey H
2018-04-20
We prove that universal quantum computation can be realized-using only linear optics and χ^{(2)} (three-wave mixing) interactions-in any (n+1)-dimensional qudit basis of the n-pump-photon subspace. First, we exhibit a strictly universal gate set for the qubit basis in the one-pump-photon subspace. Next, we demonstrate qutrit-basis universality by proving that χ^{(2)} Hamiltonians and photon-number operators generate the full u(3) Lie algebra in the two-pump-photon subspace, and showing how the qutrit controlled-Z gate can be implemented with only linear optics and χ^{(2)} interactions. We then use proof by induction to obtain our general qudit result. Our induction proof relies on coherent photon injection or subtraction, a technique enabled by χ^{(2)} interaction between the encoding modes and ancillary modes. Finally, we show that coherent photon injection is more than a conceptual tool, in that it offers a route to preparing high-photon-number Fock states from single-photon Fock states.
Venkataraman, Vinay; Turaga, Pavan; Baran, Michael; Lehrer, Nicole; Du, Tingfang; Cheng, Long; Rikakis, Thanassis; Wolf, Steven L.
2016-01-01
In this paper, we propose a general framework for tuning component-level kinematic features using therapists' overall impressions of movement quality, in the context of a Home-based Adaptive Mixed Reality Rehabilitation (HAMRR) system. We propose a linear combination of non-linear kinematic features to model wrist movement, and an approach to learn feature thresholds and weights using high-level labels of overall movement quality provided by a therapist. The kinematic features are chosen such that they correlate with clinical assessment scores of wrist-movement quality. Further, the proposed features are designed to be reliably extracted from an inexpensive and portable motion capture system using a single reflective marker on the wrist. Using a dataset collected from ten stroke survivors, we demonstrate that the framework can be reliably used for movement quality assessment in HAMRR systems. The system is currently being deployed for large-scale evaluations, and will represent an increasingly important application area of motion capture and activity analysis. PMID:25438331
NO(y) Correlation with N2O and CH4 in the Midlatitude Stratosphere
NASA Technical Reports Server (NTRS)
Kondo, Y.; Schmidt, U.; Sugita, T.; Engel, A.; Koike, M.; Aimedieu, P.; Gunson, M. R.; Rodriguez, J.
1996-01-01
Total reactive nitrogen (NO(y)), nitrous oxide (N2O), methane (CH4), and ozone (O3) were measured on board a balloon launched from Aire sur l'Adour (44 deg N, 0 deg W), France, on October 12, 1994. Generally, NO(y) was highly anti-correlated with N2O and CH4 at altitudes between 15 and 32 km. The linear NO(y) - N2O and NO(y) - CH4 relationships obtained by the present observations are very similar to those obtained previously on board ER-2 and DC-8 aircraft at altitudes below 20 km in the northern hemisphere. They also agree well with the data obtained by the Atmospheric Trace Molecule Spectroscopy (ATMOS) instrument at 41 deg N in November 1994. Slight departures from the linear correlations occurred around 29 km, where N2O and CH4 mixing ratios were larger than typical midlatitude values, suggesting horizontal transport of tropical air masses to northern midlatitudes in a confined altitude region.
Qudit-Basis Universal Quantum Computation Using χ(2 ) Interactions
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-04-01
We prove that universal quantum computation can be realized—using only linear optics and χ(2) (three-wave mixing) interactions—in any (n+1)-dimensional qudit basis of the n-pump-photon subspace. First, we exhibit a strictly universal gate set for the qubit basis in the one-pump-photon subspace. Next, we demonstrate qutrit-basis universality by proving that χ(2) Hamiltonians and photon-number operators generate the full u(3) Lie algebra in the two-pump-photon subspace, and showing how the qutrit controlled-Z gate can be implemented with only linear optics and χ(2) interactions. We then use proof by induction to obtain our general qudit result. Our induction proof relies on coherent photon injection or subtraction, a technique enabled by χ(2) interaction between the encoding modes and ancillary modes. Finally, we show that coherent photon injection is more than a conceptual tool, in that it offers a route to preparing high-photon-number Fock states from single-photon Fock states.
Explicit methods in extended phase space for inseparable Hamiltonian problems
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2015-03-01
We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
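A minimal version of the extended phase space leapfrog can be sketched as follows. The state (q, p) is duplicated into a second copy (x, y); the extended Hamiltonian H(q, y) + H(x, p) splits into two exactly integrable pieces, and the copies are recombined after each step. Simple averaging is used here as the mixing map, and the inseparable test Hamiltonian H = (q² + p² + q²p²)/2 is invented for illustration; the paper's actual mixing transformations and test problems differ:

```python
import numpy as np

def dHdq(q, p): return q * (1.0 + p * p)
def dHdp(q, p): return p * (1.0 + q * q)
H = lambda q, p: 0.5 * (q * q + p * p + q * q * p * p)

def step(q, p, x, y, h):
    """One A(h/2) B(h) A(h/2) leapfrog step for H(q, y) + H(x, p), then mixing."""
    p -= 0.5 * h * dHdq(q, y); x += 0.5 * h * dHdp(q, y)   # A-flow: q, y frozen
    q += h * dHdp(x, p);       y -= h * dHdq(x, p)         # B-flow: x, p frozen
    p -= 0.5 * h * dHdq(q, y); x += 0.5 * h * dHdp(q, y)   # A-flow again
    qm, pm = 0.5 * (q + x), 0.5 * (p + y)                  # mix the two copies
    return qm, pm, qm, pm

q = p = 0.6
x, y = q, p
E0 = H(q, p)
for _ in range(20000):
    q, p, x, y = step(q, p, x, y, 1e-3)
drift = abs(H(q, p) - E0) / E0
```

Each sub-flow is explicit because the variables it differentiates with respect to are held fixed during that sub-flow, which is the whole trick: no implicit solves are needed even though H itself is inseparable.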
Emergence of a fluctuation relation for heat in nonequilibrium Landauer processes
NASA Astrophysics Data System (ADS)
Taranto, Philip; Modi, Kavan; Pollock, Felix A.
2018-05-01
In a generalized framework for the Landauer erasure protocol, we study bounds on the heat dissipated in typical nonequilibrium quantum processes. In contrast to thermodynamic processes, quantum fluctuations are not suppressed in the nonequilibrium regime and cannot be ignored, making such processes difficult to understand and treat. Here we derive an emergent fluctuation relation that virtually guarantees the average heat produced to be dissipated into the reservoir when either the system or the reservoir (or both) is large, or when the temperature is high. The implication of our result is that for nonequilibrium processes, heat fluctuations away from the average value are suppressed independently of the underlying dynamics, exponentially quickly in the dimension of the larger subsystem and linearly in the inverse temperature. We achieve these results by generalizing a concentration of measure relation for subsystem states to the case where the global state is mixed.
Pretz, Christopher R; Ketchum, Jessica M; Cuthbert, Jeffery P
2014-01-01
An untapped wealth of temporal information is captured within the Traumatic Brain Injury Model Systems National Database. Utilization of appropriate longitudinal analyses can provide an avenue toward unlocking the value of this information. This article highlights 2 statistical methods for assessing change over time in noncontinuous outcomes, focusing on dichotomous responses. Specifically, the intent of this article is to familiarize the rehabilitation community with the application of generalized estimating equations and generalized linear mixed models as used in longitudinal studies. An introduction to each method is provided, and similarities and differences between the 2 are discussed. In addition, to reinforce the ideas and concepts embodied in each approach, we illustrate each method using examples based on data from the Rocky Mountain Regional Brain Injury System.
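To illustrate the GEE side of the comparison (not the authors' analysis, and using simulated rather than TBI Model Systems data), the sketch below fits a marginal logistic model to clustered binary responses and computes cluster-robust sandwich standard errors, which amounts to GEE with an independence working correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_obs = 300, 5           # subjects and repeated measurements (assumed)
true_beta = np.array([-0.5, 1.0])

# Simulate clustered binary outcomes with a subject-level random intercept.
subj = np.repeat(np.arange(n_subj), n_obs)
x = rng.normal(size=n_subj * n_obs)
b = rng.normal(scale=0.5, size=n_subj)[subj]
X = np.column_stack([np.ones_like(x), x])
p = 1.0 / (1.0 + np.exp(-(X @ true_beta + b)))
y = rng.binomial(1, p)

# Fit the marginal logistic model by iteratively reweighted least squares.
beta = np.zeros(2)
for _ in range(50):
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))
    w = mu * (1.0 - mu)
    z = eta + (y - mu) / w           # working response
    XtW = X.T * w
    beta_new = np.linalg.solve(XtW @ X, XtW @ z)
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new

# Cluster-robust (sandwich) covariance: GEE with independence working structure.
mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
bread = np.linalg.inv(X.T @ ((mu * (1 - mu))[:, None] * X))
meat = np.zeros((2, 2))
for i in range(n_subj):
    idx = subj == i
    score_i = X[idx].T @ (y[idx] - mu[idx])
    meat += np.outer(score_i, score_i)
robust_cov = bread @ meat @ bread
robust_se = np.sqrt(np.diag(robust_cov))
print(beta, robust_se)
```

Note the contrast the article draws: these GEE coefficients are population-averaged, whereas a GLMM with a random intercept would estimate subject-specific (conditional) effects, which for logistic models are slightly larger in magnitude.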
Bayesian Models for Astrophysical Data Using R, JAGS, Python, and Stan
NASA Astrophysics Data System (ADS)
Hilbe, Joseph M.; de Souza, Rafael S.; Ishida, Emille E. O.
2017-05-01
This comprehensive guide to Bayesian methods in astronomy enables hands-on work by supplying complete R, JAGS, Python, and Stan code, to use directly or to adapt. It begins by examining the normal model from both frequentist and Bayesian perspectives and then progresses to a full range of Bayesian generalized linear and mixed or hierarchical models, as well as additional types of models such as ABC and INLA. The book provides code that is largely unavailable elsewhere and includes details on interpreting and evaluating Bayesian models. Initial discussions offer models in synthetic form so that readers can easily adapt them to their own data; later the models are applied to real astronomical data. The consistent focus is on hands-on modeling, analysis of data, and interpretations that address scientific questions. A must-have for astronomers, its concrete approach will also be attractive to researchers in the sciences more generally.
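The book's starting point, the normal model viewed from both perspectives, can be illustrated in a few lines (a generic conjugate-normal sketch, not code from the book): with known observation variance, the Bayesian posterior mean is a precision-weighted compromise between the prior mean and the frequentist estimate, the sample mean:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0                       # known observation sd (assumption)
data = rng.normal(loc=5.0, scale=sigma, size=50)

# Frequentist estimate: the sample mean.
mle = data.mean()

# Conjugate Bayesian update with a Normal(mu0, tau0^2) prior on the mean.
mu0, tau0 = 0.0, 10.0             # weakly informative prior (assumption)
prec_post = 1.0 / tau0**2 + len(data) / sigma**2
mu_post = (mu0 / tau0**2 + data.sum() / sigma**2) / prec_post
sd_post = prec_post ** -0.5

print(mle, mu_post, sd_post)
```

With a weak prior the posterior mean essentially reproduces the frequentist answer; the book builds from this point to models where the two perspectives genuinely diverge.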
Physical Activity Predicts Performance in an Unpracticed Bimanual Coordination Task.
Boisgontier, Matthieu P; Serbruyns, Leen; Swinnen, Stephan P
2017-01-01
Practice of a given physical activity is known to improve the motor skills related to this activity. However, whether unrelated skills are also improved is still unclear. To test the impact of physical activity on an unpracticed motor task, 26 young adults completed the international physical activity questionnaire and performed a bimanual coordination task they had never practiced before. Results showed that higher total physical activity predicted higher performance in the bimanual task, controlling for multiple factors such as age, physical inactivity, music practice, and computer games practice. Linear mixed models allowed this effect of physical activity to be generalized to a large population of bimanual coordination conditions. This finding runs counter to the notion that generalized motor abilities do not exist and supports the existence of a "learning to learn" skill that could be improved through physical activity and that impacts performance in tasks that are not necessarily related to the practiced activity.
VENVAL : a plywood mill cost accounting program
Henry Spelter
1991-01-01
This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...
Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC
2015-01-01
The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu’s multiple comparisons with the best (MCB) – adapted from Dunnett’s multiple comparisons with control (MCC) – has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890
A review of the effect of traffic and weather characteristics on road safety.
Theofilatos, Athanasios; Yannis, George
2014-11-01
Taking into consideration the increasing availability of real-time traffic data and stimulated by the importance of proactive safety management, this paper attempts to provide a review of the effect of traffic and weather characteristics on road safety, identify the gaps and discuss the needs for further research. Despite the existence of generally mixed evidence on the effect of traffic parameters, a few patterns can be observed. For instance, traffic flow seems to have a non-linear relationship with accident rates, even though some studies suggest a linear relationship with accidents. On the other hand, increased speed limits have been found to have a straightforward positive relationship with accident occurrence. Regarding weather effects, the effect of precipitation is quite consistent and generally leads to increased accident frequency, but it does not seem to have a consistent effect on severity. The impact of other weather parameters on safety, such as visibility, wind speed and temperature, has not been found to be straightforward so far. The increasing use of real-time data not only makes it easier to identify the safety impact of traffic and weather characteristics, but most importantly makes possible the identification of their combined effect. The more systematic use of these real-time data may address several of the research gaps identified in this research. Copyright © 2014 Elsevier Ltd. All rights reserved.
Between- and within-lake responses of macrophyte richness metrics to shoreline development
Beck, Marcus W.; Vondracek, Bruce C.; Hatch, Lorin K.
2013-01-01
Aquatic habitat in littoral environments can be affected by residential development of shoreline areas. We evaluated the relationship between macrophyte richness metrics and shoreline development to quantify indicator response at 2 spatial scales for Minnesota lakes. First, the response of total, submersed, and sensitive species to shoreline development was evaluated within lakes to quantify macrophyte response as a function of distance to the nearest dock. Within-lake analyses using generalized linear mixed models focused on 3 lakes of comparable size with a minimal influence of watershed land use. Survey points farther from docks had higher total species richness and presence of species sensitive to disturbance. Second, between-lake effects of shoreline development on total, submersed, emergent-floating, and sensitive species were evaluated for 1444 lakes. Generalized linear models were developed for all lakes and stratified subsets to control for lake depth and watershed land use. Between-lake analyses indicated a clear response of macrophyte richness metrics to increasing shoreline development, such that fewer emergent-floating and sensitive species were correlated with increasing density of docks. These trends were particularly evident for deeper lakes with lower watershed development. Our results provide further evidence that shoreline development is associated with degraded aquatic habitat, particularly by illustrating the response of macrophyte richness metrics across multiple lake types and different spatial scales.
Des Roches, Carrie A.; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David
2016-01-01
Purpose: The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Method: Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Results: Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Conclusions: Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type. PMID:27997950
Numerical Technology for Large-Scale Computational Electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharpe, R; Champagne, N; White, D
The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special-purpose preconditioners were investigated. Special-purpose preconditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
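The flavor of pairing an iterative method with a preconditioner can be seen in a minimal Jacobi-preconditioned conjugate gradient, shown here for a real symmetric positive-definite system for simplicity; the CEM systems in the report are complex valued and indefinite and need methods such as GMRES or QMR instead:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=1000):
    """Jacobi-preconditioned conjugate gradient for SPD A.

    M_inv_diag holds 1/diag(A); the preconditioner solve is a cheap
    elementwise product, the simplest structure-aware choice.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SPD test system: a 1D Laplacian-like matrix (an illustrative stand-in,
# not a CEM discretization).
n = 100
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)
b = np.ones(n)
x, iters = pcg(A, b, 1.0 / np.diag(A))
print(iters, np.linalg.norm(A @ x - b))
```

For this constant-diagonal matrix the Jacobi preconditioner is only a rescaling; the structure-exploiting preconditioners described in the report replace that diagonal solve with something tailored to the matrix blocks.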
Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun
2013-09-01
By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Then, correlation structures including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) structure improved the fitting precision of the branch diameter and length mixed-effect models significantly, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe heteroscedasticity when building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function improved the fitting effect of the branch angle mixed model significantly, whereas the CF2 function improved the fitting effect of the branch diameter and length mixed models significantly. Model validation confirmed that the mixed-effect model could improve the precision of prediction, as compared with the traditional regression model, for the branch size prediction of Pinus koraiensis plantation.
Correcting for population structure and kinship using the linear mixed model: theory and extensions.
Hoffman, Gabriel E
2013-01-01
Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.
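A bare-bones version of the mixed-model machinery discussed here (a generic sketch on simulated data, not the paper's LRLMM): rotate by the eigenvectors of the kinship matrix so the covariance becomes diagonal, profile the variance ratio on a grid, and solve a weighted least-squares problem for the fixed effects:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Simulated kinship: block structure mimicking two subpopulations (assumption).
groups = rng.integers(0, 2, size=n)
K = 0.5 * (groups[:, None] == groups[None, :]).astype(float) + 0.5 * np.eye(n)

X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([1.0, 2.0])

# y = X beta + g + e,  g ~ N(0, sg^2 K),  e ~ N(0, se^2 I), with sg = se = 1.
L = np.linalg.cholesky(K + 1e-8 * np.eye(n))
y = X @ true_beta + L @ rng.normal(size=n) + rng.normal(size=n)

# Rotate so the covariance sg^2 K + se^2 I becomes diagonal.
S, U = np.linalg.eigh(K)
yt, Xt = U.T @ y, U.T @ X

def neg_loglik(delta):
    """Profile negative log-likelihood in delta = se^2 / sg^2 (up to constants)."""
    d = S + delta                     # diagonal of K + delta*I after rotation
    w = 1.0 / d
    XtW = Xt.T * w
    beta = np.linalg.solve(XtW @ Xt, XtW @ yt)
    r = yt - Xt @ beta
    s2 = np.sum(w * r**2) / n         # profiled-out overall scale
    return n * np.log(s2) + np.sum(np.log(d)), beta

# Grid search over the variance ratio, then keep the GLS fixed effects.
best = min((neg_loglik(d) for d in np.logspace(-2, 2, 61)), key=lambda t: t[0])
beta_hat = best[1]
print(beta_hat)
```

The paper's point about fixed versus random principal components can be read off this setup: modeling K through its eigenvectors with eigenvalue-weighted shrinkage is what distinguishes the LMM from simply appending top PCs as fixed covariates.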
NASA Astrophysics Data System (ADS)
Stepanova, Larisa; Bronnikov, Sergej
2018-03-01
The crack growth directional angles in the isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (the maximum tangential stress and minimum strain energy density criteria) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is the Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings. The simulation cell contains 400000 atoms. The crack propagation direction angles are obtained and analyzed for mixity parameter values ranging from pure tensile loading to pure shear loading and over a wide range of temperatures (from 0.1 K to 800 K). It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with the crack propagation direction angles given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
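The maximum tangential stress criterion used as one of the two classical benchmarks here has a closed-form kink angle; a quick check of its limiting values (the textbook one-parameter formula, not the paper's multi-parameter criteria):

```python
import numpy as np

def mts_kink_angle(KI, KII):
    """Crack kink angle (radians) from the maximum tangential stress criterion.

    theta0 = 2 * arctan((KI - sqrt(KI^2 + 8 KII^2)) / (4 KII)) for KII != 0;
    theta0 = 0 for pure mode I.
    """
    if KII == 0.0:
        return 0.0
    return 2.0 * np.arctan((KI - np.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))

# Sweep the mode mix from pure tension (mode I) to pure shear (mode II).
for KII_frac in [0.0, 0.25, 0.5, 0.75, 1.0]:
    angle = np.degrees(mts_kink_angle(1.0 - KII_frac, KII_frac))
    print(f"KII fraction {KII_frac:.2f}: kink angle {angle:7.2f} deg")
```

Pure mode I gives a straight-ahead crack (0°), while pure mode II recovers the classical kink of about -70.5°, the benchmark against which the molecular dynamics angles in studies like this one are compared.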
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines with linear piecewise splines, varying the number and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height.
We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves than on the coefficients. Moreover, use of cubic regression splines provides biologically meaningful growth velocity and acceleration curves despite increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
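The spline part of this modeling strategy can be sketched without the mixed-effects machinery (a generic truncated-power-basis fit on synthetic data, not the authors' code or their Peruvian cohort):

```python
import numpy as np

def cubic_spline_basis(t, knots):
    """Truncated power basis for a cubic regression spline: 1, t, t^2, t^3,
    plus (t - k)^3 for t > k at each interior knot."""
    cols = [np.ones_like(t), t, t**2, t**3]
    for k in knots:
        cols.append(np.where(t > k, (t - k)**3, 0.0))
    return np.column_stack(cols)

rng = np.random.default_rng(7)
age = np.sort(rng.uniform(0, 4, size=300))       # years; synthetic data
height = 50 + 25 * np.log1p(age) + rng.normal(scale=1.0, size=age.size)

knots = [1.0, 2.0, 3.0]                          # interior knots (assumption)
B = cubic_spline_basis(age, knots)
coef, *_ = np.linalg.lstsq(B, height, rcond=None)
fitted = B @ coef

# Growth velocity = analytic derivative of the fitted spline.
def velocity(t):
    v = coef[1] + 2 * coef[2] * t + 3 * coef[3] * t**2
    for j, k in enumerate(knots):
        v += np.where(t > k, 3 * coef[4 + j] * (t - k)**2, 0.0)
    return v

resid_sd = np.std(height - fitted)
print(resid_sd, velocity(np.array([0.5, 3.5])))
```

The full approach in the paper adds subject-specific random intercepts and slopes plus a continuous AR(1) error term on top of such a basis; the derivative step above is what makes the velocity and acceleration curves the authors emphasize directly available.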
A modular approach for item response theory modeling with the R package flirt.
Jeon, Minjeong; Rijmen, Frank
2016-06-01
The new R package flirt is introduced for flexible item response theory (IRT) modeling of psychological, educational, and behavior assessment data. flirt integrates a generalized linear and nonlinear mixed modeling framework with graphical model theory. The graphical model framework allows for efficient maximum likelihood estimation. The key feature of flirt is its modular approach to facilitate convenient and flexible model specifications. Researchers can construct customized IRT models by simply selecting various modeling modules, such as parametric forms, number of dimensions, item and person covariates, person groups, link functions, etc. In this paper, we describe major features of flirt and provide examples to illustrate how flirt works in practice.
Robust small area prediction for counts.
Tzavidis, Nikos; Ranalli, M Giovanna; Salvati, Nicola; Dreassi, Emanuela; Chambers, Ray
2015-06-01
A new semiparametric approach to model-based small area prediction for counts is proposed and used for estimating the average number of visits to physicians for Health Districts in Central Italy. The proposed small area predictor can be viewed as an outlier robust alternative to the more commonly used empirical plug-in predictor that is based on a Poisson generalized linear mixed model with Gaussian random effects. Results from the real data application and from a simulation experiment confirm that the proposed small area predictor has good robustness properties and in some cases can be more efficient than alternative small area approaches. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Convection with a simple chemically reactive passive scalar
NASA Astrophysics Data System (ADS)
Herring, J. R.; Wyngaard, J. C.
Convection between horizontal stress-free perfectly conducting plates is examined in the turbulent regime for air. Results are presented for an additional scalar undergoing simple linear decay. We discuss qualitative aspects of the flow in terms of spectral and three-dimensional contour maps of the velocity and scalar fields. The horizontal mean profiles of scalar gradients and fluxes agree rather well with simple mixing-length concepts. Further, the mean profiles for a range of the destruction-rate parameter are shown to be nearly completely characterized by the boundary fluxes. Finally, we shall use the present numerical data as a basis for exploring a generalization of eddy-diffusion concepts so as to properly incorporate non-local effects.
Fujimoto, Kayo; Williams, Mark L
2015-06-01
Mixing patterns within sexual networks have been shown to have an effect on HIV transmission, both within and across groups. This study examined sexual mixing patterns involving HIV-unknown status and risky sexual behavior conditioned on assortative/dissortative mixing by race/ethnicity. The sample used for this study consisted of drug-using male sex workers and their male sex partners. A log-linear analysis of 257 most at-risk MSM and 3,072 sex partners was conducted. The analysis found two significant patterns. HIV-positive most at-risk Black MSM had a strong tendency to have HIV-unknown Black partners (relative risk, RR = 2.91, p < 0.001) and to engage in risky sexual behavior (RR = 2.22, p < 0.001). White most at-risk MSM with unknown HIV status also had a tendency to engage in risky sexual behavior with Whites (RR = 1.72, p < 0.001). The results suggest that interventions that target the most at-risk MSM and their sex partners should account for specific sexual network mixing patterns by HIV status.
Controlling the surface‐mediated release of DNA using ‘mixed multilayers’
Appadoo, Visham; Carter, Matthew C. D.
2016-01-01
Abstract We report the design of erodible ‘mixed multilayer’ coatings fabricated using plasmid DNA and combinations of both hydrolytically degradable and charge‐shifting cationic polymer building blocks. Films fabricated layer‐by‐layer using combinations of a model poly(β‐amino ester) (polymer 1) and a model charge‐shifting polymer (polymer 2) exhibited DNA release profiles that were substantially different than those assembled using DNA and either polymer 1 or polymer 2 alone. In addition, the order in which layers of these two cationic polymers were deposited during assembly had a profound impact on DNA release profiles when these materials were incubated in physiological buffer. Mixed multilayers ∼225 nm thick fabricated by depositing layers of polymer 1/DNA onto films composed of polymer 2/DNA released DNA into solution over ∼60 days, with multi‐phase release profiles intermediate to and exhibiting some general features of polymer 1/DNA or polymer 2/DNA films (e.g., a period of rapid release, followed by a more extended phase). In sharp contrast, ‘inverted’ mixed multilayers fabricated by depositing layers of polymer 2/DNA onto films composed of polymer 1/DNA exhibited release profiles that were almost completely linear over ∼60‐80 days. These and other results are consistent with substantial interdiffusion and commingling (or mixing) among the individual components of these compound materials. Our results reveal this mixing to lead to new, unanticipated, and useful release profiles and provide guidance for the design of polymer‐based coatings for the local, surface‐mediated delivery of DNA from the surfaces of topologically complex interventional devices, such as intravascular stents, with predictable long‐term release profiles. PMID:27981243
Options for refractive index and viscosity matching to study variable density flows
NASA Astrophysics Data System (ADS)
Clément, Simon A.; Guillemain, Anaïs; McCleney, Amy B.; Bardet, Philippe M.
2018-02-01
Variable density flows are often studied by mixing two miscible aqueous solutions of different densities. To perform optical diagnostics in such environments, the refractive index of the fluids must be matched, which can be achieved by carefully choosing the two solutes and the concentration of the solutions. To separate the effects of buoyancy forces and viscosity variations, it is desirable to match the viscosity of the two solutions in addition to their refractive index. In this manuscript, several pairs of index-matched fluids are compared in terms of viscosity matching, monetary cost, and practical use. Two fluid pairs are studied in detail, each mixing two binary aqueous solutions (water plus a salt or an alcohol) into a ternary solution: an aqueous solution of isopropanol mixed with an aqueous solution of sodium chloride (NaCl), and an aqueous solution of glycerol mixed with an aqueous solution of sodium sulfate (Na2SO4). The first fluid pair allows large density differences to be reached at low cost, but brings a large difference in dynamic viscosity. The second allows dynamic viscosity and refractive index to be matched simultaneously, at reasonable cost. For each of these four solutes, the density, kinematic viscosity, and refractive index are measured versus concentration and temperature, as well as wavelength for the refractive index. To investigate non-linear effects when two index-matched binary solutions are mixed, the resulting ternary solutions are also analyzed. Results show that density and refractive index follow a linear variation with concentration. However, the viscosity of the isopropanol and NaCl pair deviates from the linear law and has to be considered. Empirical correlations and their coefficients are given to create index-matched fluids at a chosen temperature and wavelength.
Finally, the effectiveness of the refractive index matching is illustrated with particle image velocimetry measurements performed for a buoyant jet in a linearly stratified environment. The creation of the index-matched solutions and linear stratification in a large-scale experimental facility are detailed, as well as the practical challenges to obtain precise refractive index matching.
Food insecurity and linear growth of adolescents in Jimma Zone, Southwest Ethiopia.
Belachew, Tefera; Lindstrom, David; Hadley, Craig; Gebremariam, Abebe; Kasahun, Wondwosen; Kolsteren, Patrick
2013-05-02
Although many studies have shown that adolescent food insecurity is a pervasive phenomenon in Southwest Ethiopia, its effect on the linear growth of adolescents has not been documented so far. This study therefore aimed to longitudinally examine the association between food insecurity and linear growth among adolescents. Data for this study were obtained from a longitudinal survey of adolescents conducted in Jimma Zone, which followed an initial sample of 2084 randomly selected adolescents aged 13-17 years. We used a linear mixed effects model for 1431 adolescents who were interviewed in three survey rounds one year apart to compare the effect of food insecurity on linear growth of adolescents. Overall, 15.9% of the girls and 12.2% of the boys (P=0.018) were food insecure both at baseline and on the year 1 survey, while 5.5% of the girls and 4.4% of the boys (P=0.331) were food insecure in all three rounds of the survey. In general, a significantly higher proportion of girls (40%) than boys (36.6%) experienced food insecurity in at least one of the survey rounds (P=0.045). The trend of food insecurity showed a very sharp increase over the follow-up period, from 20.5% at baseline to 48.4% on the year 1 survey, which came down again to 27.1% during the year 2 survey. In the linear mixed effects model, after adjusting for other covariates, the mean height of food insecure girls was shorter by 0.87 cm (P<0.001) compared with food secure girls at baseline. However, during the follow-up period the heights of food insecure girls increased on average by 0.38 cm more per year compared with food secure girls (P<0.066). The mean height of food insecure boys was not significantly different from that of food secure boys, both at baseline and over the follow-up period. Over the follow-up period, adolescents who live in rural and semi-urban areas grew significantly more per year than those who live in urban areas, both for girls (P<0.01) and for boys (P<0.01).
Food insecurity is negatively associated with the linear growth of adolescents, especially girls. The high rate of childhood stunting in Ethiopia, compounded by the lower height of food insecure adolescents compared with their food secure peers, calls for the development of direct nutrition interventions targeting adolescents to promote catch-up growth and break the intergenerational cycle of malnutrition.
NASA Astrophysics Data System (ADS)
Manolakis, Dimitris G.
2004-10-01
The linear mixing model is widely used in hyperspectral imaging applications to model the reflectance spectra of mixed pixels in the SWIR atmospheric window or the radiance spectra of plume gases in the LWIR atmospheric window. In both cases it is important to detect the presence of materials or gases and then estimate their amount, if they are present. The detection and estimation algorithms available for these tasks are related but they are not identical. The objective of this paper is to theoretically investigate how the heavy tails observed in hyperspectral background data affect the quality of abundance estimates and how the F-test, used for endmember selection, is robust to the presence of heavy tails when the model fits the data.
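The abundance-estimation step under the linear mixing model amounts to regressing each pixel spectrum on the endmember signatures; a minimal unconstrained least-squares sketch with synthetic endmembers, not real SWIR/LWIR data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_endmembers = 50, 3

# Synthetic endmember spectra (columns of E) and true abundances (assumed).
E = np.abs(rng.normal(size=(n_bands, n_endmembers)))
true_abund = np.array([0.6, 0.3, 0.1])

# Mixed pixel = linear combination of endmember spectra plus sensor noise.
pixel = E @ true_abund + rng.normal(scale=0.01, size=n_bands)

# Unconstrained least-squares abundance estimate.
abund_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(abund_hat)
```

This Gaussian-optimal estimator is exactly what the paper stress-tests: when the background has heavy tails rather than Gaussian noise, the quality of such least-squares abundances, and of the F-test used to decide which endmembers to keep, is what comes into question.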
Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B
2015-06-01
The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. The method combines the linear mixed-effects statistical model, the wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes, and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform, and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
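A minimal sketch of a Gaussian plug-in classifier with a pooled covariance estimate, of the general kind derived in the paper, on synthetic unbalanced data. The feature dimension, class means, and sample sizes are hypothetical; the paper's wavelet and spatial-filtering steps are assumed to have already produced these low-dimensional trial features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical low-dimensional "projected" trials: after wavelet transform
# and spatial filtering, each trial is a small feature vector. Class 1
# (rare error-potential trials) differs from class 0 in its mean.
d = 4
mu0, mu1 = np.zeros(d), np.full(d, 1.0)
X0 = rng.multivariate_normal(mu0, np.eye(d), 200)  # frequent class
X1 = rng.multivariate_normal(mu1, np.eye(d), 20)   # rare class (unbalanced)

# Plug-in Bayes classifier with a shared (pooled) covariance estimate.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
pooled = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) \
         / (len(X0) + len(X1) - 2)
P = np.linalg.inv(pooled)

def classify(x, prior1=0.5):
    # Linear discriminant: compare Mahalanobis scores to each class mean.
    s0 = -0.5 * (x - m0) @ P @ (x - m0) + np.log(1 - prior1)
    s1 = -0.5 * (x - m1) @ P @ (x - m1) + np.log(prior1)
    return int(s1 > s0)

print(classify(mu1), classify(mu0))
```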
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food; addressing formulation directly would also help in implementation and in ensuring the feasibility of the suggested recommendations. The objective was to extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints and, in addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on the swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was infeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was driven by the cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
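The extended formulation idea can be illustrated with a toy grid search: minimize cost subject to nutrient floors plus a consistency ceiling (here, a cap on dry starch as a stand-in for the swelling constraint). All ingredients, prices, and nutrient values are invented for illustration and are not the paper's data; a real application would use a proper LP solver rather than brute force, and continuous rather than 10 g amounts.

```python
import itertools

# Hypothetical per-10 g values: (cost, kcal, protein g, zinc mg, starch g).
ingredients = {
    "maize":     (0.02, 36, 0.9, 0.02, 7.0),
    "soy":       (0.05, 42, 3.6, 0.05, 1.5),
    "oil":       (0.04, 88, 0.0, 0.00, 0.0),
    "zinc_salt": (0.10,  0, 0.0, 5.00, 0.0),
}
names = list(ingredients)

best = None
for amounts in itertools.product(range(11), repeat=len(names)):  # 0..100 g
    cost = kcal = prot = zinc = starch = 0.0
    for n, a in zip(names, amounts):
        c, k, p, z, s = ingredients[n]
        cost += c * a; kcal += k * a; prot += p * a
        zinc += z * a; starch += s * a
    # Nutrient floors plus a consistency ceiling on starch (swelling limit).
    if kcal >= 200 and prot >= 10 and zinc >= 4 and starch <= 60:
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, [10 * a for a in amounts])))

print(best)
```

Note how the cheapest feasible mix only becomes affordable once the zinc salt is allowed as an ingredient, mirroring the paper's finding that mineral supplements were needed for a low-cost feasible porridge.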
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu Tianzhou; Rassias, John Michael; Xu Wanxin
2010-09-15
We establish some stability results concerning the general mixed additive-cubic functional equation in non-Archimedean fuzzy normed spaces. In addition, we establish some results for approximate general mixed additive-cubic mappings in non-Archimedean fuzzy normed spaces. These results improve and extend some recent results.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
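The spherical parameterization underlying the paper's approach can be sketched directly: each row of the Cholesky factor is a unit vector built from angles, so the resulting matrix automatically has unit diagonal and is positive semidefinite, which is the internal-consistency property the method exploits. The angles below are arbitrary illustrative values; the "cSigma" scheme for choosing them from correlation bounds is specific to the paper.

```python
import numpy as np

def corr_from_angles(theta):
    """Build a valid correlation matrix from spherical angles in (0, pi).

    theta is an (n-1, n-1) lower-triangular array of angles. Row i of the
    Cholesky factor L is a point on the unit sphere, so R = L @ L.T has
    unit diagonal and is positive semidefinite by construction.
    """
    n = theta.shape[0] + 1
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i in range(1, n):
        c = 1.0
        for j in range(i):
            L[i, j] = c * np.cos(theta[i - 1, j])
            c *= np.sin(theta[i - 1, j])
        L[i, i] = c
    return L @ L.T

# Three hydrometeor species with hypothetical pairwise angles.
theta = np.array([[np.pi / 3, 0.0],
                  [np.pi / 4, np.pi / 3]])
R = corr_from_angles(theta)
print(np.round(R, 3))
```

Any angle values in (0, pi) yield a mutually consistent correlation matrix, which is why optimizing or parameterizing in angle space is convenient.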
A mixed model for the relationship between climate and human cranial form.
Katz, David C; Grote, Mark N; Weaver, Timothy D
2016-08-01
We expand upon a multivariate mixed model from quantitative genetics in order to estimate the magnitude of climate effects in a global sample of recent human crania. In humans, genetic distances are correlated with distances based on cranial form, suggesting that population structure influences both genetic and quantitative trait variation. Studies controlling for this structure have demonstrated significant underlying associations of cranial distances with ecological distances derived from climate variables. However, to assess the biological importance of an ecological predictor, estimates of effect size and uncertainty in the original units of measurement are clearly preferable to significance claims based on units of distance. Unfortunately, the magnitudes of ecological effects are difficult to obtain with distance-based methods, while models that produce estimates of effect size generally do not scale to high-dimensional data like cranial shape and form. Using recent innovations that extend quantitative genetics mixed models to highly multivariate observations, we estimate morphological effects associated with a climate predictor for a subset of the Howells craniometric dataset. Several measurements, particularly those associated with cranial vault breadth, show a substantial linear association with climate, and the multivariate model incorporating a climate predictor is preferred in model comparison. Previous studies demonstrated the existence of a relationship between climate and cranial form. The mixed model quantifies this relationship concretely. Evolutionary questions that require population structure and phylogeny to be disentangled from potential drivers of selection may be particularly well addressed by mixed models. Am J Phys Anthropol 160:593-603, 2016. © 2015 Wiley Periodicals, Inc.
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
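The two recommended weighting schemes can be written down compactly. The per-study Z-scores, effect sizes, and sample sizes below are hypothetical; the effective sample size for a case-control study is taken as 4/(1/n_cases + 1/n_controls), a standard convention.

```python
import numpy as np

# Two hypothetical GWAS cohorts with different case-control imbalance.
z = np.array([2.1, 1.4])
n_cases = np.array([500, 4000])
n_controls = np.array([9500, 6000])

# Scheme (i): combine Z-scores weighted by sqrt of effective sample size.
n_eff = 4.0 / (1.0 / n_cases + 1.0 / n_controls)
w = np.sqrt(n_eff)
z_meta = np.sum(w * z) / np.sqrt(np.sum(w ** 2))

# Scheme (ii): inverse-variance weighting of per-study allelic effects
# after conversion onto the log-odds scale (values hypothetical).
beta = np.array([0.15, 0.09])
se = np.array([0.07, 0.06])
w_iv = 1.0 / se ** 2
beta_meta = np.sum(w_iv * beta) / np.sum(w_iv)
se_meta = np.sqrt(1.0 / np.sum(w_iv))
print(round(z_meta, 3), round(beta_meta, 3), round(se_meta, 4))
```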
Hwang, Yuh-Shyan; Kung, Che-Min; Lin, Ho-Cheng; Chen, Jiann-Jong
2009-02-01
A low-sensitivity, low-bounce, high-linearity current-controlled oscillator (CCO) suitable for a single-supply mixed-mode instrumentation system is designed and proposed in this paper. The designed CCO can be operated at low voltage (2 V). The power bounce and ground bounce generated by this CCO are less than 7 mVpp when the power-line parasitic inductance is increased to 100 nH to demonstrate the effect of power bounce and ground bounce. The power supply noise caused by the proposed CCO is less than 0.35% in reference to the 2 V supply voltage. The average conversion ratio KCCO is 123.5 GHz/A. The linearity of the conversion ratio is high, with a tolerance within +/-1.2%. The sensitivity of the proposed CCO is nearly independent of the power supply voltage and is lower than that of a conventional current-starved oscillator. The performance of the proposed CCO has been compared with that of the current-starved oscillator. It is shown that the proposed CCO is suitable for single-supply mixed-mode instrumentation systems.
Hurtado, Margarita; Yang, Manshu; Evensen, Christian; Windham, Amy; Ortiz, Gloria; Tracy, Rachel; Ivy, Edward Donnell
2014-01-01
Introduction Cardiovascular disease is the leading cause of death in the United States, and disparities in cardiovascular health exist among African Americans, American Indians, Hispanics, and Filipinos. The Community Health Worker Health Disparities Initiative of the National Heart, Lung, and Blood Institute (NHLBI) includes culturally tailored curricula taught by community health workers (CHWs) to improve knowledge and heart-healthy behaviors in these racial/ethnic groups. Methods We used data from 1,004 community participants in a 10-session curriculum taught by CHWs at 15 sites to evaluate the NHLBI’s health disparities initiative by using a 1-group pretest–posttest design. The curriculum addressed identification and management of cardiovascular disease risk factors. We used linear mixed-effects and generalized linear mixed-effects models to examine results. Results Average participant age was 48; 75% were female, 50% were Hispanic, 35% were African American, 8% were Filipino, and 7% were American Indian. Twenty-three percent reported a history of diabetes, and 37% reported a family history of heart disease. Heart-health knowledge scores increased from 48% correct at pretest to 74% correct at posttest. The percentage of participants at the action or maintenance stage of behavior change increased from 41% to 85%. Conclusion Using the CHW model to implement community education with culturally tailored curricula may improve heart-health knowledge and behaviors among minorities. Further studies should examine the influence of such programs on clinical risk factors for cardiovascular disease. PMID:24524426
Word skipping: effects of word length, predictability, spelling and reading skill.
Slattery, Timothy J; Yates, Mark
2017-08-31
Readers' eyes often skip over words as they read. Skipping rates are largely determined by word length; short words are skipped more than long words. However, the predictability of a word in context also impacts skipping rates. Rayner, Slattery, Drieghe and Liversedge (2011) reported an effect of predictability on word skipping even for long words (10-13 characters) that extend beyond the word identification span. Recent research suggests that better readers and spellers have an enhanced perceptual span (Veldre & Andrews, 2014). We explored whether reading and spelling skill interact with word length and predictability to impact word skipping rates in a large sample (N=92) of average and poor adult readers. Participants read the items from Rayner et al. (2011) while their eye movements were recorded. Spelling skill (zSpell) was assessed using the dictation and recognition tasks developed by Sally Andrews and colleagues. Reading skill (zRead) was assessed from reading speed (words per minute) and accuracy on three 120-word passages, each with 10 comprehension questions. We fit linear mixed models to the target gaze duration data and generalized linear mixed models to the target word skipping data. Target word gaze durations were significantly predicted by zRead, while the skipping likelihoods were significantly predicted by zSpell. Additionally, for gaze durations, zRead significantly interacted with word predictability, as better readers relied less on context to support word processing. These effects are discussed in relation to the lexical quality hypothesis and eye movement models of reading.
Huang, X; Huang, T; Deng, W; Yan, G; Qiu, H; Huang, Y; Ke, S; Hou, Y; Zhang, Y; Zhang, Z; Fang, S; Zhou, L; Yang, B; Ren, J; Ai, H; Huang, L
2017-02-01
The prevalence of swine respiratory disease causes poor growth performance in, and serious economic losses to, the swine industry. In this study, a categorical trait of enzootic pneumonia-like (EPL) score, representing the infection gradient of a respiratory disease, most likely enzootic pneumonia, was recorded in a herd of 332 Chinese Erhualian pigs. According to their EPL scores and the disease effect on weight gains, these pigs were grouped into controls (EPL score ≤ 1) and cases (EPL score > 1). The weight gain of the case group was significantly reduced at days 180, 210, 240 and 300 as compared with the control group. The heritability of the EPL score was estimated to be 0.24 based on the pedigree information using a linear mixed model. All 332 Erhualian pigs and their nine sire parents were genotyped with Illumina Porcine 60K SNP chips. Two genome-wide association studies were performed, under a generalized linear mixed model and a case-control model respectively. In total, five loci surpassed the suggestive significance level (P = 2.98 × 10^-5) on chromosomes 2, 8, 12 and 14. CXCL6, CXCL8, KIT and CTBP2 were highlighted as candidate genes that might play important roles in determining resistance/susceptibility to swine EP-like respiratory disease. The findings advance understanding of the genetic basis of resistance/susceptibility to respiratory disease in pigs. © 2016 Stichting International Foundation for Animal Genetics.
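For orientation, the simplest case-control association test behind a GWAS signal is an allelic chi-square on a 2x2 table of allele counts; the counts below are invented, and the study itself used (generalized) mixed-model association to control for relatedness.

```python
# Minimal allelic association test of the kind underlying case-control GWAS.
def allelic_chi2(case_a, case_b, ctrl_a, ctrl_b):
    """2x2 chi-square statistic for allele counts A/B in cases vs controls."""
    n = case_a + case_b + ctrl_a + ctrl_b
    observed = [case_a, case_b, ctrl_a, ctrl_b]
    row = [case_a + case_b, ctrl_a + ctrl_b]
    col = [case_a + ctrl_a, case_b + ctrl_b]
    expected = [row[0] * col[0] / n, row[0] * col[1] / n,
                row[1] * col[0] / n, row[1] * col[1] / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

chi2 = allelic_chi2(case_a=180, case_b=120, ctrl_a=130, ctrl_b=170)
print(round(chi2, 2))
```

A mixed-model test replaces this marginal comparison with one that conditions on a genetic relationship matrix, which is what keeps family structure from inflating the statistic.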
Schuna, John M; Lauersdorf, Rebekah L; Behrens, Timothy K; Liguori, Gary; Liebert, Mina L
2013-02-01
After-school programs may provide valuable opportunities for children to accumulate healthful physical activity (PA). This study assessed the PA of third-, fourth-, and fifth-grade children in the Keep It Moving! (KIM) after-school PA program, which was implemented in an ethnically diverse and low socioeconomic status school district in Colorado Springs, Colorado. The PA of KIM participating children (N = 116) at 4 elementary schools was objectively assessed using ActiGraph accelerometers and the System for Observing Fitness Instruction Time (SOFIT). Linear mixed-effects models or generalized linear mixed-effects models were used to compare time spent in sedentary (SED) behaviors, light PA (LPA), moderate PA (MPA), vigorous PA (VPA), and moderate-to-vigorous PA (MVPA) between genders and weight status classifications during KIM sessions. Children accumulated 7.6 minutes of SED time, 26.9 minutes of LPA, and 22.2 minutes of MVPA during KIM sessions. Boys accumulated less SED time (p < .05) and LPA (p = .04) than girls, but accumulated more MPA (p = .04), VPA (p = .03), and MVPA (p = .03). Overweight/obese children accumulated more LPA (p = .04) and less VPA (p < .05) than nonoverweight children. SOFIT data indicated that children spent a considerable proportion of KIM sessions being very active (12.4%), walking (36.0%), or standing (40.3%). The KIM program provides opportunities for disadvantaged children to accumulate substantial amounts of MVPA (>20 minutes per session) in an effort to meet current PA guidelines. © 2013, American School Health Association.
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
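The classical-error attenuation at the heart of the bias analysis can be demonstrated in a few lines: with an error-prone exposure W = X + U, the naive slope shrinks by the reliability ratio var(X)/(var(X)+var(U)), and a method-of-moments correction divides it back out. This sketch assumes independent errors and a known error variance, ignoring the autocorrelated and Berkson components the paper treats.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

beta_true, var_x, var_u = 2.0, 1.0, 0.5
x = rng.normal(0.0, np.sqrt(var_x), n)
w = x + rng.normal(0.0, np.sqrt(var_u), n)  # error-prone exposure
y = beta_true * x + rng.normal(0.0, 1.0, n)

# Naive slope from the error-prone exposure is attenuated toward zero.
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Method-of-moments correction, with var_u assumed known.
lam_hat = (np.var(w, ddof=1) - var_u) / np.var(w, ddof=1)
beta_corrected = beta_naive / lam_hat
print(round(beta_naive, 2), round(beta_corrected, 2))
```

With these variances the naive slope should sit near 2.0 * (1.0/1.5) ≈ 1.33, and the corrected slope near the true 2.0.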
General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.
de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael
2016-11-01
Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each such quantity, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
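For the Poisson/log-link case the latent-to-data-scale mapping is available in closed form, which is one of the special cases the abstract mentions. The sketch below uses hypothetical latent parameters; the data-scale additive variance via an average-derivative factor follows my reading of this framework and should be checked against the QGglmm documentation before use.

```python
import numpy as np

# Poisson GLMM with log link: latent eta ~ N(mu, s2_a + s2_e), with
# additive-genetic variance s2_a (values hypothetical).
mu, s2_a, s2_e = 1.0, 0.3, 0.2
s2 = s2_a + s2_e

# Observed-scale mean is the lognormal mean of exp(eta).
mean_obs = np.exp(mu + s2 / 2)

# Observed-scale variance = Poisson (distribution) variance, equal to the
# mean, plus the variance of the latent-driven expectation (lognormal).
var_lat = (np.exp(s2) - 1.0) * np.exp(2 * mu + s2)
var_obs = mean_obs + var_lat

# Data-scale additive-genetic variance via the average derivative of the
# inverse link (for exp, that average derivative equals mean_obs).
var_a_obs = (mean_obs ** 2) * s2_a
h2_obs = var_a_obs / var_obs
print(round(mean_obs, 3), round(h2_obs, 3))
```

Note how the observed-scale heritability (about 0.32 here) differs from the latent-scale ratio s2_a / (s2_a + s2_e) = 0.6, which is the paper's central point.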
Gradients of fear: How perception influences fear generalization.
Struyf, Dieter; Zaman, Jonas; Hermans, Dirk; Vervliet, Bram
2017-06-01
The current experiment investigated whether overgeneralization of fear could be due to an inability to perceptually discriminate the initial fear-evoking stimulus from similar stimuli, as fear learning-induced perceptual impairments have been reported but their influence on generalization gradients remains to be elucidated. Three hundred and sixty-eight healthy volunteers participated in a differential fear conditioning paradigm with circles of different sizes as conditioned stimuli (CS), of which one was paired with an aversive IAPS picture. During generalization, each subject was presented with one of 10 different-sized circles, including the CSs, and was asked to categorize the stimulus as either a CS or novel after fear responses were recorded. Linear mixed models were used to investigate differences in fear generalization gradients depending on the participant's perception of the test stimulus. We found that the incorrect perception of a novel stimulus as the initial fear-evoking stimulus strongly boosted fear responses. The current findings demonstrate that a significant number of novel stimuli used to assess generalization are incorrectly identified as the initial fear-evoking stimulus, providing a perceptual account for the observed overgeneralization in panic and anxiety disorders. Accordingly, enhancing perceptual processing may be a promising treatment target for excessive fear generalization. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
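A tiny numerical companion to the chapter's topics, verifying on made-up data the textbook identity linking the Pearson correlation to the least-squares slope, b1 = r * (sy / sx):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # illustrative values

r = np.corrcoef(x, y)[0, 1]
b1, b0 = np.polyfit(x, y, 1)              # least-squares slope, intercept

# The slope is the correlation rescaled by the ratio of sample SDs.
slope_from_r = r * (y.std(ddof=1) / x.std(ddof=1))
print(round(r, 4), round(b1, 4), round(slope_from_r, 4))
```

This identity is also why testing H0: slope = 0 and H0: correlation = 0 give the same p-value in simple linear regression.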
Cheng, Xiao-Fei; Shi, Pei-Jian; Hui, Cang; Wang, Fu-Sheng; Liu, Guo-Hua; Li, Bai-Lian
2015-04-01
Moso bamboos (Phyllostachys edulis) are important forestry plants in southern China, with substantial roles to play in regional economic and ecological systems. Mixing broad-leaved forests and moso bamboos is a common management practice in China, and it is fundamental to elucidate the interactions between broad-leaved trees and moso bamboos for ensuring the sustainable provision of ecosystem services. We examine how the proportion of broad-leaved forest in a mixed managed zone, topography, and soil profile affect the effective productivity of moso bamboos (i.e., those with significant economic value), using linear regression and generalized additive models. Bamboo diameter at breast height follows a Weibull distribution. The importance of these variables to bamboo productivity is, respectively, slope (25.9%), the proportion of broad-leaved forest (24.8%), elevation (23.3%), gravel content by volume (16.6%), slope location (8.3%), and soil layer thickness (1.2%). Highest productivity is found on a 25° slope, at a 600-m elevation, and with 30% broad-leaved forest. As such, broad-leaved forest on the upper slope can have a strong influence on the effective productivity of moso bamboo, ranking only after slope and before elevation. These factors can be considered in future management practice.
Casellas, J; Bach, R
2012-06-01
Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of a small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, the statistical performance of 3 alternative parametrizations [i.e., the symmetric Gaussian mixed linear (GML) model, the skew-Gaussian mixed linear (SGML) model, and the piecewise Weibull proportional hazard (PWPH) model] is compared to elucidate the preferred methodology for handling lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performance was compared in terms of the deviance information criterion (DIC) and the Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different numbers of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed a substantial genetic background for lambing interval, with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
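The piecewise-baseline idea can be sketched as a Weibull-form hazard whose scale changes at fixed change points, with the survival curve recovered from the cumulative hazard. The shape, scales, and change points below are hypothetical, and this simplification omits the proportional-hazards covariates and random effects of the full PWPH model.

```python
import numpy as np

def hazard(t, change_points, scales, shape=1.5):
    """Weibull-form hazard h(t) = (k/s) * (t/s)**(k-1), with an
    interval-specific scale s selected by the change points."""
    idx = np.searchsorted(change_points, t, side="right")
    scale = scales[idx]
    return (shape / scale) * (t / scale) ** (shape - 1)

change_points = np.array([150.0, 250.0])  # days, hypothetical
scales = np.array([300.0, 220.0, 260.0])  # one scale per interval

t_grid = np.linspace(1.0, 400.0, 4000)
h = np.array([hazard(t, change_points, scales) for t in t_grid])

# Survival from the cumulative hazard (trapezoidal integration).
H = np.concatenate([[0.0],
                    np.cumsum((h[1:] + h[:-1]) / 2 * np.diff(t_grid))])
S = np.exp(-H)
print(round(S[-1], 3))
```

Adding change points lets the baseline hazard bend where the data demand it, which is what drives the DIC differences across flocks.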
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
NASA Astrophysics Data System (ADS)
Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.
2012-05-01
The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main tasks in this paper: the parameter estimation procedure, a simulation study, and application of the model to real data. For parameter estimation, the concepts of threshold, nested random effects, and the computational algorithm are described. Simulated data are generated under 3 conditions to assess the effect of different parameter values of the random effect distributions. The last task is the application of the model to data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are treated as ordinal. The unit of observation in this research is the sub-district (kecamatan) nested in district, and districts (kabupaten) are nested in province. The simulation results are evaluated on the ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) scales. They show that the province parameters have the highest bias but the most stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as higher correlation between covariates. Furthermore, in the application of the model to the data, only the number of farmer families and the number of health personnel have significant contributions to the level of poverty in the Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).
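The two evaluation criteria used above are simple to state; the sketch below computes ARB and RRMSE for a hypothetical set of simulation replicates of one parameter.

```python
import numpy as np

def arb(estimates, true):
    """Absolute relative bias of a set of simulation estimates."""
    return abs(np.mean(estimates) - true) / abs(true)

def rrmse(estimates, true):
    """Relative root mean square error."""
    return np.sqrt(np.mean((np.asarray(estimates) - true) ** 2)) / abs(true)

true_beta = 0.8
estimates = np.array([0.75, 0.82, 0.79, 0.88, 0.77])  # hypothetical replicates
print(round(arb(estimates, true_beta), 4),
      round(rrmse(estimates, true_beta), 4))
```

ARB captures only systematic bias, while RRMSE also penalizes replicate-to-replicate variability, which is why a parameter can have high bias yet a stable RRMSE.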
Low Serum Bicarbonate Predicts Residual Renal Function Loss in Peritoneal Dialysis Patients.
Chang, Tae Ik; Kang, Ea Wha; Kim, Hyung Woo; Ryu, Geun Woo; Park, Cheol Ho; Park, Jung Tak; Yoo, Tae-Hyun; Shin, Sug Kyun; Kang, Shin-Wook; Choi, Kyu Hun; Han, Dae Suk; Han, Seung Hyeok
2015-08-01
Low residual renal function (RRF) and serum bicarbonate are associated with adverse outcomes in peritoneal dialysis (PD) patients. However, a relationship between the 2 has not yet been determined in these patients. Therefore, this study aimed to investigate whether low serum bicarbonate has a deteriorating effect on RRF in PD patients. This prospective observational study included a total of 405 incident patients who started PD between January 2000 and December 2005. We determined risk factors for complete loss of RRF using competing risk methods and evaluated the effects of time-averaged serum bicarbonate (TA-Bic) on the decline of RRF over the first 3 years of dialysis treatment using generalized linear mixed models. During the first 3 years of dialysis, 95 (23.5%) patients became anuric. The mean time until patients became anuric was 20.8 ± 9.0 months. After adjusting for multiple potentially confounding covariates, an increase in TA-Bic level was associated with a significantly decreased risk of loss of RRF (hazard ratio per 1 mEq/L increase, 0.84; 95% confidence interval, 0.75-0.93; P = 0.002), and in comparison to TA-Bic ≥ 24 mEq/L, TA-Bic < 24 mEq/L conferred a 2.62-fold higher risk of becoming anuric. Furthermore, the rate of RRF decline estimated by generalized linear mixed models was significantly greater in patients with TA-Bic < 24 mEq/L than in those with TA-Bic ≥ 24 mEq/L (-0.16 vs -0.11 mL/min/mo/1.73 m², P < 0.001). In this study, a clear association was found between low serum bicarbonate and loss of RRF in PD patients. Nevertheless, whether correction of metabolic acidosis for this indication provides additional protection for preserving RRF in these patients is unknown. Future interventional studies should more appropriately address this question.
Verma, Sadhna; Sarkar, Saradwata; Young, Jason; Venkataraman, Rajesh; Yang, Xu; Bhavsar, Anil; Patil, Nilesh; Donovan, James; Gaitonde, Krishnanath
2016-05-01
The purpose of this study was to compare high-b-value (b = 2000 s/mm²) acquired diffusion-weighted imaging (aDWI) with computed DWI (cDWI) obtained using four diffusion models, namely mono-exponential (ME), intra-voxel incoherent motion (IVIM), stretched exponential (SE), and diffusional kurtosis (DK), with respect to lesion visibility, conspicuity, contrast, and ability to predict significant prostate cancer (PCa). Ninety-four patients underwent 3 T MRI, including acquisition of b = 2000 s/mm² aDWI and low-b-value DWI. High-b-value (b = 2000 s/mm²) cDWI was obtained using the ME, IVIM, SE, and DK models. All images were scored on quality independently by three radiologists. Lesions were identified on all images and graded for lesion conspicuity. For a subset of lesions for which pathological truth was established, lesion-to-background contrast ratios (LBCRs) were computed and binomial generalized linear mixed model analysis was conducted to compare the clinically significant PCa predictive capabilities of all DWI. For all readers and all models except DK, cDWI demonstrated higher ratings for image quality and lesion conspicuity than aDWI (p < 0.001). The LBCRs of ME, IVIM, and SE were significantly higher than the LBCR of aDWI (p < 0.001). Receiver operating characteristic curves obtained from the binomial generalized linear mixed model analysis demonstrated higher areas under the curve for ME, SE, IVIM, and aDWI than for DK or PSAD alone in predicting significant PCa. High-b-value cDWI using the ME, IVIM, and SE diffusion models provides better image quality, lesion conspicuity, and increased LBCR compared with high-b-value aDWI. Using cDWI can potentially provide sensitivity and specificity for detecting significant PCa comparable to those of high-b-value aDWI without increased scan times and image degradation artifacts.
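The mono-exponential (ME) computed-DWI step is the easiest of the four models to sketch: fit ADC log-linearly from low-b acquisitions, then extrapolate the signal to b = 2000 s/mm². The S0 and ADC values below are hypothetical, and the fit is noiseless for clarity.

```python
import numpy as np

# Mono-exponential model: S(b) = S0 * exp(-b * ADC).
b_low = np.array([0.0, 500.0, 1000.0])  # acquired b-values, s/mm^2
adc_true, s0_true = 0.8e-3, 1000.0      # hypothetical tissue values
signal = s0_true * np.exp(-b_low * adc_true)

# Log-linear fit of the low-b signal gives ADC and S0.
slope, intercept = np.polyfit(b_low, np.log(signal), 1)
adc_hat, s0_hat = -slope, np.exp(intercept)

# "Compute" the high-b image value by extrapolation.
b_high = 2000.0
s_computed = s0_hat * np.exp(-b_high * adc_hat)
print(round(adc_hat * 1e3, 3), round(s_computed, 1))
```

The other models (IVIM, SE, DK) replace the single exponential with richer signal decays but follow the same fit-then-extrapolate pattern.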
A Re-appraisal of Olivine Sorting and Accumulation in Hawaiian Magmas.
NASA Astrophysics Data System (ADS)
Rhodes, J. M.
2002-12-01
Bowen never used the m-words (magma mixing) in his highly influential book "The Origin of the Igneous Rocks". Yet, in the past 20-30 years, magma mixing has been proposed as an important, almost ubiquitous, process at volcanoes in all tectonic environments, ranging from oceanic basalts to large silicic magma bodies, and as a possible trigger of eruptions. Bowen regarded Hawaiian olivine basalts and picrites as the result of olivine accumulation in a lower-MgO magma that was crystallizing and fractionating olivine. This, with variants, has been the party line ever since, the only debate being over the MgO content of the proposed parental magmas. Although magma mixing has been recognized as an important process in differentiated, low-MgO (below 7 percent) Hawaiian magmas, the wide range in MgO (7-30 percent) in Hawaiian olivine tholeiites and picrites is invariably attributed to olivine crystallization, fractionation, and accumulation. In this paper I re-evaluate this hypothesis using well-documented examples from Kilauea, Mauna Kea, and Mauna Loa that exhibit well-defined, coherent linear trends of major oxides and trace elements with MgO. If olivine control is the only factor responsible for these trends, then the regression lines for each trend should intersect olivine compositions at a common forsterite composition, corresponding to the average accumulated olivine in each of the magmas. In some cases (the ongoing Puu Oo eruption) this simple test holds, and olivine fractionation and accumulation can clearly be shown to be the dominant process. In other examples from Mauna Kea and Mauna Loa (the 1852, 1868, and 1950 eruptions, and Mauna Loa in general) the test does not hold, and a more complicated process is required. Additionally, for those magmas that fail the test, CaO/Al2O3 invariably decreases with decreasing MgO content. This should not happen if only olivine fractionation and accumulation are involved.
The explanation for these linear trends that approach, but fail to intersect, appropriate olivine compositions is a combination of magma mixing accompanied by olivine crystallization and accumulation. One of the mixing components is a high-MgO (about 13-15 percent) magma laden with olivine phenocrysts and xenocrysts; the other is a consanguineous low-MgO (about 7 percent) quasi "steady-state" magma with a prior history of clinopyroxene and plagioclase fractionation.
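The olivine-control test described in this abstract can be sketched numerically: fit the linear oxide-versus-MgO trend of a lava suite and check whether its extrapolation recovers a common olivine composition. The compositions below are hypothetical round numbers for illustration, not data from the study.

```python
import numpy as np

def extrapolate_to_olivine(mgo, oxide, mgo_olivine):
    """Fit a linear oxide-vs-MgO trend and extrapolate it to a candidate
    olivine MgO content."""
    slope, intercept = np.polyfit(mgo, oxide, 1)
    return slope * mgo_olivine + intercept

# Hypothetical suite generated by adding or removing a single olivine
# composition (roughly Fo88-like: ~47 wt% MgO, ~0 wt% Al2O3).
olivine = {"MgO": 47.0, "Al2O3": 0.0}
parent = {"MgO": 7.0, "Al2O3": 14.0}

f = np.linspace(0.0, 0.4, 9)  # mass fraction of accumulated olivine
mgo = (1 - f) * parent["MgO"] + f * olivine["MgO"]
al2o3 = (1 - f) * parent["Al2O3"] + f * olivine["Al2O3"]

# If olivine control alone produced the trend, extrapolating Al2O3 to the
# olivine MgO content should recover the olivine's Al2O3 (~0 wt%).
al_at_olivine = extrapolate_to_olivine(mgo, al2o3, olivine["MgO"])
print(round(al_at_olivine, 6))  # ~0.0 for a pure olivine-control trend
```

A suite formed by mixing plus olivine accumulation would fail this check: the extrapolated intercept would miss plausible olivine compositions, as the abstract describes for Mauna Kea and Mauna Loa.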
Wang, S; Martinez-Lage, M; Sakai, Y; Chawla, S; Kim, S G; Alonso-Basanta, M; Lustig, R A; Brem, S; Mohan, S; Wolf, R L; Desai, A; Poptani, H
2016-01-01
Early assessment of treatment response is critical in patients with glioblastomas. A combination of DTI and DSC perfusion imaging parameters was evaluated to distinguish glioblastomas with true progression from mixed response and pseudoprogression. Forty-one patients with glioblastomas exhibiting enhancing lesions within 6 months after completion of chemoradiation therapy were retrospectively studied. All patients underwent surgery after MR imaging and were histologically classified as having true progression (>75% tumor), mixed response (25%-75% tumor), or pseudoprogression (<25% tumor). Mean diffusivity, fractional anisotropy, linear anisotropy coefficient, planar anisotropy coefficient, spheric anisotropy coefficient, and maximum relative cerebral blood volume values were measured from the enhancing tissue. A multivariate logistic regression analysis was used to determine the best model for classification of true progression from mixed response or pseudoprogression. Significantly elevated maximum relative cerebral blood volume, fractional anisotropy, linear anisotropy coefficient, and planar anisotropy coefficient and decreased spheric anisotropy coefficient were observed in true progression compared with pseudoprogression (P < .05). There were also significant differences in maximum relative cerebral blood volume, fractional anisotropy, planar anisotropy coefficient, and spheric anisotropy coefficient measurements between mixed response and true progression groups. The best model to distinguish true progression from non-true progression (pseudoprogression and mixed) consisted of fractional anisotropy, linear anisotropy coefficient, and maximum relative cerebral blood volume, resulting in an area under the curve of 0.905. This model also differentiated true progression from mixed response with an area under the curve of 0.901. 
A combination of fractional anisotropy and maximum relative cerebral blood volume differentiated pseudoprogression from nonpseudoprogression (true progression and mixed) with an area under the curve of 0.807. DTI and DSC perfusion imaging can improve accuracy in assessing treatment response and may aid in individualized treatment of patients with glioblastomas. © 2016 by American Journal of Neuroradiology.
A green vehicle routing problem with customer satisfaction criteria
NASA Astrophysics Data System (ADS)
Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.
2016-12-01
This paper develops an MILP model, named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model, in addition to the traditional objective of the VRP, both pollution and customers' satisfaction are taken into account. The model also provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, some new factors evaluate the greening of each decision based on three criteria. The model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval for incorporating customers' satisfaction into otherwise linear objectives. This mixed integer linear programming formulation enriches managerial insights by providing trade-offs between customers' satisfaction, total costs, and emission levels. Finally, we provide a numerical study showing the applicability of the model.
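A piecewise-linear satisfaction function over a fuzzy time window, of the kind PLFs are used to linearize, can be sketched as follows (the window limits are hypothetical and this trapezoidal shape is an assumed illustration, not the paper's exact formulation):

```python
def satisfaction(t, e_tol, e_pref, l_pref, l_tol):
    """Piecewise-linear (trapezoidal) satisfaction of service at time t.

    e_tol <= e_pref <= l_pref <= l_tol delimit the tolerable and preferred
    time windows; satisfaction is 1 inside the preferred window and falls
    linearly to 0 at the tolerable limits.
    """
    if t < e_tol or t > l_tol:
        return 0.0
    if t < e_pref:
        return (t - e_tol) / (e_pref - e_tol)
    if t > l_pref:
        return (l_tol - t) / (l_tol - l_pref)
    return 1.0

# Example: tolerable window [8, 12] h, preferred window [9, 11] h.
print(satisfaction(10.0, 8, 9, 11, 12))  # 1.0
print(satisfaction(8.5, 8, 9, 11, 12))   # 0.5
```

Because each linear piece can be selected with binary variables and linear constraints, a function of this shape drops directly into an MILP objective.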
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
Superradiance Effects in the Linear and Nonlinear Optical Response of Quantum Dot Molecules
NASA Astrophysics Data System (ADS)
Sitek, A.; Machnikowski, P.
2008-11-01
We calculate the linear optical response from a single quantum dot molecule and the nonlinear, four-wave-mixing response from an inhomogeneously broadened ensemble of such molecules. We show that both optical signals are affected by the coupling-dependent superradiance effect and by optical interference between the two polarizations. As a result, the linear and nonlinear responses are not identical.
Decomposition of a Mixed-Valence [2Fe-2S] Cluster to Linear Tetra-Ferric and Ferrous Clusters
Saouma, Caroline T.; Kaminsky, Werner; Mayer, James M.
2012-01-01
Despite the ease of preparing di-ferric [2Fe-2S] clusters, preparing stable mixed-valence analogues remains a challenge, as these clusters have limited thermal stability. Herein we identify two decomposition products of the mixed-valence thiosalicylate-ligated [2Fe-2S] cluster, [Fe2S2(SArCOO)2]3− ((SArCOO)2− = thiosalicylate). PMID:23976815
The formation of the doubly stable stratification in the Mediterranean Outflow
NASA Astrophysics Data System (ADS)
Bormans, M.; Turner, J. S.
1990-11-01
The Mediterranean Outflow as it exits from the Strait of Gibraltar can be seen as a gravity current flowing down the slope and mixing with Atlantic Water until it reaches its own density level. Typical salinity and temperature profiles through the core region of a Meddy show that the bottom of the core is colder and saltier than the top, leading to a core that is stably stratified with respect to double-diffusive processes. The bottom of the core is also more enriched with Mediterranean Water than the top; this behaviour can be explained by reduced mixing of the source water with the environment close to the rigid bottom. Although the mechanism involved differs from that in the oceanic case, we have successfully produced these doubly stable gradients in laboratory experiments which incorporate the "filling-box" mechanism. Salt and sugar were used as laboratory analogues of temperature and salt, respectively. The laboratory experiments consisted of supplying a dense input fluid at the surface of a linearly salt-stratified environment. We suggest that req, the ratio of the initial volume flux at the source to the volume flux at the equilibrium level, is an important parameter, and that in our experiments it must in general be smaller than 0.1 in order to produce a doubly stable region of salt and sugar. The most relevant experiments had a mixed sugar/salt input, which is the analogue of the Mediterranean Outflow as it mixes with Atlantic Water outside the Strait of Gibraltar.
Heat kernel for the elliptic system of linear elasticity with boundary conditions
NASA Astrophysics Data System (ADS)
Taylor, Justin; Kim, Seick; Brown, Russell
2014-10-01
We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify the Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct the Green's function for the elliptic mixed problem in such a domain.
Bayesian generalized linear mixed modeling of Tuberculosis using informative priors
Woldegerima, Woldegebriel Assefa
2017-01-01
TB is rated as one of the world's deadliest diseases, and South Africa ranks ninth among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. Bayesian inference is becoming popular in data analysis, but most applications are limited to situations with non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, South Africa General Household Survey datasets for the years 2011 to 2013 are used to set up priors for the 2014 model. PMID:28257437
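The idea of building an informative prior from earlier survey years can be illustrated with a conjugate beta-binomial sketch. All counts below are hypothetical, and this simplified prevalence model stands in for the paper's full Bayesian GLMM:

```python
# Informative Beta prior built from pooled earlier-year TB counts
# (hypothetical numbers standing in for the 2011-2013 GHS data).
prior_cases, prior_total = 300, 12000
a0 = 1 + prior_cases                  # Beta(1, 1) base updated by prior data
b0 = 1 + prior_total - prior_cases

# Update with the current-year survey (hypothetical 2014-style counts).
cases_2014, total_2014 = 110, 4000
a_post = a0 + cases_2014
b_post = b0 + total_2014 - cases_2014

posterior_mean = a_post / (a_post + b_post)
print(round(posterior_mean, 4))  # posterior prevalence estimate, ~0.026 here
```

The posterior mean is a precision-weighted compromise between the historical prior and the new data, which is exactly the benefit an informative prior brings when current-year data are limited.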
General well function for pumping from a confined, leaky, or unconfined aquifer
NASA Astrophysics Data System (ADS)
Perina, Tomas; Lee, Tien-Chang
2006-02-01
A general well function for groundwater flow toward an extraction well with non-uniform radial flux along the screen and a finite-thickness skin, partially penetrating an unconfined, leaky, or confined aquifer, is derived via the Laplace and generalized finite Fourier transforms. The mixed boundary condition at the well face is solved as a discretized Fredholm integral equation. The general well function reduces to a uniform radial flux solution as a special case. In the Laplace domain, the relation between the drawdown in the extraction well and the flowrate is linear, so the formulations for specified-flowrate and specified-drawdown pumping are interchangeable. The deviation in drawdown between the uniform and non-uniform radial flux solutions depends on the relative positions of the extraction and observation well screens, aquifer properties, and time of observation. In an unconfined aquifer the maximum deviation occurs during the period of delayed drawdown, when the effect of vertical flow is most apparent. The skin and wellbore storage in an observation well are included as model parameters. A separate solution is developed for a fully penetrating well with the radial flux being a continuous function of depth.
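Because the well function is derived in the Laplace domain, time-domain drawdowns require numerical inversion. A common choice for smooth well-hydraulics transforms (an assumption here; the paper does not prescribe a particular inversion scheme) is the Gaver-Stehfest algorithm:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at
    time t. N must be even; 10-16 is typical for smooth, non-oscillatory
    time-domain functions."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k)
            )
        v *= (-1) ** (k + N // 2)
        total += v * F(k * ln2 / t)
    return ln2 / t * total

# Check against a pair with a known inverse: F(s) = 1/(s+1) <-> exp(-t).
print(round(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0), 4))  # ~0.3679
```

In practice F(s) would be the Laplace-domain well function evaluated per time step; Stehfest needs only real-valued evaluations of F, which keeps the inversion cheap.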
Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San
2017-01-01
The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4-66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. 
Thus, the least vulnerable farms represented a compromise among the measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about the most promising farm adaptations.
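The per-farm decomposition into level, trend, and variability can be approximated with an ordinary least-squares fit per farm. This is a simplified stand-in for the paper's mixed-model estimates (which pool information across farms), and the example series is hypothetical:

```python
import numpy as np

def vulnerability_components(years, values):
    """Approximate one farm's level, trend, and variability for a
    vulnerability variable via per-farm OLS. The paper extracts analogous
    quantities from a linear mixed model; this is a simplified sketch."""
    years = np.asarray(years, dtype=float)
    values = np.asarray(values, dtype=float)
    slope, intercept = np.polyfit(years, values, 1)
    residuals = values - (slope * years + intercept)
    return {"level": float(np.mean(values)),      # raw measurement level
            "trend": float(slope),                # slope over time
            "variability": float(np.std(residuals))}  # residual spread

# Hypothetical farm productivity (kg protein/ha) over 2008-2013.
years = np.arange(2008, 2014)
values = np.array([120.0, 125.0, 123.0, 130.0, 133.0, 135.0])
c = vulnerability_components(years, values)
print(c["trend"] > 0)  # True: an increasing trend favours low vulnerability
```

Low vulnerability in the paper's sense corresponds to a high level, a stable or positive trend, and low residual variability, all of which this decomposition exposes directly.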
National trends in hospital length of stay for acute myocardial infarction in China.
Li, Qian; Lin, Zhenqiu; Masoudi, Frederick A; Li, Jing; Li, Xi; Hernández-Díaz, Sonia; Nuti, Sudhakar V; Li, Lingling; Wang, Qing; Spertus, John A; Hu, Frank B; Krumholz, Harlan M; Jiang, Lixin
2015-01-20
China is experiencing an increasing burden of acute myocardial infarction (AMI) in the face of limited medical resources. Hospital length of stay (LOS) is an important indicator of resource utilization. We used data from the Retrospective AMI Study within the China Patient-centered Evaluative Assessment of Cardiac Events, a nationally representative sample of patients hospitalized for AMI during 2001, 2006, and 2011. Hospital-level variation in risk-standardized LOS (RS-LOS) for AMI, accounting for differences in case mix and year, was examined with two-level generalized linear mixed models. A generalized estimating equation model was used to evaluate hospital characteristics associated with LOS. Absolute differences in RS-LOS and 95% confidence intervals were reported. The weighted median and mean LOS were 13 and 14.6 days, respectively, in 2001 (n = 1,901), 11 and 12.6 days in 2006 (n = 3,553), and 11 and 11.9 days in 2011 (n = 7,252). There was substantial hospital-level variation in RS-LOS across the 160 hospitals, ranging from 9.2 to 18.1 days. Hospitals in the Central regions had an RS-LOS on average 1.6 days shorter (p = 0.02) than those in the Eastern regions. All other hospital characteristics relating to capacity for AMI treatment were not associated with LOS. Despite a marked decline over the past decade, the mean LOS for AMI in China in 2011 remained long compared with international standards. Inter-hospital variation is substantial even after adjusting for case mix. Further improvement of AMI care in Chinese hospitals is critical to further shorten LOS and reduce unnecessary hospital variation.
NASA Astrophysics Data System (ADS)
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units, and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM), and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and single-level models of soil units and sample points, respectively. Additionally, three variance functions, namely the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heteroscedasticity, and three correlation structures, namely the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. The nested two-level model considering both heteroscedasticity (via CPP) and spatiotemporal correlation (via ARMA(1,1)) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R² = 0.9593). Variations between soil units and sample points that may affect the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
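The two model-structure ingredients named above can be sketched directly: a CPP variance function supplies per-observation standard deviations, and an ARMA(1,1) structure supplies within-unit correlations; their product forms a residual covariance matrix. Parameter values below are hypothetical illustration values, not the fitted estimates from the study:

```python
import numpy as np

def cpp_variance(x, delta1, delta2):
    """Constant-plus-power (CPP) variance function: Var ~ (d1 + |x|**d2)**2."""
    return (delta1 + np.abs(x) ** delta2) ** 2

def arma11_corr(n, gamma, phi):
    """Within-unit ARMA(1,1) correlation: corr at lag h >= 1 is
    gamma * phi**(h - 1); lag 0 is 1."""
    H = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.where(H == 0, 1.0, gamma * phi ** np.clip(H - 1, 0, None))

# Residual covariance for one sample point observed over 12 years
# (hypothetical fitted NDVI values and variance/correlation parameters).
x = np.linspace(0.2, 0.9, 12)
sd = np.sqrt(cpp_variance(x, 0.05, 1.2))
R = arma11_corr(12, gamma=0.4, phi=0.7)
Sigma = np.outer(sd, sd) * R
print(Sigma.shape)  # (12, 12)
```

Combining a variance function with a correlation structure in this way is what allows the mixed model to absorb both the heteroscedasticity and the temporal autocorrelation the abstract reports.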
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
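A fixed-effects-only sketch of the broken-line linear (BLL) ascending fit is shown below with scipy; the paper's NLMIXED models additionally carry random effects and heterogeneous variances, and the data here are hypothetical noiseless values, not the study's observations:

```python
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, bp):
    """Broken-line linear ascending model: the response rises with `slope`
    up to the breakpoint `bp`, then stays at `plateau`."""
    return plateau - slope * np.maximum(bp - x, 0.0)

# Hypothetical G:F responses over SID Trp:Lys ratios (%).
x = np.array([14.7, 15.3, 16.0, 16.7, 17.3, 18.0, 18.7])
y = bll(x, plateau=0.68, slope=0.02, bp=16.5)  # noiseless, for illustration

popt, _ = curve_fit(bll, x, y, p0=[0.6, 0.01, 16.0])
print(round(popt[2], 2))  # estimated breakpoint, ~16.5
```

A grid of starting values for `p0`, as the article recommends, guards against the nonlinear optimizer stalling near the kink in the broken-line function.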
USDA-ARS?s Scientific Manuscript database
Transformations to multiple trait mixed model equations (MME) which are intended to improve computational efficiency in best linear unbiased prediction (BLUP) and restricted maximum likelihood (REML) are described. It is shown that traits that are expected or estimated to have zero residual variance...
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
Robust control of systems with real parameter uncertainty and unmodelled dynamics
NASA Technical Reports Server (NTRS)
Chang, Bor-Chin; Fischl, Robert
1991-01-01
During this research period we have made significant progress in the four proposed areas: (1) design of robust controllers via H infinity optimization; (2) design of robust controllers via mixed H2/H infinity optimization; (3) M-delta structure and robust stability analysis for structured uncertainties; and (4) a study on controllability and observability of perturbed plants. It is now well known that the two-Riccati-equation solution to the H infinity control problem can be used to characterize all possible stabilizing optimal or suboptimal H infinity controllers, provided the optimal H infinity norm, or gamma (an upper bound of a suboptimal H infinity norm), is given. In this research, we discovered some useful properties of these H infinity Riccati solutions. Among them, the most prominent is that the spectral radius of the product of these two Riccati solutions is a continuous, nonincreasing, convex function of gamma in the domain of interest. Based on these properties, quadratically convergent algorithms were developed to compute the optimal H infinity norm. We also set up a detailed procedure for applying the H infinity theory to robust control system design. The desire to design controllers with H infinity robustness but H2 performance has recently resulted in the mixed H2/H infinity control problem formulation. The mixed H2/H infinity problem has drawn the attention of many investigators; however, solutions are only available for special cases of this problem. We formulated a relatively realistic control problem with an H2 performance index and an H infinity robustness constraint as a more general mixed H2/H infinity problem. No optimal solution is yet available for this more general mixed H2/H infinity problem.
Although the optimal solution for this mixed H2/H infinity control problem has not yet been found, we proposed a design approach which can be used, through proper choice of the available design parameters, to influence both robustness and performance. For a large class of linear time-invariant systems with real parametric perturbations, the coefficient vector of the characteristic polynomial is a multilinear function of the real parameter vector. Based on this multilinear mapping relationship, together with recent developments for polytopic polynomials and the parameter-domain partition technique, we proposed an iterative algorithm for computing the real structured singular value.
Refractive index of liquid mixtures: theory and experiment.
Reis, João Carlos R; Lampreia, Isabel M S; Santos, Angela F S; Moita, Maria Luísa C J; Douhéret, Gérard
2010-12-03
An innovative approach is presented to interpret the refractive index of binary liquid mixtures. The concept of refractive index "before mixing" is introduced and shown to be given by the volume-fraction mixing rule of the pure-component refractive indices (Arago-Biot formula). The refractive index of thermodynamically ideal liquid mixtures is demonstrated to be given by the volume-fraction mixing rule of the pure-component squared refractive indices (Newton formula). This theoretical formulation entails a positive change of refractive index upon ideal mixing, which is interpreted in terms of dissimilar London dispersion forces centred in the dissimilar molecules making up the mixture. For real liquid mixtures, the refractive index of mixing and the excess refractive index are introduced in a thermodynamic manner. Examples of mixtures are cited for which excess refractive indices and excess molar volumes show all of the four possible sign combinations, a fact that jeopardises the finding of a general equation linking these two excess properties. Refractive indices of 69 mixtures of water with the amphiphile (R,S)-1-propoxypropan-2-ol are reported at five temperatures in the range 283-303 K. The ideal and real refractive properties of this binary system are discussed. Pear-shaped plots of excess refractive indices against excess molar volumes show that extreme positive values of excess refractive index occur at a substantially lower mole fraction of the amphiphile than extreme negative values of excess molar volume. Analysis of these plots provides insights into the mixing schemes that occur in different composition segments. A nearly linear variation is found when Balankina's ratios between excess and ideal values of refractive indices are plotted against ratios between excess and ideal values of molar volumes. 
It is concluded that, when coupled with volumetric properties, the new thermodynamic functions defined for the analysis of refractive indices of liquid mixtures give important complementary information on the mixing process over the whole composition range.
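The two mixing rules discussed above can be stated in a few lines of code: the Arago-Biot formula volume-fraction-averages the refractive indices themselves, while the Newton formula averages their squares. The pure-component indices below are hypothetical illustration values:

```python
def arago_biot(phi1, n1, n2):
    """Refractive index 'before mixing': volume-fraction average of n
    (Arago-Biot formula)."""
    return phi1 * n1 + (1 - phi1) * n2

def newton_ideal(phi1, n1, n2):
    """Ideal-mixture refractive index: volume-fraction average of n**2
    (Newton formula)."""
    return (phi1 * n1**2 + (1 - phi1) * n2**2) ** 0.5

# Hypothetical water-like and amphiphile-like pure-component indices.
n1, n2, phi1 = 1.333, 1.385, 0.5
n_before = arago_biot(phi1, n1, n2)
n_ideal = newton_ideal(phi1, n1, n2)
print(n_ideal >= n_before)  # True: ideal mixing raises n, as the text states
```

The positive change of refractive index upon ideal mixing follows mathematically: a root-mean-square average always exceeds the corresponding arithmetic average whenever the two pure-component indices differ.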
Identification of single-input-single-output quantum linear systems
NASA Astrophysics Data System (ADS)
Levitt, Matthew; Guţă, Mădălin
2017-03-01
The purpose of this paper is to investigate system identification for single-input-single-output general (active or passive) quantum linear systems. For a given input we address the following questions: (1) Which parameters can be identified by measuring the output? (2) How can we construct a system realization from sufficient input-output data? We show that for time-dependent inputs, the systems which cannot be distinguished are related by symplectic transformations acting on the space of system modes. This complements a previous result of Guţă and Yamamoto [IEEE Trans. Autom. Control 61, 921 (2016), 10.1109/TAC.2015.2448491] for passive linear systems. In the regime of stationary quantum noise input, the output is completely determined by the power spectrum. We define the notion of global minimality for a given power spectrum, and characterize globally minimal systems as those with a fully mixed stationary state. We show that in the case of systems with a cascade realization, the power spectrum completely fixes the transfer function, so the system can be identified up to a symplectic transformation. We give a method for constructing a globally minimal subsystem direct from the power spectrum. Restricting to passive systems the analysis simplifies so that identifiability may be completely understood from the eigenvalues of a particular system matrix.
Size segregation in a granular bore
NASA Astrophysics Data System (ADS)
Edwards, A. N.; Vriend, N. M.
2016-10-01
We investigate the effect of particle-size segregation in an upslope propagating granular bore. A bidisperse mixture of particles, initially normally graded, flows down an inclined chute and impacts with a closed end. This impact causes a shock in flow thickness, known as a granular bore, to travel upslope, leaving behind a thick deposit. This deposit imprints the local segregated state, featuring both pure and mixed regions of particles as a function of downstream position. The particle-size distribution through the depth is characterized by a thin purely small-particle layer at the base, a significant linear transition region, and a thick constant mixed-particle layer below the surface, in contrast to previously observed S-shaped steady-state concentration profiles. The experimental observations agree with recent findings that the upward segregation of large particles and the downward segregation of small particles are asymmetric. We incorporate this experimentally observed three-layer size-distribution profile into a depth-averaged segregation model. Numerical solutions of this model are able to match our experimental results and therefore motivate the use of a more general particle-size distribution profile.
Predictors of switch from depression to mania in bipolar disorder.
Niitsu, Tomihisa; Fabbri, Chiara; Serretti, Alessandro
2015-01-01
Manic switch is a relevant issue when treating bipolar depression. Some risk factors have been suggested, but unequivocal findings are lacking. We therefore investigated predictors of switch from depression to mania in the Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD) sample. Manic switch was defined as a depressive episode followed by a (hypo)manic or mixed episode within the following 12 weeks. We assessed possible predictors of switch using generalized linear mixed models (GLMM). 8403 episodes without switch and 512 episodes with switch (1720 subjects) were included in the analysis. Several baseline variables were associated with a higher risk of switch: younger age and a previous history of rapid cycling, severe manic symptoms, suicide attempts, amphetamine use, and certain pharmacological and psychotherapeutic treatments. During the current depressive episode, the identified risk factors were: any possible mood elevation, multiple mania-associated symptoms with at least moderate severity, and comorbid panic attacks. In conclusion, our study suggests that both characteristics of the disease history and clinical features of the current depressive episode may be risk factors for manic switch.
Walking through the statistical black boxes of plant breeding.
Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin
2016-10-01
The main statistical procedures in plant breeding are based on Gaussian processes and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm further develops the approach proposed in [1], with the objective of eliminating the need for a high number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without using a large number of intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is further used in this thesis to solve a trajectory planning problem for higher-elevation Mars landing.
Steering, Entanglement, Nonlocality, and the EPR Paradox
NASA Astrophysics Data System (ADS)
Wiseman, Howard; Jones, Steve; Doherty, Andrew
2007-06-01
The concept of steering was introduced by Schroedinger in 1935 as a generalization of the EPR paradox for arbitrary pure bipartite entangled states and arbitrary measurements by one party. Until now, it has never been rigorously defined, so it has not been known (for example) what mixed states are steerable (that is, can be used to exhibit steering). We provide an operational definition, from which we prove (by considering Werner states and Isotropic states) that steerable states are a strict subset of the entangled states, and a strict superset of the states that can exhibit Bell-nonlocality. For arbitrary bipartite Gaussian states we derive a linear matrix inequality that decides the question of steerability via Gaussian measurements, and we relate this to the original EPR paradox.
An electromagnetism-like metaheuristic for open-shop problems with no buffer
NASA Astrophysics Data System (ADS)
Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi
2012-12-01
This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness. This problem occurs in many production settings, such as the plastic molding, chemical, and food processing industries. The paper formulates the problem mathematically as a mixed integer linear program, by which small instances can be solved to optimality. The paper also develops a novel metaheuristic based on an electromagnetism-like algorithm to solve large-sized problems. Two computational experiments are conducted. The first uses small-sized instances to evaluate the mathematical model and the general performance of the proposed metaheuristic. The second evaluates the metaheuristic's performance on large-sized instances. The results show that both the model and the algorithm deal effectively with the problem.
Can observations look back to the beginning of inflation?
NASA Astrophysics Data System (ADS)
Wetterich, C.
2016-03-01
The cosmic microwave background can measure the inflaton potential only if inflation lasts sufficiently long before the time of horizon crossing of observable fluctuations, such that non-linear effects in the time evolution of Green's functions lead to a loss of memory of initial conditions for the ultraviolet tail of the spectrum. Within a derivative expansion of the quantum effective action for an interacting scalar field we discuss the most general solution for the correlation function, including arbitrary pure and mixed quantum states. In this approximation no loss of memory occurs: cosmic microwave observations see the initial spectrum at the beginning of inflation, processed only mildly by the scale-violating effects at horizon crossing induced by the inflaton potential.
Mixing of ultrasonic Lamb waves in thin plates with quadratic nonlinearity.
Li, Feilong; Zhao, Youxuan; Cao, Peng; Hu, Ning
2018-07-01
This paper investigates the propagation of Lamb waves in thin plates with quadratic nonlinearity by a one-way mixing method using numerical simulations. It is shown that an A0-mode wave can be generated by a pair of S0- and A0-mode waves only when the mixing condition is satisfied, and that the mixed wave signals are capable of locating the damage zone. Additionally, the acoustic nonlinear parameter increases linearly with the quadratic nonlinearity but only monotonically with the size of the mixing zone. Furthermore, because of frequency deviation, the waveform of the mixed wave changes significantly from a regular diamond shape to toneburst trains.
Radio Propagation Prediction Software for Complex Mixed Path Physical Channels
2006-08-14
4.4.6. Applied Linear Regression Analysis in the Frequency Range 1-50 MHz; 4.4.7. Projected Scaling to... In order to construct a comprehensive numerical algorithm capable of
Analysis of Operating Principles with S-system Models
Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.
2011-01-01
Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
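The S-system property exploited by both methods can be illustrated with a small sketch: for rate laws of the form a_i * prod_j X_j^g_ij minus b_i * prod_j X_j^h_ij, the steady-state condition becomes linear in logarithmic coordinates. The kinetic orders and rate constants below are toy values, not parameters from the yeast heat-stress example.

```python
import numpy as np

# S-system: dX_i/dt = a_i * prod_j X_j**g_ij - b_i * prod_j X_j**h_ij.
# At steady state, in y = ln(X):  (G - H) y = ln(b) - ln(a),
# a linear algebraic system (the BST property used in the paper).

G = np.array([[0.0, -0.5],
              [0.8,  0.0]])     # production kinetic orders (assumed toy values)
H = np.array([[0.6,  0.0],
              [0.0,  0.7]])     # degradation kinetic orders
a = np.array([2.0, 1.5])        # production rate constants
b = np.array([1.0, 1.0])        # degradation rate constants

A = G - H
rhs = np.log(b) - np.log(a)

# The pseudo-inverse covers under- and over-determined cases as well,
# mirroring the paper's use of matrix inversion, pseudo-inverse, or regression.
y = np.linalg.pinv(A) @ rhs
X = np.exp(y)

# Verify: production equals degradation for every pool.
prod = a * np.prod(X ** G, axis=1)
deg = b * np.prod(X ** H, axis=1)
print(X, np.allclose(prod, deg))
```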
NASA Technical Reports Server (NTRS)
Wrigley, Christopher James (Inventor); Hancock, Bruce R. (Inventor); Cunningham, Thomas J. (Inventor); Newton, Kenneth W. (Inventor)
2014-01-01
An analog-to-digital converter (ADC) converts pixel voltages from a CMOS image sensor into a digital output. A voltage ramp generator generates a voltage ramp that has a linear first portion and a non-linear second portion. A digital output generator generates a digital output based on the voltage ramp, the pixel voltages, and comparator output from an array of comparators that compare the voltage ramp to the pixel voltages. A return lookup table linearizes the digital output values.
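A minimal software model of such a converter illustrates the idea. The ramp shape (linear then quadratic), the knee voltage, and the 10-bit resolution are all assumptions for illustration, not details from the patent.

```python
import bisect

# Sketch of a single-slope ADC with a two-segment ramp (assumed shape):
# a linear first portion followed by a quadratic (non-linear) second portion,
# and a return lookup table that maps the raw count back to a voltage.

N = 1024                      # number of ramp steps (counts), assumed 10-bit
V_KNEE, V_MAX = 0.5, 1.0      # knee between the two ramp portions (assumed)

def ramp(count):
    t = count / (N - 1)
    if t < 0.5:               # linear first portion
        return V_KNEE * (t / 0.5)
    s = (t - 0.5) / 0.5       # non-linear (quadratic) second portion
    return V_KNEE + (V_MAX - V_KNEE) * s * s

ramp_table = [ramp(c) for c in range(N)]     # monotonically increasing

def convert(pixel_voltage):
    """Comparator: first count at which the ramp reaches the pixel voltage."""
    return bisect.bisect_left(ramp_table, pixel_voltage)

def linearize(count):
    """Return lookup table: raw count -> the voltage the ramp had there."""
    return ramp_table[min(count, N - 1)]

v_in = 0.8
code = convert(v_in)
v_out = linearize(code)
print(code, round(v_out, 3))
```

The non-uniform ramp spends more counts where fine resolution is wanted; the lookup table undoes that warping so downstream processing sees a linear scale.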
Iterative methods for mixed finite element equations
NASA Technical Reports Server (NTRS)
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant-metric iterations, which do not involve updating the preconditioner; and variable-metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
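The flavor of a constant-metric iteration on an indefinite saddle-point system can be sketched with a tiny Uzawa-type loop: each sweep performs one "displacement" solve with the fixed block A as the preconditioner and then relaxes the constraint variable. The matrices below are toy values, not a finite element discretization.

```python
import numpy as np

# Uzawa-type iteration for the indefinite saddle-point system
#   [A  B^T][u]   [f]
#   [B  0  ][p] = [g]
# that mixed formulations produce; A plays the role of the fixed
# (constant-metric) displacement preconditioner.

A = np.array([[2.0, 0.0],
              [0.0, 2.0]])        # SPD displacement block (toy)
B = np.array([[1.0, 1.0]])        # constraint block (toy)
f = np.array([1.0, 2.0])
g = np.array([1.0])

u = np.zeros(2)
p = np.zeros(1)
omega = 0.5                        # relaxation parameter (assumed)
for _ in range(100):
    u = np.linalg.solve(A, f - B.T @ p)   # displacement solve, fixed metric
    p = p + omega * (B @ u - g)           # constraint update

# Reference: monolithic solve of the full indefinite system.
K = np.block([[A, B.T], [B, np.zeros((1, 1))]])
ref = np.linalg.solve(K, np.concatenate([f, g]))
print(u, p, np.allclose(np.concatenate([u, p]), ref))
```

A variable-metric variant would update the preconditioner (here, replace A) between sweeps instead of keeping it fixed.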
Heavy neutrino mixing and single production at linear collider
NASA Astrophysics Data System (ADS)
Gluza, J.; Maalampi, J.; Raidal, M.; Zrałek, M.
1997-02-01
We study the single production of heavy neutrinos via the processes e- e+ -> νN and e- γ -> W- N at future linear colliders. As the basis of our considerations we take a wide class of models, both with vanishing and non-vanishing left-handed Majorana neutrino mass matrix mL. We perform a model-independent analysis of the existing experimental data and find connections between the characteristics of heavy neutrinos (masses, mixings, CP eigenvalues) and the mL parameters. We show that with the present experimental constraints, heavy neutrino masses almost up to the collision energy can be tested in future experiments.
Yang, James J; Williams, L Keoki; Buu, Anne
2017-08-24
A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
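The second-step statistic can be illustrated in isolation. This sketch computes Fisher's combination statistic from assumed per-phenotype p-values and evaluates it against the independence null; the paper's contribution is to replace that null with one estimated from the linear-mixed-model residual correlations, which is not reproduced here.

```python
import math

# Fisher's combination statistic for k per-phenotype p-values:
#   X = -2 * sum(log p_i),
# chi-square with 2k degrees of freedom when phenotypes are independent.

def fisher_statistic(pvals):
    return -2.0 * sum(math.log(p) for p in pvals)

def chi2_sf_even_df(x, df):
    """Survival function of a chi-square with even df (closed form)."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

pvals = [0.01, 0.04, 0.20, 0.60]   # per-phenotype p-values (assumed)
X = fisher_statistic(pvals)
p_comb = chi2_sf_even_df(X, 2 * len(pvals))
print(round(X, 3), round(p_comb, 4))
```

With correlated phenotypes the 2k-df reference is anti-conservative, which is exactly why the paper calibrates the null from residual correlations.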
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
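The building block of any quantile-regression model is the check (pinball) loss, whose minimizer over a constant is the empirical tau-quantile; this is the mechanism that lets the full mixed-effects model target different quantiles of viral load. The data below are made-up numbers, and the sketch omits the covariates, random effects, and censoring handled by the paper's Bayesian joint model.

```python
import numpy as np

# Check (pinball) loss at quantile tau: tau * r for r >= 0, (tau - 1) * r
# for r < 0. Minimizing its sum over a constant c recovers the empirical
# tau-quantile of the data.

def pinball(residual, tau):
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

y = np.array([2.0, 3.0, 5.0, 9.0, 11.0, 20.0])   # toy responses
tau = 0.75
grid = np.linspace(0.0, 25.0, 2501)
losses = [pinball(y - c, tau).sum() for c in grid]
best = grid[int(np.argmin(losses))]
print(best)   # the empirical 0.75-quantile of y
```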
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
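The spherical parameterization can be sketched directly: each row of the Cholesky factor L is a unit vector written in angles, so C = L L^T is automatically a valid correlation matrix with unit diagonal. The angles below are assumed values standing in for the paper's cosine row-wise formula.

```python
import numpy as np

# Spherical (Pinheiro-Bates) parameterization of a correlation matrix:
# build each row of the Cholesky factor L from angles so that every row
# has unit norm; then C = L @ L.T is positive semi-definite with ones on
# the diagonal by construction.

def cholesky_from_angles(theta):
    """theta[i] holds the i+1 angles (radians) for row i+1 of L."""
    d = len(theta) + 1
    L = np.zeros((d, d))
    L[0, 0] = 1.0
    for i, row in enumerate(theta, start=1):
        sin_prod = 1.0
        for j, ang in enumerate(row):
            L[i, j] = np.cos(ang) * sin_prod
            sin_prod *= np.sin(ang)
        L[i, i] = sin_prod
    return L

# Assumed angles for a 3x3 correlation matrix among three hydrometeor species.
theta = [[0.4], [0.9, 1.2]]
L = cholesky_from_angles(theta)
C = L @ L.T

print(np.round(C, 3))
print(np.all(np.linalg.eigvalsh(C) > 0), np.allclose(np.diag(C), 1.0))
```

Because validity is guaranteed for any angle choice, the parameterization sidesteps the consistency problem that the loose correlation bounds cannot solve on their own.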
Wankel, Scott D.; Kendall, Carol; Paytan, Adina
2009-01-01
Nitrate (NO3-) concentrations and dual isotopic composition (δ15N and δ18O) were measured during various seasons and tidal conditions in Elkhorn Slough to evaluate mixing of sources of NO3- within this California estuary. We found the isotopic composition of NO3- was influenced most heavily by mixing of two primary sources with unique isotopic signatures, a marine (Monterey Bay) and terrestrial agricultural runoff source (Old Salinas River). However, our attempt to use a simple two end-member mixing model to calculate the relative contribution of these two NO3- sources to the Slough was complicated by periods of nonconservative behavior and/or the presence of additional sources, particularly during the dry season when NO3- concentrations were low. Although multiple linear regression generally yielded good fits to the observed data, deviations from conservative mixing were still evident. After consideration of potential alternative sources, we concluded that deviations from two end-member mixing were most likely derived from interactions with marsh sediments in regions of the Slough where high rates of NO3- uptake and nitrification result in NO3- with low δ15N and high δ18O values. A simple steady state dual isotope model is used to illustrate the impact of cycling processes in an estuarine setting which may play a primary role in controlling NO3- isotopic composition when and where cycling rates and water residence times are high. This work expands our understanding of nitrogen and oxygen isotopes as biogeochemical tools for investigating NO3- sources and cycling in estuaries, emphasizing the role that cycling processes may play in altering isotopic composition.
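The two end-member mixing calculation rests on concentration and isotope mass balance. The end-member concentrations and δ15N values below are assumptions for illustration, not values from the study; the sketch shows the forward mix and the inversion for the marine water fraction under conservative mixing.

```python
# Two end-member conservative mixing for nitrate: a marine source and an
# agricultural runoff source. End-member values are assumed, not measured.

C_marine, d15N_marine = 10.0, 6.0     # NO3- concentration (uM), d15N (permil)
C_ag, d15N_ag = 400.0, 12.0

def forward_mix(f_marine):
    """Concentration and d15N of a mix with marine water fraction f_marine."""
    C = f_marine * C_marine + (1 - f_marine) * C_ag
    d = (f_marine * C_marine * d15N_marine
         + (1 - f_marine) * C_ag * d15N_ag) / C
    return C, d

def fraction_from_delta(d_obs):
    """Invert the isotope mass balance for the marine water fraction."""
    num = C_ag * (d15N_ag - d_obs)
    den = C_marine * (d_obs - d15N_marine) + C_ag * (d15N_ag - d_obs)
    return num / den

C_mix, d_mix = forward_mix(0.7)
f_est = fraction_from_delta(d_mix)
print(round(C_mix, 1), round(d_mix, 3), round(f_est, 3))
```

Samples falling off the curve traced by `forward_mix` are the nonconservative deviations the abstract attributes to uptake and nitrification in marsh sediments.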
Ramirez, Adriana G; Tracci, Margaret C; Stukenborg, George J; Turrentine, Florence E; Kozower, Benjamin D; Jones, R Scott
2016-01-01
Background The Hospital Value-Based Purchasing Program measures value of care provided by participating Medicare hospitals while creating financial incentives for quality improvement and fostering increased transparency. Limited information is available comparing hospital performance across healthcare business models. Study Design 2015 Hospital Value-Based Purchasing Program results were used to examine hospital performance by business model. General linear modeling assessed differences in mean total performance score, hospital case mix index, and differences after adjustment for differences in hospital case mix index. Results Of 3089 hospitals with Total Performance Scores (TPS), categories of representative healthcare business models included 104 Physician-owned Surgical Hospitals (POSH), 111 University HealthSystem Consortium (UHC), 14 US News & World Report Honor Roll (USNWR) Hospitals, 33 Kaiser Permanente, and 124 Pioneer Accountable Care Organization affiliated hospitals. Estimated mean TPS for POSH (64.4, 95% CI 61.83, 66.38) and Kaiser (60.79, 95% CI 56.56, 65.03) were significantly higher compared to all remaining hospitals, while UHC members (36.8, 95% CI 34.51, 39.17) performed below the mean (p < 0.0001). Significant differences in mean hospital case mix index included POSH (mean 2.32, p < 0.0001), USNWR honorees (mean 2.24, p = 0.0140), and UHC members (mean = 1.99, p < 0.0001), while Kaiser Permanente hospitals had a lower case mix value (mean = 1.54, p < 0.0001). Re-estimation of TPS did not change the original results after adjustment for differences in hospital case mix index. Conclusions The Hospital Value-Based Purchasing Program revealed superior hospital performance associated with business model. Closer inspection of high-value hospitals may guide value improvement and policy-making decisions for all Medicare Value-Based Purchasing Program Hospitals. PMID:27502368
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6% to 23.8%) and 14.6% (range: -7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8% to 40.3%) and 13.1% (range: -1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
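The two simpler predictors can be sketched on synthetic data: a power-law fit between day-d and initial volumes (the general linear model's core relationship, fit by ordinary least squares in log space) and a constant-shrinkage linear reference. All numbers below are synthetic assumptions, not patient data, and the functional model's mode decomposition is not reproduced.

```python
import numpy as np

# (1) linear reference: volume changes by a constant average amount;
# (2) power fit: V_d ~ a * V0**b, which is linear in log space.

rng = np.random.default_rng(1)
V0 = rng.uniform(5.0, 40.0, size=12)            # initial volumes, cm^3 (toy)
true_a, true_b = 0.8, 0.9
Vd = true_a * V0**true_b * rng.lognormal(0.0, 0.05, size=12)  # day-d volumes

# Power fit: log Vd = log a + b log V0 is ordinary least squares.
b_hat, log_a_hat = np.polyfit(np.log(V0), np.log(Vd), 1)
a_hat = np.exp(log_a_hat)

def predict_power(v0):
    return a_hat * v0**b_hat

# Linear reference: average absolute shrinkage across tumors.
mean_shrink = np.mean(Vd - V0)

def predict_linear(v0):
    return v0 + mean_shrink

v0_new = 20.0
print(round(predict_power(v0_new), 2), round(predict_linear(v0_new), 2))
```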
Transit-time and age distributions for nonlinear time-dependent compartmental systems.
Metzler, Holger; Müller, Markus; Sierra, Carlos A
2018-02-06
Many processes in nature are modeled using compartmental systems (reservoir/pool/box systems). Usually, they are expressed as a set of first-order differential equations describing the transfer of matter across a network of compartments. The concepts of age of matter in compartments and the time required for particles to transit the system are important diagnostics of these models with applications to a wide range of scientific questions. Until now, explicit formulas for transit-time and age distributions of nonlinear time-dependent compartmental systems were not available. We compute densities for these types of systems under the assumption of well-mixed compartments. Assuming that a solution of the nonlinear system is available at least numerically, we show how to construct a linear time-dependent system with the same solution trajectory. We demonstrate how to exploit this solution to compute transit-time and age distributions as functions of given start values and initial age distributions. Furthermore, we derive equations for the time evolution of quantiles and moments of the age distributions. Our results generalize available density formulas for the linear time-independent case and mean-age formulas for the linear time-dependent case. As an example, we apply our formulas to a nonlinear and a linear version of a simple global carbon cycle model driven by a time-dependent input signal which represents fossil fuel additions. We derive time-dependent age distributions for all compartments and calculate the time it takes to remove fossil carbon in a business-as-usual scenario.
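The linear time-independent special case that the paper generalizes has closed-form mean-age formulas, which a short sketch can demonstrate. The rate constants and input flux below are toy values in the spirit of a two-pool carbon model, not the paper's global carbon cycle model.

```python
import numpy as np

# Linear autonomous compartmental system: dx/dt = B x + u.
# At steady state x* = -B^{-1} u; the mean transit time is stock/inflow,
# and the mean age vector is (-B)^{-1} x* divided elementwise by x*.

B = np.array([[-0.5, 0.0],          # pool 1 turns over at 0.5 /yr
              [ 0.5, -0.1]])        # and feeds pool 2 (0.1 /yr turnover)
u = np.array([1.0, 0.0])            # input flux into pool 1 (e.g. PgC/yr)

x_star = -np.linalg.solve(B, u)                  # steady-state stocks
transit_time = x_star.sum() / u.sum()            # mean transit time
mean_age = np.linalg.solve(-B, x_star) / x_star  # mean age per pool

print(x_star, round(transit_time, 3), np.round(mean_age, 3))
```

For this serial chain the mean ages recover the intuitive values 1/k1 for pool 1 and 1/k1 + 1/k2 for pool 2; the paper's contribution is to extend such diagnostics to nonlinear, time-dependent systems where no closed form exists.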
Resolving Mixed Algal Species in Hyperspectral Images
Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.
2014-01-01
We investigated a lab-based hyperspectral imaging system's response from pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations to characterize the system's performance. The spectral responses to volumetric changes in single algal cultures and in combinations with known mixing ratios were tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on abundances that produced the lowest root mean square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were computed as 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were found to be 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, the Beer-Lambert law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for optical property measurements. PMID:24451451
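Constrained linear unmixing can be sketched with made-up endmember spectra: the mixed spectrum is modeled as E @ f with abundances f that are non-negative and sum to one. The sum-to-one constraint is imposed softly here by appending a heavily weighted row of ones, a standard trick; the 5-band spectra are illustrative values, not the study's measurements.

```python
import numpy as np
from scipy.optimize import nnls

# Fully constrained linear spectral unmixing of a two-endmember mixture.
E = np.array([[0.10, 0.60],
              [0.20, 0.55],
              [0.45, 0.40],
              [0.60, 0.20],
              [0.30, 0.15]])       # columns: two algal endmember spectra (toy)

f_true = np.array([0.3, 0.7])
y = E @ f_true                      # noiseless mixed spectrum

w = 100.0                           # weight enforcing the sum-to-one row
E_aug = np.vstack([E, w * np.ones((1, 2))])
y_aug = np.concatenate([y, [w]])

f_hat, resid = nnls(E_aug, y_aug)   # non-negativity via NNLS
print(np.round(f_hat, 3), round(float(resid), 6))
```

With noisy spectra the residual norm plays the role of the RMS error the abstract minimizes over candidate abundances.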
NASA Astrophysics Data System (ADS)
Shvarts, D.; Oron, D.; Kartoon, D.; Rikanati, A.; Sadot, O.; Srebro, Y.; Yedvab, Y.; Ofer, D.; Levin, A.; Sarid, E.; Ben-Dor, G.; Erez, L.; Erez, G.; Yosef-Hai, A.; Alon, U.; Arazi, L.
2016-10-01
The late-time nonlinear evolution of the Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) instabilities for random initial perturbations is investigated using a statistical mechanics model based on single-mode and bubble-competition physics at all Atwood numbers (A) and full numerical simulations in two and three dimensions. It is shown that the RT mixing zone bubble and spike fronts evolve as h ~ α·A·g·t² with different values of α for the bubble and spike fronts. The RM mixing zone fronts evolve as h ~ t^θ with different values of θ for bubbles and spikes. Similar analysis yields a linear growth with time of the Kelvin-Helmholtz mixing zone. The dependence of the RT and RM scaling parameters on A and the dimensionality will be discussed. The 3D predictions are found to be in good agreement with recent Linear Electric Motor (LEM) experiments.
Linear mixing model applied to AVHRR LAC data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region was used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
Designing overall stoichiometric conversions and intervening metabolic reactions
Chowdhury, Anupam; Maranas, Costas D.
2015-11-04
Existing computational tools for de novo metabolic pathway assembly, either based on mixed integer linear programming techniques or graph-search applications, generally only find linear pathways connecting the source to the target metabolite. The overall stoichiometry of conversion along with alternate co-reactant (or co-product) combinations is not part of the pathway design. Therefore, global carbon and energy efficiency is in essence fixed with no opportunities to identify more efficient routes for recycling carbon flux closer to the thermodynamic limit. Here, we introduce a two-stage computational procedure that both identifies the optimum overall stoichiometry (i.e., optStoic) and selects for (non-)native reactions (i.e., minRxn/minFlux) that maximize carbon, energy or price efficiency while satisfying thermodynamic feasibility requirements. Implementation for recent pathway design studies identified non-intuitive designs with improved efficiencies. Specifically, multiple alternatives for non-oxidative glycolysis are generated and non-intuitive ways of co-utilizing carbon dioxide with methanol are revealed for the production of C2+ metabolites with higher carbon efficiency.
Stationary Waves of the Ice Age Climate.
NASA Astrophysics Data System (ADS)
Cook, Kerry H.; Held, Isaac M.
1988-08-01
A linearized, steady state, primitive equation model is used to simulate the climatological zonal asymmetries (stationary eddies) in the wind and temperature fields of the 18 000 YBP climate during winter. We compare these results with the eddies simulated in the ice age experiments of Broccoli and Manabe, who used CLIMAP boundary conditions and reduced atmospheric CO2 in an atmospheric general circulation model (GCM) coupled with a static mixed layer ocean model. The agreement between the models is good, indicating that the linear model can be used to evaluate the relative influences of orography, diabatic heating, and transient eddy heat and momentum transports in generating stationary waves. We find that orographic forcing dominates in the ice age climate. The mechanical influence of the continental ice sheets on the atmosphere is responsible for most of the changes between the present day and ice age stationary eddies. This concept of the ice age climate is complicated by the sensitivity of the stationary eddies to the large increase in the magnitude of the zonal mean meridional temperature gradient simulated in the ice age GCM.
The Link Between UV Extinction and Infrared Cirrus
NASA Technical Reports Server (NTRS)
Hackwell, John A.; Hecht, James; Canterna, Ronald
1997-01-01
Low resolution spectra from the International Ultraviolet Explorer satellite were used to derive ultraviolet extinction curves for stars in four clusters away from the galactic plane. The extinction in three of the clusters is very similar to the general interstellar curve defined by Seaton. Stars in the fourth region, near the Rho Ophiuchi dark cloud, have extinction curves that are characterized by a small "linear term" component. The star BD +36 deg 781 is unique amongst the 20 stars observed in that it shows evidence for extinction by diamond grains near 1700 angstroms. We used data from the final release of the IRAS Sky Survey Atlas (ISSA) to determine the 60 micron to 100 micron intensity ratio for the infrared cirrus. The ISSA data, which have been corrected for zodiacal light, gave intensity ratios that are more robust and self-consistent than for other data sets that we used. When the infrared and ultraviolet data are combined, we see a general trend for low values of the ultraviolet "linear term" (al) to correlate with high values of the 60 micron/100 micron ratio. This implies that, in regions where the average dust temperature is hotter (high 60 micron/100 micron ratio), there is a relative absence of the small silicate grains that are responsible for the ultraviolet linear term. However, the new data do not bear out our earlier contention that the 60 micron and 100 micron emissions are poorly correlated spatially in regions where the 60 micron/100 micron ratio is low. Only NGC 1647 shows this result. It may be that the different dust types are particularly poorly mixed in this area.
NASA Astrophysics Data System (ADS)
Yamamoto, Masaru; Takahashi, Masaaki
2018-03-01
We derive simple dynamical relationships between wind speed magnitude and meridional temperature contrast. These relationships explain the scatter plot distributions of time series of three variables (maximum zonal wind speed UMAX, meridional wind speed VMAX, and equator-pole temperature contrast dTMAX) obtained from a Venus general circulation model with equatorial Kelvin-wave forcing. Along with VMAX and dTMAX, UMAX likely increases with the phase velocity and amplitude of a forced wave. In the scatter diagram of UMAX versus dTMAX, points fall along a linear equation obtained from a thermal-wind relationship in the cloud layer. In the scatter diagram of VMAX versus UMAX, the apparent slope is somewhat steeper in the high-UMAX regime than in the low-UMAX regime. The scatter plot distributions are qualitatively consistent with a quadratic equation obtained from a diagnostic equation of the stream function above the cloud top. The plotted points in the scatter diagrams form a linear cluster for weak wave forcing, whereas they form a small cluster for strong wave forcing. An interannual oscillation of the general circulation forming the linear cluster in the scatter diagram is apparent in the experiment with weak 5.5-day wave forcing. Although a pair of equatorial Kelvin and high-latitude Rossby waves with the same period (a Kelvin-Rossby wave) produces equatorward heat and momentum fluxes in the region below 60 km, the equatorial wave does not contribute to the long-period oscillation. The interannual fluctuation of the high-latitude jet core leading to the time variation of UMAX is produced by the growth and decay of a polar mixed Rossby-gravity wave with a 14-day period.
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA, in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with increasing sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) modeling continuous explanatory variables (covariates) in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. Intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects.
Simulations of one prototypical scenario indicate that LME modeling strikes a balance between control of false positives and sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789
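The intraclass correlation mentioned above can be illustrated with a small numerical sketch (purely synthetic data; a method-of-moments estimator is used here for transparency, not the authors' LME implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a random-intercept design: n subjects, k repeated measures each.
n_subj, k = 200, 8
sigma_b, sigma_e = 2.0, 1.0           # between-subject and residual SDs
b = rng.normal(0.0, sigma_b, n_subj)  # subject-specific random intercepts
y = b[:, None] + rng.normal(0.0, sigma_e, (n_subj, k))

# One-way ANOVA (method-of-moments) variance components.
grand = y.mean()
msb = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n_subj - 1)
msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_subj * (k - 1))
var_between = (msb - msw) / k

icc = var_between / (var_between + msw)
print(round(icc, 2))  # theoretical ICC here is 4 / (4 + 1) = 0.8
```

In an LME fit, the same quantity falls out of the estimated variance components for the random intercept and the residual.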
Blood biomarkers in male and female participants after an Ironman-distance triathlon
Danielsson, Tom; Carlsson, Jörg; Schreyer, Hendrik; Ahnesjö, Jonas; Ten Siethoff, Lasse; Ragnarsson, Thony; Tugetam, Åsa
2017-01-01
Background While overall physical activity is clearly associated with better short-term and long-term health, prolonged strenuous physical activity may result in a rise in acute levels of blood biomarkers used in clinical practice for diagnosis of various conditions or diseases. In this study, we explored the acute effects of a full Ironman-distance triathlon on biomarkers related to heart, liver, kidney and skeletal muscle damage immediately post-race and after one week's rest. We also examined whether sex, age, finishing time and body composition influenced the post-race values of the biomarkers. Methods A sample of 30 subjects (50% women) was recruited to the study. The subjects were evaluated for body composition, and blood samples were taken on three occasions: before the race (T1), immediately after (T2) and one week after the race (T3). Linear regression models were fitted to analyse the independent contributions of sex and finishing time, controlled for weight, body fat percentage and age, to the biomarkers at the termination of the race (T2). Linear mixed models were fitted to examine whether the biomarkers differed between the sexes over time (T1-T3). Results Being male was a significant predictor of higher post-race (T2) levels of myoglobin, CK, and creatinine, and body weight was negatively associated with myoglobin. In general, the models were unable to explain the variation of the dependent variables. In the linear mixed models, an interaction between time (T1-T3) and sex was seen for myoglobin and creatinine, in which women had a less pronounced response to the race. Conclusion Overall, women appear to tolerate the effects of prolonged strenuous physical activity better than men, as illustrated by their lower values of the biomarkers both post-race and during recovery. PMID:28609447
Holmes, George M; Pink, George H; Friedman, Sarah A
2013-01-01
To compare the financial performance of rural hospitals with special Medicare payment provisions to that of hospitals paid under prospective payment, and to estimate the financial consequences of elimination of the Critical Access Hospital (CAH) program. Financial data for 2004-2010 were collected from the Healthcare Cost Report Information System (HCRIS) for rural hospitals. HCRIS data were used to calculate measures of the profitability, liquidity, capital structure, and financial strength of rural hospitals. Linear mixed models accounted for the method of Medicare reimbursement, time trends, and hospital and market characteristics. Simulations were used to estimate the profitability of CAHs if they reverted to prospective payment. CAHs generally had lower unadjusted financial performance than other types of rural hospitals, but after adjustment for hospital characteristics, CAHs had generally higher financial performance. Special payment provisions by Medicare to rural hospitals are important determinants of financial performance. In particular, the financial condition of CAHs would be worse if they were paid under prospective payment. © 2012 National Rural Health Association.
WAKES: Wavelet Adaptive Kinetic Evolution Solvers
NASA Astrophysics Data System (ADS)
Mardirian, Marine; Afeyan, Bedros; Larson, David
2016-10-01
We are developing a general capability to adaptively solve phase space evolution equations, mixing particle and continuum techniques. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step size control in explicit vs implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a grant from the AFOSR.
Rogers, Paul; Fisk, John E; Lowrie, Emma
2017-11-01
The present study examines the extent to which stronger belief in extrasensory perception (ESP), psychokinesis (PK) or life-after-death (LAD) is associated with a proneness to making conjunction errors (CEs). One hundred and sixty members of the UK public read eight hypothetical scenarios and, for each, estimated the likelihood that two constituent events alone, plus their conjunction, would occur. The impacts of paranormal belief, the constituents' conditional relatedness type, estimates of the subjectively less likely and more likely constituents, and the relevant interaction terms were tested via three generalized linear mixed models. General qualification levels were controlled for. As expected, stronger PK beliefs and the depiction of positively conditionally related (versus conditionally unrelated) constituent pairs predicted higher CE generation. ESP and LAD beliefs had no impact, with, surprisingly, higher estimates of the less likely constituent predicting fewer - not more - CEs. Theoretical implications, methodological issues and ideas for future research are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
Quantum demolition filtering and optimal control of unstable systems.
Belavkin, V P
2012-11-28
A brief account of quantum information dynamics and dynamical programming methods for optimal control of quantum unstable systems is given for both open-loop and feedback control schemes, corresponding respectively to deterministic and stochastic semi-Markov dynamics of stable or unstable systems. For the quantum feedback control scheme, we exploit the separation theorem of filtering and control aspects, as in the usual case of quantum stable systems with non-demolition observation. This allows us to start with the Belavkin quantum filtering equation generalized to demolition observations and derive the generalized Hamilton-Jacobi-Bellman equation using standard arguments of classical control theory. This is equivalent to a Hamilton-Jacobi equation with an extra linear dissipative term if the control is restricted to Hamiltonian terms in the filtering equation. An unstable controlled qubit is considered as an example throughout the development of the formalism. Finally, we discuss optimum observation strategies to obtain a pure quantum qubit state from a mixed one.
Acculturative Stress and Diminishing Family Cohesion Among Recent Latino Immigrants
De La Rosa, Mario; Ibañez, Gladys E.
2012-01-01
This study investigates a theorized link between Latino immigrants’ experience of acculturative stress during their two initial years in the United States (US) and declines in family cohesion from pre- to post-immigration contexts. This retrospective cohort study included 405 adult participants. Baseline assessment occurred during participants’ first 12 months in the US. Follow-up assessment occurred during participants’ second year in the US. General linear mixed models were used to estimate change in family cohesion and sociocultural correlates of this change. Inverse associations were determined between acculturative stress during initial years in the US and declines in family cohesion from pre-immigration to post-immigration contexts. Participants with undocumented immigration status, those with lower education levels, and those without family in the US generally indicated lower family cohesion. Participants who experienced more acculturative stress and those without family in the US evidenced a greater decline in family cohesion. Results are promising in terms of implications for health services for recent Latino immigrants. PMID:22790880
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions for the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. Using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. The usefulness of the test is illustrated using two empirical data sets, of longitudinal multivariate type and multivariate multilevel type, respectively. PMID:27537692
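The likelihood ratio test described above reduces to a chi-squared comparison of nested fits; a minimal sketch follows (log-likelihoods and degrees of freedom are purely illustrative, not taken from the paper):

```python
from scipy import stats

# Hypothetical log-likelihoods: a joint fit with correlated random effects
# versus a restricted fit with the cross-trait correlations fixed at zero.
ll_full, ll_restricted = -1041.3, -1047.9
df = 2  # number of cross-trait correlation parameters being tested

lr = 2.0 * (ll_full - ll_restricted)  # likelihood ratio statistic
p_value = stats.chi2.sf(lr, df)       # asymptotic chi-squared p-value

print(f"LR = {lr:.1f}, p = {p_value:.4f}")
```

A small p-value would suggest that modeling the two dependent variables jointly is worthwhile.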
Linear aerospike engine study. [for reusable launch vehicles
NASA Technical Reports Server (NTRS)
Diem, H. G.; Kirby, F. M.
1977-01-01
Parametric data on split-combustor linear engine propulsion systems are presented for use in mixed-mode single-stage-to-orbit (SSTO) vehicle studies. Preliminary design data for two selected engine systems are included. The split combustor was investigated for mixed-mode operations with oxygen/hydrogen propellants used in the inner combustor in Mode 2, and in conjunction with either oxygen/RP-1, oxygen/RJ-5, O2/CH4, or O2/H2 propellants in the outer combustor for Mode 1. Both gas generator and staged combustion power cycles were analyzed for providing power to the turbopumps of the inner and outer combustors. Numerous cooling circuits and cooling fluids (propellants) were analyzed and hydrogen was selected as the preferred coolant for both combustors and the linear aerospike nozzle. The maximum operating chamber pressure was determined to be limited by the availability of hydrogen coolant pressure drop in the coolant circuit.
Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region
NASA Astrophysics Data System (ADS)
Mazurek, Grzegorz; Iwański, Marek
2017-10-01
Stiffness modulus is a fundamental parameter used in modelling the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves using rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and Bahia and co-workers' model, in comparison to the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted as AC16W) intended for the binder course layer and for traffic category KR3 (5×10^5
Linear models for sound from supersonic reacting mixing layers
NASA Astrophysics Data System (ADS)
Chary, P. Shivakanth; Samanta, Arnab
2016-12-01
We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show significantly alters the growth of instability waves by saturating them earlier, as in nonlinear calculations; this is achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with less spreading of the mixing layer, compared with the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to have a pronounced effect on the slow mode radiation by reducing its modal growth.
LADES: a software for constructing and analyzing longitudinal designs in biomedical research.
Vázquez-Alcocer, Alan; Garzón-Cortes, Daniel Ladislao; Sánchez-Casas, Rosa María
2014-01-01
One of the most important steps in biomedical longitudinal studies is choosing a good experimental design that can provide high accuracy in the analysis of results with a minimum sample size. Several methods for constructing efficient longitudinal designs have been developed based on power analysis and the statistical model used for analyzing the final results. However, development of this technology is not available to practitioners through user-friendly software. In this paper we introduce LADES (Longitudinal Analysis and Design of Experiments Software) as an alternative and easy-to-use tool for conducting longitudinal analysis and constructing efficient longitudinal designs. LADES incorporates methods for creating cost-efficient longitudinal designs, unequal longitudinal designs, and simple longitudinal designs. In addition, LADES includes different methods for analyzing longitudinal data such as linear mixed models, generalized estimating equations, among others. A study of European eels is reanalyzed in order to show LADES capabilities. Three treatments contained in three aquariums with five eels each were analyzed. Data were collected from 0 up to the 12th week post treatment for all the eels (complete design). The response under evaluation is sperm volume. A linear mixed model was fitted to the results using LADES. The complete design had a power of 88.7% using 15 eels. With LADES we propose the use of an unequal design with only 14 eels and 89.5% efficiency. LADES was developed as a powerful and simple tool to promote the use of statistical methods for analyzing and creating longitudinal experiments in biomedical research.
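A Monte Carlo power calculation of the sort a design tool like this automates can be sketched as follows (the design parameters below are hypothetical and unrelated to the eel data; a summary-statistic t-test stands in for the full mixed-model analysis):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two-group longitudinal design: n animals per group, k repeated measures,
# a random intercept per animal, and a hypothetical treatment effect.
n, k, n_sim = 15, 12, 500
effect, sigma_b, sigma_e = 1.0, 1.0, 1.0

hits = 0
for _ in range(n_sim):
    # Animal-level means absorb the random intercept, so a two-sample
    # t-test on them is a valid summary-statistic analysis.
    ctrl = rng.normal(0, sigma_b, n) + rng.normal(0, sigma_e, (n, k)).mean(1)
    trt = effect + rng.normal(0, sigma_b, n) + rng.normal(0, sigma_e, (n, k)).mean(1)
    se = np.sqrt(ctrl.var(ddof=1) / n + trt.var(ddof=1) / n)
    t = (trt.mean() - ctrl.mean()) / se
    hits += abs(t) > 2.05  # approx. two-sided 5% critical value, df = 28

print(f"estimated power: {hits / n_sim:.2f}")
```

Repeating the loop over candidate sample sizes or measurement schedules is how an efficient (e.g. unequal) design can be searched for.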
Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna
2017-06-01
The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions (Panthera leo) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers of changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.
Zhang, J; Feng, J-Y; Ni, Y-L; Wen, Y-J; Niu, Y; Tamba, C L; Yue, C; Song, Q; Zhang, Y-M
2017-06-01
Multilocus genome-wide association studies (GWAS) have become the state-of-the-art procedure to identify quantitative trait nucleotides (QTNs) associated with complex traits. However, implementation of multilocus models in GWAS is still difficult. In this study, we integrated least angle regression with empirical Bayes to perform multilocus GWAS under polygenic background control. We used a model transformation algorithm that whitens the covariance matrix of the polygenic matrix K and the environmental noise. Markers on one chromosome were included simultaneously in a multilocus model, and least angle regression was used to select the most potentially associated single-nucleotide polymorphisms (SNPs), whereas the markers on the other chromosomes were used to calculate the kinship matrix as polygenic background control. The selected SNPs in the multilocus model were then further tested for their association with the trait by empirical Bayes and a likelihood ratio test. We refer to this method as pLARmEB (polygenic-background-control-based least angle regression plus empirical Bayes). Results from simulation studies showed that pLARmEB was more powerful in QTN detection and more accurate in QTN effect estimation, had a lower false positive rate and required less computing time than the Bayesian hierarchical generalized linear model, efficient mixed model association (EMMA) and least angle regression plus empirical Bayes. pLARmEB, the multilocus random-SNP-effect mixed linear model and the fast multilocus random-SNP-effect EMMA methods had almost equal power of QTN detection in simulation experiments. However, only pLARmEB identified 48 previously reported genes for 7 flowering-time-related traits in Arabidopsis thaliana.
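The kinship matrix used for polygenic background control can be sketched in a few lines (genotype data are simulated; the scaling follows the common VanRaden form, which may differ in detail from the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated genotypes: n individuals x m SNPs coded 0/1/2, drawn with
# known allele frequencies (sizes and frequencies are illustrative).
n, m = 50, 500
freq = rng.uniform(0.1, 0.5, size=m)
geno = rng.binomial(2, freq, size=(n, m)).astype(float)

# Centre and scale each SNP, then form a VanRaden-style kinship matrix:
# the polygenic-background term K that controls relatedness in GWAS.
Z = (geno - 2.0 * freq) / np.sqrt(2.0 * freq * (1.0 - freq))
K = Z @ Z.T / m

print(K.shape)  # symmetric n x n matrix with diagonal near 1
```

In the leave-one-chromosome-out scheme described above, K would be rebuilt from all markers except those on the chromosome being scanned.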
Physiological effects of diet mixing on consumer fitness: a meta-analysis.
Lefcheck, Jonathan S; Whalen, Matthew A; Davenport, Theresa M; Stone, Joshua P; Duffy, J Emmett
2013-03-01
The degree of dietary generalism among consumers has important consequences for population, community, and ecosystem processes, yet the effects on consumer fitness of mixing food types have not been examined comprehensively. We conducted a meta-analysis of 161 peer-reviewed studies reporting 493 experimental manipulations of prey diversity to test whether diet mixing enhances consumer fitness based on the intrinsic nutritional quality of foods and consumer physiology. Averaged across studies, mixed diets conferred significantly higher fitness than the average of single-species diets, but not the best single prey species. More than half of individual experiments, however, showed maximal growth and reproduction on mixed diets, consistent with the predicted benefits of a balanced diet. Mixed diets including chemically defended prey were no better than the average prey type, opposing the prediction that a diverse diet dilutes toxins. Finally, mixed-model analysis showed that the effect of diet mixing was stronger for herbivores than for higher trophic levels. The generally weak evidence for the nutritional benefits of diet mixing in these primarily laboratory experiments suggests that diet generalism is not strongly favored by the inherent physiological benefits of mixing food types, but is more likely driven by ecological and environmental influences on consumer foraging.
Optimal Facility Location Tool for Logistics Battle Command (LBC)
2015-08-01
...should city planners have located emergency service facilities so that all households (the demand) had equal access to coverage? ... The tool is written in a programming language called Visual Basic for Applications (VBA). CPLEX is a commercial solver for linear, integer, and mixed integer linear programming problems.
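The kind of mixed integer linear program such a facility-location tool hands to a solver can be sketched without CPLEX, here using SciPy's `milp` on a toy instance (all costs are hypothetical):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy uncapacitated facility-location instance: 2 candidate facilities,
# 3 customers. Variables: y_j (open facility j) then x_ji (assign customer
# i to facility j), all binary.
open_cost = [3.0, 5.0]
assign_cost = [[1.0, 2.0, 4.0],   # facility 0 -> customers 0..2
               [4.0, 1.0, 1.0]]   # facility 1 -> customers 0..2
c = np.array(open_cost + assign_cost[0] + assign_cost[1])

# Each customer must be assigned to exactly one facility.
A_eq = np.zeros((3, 8))
for i in range(3):
    A_eq[i, 2 + i] = 1.0       # x_0i
    A_eq[i, 5 + i] = 1.0       # x_1i

# A customer may only use an open facility: x_ji <= y_j.
A_ub = np.zeros((6, 8))
for j in range(2):
    for i in range(3):
        row = 3 * j + i
        A_ub[row, 2 + 3 * j + i] = 1.0
        A_ub[row, j] = -1.0

res = milp(
    c,
    constraints=[LinearConstraint(A_eq, 1.0, 1.0),
                 LinearConstraint(A_ub, -np.inf, 0.0)],
    integrality=np.ones(8),
    bounds=Bounds(0.0, 1.0),
)
print(res.fun)  # minimum total cost (here: open facility 0 only)
```

The same structure scales to the planners' question by adding coverage or equity constraints on the assignment variables.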
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies
ERIC Educational Resources Information Center
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-01-01
Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because the data usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…
An Aptitude-Strategy Interaction in Linear Syllogistic Reading. Technical Report No. 15.
ERIC Educational Resources Information Center
Sternberg, Robert J.; Weil, Evelyn M.
An aptitude-strategy interaction in linear syllogistic reasoning was tested on 144 undergraduate and graduate students of both sexes. It was hypothesized that the efficiency of each of four alternative strategies--control, visual, algorithmic, and mixed--would depend upon the subjects' pattern of verbal and spatial abilities. Two tests of verbal…
NASA Astrophysics Data System (ADS)
Barra, Adriano; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia
2014-02-01
How does immigrant integration in a country change with immigration density? Guided by a statistical mechanics perspective, we propose a novel approach to this problem. The analysis focuses on classical integration quantifiers such as the percentage of jobs (temporary and permanent) given to immigrants, mixed marriages, and newborns with parents of mixed origin. We find that the average values of different quantifiers may exhibit either linear or non-linear growth with immigrant density, and we suggest that social action, a concept identified by Max Weber, causes the observed non-linearity. Using the statistical mechanics notion of interaction to quantitatively emulate social action, a unified mathematical model for integration is proposed and shown to explain both growth behaviors observed. The linear theory, by ignoring the possibility of interaction effects, would instead underestimate the quantifiers by up to 30% when immigrant densities are low, and overestimate them by as much when densities are high. The capacity to quantitatively isolate different types of integration mechanisms makes our framework a suitable tool in the quest for more efficient integration policies.
Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.
ERIC Educational Resources Information Center
Brant, Rollin
Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…
Lu, Jun; Li, Li-Ming; He, Ping-Ping; Cao, Wei-Hua; Zhan, Si-Yan; Hu, Yong-Hua
2004-06-01
To introduce the application of the mixed linear model in the analysis of the secular trend of blood pressure under antihypertensive treatment. A community-based postmarketing surveillance of benazepril was conducted in 1831 essential hypertensive patients (ages 35 to 88 years) in Shanghai. Blood pressure data recorded every 3 months were analyzed with a mixed linear model to describe the secular trend of blood pressure and its age- and gender-specific changes. The changing trends of systolic blood pressure (SBP) and diastolic blood pressure (DBP) were found to fit curvilinear models. A piecewise model was fitted for pulse pressure (PP), i.e., a curvilinear model in the first 9 months and a linear model after 9 months of taking medication. Both the decline in blood pressure and its velocity gradually slowed down. There was significant variation in the curve parameters of intercept, slope, and acceleration. Blood pressure in patients with higher initial levels declined persistently over the 3-year treatment, whereas blood pressure in patients with relatively low initial levels remained low once it had dropped to some degree. Elderly patients showed high SBP but low DBP, and thus higher PP. The velocity and size of blood pressure reductions increased with the initial level of blood pressure. The mixed linear model is flexible and robust when applied to the analysis of longitudinal data even with missing values, and can make maximum use of the available information.
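A curvilinear trend and its velocity of the kind described can be illustrated with a small sketch (synthetic readings; the actual model also includes random effects per patient, which are omitted here):

```python
import numpy as np

# Hypothetical quarterly mean SBP readings (mmHg) over 36 months of
# treatment: a rapid early decline that levels off, mimicking the
# curvilinear secular trend described above.
months = np.arange(0, 37, 3, dtype=float)
sbp = 160.0 - 25.0 * (1.0 - np.exp(-months / 9.0))

# Quadratic (curvilinear) fixed-effect trend; polyfit returns the
# coefficients from the highest power down.
b2, b1, b0 = np.polyfit(months, sbp, deg=2)
velocity = b1 + 2.0 * b2 * months  # fitted instantaneous rate of change

# The fitted rate of decline shrinks toward zero as treatment continues.
print(round(velocity[0], 2), round(velocity[-1], 2))
```

In a full mixed model, the intercept, slope, and acceleration here would carry patient-level random effects, giving the variation in curve parameters the abstract reports.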
Analysis of lithology: Vegetation mixes in multispectral images
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M.; Adams, J. D.
1982-01-01
Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
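The simple linear ("checkerboard") mix named in this abstract can be sketched as an abundance-weighted sum of pure end-member spectra; the end members, band values, and abundances below are invented for illustration:

```python
import numpy as np

# Two pure end-member spectra over four hypothetical bands (made-up values).
endmembers = np.array([
    [0.10, 0.30, 0.55, 0.60],   # rock/soil end member
    [0.05, 0.08, 0.45, 0.50],   # vegetation end member
])
abundances = np.array([0.7, 0.3])        # areal fractions summing to 1

# Linear (checkerboard) mixing: the observed spectrum is the weighted sum.
mixed = abundances @ endmembers

# Removing the vegetation component recovers a rock/soil-only signal,
# as described for lithologic discrimination in the abstract.
rock_only = (mixed - abundances[1] * endmembers[1]) / abundances[0]
```

Granular mixing and semi-transparent coatings are non-linear and would not be captured by this weighted sum.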
Accurate initial conditions in mixed dark matter-baryon simulations
NASA Astrophysics Data System (ADS)
Valkenburg, Wessel; Villaescusa-Navarro, Francisco
2017-06-01
We quantify the error in the results of mixed baryon-dark-matter hydrodynamic simulations, stemming from outdated approximations for the generation of initial conditions. The error at redshift 0 in contemporary large simulations is of the order of a few to 10 per cent in the power spectra of baryons and dark matter, and their combined total-matter power spectrum. After describing how to properly assign initial displacements and peculiar velocities to multiple species, we review several approximations: (1) using the total-matter power spectrum to compute displacements and peculiar velocities of both fluids, (2) scaling the linear redshift-zero power spectrum back to the initial power spectrum using the Newtonian growth factor, ignoring homogeneous radiation, (3) using a mix of general-relativistic gauges so as to approximate Newtonian gravity, namely longitudinal-gauge velocities with synchronous-gauge densities, and (4) ignoring the phase difference in the Fourier modes for the offset baryon grid, relative to the dark-matter grid. Three of these approximations do not take into account that dark matter and baryons experience a scale-dependent growth after photon decoupling, which results in directions of velocity that are not the same as their directions of displacement. We compare the outcome of hydrodynamic simulations with these four approximations to our reference simulation, all set up with the same random seed and simulated using gadget-III.
Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry
2013-08-01
Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
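The Gauss-Hermite approach compared in this abstract can be sketched in plain numpy for the simplest case, a random-intercept logistic model: the marginal likelihood integrates the Bernoulli likelihood over the normal random effect using quadrature nodes and weights. This is a hedged illustration only (one observation per cluster, made-up data; real packages use adaptive quadrature and handle many correlated random effects):

```python
import numpy as np

def marginal_loglik(y, x, beta, sigma, n_nodes=15):
    """Gauss-Hermite marginal log-likelihood for a random-intercept
    logistic model with u ~ N(0, sigma^2), one observation per cluster."""
    # Probabilists' Hermite nodes/weights: weight function exp(-x^2/2).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    u = sigma * nodes                       # random-intercept values at the nodes
    ll = 0.0
    for yi, xi in zip(y, x):
        eta = beta[0] + beta[1] * xi + u    # linear predictor at each node
        p = 1.0 / (1.0 + np.exp(-eta))
        cond = p**yi * (1 - p)**(1 - yi)    # conditional Bernoulli likelihood
        # E[cond(u)] approximated by the weighted node sum (weights sum to sqrt(2*pi)).
        ll += np.log(np.sum(weights * cond) / np.sqrt(2 * np.pi))
    return ll
```

With sigma = 0 this collapses to the ordinary logistic log-likelihood, which is a useful sanity check; the penalized quasi-likelihood and Laplace methods discussed in the article replace the quadrature sum with analytic approximations.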
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions among decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibility, which deteriorates the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed optimization algorithm found improved optimal conditions for the complex mixed refrigerant natural gas liquefaction process compared with existing methodologies. By applying the proposed algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to 44.3% energy savings compared with the base case; furthermore, the coefficient of performance (COP) is enhanced by up to 34.7% over the base case. The proposed optimization algorithm provides a deeper understanding of liquefaction-process optimization from both technical and numerical perspectives, and the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.
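The coordinate-descent family the HMCD algorithm belongs to can be sketched generically: improve one decision variable at a time on a shrinking step grid. The details of the paper's hybrid modification are not reproduced here; the objective below is a toy convex stand-in, not a liquefaction model:

```python
import numpy as np

def coordinate_descent(f, x0, step=0.5, iters=100, shrink=0.5):
    """Minimize f by trying +/-step moves along one coordinate at a time,
    halving the step whenever no coordinate move improves the objective."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        improved = False
        for i in range(x.size):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    improved = True
        if not improved:
            step *= shrink          # refine the search grid and retry
    return x

# Toy objective standing in for specific compression power (illustrative only).
obj = lambda v: (v[0] - 1.0)**2 + 2.0 * (v[1] + 0.5)**2
x_opt = coordinate_descent(obj, [0.0, 0.0])
```

In the paper's setting, each objective evaluation would be a call into the Aspen Hysys® flowsheet rather than a closed-form function.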
Longitudinal associations between children’s dental caries and risk factors
Chankanka, Oitip; Cavanaugh, Joseph E.; Levy, Steven M.; Marshall, Teresa A.; Warren, John J.; Broffitt, Barbara; Kolker, Justine L.
2015-01-01
Dental caries is a common disease in children of all ages. It is desirable to know whether children with primary, mixed, and permanent dentitions share risk factors for cavitated and non-cavitated caries. Objective: To assess the longitudinal associations between caries outcomes and modifiable risk factors. Methods: One hundred and fifty-six children in the Iowa Fluoride Study met inclusion criteria of three dental examinations and caries-related risk factor assessments preceding each examination. Surface-specific counts of new non-cavitated and cavitated caries at the primary (Exam 1: age 5), mixed (Exam 2: age 9) and permanent (Exam 3: age 13) dentition examinations were the outcome variables. Explanatory variables were caries-related factors, including averaged beverage exposure frequencies, toothbrushing frequencies, and composite water fluoride levels collected at ages 3–5, 6–8, and 11–13 years, as well as dentition category, socioeconomic status (SES), and gender. Generalized linear mixed models (GLMMs) were used to explore the relationships between new non-cavitated or cavitated caries and caries-related variables. Results: Greater frequency of 100% juice exposure was significantly associated with fewer non-cavitated and cavitated caries surfaces. Greater toothbrushing frequency and high SES were significantly associated with fewer new non-cavitated caries. Children had significantly more new cavitated caries surfaces at the mixed dentition examination than at the primary and permanent dentition examinations. Conclusions: Common caries-related factors for more new non-cavitated caries across the three exams included less frequent 100% juice exposure, lower toothbrushing frequency, and lower socioeconomic status. Less frequent 100% juice exposure might be associated with higher exposure to several other cariogenic beverages. PMID:22320287
Hanley, Gillian E; Morgan, Steve; Reid, Robert J
2010-05-01
Given that prescription drugs have become a major financial component of health care, there is an increased need to explain variations in the use of and expenditure on medicines. Case-mix systems built from existing administrative datasets may prove very useful for such prediction. We estimated the concurrent and prospective predictive validity of the adjusted clinical groups (ACG) system in pharmaceutical research and compared the ACG system with the Charlson index of comorbidity. We ran generalized linear models to examine the predictive validity of the ACG system and the Charlson index, and report the correlation between predicted and observed expenditures. We report mean predictive ratios across medical-condition and cost-defined groups. When predicting use of medicines, we used C-statistics to summarize the area under the receiver operating characteristic curve. Subjects were the 3,908,533 British Columbia residents who were registered for the universal health care plan for 275+ days in the calendar years 2004 and 2005. Outcomes were total pharmaceutical expenditures, use of any medicines, and use of medicines from 4+ different therapeutic categories. The ACG case-mix system predicted drug expenditures better than the Charlson index. The mean predictive ratios for the ACG system models were all within 4% of the actual costs when examining medical condition group, and the C-statistics for the 2 dichotomous outcomes were between 0.82 and 0.89. ACG case-mix adjusters are a valuable predictor of pharmaceutical use and expenditures, with much higher predictive power than age, sex, and the Charlson index of comorbidity.
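The two validation metrics reported in this abstract can be sketched directly: a predictive ratio (total predicted over total observed spending in a group) and a rank-based C-statistic (area under the ROC curve) for a dichotomous use outcome. Data below are invented:

```python
import numpy as np

def predictive_ratio(predicted, observed):
    """Group-level predictive ratio: 1.0 means predictions match observed costs."""
    return np.sum(predicted) / np.sum(observed)

def c_statistic(scores, outcomes):
    """Rank-based AUC: probability that a randomly chosen user (outcome 1)
    receives a higher risk score than a randomly chosen non-user (outcome 0)."""
    pos = scores[outcomes == 1]
    neg = scores[outcomes == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

A predictive ratio within 4% of 1.0 and C-statistics of 0.82 to 0.89, as reported for the ACG models, would indicate well-calibrated group costs and good discrimination, respectively.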
Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures
NASA Astrophysics Data System (ADS)
Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.
2016-05-01
Linear unmixing is a method of decomposing a mixed signature to determine the component materials present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view mixes linearly across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not possible for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures, and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created from three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90%, with the abundances of the other materials equally divided among the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is then expanded to the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
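A minimal non-iterative baseline of the kind compared in this paper can be sketched as unconstrained least-squares unmixing, with the spectral angle mapper scoring similarity between signatures. The signatures and abundances below are synthetic, not drawn from SigDB:

```python
import numpy as np

def unmix(mixed, endmembers):
    """Least-squares abundance estimate; endmembers has shape (n_bands, n_materials).
    A real pipeline would add non-negativity and sum-to-one constraints."""
    abundances, *_ = np.linalg.lstsq(endmembers, mixed, rcond=None)
    return abundances

def spectral_angle(s1, s2):
    """Spectral angle mapper in radians: 0 means identical shape,
    invariant to overall illumination/scale differences."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

E = np.array([[0.2, 0.7, 0.1],
              [0.4, 0.6, 0.9],
              [0.8, 0.1, 0.3],
              [0.5, 0.3, 0.6]])           # 4 bands, 3 pure materials (made up)
true_abund = np.array([0.5, 0.3, 0.2])    # e.g. one material swept as in the paper
mixed = E @ true_abund                    # simulated single-pixel mixed signature
est = unmix(mixed, E)
```

The iterative methods the paper finds superior refine such estimates, for example by repeatedly pruning low-abundance candidates from a large library before re-solving.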
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Mixed Integer Linear Programming model for Crude Palm Oil Supply Chain Planning
NASA Astrophysics Data System (ADS)
Sembiring, Pasukat; Mawengkang, Herman; Sadyadharma, Hendaru; Bu'ulolo, F.; Fajriana
2018-01-01
The production process of crude palm oil (CPO) can be defined as the milling of raw material, called fresh fruit bunch (FFB), into the end product, palm oil. The process usually runs through a series of steps producing and consuming intermediate products. The CPO milling industry considered in this paper does not have its own oil palm plantation; the FFB is therefore supplied by several public oil palm plantations. Due to the limited availability of FFB, it is necessary to choose which plantations are most appropriate. This paper proposes a mixed integer linear programming model for the integrated supply chain problem, which includes waste processing. The mathematical programming model is solved using a neighborhood search approach.
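The plantation-selection core of such a model can be sketched on a toy instance: binary variables choose supplying plantations so that FFB supply meets mill demand at minimum cost. The paper solves the full supply-chain model with neighborhood search; here a tiny instance is simply brute-forced, and all numbers are invented:

```python
from itertools import product

# Toy data: three candidate plantations (supply in tonnes of FFB, unit cost).
supply = [40, 60, 50]
cost = [8, 11, 9]
demand = 90                      # tonnes of FFB the mill must process

# Binary y[i] = 1 selects plantation i; minimize total purchase/transport
# cost subject to the supply-meets-demand constraint.
best, best_cost = None, float("inf")
for y in product([0, 1], repeat=len(supply)):
    if sum(s * yi for s, yi in zip(supply, y)) >= demand:
        c = sum(s * u * yi for s, u, yi in zip(supply, cost, y))
        if c < best_cost:
            best, best_cost = y, c
```

The real model adds continuous flow variables for intermediate products and waste processing, which is what makes it a mixed (integer plus continuous) linear program rather than a pure selection problem.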
Alternative mathematical programming formulations for FSS synthesis
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.
1986-01-01
A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.