Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-03-15
Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
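A minimal sketch of the kind of Cox mixed model the authors propose, written in R. The coxme package is one common implementation (an assumption; the abstract names R but no package), and the data set gestures with columns duration, event, condition, and subject is a hypothetical stand-in for a repeated-duration behavioral data set.

```r
library(survival)
library(coxme)

# Semiparametric Cox model with a subject-level random intercept (frailty),
# flexible enough for nonnegative, skew-distributed duration responses.
fit <- coxme(Surv(duration, event) ~ condition + (1 | subject),
             data = gestures)
print(fit)
```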
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
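The paper ships a SAS macro (%PCFrailty); the same piecewise-Poisson trick can be sketched in R as an illustrative analogue, under assumed data (df with columns time, status, x, cluster):

```r
# Piecewise-exponential frailty model as a Poisson GLMM: survSplit "explodes"
# each subject into one row per baseline-hazard piece, then a Poisson GLMM on
# the event indicator with offset log(exposure) estimates the frailty model.
library(survival)
library(lme4)

cuts <- quantile(df$time[df$status == 1], probs = seq(0.2, 0.8, by = 0.2))
long <- survSplit(Surv(time, status) ~ ., data = df, cut = cuts, episode = "piece")
long$exposure <- long$time - long$tstart   # time at risk within each piece

fit <- glmer(status ~ factor(piece) + x + (1 | cluster),
             offset = log(exposure), family = poisson, data = long)
```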
Separate-channel analysis of two-channel microarrays: recovering inter-spot information.
Smyth, Gordon K; Altman, Naomi S
2013-05-26
Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
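This analysis is implemented in the limma Bioconductor package; a typical call sequence (assuming MA is a normalized two-color MAList and design a suitable design matrix) looks like:

```r
library(limma)

corfit <- intraspotCorrelation(MA, design)   # common intra-spot correlation across genes
fit <- lmscFit(MA, design, correlation = corfit$consensus.correlation)
fit <- eBayes(fit)                           # empirical Bayes moderation borrows strength between genes
topTable(fit, coef = 2)                      # top differentially expressed genes
```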
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
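As an illustration of what such a fit involves, here is a rough R analogue (using nlme rather than the SAS PROC NLMIXED used in the study) for Wood's curve, y = a·t^b·exp(−k·t); the data frame, column names, and start values are hypothetical placeholders:

```r
library(nlme)

fit_wood <- nlme(fpr ~ a * dim^b * exp(-k * dim),   # dim = test day in milk
                 data = fpr_data,                   # monthly test-day FPR records
                 fixed = a + b + k ~ 1,
                 random = a ~ 1 | buffalo,          # animal-specific scale parameter
                 start = c(a = 1.2, b = -0.1, k = -0.01))
AIC(fit_wood); BIC(fit_wood)                        # criteria used to rank the seven models
```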
Investigating the Metallicity–Mixing-length Relation
NASA Astrophysics Data System (ADS)
Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.
2018-05-01
Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, Teff, and [Fe/H]. The relationship between the value of α required and the properties of the star is then investigated. For Eddington atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model of the form α/α⊙ = 5.426 − 0.101 log(g) − 1.071 log(Teff) + 0.437 [Fe/H]. This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
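The fitted relation is easy to evaluate; a small R helper (assuming, as is conventional, base-10 logarithms, g in cgs units, and Teff in kelvin):

```r
# alpha/alpha_sun from the paper's linear fit for Eddington-atmosphere,
# non-diffusion models.
alpha_ratio <- function(logg, teff, feh) {
  5.426 - 0.101 * logg - 1.071 * log10(teff) + 0.437 * feh
}

alpha_ratio(logg = 4.44, teff = 5777, feh = 0)  # solar inputs: ~0.95, i.e. ~1 within fit scatter
```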
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
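A frequentist lme4 sketch of the same log-linear structure (the paper's fit is Bayesian, and all variable names here are hypothetical): transition counts per subject, origin state, and night segment, with log time-at-risk as the offset and nested random effects for matched pairs and subjects.

```r
library(lme4)

fit <- glmer(n_trans ~ disease * segment + from_state + (1 | pair/subject),
             offset = log(time_at_risk),  # exposure: time spent in the origin state
             family = poisson, data = sleep)
```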
Lagrangian Mixing in an Axisymmetric Hurricane Model
2010-07-23
The MMR r is found by taking the log of the time-series δρ(t)−A1, where A1 is 90% of the minimum value of δρ(t), and the slope of the linear func... Advective mixing in a nondivergent barotropic hurricane model, Atmos. Chem. Phys., 10, 475–497, doi:10.5194/acp-10-475-2010, 2010. Salman, H., Ide, K
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
COSOLVENCY AND SORPTION OF HYDROPHOBIC ORGANIC CHEMICALS
Sorption of hydrophobic organic chemicals (HOCs) by two soils was measured from mixed solvents containing water plus completely miscible organic solvents (CMOSs) and partially miscible organic solvents (PMOSs). The utility of the log-linear cosolvency model for predicting HOC sor...
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
Liang, Chao; Han, Shu-ying; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin
2014-11-01
A strategy to utilize neutral model compounds for lipophilicity measurement of ionizable basic compounds by reversed-phase high-performance liquid chromatography is proposed in this paper. The applicability of the novel protocol was justified by theoretical derivation. Meanwhile, the linear relationships between the logarithm of apparent n-octanol/water partition coefficients (logKow'') and the logarithm of retention factors corresponding to the 100% aqueous fraction of mobile phase (logkw) were established for a basic training set, a neutral training set and a mixed training set of these two. As proved in theory, the good linearity and external validation results indicated that the logKow''-logkw relationships obtained from a neutral model training set were always reliable regardless of mobile phase pH. Afterwards, the above relationships were adopted to determine the logKow of harmaline, a weakly dissociable alkaloid. As far as we know, this is the first report on experimental logKow data for harmaline (logKow = 2.28 ± 0.08). Introducing neutral compounds into a basic model training set or using neutral model compounds alone is recommended to measure the lipophilicity of weakly ionizable basic compounds, especially those with high hydrophobicity, for the advantages of more suitable model compound choices and convenient mobile phase pH control. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications.
Austin, Peter C
2017-08-01
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack, and using three statistical programming languages (R, SAS and Stata).
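The three families can be sketched in R (one of the tutorial's three languages); the data set ami with patients nested in hospitals, and the interval cut points, are assumed here for illustration:

```r
library(survival); library(coxme); library(lme4)

# (1) Cox proportional hazards with a hospital-level random effect (frailty)
m1 <- coxme(Surv(time, death) ~ age + (1 | hospital), data = ami)

# (2) Piecewise exponential: split follow-up into intervals, then Poisson GLMM
#     with a log-exposure offset (hazard constant within each interval)
long <- survSplit(Surv(time, death) ~ ., data = ami,
                  cut = c(7, 30, 180), episode = "interval")
long$exposure <- long$time - long$tstart
m2 <- glmer(death ~ factor(interval) + age + (1 | hospital),
            offset = log(exposure), family = poisson, data = long)

# (3) Discrete-time survival: complementary log-log GLMM on the same intervals
m3 <- glmer(death ~ factor(interval) + age + (1 | hospital),
            family = binomial(link = "cloglog"), data = long)
```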
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
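The two recommended schemes reduce to short formulas; the helpers below are a sketch (inputs assumed: per-study Z-scores or log-odds effects with standard errors):

```r
# (i) Effective-sample-size weighting of Z-scores, with
#     N_eff = 4 / (1/N_cases + 1/N_controls) per study.
meta_z <- function(z, n_eff) {
  w <- sqrt(n_eff)
  sum(w * z) / sqrt(sum(w^2))
}

# (ii) Inverse-variance weighting of allelic effects already converted
#      onto the log-odds scale.
meta_ivw <- function(beta, se) {
  w <- 1 / se^2
  c(estimate = sum(w * beta) / sum(w), se = sqrt(1 / sum(w)))
}
```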
Competing regression models for longitudinal data.
Alencar, Airlane P; Singer, Julio M; Rocha, Francisco Marcelo M
2012-03-01
The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise
2017-08-25
The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine to that of controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data were fitted using 12 statistical models: log-linear, sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24) compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
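The recommended specification is compact in R: a per-individual log-linear regression with a freely estimated intercept, whose slope is the growth rate in log10 parasites per mL per day (data frame ibsm and its columns are hypothetical):

```r
growth_rate <- function(d) {
  coef(lm(log10(parasites_per_ml) ~ day, data = d))[["day"]]  # slope = growth rate
}
rates <- sapply(split(ibsm, ibsm$subject), growth_rate)       # one estimate per subject
```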
Functional mixed effects spectral analysis
KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG
2011-01-01
In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
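One illustrative way to fit such a model in R (the abstract does not name software; brms is an assumption, as are all variable names) uses the zero-inflated Beta family with event-level random effects:

```r
library(brms)

fit <- brm(pc_scaled ~ time_to_tca + (1 | event),   # scaled log10(Pc) in [0, 1)
           family = zero_inflated_beta(),
           data = conjunctions, chains = 4, iter = 2000)
```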
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France
2016-10-01
Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation with respect to the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Statistical method to compare massive parallel sequencing pipelines.
Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P
2017-03-01
Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted considering the biological variability as a random effect. Among the 390,339 base-pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Same-trend results were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.
An experimental loop design for the detection of constitutional chromosomal aberrations by array CGH
Allemeersch, Joke; Van Vooren, Steven; Hannes, Femke; De Moor, Bart; Vermeesch, Joris Robert; Moreau, Yves
2009-11-19
Background: Comparative genomic hybridization (CGH) microarray analysis for the detection of constitutional chromosomal aberrations is the application of microarray technology coming fastest into routine clinical use. Through genotype-phenotype association, it is also an important technique towards the discovery of disease causing genes and genomewide functional annotation in human. When using a two-channel microarray of genomic DNA probes for array CGH, the basic setup consists of hybridizing a patient against a normal reference sample. Two major disadvantages of this setup are (1) the use of half of the resources to measure a (little informative) reference sample and (2) the possibility that deviating signals are caused by benign copy number variation in the "normal" reference instead of a patient aberration. Instead, we apply an experimental loop design that compares three patients in three hybridizations. Results: We develop and compare two statistical methods (linear models of log ratios and mixed models of absolute measurements). In an analysis of 27 patients seen at our genetics center, we observed that the linear models of the log ratios are advantageous over the mixed models of the absolute intensities. Conclusion: The loop design and the performance of the statistical analysis contribute to the quick adoption of array CGH as a routine diagnostic tool. They lower the detection limit of mosaicisms and improve the assignment of copy number variation for genetic association studies. PMID:19925645
Hakk, Heldur; Shappell, Nancy W; Lupton, Sara J; Shelver, Weilin L; Fanaselle, Wendy; Oryang, David; Yeung, Chi Yuen; Hoelzer, Karin; Ma, Yinqing; Gaalswyk, Dennis; Pouillot, Régis; Van Doren, Jane M
2016-01-13
Seven animal drugs [penicillin G (PENG), sulfadimethoxine (SDMX), oxytetracycline (OTET), erythromycin (ERY), ketoprofen (KETO), thiabendazole (THIA), and ivermectin (IVR)] were used to evaluate the drug distribution between milk fat and skim milk fractions of cow milk. More than 90% of the radioactivity was distributed into the skim milk fraction for ERY, KETO, OTET, PENG, and SDMX, approximately 80% for THIA, and 13% for IVR. The distribution of drug between milk fat and skim milk fractions was significantly correlated with the drug's lipophilicity (partition coefficient, log P, or distribution coefficient, log D, which includes ionization). Data were fit with linear mixed effects models; the best fit was obtained within this data set with log D versus observed drug distribution ratios. These candidate empirical models can assist in predicting the distribution and concentration of these drugs in a variety of milk and milk products.
Xu, Feng; Liang, Xinmiao; Lin, Bingcheng
2002-01-01
Research efforts dealing with chemical transportation in soils are needed to prevent damage to ground water. Methanol-containing solvents can increase the translocation of nonionic organic chemicals (NOCs). In this study, a general log-linear retention equation, log k' = log k'w − Sφ (Eq. [1]), was developed to describe the mobilities of NOCs in soil column chromatography (SCC). The term φ denotes the volume fraction of methanol in the eluent, k' is the capacity factor of a solute at a certain φ value, and log k'w and −S are the intercept and slope of the log k' vs. φ plot. Two reference soils (GSE 17204 and GSE 17205) were used as packing materials, and were eluted by isocratic methanol-water mixtures. A model of linear solvation energy relationships (LSER) was applied to analyze the k' from molecular interactions. The most important factor determining the transportation was found to be the solute hydrophobic partition in soils, and the second-most important factor was the solute hydrogen-bond basicity (hydrogen-bond accepting ability), while the less important factor was the solute dipolarity-polarizability. The solute hydrogen-bond acidity (hydrogen-bond donating ability) was statistically unimportant and deletable. From the LSER model, one could also obtain Eq. [1]. The experimental k' data of 121 NOCs can be accurately explained by Eq. [1]. The equation is promising for estimating the solute mobility in pure water by extrapolating from lower capacity factors obtained in methanol-water mixed eluents.
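Fitting Eq. [1] for a single solute is a straight-line regression of log k' on φ, extrapolated to φ = 0 to estimate mobility in pure water; the numbers below are invented for illustration:

```r
phi   <- c(0.2, 0.3, 0.4, 0.5)        # methanol volume fraction in the eluent
log_k <- c(1.62, 1.18, 0.74, 0.31)    # measured log capacity factors

fit    <- lm(log_k ~ phi)             # log k' = log k'w - S * phi
log_kw <- coef(fit)[["(Intercept)"]]  # extrapolated intercept, log k'w
S      <- -coef(fit)[["phi"]]         # cosolvency slope S
```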
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
Fujimoto, Kayo; Williams, Mark L
2015-06-01
Mixing patterns within sexual networks have been shown to have an effect on HIV transmission, both within and across groups. This study examined sexual mixing patterns involving HIV-unknown status and risky sexual behavior conditioned on assortative/disassortative mixing by race/ethnicity. The sample used for this study consisted of drug-using male sex workers and their male sex partners. A log-linear analysis of 257 most at-risk MSM and 3,072 sex partners was conducted. The analysis found two significant patterns. HIV-positive most at-risk Black MSM had a strong tendency to have HIV-unknown Black partners (relative risk, RR = 2.91, p < 0.001) and to engage in risky sexual behavior (RR = 2.22, p < 0.001). White most at-risk MSM with unknown HIV status also had a tendency to engage in risky sexual behavior with Whites (RR = 1.72, p < 0.001). The results suggest that interventions that target the most at-risk MSM and their sex partners should account for specific sexual network mixing patterns by HIV status.
Tsuji, Leonard J S; Wainman, Bruce C; Martin, Ian D; Weber, Jean-Philippe; Sutherland, Celine; Elliott, J Richard; Nieboer, Evert
2005-09-01
Abandoned radar line stations in the North American arctic and sub-arctic regions are point sources of contamination, especially for PCBs. Few data exist with respect to human body burden of organochlorines (OCs) in residents of communities located in close proximity to these radar line sites. We compared plasma OC concentration (unadjusted for total lipids) frequency distribution data using log-linear contingency modelling for Fort Albany First Nation, the site of an abandoned Mid-Canada Radar Line station, and two comparison populations (the neighbouring community of Kashechewan First Nation without such a radar installation, and Hamilton, a city in southern Ontario, Canada). This type of analysis is important as it allows for an initial investigation of contaminant data without imputing any values. The two-state log-linear model (employing both non-detectable and detectable concentration frequencies and applicable to PCB congeners 28 and 105 and cis-nonachlor) and the four-state log-linear model (using quartile concentration frequencies for Aroclor 1260, PCB congeners [99,118,138,153,156,170,180,183,187], beta-HCH, p,p'-DDT + p,p'-DDE, HCB, mirex, oxychlordane, and trans-nonachlor) revealed that the effects of subject gender were inconsequential. Significant differences (p < 0.05) between the groups examined were attributable to the effect of location on the frequency of detection of OCs or on their differential distribution among the concentration quartiles. In general, people from Hamilton had higher frequencies of non-detections and of concentrations in the first quartile (p < 0.05) for most OCs compared to people from Fort Albany and Kashechewan (who consume a traditional diet of wild meats that does not include marine mammals). An unexpected finding was that, for Kashechewan males, the frequency of many OCs was significantly higher (p < 0.05) in the 4th concentration quartile than that predicted by the four-state log-linear model, but significantly lower than expected in the 1st quartile for beta-HCH. The levels of PCBs found for women in Fort Albany and Kashechewan were greater than those reported for Dene (First Nation people) and Métis (mixed heritage) of the western Northwest Territories (NWT) who did not consume marine mammals, and for Inuit living in the central NWT (occasional consumers of marine mammals). Moreover, the levels of total p,p'-DDT were greater for Fort Albany and Kashechewan women compared to these same aboriginal groups.
Matos, Larissa A.; Bandyopadhyay, Dipankar; Castro, Luis M.; Lachos, Victor H.
2015-01-01
In biomedical studies on HIV RNA dynamics, viral loads generate repeated measures that are often subjected to upper and lower detection limits, and hence these responses are either left- or right-censored. Linear and non-linear mixed-effects censored (LMEC/NLMEC) models are routinely used to analyse these longitudinal data, with normality assumptions for the random effects and residual errors. However, the derived inference may not be robust when these underlying normality assumptions are questionable, especially the presence of outliers and thick-tails. Motivated by this, Matos et al. (2013b) recently proposed an exact EM-type algorithm for LMEC/NLMEC models using a multivariate Student’s-t distribution, with closed-form expressions at the E-step. In this paper, we develop influence diagnostics for LMEC/NLMEC models using the multivariate Student’s-t density, based on the conditional expectation of the complete data log-likelihood. This partially eliminates the complexity associated with the approach of Cook (1977, 1986) for censored mixed-effects models. The new methodology is illustrated via an application to a longitudinal HIV dataset. In addition, a simulation study explores the accuracy of the proposed measures in detecting possible influential observations for heavy-tailed censored data under different perturbation and censoring schemes. PMID:26190871
Density of large snags and logs in northern Arizona mixed-conifer and ponderosa pine forests
Joseph L. Ganey; Benjamin J. Bird; L. Scott Baggett; Jeffrey S. Jenness
2015-01-01
Large snags and logs provide important biological legacies and resources for native wildlife, yet data on populations of large snags and logs and factors influencing those populations are sparse. We monitored populations of large snags and logs in mixed-conifer and ponderosa pine (Pinus ponderosa) forests in northern Arizona from 1997 through 2012. We modeled density...
Assessing the relationship between groundwater nitrate and animal feeding operations in Iowa (USA)
Zirkle, Keith W.; Nolan, Bernard T.; Jones, Rena R.; Weyer, Peter J.; Ward, Mary H.; Wheeler, David C.
2016-01-01
Nitrate-nitrogen is a common contaminant of drinking water in many agricultural areas of the United States of America (USA). Ingested nitrate from contaminated drinking water has been linked to an increased risk of several cancers, specific birth defects, and other diseases. In this research, we assessed the relationship between animal feeding operations (AFOs) and groundwater nitrate in private wells in Iowa. We characterized AFOs by swine and total animal units and type (open, confined, or mixed), and we evaluated the number and spatial intensities of AFOs in proximity to private wells. The types of AFO indicate the extent to which a facility is enclosed by a roof. Using linear regression models, we found significant positive associations between the total number of AFOs within 2 km of a well (p trend < 0.001), number of open AFOs within 5 km of a well (p trend < 0.001), and number of mixed AFOs within 30 km of a well (p trend < 0.001) and the log nitrate concentration. Additionally, we found significant increases in log nitrate in the top quartiles for AFO spatial intensity, open AFO spatial intensity, and mixed AFO spatial intensity compared to the bottom quartile (0.171 log(mg/L), 0.319 log(mg/L), and 0.541 log(mg/L), respectively; all p < 0.001). We also explored the spatial distribution of nitrate-nitrogen in drinking wells and found significant spatial clustering of high-nitrate wells (> 5 mg/L) compared with low-nitrate (≤ 5 mg/L) wells (p = 0.001). A generalized additive model for high-nitrate status identified statistically significant areas of risk for high levels of nitrate. Adjustment for some AFO predictor variables explained a portion of the elevated nitrate risk. These results support a relationship between animal feeding operations and groundwater nitrate concentrations and differences in nitrate loss from confined AFOs vs. open or mixed types.
NASA Technical Reports Server (NTRS)
Asner, Gregory P.; Keller, Michael M.; Silva, Jose Natalino; Zweede, Johan C.; Pereira, Rodrigo, Jr.
2002-01-01
Major uncertainties exist regarding the rate and intensity of logging in tropical forests worldwide: these uncertainties severely limit economic, ecological, and biogeochemical analyses of these regions. Recent sawmill surveys in the Amazon region of Brazil show that the area logged is nearly equal to total area deforested annually, but conversion of survey data to forest area, forest structural damage, and biomass estimates requires multiple assumptions about logging practices. Remote sensing could provide an independent means to monitor logging activity and to estimate the biophysical consequences of this land use. Previous studies have demonstrated that the detection of logging in Amazon forests is difficult and no studies have developed either the quantitative physical basis or remote sensing approaches needed to estimate the effects of various logging regimes on forest structure. A major reason for these limitations has been a lack of sufficient, well-calibrated optical satellite data, which, in turn, has impeded the development and use of physically-based, quantitative approaches for detection and structural characterization of forest logging regimes. We propose to use data from the EO-1 Hyperion imaging spectrometer to greatly increase our ability to estimate the presence and structural attributes of selective logging in the Amazon Basin. Our approach is based on four "biogeophysical indicators" not yet derived simultaneously from any satellite sensor: 1) green canopy leaf area index; 2) degree of shadowing; 3) presence of exposed soil; and 4) non-photosynthetic vegetation material. Airborne, field and modeling studies have shown that the optical reflectance continuum (400-2500 nm) contains sufficient information to derive estimates of each of these indicators. Our ongoing studies in the eastern Amazon basin also suggest that these four indicators are sensitive to logging intensity. Satellite-based estimates of these indicators should provide a means to quantify both the presence and degree of structural disturbance caused by various logging regimes. Our quantitative assessment of Hyperion hyperspectral and ALI multi-spectral data for the detection and structural characterization of selective logging in Amazonia will benefit from data collected through an ongoing project run by the Tropical Forest Foundation, within which we have developed a study of the canopy and landscape biophysics of conventional and reduced-impact logging. We will add to our base of forest structural information in concert with an EO-1 overpass. Using a photon transport model inversion technique that accounts for non-linear mixing of the four biogeophysical indicators, we will estimate these parameters across a gradient of selective logging intensity provided by conventional and reduced impact logging sites. We will also compare our physically-based approach to both conventional (e.g., NDVI) and novel (e.g., SWIR-channel) vegetation indices as well as to linear mixture modeling methods. We will cross-compare these approaches using Hyperion and ALI imagers to determine the strengths and limitations of these two sensors for applications of forest biophysics. This effort will yield the first physically-based, quantitative analysis of the detection and intensity of selective logging in Amazonia, comparing hyperspectral and improved multi-spectral approaches as well as inverse modeling, linear mixture modeling, and vegetation index techniques.
A method of improving sensitivity of carbon/oxygen well logging for low porosity formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Juntao; Zhang, Feng; Zhang, Quanying
2016-12-01
The Carbon/Oxygen (C/O) spectral logging technique has been widely used to determine residual oil saturation and to evaluate water-flooded layers. In order to improve the sensitivity of the technique for low-porosity formations, Gaussian and linear models are applied to fit the peaks of measured spectra to obtain the characteristic coefficients. Standard spectra of carbon and oxygen are combined to establish a new carbon/oxygen value calculation method, and the robustness of the new method is cross-validated with a known mixed gamma-ray spectrum. Formation models for different porosities and saturations are built using the Monte Carlo method. The responses of carbon/oxygen are calculated by the conventional energy window method, and the new method is applied to oil saturation under low-porosity conditions. The results show the new method can reduce the effects of gamma rays contaminated by the interaction between neutrons and other elements on the carbon/oxygen ratio, and therefore can significantly improve the response sensitivity of carbon/oxygen well logging to oil saturation. The new method greatly improves carbon/oxygen well logging in low-porosity conditions.
Log-Multiplicative Association Models as Item Response Models
ERIC Educational Resources Information Center
Anderson, Carolyn J.; Yu, Hsiu-Ting
2007-01-01
Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Bockenholt,…
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
Rothenberg, Stephen J; Rothenberg, Jesse C
2005-09-01
Statistical evaluation of the dose-response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose-response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear-linear dose response) and natural-log-transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose-response relationship. We found that a log-linear lead-IQ relationship was a significantly better fit than was a linear-linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead-IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 microg/dL to 2.0 microg/dL) was 2.2 times (319 billion dollars) that calculated using a linear-linear dose-response function (149 billion dollars). The Centers for Disease Control and Prevention action limit of 10 microg/dL for children fails to protect against most damage and economic cost attributable to lead exposure.
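The two specifications differ only in how blood lead enters an otherwise identical linear model; a sketch with hypothetical data (pooled, with covars standing in for the adjustment set), where the non-nested fits can be compared informally, e.g. by AIC:

```r
m_lin <- lm(iq ~ pb + covars, data = pooled)       # linear-linear dose response
m_log <- lm(iq ~ log(pb) + covars, data = pooled)  # log-linear dose response

AIC(m_lin, m_log)  # same response and data, so the AIC comparison is meaningful
```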
Walking training and cortisol to DHEA-S ratio in postmenopause: An intervention study.
Di Blasio, Andrea; Izzicupo, Pascal; Di Baldassarre, Angela; Gallina, Sabina; Bucci, Ines; Giuliani, Cesidio; Di Santo, Serena; Di Iorio, Angelo; Ripari, Patrizio; Napolitano, Giorgio
2018-04-01
The literature indicates that the plasma cortisol-to-dehydroepiandrosterone-sulfate (DHEA-S) ratio is a marker of health status after menopause, when a decline in both estrogen and DHEA-S and an increase in cortisol occur. An increase in the cortisol-to-DHEA-S ratio has been positively correlated with metabolic syndrome, all-cause mortality, cancer, and other diseases. The aim of this study was to investigate the effects of a walking program on the plasma cortisol-to-DHEA-S ratio in postmenopausal women. Fifty-one postmenopausal women participated in a 13-week supervised walking program, in the metropolitan area of Pescara (Italy), from June to September 2013. Participants were evaluated in April-May and September-October of the same year. The linear mixed model showed that the variation of the log10 cortisol-to-log10 DHEA-S ratio was associated with the volume of exercise (p = .03). Participants having lower adherence to the walking program did not have a significantly modified log10 cortisol or log10 DHEA-S, while those having the highest adherence had a significant reduction in log10 cortisol (p = .016) and a nearly significant increase in log10 DHEA-S (p = .084). Walking training appeared to reduce the plasma log10 cortisol-to-log10 DHEA-S ratio, although a minimum level of training was necessary to achieve this significant reduction.
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. This study compares the advantages and disadvantages of different methods of estimating regression-based cost functions, using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, White's variance estimator and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparing the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about .31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM again showed the weakest model fit. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant.
Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by nonparametric methods that are robust against deviations from normality and homoscedasticity of the residuals is a suitable alternative to transformation of the skew-distributed dependent variable. Further studies with more adequate case numbers are needed to confirm the results.
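A compact R sketch of the three estimators compared above, on synthetic right-skewed cost data; the single severity covariate and Duan's smearing retransformation are illustrative choices, not the study's exact specification.

```r
# Linear OLS, log-transformed OLS, and a gamma GLM with log link fitted to
# synthetic skewed costs; RMSE on the raw scale compares the estimators.
set.seed(7)
n    <- 254
x    <- rnorm(n)                          # stand-in for symptom severity
cost <- exp(8 + 0.5 * x + rnorm(n))       # right-skewed annual costs

ols   <- lm(cost ~ x)                                 # linear OLS, raw costs
lgols <- lm(log(cost) ~ x)                            # log-transformed OLS
gam   <- glm(cost ~ x, family = Gamma(link = "log"))  # GLM, log link

smear <- mean(exp(residuals(lgols)))      # Duan's smearing factor (one of
                                          # several possible bias corrections)
pred  <- cbind(ols   = fitted(ols),
               lgols = exp(fitted(lgols)) * smear,
               gamma = fitted(gam))
apply(pred, 2, function(p) sqrt(mean((cost - p)^2)))  # raw-scale RMSE
```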
NASA Astrophysics Data System (ADS)
Haris, A.; Nafian, M.; Riyanto, A.
2017-07-01
Danish North Sea fields consist of several formations (Ekofisk, Tor, and Cromer Knoll) spanning ages from the Paleocene to the Miocene. In this study, seismic and well log data sets are integrated to determine the chalk sand distribution in the Danish North Sea field, using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute in the multi-attribute analysis. The multi-attribute analysis generates linear and non-linear transformations among well log properties. In the linear case, the transformation is selected by weighted step-wise linear regression (SWR), while the non-linear model is built using probabilistic neural networks (PNN). The porosity estimated by the PNN fits the well log data better than the SWR result. This is expected, since the PNN performs non-linear regression, so the relationship between the attribute data and the predicted log can be better optimized. The distribution of chalk sand was successfully identified and characterized by porosity values ranging from 23% up to 30%.
NASA Astrophysics Data System (ADS)
Vásquez Lavín, F. A.; Hernandez, J. I.; Ponce, R. D.; Orrego, S. A.
2017-07-01
During recent decades, water demand estimation has gained considerable attention from scholars. From an econometric perspective, the most commonly used functional forms include log-log and linear specifications. Despite the advances in this field and the relevance for policymaking, little attention has been paid to the functional forms used in these estimations, and most authors have not provided justifications for their selection of functional forms. A discrete continuous choice model of residential water demand is estimated using six functional forms (log-log, full-log, log-quadratic, semilog, linear, and Stone-Geary), and the expected consumption and price elasticity are evaluated. From a policy perspective, our results highlight the relevance of functional form selection for both expected consumption and price elasticity.
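A minimal sketch of two of the six functional forms and the price elasticities they imply; prices, consumption, and coefficients are simulated, and the discrete-continuous choice structure of the actual estimation is not reproduced.

```r
# Linear vs. log-log water demand and their implied price elasticities.
# Synthetic data; magnitudes are illustrative only.
set.seed(3)
n     <- 500
price <- runif(n, 0.5, 3)                             # marginal water price
q     <- exp(3 - 0.4 * log(price) + rnorm(n, 0, .2))  # consumption, m3/month

lin <- lm(q ~ price)            # linear: elasticity varies with (p, q)
ll  <- lm(log(q) ~ log(price))  # log-log: constant elasticity = coefficient

coef(lin)["price"] * mean(price) / mean(q)  # elasticity at the means
coef(ll)["log(price)"]                      # constant elasticity (~ -0.4)
```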
Influence of organic cosolvents on the sorption kinetics of hydrophobic organic chemicals
A quantitative examination of the kinetics of sorption of hydrophobic organic chemicals by soils from mixed solvents reveals that the reverse sorption rate constant (k2) increases log-linearly with increasing volume fraction of organic cosolvent (fc). This relationship was expec...
WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpelli, M; Eickhoff, J; Perlman, S
Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions, and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log-transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log-transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
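A minimal R sketch of the transformation-selection step described above: scan a grid of one-parameter Box-Cox transformations and keep the one that maximizes the Shapiro-Wilk W statistic. The SUV values below are simulated stand-ins.

```r
# Pick the Box-Cox parameter maximizing the Shapiro-Wilk W statistic.
set.seed(11)
suv <- rlnorm(40, meanlog = 1, sdlog = 0.4)   # lognormal-ish synthetic SUVs

boxcox <- function(y, lam) if (abs(lam) < 1e-8) log(y) else (y^lam - 1) / lam

lambdas <- seq(-2, 2, by = 0.05)
W <- sapply(lambdas, function(l) shapiro.test(boxcox(suv, l))$statistic)
lambdas[which.max(W)]    # close to 0 here, i.e. a log transformation

shapiro.test(log(suv))   # log-transformed values pass the normality test
```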
Garcés-Vega, Francisco; Marks, Bradley P
2014-08-01
In the last 20 years, the use of microbial reduction models has expanded significantly, including inactivation (linear and nonlinear), survival, and transfer models. However, a major constraint for model development is the impossibility of directly quantifying the number of viable microorganisms below the limit of detection (LOD) for a given study. Different approaches have been used to manage this challenge, including ignoring negative plate counts, using statistical estimations, or applying data transformations. Our objective was to illustrate and quantify the effect of negative plate count data management approaches on parameter estimation for microbial reduction models. Because it is impossible to obtain accurate plate counts below the LOD, we performed simulated experiments to generate synthetic data for both log-linear and Weibull-type microbial reductions. We then applied five different, previously reported data management practices and fit log-linear and Weibull models to the resulting data. The results indicated a significant effect (α = 0.05) of the data management practices on the estimated model parameters and performance indicators. For example, when the negative plate counts were replaced by the LOD for log-linear data sets, the slope of the subsequent log-linear model was, on average, 22% smaller than for the original data, the resulting model underpredicted lethality by up to 2.0 log, and the Weibull model was erroneously selected as the most likely correct model for those data. The results demonstrate that it is important to explicitly report LODs and related data management protocols, which can significantly affect model results, interpretation, and utility. Ultimately, we recommend using only the positive plate counts to estimate model parameters for microbial reduction curves and avoiding any data value substitutions or transformations when managing negative plate counts to yield the most accurate model parameters.
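The flavor of that simulation can be reproduced in a few lines of R; the survivor curve, slope, and LOD below are invented, not the study's settings.

```r
# Effect of negative-plate-count handling on a log-linear inactivation slope.
set.seed(5)
time <- rep(seq(0, 10), each = 3)                          # heating time, min
logN <- 6 - 0.8 * time + rnorm(length(time), sd = 0.3)     # log10 CFU/g
LOD  <- 1                                                  # detection limit

fit_drop <- lm(logN ~ time, subset = logN >= LOD)   # positives only
fit_sub  <- lm(pmax(logN, LOD) ~ time)              # negatives replaced by LOD

c(true            = -0.8,
  positives_only  = unname(coef(fit_drop)["time"]),
  lod_substituted = unname(coef(fit_sub)["time"]))
# Substituting the LOD flattens the slope and underpredicts lethality.
```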
NASA Astrophysics Data System (ADS)
Li, Tanda; Bedding, Timothy R.; Huber, Daniel; Ball, Warrick H.; Stello, Dennis; Murphy, Simon J.; Bland-Hawthorn, Joss
2018-03-01
Stellar models rely on a number of free parameters. High-quality observations of eclipsing binary stars observed by Kepler offer a great opportunity to calibrate model parameters for evolved stars. Our study focuses on six Kepler red giants with the goal of calibrating the mixing-length parameter of convection as well as the asteroseismic surface term in models. We introduce a new method to improve the identification of oscillation modes that exploits theoretical frequencies to guide the mode identification (`peak-bagging') stage of the data analysis. Our results indicate that the convective mixing-length parameter (α) is ≈14 per cent larger for red giants than for the Sun, in agreement with recent results from modelling the APOGEE stars. We found that the asteroseismic surface term (i.e. the frequency offset between the observed and predicted modes) correlates with stellar parameters (Teff, log g) and the mixing-length parameter. This frequency offset generally decreases as giants evolve. The two coefficients a-1 and a3 for the inverse and cubic terms that have been used to describe the surface term correction are found to correlate linearly. The effect of the surface term is also seen in the p-g mixed modes; however, established methods for correcting the effect are not able to properly correct the g-dominated modes in late evolved stars.
Devos, Stefanie; Cox, Bianca; van Lier, Tom; Nawrot, Tim S; Putman, Koen
2016-09-01
We used log-linear and log-log exposure-response (E-R) functions to model the association between PM2.5 exposure and non-elective hospitalizations for pneumonia, and estimated the attributable hospital costs by using the effect estimates obtained from both functions. We used hospital discharge data on 3519 non-elective pneumonia admissions from UZ Brussels between 2007 and 2012 and we combined a case-crossover design with distributed lag models. The annual averted pneumonia hospitalization costs for a reduction in PM2.5 exposure from the mean (21.4 μg/m3) to the WHO guideline for annual mean PM2.5 (10 μg/m3) were estimated and extrapolated for Belgium. Non-elective hospitalizations for pneumonia were significantly associated with PM2.5 exposure in both models. Using a log-linear E-R function, the estimated risk reduction for pneumonia hospitalization associated with a decrease in mean PM2.5 exposure to 10 μg/m3 was 4.9%. The corresponding estimate for the log-log model was 10.7%. These estimates translate to an annual pneumonia hospital cost saving in Belgium of €15.5 million and almost €34 million for the log-linear and log-log E-R function, respectively. Although further research is required to assess the shape of the association between PM2.5 exposure and pneumonia hospitalizations, we demonstrated that estimates for health effects and associated costs heavily depend on the assumed E-R function. These results are important for policy making, as supra-linear E-R associations imply that significant health benefits may still be obtained from additional pollution control measures in areas where PM levels have already been reduced. Copyright © 2016 Elsevier Ltd. All rights reserved.
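The arithmetic behind the two risk-reduction figures can be sketched in a few lines of R; the two coefficients below are hypothetical values chosen only so the calculation reproduces reductions of roughly the quoted magnitudes.

```r
# How the two exposure-response shapes translate a PM2.5 cut into a risk
# reduction. Coefficients are hypothetical, for illustration only.
pm_mean <- 21.4   # mean exposure, ug/m3
pm_who  <- 10.0   # WHO annual guideline

beta_loglin <- 0.0044   # log(RR) per 1 ug/m3        (hypothetical)
beta_loglog <- 0.149    # log(RR) per log(ug/m3)     (hypothetical)

# Log-linear: the risk ratio depends on the absolute exposure difference.
1 - exp(beta_loglin * (pm_who - pm_mean))   # ~4.9% reduction
# Log-log: the risk ratio depends on the exposure ratio, so the same cut
# yields a larger benefit at already-low concentrations (supra-linearity).
1 - (pm_who / pm_mean)^beta_loglog          # ~10.7% reduction
```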
Deformation-Aware Log-Linear Models
NASA Astrophysics Data System (ADS)
Gass, Tobias; Deselaers, Thomas; Ney, Hermann
In this paper, we present a novel deformation-aware discriminative model for handwritten digit recognition. Unlike previous approaches our model directly considers image deformations and allows discriminative training of all parameters, including those accounting for non-linear transformations of the image. This is achieved by extending a log-linear framework to incorporate a latent deformation variable. The resulting model has an order of magnitude less parameters than competing approaches to handling image deformations. We tune and evaluate our approach on the USPS task and show its generalization capabilities by applying the tuned model to the MNIST task. We gain interesting insights and achieve highly competitive results on both tasks.
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in genome-wide association studies using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, used separately as outcomes, and using simulated phenotype data, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
Moran, John L; Solomon, Patricia J
2012-05-16
For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of, firstly, log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and, secondly, unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using the Bayesian information criterion [BIC: lower values preferred] and residual analysis, as well as predictive performance (R2, concordance correlation coefficient [CCC], mean absolute error [MAE]), were established for each estimator. The data set consisted of 111,663 patients from 131 ICUs; mean (SD) age was 60.6 (18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length of stay was 3.4 (5.1) days (median 1.8, range 0.17-60) and demonstrated marked kurtosis and right skew (29.4 and 4.4, respectively). BIC showed considerable spread, from a maximum of 509801 (OLS, raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17, and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators. There was a general tendency for over-prediction (negative residuals) and for over-fitting, the exception being the GLM negative binomial estimator. The mean-variance function was best approximated by a quadratic function, consistent with log-scale estimation; the link function was estimated (EEE) as 0.152 (0.019, 0.285), consistent with a fractional-root function. For ICU length of stay, log-scale estimation, in particular the LMM, appeared to be the most consistently performing estimator. Neither the GLM variants nor the skew-regression estimators dominated.
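A reduced sketch of this kind of estimator comparison in R, with synthetic skewed LOS data and no case-mix adjustment; note that a log-scale model's BIC needs a Jacobian term before it can be compared with raw-scale models.

```r
# Three of the estimator families above, on synthetic right-skewed LOS data.
set.seed(9)
n   <- 2000
sev <- rnorm(n)                                   # illness-severity score
los <- exp(0.6 + 0.4 * sev + rnorm(n, 0, 0.9))    # right-skewed LOS, days

m_log   <- lm(log(los) ~ sev)                     # log-scale OLS
m_gamma <- glm(los ~ sev, family = Gamma(link = "log"))
m_ig    <- glm(los ~ sev, family = inverse.gaussian(link = "log"))

BIC(m_gamma, m_ig)               # raw-scale models, directly comparable
# Map the log-scale model's BIC onto the raw-LOS scale by adding the
# Jacobian of the log transformation (density change of variables):
BIC(m_log) + 2 * sum(log(los))
```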
Effect of stimulus configuration on crowding in strabismic amblyopia.
Norgett, Yvonne; Siderov, John
2017-11-01
Foveal vision in strabismic amblyopia can show increased levels of crowding, akin to typical peripheral vision. Target-flanker similarity and visual-acuity test configuration may cause the magnitude of crowding to vary in strabismic amblyopia. We used custom-designed visual acuity tests to investigate crowding in observers with strabismic amblyopia. LogMAR was measured monocularly in both eyes of 11 adults with strabismic or mixed strabismic/anisometropic amblyopia using custom-designed letter tests. The tests used single-letter and linear formats with either bar or letter flankers to introduce crowding. Tests were presented monocularly on a high-resolution display at a test distance of 4 m, using standardized instructions. For each condition, five letters of each size were shown; testing continued until three letters of a given size were named incorrectly. Uncrowded logMAR was subtracted from logMAR in each of the crowded tests to highlight the crowding effect. Repeated-measures ANOVA showed that letter flankers and linear presentation individually resulted in poorer performance in the amblyopic eyes (respectively, mean normalized logMAR = 0.29, SE = 0.07, mean normalized logMAR = 0.27, SE = 0.07; p < 0.05) and together had an additive effect (mean = 0.42, SE = 0.09, p < 0.001). There was no difference across the tests in the fellow eyes (p > 0.05). Both linear presentation and letter rather than bar flankers increase crowding in the amblyopic eyes of people with strabismic amblyopia. These results suggest the influence of more than one mechanism contributing to crowding in linear visual-acuity charts with letter flankers.
Evaluation of third-degree and fourth-degree laceration rates as quality indicators.
Friedman, Alexander M; Ananth, Cande V; Prendergast, Eri; D'Alton, Mary E; Wright, Jason D
2015-04-01
To examine the patterns and predictors of third-degree and fourth-degree laceration in women undergoing vaginal delivery. We identified a population-based cohort of women in the United States who underwent a vaginal delivery between 1998 and 2010 using the Nationwide Inpatient Sample. Multivariable log-linear regression models were developed to account for patient, obstetric, and hospital factors related to lacerations. Between-hospital variability of laceration rates was calculated using generalized log-linear mixed models. Among 7,096,056 women who underwent vaginal delivery in 3,070 hospitals, 3.3% (n=232,762) had a third-degree laceration and 1.1% (n=76,347) had a fourth-degree laceration. In an adjusted model for fourth-degree lacerations, important risk factors included shoulder dystocia and forceps and vacuum deliveries with and without episiotomy. Other demographic, obstetric, medical, and hospital variables, although statistically significant, were not major determinants of lacerations. Risk factors in a multivariable model for third-degree lacerations were similar to those in the fourth-degree model. Regression analysis of hospital rates (n=3,070) of lacerations demonstrated limited between-hospital variation. Risk of third-degree and fourth-degree laceration was most strongly related to operative delivery and shoulder dystocia. Between-hospital variation was limited. Given these findings and that the most modifiable practice related to lacerations would be reduction in operative vaginal deliveries (and a possible increase in cesarean delivery), third-degree and fourth-degree laceration rates may be a quality metric of limited utility.
MIXOR: a computer program for mixed-effects ordinal regression analysis.
Hedeker, D; Gibbons, R D
1996-03-01
MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
Recognition of facial expressions of mixed emotions in school-age children exposed to terrorism.
Scrimin, Sara; Moscardino, Ughetta; Capello, Fabia; Altoè, Gianmarco; Axia, Giovanna
2009-09-01
This exploratory study aims at investigating the effects of terrorism on children's ability to recognize emotions. A sample of 101 exposed and 102 nonexposed children (mean age = 11 years), balanced for age and gender, were assessed 20 months after a terrorist attack in Beslan, Russia. Two trials controlled for children's ability to match a facial emotional stimulus with an emotional label and their ability to match an emotional label with an emotional context. The experimental trial evaluated the relation between exposure to terrorism and children's free labeling of mixed emotion facial stimuli created by morphing between 2 prototypical emotions. Repeated measures analyses of covariance revealed that exposed children correctly recognized pure emotions. Four log-linear models were performed to explore the association between exposure group and category of answer given in response to different mixed emotion facial stimuli. Model parameters indicated that, compared with nonexposed children, exposed children (a) labeled facial expressions containing anger and sadness significantly more often than expected as anger, and (b) produced fewer correct answers in response to stimuli containing sadness as a target emotion.
Validation of ACG Case-mix for equitable resource allocation in Swedish primary health care.
Zielinski, Andrzej; Kronogård, Maria; Lenhoff, Håkan; Halling, Anders
2009-09-18
Adequate resource allocation is an important factor to ensure equity in health care. Previous reimbursement models have been based on age, gender and socioeconomic factors. An explanatory model based on individual need of primary health care (PHC) has not yet been used in Sweden to allocate resources. The aim of this study was to examine to what extent the ACG case-mix system could explain concurrent costs in Swedish PHC. Diagnoses were obtained from electronic PHC records of inhabitants in Blekinge County (approx. 150,000) listed with public PHC (approx. 120,000) for three consecutive years, 2004-2006. The inhabitants were then classified into six different resource utilization bands (RUB) using the ACG case-mix system. The mean costs for primary health care were calculated for each RUB and year. Using linear regression models with log-cost as the dependent variable, the adjusted R2 was calculated for the unadjusted model (gender) and for consecutive models in which age, listing with a specific PHC and RUB were added. In an additional model the ACG groups were added. Gender, age and listing with a specific PHC explained 14.48-14.88% of the variance in individual costs for PHC. Adding information on level of co-morbidity, as measured by the ACG case-mix system, increased the adjusted R2 to 60.89-63.41%. The ACG case-mix system explains patient costs in primary care to a high degree. Age and gender are important explanatory factors, but most of the variance in concurrent patient costs was explained by the ACG case-mix system.
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and, using simulations, assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance).
NASA Astrophysics Data System (ADS)
Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.
2017-08-01
Markov chain and 3-dimensional log-linear models were used to model drought class transitions derived from the newly developed Standardized Precipitation Evapotranspiration Index (SPEI) at a 12-month time scale for six major drought-prone areas of India. The log-linear modelling approach was used to investigate differences in drought class transitions using SPEI-12 time series derived from 48 years of monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1-, 2- or 3-month-ahead prediction of average transition time between drought classes, and the drought severity class were derived. Seasonality of precipitation was accommodated with non-homogeneous Markov chains, which could be used to explain the effect of the potential retreat of drought. Quasi-association and quasi-symmetry log-linear models were fitted to the drought class transitions derived from the SPEI-12 time series. Estimates of odds, along with their confidence intervals, were obtained to explain the progression of drought and to estimate drought class transition probabilities. For initial months, the calculated odds are lower as drought severity increases, and the odds decrease further for the succeeding months. This indicates that the ratio of expected frequencies of transition from a drought class to the non-drought class, as compared with transition to any drought class, decreases as the drought severity of the present class increases. The 3-dimensional log-linear model shows that during the last 24 years the drought probability has increased in almost all six regions. The findings from the present study will help to assess the impact of drought on gross primary production and to develop future contingency planning in similar regions worldwide.
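The first step of such an analysis, estimating a drought-class transition matrix from a monthly class series, can be sketched in R as follows; the class series here is randomly generated rather than derived from SPEI-12.

```r
# Estimate first-order Markov transition probabilities between drought
# classes from a monthly class series (48 years x 12 months, simulated).
set.seed(2)
classes <- c("non-drought", "moderate", "severe", "extreme")
series  <- factor(sample(classes, 576, replace = TRUE,
                         prob = c(.6, .25, .1, .05)), levels = classes)

trans <- table(from = head(series, -1), to = tail(series, -1))
prop.table(trans, margin = 1)      # row-normalised transition probabilities

# Mean residence time of each class under a first-order chain: 1/(1 - p_ii).
1 / (1 - diag(prop.table(trans, 1)))
```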
The mathematical formulation of a generalized Hooke's law for blood vessels.
Zhang, Wei; Wang, Chong; Kassab, Ghassan S
2007-08-01
It is well known that the stress-strain relationship of blood vessels is highly nonlinear. To linearize the relationship, the Hencky strain tensor is generalized to a logarithmic-exponential (log-exp) strain tensor to absorb the nonlinearity. A quadratic nominal strain potential is proposed to derive the second Piola-Kirchhoff stresses by differentiating the potential with respect to the log-exp strains. The resulting constitutive equation is a generalized Hooke's law. Ten material constants are needed for the three-dimensional orthotropic model. The nondimensional constant used in the log-exp strain definition is interpreted as a nonlinearity parameter. The other nine constants are the elastic moduli with respect to the log-exp strains. In this paper, the proposed linear stress-strain relation is shown to represent the pseudoelastic Fung model very well.
NASA Astrophysics Data System (ADS)
Rounaghi, G. H.; Dolatshahi, S.; Tarahomi, S.
2014-12-01
The stoichiometry, stability, and thermodynamic parameters of complex formation between the cerium(III) cation and cryptand 222 (4,7,13,16,21,24-hexaoxa-1,10-diazabicyclo[8.8.8]hexacosane) were studied by the conductometric titration method in binary solvent mixtures of dimethylformamide (DMF), 1,2-dichloroethane (DCE), ethyl acetate (EtOAc) and methyl acetate (MeOAc) with methanol (MeOH), at 288, 298, 308, and 318 K. A model based on 1:1 stoichiometry was used to analyze the conductivity data. The data were fitted by a non-linear least-squares analysis that provides the stability constant, Kf, of the cation-ligand inclusion complex. The results revealed that the stability order of the [Ce(cryptand 222)]3+ complex changes with the nature and composition of the solvent system. A non-linear relationship was observed between the stability constant (log Kf) of the [Ce(cryptand 222)]3+ complex and the composition of the binary mixed solvent. Standard thermodynamic values, obtained from the temperature dependence of the stability constant, show that the studied complexation process is mainly entropy governed and is influenced by the nature and composition of the binary mixed solvent solutions.
Linear and nonlinear methods in modeling the aqueous solubility of organic compounds.
Catana, Cornel; Gao, Hua; Orrenius, Christian; Stouten, Pieter F W
2005-01-01
Solubility data for 930 diverse compounds have been analyzed using linear Partial Least Squares (PLS) and nonlinear PLS methods, Continuum Regression (CR), and Neural Networks (NN). 1D and 2D descriptors from the MOE package, in combination with E-state or ISIS keys, have been used. The best model was obtained using linear PLS for a combination of 22 MOE descriptors and 65 ISIS keys. It has a correlation coefficient (r2) of 0.935 and a root-mean-square error (RMSE) of 0.468 log molar solubility (log Sw). The model, validated on a test set of 177 compounds not included in the training set, has r2 = 0.911 and RMSE = 0.475 log Sw. The descriptors were ranked according to their importance, and the 22 MOE descriptors were found at the top of the list. The CR model produced results as good as PLS, and because of the way in which cross-validation was done, it is expected to be a valuable tool in prediction alongside the PLS model. The statistics obtained using nonlinear methods did not surpass those obtained with linear ones. The good statistics obtained for linear PLS and CR recommend these models for prediction when it is difficult or impossible to make experimental measurements, for virtual screening, combinatorial library design, and efficient lead optimization.
The word frequency effect during sentence reading: A linear or nonlinear effect of log frequency?
White, Sarah J; Drieghe, Denis; Liversedge, Simon P; Staub, Adrian
2016-10-20
The effect of word frequency on eye movement behaviour during reading has been reported in many experimental studies. However, the vast majority of these studies compared only two levels of word frequency (high and low). Here we assess whether the effect of log word frequency on eye movement measures is linear, in an experiment in which a critical target word in each sentence was at one of three approximately equally spaced log frequency levels. Separate analyses treated log frequency as a categorical or a continuous predictor. Both analyses showed only a linear effect of log frequency on the likelihood of skipping a word, and on first fixation duration. Ex-Gaussian analyses of first fixation duration showed similar effects on distributional parameters in comparing high- and medium-frequency words, and medium- and low-frequency words. Analyses of gaze duration and the probability of a refixation suggested a nonlinear pattern, with a larger effect at the lower end of the log frequency scale. However, the nonlinear effects were small, and Bayes Factor analyses favoured the simpler linear models for all measures. The possible roles of lexical and post-lexical factors in producing nonlinear effects of log word frequency during sentence reading are discussed.
A Linearized Model for Flicker and Contrast Thresholds at Various Retinal Illuminances
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Watson, Andrew
2015-01-01
We previously proposed a flicker visibility metric for bright displays, based on psychophysical data collected at a high mean luminance. Here we extend the metric to other mean luminances. This extension relies on a linear relation between log sensitivity and critical fusion frequency, and a linear relation between critical fusion frequency and log retinal illuminance. Consistent with our previous metric, the extended flicker visibility metric is measured in just-noticeable differences (JNDs).
Log-Linear Modeling of Agreement among Expert Exposure Assessors
Hunt, Phillip R.; Friesen, Melissa C.; Sama, Susan; Ryan, Louise; Milton, Donald
2015-01-01
Background: Evaluation of expert assessment of exposure depends, in the absence of a validation measurement, upon measures of agreement among the expert raters. Agreement is typically measured using Cohen's kappa statistic; however, there are some well-known limitations to this approach. We demonstrate an alternate method that uses log-linear models designed to model agreement. These models contain parameters that distinguish between exact agreement (diagonals of the agreement matrix) and non-exact associations (off-diagonals). In addition, they can incorporate covariates to examine whether agreement differs across strata. Methods: We applied these models to evaluate agreement among expert ratings of exposure to sensitizers (none, likely, high) in a study of occupational asthma. Results: Traditional analyses using weighted kappa suggested potential differences in agreement by blue/white collar jobs and office/non-office jobs, but not case/control status. However, the evaluation of the covariates and their interaction terms in log-linear models found no differences in agreement with these covariates and provided evidence that the differences observed using kappa were the result of marginal differences in the distribution of ratings rather than differences in agreement. Differences in agreement were predicted across the exposure scale, with the likely (moderately exposed) category more difficult for the experts to differentiate from the highly exposed category than from the unexposed category. Conclusions: The log-linear models provided valuable information about patterns of agreement and the structure of the data that were not revealed in analyses using kappa. The models' lack of dependence on marginal distributions and the ease of evaluating covariates allow reliable detection of observational bias in exposure data.
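A minimal example of an agreement log-linear model of this type: a Poisson model for a two-rater table with an extra parameter for the diagonal, distinguishing exact agreement from the off-diagonal association. The counts are invented.

```r
# Two raters, three exposure categories; independence plus a common
# diagonal-agreement term, fitted as a Poisson log-linear model.
tab <- expand.grid(r1 = factor(1:3), r2 = factor(1:3))
tab$n     <- c(40, 10, 2, 8, 30, 9, 3, 11, 25)   # hypothetical 3x3 counts
tab$agree <- as.numeric(tab$r1 == tab$r2)        # indicator for the diagonal

m <- glm(n ~ r1 + r2 + agree, family = poisson, data = tab)
summary(m)$coefficients["agree", ]   # exp(coef) = excess odds of agreement
```

Category-specific diagonal terms (one parameter per exposure level) would capture the finding above that agreement differs across the exposure scale.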
Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert
2012-01-01
Purpose To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R2. Hypothesis testing of coefficients over the patient population was performed. Results Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost~0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost~FDGpre0.93, p<0.001). Univariate mixture model fits of FDGpre improved R2 from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent. PMID:22682748
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
Crowther, Michael J; Look, Maxime P; Riley, Richard D
2014-09-28
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Ker, H. W.
2014-01-01
Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…
An R2 statistic for fixed effects in the linear mixed model.
Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver
2008-12-20
Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
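A sketch of the relationship the abstract describes, assuming the statistic takes the form R2 = ν1F/(ν1F + ν2) for an F test with ν1 numerator and ν2 denominator degrees of freedom; the example uses nlme's approximate denominator df rather than the Kenward-Roger df of the original proposal, so it is an approximation of the proposed statistic.

```r
# Partial R2 for a fixed effect in a linear mixed model, computed as a
# one-to-one function of its F statistic: R2 = nu1*F / (nu1*F + nu2).
library(nlme)   # ships with R; Orthodont is one of its example data sets
m   <- lme(distance ~ age + Sex, random = ~ 1 | Subject, data = Orthodont)
aov <- anova(m)                       # rows: (Intercept), age, Sex

f   <- aov["age", "F-value"]
nu1 <- aov["age", "numDF"]
nu2 <- aov["age", "denDF"]
nu1 * f / (nu1 * f + nu2)             # partial R2 for the age effect
```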
Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters
NASA Astrophysics Data System (ADS)
Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun
2015-05-01
We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters Teff, log g, and [Fe/H]. “Linearly supporting” means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method LARSbs, based on the least absolute shrinkage and selection operator; third, estimate the atmospheric parameters Teff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and the frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate Teff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log Teff (83 K for Teff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log Teff (32 K for Teff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].
NASA Technical Reports Server (NTRS)
Merenyi, E.; Miller, J. S.; Singer, R. B.
1992-01-01
The linear mixing model approach was successfully applied to data sets of various natures. In these sets, the measured radiance could be assumed to be a linear combination of radiance contributions. The present work is an attempt to analyze a spectral image of Mars with linear mixing modeling.
Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally
2018-02-01
1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with an RMSE < 10.3%. The external data evaluation showed that the models predicted AUCinf with an RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single-time-point strategy using Cmax (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
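A sketch of the single-time-point idea in R, using the log-log form (one of the four models listed); the concentration and AUC values are invented, not the dalbavancin subject data.

```r
# Regress AUCinf on Cmax (log-log), then predict AUCinf from Cmax alone.
set.seed(21)
cmax <- runif(21, 200, 450)                              # mg/L, end of infusion
auc  <- exp(1.1 + 0.95 * log(cmax) + rnorm(21, 0, .05))  # mg*h/L, synthetic

fit <- lm(log(auc) ~ log(cmax))              # log-log regression model

new <- data.frame(cmax = c(250, 400))        # hypothetical new subjects
exp(predict(fit, new))                       # predicted AUCinf

# Fold difference = observed / predicted; values near 1 indicate accuracy.
range(auc / exp(fitted(fit)))
```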
Vucicevic, J; Popovic, M; Nikolic, K; Filipic, S; Obradovic, D; Agbaba, D
2017-03-01
For this study, 31 compounds, including 16 imidazoline/α-adrenergic receptor (IRs/α-ARs) ligands and 15 central nervous system (CNS) drugs, were characterized in terms of the retention factors (k) obtained using biopartitioning micellar and classical reversed-phase chromatography (log kBMC and log kwRP, respectively). Based on the retention factor (log kwRP) and the slope of the linear curve (S), the isocratic parameter (φ0) was calculated. The obtained retention factors were correlated with experimental log BB values for the group of examined compounds. High correlations were obtained between the logarithm of the biopartitioning micellar chromatography (BMC) retention factor and effective permeability (r(log kBMC/log BB): 0.77), while for the RP-HPLC system the correlations were lower (r(log kwRP/log BB): 0.58; r(S/log BB): -0.50; r(φ0/Pe): 0.61). Based on the log kBMC retention data and calculated molecular parameters of the examined compounds, quantitative structure-permeability relationship (QSPR) models were developed using partial least squares, stepwise multiple linear regression, support vector machine and artificial neural network methodologies. The high degree of structural diversity of the analysed IRs/α-ARs ligands and CNS drugs provides a wide applicability domain of the QSPR models for estimating the blood-brain barrier penetration of related compounds.
Log-linear human chorionic gonadotropin elimination in cases of retained placenta percreta.
Stitely, Michael L; Gerard Jackson, M; Holls, William H
2014-02-01
To describe the human chorionic gonadotropin (hCG) elimination rate in patients with intentionally retained placenta percreta. Medical records for cases of placenta percreta with intentional retention of the placenta were reviewed. The natural log of the hCG levels was plotted versus time, and the elimination rate equations were derived. The hCG elimination rate equations were log-linear in the three cases individually (R2 = 0.96-0.99) and in aggregate (R2 = 0.92). The mean half-life of hCG elimination was 146.3 h (6.1 days). The elimination of hCG in patients with intentionally retained placenta percreta is consistent with a two-compartment elimination model. The hCG elimination in retained placenta percreta is predictable in a log-linear manner that is similar to other reports of retained abnormally adherent placentae treated with or without methotrexate.
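The derivation of the elimination rate equation is a one-line regression; a minimal R sketch with an invented hCG series:

```r
# Log-linear elimination in miniature: regress ln(hCG) on time and read the
# half-life off the slope. Values below are illustrative, not case data.
hours <- c(0, 168, 336, 504, 672)          # weekly draws
hcg   <- c(20000, 9100, 4300, 2050, 980)   # mIU/mL

fit <- lm(log(hcg) ~ hours)
summary(fit)$r.squared                 # near 1 for log-linear decay
log(2) / -coef(fit)["hours"]           # half-life in hours (~146 h reported)
```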
Breivik, Cathrine Nansdal; Nilsen, Roy Miodini; Myrseth, Erling; Pedersen, Paal Henning; Varughese, Jobin K; Chaudhry, Aqeel Asghar; Lund-Johansen, Morten
2013-07-01
There are few reports about the course of vestibular schwannoma (VS) patients following gamma knife radiosurgery (GKRS) compared with the course following conservative management (CM). In this study, we present prospectively collected data of 237 patients with unilateral VS extending outside the internal acoustic canal who received either GKRS (n = 113) or CM (n = 124). The aim was to measure the effect of GKRS, compared with the natural course, on tumor growth rate and hearing loss. Secondary end points were postinclusion additional treatment, quality of life (QoL), and symptom development. The patients underwent magnetic resonance imaging scans, clinical examination, and QoL assessment by the SF-36 questionnaire. Statistics were performed using the Spearman correlation coefficient, Kaplan-Meier plots, a Poisson regression model, mixed linear regression models, and mixed logistic regression models. Mean follow-up time was 55.0 months (SD 26.1, range 10-132). Thirteen patients were lost to follow-up. Serviceable hearing was lost in 54 of 71 (76%) CM and 34 of 53 (64%) GKRS patients during the study period (not significant, log-rank test). There was a significant reduction in tumor volume over time in the GKRS group. The need for treatment following initial GKRS or CM differed at highly significant levels (log-rank test, P < .001). Symptom and QoL development did not differ significantly between the groups. In VS patients, GKRS reduces the tumor growth rate and thereby the incidence rate of new treatment about tenfold. Hearing is lost at similar rates in both groups. Symptoms and QoL seem not to be significantly affected by GKRS.
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
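A small R sketch of unmixing under this model: least squares constrained so the fractions are nonnegative and sum to one, which confines the solution to the convex hull of the endmembers. The endmember spectra (and their names) are invented; quadprog supplies the constrained solver.

```r
# Unmix a pixel spectrum as a convex combination of endmember spectra.
library(quadprog)
E <- cbind(basalt = c(.12, .15, .19, .22, .26),   # endmember spectra,
           dust   = c(.30, .34, .38, .41, .43),   # 5 bands x 3 endmembers
           shade  = c(.02, .02, .03, .03, .03))   # (hypothetical values)
x <- 0.5 * E[, 1] + 0.3 * E[, 2] + 0.2 * E[, 3]   # noiseless mixed pixel

p   <- ncol(E)
sol <- solve.QP(Dmat = crossprod(E),              # min ||E a - x||^2
                dvec = drop(crossprod(E, x)),
                Amat = cbind(rep(1, p), diag(p)), # sum(a) = 1, a >= 0
                bvec = c(1, rep(0, p)),
                meq  = 1)                         # first constraint: equality
round(sol$solution, 3)                            # recovers (0.5, 0.3, 0.2)
```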
Gürtler, Ricardo E; Fernández, María Del Pilar; Cecere, María Carla; Cohen, Joel E
2017-12-01
Human sleeping quarters (domiciles) and chicken coops are key source habitats of Triatoma infestans - the principal vector of the infection that causes Chagas disease - in rural communities in northern Argentina. Here we investigated the links among individual bug bloodmeal contents (BMC, mg), female fecundity, body length (L, mm), host blood sources and habitats. We tested whether L, habitat and host blood conferred relative fitness advantages using generalized linear mixed-effects models and a multimodel inference approach with model averaging. The data analyzed include 769 late-stage triatomines collected in 120 sites from six habitats in 87 houses in Figueroa, Santiago del Estero, during austral spring. L correlated positively with other body-size surrogates and was modified by habitat type, bug stage and recent feeding. Bugs from chicken coops were significantly larger than pig-corral and kitchen bugs. The best-fitting model of log BMC included habitat, recent feeding, bug stage, log Lc (mean-centered log L) and all two-way interactions including log Lc. Human- and chicken-fed bugs had significantly larger BMC than bugs fed on other hosts, whereas goat-fed bugs ranked last, consistent with average blood-feeding rates. Fecundity was maximal in chicken-fed bugs from chicken coops, submaximal in human- and pig-fed bugs, and minimal in goat-fed bugs. This study is the first to reveal the allometric effects of body-size surrogates on BMC and female fecundity in a large set of triatomine populations occupying multiple habitats, and it discloses the links between body size, microsite temperatures and various fitness components that affect the risks of transmission of Trypanosoma cruzi.
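The multimodel inference step described here typically ranks candidate mixed models by AIC and averages coefficients with Akaike weights. A minimal sketch of that bookkeeping (the AIC values and coefficients are hypothetical, not the study's results):

```python
import numpy as np

# Hypothetical AIC values for four candidate mixed models of log BMC.
aic = np.array([1012.4, 1013.1, 1016.8, 1021.0])

delta = aic - aic.min()            # AIC differences from the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()           # Akaike weights (sum to 1)

# Model-averaged estimate of a coefficient shared by all candidate models.
beta = np.array([0.42, 0.45, 0.38, 0.51])  # hypothetical per-model estimates
print(np.round(weights, 3), float(weights @ beta))
```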
Minimizing bias in biomass allometry: Model selection and log transformation of data
Joseph Mascaro; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer
2011-01-01
Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....
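When the traditional log-log route is used, back-transforming the fitted values without a correction systematically underestimates biomass; the usual fix multiplies by exp(sigma^2/2), where sigma^2 is the residual variance on the log scale. A sketch on simulated data with multiplicative error (the coefficients are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.uniform(5, 50, 200)                       # hypothetical stem diameters (cm)
mass = 0.1 * d**2.4 * rng.lognormal(0, 0.3, 200)  # biomass with multiplicative error

# Traditional approach: ordinary least squares on the log-log scale.
X = np.column_stack([np.ones_like(d), np.log(d)])
coef, *_ = np.linalg.lstsq(X, np.log(mass), rcond=None)
resid = np.log(mass) - X @ coef
sigma2 = resid.var(ddof=2)

naive = np.exp(coef[0]) * d**coef[1]      # biased low on the arithmetic scale
corrected = naive * np.exp(sigma2 / 2)    # log-normal bias correction factor
print(coef, np.exp(sigma2 / 2))
```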
Fujisawa, Seiichiro; Kadoma, Yoshinori
2012-01-01
We investigated the quantitative structure-activity relationships between hemolytic activity (log 1/H50) or in vivo mouse intraperitoneal (ip) LD50 using reported data for α,β-unsaturated carbonyl compounds such as (meth)acrylate monomers and their 13C-NMR β-carbon chemical shift (δ). The log 1/H50 value for methacrylates was linearly correlated with the δCβ value. That for (meth)acrylates was linearly correlated with log P, an index of lipophilicity. The ipLD50 for (meth)acrylates was linearly correlated with δCβ but not with log P. For (meth)acrylates, the δCβ value, which is dependent on the π-electron density on the β-carbon, was linearly correlated with PM3-based theoretical parameters (chemical hardness, η; electronegativity, χ; electrophilicity, ω), whereas log P was linearly correlated with heat of formation (HF). Also, the interaction between (meth)acrylates and DPPC liposomes in cell membrane molecular models was investigated using 1H-NMR spectroscopy and differential scanning calorimetry (DSC). The log 1/H50 value was related to the difference in chemical shift (ΔδHa) (Ha: H (trans) attached to the β-carbon) between the free monomer and the DPPC liposome-bound monomer. Monomer-induced DSC phase transition properties were related to HF for monomers. NMR chemical shifts may represent a valuable parameter for investigating the biological mechanisms of action of (meth)acrylates. PMID:22312284
Friesen, Melissa C; Demers, Paul A; Spinelli, John J; Lorenzi, Maria F; Le, Nhu D
2007-04-01
The association between coal tar-derived substances, a complex mixture of polycyclic aromatic hydrocarbons, and cancer is well established. However, the specific aetiological agents are unknown. To compare the dose-response relationships for two common measures of coal tar-derived substances, benzene-soluble material (BSM) and benzo(a)pyrene (BaP), and to evaluate which among these is more strongly related to the health outcomes. The study population consisted of 6423 men with > or =3 years of work experience at an aluminium smelter (1954-97). Three health outcomes identified from national mortality and cancer databases were evaluated: incidence of bladder cancer (n = 90), incidence of lung cancer (n = 147) and mortality due to acute myocardial infarction (AMI, n = 184). The shape, magnitude and precision of the dose-response relationships and cumulative exposure levels for BSM and BaP were evaluated. Two model structures were assessed, where 1n(relative risk) increased with cumulative exposure (log-linear model) or with log-transformed cumulative exposure (log-log model). The BaP and BSM cumulative exposure metrics were highly correlated (r = 0.94). The increase in model precision using BaP over BSM was 14% for bladder cancer and 5% for lung cancer; no difference was observed for AMI. The log-linear BaP model provided the best fit for bladder cancer. The log-log dose-response models, where risk of disease plateaus at high exposure levels, were the best-fitting models for lung cancer and AMI. BaP and BSM were both strongly associated with bladder and lung cancer and modestly associated with AMI. Similar conclusions regarding the associations could be made regardless of the exposure metric.
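The two dose-response structures compared here differ only in how cumulative exposure enters the linear predictor: ln(RR) = beta * x (log-linear) versus ln(RR) = beta * ln(x + 1) (log-log, which plateaus at high exposure). A hedged sketch with hypothetical grouped data and Poisson regression:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical grouped data: cumulative exposure, case counts, person-years.
exposure = np.array([0.5, 1.5, 3.0, 6.0, 12.0])
cases = np.array([4, 7, 12, 15, 16])
pyears = np.array([8000, 7500, 7000, 6000, 4000])

def fit(x):
    X = sm.add_constant(x)
    return sm.GLM(cases, X, family=sm.families.Poisson(),
                  offset=np.log(pyears)).fit()

loglinear = fit(exposure)            # ln(rate) = a + b * x
loglog = fit(np.log(exposure + 1))   # ln(rate) = a + b * ln(x + 1)
print(loglinear.aic, loglog.aic)     # lower AIC identifies the better-fitting shape
```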
USING LINEAR AND POLYNOMIAL MODELS TO EXAMINE THE ENVIRONMENTAL STABILITY OF VIRUSES
The article presents the development of model equations for describing the fate of viral infectivity in environmental samples. Most of the models were based upon the use of a two-step linear regression approach. The first step employs regression of log base 10 transformed viral t...
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J2) statistics can be applied directly. In a simulation study, TG, HL, and J2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J2. © 2015 John Wiley & Sons Ltd/London School of Economics.
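For reference, the Hosmer-Lemeshow statistic mentioned here sums (observed - expected)^2 / (n p(1 - p)) over groups of ordered predicted risk. A minimal sketch on simulated data (the g - 2 degrees of freedom are the logistic-GLM convention):

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, g=10):
    """HL goodness-of-fit statistic over g groups of ordered predicted risk."""
    order = np.argsort(p)
    stat = 0.0
    for idx in np.array_split(order, g):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return stat, chi2.sf(stat, g - 2)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
p_true = 1 / (1 + np.exp(-(0.3 + 0.8 * x)))   # correctly specified logistic model
y = rng.binomial(1, p_true)
print(hosmer_lemeshow(y, p_true))             # large p-value: no evidence of lack of fit
```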
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
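A log-linear selection rule of the kind described assigns each candidate behavior a probability proportional to the exponential of a weighted feature score (a softmax). A minimal sketch with invented features and weights:

```python
import numpy as np

rng = np.random.default_rng(42)

behaviors = ["prey", "swarm", "follow", "random"]
# Hypothetical per-behavior feature scores (e.g. expected fitness gain,
# local crowding) and learned log-linear weights.
features = np.array([[0.9, 0.2],
                     [0.6, 0.5],
                     [0.7, 0.4],
                     [0.1, 0.1]])
weights = np.array([1.5, -0.8])

scores = features @ weights
probs = np.exp(scores - scores.max())  # subtract max for numerical stability
probs /= probs.sum()                   # log-linear (softmax) probabilities

choice = rng.choice(behaviors, p=probs)
print(dict(zip(behaviors, np.round(probs, 3))), "->", choice)
```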
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...
Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
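The contrast drawn here is easy to reproduce: with multiple neurons per animal and a treatment that varies at the animal level, the naive standard error shrinks with the number of neurons while the mixed-model standard error does not. A sketch with simulated clustered data and statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
animals, neurons = 8, 10
animal_id = np.repeat(np.arange(animals), neurons)
animal_effect = rng.normal(0, 2, animals)[animal_id]  # source of intra-class correlation
treatment = (animal_id % 2).astype(float)             # varies between animals only
y = 10 + 1.0 * treatment + animal_effect + rng.normal(0, 1, animals * neurons)
df = pd.DataFrame({"y": y, "treatment": treatment, "animal": animal_id})

ols = smf.ols("y ~ treatment", df).fit()
mixed = smf.mixedlm("y ~ treatment", df, groups=df["animal"]).fit()
# The OLS standard error ignores clustering and is biased downward.
print(ols.bse["treatment"], mixed.bse["treatment"])
```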
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
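The explicit construction behind such splines is a polynomial plus one truncated term per knot; the truncation enforces continuity (and smoothness up to degree - 1) at the join points. A minimal sketch of the design columns, which could then enter the fixed and/or random effects of an LMM:

```python
import numpy as np

def truncated_power_basis(t, knots, degree=2):
    """Fixed-knot regression spline design: global polynomial terms plus
    one truncated term (t - k)_+^degree per knot."""
    cols = [t**p for p in range(degree + 1)]
    cols += [np.clip(t - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

t = np.linspace(0, 24, 100)  # e.g. hours since baseline in a longitudinal study
X = truncated_power_basis(t, knots=[6, 12, 18])
print(X.shape)  # (100, 6): intercept, t, t^2, and three truncated quadratic terms
```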
Pina-Pérez, M C; Silva-Angulo, A B; Rodrigo, D; Martínez-López, A
2009-04-15
With a view to extending the shelf-life and enhancing the safety of liquid whole egg/skim milk (LWE-SM) mixed beverages, a study was conducted with Bacillus cereus vegetative cells inoculated in skim milk (SM) and LWE-SM beverages, with or without antimicrobial cocoa powder. The beverages were treated with Pulsed Electric Field (PEF) technology and then stored at 5 degrees C for 15 days. The kinetic results were modeled with the Bigelow model, Weibull distribution function, modified Gompertz equation, and Log-logistic models. Maximum inactivation registered a reduction of around 3 log cycles at 40 kV/cm, 360 microseconds, 20 degrees C in both the SM and LWE-SM beverages. By contrast, in the beverages supplemented with the aforementioned antimicrobial compound, higher inactivation levels were obtained under the same treatment conditions, reaching a 3.30 log10 cycle reduction. The model affording the best fit for all four beverages was the four-parameter Log-logistic model. After 15 days of storage, the antimicrobial compound lowered Bacillus cereus survival rates in the samples supplemented with CocoanOX 12% by a 4 log cycle reduction, as compared to the untreated samples without CocoanOX 12%. This could indicate that the PEF-antimicrobial combination has a synergistic effect on the bacterial cells under study, increasing their sensitivity to subsequent refrigerated storage.
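One of the candidate models named here, the Weibull distribution function, expresses log10 survivors as -(t/delta)^p, with p < 1 producing the tailing often seen in PEF data. A hedged fitting sketch on invented treatment-time data:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(t, delta, p):
    """Weibull inactivation model: log10(N/N0) = -(t / delta) ** p."""
    return -(t / delta) ** p

# Hypothetical PEF treatment times (microseconds) and log10 reductions.
t = np.array([60, 120, 180, 240, 300, 360], float)
logs = np.array([-0.9, -1.6, -2.1, -2.5, -2.8, -3.0])

(delta, p), _ = curve_fit(weibull_survival, t, logs, p0=(100.0, 1.0))
print(delta, p)  # p < 1 indicates an upward-concave (tailing) survival curve
```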
Economic policy optimization based on both one stochastic model and the parametric control theory
NASA Astrophysics Data System (ADS)
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated, based on its log-linearization, by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of the mappings specified by the model, and by estimating the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators under the estimated nonlinear model and under its log-linearization. On the basis of the nonlinear model, parametric control problems for economic growth and for the volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Sun, Lili; Zhou, Liping; Yu, Yu; Lan, Yukun; Li, Zhiliang
2007-01-01
Polychlorinated diphenyl ethers (PCDEs) have received more and more concerns as a group of ubiquitous potential persistent organic pollutants (POPs). By using molecular electronegativity distance vector (MEDV-4), multiple linear regression (MLR) models are developed for sub-cooled liquid vapor pressures (P(L)), n-octanol/water partition coefficients (K(OW)) and sub-cooled liquid water solubilities (S(W,L)) of 209 PCDEs and diphenyl ether. The correlation coefficients (R) and the leave-one-out cross-validation (LOO) correlation coefficients (R(CV)) of all the 6-descriptor models for logP(L), logK(OW) and logS(W,L) are more than 0.98. By using stepwise multiple regression (SMR), the descriptors are selected and the resulting models are 5-descriptor model for logP(L), 4-descriptor model for logK(OW), and 6-descriptor model for logS(W,L), respectively. All these models exhibit excellent estimate capabilities for internal sample set and good predictive capabilities for external samples set. The consistency between observed and estimated/predicted values for logP(L) is the best (R=0.996, R(CV)=0.996), followed by logK(OW) (R=0.992, R(CV)=0.992) and logS(W,L) (R=0.983, R(CV)=0.980). By using MEDV-4 descriptors, the QSPR models can be used for prediction and the model predictions can hence extend the current database of experimental values.
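The internal and external validation pattern used here (calibration R together with leave-one-out R_CV) is straightforward to reproduce for any MLR model. A sketch with simulated descriptors (the matrix and coefficients are invented, not MEDV-4 values):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 4))  # hypothetical molecular descriptors
y = X @ np.array([1.2, -0.7, 0.4, 0.1]) + rng.normal(0, 0.2, 60)  # e.g. logP_L

model = LinearRegression()
y_fit = model.fit(X, y).predict(X)
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())

R = np.corrcoef(y, y_fit)[0, 1]     # calibration correlation
R_cv = np.corrcoef(y, y_loo)[0, 1]  # leave-one-out cross-validated correlation
print(round(R, 3), round(R_cv, 3))  # R_cv close to R suggests a stable model
```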
NASA Astrophysics Data System (ADS)
Pan, Chengbin; Miranda, Enrique; Villena, Marco A.; Xiao, Na; Jing, Xu; Xie, Xiaoming; Wu, Tianru; Hui, Fei; Shi, Yuanyuan; Lanza, Mario
2017-06-01
Despite the enormous interest raised by graphene and related materials, global concern has recently arisen about their real usefulness in industry, given the worrying lack of 2D-material-based electronic devices on the market. Moreover, analytical tools capable of describing and predicting the behavior of the devices (which are necessary before facing mass production) are very scarce. In this work we synthesize a resistive random access memory (RRAM) using graphene/hexagonal-boron-nitride/graphene (G/h-BN/G) van der Waals structures, and we develop a compact model that accurately describes its functioning. The devices were fabricated using scalable methods (i.e. CVD for material growth and shadow masks for electrode patterning), and they show reproducible resistive switching (RS). The measured characteristics during the forming, set and reset processes were fitted using the model developed. The model is based on the nonlinear Landauer approach for mesoscopic conductors, in this case atomic-sized filaments formed within the 2D materials system. Besides providing excellent overall fitting results (corroborated in log-log, log-linear and linear-linear plots), the model is able to explain the cycle-to-cycle dispersion of the data in terms of the particular features of the filamentary paths, mainly their confinement potential barrier height.
Aircraft Airframe Cost Estimation Using a Random Coefficients Model
1979-12-01
approach will also be used here. 2 Model Formulation: Several different types of equations could be used for the basic form of the CER, such as linear ... 5) Marcotte developed several CERs for fighter aircraft airframes using the log-linear model. A plot of the residuals from the CER for recurring ... of the natural logarithm. Ordinary Least Squares: The ordinary least squares procedure starts with the equation for the general linear model. The ...
Wason, Jay W; Dovciak, Martin
2017-08-01
Climate change is expected to lead to upslope shifts in tree species distributions, but the evidence is mixed partly due to land-use effects and individualistic species responses to climate. We examined how individual tree species demography varies along elevational climatic gradients across four states in the northeastern United States to determine whether species elevational distributions and their potential upslope (or downslope) shifts were controlled by climate, land-use legacies (past logging), or soils. We characterized tree demography, microclimate, land-use legacies, and soils at 83 sites stratified by elevation (~500 to ~1200 m above sea level) across 12 mountains containing the transition from northern hardwood to spruce-fir forests. We modeled elevational distributions of tree species saplings and adults using logistic regression to test whether sapling distributions suggest ongoing species range expansion upslope (or contraction downslope) relative to adults, and we used linear mixed models to determine the extent to which climate, land use, and soil variables explain these distributions. Tree demography varied with elevation by species, suggesting a potential upslope shift only for American beech, downslope shifts for red spruce (more so in cool regions) and sugar maple, and no change with elevation for balsam fir. While soils had relatively minor effects, climate was the dominant predictor for most species and more so for saplings than adults of red spruce, sugar maple, yellow birch, cordate birch, and striped maple. On the other hand, logging legacies were positively associated with American beech, sugar maple, and yellow birch, and negatively with red spruce and balsam fir - generally more so for adults than saplings. All species exhibited individualistic rather than synchronous demographic responses to climate and land use, and the return of red spruce to lower elevations where past logging originally benefited northern hardwood species indicates that land use may mask species range shifts caused by changing climate. © 2016 John Wiley & Sons Ltd.
Farsa, Oldřich
2013-01-01
The log BB parameter is the logarithm of the ratio of a compound's equilibrium concentrations in the brain tissue versus the blood plasma. This parameter is a useful descriptor in assessing the ability of a compound to permeate the blood-brain barrier. The aim of this study was to develop a Hansch-type linear regression QSAR model that correlates the parameter log BB and the retention time of drugs and other organic compounds on a reversed-phase HPLC containing an embedded amide moiety. The retention time was expressed by the capacity factor log k'. The second aim was to estimate the brain's absorption of 2-(azacycloalkyl)acetamidophenoxyacetic acids, which are analogues of piracetam, nefiracetam, and meclofenoxate. Notably, these acids may be novel nootropics. Two simple regression models that relate log BB and log k' were developed from an assay performed using a reversed-phase HPLC that contained an embedded amide moiety. Both the quadratic and linear models yielded statistical parameters comparable to previously published models of log BB dependence on various structural characteristics. The models predict that four members of the substituted phenoxyacetic acid series have a strong chance of permeating the barrier and being absorbed in the brain. The results of this study show that a reversed-phase HPLC system containing an embedded amide moiety is a functional in vitro surrogate of the blood-brain barrier. These results suggest that racetam-type nootropic drugs containing a carboxylic moiety could be more poorly absorbed than analogues devoid of the carboxyl group, especially if the compounds penetrate the barrier by a simple diffusion mechanism.
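Once the calibration line between log BB and log k' is fitted, predicting a new compound is a one-line evaluation. A sketch with invented calibration data (the numbers below are illustrative, not the study's measurements):

```python
import numpy as np

# Hypothetical calibration set: HPLC capacity factors and measured log BB.
logk = np.array([-0.4, -0.1, 0.2, 0.5, 0.8, 1.1, 1.4])
logBB = np.array([-0.9, -0.5, -0.2, 0.1, 0.3, 0.4, 0.5])

lin = np.polyfit(logk, logBB, 1)   # log BB = a * log k' + b
quad = np.polyfit(logk, logBB, 2)  # quadratic alternative

candidate = 0.9  # log k' of a hypothetical phenoxyacetic acid analogue
print(np.polyval(lin, candidate), np.polyval(quad, candidate))
# log BB > 0.3 is commonly read as good brain penetration, below -1 as poor.
```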
Organizational and client determinants of cost in outpatient substance abuse treatment.
Beaston-Blaakman, Aaron; Shepard, Donald; Horgan, Constance; Ritter, Grant
2007-03-01
Understanding variation in the cost of outpatient substance abuse treatment is important for improving the delivery and financing of care. Studies that examine how the cost of treatment relates to treatment program and client characteristics can provide important data about variables that affect unit costs of treatment. Such analyses can inform those who are responsible for setting appropriate reimbursement rates and can give important cost data to program directors responsible for delivering cost-effective treatment. The aim of this study is to describe the results from cost function analyses of outpatient substance abuse treatment programs sampled in the Alcohol and Drug Services Study (ADSS). The ADSS is a national study conducted in the late 1990s to collect organizational, client, and cost data of the specialty sector. The authors examined how organizational and client characteristics affect the cost per episode and the cost per enrollment day of outpatient care. The analysis incorporates organizational variables such as ownership, average length of stay, and visits per enrollment day, as well as client characteristics such as gender, age, and primary drug of choice. For further applicability to current treatment policy, the ADSS cost data were inflated from 1997 to 2005 dollars. Mixed model regressions using log-log and log-linear relationships were developed. Several organizational characteristics have statistically significant coefficients in the model estimating cost per episode, including log of point prevalence (-0.53, p<.01), log of average length of stay (0.73, p<.01), log of visits per enrollment day (0.45, p<.01), log of labor index (0.50, p<.01), proportion of counselor time spent in direct counseling (-0.52, p<.01), and location outside a metropolitan area (-0.19, p<.05). None of the client variables are statistically significant in this model. The analysis of cost per enrollment day indicates diseconomies of scope for programs that provide a broader array of ancillary services. Findings suggest there exist increasing returns to scale in outpatient substance abuse treatment. Mergers of substance abuse treatment programs may be economically beneficial. Other major determinants of cost include the average length of stay, wage rates, visits per enrollment day, and direct client contact time. Increased efficiency may enable programs to control costs in these areas. In addition, many of the patterns identified in the model represent the way in which outpatient substance abuse treatment facilities are reimbursed for services. As these patterns become more specified for client conditions, client factors may become statistically significant in determining costs. The potential problem of endogeneity is addressed. Limitations of the study include possible inaccuracies in non-personnel cost data, changes in the treatment system unaccounted for in the model, and limited market area information with regard to input prices. If further research indicates economies of scale, policymakers might consider supporting the merging of treatment programs. Also, further research into the optimal mix of ancillary and treatment services would provide useful data for treatment programs seeking to balance resource constraints while providing important clinical and support activities. Lastly, research is needed to understand the relationship between treatment costs and service reimbursement.
Kinetics of Hydrothermal Inactivation of Endotoxins
Li, Lixiong; Wilbur, Chris L.; Mintz, Kathryn L.
2011-01-01
A kinetic model was established for the inactivation of endotoxins in water at temperatures ranging from 210°C to 270°C and a pressure of 6.2 × 10^6 Pa. Data were generated using a bench scale continuous-flow reactor system to process feed water spiked with endotoxin standard (Escherichia coli O113:H10). Product water samples were collected and quantified by the Limulus amebocyte lysate assay. At 250°C, 5-log endotoxin inactivation was achieved in about 1 s of exposure, followed by a lower inactivation rate. This non-log-linear pattern is similar to reported trends in microbial survival curves. Predictions and parameters of several non-log-linear models are presented. In the fast-reaction zone (3- to 5-log reduction), the Arrhenius rate constant fits well at temperatures ranging from 120°C to 250°C on the basis of data from this work and the literature. Both biphasic and modified Weibull models are comparable to account for both the high and low rates of inactivation in terms of prediction accuracy and the number of parameters used. A unified representation of thermal resistance curves for a 3-log reduction and a 3D value, associated with endotoxin inactivation and microbial survival, respectively, is presented. PMID:21193667
Hair Manganese as an Exposure Biomarker among Welders.
Reiss, Boris; Simpson, Christopher D; Baker, Marissa G; Stover, Bert; Sheppard, Lianne; Seixas, Noah S
2016-03-01
Quantifying exposure and dose to manganese (Mn) containing airborne particles in welding fume presents many challenges. Common biological markers such as Mn in blood or Mn in urine have not proven to be practical biomarkers even in studies where positive associations were observed. However, hair Mn (MnH) as a biomarker has the advantage over blood and urine that it is less influenced by short-term variability of Mn exposure levels because of its slow growth rate. The objective of this study was to determine whether hair can be used as a biomarker for welders exposed to manganese. Hair samples (1cm) were collected from 47 welding school students and individual air Mn (MnA) exposures were measured for each subject. MnA levels for all days were estimated with a linear mixed model using welding type as a predictor. A 30-day time-weighted average MnA (MnA30d) exposure level was calculated for each hair sample. The association between MnH and MnA30d levels was then assessed. A linear relationship was observed between log-transformed MnA30d and log-transformed MnH. Doubling MnA30d exposure levels yields a 20% (95% confidence interval: 11-29%) increase in MnH. The association was similar for hair washed following two different wash procedures designed to remove external contamination. Hair shows promise as a biomarker for inhaled Mn exposure given the presence of a significant linear association between MnH and MnA30d levels. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
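The reported effect size follows directly from the slope of a log-log regression: if log2(MnH) increases by b per unit of log2(MnA30d), a doubling of exposure multiplies MnH by 2^b. A quick arithmetic check (the slope below is chosen to reproduce the quoted 20%, not taken from the paper):

```python
# Hypothetical slope of log2(MnH) on log2(MnA30d).
b = 0.263
pct_per_doubling = 2 ** b - 1            # multiplicative effect of doubling exposure
print(round(100 * pct_per_doubling, 1))  # -> 20.0 (%)
```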
NASA Astrophysics Data System (ADS)
Mert, Bayram Ali; Dag, Ahmet
2017-12-01
In this study, firstly, a practical and educational geostatistical program (JeoStat) was developed, and then an example analysis of the porosity parameter distribution, using oilfield data, is presented. With this program, two- or three-dimensional variogram analysis can be performed using normal, log-normal or indicator-transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the user. These theoretical models can be easily and quickly fitted to the experimental variograms using the mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and cross-validation tests for validation of the fitted theoretical model. All the results obtained by the analysis, as well as all the graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps. The numerical values of any point in a map can be monitored using the mouse and text boxes. The program is available free of charge to students, researchers, consultants and corporations of any size. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
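Of the theoretical models JeoStat offers, the spherical is the most common; it rises as 1.5(h/a) - 0.5(h/a)^3 up to the range a and is flat beyond. A hedged sketch of fitting it to an empirical variogram (the lag distances and semivariances below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical variogram model with nugget, total sill, and range a."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)

# Hypothetical empirical variogram of porosity.
lags = np.array([50, 100, 150, 200, 250, 300, 350], float)
gamma = np.array([0.8, 1.4, 1.9, 2.2, 2.4, 2.5, 2.5])

(nugget, sill, a), _ = curve_fit(spherical, lags, gamma, p0=(0.5, 2.5, 250.0))
print(nugget, sill, a)  # the fitted model then feeds ordinary kriging weights
```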
Spatial and temporal behavioural responses of wild cattle to tropical forest degradation
Goossens, Benoît; Goon Ee Wern, Jocelyn; Kretzschmar, Petra; Bohm, Torsten; Vaughan, Ian P.
2018-01-01
Identifying the consequences of tropical forest degradation is essential to mitigate its effects upon forest fauna. Large forest-dwelling mammals are often highly sensitive to environmental perturbation through processes such as fragmentation, simplification of habitat structure, and abiotic changes including increased temperatures where the canopy is cleared. Whilst previous work has focused upon species richness and rarity in logged forest, few look at spatial and temporal behavioural responses to forest degradation. Using camera traps, we explored the relationships between diel activity, behavioural expression, habitat use and ambient temperature to understand how the wild free-ranging Bornean banteng (Bos javanicus lowi) respond to logging and regeneration. Three secondary forests in Sabah, Malaysian Borneo were studied, varying in the time since last logging (6–23 years). A combination of generalised linear mixed models and generalised linear models were constructed using >36,000 trap-nights. Temperature had no significant effect on activity, however it varied markedly between forests, with the period of intense heat shortening as forest regeneration increased over the years. Bantengs regulated activity, with a reduction during the wet season in the most degraded forest (z = -2.6, Std. Error = 0.13, p = 0.01), and reductions during midday hours in forest with limited regeneration, however after >20 years of regrowth, activity was more consistent throughout the day. Foraging and use of open canopy areas dominated the activity budget when regeneration was limited. As regeneration advanced, this was replaced by greater investment in travelling and using a closed canopy. Forest degradation modifies the ambient temperature, and positively influences flooding and habitat availability during the wet season. Retention of a mosaic of mature forest patches within commercial forests could minimise these effects and also provide refuge, which is key to heat dissipation and the prevention of thermal stress, whilst retention of degraded forest could provide forage. PMID:29649279
Madison, Matthew J; Bradshaw, Laine P
2015-06-01
Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. The general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data, and it makes the argument for moving to vine copula random effects models, especially given their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite being three-dimensional.
Wallace, Michael P; Stewart, Catherine E; Moseley, Merrick J; Stephens, David A; Fielder, Alistair R
2016-12-01
To generate a statistical model for personalizing a patient's occlusion therapy regimen. Statistical modelling was undertaken on a combined data set of the Monitored Occlusion Treatment of Amblyopia Study (MOTAS) and the Randomized Occlusion Treatment of Amblyopia Study (ROTAS). This exercise permits the calculation of future patients' total effective dose (TED), the dose predicted to achieve their best attainable visual acuity. Daily patching regimens (hours/day) can be calculated from the TED. Occlusion data for 149 study participants with amblyopia (anisometropic in 50, strabismic in 43, and mixed in 56) were analyzed. Median time to best observed visual acuity was 63 days (25% and 75% quartiles: 28 and 91 days). Median visual acuity in the amblyopic eye was 0.40 logMAR (quartiles 0.22 and 0.68 logMAR) at the start of occlusion and 0.12 logMAR (quartiles 0.025 and 0.32 logMAR) at the end. Median lower and upper estimates of TED were 120 hours (quartiles 34 and 242 hours) and 176 hours (quartiles 84 and 316 hours). The data suggest a piecewise linear relationship (P = 0.008) between patching dose-rate (hours/day) and TED with a single breakpoint estimated at 2.16 (standard error 0.51) hours/day, suggesting doses below 2.16 hours/day are less effective. We introduce the concept of the TED of occlusion. Predictors for TED are visual acuity deficit, amblyopia type, and age at the start of occlusion therapy. Dose-rates prescribed within the model range from 2.5 to 12 hours/day and can be revised dynamically throughout treatment in response to recorded patient compliance: a personalized dosing strategy.
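The piecewise linear relationship with a single breakpoint can be fitted as a "broken-stick" regression in which the slope changes where a hinge term activates. A sketch on simulated dose-rate data (the generating values echo, but are not, the study's estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(x, b0, b1, b2, bp):
    """One-breakpoint piecewise linear model: slope b1 below bp,
    slope b1 + b2 above bp, continuous at the breakpoint."""
    return b0 + b1 * x + b2 * np.clip(x - bp, 0, None)

rng = np.random.default_rng(5)
dose_rate = rng.uniform(0.5, 12, 120)  # hypothetical patching dose-rates (h/day)
ted = (40 + 5 * dose_rate + 18 * np.clip(dose_rate - 2.2, 0, None)
       + rng.normal(0, 15, 120))

params, _ = curve_fit(broken_stick, dose_rate, ted, p0=(40.0, 5.0, 10.0, 3.0))
print(params)  # the last element estimates the breakpoint (about 2.2 h/day here)
```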
On the equivalence of case-crossover and time series methods in environmental epidemiology.
Lu, Yun; Zeger, Scott L
2007-04-01
The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
Peltenburg, Hester; Timmer, Niels; Bosman, Ingrid J; Hermens, Joop L M; Droge, Steven T J
2016-05-20
The mixed-mode (C18/strong cation exchange-SCX) solid-phase microextraction (SPME) fiber has recently been shown to have increased sensitivity for ionic compounds compared to more conventional sampler coatings such as polyacrylate and polydimethylsiloxane (PDMS). However, data for structurally diverse compounds on this (prototype) sampler coating are too limited to define its structural limitations. We determined C18/SCX fiber partitioning coefficients of nineteen cationic structures without hydrogen bonding capacity besides the charged group, stretching over a wide hydrophobicity range (including amphetamine, amitriptyline, promazine, chlorpromazine, triflupromazine, difenzoquat), and eight basic pharmaceutical and illicit drugs (pKa>8.86) with additional hydrogen bonding moieties (MDMA, atenolol, alprenolol, metoprolol, morphine, nicotine, tramadol, verapamil). In addition, sorption data for three neutral benzodiazepines (diazepam, temazepam, and oxazepam) and the anionic NSAID diclofenac were collected to determine the efficiency of sampling non-basic drugs. All tested compounds showed nonlinear isotherms above 1 mmol/L in the coating, and linear isotherms below 1 mmol/L. The affinity of C18/SCX-SPME for the tested organic cations without H-bond capacities increased with longer alkyl chains, with logarithmic fiber-water distribution coefficients (log Dfw) ranging from 1.8 (benzylamine) to 5.8 (triflupromazine). Amines smaller than benzylamine may thus have limited detection levels, while cationic surfactants with alkyl chain lengths >12 carbon atoms may sorb too strongly to the C18/SCX sampler, which hampers calibration of the fiber-water relationship in the linear range. The log Dfw for these simple cation structures closely correlates with the octanol-water partition coefficient of the neutral form (Kow,N), and decreases with increased branching and the presence of multiple aromatic rings. Oxygen moieties in organic cations decreased the affinity for C18/SCX-SPME. Log Dfw values of neutral benzodiazepines were an order of magnitude higher than their log Kow,N. Results for anionic diclofenac species (log Kow,N 4.5, pKa 4.0, log Dfw 2.9) indicate that the C18/SCX fiber might also be useful for sampling organic anions. These data support our theory that C18-based coatings are able to sorb ionized compounds through adsorption and demonstrate the applicability of C18-based SPME in the measurement of freely dissolved concentrations of a wide range of ionizable compounds. Copyright © 2016 Elsevier B.V. All rights reserved.
Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.
Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari
2018-04-05
The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD) based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrative moving average (ARIMA) modeling in representative patients, and (B) sequential linear mixed effects (LME) models with various embedded ARIMA error structures for PRx for the entire population. Finally, we performed sequential LME modeling with embedded PRx ARIMA structure to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. By using linear mixed effects modeling and accounting for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood volume. Further work is required to validate these models.
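The ARIMA(2,0,2) error structure identified here can be examined in isolation by fitting the same order to a single recording and comparing information criteria across candidate orders. A sketch on a simulated ARMA(2,2) series (the coefficients are invented, not patient-derived):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)
e = rng.normal(0, 0.1, 2100)
x = np.zeros(2100)
for t in range(2, 2100):  # simulate an ARMA(2,2) process
    x[t] = 0.6 * x[t-1] - 0.2 * x[t-2] + e[t] + 0.3 * e[t-1] + 0.1 * e[t-2]
series = x[100:]  # drop burn-in

fit = ARIMA(series, order=(2, 0, 2)).fit()
print(fit.aic)  # compare against competing (p, 0, q) orders to pick the structure
```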
Calinger, Kellen; Calhoon, Elisabeth; Chang, Hsiao-chi; Whitacre, James; Wenzel, John; Comita, Liza; Queenborough, Simon
2015-01-01
Anthropogenic disturbances often change ecological communities and provide opportunities for non-native species invasion. Understanding the impacts of disturbances on species invasion is therefore crucial for invasive species management. We used generalized linear mixed effects models to explore the influence of land-use history and distance to roads on the occurrence and abundance of two invasive plant species (Rosa multiflora and Berberis thunbergii) in a 900-ha deciduous forest in the eastern U.S.A., the Powdermill Nature Reserve. Although much of the reserve has been continuously forested since at least 1939, aerial photos revealed a variety of land-uses since then including agriculture, mining, logging, and development. By 2008, both R. multiflora and B. thunbergii were widespread throughout the reserve (occurring in 24% and 13% of 4417 10-m diameter regularly-placed vegetation plots, respectively) with occurrence and abundance of each varying significantly with land-use history. Rosa multiflora was more likely to occur in historically farmed, mined, logged or developed plots than in plots that remained forested, (log odds of 1.8 to 3.0); Berberis thunbergii was more likely to occur in plots with agricultural, mining, or logging history than in plots without disturbance (log odds of 1.4 to 2.1). Mining, logging, and agriculture increased the probability that R. multiflora had >10% cover while only past agriculture was related to cover of B. thunbergii. Proximity to roads was positively correlated with the occurrence of R. multiflora (a 0.26 increase in the log odds for every 1-m closer) but not B. thunbergii, and roads had no impact on the abundance of either species. Our results indicated that a wide variety of disturbances may aid the introduction of invasive species into new habitats, while high-impact disturbances such as agriculture and mining increase the likelihood of high abundance post-introduction. PMID:26046534
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
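Both estimators discussed here are available as GLMs: the robust Poisson pairs a Poisson mean model with sandwich standard errors, while the log-binomial uses a binomial family with a log link. A sketch on simulated data with a true relative risk of 2 (statsmodels API assumed; link class names may differ across versions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 2000
exposed = rng.binomial(1, 0.5, n).astype(float)
p = np.where(exposed == 1, 0.30, 0.15)  # true relative risk = 2.0
y = rng.binomial(1, p)
X = sm.add_constant(exposed)

# Robust (modified) Poisson: Poisson mean model, sandwich (HC0) errors.
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
# Log-binomial: binomial family with a non-canonical log link.
logbin = sm.GLM(y, X,
                family=sm.families.Binomial(link=sm.families.links.Log())).fit()

print(np.exp(poisson.params[1]), np.exp(logbin.params[1]))  # both estimate the RR
```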
Use of Log-Linear Models in Classification Problems.
1981-12-01
polynomials. The second example involves infant hypoxic trauma, and many cells are empty. The existence conditions are used to find a model for which estimates of cell frequencies exist and are in good agreement with the observed data. Key Words: Classification problem, log-difference models, minimum ... variates define k states, which are labeled consecutively. Thus, while MB define cells in their tables by an I-vector Z, we simply take Z to be a
Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna
2017-06-01
The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions ( Panthera leo ) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers for changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.
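The interval construction described (log transformation, then parametric bootstrap limits with 90% confidence bounds) can be sketched in a few lines; the simulated analyte below stands in for any of the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(21)
# Hypothetical analyte values from 52 clinically normal animals (right-skewed).
values = rng.lognormal(mean=3.0, sigma=0.4, size=52)

logs = np.log(values)
mu, sd = logs.mean(), logs.std(ddof=1)

# Parametric bootstrap: resample from the fitted log-normal, record the
# central 95% limits, then report 90% confidence bounds on each limit.
lo, hi = [], []
for _ in range(5000):
    sim = rng.normal(mu, sd, size=52)
    lo.append(np.exp(np.quantile(sim, 0.025)))
    hi.append(np.exp(np.quantile(sim, 0.975)))

print(np.percentile(lo, [5, 95]), np.percentile(hi, [5, 95]))
```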
Farsa, Oldřich
2013-01-01
The log BB parameter is the logarithm of the ratio of a compound’s equilibrium concentrations in the brain tissue versus the blood plasma. This parameter is a useful descriptor in assessing the ability of a compound to permeate the blood-brain barrier. The aim of this study was to develop a Hansch-type linear regression QSAR model that correlates the parameter log BB and the retention time of drugs and other organic compounds on a reversed-phase HPLC containing an embedded amide moiety. The retention time was expressed by the capacity factor log k′. The second aim was to estimate the brain’s absorption of 2-(azacycloalkyl)acetamidophenoxyacetic acids, which are analogues of piracetam, nefiracetam, and meclofenoxate. Notably, these acids may be novel nootropics. Two simple regression models that relate log BB and log k′ were developed from an assay performed using a reversed-phase HPLC that contained an embedded amide moiety. Both the quadratic and linear models yielded statistical parameters comparable to previously published models of log BB dependence on various structural characteristics. The models predict that four members of the substituted phenoxyacetic acid series have a strong chance of permeating the barrier and being absorbed in the brain. The results of this study show that a reversed-phase HPLC system containing an embedded amide moiety is a functional in vitro surrogate of the blood-brain barrier. These results suggest that racetam-type nootropic drugs containing a carboxylic moiety could be more poorly absorbed than analogues devoid of the carboxyl group, especially if the compounds penetrate the barrier by a simple diffusion mechanism. PMID:23641330
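A minimal sketch of the kind of Hansch-type regression described above (Python/numpy; the log k′ and log BB values below are invented placeholders, not the paper's measurements):

```python
import numpy as np

# Hypothetical capacity factors (log k') and measured log BB values
log_k = np.array([-0.52, -0.31, -0.10, 0.05, 0.22, 0.41, 0.63, 0.80])
log_bb = np.array([-1.10, -0.72, -0.45, -0.20, 0.01, 0.18, 0.35, 0.42])

# Linear model: log BB = a * log k' + b
a, b = np.polyfit(log_k, log_bb, 1)
pred = a * log_k + b
r2 = 1 - ((log_bb - pred) ** 2).sum() / ((log_bb - log_bb.mean()) ** 2).sum()
print(f"linear: log BB = {a:.2f} log k' + {b:.2f} (R^2 = {r2:.2f})")

# Quadratic alternative, as also considered in the study
c2, c1, c0 = np.polyfit(log_k, log_bb, 2)
print(f"quadratic: log BB = {c2:.2f} (log k')^2 + {c1:.2f} log k' + {c0:.2f}")
```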
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Joseph L. Ganey; Scott C. Vojta
2012-01-01
Down logs provide important ecosystem services in forests and affect surface fuel loads and fire behavior. Amounts and kinds of logs are influenced by factors such as forest type, disturbance regime, forest management, and climate. To quantify potential short-term changes in log populations during a recent global-climate-change-type drought, we sampled logs in mixed-…
MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)
We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of …
Reliability Analysis of the Gradual Degradation of Semiconductor Devices.
1983-07-20
…under the heading of linear models or linear statistical models [3,4]. We have not used this material in this report. Assuming catastrophic failure when … assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF … [a table listing units 1 through n with failure times T1 through Tn appeared here] … and are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation…
NASA Astrophysics Data System (ADS)
Kevorkyants, S. S.
2018-03-01
For theoretically studying the intensity of the influence exerted by the polarization of the rocks on the results of direct current (DC) well logging, a solution is suggested for the direct inner problem of DC electric logging in a polarizable model of a plane-layered medium containing a heterogeneity, by the example of the three-layer model of the hosting medium. Initially, the solution is presented in the form of a traditional vector volume-integral equation of the second kind (IE2) for the electric current density vector. The vector IE2 is solved by the modified iteration-dissipation method. By the transformations, the initial IE2 is reduced to the equation with a contraction integral operator for an axisymmetric model of electrical well logging of the three-layer polarizable medium intersected by an infinitely long circular cylinder. The latter simulates the borehole with a zone of penetration, where the sought vector consists of the radial J_r and axial J_z components (relative to the cylinder's axis). The decomposition of the obtained vector IE2 into scalar components and the discretization in the coordinates r and z lead to an inhomogeneous system of linear algebraic equations with a block matrix of coefficients consisting of 2×2 matrices whose elements are triple integrals of the mixed second-order derivatives of the Green's function with respect to the parameters r, z, r', and z'. With the use of analytical transformations and standard integrals, the integrals over the areas of the partition cells and the azimuthal coordinate are reduced to single integrals (with respect to the variable t = cos ϕ on the interval [-1, 1]) calculated by the Gauss method for numerical integration. For estimating the effective coefficient of polarization of the complex medium, it is suggested to use the Siegel-Komarov formula.
Pedroza, Claudia; Truong, Van Thi Thanh
2017-11-02
Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.
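To make the modeling choices above concrete, here is a minimal Python sketch (statsmodels; simulated two-arm multicenter data with invented effect sizes) of one of the recommended analyses, a GEE Poisson model with a log link and an exchangeable working correlation for center. The small-sample corrections to the robust standard errors recommended above would still need to be applied separately.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
centers = np.repeat(np.arange(20), 30)           # 20 centers, 30 subjects each
treat = rng.integers(0, 2, size=centers.size)
center_effect = rng.normal(0, 0.3, size=20)[centers]
p = np.clip(np.exp(-1.5 + np.log(0.8) * treat + center_effect), 0, 1)
y = rng.binomial(1, p)
df = pd.DataFrame({"y": y, "treat": treat, "center": centers})

# GEE Poisson with a log link and exchangeable working correlation:
# exponentiated coefficients are relative risks, robust to clustering.
gee = smf.gee("y ~ treat", groups="center", data=df,
              family=sm.families.Poisson(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(gee.params["treat"]))  # estimated relative risk
```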
On the validity of travel-time based nonlinear bioreactive transport models in steady-state flow.
Sanz-Prat, Alicia; Lu, Chuanhe; Finkel, Michael; Cirpka, Olaf A
2015-01-01
Travel-time based models simplify the description of reactive transport by replacing the spatial coordinates with the groundwater travel time, posing a quasi one-dimensional (1-D) problem and potentially rendering the determination of multidimensional parameter fields unnecessary. While the approach is exact for strictly advective transport in steady-state flow if the reactive properties of the porous medium are uniform, its validity is unclear when local-scale mixing affects the reactive behavior. We compare a two-dimensional (2-D), spatially explicit, bioreactive, advective-dispersive transport model, considered as "virtual truth", with three 1-D travel-time based models which differ in the conceptualization of longitudinal dispersion: (i) neglecting dispersive mixing altogether, (ii) introducing a local-scale longitudinal dispersivity constant in time and space, and (iii) using an effective longitudinal dispersivity that increases linearly with distance. The reactive system considers biodegradation of dissolved organic carbon, which is introduced into a hydraulically heterogeneous domain together with oxygen and nitrate. Aerobic and denitrifying bacteria use the energy of the microbial transformations for growth. We analyze six scenarios differing in the variance of log-hydraulic conductivity and in the inflow boundary conditions (constant versus time-varying concentration). The concentrations of the 1-D models are mapped to the 2-D domain by means of the kinematic travel time (case i) and the mean groundwater age (cases ii and iii), respectively. The comparison between concentrations of the "virtual truth" and the 1-D approaches indicates extremely good agreement when using an effective, linearly increasing longitudinal dispersivity in the majority of the scenarios, while the other two 1-D approaches reproduce at least the concentration tendencies well. At late times, all 1-D models give valid approximations of two-dimensional transport. We conclude that the conceptualization of nonlinear bioreactive transport in complex multidimensional domains by quasi 1-D travel-time models is valid for steady-state flow fields if the reactants are introduced over a wide cross-section, flow is at quasi steady state, and dispersive mixing is adequately parametrized. Copyright © 2015 Elsevier B.V. All rights reserved.
Permeability-porosity relationships in sedimentary rocks
Nelson, Philip H.
1994-01-01
In many consolidated sandstone and carbonate formations, plots of core data show that the logarithm of permeability (k) is often linearly proportional to porosity (φ). The slope, intercept, and degree of scatter of these log(k)-φ trends vary from formation to formation, and these variations are attributed to differences in initial grain size and sorting, diagenetic history, and compaction history. In unconsolidated sands, better sorting systematically increases both permeability and porosity. In sands and sandstones, an increase in gravel and coarse grain size content causes k to increase even while decreasing φ. Diagenetic minerals in the pore space of sandstones, such as cement and some clay types, tend to decrease log(k) proportionately as φ decreases. Models to predict permeability from porosity and other measurable rock parameters fall into three classes based on either grain, surface area, or pore dimension considerations. (Models that directly incorporate well log measurements but have no particular theoretical underpinnings form a fourth class.) Grain-based models show permeability proportional to the square of grain size times porosity raised to (roughly) the fifth power, with grain sorting as an additional parameter. Surface-area models show permeability proportional to the inverse square of pore surface area times porosity raised to (roughly) the fourth power; measures of surface area include irreducible water saturation and nuclear magnetic resonance. Pore-dimension models show permeability proportional to the square of a pore dimension times porosity raised to a power of (roughly) two and produce curves of constant pore size that transgress the linear data trends on a log(k)-φ plot. The pore dimension is obtained from mercury injection measurements and is interpreted as the pore opening size of some interconnected fraction of the pore system. The linear log(k)-φ data trends cut the curves of constant pore size from the pore-dimension models, which shows that porosity reduction is always accompanied by a reduction in characteristic pore size. The high powers of porosity of the grain-based and surface-area models are required to compensate for the inclusion of the small end of the pore size spectrum.
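A compact Python illustration of the contrast drawn above between the linear log(k)-φ trend and a constant-pore-size curve (numpy; the core-plug values are invented for illustration):

```python
import numpy as np

# Illustrative core-plug data: porosity (fraction) and permeability (millidarcies)
phi = np.array([0.08, 0.11, 0.14, 0.17, 0.20, 0.23, 0.26])
k_md = np.array([0.4, 2.1, 9.5, 35.0, 120.0, 410.0, 1500.0])

# Linear log(k)-porosity trend: log10(k) = m * phi + c
m, c = np.polyfit(phi, np.log10(k_md), 1)
print(f"log10(k) = {m:.1f} * phi + {c:.1f}")

# A pore-dimension model (k proportional to r^2 * phi^2) plots as a curve of
# constant pore radius r; on log10(k) vs. phi axes its slope flattens as phi
# grows, so the straight data trend must cut across successive r-curves.
r_curve = np.log10(phi ** 2) + 2.0  # constant-pore-size curve up to an offset
print(np.round(r_curve, 2))
```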
Paediatric case mix in a rural clinical school is relevant to future practice.
Wright, Helen M; Maley, Moira A L; Playford, Denese E; Nicol, Pam; Evans, Sharon F
2017-11-29
Exposure to a representative case mix is essential for clinical learning, with logbooks established as a way of demonstrating patient contacts. Few studies have reported the paediatric case mix available to geographically distributed students within the same medical school. Given international interest in expanding medical teaching locations to rural contexts, equitable case exposure in rural relative to urban settings is topical. The Rural Clinical School of Western Australia locates students up to 3500 km from the urban university for an academic year. There is particular need to examine paediatric case mix as a study reported Australian graduates felt unprepared for paediatric rotations. We asked: Does a rural clinical school provide a paediatric case mix relevant to future practice? How does the paediatric case mix as logged by rural students compare with that by urban students? The 3745 logs of 76 urban and 76 rural consenting medical students were categorised by presenting symptoms and compared to the Australian Institute of Health and Welfare (AIHW) database Major Diagnostic Categories (MDCs). Rural and urban students logged core paediatric cases, in similar order, despite the striking difference in geographic locations. The pattern of overall presenting problems closely corresponded to Australian paediatric hospital admissions. Rural students logged 91% of cases in secondary healthcare settings; urban students logged 90% of cases in tertiary settings. The top four presenting problems were ENT/respiratory, gastrointestinal/urogenital, neurodevelopmental and musculoskeletal; these made up 60% of all cases. Rural and urban students logged similar proportions of infants, children and adolescents, with a variety of case morbidity. Rural clinical school students logged a mix of core paediatric cases relevant to illnesses of Australian children admitted to public hospitals, with similar order and pattern by age group to urban students, despite major differences in clinical settings. Logged cases met the curriculum learning outcomes of graduates. Minor variations were readily addressed via recommendations about logging. This paper provides evidence of the legitimacy of student logs as useful tools in affirming appropriate paediatric case mix. It validates the rural clinical school context as appropriate for medical students to prepare for future clinical paediatric practice.
Decomposition and model selection for large contingency tables.
Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter
2010-04-01
Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.
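The standard computational route for the log-linear models discussed above is to fit the cell counts with a Poisson GLM; a minimal Python sketch (statsmodels; the 2×2×2 table below is a toy example, not data from the paper):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy 2x2x2 table of cell counts for three binary biomarkers
cells = pd.DataFrame(
    [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)],
    columns=["A", "B", "C"],
)
cells["count"] = [42, 18, 15, 25, 20, 14, 12, 38]

# Log-linear model with all pairwise interactions, fitted as a Poisson GLM;
# dropping a term and comparing deviances tests conditional independence.
model = smf.glm("count ~ (A + B + C)**2", data=cells,
                family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```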
Using Smart Devices to Measure Intermittent Noise in the Workplace
Roberts, Benjamin; Neitzel, Richard Lee
2017-01-01
Purpose: To determine the accuracy of smart devices (iPods) to measure intermittent noise and integrate a noise dose in the workplace. Materials and Methods: In experiment 1, four iPods were each paired with a Larson Davis Spark dosimeter and exposed to randomly fluctuating pink noise in a reverberant sound chamber. Descriptive statistics and the mean difference between the iPod and its paired dosimeter were calculated for the 1-s data logged measurements. The calculated time weighted average (TWA) was also compared between the devices. In experiment 2, 15 maintenance workers and 14 office workers wore an iPod and dosimeter during their work-shift for a maximum of five workdays. A mixed effects linear regression model was used to control for repeated measures and to determine the effect of the device type on the projected 8-h TWA. Results: In experiment 1, a total of 315,306 1-s data logged measurements were made. The interquartile range of the mean difference fell within ±2.0 A-weighted decibels (dBA), which is the standard used by the American National Standards Institute to classify a type 2 sound level meter. The mean difference of the calculated TWA was within ±0.5 dBA except for one outlier. In experiment 2, the results of the mixed effects model found that, on average, iPods measured an 8-h TWA 1.7 dBA higher than their paired dosimeters. Conclusion: This study shows that iPods have the ability to make reasonably accurate noise measurements in the workplace, but they are not as accurate as traditional noise dosimeters. PMID:29192614
Log-Linear Models for Gene Association
Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.
2009-01-01
We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032
Dimension yields from short logs of low-quality hardwood trees.
Howard N. Rosen; Harold A. Stewart; David J. Polak
1980-01-01
Charts are presented for determining yields of 4/4 dimension cuttings from short hardwood logs of aspen, soft maple, black cherry, yellow-poplar, and black walnut for several cutting grades and bolt sizes. Cost comparisons of short log and standard grade mixes show the estimated least expensive…
Gallium Arsenide and Related Compounds, 1986.
1986-01-01
…effect is shown in the log I vs. V characteristics in figure 5. Both devices exhibit good logarithmic behaviour, but it is clear that the ideality of the … effects at the surface. As also shown in Fig. 5, a 200 nm thick n-doped ion-implanted and activated layer shows a "mixed" behaviour, namely a linear…
Baysal, Ayse Handan; Molva, Celenk; Unluturk, Sevcan
2013-09-16
In the present study, the effect of short wave ultraviolet light (UV-C) on the inactivation of Alicyclobacillus acidoterrestris DSM 3922 spores in commercial pasteurized white grape and apple juices was investigated. The inactivation of A. acidoterrestris spores in juices was examined by evaluating the effects of UV light intensity (1.31, 0.71 and 0.38 mW/cm²) and exposure time (0, 3, 5, 7, 10, 12 and 15 min) at constant depth (0.15 cm). The best reduction (5.5-log) was achieved in grape juice when the UV intensity was 1.31 mW/cm². The maximum inactivation was approximately 2-log CFU/mL in apple juice under the same conditions. The results showed that first-order kinetics were not suitable for the estimation of spore inactivation in grape juice treated with UV-light. Since tailing was observed in the survival curves, the log-linear plus tail and Weibull models were compared. The results showed that the log-linear plus tail model was satisfactorily fitted to estimate the reductions. As a non-thermal technology, UV-C treatment could be an alternative to thermal treatment for grape juices or combined with other preservation methods for the pasteurization of apple juice. © 2013 Elsevier B.V. All rights reserved.
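For readers wanting to reproduce this kind of fit, here is a minimal Python sketch of the log-linear plus tail survival model named above (scipy; the exposure times and log counts are invented placeholders, and the Geeraerd-style parameterization shown is one common form, not necessarily the authors' exact implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

# Survival data: minutes of UV-C exposure vs. log10 survivors (illustrative)
t = np.array([0, 3, 5, 7, 10, 12, 15], dtype=float)
log_n = np.array([6.0, 4.1, 3.0, 2.2, 1.1, 0.8, 0.7])

def log_linear_tail(t, log_n0, log_nres, kmax):
    """Log-linear + tail model (assumed form):
    N(t) = (N0 - Nres) * exp(-kmax * t) + Nres, reported on a log10 scale."""
    n0, nres = 10.0 ** log_n0, 10.0 ** log_nres
    return np.log10((n0 - nres) * np.exp(-kmax * t) + nres)

params, _ = curve_fit(log_linear_tail, t, log_n, p0=[6.0, 0.5, 1.0])
print(dict(zip(["log10 N0", "log10 Nres", "kmax (1/min)"], params)))
```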
Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J
2016-04-01
Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints such as octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs) are easier to predict and various models have been already developed. In this paper, two different methods, which are multiple linear regression based on the descriptors generated using Dragon software and hologram quantitative structure-activity relationships, were employed to predict suspended particulate matter (SPM) derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performances of all these models were compared with EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of PCBs.
Duan, Zhi; Hansen, Terese Holst; Hansen, Tina Beck; Dalgaard, Paw; Knøchel, Susanne
2016-08-02
With low temperature long time (LTLT) cooking, it can take hours for meat to reach a final core temperature above 53°C, and germination followed by growth of Clostridium perfringens is a concern. Available and new growth data in meats, including 154 lag times (tlag), 224 maximum specific growth rates (μmax) and 25 maximum population densities (Nmax), were used to develop a model to predict growth of C. perfringens during the coming-up time of LTLT cooking. New data were generated in 26 challenge tests with chicken (pH 6.8) and pork (pH 5.6) at two different slowly increasing temperature (SIT) profiles (10°C to 53°C) followed by 53°C for up to 30 h in total. Three inoculum types were studied, including vegetative cells, non-heated spores and heat-activated (75°C, 20 min) spores of C. perfringens strain 790-94. Concentrations of vegetative cells in chicken increased 2 to 3 log CFU/g during the SIT profiles. Similar results were found for non-heated and heated spores in chicken, whereas in pork C. perfringens 790-94 increased less than 1 log CFU/g. At 53°C, C. perfringens 790-94 was log-linearly inactivated. Observed and predicted concentrations of C. perfringens at the time when 53°C was reached (log(N53)) were used to evaluate the new growth model and three available predictive models previously published for C. perfringens growth during cooling rather than during SIT profiles. Model performance was evaluated using the mean deviation (MD), the mean absolute deviation (MAD) and the acceptable simulation zone (ASZ) approach with a zone of ±0.5 log CFU/g. The new model showed the best performance, with MD = 0.27 log CFU/g, MAD = 0.66 log CFU/g and ASZ = 67%. The two growth models that performed best were used together with a log-linear inactivation model and D53-values from the present study to simulate the behaviour of C. perfringens under the fast and slow SIT profiles investigated in the present study. Observed and predicted concentrations were compared using a new fail-safe acceptable zone (FSAZ) method. FSAZ was defined as the predicted concentration of C. perfringens plus 0.5 log CFU/g. If at least 85% of the observed log-counts were below the FSAZ, the model was considered fail-safe. The two models showed similar performance, but neither performed satisfactorily for all conditions. It is recommended to use the models without a lag phase until more precise lag time models become available. Copyright © 2016 Elsevier B.V. All rights reserved.
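The evaluation metrics used above are easy to state precisely in code; a minimal Python sketch (numpy; the observed and predicted values are invented, and the FSAZ rule is implemented as defined in the abstract):

```python
import numpy as np

# Observed vs. predicted log10 counts at the time 53 C is reached (illustrative)
observed = np.array([2.1, 2.8, 1.4, 3.0, 2.2, 0.9])
predicted = np.array([2.4, 2.5, 1.9, 2.7, 2.1, 1.5])

residuals = observed - predicted
md = residuals.mean()                    # mean deviation (bias)
mad = np.abs(residuals).mean()           # mean absolute deviation
asz = np.mean(np.abs(residuals) <= 0.5)  # fraction within the +/-0.5 log zone
fail_safe = np.mean(observed <= predicted + 0.5) >= 0.85  # FSAZ criterion

print(f"MD={md:.2f} log, MAD={mad:.2f} log, ASZ={asz:.0%}, fail-safe={fail_safe}")
```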
NASA Technical Reports Server (NTRS)
Nelson, Ross; Margolis, Hank; Montesano, Paul; Sun, Guoqing; Cook, Bruce; Corp, Larry; Andersen, Hans-Erik; DeJong, Ben; Pellat, Fernando Paz; Fickel, Thaddeus;
2016-01-01
Existing national forest inventory plots, an airborne lidar scanning (ALS) system, and a space profiling lidar system (ICESat-GLAS) are used to generate circa 2005 estimates of total aboveground dry biomass (AGB) in forest strata, by state, in the continental United States (CONUS) and Mexico. The airborne lidar is used to link ground observations of AGB to space lidar measurements. Two sets of models are generated, the first relating ground estimates of AGB to airborne laser scanning (ALS) measurements and the second set relating ALS estimates of AGB (generated using the first model set) to GLAS measurements. GLAS, then, is used as a sampling tool within a hybrid estimation framework to generate stratum-, state-, and national-level AGB estimates. A two-phase variance estimator is employed to quantify GLAS sampling variability and, additively, ALS-GLAS model variability in this three-phase (ground-ALS-space lidar) study. The model variance component characterizes the variability of the regression coefficients used to predict ALS-based estimates of biomass as a function of GLAS measurements. Three different types of predictive models are considered in CONUS to determine which produced biomass totals closest to ground-based national forest inventory estimates: (1) linear (LIN), (2) linear-no-intercept (LNI), and (3) log-linear. For CONUS at the national level, the GLAS LNI model estimate (23.95 ± 0.45 Gt AGB) agreed most closely with the US national forest inventory ground estimate, 24.17 ± 0.06 Gt, i.e., within 1%. The national biomass total based on linear ground-ALS and ALS-GLAS models (25.87 ± 0.49 Gt) overestimated the national ground-based estimate by 7.5%. The comparable log-linear model result (63.29 ± 1.36 Gt) overestimated ground results by 261%. All three national biomass GLAS estimates, LIN, LNI, and log-linear, are based on 241,718 pulses collected on 230 orbits. The US national forest inventory (ground) estimates are based on 119,414 ground plots. At the US state level, the average absolute value of the deviation of LNI GLAS estimates from the comparable ground estimate of total biomass was 18.8% (range: Oregon, -40.8% to North Dakota, 128.6%). Log-linear models produced gross overestimates in the continental US, i.e., >2.6×, and the use of this model to predict regional biomass using GLAS data in temperate, western hemisphere forests is not appropriate. The best model form, LNI, is used to produce biomass estimates in Mexico. The average biomass density in Mexican forests is 53.10 ± 0.88 t/ha, and the total biomass for the country, given a total forest area of 688,096 sq km, is 3.65 ± 0.06 Gt. In Mexico, our GLAS biomass total underestimated a 2005 FAO estimate (4.152 Gt) by 12% and overestimated a 2007/8 radar study's figure (3.06 Gt) by 19%.
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Generating log-normal mock catalog of galaxies in redshift space
NASA Astrophysics Data System (ADS)
Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro
2017-10-01
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
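The core sampling recipe described above can be condensed into a short Python sketch (numpy; here an uncorrelated Gaussian field stands in for one drawn with a target power spectrum, and the grid size and mean density are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, nbar = 64, 0.05  # grid cells per side, mean galaxies per cell (assumed)

# Gaussian field g; the log-normal density contrast is
# delta = exp(g - var/2) - 1, which is >= -1 by construction and skewed.
g = rng.normal(0.0, 0.6, size=(n, n, n))
delta = np.exp(g - g.var() / 2.0) - 1.0

# Poisson-sample galaxies cell by cell with intensity nbar * (1 + delta)
galaxies = rng.poisson(nbar * (1.0 + delta))
print(galaxies.sum(), "galaxies drawn; mean per cell", galaxies.mean())
```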
Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes
Sánchez-Pérez, Inma; Ibern, Pere; Coderch, Jordi; Inoriza, José María
2016-01-01
Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden was fitted, and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals. PMID:28316542
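The model-selection contrast at the heart of the Manning and Mullahy procedure, OLS on log costs versus a GLM with a log link, can be sketched in Python as follows (statsmodels; the cost-generating process is invented and the full algorithm's diagnostic checks are omitted):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000
age = rng.uniform(20, 90, n)
X = sm.add_constant((age - 50) / 10)

# Skewed, heteroscedastic costs (illustrative data-generating process)
cost = np.exp(6.0 + 0.25 * X[:, 1] + rng.normal(0, 0.8 + 0.05 * X[:, 1] ** 2, n))

# Candidate 1: linear model on log(cost); needs a retransformation (smearing)
ols_log = sm.OLS(np.log(cost), X).fit()

# Candidate 2: gamma GLM with a log link, modelling E[cost] directly
glm_log = sm.GLM(cost, X,
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(ols_log.params, glm_log.params)
```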
Regional variability among nonlinear chlorophyll-phosphorus relationships in lakes
Filstrup, Christopher T.; Wagner, Tyler; Soranno, Patricia A.; Stanley, Emily H.; Stow, Craig A.; Webster, Katherine E.; Downing, John A.
2014-01-01
The relationship between chlorophyll a (Chl a) and total phosphorus (TP) is a fundamental relationship in lakes that reflects multiple aspects of ecosystem function and is also used in the regulation and management of inland waters. The exact form of this relationship has substantial implications on its meaning and its use. We assembled a spatially extensive data set to examine whether nonlinear models are a better fit for Chl a–TP relationships than traditional log-linear models, whether there were regional differences in the form of the relationships, and, if so, which regional factors were related to these differences. We analyzed a data set from 2105 temperate lakes across 35 ecoregions by fitting and comparing two different nonlinear models and one log-linear model. The two nonlinear models fit the data better than the log-linear model. In addition, the parameters for the best-fitting model varied among regions: the maximum and lower Chl a asymptotes were positively and negatively related to percent regional pasture land use, respectively, and the rate at which chlorophyll increased with TP was negatively related to percent regional wetland cover. Lakes in regions with more pasture fields had higher maximum chlorophyll concentrations at high TP concentrations but lower minimum chlorophyll concentrations at low TP concentrations. Lakes in regions with less wetland cover showed a steeper Chl a–TP relationship than wetland-rich regions. Interpretation of Chl a–TP relationships depends on regional differences, and theory and management based on a monolithic relationship may be inaccurate.
A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation
Rajeswaran, Jeevanantham; Blackstone, Eugene H.
2014-01-01
In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients. PMID:24919830
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
An experimental study of miscible viscous fingering of annular ring
NASA Astrophysics Data System (ADS)
Nagatsu, Yuichiro; Othman, Hamirul Bin; Mishra, Manoranjan
2017-11-01
Understanding the viscous fingering (VF) dynamics of a finite-width sample is important in fields such as liquid chromatography, groundwater contamination, and mixing in microfluidics. In this paper, we experimentally investigate the hydrodynamic morphology of VF using a Hele-Shaw flow system in which a miscible annular ring of fluid is displaced radially. Experiments are performed to investigate the effects of the sample volume, of dispersion, and of the log mobility ratio R on the dynamics of the VF pattern and the onset of the instability. Depending on whether the finite-width ring is more or less viscous than the carrier fluid, the log mobility ratio R becomes positive or negative, respectively. Experiments are successfully conducted to obtain the VF patterns for R>0 and R<0 at the inner and outer radial interfaces of the finite annular ring, respectively. It is found that in the radial displacement, the inward finger moves more slowly than the outward finger. The experimental results are found to be qualitatively in good agreement with the corresponding linear stability analysis and non-linear simulation results available in the literature.
Pedraza-Flechas, Ana María; Lope, Virginia; Moreo, Pilar; Ascunce, Nieves; Miranda-García, Josefa; Vidal, Carmen; Sánchez-Contador, Carmen; Santamariña, Carmen; Pedraz-Pingarrón, Carmen; Llobet, Rafael; Aragonés, Nuria; Salas-Trejo, Dolores; Pollán, Marina; Pérez-Gómez, Beatriz
2017-05-01
We explored the relationship between sleep patterns and sleep disorders and mammographic density (MD), a marker of breast cancer risk. Participants in the DDM-Spain/var-DDM study, which included 2878 middle-aged Spanish women, were interviewed via telephone and asked questions on sleep characteristics. Two radiologists assessed MD in their left cranio-caudal mammogram, assisted by a validated semiautomatic computer tool (DM-scan). We used log-transformed percentage MD as the dependent variable and fitted mixed linear regression models, including known confounding variables. Our results showed that neither sleeping patterns nor sleep disorders were associated with MD. However, women with frequent changes in their bedtime due to anxiety or depression had higher MD (e^β = 1.53; 95% CI: 1.04-2.26). Copyright © 2017 Elsevier B.V. All rights reserved.
Response Strength in Extreme Multiple Schedules
McLean, Anthony P; Grace, Randolph C; Nevin, John A
2012-01-01
Four pigeons were trained in a series of two-component multiple schedules. Reinforcers were scheduled with random-interval schedules. The ratio of arranged reinforcer rates in the two components was varied over 4 log units, a much wider range than previously studied. When performance appeared stable, prefeeding tests were conducted to assess resistance to change. Contrary to the generalized matching law, logarithms of response ratios in the two components were not a linear function of log reinforcer ratios, implying a failure of parameter invariance. Over a 2 log unit range, the function appeared linear and indicated undermatching, but in conditions with more extreme reinforcer ratios, approximate matching was observed. A model suggested by McLean (1991), originally for local contrast, predicts these changes in sensitivity to reinforcer ratios somewhat better than models by Herrnstein (1970) and by Williams and Wixted (1986). Prefeeding tests of resistance to change were conducted at each reinforcer ratio, and relative resistance to change was also a nonlinear function of log reinforcer ratios, again contrary to conclusions from previous work. Instead, the function suggests that resistance to change in a component may be determined partly by the rate of reinforcement and partly by the ratio of reinforcers to responses. PMID:22287804
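A short Python sketch of the generalized matching law analysis discussed above (numpy; the response and reinforcer ratios are fabricated to mimic the qualitative pattern reported, undermatching centrally with near-matching at the extremes):

```python
import numpy as np

# Log reinforcer ratios spanning 4 log units and illustrative log response ratios
log_r = np.array([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])
log_b = np.array([-1.9, -1.3, -0.8, -0.35, 0.0, 0.35, 0.8, 1.3, 1.9])

# Generalized matching law: log(B1/B2) = a * log(R1/R2) + log(b);
# a < 1 is undermatching, a = 1 is strict matching.
a, log_bias = np.polyfit(log_r, log_b, 1)
print(f"overall sensitivity a = {a:.2f}, bias log b = {log_bias:.2f}")

# A single straight line cannot capture sensitivity that changes with the
# reinforcer ratio; fitting the central vs. extreme points separately shows this.
a_inner = np.polyfit(log_r[2:7], log_b[2:7], 1)[0]
a_outer = np.polyfit(log_r[[0, 1, 7, 8]], log_b[[0, 1, 7, 8]], 1)[0]
print(f"inner a = {a_inner:.2f}, outer a = {a_outer:.2f}")
```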
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
We investigated the use of output from Bayesian stable isotope mixing models as constraints for a linear inverse food web model of a temperate intertidal seagrass system in the Marennes-Oléron Bay, France. Linear inverse modeling (LIM) is a technique that estimates a complete net...
ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.
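While the paper develops an exact piecewise solution path, the same elastic net penalized Cox objective can be explored numerically in Python with the lifelines package (a sketch, not the authors' algorithm; lifelines optimizes the penalized partial likelihood directly rather than tracing the path):

```python
import numpy as np
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # recidivism survival data bundled with lifelines

# Elastic net penalized partial likelihood: `penalizer` sets the overall
# strength and `l1_ratio` mixes LASSO (1.0) and ridge (0.0) components.
for pen in [0.01, 0.1, 0.5]:
    cph = CoxPHFitter(penalizer=pen, l1_ratio=0.5)
    cph.fit(df, duration_col="week", event_col="arrest")
    print(pen, np.round(cph.params_.values, 3))  # coefficients shrink with pen
```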
Improving linear accelerator service response with a real- time electronic event reporting system.
Hoisak, Jeremy D P; Pawlicki, Todd; Kim, Gwe-Ya; Fletcher, Richard; Moore, Kevin L
2014-09-08
To track linear accelerator performance issues, an online event recording system was developed in-house for use by therapists and physicists to log the details of technical problems arising on our institution's four linear accelerators. In use since October 2010, the system was designed so that all clinical physicists would receive email notification when an event was logged. Starting in October 2012, we initiated a pilot project in collaboration with our linear accelerator vendor to explore a new model of service and support, in which event notifications were also sent electronically directly to dedicated engineers at the vendor's technical help desk, who then initiated a response to technical issues. Previously, technical issues were reported by telephone to the vendor's call center, which then disseminated information and coordinated a response with the Technical Support help desk and local service engineers. The purpose of this work was to investigate the improvements to clinical operations resulting from this new service model. The new and old service models were quantitatively compared by reviewing event logs and the oncology information system database in the nine months prior to and after initiation of the project. Here, we focus on events that resulted in an inoperative linear accelerator ("down" machine). Machine downtime, vendor response time, treatment cancellations, and event resolution were evaluated and compared over two equivalent time periods. In 389 clinical days, there were 119 machine-down events: 59 events before and 60 after introduction of the new model. In the new model, median time to service response decreased from 45 to 8 min, service engineer dispatch time decreased 44%, downtime per event decreased from 45 to 20 min, and treatment cancellations decreased 68%. The decreased vendor response time and reduced number of on-site visits by a service engineer resulted in decreased downtime and decreased patient treatment cancellations.
Spontaneous repulsion in the A +B →0 reaction on coupled networks
NASA Astrophysics Data System (ADS)
Lazaridis, Filippos; Gross, Bnaya; Maragakis, Michael; Argyrakis, Panos; Bonamassa, Ivan; Havlin, Shlomo; Cohen, Reuven
2018-04-01
We study the transient dynamics of an A +B →0 process on a pair of randomly coupled networks, where reactants are initially separated. We find that, for sufficiently small fractions q of cross couplings, the concentration of A (or B ) particles decays linearly in a first stage and crosses over to a second linear decrease at a mixing time tx. By numerical and analytical arguments, we show that for symmetric and homogeneous structures tx∝(
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2015-10-01
In this paper, a new spectral-unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value, and the adequate linear or linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed-pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the literature methods used.
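For orientation, the linear branch of such a zone-wise unmixing scheme can be sketched with scikit-learn's NMF (the zone data, endmember count, and noise model below are synthetic stand-ins; the paper's linear-quadratic branch and its DSM-variance test are not implemented here):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
bands, pixels, endmembers = 50, 400, 3

# Synthetic zone: nonnegative spectra mixed linearly with nonnegative abundances
spectra = rng.uniform(0.1, 1.0, size=(endmembers, bands))
abund = rng.dirichlet(np.ones(endmembers), size=pixels)
zone = abund @ spectra + rng.normal(0, 0.01, size=(pixels, bands)).clip(0)

# Linear unmixing of one zone by NMF: zone ~= W @ H, with W the abundance
# estimates and H the recovered endmember spectra
nmf = NMF(n_components=endmembers, init="nndsvda", max_iter=500)
W = nmf.fit_transform(zone)
H = nmf.components_
print("reconstruction error:", nmf.reconstruction_err_)
```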
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
USDA-ARS?s Scientific Manuscript database
The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...
Three-parameter modeling of the soil sorption of acetanilide and triazine herbicide derivatives.
Freitas, Mirlaine R; Matias, Stella V B G; Macedo, Renato L G; Freitas, Matheus P; Venturin, Nelson
2014-02-01
Herbicides have widely variable toxicity and many of them are persistent soil contaminants. The acetanilide and triazine families of herbicides are in widespread use, but interest has been rising in developing new herbicides that are more effective and less hazardous to the environment. The environmental risk of new herbicides can be assessed by estimating their soil sorption (logKoc), which is usually correlated with the octanol/water partition coefficient (logKow). However, earlier findings have shown that this correlation is not valid for some acetanilide and triazine herbicides. Thus, easily accessible quantitative structure-property relationship models are required to predict the logKoc of analogues of these compounds. The octanol/water partition coefficient, molecular weight and volume were calculated and then regressed against logKoc for two series of acetanilide and triazine herbicides using multiple linear regression, resulting in predictive and validated models.
Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei
2012-06-01
To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for phytoremediation of heavy-metal-contaminated sites. In the present study, linear and exponential decay models are more suitable for presenting the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot(-1), respectively. The log normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot(-1). In addition, the exponential decay and log normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log normal models, were about 358 and 254 μg pot(-1), respectively. To meet the maximum removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg(-1), 266 mg kg(-1), and 3022 and 5000 mg kg(-1), respectively. Due to the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ. Copyright © 2012 Elsevier Inc. All rights reserved.
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
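Of the three approaches compared above, the simplest baseline, an L1-penalized Gaussian graphical model on log-transformed counts, can be sketched in Python with scikit-learn (the over-dispersed toy counts are simulated here; the hierarchical Poisson log-normal model itself requires dedicated code):

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(5)
n_samples, n_genes = 120, 30

# Over-dispersed toy counts: Poisson with a shared log-normal factor per sample
latent = rng.lognormal(0.0, 0.5, size=(n_samples, 1))
counts = rng.poisson(latent * rng.uniform(1, 20, size=n_genes))

# Baseline comparator from the paper: Gaussian graphical model with an L1
# penalty fitted to log-transformed counts
x = np.log1p(counts)
model = GraphicalLassoCV().fit(x)
edges = (np.abs(model.precision_) > 1e-8) & ~np.eye(n_genes, dtype=bool)
print("inferred edges:", edges.sum() // 2)
```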
Karnoe, Astrid; Furstrand, Dorthe; Christensen, Karl Bang; Norgaard, Ole; Kayser, Lars
2018-05-10
To achieve full potential in user-oriented eHealth projects, we need to ensure a match between the eHealth technology and the user's eHealth literacy, described as knowledge and skills. However, there is a lack of multifaceted eHealth literacy assessment tools suitable for screening purposes. The objective of our study was to develop and validate an eHealth literacy assessment toolkit (eHLA) that assesses individuals' health literacy and digital literacy using a mix of existing and newly developed scales. From 2011 to 2015, scales were continuously tested and developed in an iterative process, which led to 7 tools being included in the validation study. The eHLA validation version consisted of 4 health-related tools (tool 1: "functional health literacy," tool 2: "health literacy self-assessment," tool 3: "familiarity with health and health care," and tool 4: "knowledge of health and disease") and 3 digitally-related tools (tool 5: "technology familiarity," tool 6: "technology confidence," and tool 7: "incentives for engaging with technology") that were tested in 475 respondents from a general population sample and an outpatient clinic. Statistical analyses examined floor and ceiling effects, interitem correlations, item-total correlations, and Cronbach coefficient alpha (CCA). Rasch models (RM) examined the fit of data. Tools were reduced in items to secure robust tools fit for screening purposes. Reductions were made based on psychometrics, face validity, and content validity. Tool 1 was not reduced in items; it consequently consists of 10 items. The overall fit to the RM was acceptable (Anderson conditional likelihood ratio, CLR=10.8; df=9; P=.29), and CCA was .67. Tool 2 was reduced from 20 to 9 items. The overall fit to a log-linear RM was acceptable (Anderson CLR=78.4, df=45, P=.002), and CCA was .85. Tool 3 was reduced from 23 to 5 items. The final version showed excellent fit to a log-linear RM (Anderson CLR=47.7, df=40, P=.19), and CCA was .90. Tool 4 was reduced from 12 to 6 items. The fit to a log-linear RM was acceptable (Anderson CLR=42.1, df=18, P=.001), and CCA was .59. Tool 5 was reduced from 20 to 6 items. The fit to the RM was acceptable (Anderson CLR=30.3, df=17, P=.02), and CCA was .94. Tool 6 was reduced from 5 to 4 items. The fit to a log-linear RM taking local dependency (LD) into account was acceptable (Anderson CLR=26.1, df=21, P=.20), and CCA was .91. Tool 7 was reduced from 6 to 4 items. The fit to a log-linear RM taking LD and differential item functioning into account was acceptable (Anderson CLR=23.0, df=29, P=.78), and CCA was .90. The eHLA consists of 7 short, robust scales that assess individual's knowledge and skills related to digital literacy and health literacy. ©Astrid Karnoe, Dorthe Furstrand, Karl Bang Christensen, Ole Norgaard, Lars Kayser. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.05.2018.
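Since Cronbach coefficient alpha (CCA) is reported for every tool above, a small self-contained Python helper shows exactly what is being computed (the simulated binary items are illustrative, not eHLA data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's coefficient alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(7)
ability = rng.normal(size=(200, 1))                      # latent trait
items = (ability + rng.normal(0, 1, size=(200, 6)) > 0)  # six binary items
print(round(cronbach_alpha(items.astype(float)), 2))
```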
Characterizing Sleep Structure Using the Hypnogram
Swihart, Bruce J.; Caffo, Brian; Bandeen-Roche, Karen; Punjabi, Naresh M.
2008-01-01
Objectives: Research on the effects of sleep-disordered breathing (SDB) on sleep structure has traditionally been based on composite sleep-stage summaries. The primary objective of this investigation was to demonstrate the utility of log-linear and multistate analysis of the sleep hypnogram in evaluating differences in nocturnal sleep structure in subjects with and without SDB. Methods: A community-based sample of middle-aged and older adults with and without SDB matched on age, sex, race, and body mass index was identified from the Sleep Heart Health Study. Sleep was assessed with home polysomnography and categorized into rapid eye movement (REM) and non-REM (NREM) sleep. Log-linear and multistate survival analysis models were used to quantify the frequency and hazard rates of transitioning, respectively, between wakefulness, NREM sleep, and REM sleep. Results: Whereas composite sleep-stage summaries were similar between the two groups, subjects with SDB had higher frequencies and hazard rates for transitioning between the three states. Specifically, log-linear models showed that subjects with SDB had more wake-to-NREM sleep and NREM sleep-to-wake transitions, compared with subjects without SDB. Multistate survival models revealed that subjects with SDB transitioned more quickly from wake-to-NREM sleep and NREM sleep-to-wake than did subjects without SDB. Conclusions: The description of sleep continuity with log-linear and multistate analysis of the sleep hypnogram suggests that such methods can identify differences in sleep structure that are not evident with conventional sleep-stage summaries. Detailed characterization of nocturnal sleep evolution with event history methods provides additional means for testing hypotheses on how specific conditions impact sleep continuity and whether sleep disruption is associated with adverse health outcomes. Citation: Swihart BJ; Caffo B; Bandeen-Roche K; Punjabi NM. Characterizing sleep structure using the hypnogram. J Clin Sleep Med 2008;4(4):349–355. PMID:18763427
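The log-linear part of such an analysis starts from a contingency table of stage-to-stage transition counts; a minimal sketch of tabulating those counts from an epoch-by-epoch hypnogram (toy data, three collapsed states as in the study):

```python
import numpy as np

STATES = ["Wake", "NREM", "REM"]

def transition_counts(hypnogram):
    """Tally stage-to-stage transitions from an epoch-by-epoch hypnogram."""
    idx = {s: i for i, s in enumerate(STATES)}
    counts = np.zeros((len(STATES), len(STATES)), dtype=int)
    for a, b in zip(hypnogram, hypnogram[1:]):
        if a != b:  # only changes of state count as transition events
            counts[idx[a], idx[b]] += 1
    return counts

hyp = ["Wake", "NREM", "NREM", "REM", "Wake", "NREM", "NREM"]
print(transition_counts(hyp))  # rows: from-state, columns: to-state
```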
Demonstration of the Web-based Interspecies Correlation Estimation (Web-ICE) modeling application
The Web-based Interspecies Correlation Estimation (Web-ICE) modeling application is available to the risk assessment community through a user-friendly internet platform (http://epa.gov/ceampubl/fchain/webice/). ICE models are log-linear least square regressions that predict acute...
Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.
Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C
2014-12-01
D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night and summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
A break-even analysis for dementia care collaboration: Partners in Dementia Care.
Morgan, Robert O; Bass, David M; Judge, Katherine S; Liu, C F; Wilson, Nancy; Snow, A Lynn; Pirraglia, Paul; Garcia-Maldonado, Maurilio; Raia, Paul; Fouladi, N N; Kunik, Mark E
2015-06-01
Dementia is a costly disease. People with dementia, their families, and their friends are affected on personal, emotional, and financial levels. Prior work has shown that the "Partners in Dementia Care" (PDC) intervention addresses unmet needs and improves psychosocial outcomes and satisfaction with care. We examined whether PDC reduced direct Veterans Health Administration (VHA) health care costs compared with usual care. This study was a cost analysis of the PDC intervention in a 30-month trial involving five VHA medical centers. Study subjects were veterans (N = 434) 50 years of age and older with dementia and their caregivers at two intervention (N = 269) and three comparison sites (N = 165). PDC is a telephone-based care coordination and support service for veterans with dementia and their caregivers, delivered through partnerships between VHA medical centers and local Alzheimer's Association chapters. We tested for differences in total VHA health care costs, including hospital, emergency department, nursing home, outpatient, and pharmacy costs, as well as program costs for intervention participants. Covariates included caregiver reports of veterans' cognitive impairment, behavior problems, and personal care dependencies. We used linear mixed model regression to model change in log total cost post-baseline over a 1-year follow-up period. Intervention participants showed higher VHA costs than usual-care participants both before and after the intervention but did not differ significantly regarding change in log costs from pre- to post-baseline periods. Pre-baseline log cost (p ≤ 0.001), baseline cognitive impairment (p ≤ 0.05), number of personal care dependencies (p ≤ 0.01), and VA service priority (p ≤ 0.01) all predicted change in log total cost. These analyses show that PDC meets veterans' needs without significantly increasing VHA health care costs. PDC addresses the priority area of care coordination in the National Plan to Address Alzheimer's Disease, offering a low-cost, structured, protocol-driven, evidence-based method for effectively delivering care coordination.
Acquah, Gifty E.; Via, Brian K.; Billor, Nedret; Fasina, Oladiran O.; Eckhardt, Lori G.
2016-01-01
As new markets, technologies and economies evolve in the low carbon bioeconomy, forest logging residue, a largely untapped renewable resource, will play a vital role. The feedstock can, however, be variable depending on plant species and plant part component. This heterogeneity can influence the physical, chemical and thermochemical properties of the material, and thus the final yield and quality of products. Although it is challenging to control the compositional variability of a batch of feedstock, it is feasible to monitor this heterogeneity and make the necessary changes in process parameters. Such a system would be a first step towards optimization, quality assurance and cost-effectiveness of processes in the emerging biofuel/chemical industry. The objective of this study was therefore to qualitatively classify forest logging residue made up of different plant parts using both near infrared spectroscopy (NIRS) and Fourier transform infrared spectroscopy (FTIRS) together with linear discriminant analysis (LDA). Forest logging residue harvested from several Pinus taeda (loblolly pine) plantations in Alabama, USA, was classified into three plant part components: clean wood, wood and bark, and slash (i.e., limbs and foliage). Five-fold cross-validated linear discriminant functions had classification accuracies of over 96% for both NIRS and FTIRS based models. An extra factor/principal component (PC) was, however, needed to achieve this in FTIRS modeling. Analysis of factor loadings of both NIR and FTIR spectra showed that the statistically different amounts of cellulose in the three plant part components of logging residue contributed to their initial separation. This study demonstrated that NIR or FTIR spectroscopy coupled with PCA and LDA has the potential to be used as a high-throughput tool in classifying the plant part makeup of a batch of forest logging residue feedstock. Thus, NIR/FTIR could be employed as a tool to rapidly probe/monitor the variability of forest biomass so that appropriate online adjustments to parameters can be made in time to ensure process optimization and product quality. PMID:27618901
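A minimal sketch of the PCA-plus-LDA classification with five-fold cross-validation, using scikit-learn on simulated spectra (dimensions and class labels hypothetical):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
spectra = rng.normal(size=(90, 400))     # stand-in for NIR/FTIR absorbance spectra
labels = np.repeat([0, 1, 2], 30)        # clean wood, wood-and-bark, slash

# PCA scores feed the discriminant functions, mirroring the PCA+LDA setup.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, spectra, labels, cv=5)   # five-fold cross-validation
print(acc.mean())   # ~chance here, since the toy spectra carry no class signal
```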
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained from the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
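A log-linear failure rate with piecewise-exponential time handling can be estimated today as a Poisson regression with a log-exposure offset; a hedged sketch with statsmodels (simulated person-interval data, not the FORTRAN implementation):

```python
import numpy as np
import statsmodels.api as sm

# Simulated person-interval data: event count, exposure time, one covariate.
rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
exposure = rng.uniform(0.5, 2.0, size=n)
events = rng.poisson(np.exp(-2.0 + 0.5 * x) * exposure)

# Log-linear failure rate: log(rate) = b0 + b1*x, with log(exposure) as offset.
X = sm.add_constant(x)
fit = sm.GLM(events, X, family=sm.families.Poisson(),
             offset=np.log(exposure)).fit()   # IRLS, a Newton-type iteration
print(fit.params)   # should recover roughly (-2.0, 0.5)
```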
Including operational data in QMRA model: development and impact of model inputs.
Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle
2009-03-01
A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
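The overall structure of such a simulation is compact; a heavily simplified sketch (every distribution and parameter value below is assumed for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(3)
N, DL = 100_000, 0.1                     # iterations; detection limit (oocysts/L)

# Mixed source-water distribution: uniform below DL, log-normal above it.
below = rng.uniform(0.0, DL, N)
above = rng.lognormal(mean=0.0, sigma=1.0, size=N)
conc = np.where(rng.random(N) < 0.4, below, above)   # assumed 40% of samples < DL

log_removal = rng.normal(3.0, 0.5, N)    # assumed treatment performance
dose = conc * 10.0 ** (-log_removal)     # per 1 L/day consumption
r = 0.004                                # exponential dose-response parameter
daily_risk = 1.0 - np.exp(-r * dose)
annual_risk = 1.0 - (1.0 - daily_risk) ** 365
print(annual_risk.mean())
```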
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum likelihood, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we can easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
Generating log-normal mock catalog of galaxies in redshift space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, Aniket; Makiya, Ryu; Saito, Shun
We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
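The core sampling step is easy to sketch; a toy 2D version in numpy (grid size and mean density are placeholders, and the velocity field from the linearised continuity equation is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64                                   # grid cells per side (toy example)

# Gaussian field -> log-normal density contrast with zero mean.
g = rng.normal(0.0, 0.5, size=(n, n))
delta = np.exp(g - g.var() / 2.0) - 1.0

# Poisson-sample galaxy counts cell by cell from the log-normal field.
nbar = 5.0                               # assumed mean galaxies per cell
counts = rng.poisson(nbar * (1.0 + delta))
print(counts.mean())                     # close to nbar by construction
```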
Ratanapob, Niorn; VanLeeuwen, John; McKenna, Shawn; Wichtel, Maureen; Rodriguez-Lecompte, Juan C; Menzies, Paula; Wichtel, Jeffrey
2018-06-01
Late-gestation ewes are susceptible to ketonemia resulting from the high energy requirement for fetal growth during the last few weeks of pregnancy. High lamb mortality is a possible consequence of the effects of ketonemia on both ewes and lambs. Determining risk factors for ketonemia is a fundamental step in identifying ewes at risk, in order to avoid losses caused by ketonemia. Serum β-hydroxybutyrate (BHBA) concentrations of 384 late-gestation ewe samples were determined. Physical examinations, including body condition, FAMACHA© and hygiene scoring, were performed. Udders and teeth were also examined. Fecal flotation was performed to detect gastrointestinal helminth eggs in the ewe fecal samples. General feeding management practices and season at sampling were recorded. Litter sizes were retrieved from lambing records. Factors associated with log serum BHBA concentration were determined using a linear mixed model, with flock and lambing groups as random effects. The mean serum BHBA concentration was 545.8 (±453.3) μmol/l. Ewes with a body condition score (BCS) of 2.5-3.5 had significantly lower log BHBA concentrations than ewes with a BCS of ≤2.0, by 19.7% (p = 0.035). Ewes with a BCS of >3.5 had a trend toward higher log BHBA concentrations compared to ewes with a BCS of 2.5-3.5. Ewes with a FAMACHA© score of 3 had significantly higher log BHBA concentrations than ewes with a FAMACHA© score of 1 or 2, by 12.1% (p = 0.049). Ewes in which gastrointestinal helminth eggs were detected had significantly higher log BHBA concentrations than ewes in which helminth eggs were not detected, by 12.3% (p = 0.040). An increased litter size was associated with higher log BHBA concentration (p ≤ 0.003), with the log BHBA concentrations of ewes carrying twins, triplets, and quadruplets or quintuplets being higher than those of ewes carrying singletons by 19.2%, 30.4%, and 85.2%, respectively. Season at sampling confounded the association between log BHBA concentration and FAMACHA© score, and therefore was retained in the final model even though it was not statistically significant. Intra-class correlation coefficients at the flock and lambing group levels were 0.14 and 0.32, respectively. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Linear Equations with the Euler Totient Function
2007-02-13
FLORIAN LUCA, PANTELIMON STĂNICĂ. …the set of positive integers n such that φ(n) = φ(n+1), and that the set of Phibonacci numbers is A(1,1,−1) + 2. Theorem 2.1. Let C(t, a) = t³ log H(a). Then the estimate #A_a(x) ≪ C(t, a) · x (log log log x)/√(log log x) holds uniformly in a and 1 ≤ t < y.
Bayesian Model Comparison for the Order Restricted RC Association Model
ERIC Educational Resources Information Center
Iliopoulos, G.; Kateri, M.; Ntzoufras, I.
2009-01-01
Association models constitute an attractive alternative to the usual log-linear models for modeling the dependence between classification variables. They impose special structure on the underlying association by assigning scores on the levels of each classification variable, which can be fixed or parametric. Under the general row-column (RC)…
Global determinants of mortality in under 5s: 10 year worldwide longitudinal study.
Hanf, Matthieu; Nacher, Mathieu; Guihenneuc, Chantal; Tubert-Bitter, Pascale; Chavance, Michel
2013-11-08
To assess at country level the association of mortality in under 5s with a large set of determinants. Longitudinal study. 193 United Nations member countries, 2000-09. Yearly data between 2000 and 2009 based on 12 world development indicators were used in a multivariable general additive mixed model allowing for non-linear relations and lag effects. The outcome was the national rate of deaths in under 5s per 1000 live births. The model retained the variables: gross domestic product per capita; percentage of the population having access to improved water sources, having access to improved sanitation facilities, and living in urban areas; adolescent fertility rate; public health expenditure per capita; prevalence of HIV; perceived level of corruption and of violence; and mean number of years in school for women of reproductive age. Most of these variables exhibited non-linear behaviours and lag effects. By providing a unified framework for mortality in under 5s, encompassing both high and low income countries, this study showed non-linear behaviours and lag effects of known or suspected determinants of mortality in this age group. Although some of the determinants presented a linear action on log mortality, indicating that whatever the context, acting on them would be a pertinent strategy to effectively reduce mortality, others had a threshold-based relation potentially mediated by lag effects. These findings could help design efficient strategies to achieve maximum progress towards millennium development goal 4, which aims to reduce mortality in under 5s by two thirds between 1990 and 2015.
The Umov effect in application to an optically thin two-component cloud of cosmic dust
NASA Astrophysics Data System (ADS)
Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy
2018-07-01
The Umov effect is an inverse correlation between linear polarization of the sunlight scattered by an object and its geometric albedo. The Umov effect has been observed in particulate surfaces, such as planetary regoliths, and recently it also was found in single-scattering small dust particles. Using numerical modelling, we study the Umov effect in a two-component mixture of small irregularly shaped particles. Such a complex chemical composition is suggested in cometary comae and other types of optically thin clouds of cosmic dust. We find that the two-component mixtures of small particles also reveal the Umov effect regardless of the chemical composition of their end-member components. The interrelation between log(Pmax) and log(A) in a two-component mixture of small irregularly shaped particles appears either in a straight linear form or in a slightly curved form. This curvature tends to decrease while the index n in a power-law size distribution r-n grows; at n > 2.5, the log(Pmax)-log(A) diagrams are almost straight linear in appearance. The curvature also noticeably decreases with the packing density of constituent material in irregularly shaped particles forming the mixture. That such a relation exists suggests the Umov effect may also be observed in more complex mixtures.
Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano
2010-01-01
The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of log BB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (log P), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental log BB data had been determined in vivo. In particular, since molecules with log BB > 0.3 cross the blood-brain barrier (BBB) readily while molecules with log BB < −1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the log BB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. PMID:20427217
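A minimal sketch of such a descriptor-based LDA classifier with scikit-learn (descriptor values and labels below are fabricated for illustration; the thresholds are the ones quoted in the abstract):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical descriptors per compound: log P, TPSA, acidic/basic atom count.
X = np.array([[2.1, 45.0, 1], [0.3, 110.0, 3], [3.0, 30.0, 0], [-0.5, 95.0, 2],
              [1.8, 55.0, 1], [0.1, 120.0, 2], [2.6, 40.0, 0], [-1.0, 130.0, 3]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1: log BB > 0.3; 0: log BB < -1

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[1.5, 60.0, 1]]))     # predicted BBB permeation class
```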
In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data
ERIC Educational Resources Information Center
Yi, Yeon-Sook
2017-01-01
This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…
Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A
2017-02-01
This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
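The linear-versus-ANN comparison can be mimicked with scikit-learn; a hedged sketch on simulated accelerometer features (feature construction and sample sizes are placeholders):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
feats = rng.normal(size=(200, 8))             # stand-in accelerometer features
mets = 2.5 + (feats @ rng.normal(size=8)) * 0.3 + rng.normal(0.0, 1.0, 200)

for name, model in [("linear", LinearRegression()),
                    ("ann", MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000))]:
    rmse = -cross_val_score(model, feats, mets, cv=5,
                            scoring="neg_root_mean_squared_error")
    print(name, rmse.mean())
```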
Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz
2015-04-01
Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
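A piecewise linear mixed-effects model can be written as a linear mixed model with a hinge term; a sketch with statsmodels (simulated scores, knot placed arbitrarily at wave 3):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
ids = np.repeat(np.arange(100), 6)
time = np.tile(np.arange(6, dtype=float), 100)        # assessment waves
u = rng.normal(0.0, 1.0, 100)[ids]                    # child-level random intercept
math = 20 + 8 * time - 3 * np.maximum(time - 3, 0) + u + rng.normal(0, 2, len(ids))
df = pd.DataFrame({"id": ids, "time": time,
                   "post": np.maximum(time - 3, 0), "math": math})

# Slope changes at the knot: the coefficient on "post" is the change in slope.
fit = smf.mixedlm("math ~ time + post", df, groups="id").fit()
print(fit.params)
```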
Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.
ERIC Educational Resources Information Center
Gleason, John M.
1993-01-01
This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)
Joseph L. Ganey; Scott C. Vojta
2017-01-01
Logs provide an important form of coarse woody debris in forest systems, contributing to numerous ecological processes and affecting wildlife habitat and fuel complexes. Despite this, little information is available on the dynamics of log populations in southwestern ponderosa pine (Pinus ponderosa) and especially mixed-conifer forests. A recent episode of elevated tree...
Fabian C.C. Uzoh; William W. Oliver
2008-01-01
A diameter increment model is developed and evaluated for individual trees of ponderosa pine throughout the species range in the United States using a multilevel linear mixed model. Stochastic variability is broken down among period, locale, plot, tree and within-tree components. Covariates acting at tree and stand level, such as breast height diameter, density, site index...
Factors Influencing M.S.W. Students' Interest in Clinical Practice
ERIC Educational Resources Information Center
Perry, Robin
2009-01-01
This study utilizes linear and log-linear stochastic models to examine the impact that a variety of variables (including graduate education) have on M.S.W. students' desires to work in clinical practice. Data were collected biannually (between 1992 and 1998) from a complete population sample of all students entering and exiting accredited graduate…
Extended Mixed-Effects Item Response Models with the MH-RM Algorithm
ERIC Educational Resources Information Center
Chalmers, R. Philip
2015-01-01
A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…
ERIC Educational Resources Information Center
Madison, Matthew J.; Bradshaw, Laine P.
2015-01-01
Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other…
NASA Astrophysics Data System (ADS)
Mitra, Anindita; Li, Y.-F.; Shimizu, T.; Klämpfl, Tobias; Zimmermann, J. L.; Morfill, G. E.
2012-10-01
Cold Atmospheric Plasma (CAP) is a fast, low-cost, simple, easy-to-handle technology for biological applications. Our group has developed a number of different CAP devices using microwave technology and the surface micro discharge (SMD) technology. In this study, FlatPlaSter2.0 at different time intervals (0.5 to 5 min) is used for microbial inactivation. There is a continuous demand for deactivation of microorganisms associated with raw foods/seeds without losing their properties. This research focuses on the kinetics of CAP-induced microbial inactivation of naturally growing surface microorganisms on seeds. The data were assessed for log-linear and non-log-linear models for survivor curves as a function of time. The Weibull model showed the best fitting performance for the data. No shoulder or tail was observed. The models are expressed in terms of the number of log-cycle reductions rather than classical D-values, together with statistical measures of fit. The viability of seeds was not affected for CAP treatment times up to 3 min with our device. The optimum result was observed at 1 min, with the percentage of germination increasing from 60.83% to 89.16% compared to the control. This result suggests the advantage and promising role of CAP in the food industry.
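The Weibull survivor model mentioned above is a two-parameter curve that is straightforward to fit; a sketch with scipy (survival data below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log10_survival(t, delta, p):
    """Mafart-style Weibull model: log10(N/N0) = -(t/delta)**p."""
    return -((t / delta) ** p)

t = np.array([0.5, 1.0, 2.0, 3.0, 5.0])                 # treatment times (min)
log_surv = np.array([-0.4, -1.1, -2.0, -2.6, -3.5])     # hypothetical log10(N/N0)

(delta, p), _ = curve_fit(weibull_log10_survival, t, log_surv, p0=(1.0, 1.0))
print(delta, p)   # p far from 1 signals departure from log-linear kinetics
```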
NASA Astrophysics Data System (ADS)
Shobana, Sutha; Dharmaraja, Jeyaprakash; Selvaraj, Shanmugaperumal
2013-04-01
Equilibrium studies of Ni(II), Cu(II) and Zn(II) mixed ligand complexes involving a primary ligand 5-fluorouracil (5-FU; A) and imidazoles viz., imidazole (him), benzimidazole (bim), histamine (hist) and L-histidine (his) as co-ligands (B) were carried out pH-metrically in aqueous medium at 310 ± 0.1 K with I = 0.15 M (NaClO4). In solution state, the stoichiometry of MABH, MAB and MAB2 species has been detected. The primary ligand (A) binds the central M(II) ions in a monodentate manner, whereas the him, bim, hist and his co-ligands (B) bind in mono-, mono-, bi- and tridentate modes, respectively. The calculated Δ log K, log X and log X' values indicate higher stability of the mixed ligand complexes in comparison to the binary species. Stability of the mixed ligand complex equilibria follows the Irving-Williams order of stability. In vitro biological evaluations of the free ligand (A) and its metal complexes by the well diffusion technique show moderate activities against common bacterial and fungal strains. Oxidative cleavage interaction of the ligand (A) and its copper complexes with CT DNA is also studied by the gel electrophoresis method in the presence of an oxidant. In vitro antioxidant evaluations of the primary ligand (A), CuA and CuAB complexes by the DPPH free radical scavenging model were carried out. In the solid state, the MAB type of M(II)–5-FU(A)–his(B) complexes were isolated and characterized by various physico-chemical and spectral techniques. Both the magnetic susceptibility and electronic spectral analyses suggest a distorted octahedral geometry. Thermal studies on the synthesized mixed ligand complexes show loss of the coordinated water molecule in the first step, followed by decomposition of the organic residues. XRD and SEM analyses suggest the microcrystalline nature and homogeneous morphology of the MAB complexes. Further, 3D molecular modeling and analysis for the mixed ligand MAB complexes have also been carried out.
Box-Cox Mixed Logit Model for Travel Behavior Analysis
NASA Astrophysics Data System (ADS)
Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.
2010-09-01
To represent the behavior of travelers when they are deciding how they are going to get to their destination, discrete choice models, based on random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model, original to the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficient distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
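The simulated log-likelihood at the heart of such a model is compact; a heavily reduced binary-choice sketch (the authors' model is more general, and all parameter names here are illustrative):

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform for positive attributes such as cost or time."""
    return (x ** lam - 1.0) / lam if lam != 0 else np.log(x)

def sim_loglik(choices, cost, beta_mu, beta_sd, lam, n_draws=500, seed=0):
    """Simulated log-likelihood of a binary mixed logit with a Box-Cox cost term."""
    rng = np.random.default_rng(seed)
    betas = rng.normal(beta_mu, beta_sd, n_draws)       # random cost coefficient
    z = boxcox(cost, lam)                               # cost: n_obs x 2 alternatives
    v = betas[:, None] * (z[:, 1] - z[:, 0])[None, :]   # utility difference per draw
    p1 = 1.0 / (1.0 + np.exp(-v))
    p = np.where(choices[None, :] == 1, p1, 1.0 - p1).mean(axis=0)
    return np.log(p).sum()

rng = np.random.default_rng(1)
cost = rng.uniform(1.0, 10.0, size=(100, 2))            # toy costs, both alternatives
choices = rng.integers(0, 2, 100)
print(sim_loglik(choices, cost, beta_mu=-0.5, beta_sd=0.2, lam=0.5))
```

Maximizing this quantity over (beta_mu, beta_sd, lam) would mimic the estimation step described in the abstract.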
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with asymmetric distributions for the model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for the competing risks process, and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
Viscosities of Fe-Ni, Fe-Co and Ni-Co binary melts
NASA Astrophysics Data System (ADS)
Sato, Yuzuru; Sugisawa, Koji; Aoki, Daisuke; Yamamura, Tsutomu
2005-02-01
Viscosities of three binary molten alloys consisting of the iron group elements, Fe, Ni and Co, have been measured by using an oscillating cup viscometer over the entire composition range from liquidus temperatures up to 1600 °C with high precision and excellent reproducibility. The viscosities measured showed good Arrhenius linearity for all the compositions. The viscosities of Fe, Ni and Co as a function of temperature are as follows: log η = −0.6074 + 2493/T for Fe; log η = −0.5695 + 2157/T for Ni; log η = −0.6620 + 2430/T for Co. The isothermal viscosities of Fe-Ni and Fe-Co binary melts increase monotonically with increasing Fe content. On the other hand, in the Ni-Co binary melt, the isothermal viscosity decreases slightly and then increases with increasing Co. The activation energy of the Fe-Co binary melt increased slightly on mixing, and those of the Fe-Ni and Ni-Co melts decreased monotonically with increasing Ni content. The above behaviour is discussed based on the thermodynamic properties of the alloys.
Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R
2007-01-01
Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.
ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
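The paper's exact ODE-based path algorithm is not, to our knowledge, available in common Python libraries, but the elastic-net-penalized Cox partial likelihood can be fitted at a single penalty value with lifelines (so tracing a path would mean looping over penalizer values):

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()   # built-in recidivism dataset shipped with lifelines

# Elastic net on the log partial likelihood: a mix of L1 and L2 penalties.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.params_)  # some coefficients shrink toward (or to) zero
```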
van Rijn, Peter W; Ali, Usama S
2017-05-01
We compare three modelling frameworks for accuracy and speed of item responses in the context of adaptive testing. The first framework is based on modelling scores that result from a scoring rule that incorporates both accuracy and speed. The second framework is the hierarchical modelling approach developed by van der Linden (2007, Psychometrika, 72, 287) in which a regular item response model is specified for accuracy and a log-normal model for speed. The third framework is the diffusion framework in which the response is assumed to be the result of a Wiener process. Although the three frameworks differ in the relation between accuracy and speed, one commonality is that the marginal model for accuracy can be simplified to the two-parameter logistic model. We discuss both conditional and marginal estimation of model parameters. Models from all three frameworks were fitted to data from a mathematics and spelling test. Furthermore, we applied a linear and adaptive testing mode to the data off-line in order to determine differences between modelling frameworks. It was found that a model from the scoring rule framework outperformed a hierarchical model in terms of model-based reliability, but the results were mixed with respect to correlations with external measures. © 2017 The British Psychological Society.
Modeling the Geographic Consequence and Pattern of Dengue Fever Transmission in Thailand.
Bekoe, Collins; Pansombut, Tatdow; Riyapan, Pakwan; Kakchapati, Sampurna; Phon-On, Aniruth
2017-05-04
Dengue fever is one of the infectious diseases that remains a public health problem in Thailand. This study considers in detail the geographic, seasonal and temporal patterns of dengue fever transmission among the 76 provinces of Thailand from 2003 to 2015. A cross-sectional study. The data for the study were from the Department of Disease Control under the Bureau of Epidemiology, Thailand. The effects of quarter and location on the transmission of dengue were modeled using an alternative additive log-linear model. The model fitted well, as illustrated by the residual plots. The model also showed that dengue fever is high in the second quarter of every year, from May to August. There was evidence of an increasing annual trend of dengue from 2003 to 2015. There was a difference in the distribution of dengue fever within and between provinces. The areas of high risk were the central and southern regions of Thailand. The log-linear model provided a simple means of modeling dengue fever transmission. The results are very important for understanding the geographic distribution of dengue fever patterns.
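An additive Poisson log-linear model with quarter and province effects is one plausible reading of the approach; a sketch with statsmodels on invented counts:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented quarterly dengue case counts for two provinces.
df = pd.DataFrame({
    "cases":    [120, 340, 210, 90, 80, 260, 150, 60],
    "province": ["A"] * 4 + ["B"] * 4,
    "quarter":  ["Q1", "Q2", "Q3", "Q4"] * 2,
})

# Additive log-linear model: log E[cases] = province effect + quarter effect.
fit = smf.glm("cases ~ C(province) + C(quarter)", data=df,
              family=sm.families.Poisson()).fit()
print(fit.params)   # the Q2 effect should be the largest seasonal term
```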
voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.
Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K
2014-02-03
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
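The gist of the mean-variance step can be imitated in a few lines; a rough sketch, emphatically not the limma/voom implementation (the lowess span and weight formula are simplified, and one weight is returned per gene where voom computes per-observation weights):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def voom_style_weights(counts, lib_size):
    """Crude voom-like precision weights; counts is genes x samples."""
    logcpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)
    mean_log = logcpm.mean(axis=1)
    sqrt_sd = np.sqrt(logcpm.std(axis=1, ddof=1))
    # Smooth the sqrt-sd vs mean trend, then convert to precision weights.
    trend = lowess(sqrt_sd, mean_log, return_sorted=False)
    return 1.0 / np.maximum(trend, 1e-8) ** 4

rng = np.random.default_rng(7)
counts = rng.poisson(20, size=(1000, 6))
w = voom_style_weights(counts, lib_size=counts.sum(axis=0))
```

In the real pipeline, these weights enter a weighted linear-model fit followed by empirical Bayes moderation.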
Nasari, Masoud M; Szyszkowicz, Mieczysław; Chen, Hong; Crouse, Daniel; Turner, Michelle C; Jerrett, Michael; Pope, C Arden; Hubbell, Bryan; Fann, Neal; Cohen, Aaron; Gapstur, Susan M; Diver, W Ryan; Stieb, David; Forouzanfar, Mohammad H; Kim, Sun-Young; Olives, Casey; Krewski, Daniel; Burnett, Richard T
2016-01-01
The effectiveness of regulatory actions designed to improve air quality is often assessed by predicting changes in public health resulting from their implementation. Risk of premature mortality from long-term exposure to ambient air pollution is the single most important contributor to such assessments and is estimated from observational studies generally assuming a log-linear, no-threshold association between ambient concentrations and death. There has been only limited assessment of this assumption in part because of a lack of methods to estimate the shape of the exposure-response function in very large study populations. In this paper, we propose a new class of variable coefficient risk functions capable of capturing a variety of potentially non-linear associations which are suitable for health impact assessment. We construct the class by defining transformations of concentration as the product of either a linear or log-linear function of concentration multiplied by a logistic weighting function. These risk functions can be estimated using hazard regression survival models with currently available computer software and can accommodate large population-based cohorts which are increasingly being used for this purpose. We illustrate our modeling approach with two large cohort studies of long-term concentrations of ambient air pollution and mortality: the American Cancer Society Cancer Prevention Study II (CPS II) cohort and the Canadian Census Health and Environment Cohort (CanCHEC). We then estimate the number of deaths attributable to changes in fine particulate matter concentrations over the 2000 to 2010 time period in both Canada and the USA using both linear and non-linear hazard function models.
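The product form of these risk transformations is easy to state in code; a sketch of one member of the class (the parameterization is assumed for illustration and may differ from the paper's):

```python
import numpy as np

def shape_transform(z, mu, tau, log_form=True):
    """(Log-)linear function of concentration times a logistic weight."""
    base = np.log(z + 1.0) if log_form else z
    weight = 1.0 / (1.0 + np.exp(-(z - mu) / tau))
    return base * weight

pm25 = np.linspace(0.0, 30.0, 7)          # hypothetical concentrations (ug/m3)
print(shape_transform(pm25, mu=10.0, tau=3.0))
```

In a hazard regression, this transformed exposure replaces the raw concentration, and (mu, tau) index the candidate shapes of the exposure-response function.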
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipfert, F.W.
1992-11-01
1980 data from up to 149 metropolitan areas were used to define cross-sectional associations between community air pollution and excess human mortality. The regression model proposed by Oezkaynak and Thurston, which accounted for age, race, education, poverty, and population density, was evaluated and several new models were developed. The new models also accounted for population change, drinking water hardness, and smoking, and included a more detailed description of race. Cause-of-death categories analyzed include all causes, all non-external causes, major cardiovascular diseases, and chronic obstructive pulmonary diseases (COPD). Both annual mortality rates and their logarithms were analyzed. The data on particulates were averaged across all monitoring stations available for each SMSA and the TSP data were restricted to the year 1980. The associations between mortality and air pollution were found to be dependent on the socioeconomic factors included in the models, the specific locations included in the data set, and the type of statistical model used. Statistically significant associations were found between TSP and mortality due to non-external causes with log-linear models, but not with a linear model, and between TSP and COPD mortality for both linear and log-linear models. When the sulfate contribution to TSP was subtracted, the relationship with COPD mortality was strengthened. Scatter plots and quintile analyses suggested a TSP threshold for COPD mortality at around 65 µg/m³ (annual average). SO₄²⁻, Mn, PM15, and PM2.5 were not significantly associated with mortality using the new models.
The allometry of coarse root biomass: log-transformed linear regression or nonlinear regression?
Lai, Jiangshan; Yang, Bo; Lin, Dunmei; Kerkhoff, Andrew J; Ma, Keping
2013-01-01
Precise estimation of root biomass is important for understanding carbon stocks and dynamics in forests. Traditionally, biomass estimates are based on allometric scaling relationships between stem diameter and coarse root biomass calculated using linear regression (LR) on log-transformed data. Recently, it has been suggested that nonlinear regression (NLR) is a preferable fitting method for scaling relationships. But while this claim has been contested on both theoretical and empirical grounds, and statistical methods have been developed to aid in choosing between the two methods in particular cases, few studies have examined the ramifications of erroneously applying NLR. Here, we use direct measurements of 159 trees belonging to three locally dominant species in east China to compare the LR and NLR models of diameter-root biomass allometry. We then contrast model predictions by estimating stand coarse root biomass based on census data from the nearby 24-ha Gutianshan forest plot and by testing the ability of the models to predict known root biomass values measured on multiple tropical species at the Pasoh Forest Reserve in Malaysia. Based on likelihood estimates for model error distributions, as well as the accuracy of extrapolative predictions, we find that LR on log-transformed data is superior to NLR for fitting diameter-root biomass scaling models. More importantly, inappropriately using NLR leads to grossly inaccurate stand biomass estimates, especially for stands dominated by smaller trees.
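The two fitting strategies being contrasted can be reproduced in a few lines; a sketch on simulated diameter-biomass data with multiplicative (log-normal) error, which favors LR by construction:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)
d = rng.uniform(5.0, 50.0, 100)                           # stem diameter (cm)
biomass = 0.02 * d ** 2.4 * rng.lognormal(0.0, 0.3, 100)  # multiplicative error

# LR: linear regression on log-transformed data (log-normal error model).
slope, intercept = np.polyfit(np.log(d), np.log(biomass), 1)

# NLR: direct power-law fit (additive normal error model).
(a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x ** b, d, biomass,
                              p0=(0.02, 2.4))
print(np.exp(intercept), slope, a_nlr, b_nlr)
```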
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
Analytical methods in multivariate highway safety exposure data estimation
DOT National Transportation Integrated Search
1984-01-01
Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting, and expectation maximization...
Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun
2013-09-01
By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. To account for tree effects, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structures. Correlation structures, including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)], were then added to the optimal branch size mixed-effect model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe the heteroscedasticity when building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function significantly improved the fitting of the branch angle mixed model, whereas the CF2 function significantly improved the fitting of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect model could improve the precision of prediction, compared with the traditional regression model, for branch size prediction in Pinus koraiensis plantations.
NASA Astrophysics Data System (ADS)
Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal probability density functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, predicts the PDFs of the scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors.
Modelling the Progression of Competitive Performance of an Academy's Soccer Teams.
Malcata, Rita M; Hopkins, Will G; Richardson, Scott
2012-01-01
Progression of a team's performance is a key issue in competitive sport, but there appears to have been no published research on team progression for periods longer than a season. In this study we report the game-score progression of three teams of a youth talent-development academy over five seasons using a novel analytic approach based on generalised mixed modelling. The teams consisted of players born in 1991, 1992 and 1993; they played totals of 115, 107 and 122 games in Asia and Europe between 2005 and 2010 against teams differing in age by up to 3 years. Game scores predicted by the mixed model were assumed to have an over-dispersed Poisson distribution. The fixed effects in the model estimated an annual linear progression for Aspire and for the other teams (grouped as a single opponent) with adjustment for home-ground advantage and for a linear effect of age difference between competing teams. A random effect allowed for different mean scores for Aspire and opposition teams. All effects were estimated as factors via log-transformation and presented as percent differences in scores. Inferences were based on the span of 90% confidence intervals in relation to thresholds for small factor effects of x/÷1.10 (+10%/-9%). Most effects were clear only when data for the three teams were combined. Older teams showed a small 27% increase in goals scored per year of age difference (90% confidence interval 13 to 42%). Aspire experienced a small home-ground advantage of 16% (-5 to 41%), whereas opposition teams experienced 31% (7 to 60%) on their own ground. After adjustment for these effects, the Aspire teams scored on average 1.5 goals per match, with little change in the five years of their existence, whereas their opponents' scores fell from 1.4 in their first year to 1.0 in their last. The difference in progression was trivial over one year (7%, -4 to 20%), small over two years (15%, -8 to 44%), but unclear over >2 years. In conclusion, the generalized mixed model has marginal utility for estimating progression of soccer scores, owing to the uncertainty arising from low game scores. The estimates are likely to be more precise and useful in sports with higher game scores. Key points: A generalized linear mixed model is the approach for tracking game scores, key performance indicators or other measures of performance based on counts in sports where changes within and/or between games/seasons have to be considered. Game scores in soccer could be useful to track performance progression of teams, but hundreds of games are needed. Fewer games will be needed for tracking performance represented by counts with high scores, such as game scores in rugby, or key performance indicators based on frequent events or player actions in any team sport.
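An over-dispersed Poisson mixed model of the kind used above can be approximated in R; the sketch below substitutes a negative binomial response for the over-dispersed Poisson (a common alternative) and uses a hypothetical long-format data frame games with one row per team per game.

    library(glmmTMB)
    # Log-linear model for goals scored: year trend, age difference, and
    # home-ground advantage as fixed effects, plus a random intercept per team
    fit <- glmmTMB(goals ~ year + age_diff + home + (1 | team),
                   family = nbinom2, data = games)
    # Effects are on the log link, so exponentiating gives factor effects,
    # i.e. percent differences in scores as reported above
    exp(fixef(fit)$cond)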
An empirical model for estimating annual consumption by freshwater fish populations
Liao, H.; Pierce, C.L.; Larscheid, J.G.
2005-01-01
Population consumption is an important process linking predator populations to their prey resources. Simple tools are needed to enable fisheries managers to estimate population consumption. We assembled 74 individual estimates of annual consumption by freshwater fish populations and their mean annual population size, 41 of which also included estimates of mean annual biomass. The data set included 14 freshwater fish species from 10 different bodies of water. From this data set we developed two simple linear regression models predicting annual population consumption. Log-transformed population size explained 94% of the variation in log-transformed annual population consumption. Log-transformed biomass explained 98% of the variation in log-transformed annual population consumption. We quantified the accuracy of our regressions and three alternative consumption models as the mean percent difference from observed (bioenergetics-derived) estimates in a test data set. Predictions from our population-size regression matched observed consumption estimates poorly (mean percent difference = 222%). Predictions from our biomass regression matched observed consumption reasonably well (mean percent difference = 24%). The biomass regression was superior to an alternative model of similar complexity, and comparable to two alternative models that were more complex and difficult to apply. Our biomass regression model, log10(consumption) = 0.5442 + 0.9962 × log10(biomass), will be a useful tool for fishery managers, enabling them to make reasonably accurate annual population consumption predictions from mean annual biomass estimates. © Copyright by the American Fisheries Society 2005.
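The published biomass regression can be applied directly; the small R helper below back-transforms from the log10 scale (it ignores any retransformation bias correction, which the abstract does not report, and units follow those of the original data set).

    # Annual population consumption from mean annual biomass, using
    # log10(consumption) = 0.5442 + 0.9962 * log10(biomass)
    predict_consumption <- function(biomass) {
      10^(0.5442 + 0.9962 * log10(biomass))
    }
    predict_consumption(c(100, 1000, 10000))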
Anumol, Tarun; Sgroi, Massimiliano; Park, Minkyu; Roccaro, Paolo; Snyder, Shane A
2015-06-01
This study investigated the applicability of bulk organic parameters like dissolved organic carbon (DOC), UV absorbance at 254 nm (UV254), and total fluorescence (TF) to act as surrogates in predicting trace organic compound (TOrC) removal by granular activated carbon (GAC) in water reuse applications. Using rapid small-scale column testing, empirical linear correlations for thirteen TOrCs were determined with DOC, UV254, and TF in four wastewater effluents. Linear correlations (R^2 > 0.7) were obtained for eight TOrCs in each water quality in the UV254 model, while ten TOrCs had R^2 > 0.7 in the TF model. Conversely, DOC was shown to be a poor surrogate for TOrC breakthrough prediction. When the data from all four water qualities were combined, good linear correlations were still obtained, with TF having a higher R^2 than UV254, especially for TOrCs with log Dow > 1. Excellent linear relationships (R^2 > 0.9) between log Dow and the removal of TOrCs at 0% surrogate removal (y-intercept) were obtained for the five neutral TOrCs tested in this study. Positively charged TOrCs had enhanced removals due to electrostatic interactions with negatively charged GAC, which caused them to deviate from the removals that would be expected from their log Dow. Application of the empirical linear correlation models to full-scale samples provided good results for six of the seven TOrCs tested (the exception being meprobamate) when comparing predicted TOrC removal by UV254 and TF with actual removals for GAC in all five samples. Surrogate predictions using UV254 and TF provide valuable tools for rapid or on-line monitoring of GAC performance and can result in cost savings by extending GAC run times as compared to using DOC breakthrough to trigger regeneration or replacement. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H
2017-10-25
Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
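The GLMM tree algorithm is available in the R package glmertree by the same authors; the call below follows the package's three-part formula (node model | random effects | partitioning variables), with a data set and variable names that are hypothetical.

    library(glmertree)
    # Node model: outcome ~ treatment; random intercept for center;
    # age, severity and duration are candidate partitioning variables
    tree <- lmertree(outcome ~ treatment | center | age + severity + duration,
                     data = patients)
    plot(tree)   # shows the subgroups and the treatment effect in each node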
Workie, Demeke Lakew; Zike, Dereje Tesfaye; Fenta, Haile Mekonnen; Mekonnen, Mulusew Admasu
2018-05-10
Ethiopia is among the countries with a low contraceptive usage prevalence rate, which results in a high total fertility rate and unwanted pregnancies, which in turn affect maternal and child health status. This study aimed to investigate the major factors that affect the number of modern contraceptive users at service delivery points in Ethiopia. The Performance Monitoring and Accountability 2020/Ethiopia data, collected between March and April 2016 at round 4 from 461 eligible service delivery points, were used in this study. A weighted log-linear negative binomial model was applied to analyze the service delivery point data. Fifty percent of the service delivery points in Ethiopia provided services to 61 modern contraceptive users, with an interquartile range of 0.62. The expected log number of modern contraceptive users in rural areas was 1.05 (95% Wald CI: -1.42 to -0.68) lower than that in urban areas. In addition, the expected log count of modern contraceptive users at other facility types was 0.58 lower than that at health centers. The number of nurses/midwives also affected the number of modern contraceptive users: the incidence rate of modern contraceptive users increased with each additional nurse at the delivery point. Among the factors considered in this study, residence, region, facility type, the number of days per week family planning is offered, the number of nurses/midwives, and the number of medical assistants were found to be associated with the number of modern contraceptive users. Thus, the Government of Ethiopia should take immediate steps to address the factors that determine the number of modern contraceptive users in Ethiopia.
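A weighted log-linear negative binomial model of this general shape can be fitted in R with MASS::glm.nb; the data frame sdp, the survey weight, and the covariate names below are hypothetical stand-ins for the variables described above.

    library(MASS)
    fit <- glm.nb(n_users ~ residence + region + facility_type +
                    fp_days_per_week + n_nurses + n_med_assistants,
                  weights = survey_wt, data = sdp)
    # Coefficients are on the log scale; exponentiating gives
    # incidence rate ratios for modern contraceptive users
    exp(coef(fit))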
Generalized linear mixed models with varying coefficients for longitudinal data.
Zhang, Daowen
2004-03-01
The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
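In current R practice, a model of this flavor can be approximated with penalized splines in mgcv, whose gamm() likewise estimates smooths and variance components together (via PQL for non-Gaussian responses); the data frame and variable names are hypothetical, and this is an approximation to, not an implementation of, the DPQL approach above.

    library(mgcv)
    # Varying-coefficient model: s(time, by = x) represents f(time) * x,
    # and the subject random intercept captures among-subject variation
    fit <- gamm(y ~ s(time) + s(time, by = x),
                random = list(id = ~ 1),
                family = binomial, data = longdat)
    summary(fit$gam)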
NASA Technical Reports Server (NTRS)
Schlesinger, Robert E.
1990-01-01
Results are presented from a linear Lagrangian entraining parcel model of an overshooting thunderstorm cloud top. The model, which is similar to that of Adler and Mack (1986), gives exact analytic solutions for vertical velocity and temperature by representing mixing linearly, with Rayleigh damping, rather than nonlinearly. Model results are presented for various combinations of stratospheric lapse rate, drag intensity, and mixing strength. The results are compared to those of Adler and Mack.
Partitioning of polar and non-polar neutral organic chemicals into human and cow milk.
Geisler, Anett; Endo, Satoshi; Goss, Kai-Uwe
2011-10-01
The aim of this work was to develop a predictive model for milk/water partition coefficients of neutral organic compounds. Batch experiments were performed for 119 diverse organic chemicals in human milk and in raw and processed cow milk at 37°C. No differences (<0.3 log units) in the partition coefficients between these types of milk were observed. The polyparameter linear free energy relationship model fit the calibration data well (SD = 0.22 log units). An experimental validation data set including hormones and hormone-active compounds was predicted satisfactorily by the model. An alternative modelling approach based on log Kow performed more poorly. The model presented here provides a significant improvement in predicting the enrichment of potentially hazardous chemicals in milk. In combination with physiologically based pharmacokinetic modelling, this improvement in the estimation of milk/water partition coefficients may allow a better risk assessment for a wide range of neutral organic chemicals. Copyright © 2011 Elsevier Ltd. All rights reserved.
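A polyparameter LFER of the usual Abraham form can be calibrated by ordinary least squares; the sketch below assumes a calibration table holding the standard solute descriptors (E, S, A, B, V), which is an assumption about data layout rather than the paper's actual code.

    # pp-LFER: log K(milk/water) = c + eE + sS + aA + bB + vV
    fit <- lm(logK_mw ~ E + S + A + B + V, data = calib)
    summary(fit)$sigma                   # residual SD, comparable to the 0.22 log units above
    predict(fit, newdata = validation)   # predictions for new compounds with known descriptors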
Familial associations with paratuberculosis ELISA results in Texas Longhorn cattle.
Osterstock, Jason B; Fosgate, Geoffrey T; Cohen, Noah D; Derr, James N; Manning, Elizabeth J B; Collins, Michael T; Roussel, Allen J
2008-05-25
The objective of this cross-sectional study was to estimate familial associations with paratuberculosis ELISA status in beef cattle. Texas Longhorn cattle (n=715) greater than 2 years of age were sampled for paratuberculosis testing using ELISA and fecal culture. Diagnostic test results were indicative of substantial numbers of false-positive serological reactions consistent with environmental exposure to non-MAP Mycobacterium spp. Associations between ancestors and the paratuberculosis ELISA status of offspring were assessed using conditional logistic regression. The association between the ELISA status of the dam and her offspring was assessed using linear mixed-effect models. Significant associations were identified between some ancestors and offspring ELISA status. The odds of being classified as "suspect" or greater based on ELISA results were 4.6 times greater for offspring of dams with similarly increased S:P ratios. A significant positive linear association was also observed between dam and offspring log-transformed S:P ratios. Results indicate that there is familial aggregation of paratuberculosis ELISA results in beef cattle and suggest that genetic selection based on paratuberculosis ELISA status may decrease seroprevalence. However, genetic selection may have minimal effect on paratuberculosis control in herds with exposure to non-MAP Mycobacterium spp.
Al-Chalabi, Ammar; Calvo, Andrea; Chio, Adriano; Colville, Shuna; Ellis, Cathy M; Hardiman, Orla; Heverin, Mark; Howard, Robin S; Huisman, Mark H B; Keren, Noa; Leigh, P Nigel; Mazzini, Letizia; Mora, Gabriele; Orrell, Richard W; Rooney, James; Scott, Kirsten M; Scotton, William J; Seelen, Meinie; Shaw, Christopher E; Sidle, Katie S; Swingler, Robert; Tsuda, Miho; Veldink, Jan H; Visser, Anne E; van den Berg, Leonard H; Pearce, Neil
2014-11-01
Amyotrophic lateral sclerosis shares characteristics with some cancers, such as onset being more common in later life, progression usually being rapid, the disease affecting a particular cell type, and showing complex inheritance. We used a model originally applied to cancer epidemiology to investigate the hypothesis that amyotrophic lateral sclerosis is a multistep process. We generated incidence data by age and sex from amyotrophic lateral sclerosis population registers in Ireland (registration dates 1995-2012), the Netherlands (2006-12), Italy (1995-2004), Scotland (1989-98), and England (2002-09), and calculated age and sex-adjusted incidences for each register. We regressed the log of age-specific incidence against the log of age with least squares regression. We did the analyses within each register, and also did a combined analysis, adjusting for register. We identified 6274 cases of amyotrophic lateral sclerosis from a catchment population of about 34 million people. We noted a linear relationship between log incidence and log age in all five registers: England r^2=0·95, Ireland r^2=0·99, Italy r^2=0·95, the Netherlands r^2=0·99, and Scotland r^2=0·97; overall r^2=0·99. All five registers gave similar estimates of the linear slope ranging from 4·5 to 5·1, with overlapping confidence intervals. The combination of all five registers gave an overall slope of 4·8 (95% CI 4·5-5·0), with similar estimates for men (4·6, 4·3-4·9) and women (5·0, 4·5-5·5). A linear relationship between the log incidence and log age of onset of amyotrophic lateral sclerosis is consistent with a multistage model of disease. The slope estimate suggests that amyotrophic lateral sclerosis is a six-step process. Identification of these steps could lead to preventive and therapeutic avenues. UK Medical Research Council; UK Economic and Social Research Council; Ireland Health Research Board; The Netherlands Organisation for Health Research and Development (ZonMw); the Ministry of Health and Ministry of Education, University, and Research in Italy; the Motor Neurone Disease Association of England, Wales, and Northern Ireland; and the European Commission (Seventh Framework Programme). Copyright © 2014 Elsevier Ltd. All rights reserved.
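The multistage interpretation rests on the Armitage-Doll result that a k-step process yields incidence roughly proportional to age^(k-1), so the log-log slope estimates k-1. A minimal, fully simulated sketch of the register-level regression follows; the numbers are illustrative only.

    # Simulated age-specific incidence following incidence ~ age^(k-1) with k = 6
    set.seed(1)
    age <- seq(45, 85, by = 5)
    incidence <- 1e-9 * age^5 * exp(rnorm(length(age), sd = 0.05))
    fit <- lm(log(incidence) ~ log(age))
    k <- unname(coef(fit)["log(age)"]) + 1
    # k = slope + 1; a slope near 4.8, as reported above, gives k of about 5.8,
    # i.e. roughly a six-step process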
ERIC Educational Resources Information Center
von Davier, Matthias
2014-01-01
Diagnostic models combine multiple binary latent variables in an attempt to produce a latent structure that provides more information about test takers' performance than do unidimensional latent variable models. Recent developments in diagnostic modeling emphasize the possibility that multiple skills may interact in a conjunctive way within the…
Approximating a nonlinear advanced-delayed equation from acoustics
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2016-10-01
We approximate the solution of a particular non-linear mixed-type functional differential equation from physiology, the mucosal wave model of the vocal oscillation during phonation. The mathematical equation models a superficial wave propagating through the tissues. The numerical scheme is adapted from the work presented in [1, 2, 3], using the homotopy analysis method (HAM) to solve the non-linear mixed-type equation under study.
Improving the Power of GWAS and Avoiding Confounding from Population Stratification with PC-Select
Tucker, George; Price, Alkes L.; Berger, Bonnie
2014-01-01
Using a reduced subset of SNPs in a linear mixed model can improve power for genome-wide association studies, yet this can result in insufficient correction for population stratification. We propose a hybrid approach using principal components that does not inflate statistics in the presence of population stratification and improves power over standard linear mixed models. PMID:24788602
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset, and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling, and generalised additive mixed modelling) clearly demonstrates that only the latter provides regression parameters that reflect the data adequately. We focus explicitly both on the exploration of the data, to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests, and on the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.
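One way to reproduce a generalised additive mixed model of this kind in R is with mgcv, using a temperature-specific smooth of age and a random effect for replicate; the data frame vicina and its variables are hypothetical (temperature and replicate would need to be factors).

    library(mgcv)
    # Non-linear growth curve per rearing temperature; s(replicate, bs = "re")
    # turns the replicate factor into a random intercept
    fit <- gam(larval_length ~ temperature + s(age_hours, by = temperature) +
                 s(replicate, bs = "re"),
               data = vicina, method = "REML")
    gam.check(fit)   # residual diagnostics, in the spirit of the validation step above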
Linear Mixed Models: GUM and Beyond
NASA Astrophysics Data System (ADS)
Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens
2014-04-01
In Annex H.5, the Guide to the Evaluation of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in the data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues, and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in the calibration of accelerometers.
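For the random-effects ANOVA setting that Annex H.5 of the GUM describes, variance components can be estimated in R with lme4; the factors below (day, device) are hypothetical stand-ins for the nuisance effects in a long-term calibration campaign.

    library(lme4)
    # Crossed random effects decompose the observed dispersion into components
    # that can feed directly into an uncertainty budget
    fit <- lmer(reading ~ 1 + (1 | day) + (1 | device), data = calib_runs)
    VarCorr(fit)   # variance components for day, device, and residual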
Characteristics of School Campuses and Physical Activity Among Youth
Cradock, Angie L.; Melly, Steven J.; Allen, Joseph G.; Morris, Jeffrey S.; Gortmaker, Steven L.
2009-01-01
Background: Previous research suggests that school characteristics may influence physical activity. However, few studies have examined associations between school building and campus characteristics and objective measures of physical activity among middle school students. Methods: Students from ten middle schools (n=248, 42% female, mean age 13.7 years) wore TriTrac-R3D accelerometers in 1997, recording measures of minute-by-minute physical movements during the school day that were then averaged over 15-minute intervals (n=16,619) and log-transformed. School characteristics, including school campus area, play area, and building area (per student), were assessed retrospectively in 2004–2005 using land-use parcel data, site visits, ortho-photos, architectural plans, and site maps. In 2006, linear mixed models using SAS PROC MIXED were fit to examine associations between school environmental variables and physical activity, controlling for potentially confounding variables. Results: Area per enrolled student ranged from 8.8 to 143.7 m2 for school campuses, from 12.1 to 24.7 m2 for buildings, and from 0.4 to 58.9 m2 for play areas. Play area comprised from 3% to 62% of total campus area across schools. In separate regression models, school campus area per student (β=0.2244, p<0.0001), building area per student (β=2.1302, p<0.02), and play area per student (β=0.347, p<0.0001) were each directly associated with log-TriTrac-R3D vector magnitude. Given the range of area density measures in this sample of schools, this translates into an approximate 20% to 30% increase in average vector magnitude, or walking 2 extra miles over the course of a week. Conclusions: Larger school campuses, school buildings, and play areas (per enrolled student) are associated with higher levels of physical activity in middle school students. PMID:17673097
Holtschlag, David J.; Shively, Dawn; Whitman, Richard L.; Haack, Sheridan K.; Fogarty, Lisa R.
2008-01-01
Regression analyses and hydrodynamic modeling were used to identify environmental factors and flow paths associated with Escherichia coli (E. coli) concentrations at Memorial and Metropolitan Beaches on Lake St. Clair in Macomb County, Mich. Lake St. Clair is part of the binational waterway between the United States and Canada that connects Lake Huron with Lake Erie in the Great Lakes Basin. Linear regression, regression-tree, and logistic regression models were developed from E. coli concentration and ancillary environmental data. Linear regression models on log10 E. coli concentrations indicated that rainfall prior to sampling, water temperature, and turbidity were positively associated with bacteria concentrations at both beaches. Flow from Clinton River, changes in water levels, wind conditions, and log10 E. coli concentrations 2 days before or after the target bacteria concentrations were statistically significant at one or both beaches. In addition, various interaction terms were significant at Memorial Beach. Linear regression models for both beaches explained only about 30 percent of the variability in log10 E. coli concentrations. Regression-tree models were developed from data from both Memorial and Metropolitan Beaches but were found to have limited predictive capability in this study. The results indicate that too few observations were available to develop reliable regression-tree models. Linear logistic models were developed to estimate the probability of E. coli concentrations exceeding 300 most probable number (MPN) per 100 milliliters (mL). Rainfall amounts before bacteria sampling were positively associated with exceedance probabilities at both beaches. Flow of Clinton River, turbidity, and log10 E. coli concentrations measured before or after the target E. coli measurements were related to exceedances at one or both beaches. The linear logistic models were effective in estimating bacteria exceedances at both beaches. A receiver operating characteristic (ROC) analysis was used to determine cut points for maximizing the true positive rate prediction while minimizing the false positive rate. A two-dimensional hydrodynamic model was developed to simulate horizontal current patterns on Lake St. Clair in response to wind, flow, and water-level conditions at model boundaries. Simulated velocity fields were used to track hypothetical massless particles backward in time from the beaches along flow paths toward source areas. Reverse particle tracking for idealized steady-state conditions shows changes in expected flow paths and traveltimes with wind speeds and directions from 24 sectors. The results indicate that three to four sets of contiguous wind sectors have similar effects on flow paths in the vicinity of the beaches. In addition, reverse particle tracking was used for transient conditions to identify expected flow paths for 10 E. coli sampling events in 2004. These results demonstrate the ability to track hypothetical particles from the beaches, backward in time, to likely source areas. This ability, coupled with a greater frequency of bacteria sampling, may provide insight into changes in bacteria concentrations between source and sink areas.
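The exceedance-probability and cut-point analysis can be sketched in R with glm and the pROC package; the data frame beach and the predictor names are hypothetical simplifications of the variables described above.

    library(pROC)
    beach$exceed <- as.integer(beach$ecoli > 300)   # E. coli > 300 MPN/100 mL
    fit <- glm(exceed ~ rain_48h + turbidity + log10(flow_clinton),
               family = binomial, data = beach)
    r <- roc(beach$exceed, fitted(fit))
    coords(r, "best")   # cut point trading off true against false positive rates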
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurtubise, R.J.; Hussain, A.; Silver, H.F.
1981-11-01
The normal-phase liquid chromatographic models of Scott, Snyder, and Soczewinski were considered for a μ-Bondapak NH2 stationary phase. n-Heptane:2-propanol and n-heptane:ethyl acetate mobile phases of different compositions were used. Linear relationships were obtained from graphs of log K' vs. log mole fraction of the strong solvent for both n-heptane:2-propanol and n-heptane:ethyl acetate mobile phases. A linear relationship was obtained between the reciprocal of corrected retention volume and % wt/v of 2-propanol, but not between the reciprocal of corrected retention volume and % wt/v of ethyl acetate. The slopes and intercept terms from the Snyder and Soczewinski models were found to approximately describe interactions with μ-Bondapak NH2. Capacity factors can be predicted for the compounds by using the equations obtained from mobile phase composition variation experiments.
Wu, Liejun; Chen, Yongli; Caccamise, Sarah A.L.; Li, Qing X.
2012-01-01
A difference equation (DE) model is developed using the methylene retention increment (Δtz) of n-alkanes to avoid the influence of gas holdup time (tM). The effects of the equation order (1st–5th) on the accuracy of the curve fitting show that a linear equation (LE) is less satisfactory and that it is not necessary to use a complicated cubic or higher-order equation. The relationship between the logarithm of Δtz and the carbon number (z) of the n-alkanes under isothermal conditions closely follows a quadratic equation for C3–C30 n-alkanes at column temperatures of 24–260 °C. The first- and second-order forward differences of the expression (Δlog Δtz and Δ²log Δtz, respectively) are linear and constant, respectively, which validates the DE model. This DE model lays a necessary foundation for further developing a retention model to accurately describe the relationship between the adjusted retention time and z of n-alkanes. PMID:22939376
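The claimed structure (log Δtz quadratic in z, so its first forward difference is linear and its second is constant) is easy to verify numerically; the sketch below uses simulated retention increments with made-up coefficients, not the paper's data.

    # log retention increments quadratic in carbon number z
    z <- 3:30
    log_dtz <- -1.2 + 0.35 * z - 0.002 * z^2
    d1 <- diff(log_dtz)                    # first difference: linear in z
    d2 <- diff(log_dtz, differences = 2)   # second difference: constant
    all.equal(d2, rep(d2[1], length(d2)))  # TRUE, validating the DE structure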
Gallegos, T.J.; Han, Y.-S.; Hayes, K.F.
2008-01-01
This study investigates the removal of As(III) from solution using mackinawite, a nanoparticulate reduced iron sulfide. Mackinawite suspensions (0.1-40 g/L) effectively lower initial concentrations of 1.3 × 10^-5 M As(III) from pH 5-10, with maximum removal occurring under acidic conditions. Based on Eh measurements, it was found that the redox state of the system depended on the mackinawite solids concentration and pH. Higher initial mackinawite concentrations and alkaline pH resulted in a more reducing redox condition. Given this, the pH edge data were modeled thermodynamically using pe (-log[e-]) as a fitting parameter and linear pe-pH relationships within the range of measured Eh values as a function of pH and mackinawite concentration. The model predicts removal of As(III) from solution by precipitation of realgar with the formation of secondary oxidation products, greigite or a mixed-valence iron oxide phase, depending on pH. This study demonstrates that mackinawite is an effective sequestration agent for As(III) and highlights the importance of incorporating redox into models describing the As-Fe-S-H2O system. © 2008 American Chemical Society.
Stochastic theory of log-periodic patterns
NASA Astrophysics Data System (ADS)
Canessa, Enrique
2000-12-01
We introduce an analytical model based on birth-death clustering processes to help in understanding the empirical log-periodic corrections to power law scaling and the finite-time singularity as reported in several domains including rupture, earthquakes, world population and financial systems. In our stochastic theory log-periodicities are a consequence of transient clusters induced by an entropy-like term that may reflect the amount of co-operative information carried by the state of a large system of different species. The clustering completion rates for the system are assumed to be given by a simple linear death process. The singularity at t0 is derived in terms of birth-death clustering coefficients.
Thorpe, Susannah K S; Crompton, Robin H
2005-05-01
The large body mass and exclusively arboreal lifestyle of Sumatran orangutans identify them as a key species in understanding the dynamic between primates and their environment. Increased knowledge of primate locomotor ecology, coupled with recent developments in the standardization of positional mode classifications (Hunt et al. [1996] Primates 37:363-387), opened the way for sophisticated multivariate statistical approaches, clarifying complex associations between multiple influences on locomotion. In this study we present a log-linear modelling approach used to identify key associations between orangutan locomotion, canopy level, support use, and contextual behavior. Log-linear modelling is particularly appropriate because it is designed for categorical data, provides a systematic method for testing alternative hypotheses regarding interactions between variables, and allows interactions to be ranked numerically in terms of relative importance. Support diameter and type were found to have the strongest associations with locomotor repertoire, suggesting that orangutans have evolved distinct locomotor modes to solve a variety of complex habitat problems. However, height in the canopy and contextual behavior do not directly influence locomotion: instead, their effect is modified by support type and support diameter, respectively. Contrary to classic predictions, age-sex category has only limited influence on orangutan support use and locomotion, perhaps reflecting the presence of arboreal pathways which individuals of all age-sex categories follow. Effects are primarily related to a tendency for adult, parous females to adopt a more cautious approach to locomotion than adult males and immature subjects. Copyright 2004 Wiley-Liss, Inc.
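Log-linear analysis of a multiway contingency table of this kind can be run in R with MASS::loglm; the three-way table counts below (locomotor mode by support diameter by support type) is a hypothetical stand-in for the study's categories.

    library(MASS)
    # Model with all two-way associations but no three-way interaction
    m2 <- loglm(~ mode * diameter + mode * support + diameter * support,
                data = counts)
    # Dropping the mode:support association tests its contribution
    m1 <- loglm(~ mode * diameter + diameter * support, data = counts)
    anova(m1, m2)   # likelihood-ratio comparison ranks the association's importance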
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three- and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in 'Surv' objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
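A minimal use of the package's main fitting function, following the survreg-style syntax described above (the ovarian data set ships with the survival package, which flexsurv loads):

    library(flexsurv)
    # Generalized gamma model with treatment as a covariate on the location
    fit <- flexsurvreg(Surv(futime, fustat) ~ rx, data = ovarian,
                       dist = "gengamma")
    fit
    plot(fit)   # fitted survival curves drawn over the Kaplan-Meier estimate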
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect model with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather than on the coefficients. Moreover, use of cubic regression splines provides biologically meaningful growth velocity and acceleration curves despite increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
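The final model described above maps naturally onto nlme in R; the sketch below assumes a long-format data frame growth (one row per child per visit) with hypothetical variable names, and the knot positions are illustrative.

    library(nlme)
    library(splines)
    # Cubic regression spline for population growth, child-specific random
    # intercepts and slopes, and a continuous-time AR(1) residual process
    fit <- lme(height ~ ns(age, knots = c(0.25, 0.75, 1.5, 2.5)),
               random = ~ age | id,
               correlation = corCAR1(form = ~ age | id),
               data = growth)
    summary(fit)   # the CAR(1) phi corresponds to the rho of about 0.66 above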
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein
2018-09-15
A novel hybrid approach is presented that can more accurately predict monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case. In the other three scenarios, one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11,013 models (10,785 linear, 4 nonlinear, and 224 hybrid). The uncertainty of the linear, nonlinear, and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the Johnson normality transform followed by seasonal standardization (R^2 = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicated the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of the adaptive neuro-fuzzy inference system (ANFIS) with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid method outperformed the ANFIS-FFA method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong
2016-01-01
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
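The method is distributed as the GMMAT R package; a sketch of typical use follows, with the phenotype table and kinship matrix as hypothetical inputs, and the argument details should be checked against the package documentation before use.

    library(GMMAT)
    # Null logistic mixed model: binary trait, covariates, and a genetic
    # relationship matrix capturing structure and relatedness
    null_model <- glmmkin(asthma ~ age + sex, data = pheno, kins = grm,
                          id = "sample_id", family = binomial(link = "logit"))
    # Genome-wide score tests of individual variants against the fitted null model
    glmm.score(null_model, infile = "genotypes", outfile = "scores.txt")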
Growing high quality hardwoods: Plantation trials of mixed hardwood species in Tennessee
Christopher M. Oswalt; Wayne K. Clatterbuck
2011-01-01
Hardwood plantations are becoming increasingly important in the United States. To date, many foresters have relied on a conifer plantation model as the basis of establishing and managing hardwood plantations. The monospecific approach suggested by the conifer plantation model does not appear to provide for the development of quality hardwood logs similar to those found...
NASA Astrophysics Data System (ADS)
Ebrahimpoor, Sonia; Khoshnood, Razieh Sanavi; Beyramabadi, S. Ali
2016-12-01
Complexation of the Cd2+ ion with the N,N'-dipyridoxylidene(1,4-butanediamine) Schiff base was studied in pure solvents including acetonitrile (AN), ethanol (EtOH), methanol (MeOH), tetrahydrofuran (THF), dimethylformamide (DMF), and water (H2O), and in various binary solvent mixtures of acetonitrile-ethanol (AN-EtOH), acetonitrile-methanol (AN-MeOH), acetonitrile-tetrahydrofuran (AN-THF), acetonitrile-dimethylformamide (AN-DMF), and acetonitrile-water (AN-H2O) systems at different temperatures using the conductometric method. The conductance data show that the stoichiometry of the complex is 1:1 [ML] in all solvent systems. A non-linear behavior was observed for changes of log Kf of the [Cd(N,N'-dipyridoxylidene(1,4-butanediamine))] complex versus the composition of the binary mixed solvents, which was explained in terms of solvent-solvent interactions. The results show that the thermodynamics of the complexation reaction is affected by the nature and composition of the mixed solvents.
Low energy probes of PeV scale sfermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altmannshofer, Wolfgang; Harnik, Roni; Zupan, Jure
2013-11-27
We derive bounds on squark and slepton masses in the mini-split supersymmetry scenario using low energy experiments. In this setup gauginos are at the TeV scale, while sfermions are heavier by a loop factor. We cover the most sensitive low energy probes including electric dipole moments (EDMs), meson oscillations and charged lepton flavor violation (LFV) transitions. A leading log resummation of the large logs of the gluino to sfermion mass ratio is performed. A sensitivity to PeV squark masses is obtained at present from kaon mixing measurements. A number of observables, including neutron EDMs, mu->e transitions and charmed meson mixing, will start probing sfermion masses in the 100 TeV-1000 TeV range with the projected improvements in the experimental sensitivities. We also discuss the implications of our results for a variety of models that address the flavor hierarchy of quarks and leptons. We find that EDM searches will be a robust probe of models in which fermion masses are generated radiatively, while LFV searches remain sensitive to simple-texture based flavor models.
Individualizing drug dosage with longitudinal data.
Zhu, Xiaolu; Qu, Annie
2016-10-30
We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step, and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.
Casellas, J; Bach, R
2012-06-01
Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control
2015-11-10
the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes: 1998], is a popular... sampled values h(k), k = 0,…,2M-1, of the exponential sum h(k) = sum_{j=1..M} c_j z_j^k: 1. Solve the linear system sum_{l=0..M-1} p_l h(l+m) = -h(M+m), m = 0,…,M-1, where the p_l are the coefficients of the Prony polynomial p(z) = prod_{j=1..M} (z - z_j). 2. Compute all zeros z_j in D, j = 1,…,M,... of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1,…,M, where log is the principal branch of the complex logarithm.
Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations
Shek, Daniel T. L.; Ma, Cecilia M. S.
2011-01-01
Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263
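For readers working outside SPSS, the equivalent two-level growth model can be specified in R with lme4; the data frame paths (one row per adolescent per wave) and its variable names are hypothetical.

    library(lme4)
    # Repeated waves nested within adolescents: random intercept and slope for wave
    fit <- lmer(outcome ~ wave + (wave | id), data = paths, REML = TRUE)
    summary(fit)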
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
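A broken-line linear ascending dose-response of the kind selected above can be written as a plateau plus a negative linear segment below the breakpoint; the nls() sketch below uses a hypothetical data frame and arbitrary starting values, and unlike the paper's SAS NLMIXED fit it omits the random block effects and heterogeneous variances.

    # BLL ascending: below the breakpoint R the response rises linearly
    # toward the plateau L, which it holds for x >= R
    bll <- function(x, L, U, R) L + U * pmin(x - R, 0)
    fit <- nls(gf ~ bll(trp_lys, L, U, R), data = pigs,
               start = list(L = 0.70, U = 0.02, R = 16.5))
    summary(fit)   # R estimates the requirement (the breakpoint)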
Golmohammadi, Hassan
2009-11-30
A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol-water partition coefficients (log P(o/w)). A genetic algorithm was applied as a variable selection tool. Modeling of log P(o/w) of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and artificial neural network (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of molecule (mu), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results obtained showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds and revealed the superiority of ANN over the MLR and PLS models. Copyright 2009 Wiley Periodicals, Inc.
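As a minimal illustration of the MLR baseline described above (not the authors' descriptors or data), the following Python sketch fits an ordinary least-squares model to synthetic stand-ins for the seven descriptors.

```python
# Sketch: multiple linear regression of log P(o/w) on descriptor stand-ins (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 141
X = rng.normal(size=(n, 7))             # stand-ins for PPSA-3, FPSA-3, Qmin, MV, mu, MAC, MFV
true_beta = np.array([0.4, -0.2, 0.8, 1.1, -0.3, 0.2, 0.5])
logp = X @ true_beta + 1.0 + rng.normal(scale=0.3, size=n)

A = np.column_stack([np.ones(n), X])    # add intercept column
beta, *_ = np.linalg.lstsq(A, logp, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((logp - pred) ** 2) / np.sum((logp - logp.mean()) ** 2)
print(f"MLR R^2 on synthetic data: {r2:.3f}")
```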
On the mixing time of geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan
In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
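For concreteness, here is a small Python sketch that generates one common GTG variant, connecting u and v when (w_u + w_v)/r^alpha exceeds a threshold; the exact threshold function and all parameter values are assumptions for illustration, not taken from the paper.

```python
# Sketch of a geographical threshold graph: nodes uniform in the unit square,
# exponential node weights, edge when (w_u + w_v) / r**alpha >= theta.
import numpy as np

def gtg(n, theta, alpha=2.0, seed=1):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(size=(n, 2))
    w = rng.exponential(scale=1.0, size=n)      # random node weights
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            r = np.linalg.norm(pos[u] - pos[v])
            if (w[u] + w[v]) / r**alpha >= theta:
                edges.append((u, v))
    return pos, w, edges

pos, w, edges = gtg(n=200, theta=50.0)
print(f"{len(edges)} edges among 200 nodes")
```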
(Draft) Community air pollution and mortality: Analysis of 1980 data from US metropolitan areas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipfert, F.W.
1992-11-01
1980 data from up to 149 metropolitan areas were used to define cross-sectional associations between community air pollution and "excess" human mortality. The regression model proposed by Ozkaynak and Thurston (1987), which accounted for age, race, education, poverty, and population density, was evaluated and several new models were developed. The new models also accounted for migration, drinking water hardness, and smoking, and included a more detailed description of race. Cause-of-death categories analyzed include all causes, all "non-external" causes, major cardiovascular diseases, and chronic obstructive pulmonary diseases (COPD). Both annual mortality rates and their logarithms were analyzed. Air quality data were obtained from the EPA AIRS database (TSP, SO4=, Mn, and ozone) and from the inhalable particulate network (PM15, PM2.5, and SO4=, for 63 locations). The data on particulates were averaged across all monitoring stations available for each SMSA and the TSP data were restricted to the year 1980. The associations between mortality and air pollution were found to be dependent on the socioeconomic factors included in the models, the specific locations included in the data set, and the type of statistical model used. Statistically significant associations were found as follows: between TSP and mortality due to non-external causes with log-linear models, but not with a linear model; between estimated 10-year average (1980-90) ozone levels and 1980 non-external and cardiovascular deaths; and between TSP and COPD mortality for both linear and log-linear models. When the sulfate contribution to TSP was subtracted, the relationship with COPD mortality was strengthened.
Linear separability in superordinate natural language concepts.
Ruts, Wim; Storms, Gert; Hampton, James
2004-01-01
Two experiments are reported in which linear separability was investigated in superordinate natural language concept pairs (e.g., toiletry-sewing gear). Representations of the exemplars of semantically related concept pairs were derived in two to five dimensions using multidimensional scaling (MDS) of similarities based on possession of the concept features. Next, category membership, obtained from an exemplar generation study (in Experiment 1) and from a forced-choice classification task (in Experiment 2), was predicted from the coordinates of the MDS representation using log-linear analysis. The results showed that all natural kind concept pairs were perfectly linearly separable, whereas artifact concept pairs showed several violations. Clear linear separability of natural language concept pairs is in line with independent cue models. The violations in the artifact pairs, however, yield clear evidence against the independent cue models.
Ma, Wan-Li; Sun, De-Zhi; Shen, Wei-Guo; Yang, Meng; Qi, Hong; Liu, Li-Yan; Shen, Ji-Min; Li, Yi-Fan
2011-07-01
A comprehensive sampling campaign was carried out to study atmospheric concentrations of polycyclic aromatic hydrocarbons (PAHs) in Beijing and to evaluate the effectiveness of source control strategies in reducing PAH pollution after the 29th Olympic Games. The sub-cooled liquid vapor pressure (log PL°)-based model and the octanol-air partition coefficient (Koa)-based model were applied to each seasonal dataset. Regression analysis among log KP, log PL°, and log Koa exhibited highly significant correlations for all four seasons. Source factors were identified by principal component analysis and contributions were further estimated by multiple linear regression. Pyrogenic sources and coke oven emission were identified as major sources for both the non-heating and heating seasons. Compared with previously reported values, the mean PAH concentrations before and after the 29th Olympic Games were reduced by more than 60%, indicating that the source control measures were effective in reducing PAH pollution in Beijing. Copyright © 2011 Elsevier Ltd. All rights reserved.
Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators.
Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon
2015-05-01
The aim of this paper was to evaluate, through a correlation analysis, the ratios of electrical impedance measurements reported in previous studies, in order to establish them as the contributing factor to the accuracy of electronic apex locators (EALs). The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All data acquired were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes in impedance ratios for various frequencies were evaluated for a variety of file positions. Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones were significantly different in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Using the ratio method, the APC was located within a linear interval. Therefore, using the impedance ratio between electrical impedance measurements at different frequencies was a robust method for detection of the APC.
Correcting for population structure and kinship using the linear mixed model: theory and extensions.
Hoffman, Gabriel E
2013-01-01
Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.
NASA Astrophysics Data System (ADS)
Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki
2018-02-01
To estimate the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions in light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The obtained binary data on fracture or non-fracture of the cladding tube specimens were then analyzed statistically. A method to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) was proposed using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Model selection was then performed in terms of physical characteristics and information criteria, namely the widely applicable information criterion and the widely applicable Bayesian information criterion. As a result, it was clarified that the log-probit model was the best among the three models for estimating the fracture probability, in terms of prediction accuracy for both the next data to be obtained and the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability level, with 95% confidence, for the cladding tube specimens.
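A minimal Python sketch of a log-probit fit to binary fracture data is given below; it uses maximum likelihood via statsmodels rather than the Bayesian inference of the paper, and the ECR values and fracture outcomes are invented.

```python
# Sketch: log-probit fracture-probability curve (probit link on log ECR; invented data).
import numpy as np
import statsmodels.api as sm

ecr = np.array([5, 8, 10, 12, 15, 18, 20, 22, 25, 30], dtype=float)   # % ECR
fractured = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1])                  # hypothetical outcomes

X = sm.add_constant(np.log(ecr))   # log-probit: probit link applied to log(ECR)
fit = sm.GLM(fractured, X,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()

# Fracture probability at 20% ECR under the fitted curve
p20 = fit.predict(np.array([[1.0, np.log(20.0)]]))[0]
print(f"estimated fracture probability at 20% ECR: {p20:.2f}")
```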
The frequency and level of sweep in mixed hardwood saw logs in the eastern United States
Peter Hamner; Marshall S. White; Philip A. Araman
2007-01-01
Hardwood sawmills traditionally saw logs in a manner that either orients sawlines parallel to the log central axis (straight sawing) or the log surface (allowing for taper). Sweep is characterized as uniform curvature along the entire length of a log. For logs with sweep, lumber yield losses from straight and taper sawing increase with increasing levels of sweep. Curve...
The effect of dropout on the efficiency of D-optimal designs of linear mixed models.
Ortega-Azurduy, S A; Tan, F E S; Berger, M P F
2008-06-30
Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.
Measures of model performance based on the log accuracy ratio
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
2018-01-03
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature, and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
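The two derived metrics lend themselves to a few lines of Python. The formulas below are reconstructed from the abstract's description, building both metrics from ln(Q) = ln(prediction/observation), and should be checked against the paper before any serious use.

```python
# Sketch: median symmetric accuracy and symmetric signed percentage bias
# from the log accuracy ratio (formulas reconstructed, not verified against the paper).
import numpy as np

def median_symmetric_accuracy(obs, pred):
    log_q = np.log(np.asarray(pred) / np.asarray(obs))
    return 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)

def symmetric_signed_percentage_bias(obs, pred):
    log_q = np.log(np.asarray(pred) / np.asarray(obs))
    m = np.median(log_q)
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

obs  = np.array([1.0, 2.0, 4.0, 8.0])    # toy observations
pred = np.array([1.2, 1.8, 5.0, 7.0])    # toy predictions
print(f"MdSA  = {median_symmetric_accuracy(obs, pred):.1f}%")
print(f"SSPB  = {symmetric_signed_percentage_bias(obs, pred):.1f}%")
```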
Rakowska, Magdalena I; Kupryianchyk, Darya; Koelmans, Albert A; Grotenhuis, Tim; Rijnaarts, Huub H M
2014-12-15
Addition of activated carbons (AC) to polluted sediments and soils is an attractive remediation technique aiming at reducing pore water concentrations of hydrophobic organic contaminants (HOCs). In this study, we present (pseudo-)equilibrium as well as kinetic parameters for sorption of a series of PAHs and PCBs to powdered and granular activated carbons after three different sediment treatments: sediment mixed with powdered AC (PAC), sediment mixed with granular AC (GAC), and addition of GAC followed by 2 d mixing and subsequent removal ('sediment stripping'). Remediation efficiency was assessed by quantifying fluxes of PAHs towards SPME passive samplers inserted in the sediment top layer, which showed that the efficiency decreased in the order PAC > GAC stripping > GAC addition. Sorption was very strong to PAC, with log KAC (L/kg) values up to 10.5. Log KAC values for GAC ranged from 6.3 to 7.1 for PAHs and from 4.8 to 6.2 for PCBs. Log KAC values for GAC in the stripped sediment were 7.4-8.6 for PAHs and 5.8-7.7 for PCBs. Apparent first-order adsorption rate constants for GAC (kGAC) in the stripping scenario were calculated with a first-order kinetic model and ranged from 1.6 × 10⁻² d⁻¹ (PHE) to 1.7 × 10⁻⁵ d⁻¹ (InP). Sorption affinity parameters did not change within 9 months post treatment, confirming the longer-term effectiveness of AC in field applications for PAC and GAC. Copyright © 2014. Published by Elsevier Ltd.
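As an illustration of the first-order kinetic model mentioned above, the sketch below fits an apparent rate constant to invented aqueous-concentration data in Python; none of the numbers are from the study.

```python
# Sketch: fitting an apparent first-order adsorption rate constant (invented data).
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    """Aqueous concentration decays first-order from c0 toward zero."""
    return c0 * np.exp(-k * t)

t_days = np.array([0.0, 30.0, 60.0, 120.0, 180.0, 270.0])
c_aq   = np.array([10.0, 6.2, 3.9, 1.6, 0.7, 0.2])   # hypothetical, ng/L

(c0, k), _ = curve_fit(first_order, t_days, c_aq, p0=[10.0, 0.01])
print(f"apparent first-order rate constant k = {k:.2e} per day")
```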
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
Qian, Jiajie; Jennings, Brandon; Cwiertny, David M; Martinez, Andres
2017-11-15
We fabricated a suite of polymeric electrospun nanofiber mats (ENMs) and investigated their performance as next-generation passive sampler media for environmental monitoring of organic compounds. Electrospinning of common polymers [e.g., polyacrylonitrile (PAN), polymethyl methacrylate (PMMA), and polystyrene (PS), among others] yielded ENMs with reproducible control of nanofiber diameters (from 50 to 340 nm). The ENM performance was investigated initially with model hydrophilic (aniline and nitrobenzene) and hydrophobic (selected PCB congeners and dioxin) compounds, generally revealing fast chemical uptake into all of these ENMs, which was well described by a one-compartment, first-order kinetic model. Typical times to reach 90% equilibrium (t90%) were ≤7 days under mixing conditions for all the ENMs and <0.5 days for the best-performing materials under static (i.e., no mixing) conditions. Collectively, these short equilibrium timescales suggest that ENMs may be used in the field as equilibrium passive samplers, at least for our model compounds. Equilibrium partitioning coefficients (KENM-W, L kg⁻¹) averaged 2 and 4.7 log units for the hydrophilic and hydrophobic analytes, respectively. PAN, PMMA, and PS were prioritized for additional studies because they exhibited not only the greatest capacity for simultaneous uptake of the entire model suite (log KENM-W ≈ 1.5-6.2), but also fast uptake. For these optimized ENMs, the rates of uptake into PAN and PMMA were limited by aqueous-phase diffusion to the nanofiber surface, and the rate-determining step for PS was analyte specific. Sorption isotherms also revealed that the environmental application of these optimized ENMs would occur within the linear uptake regime. We examined the ENM performance for the measurement of pore water concentrations from spiked soil and freshwater sediments. Soil and sediment studies not only yielded reproducible pore water concentrations and values comparable to other passive sampler materials, but also provided practical insights into ENM stability and fouling in such systems. Furthermore, fast uptake for a suite of structurally diverse hydrophilic and moderately hydrophobic compounds was obtained for PAN and PS, with t90% ranging from 0.01 to 4 days with mixing and KENM-W values ranging from 1.3 to 3.2 log units. Our findings show promise for the development and use of ENMs as equilibrium passive samplers for a range of organic pollutants across soil/sediment and water systems.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, be actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
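One replicate of the kind of simulation described above can be sketched in Python as follows, with clusters randomized to treatment, an imbalanced binary cluster-level covariate, and an adjusted linear mixed model; all parameter values are invented.

```python
# Sketch: one simulation replicate of a cluster RCT with covariate imbalance,
# analyzed with an adjusted linear mixed model (invented parameter values).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_clusters, cluster_size, icc = 20, 30, 0.1

cluster = np.repeat(np.arange(n_clusters), cluster_size)
treat = np.repeat(rng.permutation([0] * 10 + [1] * 10), cluster_size)
# Imbalanced cluster-level covariate: more prevalent in the treatment arm
cov_cluster = rng.binomial(1, 0.3 + 0.4 * treat[::cluster_size])
covariate = np.repeat(cov_cluster, cluster_size)

u = rng.normal(scale=np.sqrt(icc), size=n_clusters)        # cluster random effects
y = (0.5 * treat + 0.4 * covariate + u[cluster]
     + rng.normal(scale=np.sqrt(1 - icc), size=len(cluster)))

df = pd.DataFrame({"y": y, "treat": treat, "cov": covariate, "cluster": cluster})
adj = smf.mixedlm("y ~ treat + cov", df, groups="cluster").fit()
print(f"adjusted treatment effect estimate: {adj.params['treat']:.3f}")
```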
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel
2014-01-15
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal probability density functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows prediction of the PDFs of scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
Chen, Juan; Chen, Hao; Zhang, Xing-wen; Lei, Kun; Kenny, Jonathan E
2015-11-01
A fluorescence quenching model using a copper(II) (Cu(2+)) ion-selective electrode (Cu-ISE) is developed. It uses parallel factor analysis (PARAFAC) to model fluorescence excitation-emission matrices (EEMs) of humic acid (HA) samples titrated with Cu(2+), to resolve the fluorescence response of fluorescent components to Cu(2+) titration. Meanwhile, the Cu-ISE is employed to monitor the free Cu(2+) concentration ([Cu]) at each titration step. The fluorescence response of each component is fit individually to a nonlinear function of [Cu] to find the Cu(2+) conditional stability constant for that component. This approach differs from other fluorescence quenching models, including the most up-to-date multi-response model, which makes a problematic assumption about Cu(2+) speciation, namely that the total Cu(2+) present in samples is the sum of [Cu] and Cu(2+) bound by fluorescent components, without taking into consideration the contribution of non-fluorescent organic ligands and inorganic ligands to the speciation of Cu(2+). This paper employs the new approach to investigate Cu(2+) binding by Pahokee peat HA (PPHA) at pH values of 6.0, 7.0, and 8.0, buffered by phosphate or without buffer. Two fluorescent components (C1 and C2) were identified by PARAFAC. For the new quenching model, the conditional stability constants (logK1 and logK2) of the two components all increased with increasing pH. In buffered solutions, the new quenching model reported logK1 = 7.11, 7.89, 8.04 for C1 and logK2 = 7.04, 7.64, 8.11 for C2 at pH 6.0, 7.0, and 8.0, respectively, nearly two log units higher than the results of the multi-response model. Without buffer, logK1 and logK2 decreased but were still high (>7) at pH 8.0 (logK1 = 7.54, logK2 = 7.95), and all the values were at least 0.5 log unit higher than those (4.83-5.55) of the multi-response model. These observations indicate that the new quenching model is intrinsically more sensitive than the multi-response model in revealing strong fluorescent binding sites of PPHA under different experimental conditions. The new model was validated by testing it with a mixture of two fluorescent Cu(2+)-chelating organic compounds, L-tryptophan and salicylic acid, mixed with one non-fluorescent binding compound, oxalic acid, and titrated with Cu(2+) at pH 5.0.
March, Jordon K; Pratt, Michael D; Lowe, Chinn-Woan; Cohen, Marissa N; Satterfield, Benjamin A; Schaalje, Bruce; O'Neill, Kim L; Robison, Richard A
2015-01-01
This study investigated (1) the susceptibility of Bacillus anthracis (Ames strain), Bacillus subtilis (ATCC 19659), and Clostridium sporogenes (ATCC 3584) spores to commercially available peracetic acid (PAA)- and glutaraldehyde (GA)-based disinfectants, (2) the effects that heat-shocking spores after treatment with these disinfectants has on spore recovery, and (3) the timing of heat-shocking after disinfectant treatment that promotes the optimal recovery of spores deposited on carriers. Suspension tests were used to obtain inactivation kinetics for the disinfectants against three spore types. The effects of heat-shocking spores after disinfectant treatment were also determined. Generalized linear mixed models were used to estimate 6-log reduction times for each spore type, disinfectant, and heat treatment combination. Reduction times were compared statistically using the delta method. Carrier tests were performed according to AOAC Official Method 966.04 and a modified version that employed immediate heat-shocking after disinfectant treatment. Carrier test results were analyzed using Fisher's exact test. PAA-based disinfectants had significantly shorter 6-log reduction times than the GA-based disinfectant. Heat-shocking B. anthracis spores after PAA treatment resulted in significantly shorter 6-log reduction times. Conversely, heat-shocking B. subtilis spores after PAA treatment resulted in significantly longer 6-log reduction times. Significant interactions were also observed between spore type, disinfectant, and heat treatment combinations. Immediately heat-shocking spore carriers after disinfectant treatment produced greater spore recovery. Sporicidal activities of disinfectants were not consistent across spore species. The effects of heat-shocking spores after disinfectant treatment were dependent on both disinfectant and spore species. Caution must be used when extrapolating sporicidal data of disinfectants from one spore species to another. Heat-shocking provides a more accurate picture of spore survival for only some disinfectant/spore combinations. Collaborative studies should be conducted to further examine a revision of AOAC Official Method 966.04 relative to heat-shocking.
Neurobehavioral function in school-age children exposed to manganese in drinking water.
Oulhote, Youssef; Mergler, Donna; Barbeau, Benoit; Bellinger, David C; Bouffard, Thérèse; Brodeur, Marie-Ève; Saint-Amour, Dave; Legrand, Melissa; Sauvé, Sébastien; Bouchard, Maryse F
2014-12-01
Manganese neurotoxicity is well documented in individuals occupationally exposed to airborne particulates, but few data are available on risks from drinking-water exposure. We examined associations of exposure from concentrations of manganese in water and hair with memory, attention, motor function, and parent- and teacher-reported hyperactive behaviors. We recruited 375 children and measured manganese in home tap water (MnW) and hair (MnH). We estimated manganese intake from water ingestion. Using structural equation modeling, we estimated associations between neurobehavioral functions and MnH, MnW, and manganese intake from water. We evaluated exposure-response relationships using generalized additive models. After adjusting for potential confounders, a 1-SD increase in log10 MnH was associated with a significant difference of -24% (95% CI: -36, -12%) SD in memory and -25% (95% CI: -41, -9%) SD in attention. The relations between log10 MnH and poorer memory and attention were linear. A 1-SD increase in log10 MnW was associated with a significant difference of -14% (95% CI: -24, -4%) SD in memory, and this relation was nonlinear, with a steeper decline in performance at MnW > 100 μg/L. A 1-SD increase in log10 manganese intake from water was associated with a significant difference of -11% (95% CI: -21, -0.4%) SD in motor function. The relation between log10 manganese intake and poorer motor function was linear. There was no significant association between manganese exposure and hyperactivity. Exposure to manganese in water was associated with poorer neurobehavioral performances in children, even at low levels commonly encountered in North America.
Association Between HIV-1 RNA Level and CD4 Cell Count Among Untreated HIV-Infected Individuals
Lima, Viviane D.; Fink, Valeria; Yip, Benita; Hogg, Robert S.; Harrigan, P. Richard
2009-01-01
Objectives. We examined the significance of plasma HIV-1 RNA levels (or viral load alone) in predicting CD4 cell decline in untreated HIV-infected individuals. Methods. Data were obtained from the British Columbia Centre for Excellence in HIV/AIDS. Participants included all residents who ever had a viral load determination in the province and who had never taken antiretroviral drugs (N = 890). We analyzed a total of 2074 viral load measurements and 2332 CD4 cell counts. Linear mixed-effects models were used to predict CD4 cell decline over time. Results. Longitudinal viral load was strongly associated with CD4 cell decline over time; an average of 1 log10 increase in viral load was associated with a 55-cell/mm3 decrease in CD4 cell count. Conclusions. Our results support the combined use of CD4 cell count and viral load as prognostic markers in HIV-infected individuals before the introduction of antiretroviral therapy.
Adeyinka, F D; Laven, R A; Lawrence, K E; van Den Bosch, M; Blankenvoorde, G; Parkinson, T J
2014-03-01
The aim of this study was to determine whether fetal age could be accurately estimated from placentome size. Fifty-eight cows with confirmed conception dates in two herds were used for the study. The length of the long axis and the cross-sectional area of placentomes close to the cervix were measured once every 10 days between approximately 60-130 days of gestation and once every 15 days between 130-160 days of gestation. Four to six placentomes were measured using transrectal ultrasonography in each uterine horn. A linear mixed model was used to establish the factors that were significantly associated with log mean placentome length and to create an equation to predict gestational age from mean placentome length. Limits of agreement analysis was then used to evaluate whether the predictions were sufficiently accurate for mean placentome length to be used, in practice, as a method of determining gestational age. Only age of gestation (p<0.001) and uterine horn (p=0.048) were found to have a significant effect on log mean placentome length. Of the three models used to predict gestational age, the one that used log mean placentome length of all placentomes, adjusting for the effect of horn, had the smallest 95% limits of agreement: ±33 days. That is, predicted gestational age had a 95% chance of being between 33 days greater and 33.7 days less than the actual age. This is approximately twice that reported in studies using measurement of fetal size. Measurement of placentomes near to the cervix using transrectal ultrasonography was easily achieved. There was a significant association between placentome size and gestational age, but between-cow variation in placentome size and growth resulted in poor agreement between placentome size and gestational age. Although placentomes can be easily visualised during diagnosis of pregnancy using transrectal ultrasonography, mean placentome size should not be used to estimate gestational age.
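The limits-of-agreement computation used above is simple enough to show directly; the paired predicted and actual ages below are invented for illustration.

```python
# Sketch: Bland-Altman 95% limits of agreement between predicted and actual
# gestational age (invented paired data).
import numpy as np

actual    = np.array([65, 80, 95, 110, 125, 140, 155], dtype=float)   # days
predicted = np.array([70, 72, 101, 99, 131, 150, 148], dtype=float)   # days

diff = predicted - actual
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias {bias:.1f} d, 95% limits of agreement "
      f"[{bias - half_width:.1f}, {bias + half_width:.1f}] d")
```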
Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O
1997-10-13
In the present work, a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed: a linear model, OD = b0 + b1·log(titer), and a nonlinear model, log(titer) = α·OD^β, similar to the Dopotka-Giesendorf model. The two proposed models adequately fit the dependence between the optical density values at a single-point dilution and the titers achieved by the end-point dilution method (EPDM). Nevertheless, the nonlinear model better fits the experimental data, according to residuals analysis. Classical EPDM was compared with the new single-point dilution method (SPDM) using both models. The best correlation between titers calculated using both models and titers achieved by EPDM was obtained with the nonlinear model. The correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model, and this reduced the day-to-day variation of titer values. In general, SPDM saves time and reagents and is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
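Both single-point-dilution models are easy to fit with standard tools; the Python sketch below uses invented OD/titer pairs and is not the authors' calibration.

```python
# Sketch: fitting the linear model OD = b0 + b1*log10(titer) and the
# nonlinear model log10(titer) = alpha * OD**beta (invented data).
import numpy as np
from scipy.optimize import curve_fit

titer = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)
od    = np.array([0.35, 0.62, 0.90, 1.21, 1.48, 1.80])

# Linear model: regress OD on log10(titer)
b1, b0 = np.polyfit(np.log10(titer), od, 1)

# Nonlinear model: log10(titer) as a power function of OD
power = lambda x, alpha, beta: alpha * x**beta
(alpha, beta), _ = curve_fit(power, od, np.log10(titer), p0=[3.0, 0.5])

print(f"linear: OD = {b0:.2f} + {b1:.2f} log10(titer)")
print(f"nonlinear: log10(titer) = {alpha:.2f} * OD^{beta:.2f}")
```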
Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B
2015-06-01
The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals that account for inter-trial variability, suitable for corresponding binary classification problems. An important constraint is that the model be simple enough to handle small size and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channels subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
Hollenbeak, Christopher S
2005-10-15
While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic, but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
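As a sketch of the Box-Cox family at issue here (lambda = 0 giving the log transform and lambda = -1 the reciprocal favored by the Bayes factors), the following Python lines transform synthetic skewed costs; the data are not from the study.

```python
# Sketch: Box-Cox transformation of skewed cost data, y(lambda) = (y**lambda - 1)/lambda.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
costs = rng.lognormal(mean=10.0, sigma=0.6, size=500)   # synthetic right-skewed "costs"

transformed, lam = stats.boxcox(costs)                  # ML estimate of lambda
print(f"ML Box-Cox lambda: {lam:.2f}")                  # near 0 for log-normal data

# Transform at the fixed lambda = -1 (reciprocal scale) selected in the study
y_recip = stats.boxcox(costs, lmbda=-1.0)
print(f"reciprocal-scale mean: {y_recip.mean():.4f}")
```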
Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka
2016-08-05
The soil-water partition coefficient normalized to the organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler, yet equally accurate, method than the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl-(RP18) and cyano-(CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total, 50 compounds of different molecular shape and size and various ability to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations, Principal Component Regression (PCR), and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibit good statistical performance, indicating that CN-layers contribute better to logKOC modeling than RP18-silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches were compared by the non-parametric Sum of Ranking Differences (SRD) approach. The best estimates of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and moderate content of methanol (40-50% v/v); they ranked far better than the officially recommended HPLC method, which ranked in the middle. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient. A Linear Solvation Energy Relationship study revealed that the increased polarity of CN-layers over RP18, in combination with methanol-water mixtures, is the key to better modeling of logKOC, through significant diminishing of the dipolar and proton-accepting influence of the mobile phase as well as enhancing molar refractivity in excess of the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.
A green vehicle routing problem with customer satisfaction criteria
NASA Astrophysics Data System (ADS)
Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.
2016-12-01
This paper develops an MILP model, named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model, in addition to the traditional objective of the VRP, both pollution and customers' satisfaction have been taken into account. The model provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, some new factors evaluate the greenness of each decision based on three criteria. This model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval for incorporating customers' satisfaction into other linear objectives. We have presented a mixed integer linear programming formulation for the S-GVRP. This model enriches managerial insights by providing trade-offs between customers' satisfaction, total costs, and emission levels. Finally, we have provided a numerical study showing the applicability of the model.
Mafart, P; Leguérinel, I; Couvert, O; Coroller, L
2010-08-01
The assessment and optimization of food heating processes require knowledge of the thermal resistance of target spores. Although the concept of spore resistance may seem simple, the establishment of a reliable quantification system for characterizing the heat resistance of spores has proven far more complex than imagined by early researchers. This paper points out the main difficulties encountered, by reviewing the historical work on the subject. During an early period, the concept of individual spore resistance had not yet been considered and the resistance of a strain of spore-forming bacterium was related to a global population regarded as alive or dead. A second period was opened by the introduction of the well-known D parameter (decimal reduction time) associated with the previously introduced z-concept. The present period has introduced three new sources of complexity: consideration of non log-linear survival curves, consideration of environmental factors other than temperature, and awareness of the variability of resistance parameters. The occurrence of non log-linear survival curves makes spore resistance dependent on heating time. Consequently, spore resistance characterization requires at least two parameters. While early resistance models took only heating temperature into account, new models consider other environmental factors such as pH and water activity ("horizontal extension"). Similarly, the new generation of models also considers certain environmental factors of the recovery medium for quantifying "apparent heat resistance" ("vertical extension"). Because the conventional F-value is no longer additive in cases of non log-linear survival curves, the decimal reduction ratio should be preferred for assessing the efficiency of a heating process. Copyright 2010 Elsevier Ltd. All rights reserved.
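The classical D- and z-value calculations discussed above can be shown in a few lines; the survivor counts below are hypothetical and assume exactly log-linear survival curves.

```python
# Sketch: D-values from log-linear survivor slopes, and z from the D ratio
# across temperatures (hypothetical counts).
import numpy as np

t_min = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
logN_110 = np.array([6.0, 5.8, 5.6, 5.4, 5.2])   # log10 survivors at 110 C
logN_120 = np.array([6.0, 4.0, 2.0, 0.0, -2.0])  # log10 survivors at 120 C

# D: minutes per decimal reduction, from the log-linear slope
D110 = -1.0 / np.polyfit(t_min, logN_110, 1)[0]
D120 = -1.0 / np.polyfit(t_min, logN_120, 1)[0]

# z: temperature increase that divides D by ten
z = (120.0 - 110.0) / (np.log10(D110) - np.log10(D120))
print(f"D110 = {D110:.1f} min, D120 = {D120:.1f} min, z = {z:.1f} C")
```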
Dynamic predictive model for growth of Salmonella spp. in scrambled egg mix.
Li, Lin; Cepeda, Jihan; Subbiah, Jeyamkondan; Froning, Glenn; Juneja, Vijay K; Thippareddi, Harshavardhan
2017-06-01
Liquid egg products can be contaminated with Salmonella spp. during processing. A dynamic model for the growth of Salmonella spp. in scrambled egg mix - high solids (SEM) was developed and validated. SEM was prepared and inoculated with ca. 2 log CFU/mL of a five-serovar Salmonella spp. cocktail. Salmonella spp. growth data at isothermal temperatures (10, 15, 20, 25, 30, 35, 37, 39, 41, 43, 45, and 47 °C) in SEM were collected. The Baranyi model was used as the primary model to fit the growth data, and the maximum growth rate and lag phase duration for each temperature were determined. A secondary model was developed with the maximum growth rate as a function of temperature. The model performance measures, root mean squared error (RMSE, 0.09) and pseudo-R² (1.00), indicated good fit for both primary and secondary models. A dynamic model was developed by integrating the primary and secondary models and validated using two sinusoidal temperature profiles, 5-15 °C (low temperature) for 480 h and 10-40 °C (high temperature) for 48 h. The RMSE values for the sinusoidal low and high temperature profiles were 0.47 and 0.42 log CFU/mL, respectively. The model can be used to predict Salmonella spp. growth in case of temperature abuse during liquid egg processing. Copyright © 2016. Published by Elsevier Ltd.
Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong
2016-04-07
Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platts, J.A.; Abraham, M.H.
The partitioning of organic compounds between air and foliage and between water and foliage is of considerable environmental interest. The purpose of this work is to show that partitioning into the cuticular matrix of one particular species can be satisfactorily modeled by general equations the authors have previously developed and, hence, that the same general equations could be used to model partitioning into other plant materials of the same or different species. The general equations are linear free energy relationships that employ descriptors for polarity/polarizability, hydrogen bond acidity and basicity, dispersive effects, and volume. They have been applied to the partition of 62 very varied organic compounds between the cuticular matrix of the tomato fruit, Lycopersicon esculentum, and either air (MXa) or water (MXw). Values of log MXa covering a range of 12.4 log units are correlated with a standard deviation of 0.232 log unit, and values of log MXw covering a range of 7.6 log units are correlated with an SD of 0.236 log unit. Possibilities are discussed for the prediction of new air-plant cuticular matrix and water-plant cuticular matrix partition values on the basis of the equations developed.
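The general form of such a linear free energy relationship can be sketched as follows; the coefficient and descriptor values below are placeholders, not the fitted tomato-cuticle equations.

```python
# Sketch of a generic Abraham-type LFER, log SP = c + e*E + s*S + a*A + b*B + v*V,
# with placeholder coefficients and solute descriptors (not the paper's values).
def log_partition(E, S, A, B, V,
                  c=0.3, e=0.5, s=-1.0, a=-0.8, b=-3.6, v=3.9):
    """Predicted log partition value from solute descriptors."""
    return c + e * E + s * S + a * A + b * B + v * V

# Hypothetical solute descriptors (roughly toluene-like)
print(f"predicted log MXw-type value: {log_partition(0.60, 0.52, 0.0, 0.14, 0.857):.2f}")
```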
Stallard, Robert F.; Murphy, Sheila F.
2014-01-01
An examination of the relation between runoff rate, R, and concentration, C, of twelve major constituents in four small watersheds in eastern Puerto Rico demonstrates a consistent pattern of responses. For solutes that are not substantially bioactive (alkalinity, silica, calcium, magnesium, sodium, and chloride), the log(R)–log(C) relation is almost linear and can be described as a weighted average of two sources, bedrock weathering and atmospheric deposition. The slope of the relation for each solute depends on the respective source contributions to the total river load. If a solute were strictly derived from bedrock weathering, the slope would be −0.3 to −0.4, whereas if strictly derived from atmospheric deposition, the slope would be approximately −0.1. The bioactive constituents (dissolved organic carbon, nitrate, sulfate, and potassium), which are recycled by plants and concentrated in shallow soil, demonstrate nearly flat or downward-arched log(R)–log(C) relations. The peak of the arch represents a transition from dominantly soil-matrix flow to near-surface macropore flow, and finally to overland flow. At the highest observed R (80 to >90 mm/h), essentially all reactive surfaces have become wetted, and the input rate of C becomes independent of R (log(R)–log(C) slope of –1). The highest R values are tenfold greater than in any previous study. Slight clockwise hysteresis for many solutes in the rivers with riparian zones or substantial hyporheic flows indicates that these settings may act as mixing end-members. Particulate constituents (suspended sediment and particulate organic carbon) show slight clockwise hysteresis, indicating mobilization of stored sediment during rising stage.
Shuryak, Igor; Loucas, Bradford D.; Cornforth, Michael N.
2017-01-01
Recent technological advances allow precise radiation delivery to tumor targets. As opposed to more conventional radiotherapy, where multiple small fractions are given, in some cases the preferred course of treatment may involve only a few (or even one) large dose(s) per fraction. Under these conditions, the choice of appropriate radiobiological model complicates the tasks of predicting radiotherapy outcomes and designing new treatment regimens. The most commonly used model for this purpose is the venerable linear-quadratic (LQ) formalism as it applies to cell survival. However, predictions based on the LQ model are frequently at odds with data following very high acute doses. In particular, although the LQ predicts a continuously bending dose–response relationship for the logarithm of cell survival, empirical evidence over the high-dose region suggests that the survival response is instead log-linear with dose. Here, we show that the distribution of lethal chromosomal lesions among individual human cells (lymphocytes and fibroblasts) exposed to gamma rays and X rays is somewhat overdispersed, compared with the Poisson distribution. Further, we show that such overdispersion affects the predicted dose response for cell survival (the fraction of cells with zero lethal lesions). This causes the dose response to approximate log-linear behavior at high doses, even when the mean number of lethal lesions per cell is well fitted by the continuously curving LQ model. Accounting for overdispersion of lethal lesions provides a novel, mechanistically based explanation for the observed shapes of cell survival dose responses that, in principle, may offer a tractable and clinically useful approach for modeling the effects of high doses per fraction.
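The core argument can be reproduced numerically: survival is the probability of zero lethal lesions, so replacing the Poisson zero class with an overdispersed (negative-binomial) zero class straightens the high-dose tail even when the mean follows the LQ form. All parameter values below are illustrative.

```python
# Sketch: survival as P(zero lethal lesions) under Poisson vs. an
# overdispersed negative-binomial lesion distribution (illustrative parameters).
import numpy as np

alpha, beta, k = 0.2, 0.05, 2.0              # LQ coefficients and NB dispersion
dose = np.linspace(0, 15, 7)
m = alpha * dose + beta * dose**2            # mean lethal lesions per cell (LQ form)

surv_poisson = np.exp(-m)                    # Poisson zero class
surv_overdisp = (1.0 + m / k) ** (-k)        # negative-binomial zero class

for d, sp, so in zip(dose, surv_poisson, surv_overdisp):
    print(f"D={d:5.1f} Gy  Poisson S={sp:.2e}  overdispersed S={so:.2e}")
```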
NASA Astrophysics Data System (ADS)
Manolakis, Dimitris G.
2004-10-01
The linear mixing model is widely used in hyperspectral imaging applications to model the reflectance spectra of mixed pixels in the SWIR atmospheric window or the radiance spectra of plume gases in the LWIR atmospheric window. In both cases it is important to detect the presence of materials or gases and then estimate their amount, if they are present. The detection and estimation algorithms available for these tasks are related but they are not identical. The objective of this paper is to theoretically investigate how the heavy tails observed in hyperspectral background data affect the quality of abundance estimates, and whether the F-test, used for endmember selection, is robust to the presence of heavy tails when the model fits the data.
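A common way to estimate abundances under the linear mixing model with a nonnegativity constraint is nonnegative least squares; the sketch below uses random stand-in endmember spectra, and NNLS is one standard estimator rather than necessarily the one analyzed in the paper.

```python
# Sketch: linear mixing model unmixing, x = E a + n with a >= 0, via NNLS
# (random stand-in endmember spectra, not real library spectra).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
bands, n_endmembers = 50, 3
E = np.abs(rng.normal(size=(bands, n_endmembers)))    # endmember matrix
a_true = np.array([0.6, 0.3, 0.1])                    # true abundances
pixel = E @ a_true + rng.normal(scale=0.01, size=bands)

a_hat, resid = nnls(E, pixel)
print("estimated abundances:", np.round(a_hat, 3))
```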
NASA Astrophysics Data System (ADS)
Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie
2018-05-01
Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units, and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM), and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and single-level modeling of soil units and sample points, respectively. Additionally, three variance functions, the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heteroscedasticity, and three correlation structures, the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS), were used to address spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity (with CPP) and spatiotemporal correlation (with ARMA(1,1)) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R² = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
Fuertes, Elaine; Flohr, Carsten; Silverberg, Jonathan I; Standl, Marie; Strachan, David P
2017-06-01
We sought to examine the global relationship between UVR dose exposure and current eczema prevalence. ISAAC Phase Three provided data on eczema prevalence for 13- to 14-year-olds in 214 centers in 87 countries and for 6- to 7-year-olds in 132 centers in 57 countries. Linear and nonlinear associations between (natural log transformed) eczema prevalence and the mean, maximum, minimum, standard deviation, and range of monthly UV dose exposures were assessed using linear mixed-effects regression models. For the 13- to 14-year-olds, country-level eczema prevalence was positively and linearly associated with country-level monthly mean (prevalence ratio = 1.31 [95% confidence interval = 1.05-1.63] per kJ/m²) and minimum (1.25 [1.06-1.47] per kJ/m²) UVR dose exposure. Linear and nonlinear associations were also observed for other UV metrics. Results were similar in trend, but nonsignificant, for the fewer centers with 6- to 7-year-olds (e.g., 1.24 [0.96-1.59] per kJ/m² for country-level monthly mean UVR). No consistent within-country associations were observed (e.g., 1.05 [0.89-1.23] and 0.92 [0.71-1.18] per kJ/m² for center-level monthly mean UVR for the 13- to 14- and 6- to 7-year-olds, respectively). These ecological results support a role for UVR exposure in explaining some of the variation in global childhood eczema prevalence. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Cooley, Richard L.
1993-01-01
A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.
Recall of past use of mobile phone handsets.
Parslow, R C; Hepworth, S J; McKinney, P A
2003-01-01
Previous studies investigating health effects of mobile phones have based their estimation of exposure on self-reported levels of phone use. This UK validation study assesses the accuracy of reported voice calls made from mobile handsets. Data collected by postal questionnaire from 93 volunteers were compared to records obtained prospectively over 6 months from four network operators. Agreement was measured for outgoing calls using the kappa statistic, log-linear modelling, the Spearman correlation coefficient and graphical methods. Agreement for number of calls was moderate (kappa = 0.39), with better agreement for duration (kappa = 0.50). Log-linear modelling produced similar results. The Spearman correlation coefficient was 0.48 for number of calls and 0.60 for duration. Graphical agreement methods demonstrated patterns of over-reporting of call numbers (by a factor of 1.7) and duration (by a factor of 2.8). These results suggest that self-reported mobile phone use may not fully represent patterns of actual use. This has implications for calculating exposures from questionnaire data.
Zheng, Han; Kimber, Alan; Goodwin, Victoria A; Pickering, Ruth M
2018-01-01
A common design for a falls prevention trial is to assess falling at baseline, randomize participants into an intervention or control group, and ask them to record the number of falls they experience during a follow-up period of time. This paper addresses how best to include the baseline count in the analysis of the follow-up count of falls in negative binomial (NB) regression. We examine the performance of various approaches in simulated datasets where both counts are generated from a mixed Poisson distribution with shared random subject effect. Including the baseline count after log-transformation as a regressor in NB regression (NB-logged) or as an offset (NB-offset) resulted in greater power than including the untransformed baseline count (NB-unlogged). Cook and Wei's conditional negative binomial (CNB) model replicates the underlying process generating the data. In our motivating dataset, a statistically significant intervention effect resulted from the NB-logged, NB-offset, and CNB models, but not from NB-unlogged, and large, outlying baseline counts were overly influential in NB-unlogged but not in NB-logged. We conclude that there is little to lose by including the log-transformed baseline count in standard NB regression compared to CNB for moderate to larger sized datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yadav, Mukesh; Joshi, Shobha; Nayarisseri, Anuraj; Jain, Anuja; Hussain, Aabid; Dubey, Tushar
2013-06-01
Global QSAR models predict the biological response of molecular structures that are generic to a particular class. A global QSAR dataset admits structural features derived from a larger chemical space, making it intricate to model but more applicable in medicinal chemistry. The present work is global in both senses: structural diversity of the QSAR dataset and a large number of descriptor inputs. Forty phenethylamine structure derivatives were selected from a large pool (904) of similar phenethylamines available in the PubChem database. LogP values of the selected candidates were collected from the physical properties database (PHYSPROP), determined under an identical set of conditions. Attempts to model the logP value produced significant QSAR models. MLR-aided linear one-variable and two-variable QSAR models, with respective R² (0.866, 0.937), adjusted R² (0.862, 0.932), F-statistics (181.936, 199.812) and standard errors (0.365, 0.255), are statistically fit and were found predictive after internal and external validation. The descriptors chosen after refinement and optimization reveal the mechanistic part of the work: the Verhaar model of fish baseline toxicity from MLOGP (BLTF96), and the 3D-MoRSE signal 15/unweighted molecular descriptor, calculated by summing atom weights viewed by a different angular scattering function (Mor15u), are crucial in the regulation of logP values of phenethylamines.
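A sketch of how a two-descriptor MLR QSAR fit of this kind is specified and summarized in R; the data frame and descriptor column names are hypothetical:

```r
# Hypothetical data frame 'qsar' with columns logP, BLTF96, Mor15u
fit2 <- lm(logP ~ BLTF96 + Mor15u, data = qsar)
summary(fit2)  # reports R^2, adjusted R^2, F-statistic, residual standard error
```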
NASA Astrophysics Data System (ADS)
Afshari, Saied; Hejazi, S. Hossein; Kantzas, Apostolos
2018-05-01
Miscible displacement of fluids in porous media is often characterized by the scaling of the mixing zone length with displacement time. Depending on the viscosity contrast of fluids, the scaling law varies between the square root relationship, a sign for dispersive transport regime during stable displacement, and the linear relationship, which represents the viscous fingering regime during an unstable displacement. The presence of heterogeneities in a porous medium significantly affects the scaling behavior of the mixing length as it interacts with the viscosity contrast to control the mixing of fluids in the pore space. In this study, the dynamics of the flow and transport during both unit and adverse viscosity ratio miscible displacements are investigated in heterogeneous packings of circular grains using pore-scale numerical simulations. The pore-scale heterogeneity level is characterized by the variations of the grain diameter and velocity field. The growth of mixing length is employed to identify the nature of the miscible transport regime at different viscosity ratios and heterogeneity levels. It is shown that as the viscosity ratio increases to higher adverse values, the scaling law of mixing length gradually shifts from dispersive to fingering nature up to a certain viscosity ratio and remains almost the same afterwards. In heterogeneous media, the mixing length scaling law is observed to be generally governed by the variations of the velocity field rather than the grain size. Furthermore, the normalization of mixing length temporal plots with respect to the governing parameters of viscosity ratio, heterogeneity, medium length, and medium aspect ratio is performed. The results indicate that mixing length scales exponentially with log-viscosity ratio and grain size standard deviation while the impact of aspect ratio is insignificant. For stable flows, mixing length scales with the square root of medium length, whereas it changes linearly with length during unstable flows. This scaling procedure allows us to describe the temporal variation of mixing length using a generalized curve for various combinations of the flow conditions and porous medium properties.
NASA Astrophysics Data System (ADS)
He, Xiao; Hu, Hengshan; Wang, Xiuming
2013-01-01
Sedimentary rocks can exhibit strong permeability anisotropy due to layering, pre-stresses and the presence of aligned microcracks or fractures. In this paper, we develop a modified cylindrical finite-difference algorithm to simulate the borehole acoustic wavefield in a saturated poroelastic medium with transverse isotropy of permeability and tortuosity. A linear interpolation process is proposed to guarantee the leapfrog finite difference scheme for the generalized dynamic equations and Darcy's law for anisotropic porous media. First, the modified algorithm is validated by comparison against the analytical solution when the borehole axis is parallel to the symmetry axis of the formation. The same algorithm is then used to numerically model the dipole acoustic log in a borehole with its axis being arbitrarily deviated from the symmetry axis of transverse isotropy. The simulation results show that the amplitudes of flexural modes vary with the dipole orientation because the permeability tensor of the formation is dependent on the wellbore azimuth. It is revealed that the attenuation of the flexural wave increases approximately linearly with the radial permeability component in the direction of the transmitting dipole. Particularly, when the borehole axis is perpendicular to the symmetry axis of the formation, it is possible to estimate the anisotropy of permeability by evaluating attenuation of the flexural wave using a cross-dipole sonic logging tool according to the results of sensitivity analyses. Finally, the dipole sonic logs in a deviated borehole surrounded by a stratified porous formation are modelled using the proposed finite difference code. Numerical results show that the arrivals and amplitudes of transmitted flexural modes near the layer interface are sensitive to the wellbore inclination.
Using Log Linear Analysis for Categorical Family Variables.
ERIC Educational Resources Information Center
Moen, Phyllis
The Goodman technique of log linear analysis is ideal for family research, because it is designed for categorical (non-quantitative) variables. Variables are dichotomized (for example, married/divorced, childless/with children) or otherwise categorized (for example, level of permissiveness, life cycle stage). Contingency tables are then…
Magezi, David A
2015-01-01
Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).
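Under the hood, the kind of within-participant model LMMgui constructs reduces to a few lines of lme4; the data frame and variable names below are hypothetical:

```r
library(lme4)

# Hypothetical within-participant design: rt = response, cond = condition,
# subj = participant identifier
m <- lmer(rt ~ cond + (1 + cond | subj), data = dat)  # random intercept and slope
summary(m)
```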
Sample Introduction Using the Hildebrand Grid Nebulizer for Plasma Spectrometry
1988-01-01
Flow injection analysis (FIA) with ICP-OES detection, using the Hildebrand grid nebulizer for sample introduction, was evaluated. Detection limits, linear dynamic ranges, precision, and peak widths were determined for elements in methanol and acetonitrile solutions. [Remainder of record garbled; recoverable figure listings: log concentration vs. log peak area for Mn, Cd, Zn, Au, and Ni in methanol.]
A comparison of moment-based methods of estimation for the log Pearson type 3 distribution
NASA Astrophysics Data System (ADS)
Koutrouvelis, I. A.; Canavos, G. C.
2000-06-01
The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for nonoptimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
Real longitudinal data analysis for real people: building a good enough mixed model.
Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E
2010-02-20
Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, and for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
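A minimal sketch of the centering/scaling advice applied before fitting a longitudinal mixed model in R; the data frame and variable names are hypothetical:

```r
library(lme4)

# Hypothetical longitudinal data 'dat': y = outcome, time = measurement occasion,
# id = subject. Centering and scaling predictors aids convergence and accuracy.
dat$time_z <- as.numeric(scale(dat$time))            # centered and scaled predictor
m <- lmer(y ~ time_z + (1 + time_z | id), data = dat)  # random intercept and slope
```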
Mor, Orna; Gozlan, Yael; Wax, Marina; Mileguir, Fernando; Rakovsky, Avia; Noy, Bina; Mendelson, Ella; Levy, Itzchak
2015-11-01
HIV-1 RNA monitoring, both before and during antiretroviral therapy, is an integral part of HIV management worldwide. Measurements of HIV-1 viral loads are expected to assess the copy numbers of all common HIV-1 subtypes accurately and to be equally sensitive at different viral loads. In this study, we compared for the first time the performance of the NucliSens v2.0, RealTime HIV-1, Aptima HIV-1 Quant Dx, and Xpert HIV-1 viral load assays. Plasma samples (n = 404) were selected on the basis of their NucliSens v2.0 viral load results and HIV-1 subtypes. Concordance, linear regression, and Bland-Altman plots were assessed, and mixed-model analysis was utilized to compare the analytical performance of the assays for different HIV-1 subtypes and for low and high HIV-1 copy numbers. Overall, high concordance (>83.89%), high correlation values (Pearson r values of >0.89), and good agreement were observed among all assays, although the Xpert and Aptima assays, which provided the most similar outputs (estimated mean viral loads of 2.67 log copies/ml [95% confidence interval [CI], 2.50 to 2.84 log copies/ml] and 2.68 log copies/ml [95% CI, 2.49 to 2.86 log copies/ml], respectively), correlated best with the RealTime assay (89.8% concordance, with Pearson r values of 0.97 to 0.98). These three assays exhibited greater precision than the NucliSens v2.0 assay. All assays were equally sensitive for subtype B and AG/G samples and for samples with viral loads of 1.60 to 3.00 log copies/ml. The NucliSens v2.0 assay underestimated A1 samples and those with viral loads of >3.00 log copies/ml. The RealTime assay tended to underquantify subtype C (compared to the Xpert and Aptima assays) and subtype A1 samples. The Xpert and Aptima assays were equally efficient for detection of all subtypes and viral loads, which renders these new assays most suitable for clinical HIV laboratories. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Wittkopp, Felix; Peeck, Lars; Hafner, Mathias; Frech, Christian
2018-04-13
Process development and characterization based on mathematical modeling provides several advantages and has been applied more frequently over the last few years. In this work, a Donnan equilibrium ion exchange (DIX) model is applied to the modelling and simulation of ion exchange chromatography of a monoclonal antibody in linear chromatography. Four different cation exchange resin prototypes consisting of weak, strong and mixed ligands are characterized using pH and salt gradient elution experiments with the extended DIX model. The modelling results are compared with the results of a classic stoichiometric displacement model. The Donnan equilibrium model is able to describe all four prototype resins, while the stoichiometric displacement model fails for the weak and mixed weak/strong ligands. Finally, in silico chromatogram simulations of pH and pH/salt dual gradients are performed to verify the results and to show the consistency of the developed model. Copyright © 2018 Elsevier B.V. All rights reserved.
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
Strange mode instabilities and mass loss in evolved massive primordial stars
NASA Astrophysics Data System (ADS)
Yadav, Abhay Pratap; Kühnrich Biavatti, Stefan Henrique; Glatzel, Wolfgang
2018-04-01
A linear stability analysis of models for evolved primordial stars with masses between 150 and 250 M⊙ is presented. Strange mode instabilities with growth rates in the dynamical range are identified for stellar models with effective temperatures below log Teff = 4.5. For selected models, the final fate of the instabilities is determined by numerical simulation of their evolution into the non-linear regime. As a result, the instabilities lead to finite amplitude pulsations. Associated with them are acoustic energy fluxes capable of driving stellar winds with mass-loss rates in the range between 7.7 × 10-7 and 3.5 × 10-4 M⊙ yr-1.
Turbulence closure for mixing length theories
NASA Astrophysics Data System (ADS)
Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.
2018-05-01
We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.
Mutch, L.S.; Parsons, D.J.
1998-01-01
Pre- and post-burn tree mortality rates, size structure, basal area, and ingrowth were determined for four 1.0 ha mixed conifer forest stands in the Log Creek and Tharp's Creek watersheds of Sequoia National Park. Mean annual mortality between 1986 and 1990 was 0.8% for both watersheds. In the fall of 1990, the Tharp's Creek watershed was treated with a prescribed burn. Between 1991 and 1995, mean annual mortality was 1.4% in the unburned Log Creek watershed and 17.2% in the burned Tharp's Creek watershed. A drought from 1987 to 1992 likely contributed to the mortality increase in the Log Creek watershed. The high mortality in the Tharp's Creek watershed was primarily related to crown scorch from the 1990 fire and was modeled with logistic regression for white fir (Abies concolor [Gord. and Glend.]) and sugar pine (Pinus lambertiana [Dougl.]). From 1989 to 1994, basal area declined an average of 5% per year in the burned Tharp's Creek watershed, compared to average annual increases of less than 1% per year in the unburned Log Creek watershed and in the Tharp's watershed prior to burning. Post-burn size structure was dramatically changed in the Tharp's Creek stands: 75% of trees ≤50 cm and 25% of trees >50 cm were killed by the fire.
Evaluating and improving the representation of heteroscedastic errors in hydrological models
NASA Astrophysics Data System (ADS)
McInerney, D. J.; Thyer, M. A.; Kavetski, D.; Kuczera, G. A.
2013-12-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic predictions. In particular, residual errors of hydrological models are often heteroscedastic, with large errors associated with high rainfall and runoff events. Recent studies have shown that a weighted least squares (WLS) approach, where the magnitude of the residuals is assumed to be linearly proportional to the magnitude of the flow, captures some of this heteroscedasticity. In this study we explore a range of Bayesian approaches for improving the representation of heteroscedasticity in residual errors. We compare several improved formulations of the WLS approach, the well-known Box-Cox transformation and the more recent log-sinh transformation. Our results confirm that these approaches are able to stabilize the residual error variance, and that it is possible to improve the representation of heteroscedasticity compared with the linear WLS approach. We also find generally good performance of the Box-Cox and log-sinh transformations, although, as indicated in earlier publications, the Box-Cox transform sometimes produces unrealistically large prediction limits. Our work explores the trade-offs between these different uncertainty characterization approaches, investigates how their performance varies across diverse catchments and models, and recommends practical approaches suitable for large-scale applications.
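For reference, the two flow transformations compared against the WLS schemes can be written as small R functions; the parameterizations below follow common usage in the hydrological literature, and the parameters lambda, a and b are hypothetical inputs to be estimated:

```r
# Box-Cox transform of flow q > 0 with parameter lambda
box_cox <- function(q, lambda) {
  if (abs(lambda) < .Machine$double.eps) log(q) else (q^lambda - 1) / lambda
}

# log-sinh transform with parameters a and b > 0 (as commonly parameterized)
log_sinh <- function(q, a, b) log(sinh(a + b * q)) / b
```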
Symmetric log-domain diffeomorphic Registration: a demons-based approach.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2008-01-01
Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.
Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G
2016-12-01
The relationship between the chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis methods. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (EHOMO), molecular volume (V), partition coefficient (logP), non-covalent interactions NCI(H4-G) and the dual descriptor [Δf(r)]. The model yielded values of R² = 79.57 and Q² = 69.67 that were validated with four internal analytical validations (DK = 0.076, DQ = -0.006, RP = 0.056, and RN = 0.000) and the external validation Q²boot = 64.26. The QSAR model found can be used to estimate biological activity with high reliability for new compounds based on the DHP series. Graphical abstract: The good correlation between log IC50 and the NCI(H4-G) estimated by the reduced density gradient approach for the DHP derivatives.
Reflectance of micron-sized dust particles retrieved with the Umov law
NASA Astrophysics Data System (ADS)
Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy
2017-03-01
The maximum positive polarization Pmax that initially unpolarized light acquires when scattered from a particulate surface inversely correlates with its geometric albedo A. In the literature, this phenomenon is known as the Umov law. We investigate the Umov law in application to single-scattering submicron and micron-sized agglomerated debris particles, model particles that have highly irregular morphology. We find that if the complex refractive index m is constrained to Re(m)=1.4-1.7 and Im(m)=0-0.15, model particles of a given size distribution have a linear inverse correlation between log(Pmax) and log(A). This correlation resembles what is measured in particulate surfaces, suggesting a similar mechanism governing the Umov law in both systems. We parameterize the dependence of log(A) on log(Pmax) of single-scattering particles and analyze the airborne polarimetric measurements of atmospheric aerosols reported by Dolgos & Martins in [1]. We conclude that Pmax ≈ 50% measured by Dolgos & Martins corresponds to very dark aerosols having geometric albedo A=0.019 ± 0.005.
Karimi, Hamid Reza; Gao, Huijun
2008-07-01
A mixed H2/H∞ output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/H∞ performance for the closed-loop system of the second-order neutral linear system, is then developed directly, instead of coupling the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/H∞ output-feedback control design using some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected in complex data structures, involving multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
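A minimal sketch of such a model in R's lme4, with crossed random effects for readers and words; the data frame, columns, and predictors are hypothetical:

```r
library(lme4)

# Hypothetical item-response data 'dat': correct = 0/1 response,
# reader and word identifiers, a word-level covariate and an instruction factor
m <- glmer(correct ~ word_freq + instruction + (1 | reader) + (1 | word),
           data = dat, family = binomial)
summary(m)
```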
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
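A minimal sketch of the two-part formulation described above, fitted with lme4 in R; the survey data frame and variable names are hypothetical:

```r
library(lme4)

# Hypothetical survey data 'svy': y = semicontinuous response (>= 0),
# x = covariate, area = small-area identifier
svy$pos <- as.numeric(svy$y > 0)

m1 <- glmer(pos ~ x + (1 | area), data = svy, family = binomial)  # P(y > 0)
m2 <- lmer(log(y) ~ x + (1 | area), data = subset(svy, y > 0))    # positive part, log scale
```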
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
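A sketch of the time-stratified case-crossover fit via conditional logistic regression in R; the data frame and variable names are hypothetical:

```r
library(survival)

# Hypothetical data 'cc': case = 1 for event days, 0 for referent days;
# pm10 = exposure; stratum = time stratum (e.g., year x month x weekday)
m <- clogit(case ~ pm10 + strata(stratum), data = cc)
summary(m)
```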
Anstey, Chris M
2005-06-01
Currently, three strong ion models exist for the determination of plasma pH. Mathematically, they vary in their treatment of weak acids, and this study was designed to determine whether any significant differences exist in the simulated performance of these models. The models were subjected to a "metabolic" stress, either in the form of variable strong ion difference and fixed weak acid effect, or vice versa, and compared over the range 25 ≤ PCO2 ≤ 135 Torr. The predictive equations for each model were iteratively solved for pH at each PCO2 step, and the results were plotted as a series of log(PCO2)-pH titration curves. The results were analyzed for linearity by using ordinary least squares regression and for collinearity by using correlation. In every case, the results revealed a linear relationship between log(PCO2) and pH over the range 6.8 ≤ pH ≤ 7.8, and no significant difference between the curve predictions under metabolic stress. The curves were statistically collinear. Ultimately, their clinical utility will be determined both by acceptance of the strong ion framework for describing acid-base physiology and by the ease of measurement of the independent model parameters.
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
Quasi-equilibrium analysis of the ion-pair mediated membrane transport of low-permeability drugs.
Miller, Jonathan M; Dahan, Arik; Gupta, Deepak; Varghese, Sheeba; Amidon, Gordon L
2009-07-01
The aim of this research was to gain a mechanistic understanding of ion-pair mediated membrane transport of low-permeability drugs. Quasi-equilibrium mass transport analyses were developed to describe the ion-pair mediated octanol-buffer partitioning and hydrophobic membrane permeation of the model basic drug phenformin. Three lipophilic counterions were employed: p-toluenesulfonic acid, 2-naphthalenesulfonic acid, and 1-hydroxy-2-naphthoic acid (HNAP). Association constants and intrinsic octanol-buffer partition coefficients (Log P(AB)) of the ion-pairs were obtained by fitting a transport model to double reciprocal plots of apparent octanol-buffer distribution coefficients versus counterion concentration. All three counterions enhanced the lipophilicity of phenformin, with HNAP providing the greatest increase in Log P(AB), 3.7 units over phenformin alone. HNAP also enhanced the apparent membrane permeability of phenformin, 27-fold in the PAMPA model, and 4.9-fold across Caco-2 cell monolayers. As predicted from a quasi-equilibrium analysis of ion-pair mediated membrane transport, an order of magnitude increase in phenformin flux was observed per log increase in counterion concentration, such that log-log plots of phenformin flux versus HNAP concentration gave linear relationships. These results provide increased understanding of the underlying mechanisms of ion-pair mediated membrane transport, emphasizing the potential of this approach to enable oral delivery of low-permeability drugs.
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
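A sketch of computing the probit-link ordinal superiority measure from a fitted cumulative link model in R; the data and the coefficient name are hypothetical, and the sign convention of the fitting routine should be checked before interpreting the result:

```r
library(MASS)

# Hypothetical data 'dat': y = ordered factor response, g = binary group indicator
fit  <- polr(y ~ g, data = dat, method = "probit")
beta <- coef(fit)[["g"]]  # group-effect coefficient (name is hypothetical)
pnorm(beta / sqrt(2))     # ordinal superiority measure Phi(beta / sqrt(2))
```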
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. Ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by the Akaike information criterion, the Bayesian information criterion and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). The LME model was then compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R²). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
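A sketch of the OLS-versus-LME comparison in R's nlme, using the information criteria named above; the data frame and column names are hypothetical:

```r
library(nlme)

# Hypothetical tree data 'trees2': cw = crown width, dbh = diameter at breast
# height, ht = tree height, plot = sample plot identifier
ols  <- gls(cw ~ dbh + ht, data = trees2, method = "ML")
lme1 <- lme(cw ~ dbh + ht, random = ~ 1 | plot, data = trees2, method = "ML")
anova(ols, lme1)  # compares AIC, BIC and log-likelihood of the two fits
```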
Toziou, Peristera-Maria; Barmpalexis, Panagiotis; Boukouvala, Paraskevi; Verghese, Susan; Nikolakakis, Ioannis
2018-05-30
Since culture-based methods are costly and time consuming, alternative methods are being investigated for the quantification of probiotics in commercial products. In this work, ATR-FTIR vibration spectroscopy was applied for the differentiation and quantification of live Lactobacillus (La 5) in mixed populations of live and killed La 5, in the absence and in the presence of the enteric polymer Eudragit® L 100-55. Suspensions of live (La 5_L) and acid-killed (La 5_K) bacillus were prepared, and binary mixtures of different percentages were used to grow cell cultures for colony counting and spectral analysis. The increase in the number of colonies with added %La 5_L in the mixture was log-linear (r² = 0.926). Differentiation of La 5_L from La 5_K was possible directly from the peak area at 1635 cm⁻¹ (amides of proteins and peptides), and a linear relationship between %La 5_L and peak area in the range 0-95% was obtained. Application of partial least squares regression (PLSR) gave reasonable prediction of %La 5_L (RMSEp = 6.48) in binary mixtures of live and killed La 5, but poor prediction (RMSEp = 11.75) when polymer was added to the La 5 mixture. Application of artificial neural networks (ANNs) greatly improved the predictive ability for %La 5_L both in the absence and in the presence of polymer (RMSEp = 8.11 × 10⁻⁸ for La 5-only mixtures and RMSEp = 8.77 × 10⁻⁸ with added polymer), owing to their ability to express in the calibration models more hidden spectral information than PLSR. Copyright © 2018 Elsevier B.V. All rights reserved.
A Comparison of Strategies for Estimating Conditional DIF
ERIC Educational Resources Information Center
Moses, Tim; Miao, Jing; Dorans, Neil J.
2010-01-01
In this study, the accuracies of four strategies were compared for estimating conditional differential item functioning (DIF), including raw data, logistic regression, log-linear models, and kernel smoothing. Real data simulations were used to evaluate the estimation strategies across six items, DIF and No DIF situations, and four sample size…
Inflammation, homocysteine and carotid intima-media thickness.
Baptista, Alexandre P; Cacdocar, Sanjiva; Palmeiro, Hugo; Faísca, Marília; Carrasqueira, Herménio; Morgado, Elsa; Sampaio, Sandra; Cabrita, Ana; Silva, Ana Paula; Bernardo, Idalécio; Gome, Veloso; Neves, Pedro L
2008-01-01
Cardiovascular disease is the main cause of morbidity and mortality in chronic renal patients. Carotid intima-media thickness (CIMT) is one of the most accurate markers of atherosclerosis risk. In this study, the authors set out to evaluate a population of chronic renal patients to determine which factors are associated with an increase in intima-media thickness. We included 56 patients (F=22, M=34), with a mean age of 68.6 years, and an estimated glomerular filtration rate of 15.8 ml/min (calculated by the MDRD equation). Various laboratory and inflammatory parameters (hsCRP, IL-6 and TNF-alpha) were evaluated. All subjects underwent measurement of internal carotid artery intima-media thickness by high-resolution real-time B-mode ultrasonography using a 10 MHz linear transducer. Intima-media thickness was used as a dependent variable in a simple linear regression model, with the various laboratory parameters as independent variables. Only parameters showing a significant correlation with CIMT were evaluated in a multiple regression model: age (p=0.001), hemoglobin (p=0.03), logCRP (p=0.042), logIL-6 (p=0.004) and homocysteine (p=0.002). In the multiple regression model we found that age (p=0.001) and homocysteine (p=0.027) were independently correlated with CIMT. LogIL-6 did not reach statistical significance (p=0.057), probably due to the small population size. The authors conclude that age and homocysteine correlate with carotid intima-media thickness, and thus can be considered as markers/risk factors in chronic renal patients.
Comparing hospital costs: what is gained by accounting for more than a case-mix index?
Hvenegaard, Anne; Street, Andrew; Sørensen, Torben Højmark; Gyrd-Hansen, Dorte
2009-08-01
We explore what effect controlling for various patient characteristics beyond a case-mix index (DRG) has on inferences drawn about the relative cost performance of hospital departments. We estimate fixed effect cost models in which 3754 patients are clustered within six Danish vascular departments. We compare a basic model including a DRG index only with models also including age and gender; health-related characteristics such as smoking status, diabetes, and American Society of Anesthesiologists (ASA) score; and socioeconomic characteristics such as income, employment and whether the patient lives alone. We find that the DRG index is a robust and important explanatory factor, and adding other routinely collected characteristics such as age and gender, or other health-related or socioeconomic characteristics, does not seem to alter the results significantly. The results are more sensitive to the choice of functional form, in particular to whether costs are log transformed. Our results suggest that routinely collected characteristics such as the DRG index, age and gender are sufficient when drawing inferences about relative cost performance. Adding health-related or socioeconomic patient characteristics only slightly improves our model in terms of explanatory power, but not when drawing inferences about relative performance. The results are, however, sensitive to whether costs are log transformed.
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
Reduced Intellectual Development in Children with Prenatal Lead Exposure
Schnaas, Lourdes; Rothenberg, Stephen J.; Flores, Maria-Fernanda; Martinez, Sandra; Hernandez, Carmen; Osorio, Erica; Velasco, Silvia Ruiz; Perroni, Estela
2006-01-01
Objective: Low-level postnatal lead exposure is associated with poor intellectual development in children, although effects of prenatal exposure are less well studied. We hypothesized that prenatal lead exposure would have a more powerful and lasting impact on child development than postnatal exposure. Design: We used generalized linear mixed models with random intercept and slope to analyze the pattern of lead effect of the cohort from pregnancy through 10 years of age on child IQ from 6 to 10 years. We statistically evaluated dose–response nonlinearity. Participants: A cohort of 175 children, 150 of whom had complete data for all included covariates, attended the National Institute of Perinatology in Mexico City from 1987 through 2002. Evaluations/Measurements: We used the Wechsler Intelligence Scale for Children–Revised, Spanish version, to measure IQ. Blood lead (BPb) was measured by a reference laboratory of the Centers for Disease Control and Prevention (CDC) quality assurance program for BPb. Results: Geometric mean BPb during pregnancy was 8.0 μg/dL (range, 1–33 μg/dL), from 1 through 5 years was 9.8 μg/dL (2.8–36.4 μg/dL), and from 6 through 10 years was 6.2 μg/dL (2.2–18.6 μg/dL). IQ at 6–10 years decreased significantly only with increasing natural-log third-trimester BPb (β = −3.90; 95% confidence interval, −6.45 to −1.36), controlling for other BPb and covariates. The dose–response BPb–IQ function was log-linear, not linear–linear. Conclusions: Lead exposure around 28 weeks gestation is a critical period for later child intellectual development, with lasting and possibly permanent effects. There was no evidence of a threshold; the strongest lead effects on IQ occurred within the first few micrograms of BPb. Relevance to Clinical Practice: Current CDC action limits for children applied to pregnant women permit most lead-associated child IQ decreases measured over the studied BPb range. PMID:16675439
Xu, Feng; Liang, Xinmiao; Lin, Bingcheng; Su, Fan; Schramm, Karl-Werner; Kettrup, Antonius
2002-08-01
The capacity factors of a series of hydrophobic organic compounds (HOCs) were measured in soil leaching column chromatography (SLCC) on a soil column, and in reversed-phase liquid chromatography on a C18 column, with different volumetric fractions (φ) of methanol in methanol-water mixtures. A general equation of linear solvation energy relationships, log(XYZ) = XYZ0 + mV(I)/100 + sπ* + bβm + aαm, was applied to analyze capacity factors (k'), soil organic partition coefficients (Koc) and octanol-water partition coefficients (P). The analyses exhibited high accuracy. The chief solute factors that control log Koc, log P, and log k' (on soil and on C18) are the solute size (V(I)/100) and hydrogen-bond basicity (βm). Less important solute factors are the dipolarity/polarizability (π*) and hydrogen-bond acidity (αm). Log k' on soil and log Koc have similar signs in the four fitting coefficients (m, s, b and a) and similar ratios (m:s:b:a), while log k' on C18 and log P have similar signs in these coefficients and similar ratios. Consequently, log k' values on C18 have good correlations with log P (r > 0.97), while log k' values on soil have good correlations with log Koc (r > 0.98). Two Koc estimation methods were developed, one through solute solvatochromic parameters, and the other through correlations with k' on soil. For HOCs, a linear relationship between the logarithmic capacity factor and the methanol composition of methanol-water mixtures could also be derived in SLCC.
Kinetic Behavior of Escherichia coli on Various Cheeses under Constant and Dynamic Temperature.
Kim, K; Lee, H; Gwak, E; Yoon, Y
2014-07-01
In this study, we developed kinetic models to predict the growth of pathogenic Escherichia coli on cheeses during storage at constant and changing temperatures. A five-strain mixture of pathogenic E. coli was inoculated onto natural cheeses (Brie and Camembert) and processed cheeses (sliced Mozzarella and sliced Cheddar) at 3 to 4 log CFU/g. The inoculated cheeses were stored at 4, 10, 15, 25, and 30°C for 1 to 320 h, with a different storage time being used for each temperature. Total bacteria and E. coli cells were enumerated on tryptic soy agar and MacConkey sorbitol agar, respectively. E. coli growth data were fitted to the Baranyi model to calculate the maximum specific growth rate (μmax; log CFU/g/h), lag phase duration (LPD; h), lower asymptote (log CFU/g), and upper asymptote (log CFU/g). The kinetic parameters were then analyzed as a function of storage temperature, using the square root model, a polynomial equation, and a linear equation. A dynamic model was also developed for varying temperature. The model performance was evaluated against observed data, and the root mean square error (RMSE) was calculated. At 4°C, E. coli growth was not observed on any cheese. However, growth was observed at 10 to 30°C with a μmax of 0.01 to 1.03 log CFU/g/h, depending on the cheese. The μmax values increased with temperature, while LPD values decreased, and μmax and LPD values differed among the four types of cheese. The developed models showed adequate performance (RMSE = 0.176-0.337), indicating that they should be useful for describing the growth kinetics of E. coli on various cheeses.
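As a sketch, the secondary square-root (Ratkowsky-type) model named above relates the maximum specific growth rate to temperature; the parameters b and T0 (notional minimum growth temperature) are hypothetical inputs to be estimated from the primary-model fits:

```r
# Square-root secondary model: sqrt(mu_max) = b * (T - T0), so
# mu_max = (b * (T - T0))^2 for temperatures above T0, and 0 otherwise
mu_max <- function(T, b, T0) ifelse(T > T0, (b * (T - T0))^2, 0)
```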
Lu, Jun; Li, Li-Ming; He, Ping-Ping; Cao, Wei-Hua; Zhan, Si-Yan; Hu, Yong-Hua
2004-06-01
To introduce the application of the mixed linear model to the analysis of secular trends in blood pressure under antihypertensive treatment. A community-based postmarketing surveillance of benazepril was conducted in 1831 essential hypertensive patients (age range 35 to 88 years) in Shanghai. Blood pressure data were analyzed every 3 months with a mixed linear model to describe the secular trend of blood pressure and its age- and gender-specific changes. The changing trends of systolic blood pressure (SBP) and diastolic blood pressure (DBP) were found to fit curvilinear models. A piecewise model was fit for pulse pressure (PP), i.e., a curvilinear model in the first 9 months and a linear model after 9 months of medication. Both blood pressure and its rate of change gradually slowed. There was significant variation in the curve parameters of intercept, slope, and acceleration. Blood pressure in patients with higher initial levels declined persistently over the 3 years of treatment, whereas blood pressure in patients with relatively low initial levels stabilized after dropping to some degree. Elderly patients showed high SBP but low DBP, and thus higher PP. The velocity and size of blood pressure reductions increased with the initial level of blood pressure. The mixed linear model is flexible and robust for the analysis of longitudinal data with missing values and makes maximum use of the available information.
Statistical analysis of dendritic spine distributions in rat hippocampal cultures
2013-01-01
Background Dendritic spines serve as key computational structures in brain plasticity. Much remains to be learned about their spatial and temporal distribution among neurons. Our aim in this study was to perform exploratory analyses based on the population distributions of dendritic spines with regard to their morphological characteristics and period of growth in dissociated hippocampal neurons. We fit a log-linear model to the contingency table of spine features such as spine type and distance from the soma to first determine which features were important in modeling the spines, as well as the relationships between such features. A multinomial logistic regression was then used to predict the spine types using the features suggested by the log-linear model, along with neighboring spine information. Finally, an important variant of Ripley’s K-function applicable to linear networks was used to study the spatial distribution of spines along dendrites. Results Our study indicated that in the culture system, (i) dendritic spine densities were "completely spatially random", (ii) spine type and distance from the soma were independent quantities, and most importantly, (iii) spines had a tendency to cluster with other spines of the same type. Conclusions Although these results may vary with other systems, our primary contribution is the set of statistical tools for morphological modeling of spines which can be used to assess neuronal cultures following gene manipulation such as RNAi, and to study induced pluripotent stem cells differentiated to neurons. PMID:24088199
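The log-linear step can be illustrated by fitting an independence model to a small contingency table as a Poisson GLM; the counts and the two-way table below are hypothetical:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 3x3 contingency table of spine counts: spine type by
# distance band from the soma (illustrative counts only).
df = pd.DataFrame({
    "spine_type": ["stubby", "thin", "mushroom"] * 3,
    "dist_band":  ["near"] * 3 + ["mid"] * 3 + ["far"] * 3,
    "count": [40, 55, 25, 38, 60, 22, 35, 52, 27],
})

# Independence log-linear model: log E[count] = u + type + dist.
# A small deviance relative to the saturated model supports independence
# of spine type and distance from the soma.
fit = smf.glm("count ~ C(spine_type) + C(dist_band)", data=df,
              family=sm.families.Poisson()).fit()
print(fit.deviance, fit.df_resid)   # compare to chi-square(df_resid)
```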
Panagopoulos, Dimitri; Jahnke, Annika; Kierkegaard, Amelie; MacLeod, Matthew
2015-10-20
The sorption of cyclic volatile methyl siloxanes (cVMS) to organic matter has a strong influence on their fate in the aquatic environment. We report new measurements of the partition ratios between freshwater sediment organic carbon and water (KOC) and between Aldrich humic acid dissolved organic carbon and water (KDOC) for three cVMS, and for three polychlorinated biphenyls (PCBs) that were used as reference chemicals. Our measurements were made using a purge-and-trap method that employs benchmark chemicals to calibrate mass transfer at the air/water interface in a fugacity-based multimedia model. The measured log KOC of octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6) were 5.06, 6.12, and 7.07, and log KDOC were 5.05, 6.13, and 6.79. To our knowledge, our measurements for KOC of D6 and KDOC of D4 and D6 are the first reported. Polyparameter linear free energy relationships (PP-LFERs) derived from training sets of empirical data that did not include cVMS generally did not predict our measured partition ratios of cVMS accurately (root-mean-squared-error (RMSE) for logKOC 0.76 and for logKDOC 0.73). We constructed new PP-LFERs that accurately describe partition ratios for the cVMS as well as for other chemicals by including our new measurements in the existing training sets (logKOC RMSEcVMS: 0.09, logKDOC RMSEcVMS: 0.12). The PP-LFERs we have developed here should be further evaluated and perhaps recalibrated when experimental data for other siloxanes become available.
Alfvén wave interactions in the solar wind
NASA Astrophysics Data System (ADS)
Webb, G. M.; McKenzie, J. F.; Hu, Q.; le Roux, J. A.; Zank, G. P.
2012-11-01
Alfvén wave mixing (interaction) equations used in locally incompressible turbulence transport equations in the solar wind are analyzed from the perspective of linear wave theory. The connection between the wave mixing equations and non-WKB Alfvén-wave-driven wind theories is delineated. We discuss the physical wave energy equation and the canonical wave energy equation for non-WKB Alfvén waves and the WKB limit. Variational principles and conservation laws for the linear wave mixing equations of the Heinemann and Olbert non-WKB wind model are obtained. The connection with wave mixing equations used in locally incompressible turbulence transport in the solar wind is discussed.
NASA Astrophysics Data System (ADS)
Fleury, Manon; Charron, Dominique F.; Holt, John D.; Allen, O. Brian; Maarouf, Abdel R.
2006-07-01
The incidence of enteric infections in the Canadian population varies seasonally and may be expected to change in response to global climate change. To better understand the potential impact of warmer temperatures on enteric infections in Canada, we investigated the relationship between ambient temperature and weekly reports of confirmed cases of three pathogens, Salmonella, pathogenic Escherichia coli and Campylobacter, between 1992 and 2000 in two Canadian provinces. We used generalized linear models (GLMs) and generalized additive models (GAMs) to estimate the effect of seasonal adjustments on the estimated models. We found a strong non-linear association between ambient temperature and the occurrence of all three enteric pathogens in Alberta, Canada, and of Campylobacter in Newfoundland-Labrador. Threshold models were used to quantify the relationship between disease and temperature, with thresholds chosen from 0 to -10°C depending on the pathogen modeled. For Alberta, the log relative risk of weekly case counts increased by 1.2% for Salmonella, 2.2% for Campylobacter, and 6.0% for E. coli for every degree increase in weekly mean temperature. For Newfoundland-Labrador, the log relative risk for Campylobacter increased by 4.5% for every degree increase in weekly mean temperature.
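A threshold model of this kind can be sketched as a Poisson regression on degrees above the threshold; the simulated coefficients below are illustrative assumptions, not the study's estimates:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Simulated weekly case counts versus mean temperature with a -10 °C
# threshold; the 2.2%-per-degree effect is assumed for illustration.
temp = rng.uniform(-25, 25, 450)
excess = np.clip(temp - (-10.0), 0.0, None)    # degrees above threshold
cases = rng.poisson(np.exp(1.0 + 0.022 * excess))
df = pd.DataFrame({"cases": cases, "excess": excess})

fit = smf.glm("cases ~ excess", data=df,
              family=sm.families.Poisson()).fit()
# Percent increase in weekly counts per degree above the threshold.
print(f"{100 * (np.exp(fit.params['excess']) - 1):.1f}% per °C")
```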
ERIC Educational Resources Information Center
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
USDA-ARS?s Scientific Manuscript database
Transformations to multiple trait mixed model equations (MME) which are intended to improve computational efficiency in best linear unbiased prediction (BLUP) and restricted maximum likelihood (REML) are described. It is shown that traits that are expected or estimated to have zero residual variance...
2005-01-01
Introduction Risk prediction scores usually overestimate mortality in obstetric populations because mortality rates in this group are considerably lower than in others. Studies examining this effect were generally small and did not distinguish between obstetric and nonobstetric pathologies. We evaluated the performance of the Acute Physiology and Chronic Health Evaluation (APACHE) II model in obstetric admissions to critical care units contributing to the ICNARC Case Mix Programme. Methods All obstetric admissions were extracted from the ICNARC Case Mix Programme Database of 219,468 admissions to UK critical care units from 1995 to 2003 inclusive. Cases were divided into direct obstetric pathologies and indirect or coincidental pathologies, and compared with a control cohort of all women aged 16–50 years not included in the obstetric categories. The predictive ability of APACHE II was evaluated in the three groups. A prognostic model was developed for direct obstetric admissions to predict the risk for hospital mortality. A log-linear model was developed to predict the length of stay in the critical care unit. Results A total of 1452 direct obstetric admissions were identified, the most common pathologies being haemorrhage and hypertensive disorders of pregnancy. There were 278 admissions identified as indirect or coincidental and 22,938 in the nonpregnant control cohort. Hospital mortality rates were 2.2%, 6.0% and 19.6% for the direct obstetric group, the indirect or coincidental group, and the control cohort, respectively. Cox regression calibration analysis showed a reasonable fit of the APACHE II model for the nonpregnant control cohort (slope = 1.1, intercept = -0.1). However, the APACHE II model vastly overestimated mortality for obstetric admissions (mortality ratio = 0.25). Risk prediction modelling demonstrated that the Glasgow Coma Scale score was the best discriminator between survival and death in obstetric admissions. Conclusion This study confirms that APACHE II overestimates mortality in obstetric admissions to critical care units. This may be because of the physiological changes in pregnancy or the unique scoring profile of obstetric pathologies such as HELLP syndrome. It may be possible to recalibrate the APACHE II score for obstetric admissions or to devise an alternative score specifically for obstetric admissions.
A hierarchical model for estimating change in American Woodcock populations
Sauer, J.R.; Link, W.A.; Kendall, W.L.; Kelley, J.R.; Niven, D.K.
2008-01-01
The Singing-Ground Survey (SGS) is a primary source of information on population change for American woodcock (Scolopax minor). We analyzed the SGS using a hierarchical log-linear model and compared the estimates of change and annual indices of abundance to a route regression analysis of SGS data. We also grouped SGS routes into Bird Conservation Regions (BCRs) and estimated population change and annual indices using BCRs within states and provinces as strata. Based on the hierarchical model-based estimates, we concluded that woodcock populations were declining in North America between 1968 and 2006 (trend = -0.9%/yr, 95% credible interval: -1.2, -0.5). Singing-Ground Survey results are generally similar between analytical approaches, but the hierarchical model has several important advantages over the route regression. Hierarchical models better accommodate changes in survey efficiency over time and space by treating strata, years, and observers as random effects in the context of a log-linear model, providing trend estimates that are derived directly from the annual indices. We also conducted a hierarchical model analysis of woodcock data from the Christmas Bird Count and the North American Breeding Bird Survey. All surveys showed general consistency in patterns of population change, but the SGS had the shortest credible intervals. We suggest that population management and conservation planning for woodcock involving interpretation of the SGS use estimates provided by the hierarchical model.
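To illustrate the log-linear structure of such count models (though not the Bayesian hierarchical machinery, which treats strata, years, and observers as random effects), here is a sketch that recovers a simulated -0.9%/yr trend from route-level counts with a fixed-effects Poisson GLM:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
# Simulated singing-ground counts: a log-linear trend plus route effects.
# All values are invented; a fixed-effects GLM stands in for the paper's
# hierarchical (random-effects) model purely for illustration.
years = np.arange(40)
df = pd.DataFrame({
    "year": np.tile(years, 30),
    "route": np.repeat(np.arange(30), len(years)),
})
route_eff = rng.normal(0, 0.4, 30)
lam = np.exp(1.5 + route_eff[df["route"].to_numpy()] - 0.009 * df["year"])
df["count"] = rng.poisson(lam)

fit = smf.glm("count ~ year + C(route)", data=df,
              family=sm.families.Poisson()).fit()
trend = 100 * (np.exp(fit.params["year"]) - 1)
print(f"estimated trend: {trend:.2f}%/yr")   # about -0.9%/yr by construction
```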
White noise analysis of Phycomyces light growth response system. I. Normal intensity range.
Lipson, E D
1975-01-01
The Wiener-Lee-Schetzen method for the identification of a nonlinear system through white gaussian noise stimulation was applied to the transient light growth response of the sporangiophore of Phycomyces. In order to cover a moderate dynamic range of light intensity I, the input variable was defined to be log I. The experiments were performed in the normal range of light intensity, centered about I0 = 10(-6) W/cm2. The kernels of the Wiener functionals were computed up to second order. Within the range of a few decades the system is reasonably linear with log I. The main nonlinear feature of the second-order kernel corresponds to the property of rectification. Power spectral analysis reveals that the slow dynamics of the system are of at least fifth order. The system can be represented approximately by a linear transfer function, including a first-order high-pass (adaptation) filter with a 4 min time constant and an underdamped fourth-order low-pass filter. Accordingly, a linear electronic circuit was constructed to simulate the small-scale response characteristics. In terms of the adaptation model of Delbrück and Reichardt (1956, in Cellular Mechanisms in Differentiation and Growth, Princeton University Press), kernels were deduced for the dynamic dependence of the growth velocity (output) on the "subjective intensity", a presumed internal variable. Finally, the linear electronic simulator above was generalized to accommodate the large-scale nonlinearity of the adaptation model and to serve as a tool for a deeper test of the model. PMID:1203444
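The approximate linear model described at the end can be sketched as a transfer function; only the 4 min adaptation time constant comes from the abstract, while the low-pass corner frequency and damping ratio below are assumptions:

```python
import numpy as np
from scipy import signal

# First-order high-pass (adaptation, 4 min time constant) in series with
# a fourth-order low-pass built from two identical underdamped
# second-order sections. wn and zeta are illustrative assumptions.
tau = 4.0                      # adaptation time constant, minutes
wn, zeta = 2.0, 0.4            # rad/min and damping ratio (assumed)

hp_num, hp_den = [tau, 0.0], [tau, 1.0]              # tau*s / (tau*s + 1)
lp_num, lp_den = [wn**2], [1.0, 2*zeta*wn, wn**2]    # one 2nd-order section

num = np.polymul(hp_num, np.polymul(lp_num, lp_num))
den = np.polymul(hp_den, np.polymul(lp_den, lp_den))
system = signal.TransferFunction(num, den)

# Growth-velocity response to a step in log I: a transient followed by
# adaptation back toward the baseline (high-pass zero at s = 0).
t = np.linspace(0, 40, 800)
t_out, y = signal.step(system, T=t)
print(y[:5], y[-1])            # response decays back toward zero
```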
Linear mixing model applied to AVHRR LAC data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a constrained least squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
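Constrained least squares unmixing solves for non-negative endmember fractions that sum to one. A minimal sketch, assuming invented endmember spectra rather than the AVHRR/TM values used in the study:

```python
import numpy as np
from scipy.optimize import nnls

# Endmember reflectances for vegetation, soil, shade in three channels
# (illustrative values only, not the AVHRR calibration).
E = np.array([[0.05, 0.25, 0.02],    # channel 1
              [0.40, 0.30, 0.03],    # channel 2
              [0.30, 0.28, 0.02]])   # channel 3
pixel = np.array([0.18, 0.28, 0.22]) # observed pixel reflectances

# Constrained least squares: fractions >= 0 (via NNLS) and summing to
# one, enforced softly by appending a heavily weighted row of ones.
w = 100.0
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w * 1.0)
fractions, resid = nnls(A, b)
print("vegetation, soil, shade fractions:", fractions.round(3))
```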
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
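The generalized-log transform itself is one line; a sketch, with the transformation parameter c treated as a free value of the kind the paper estimates:

```python
import numpy as np

def glog(y, c):
    # One common form of the generalized-log transform; with c = 0 it
    # reduces to the ordinary log, and larger c stabilizes the variance
    # of low-intensity observations.
    return np.log((y + np.sqrt(y ** 2 + c)) / 2.0)

y = np.array([5.0, 50.0, 500.0, 5000.0])
print(glog(y, c=0.0))          # identical to np.log(y)
print(glog(y, c=100.0 ** 2))   # roughly linear near zero, log-like above
```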
Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.
Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier
2011-11-01
One of the procedures used most recently with longitudinal data is linear mixed models. In the context of health research the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) unconditional linear growth curve model; (b) growth model with subject-associated variables; and (c) individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
Time and frequency domain analysis of sampled data controllers via mixed operation equations
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1981-01-01
Specification of the mathematical equations required to define the dynamic response of a linear continuous plant, subject to sampled data control, is complicated by the fact that the digital components of the control system cannot be modeled via linear ordinary differential equations. This complication can be overcome by introducing two new mathematical operations; namely, the operation of zero order hold and digital delay. It is shown that by direct utilization of these operations, a set of linear mixed operation equations can be written and used to define the dynamic response characteristics of the controlled system. It is also shown how these linear mixed operation equations lead, in an automatable manner, directly to a set of finite difference equations which are in a format compatible with follow-on time and frequency domain analysis methods.
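The zero-order-hold operation that converts the continuous plant into finite difference equations is standard; a sketch with an assumed plant and sample period (not taken from the paper):

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous plant x' = Ax + Bu, y = Cx, sampled with a zero-order hold.
# The plant matrices and sample period are illustrative assumptions.
A = np.array([[0.0, 1.0], [-4.0, -0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.1  # sample period

# ZOH discretization yields the finite difference form
# x[k+1] = Ad x[k] + Bd u[k], matching the mixed-operation description.
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), T, method="zoh")
print(Ad.round(4))
print(Bd.round(4))
```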
Wang, Yuanjia; Chen, Huaihou
2012-12-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10(8) simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
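One generic way such a spectral representation is exploited (not necessarily the authors' exact algorithm) is to simulate the null distribution as a weighted sum of independent chi-square(1) variables, with weights given by the eigenvalues; the eigenvalues and observed statistic below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative eigenvalues from a spectral decomposition of the residual
# sum of squares; simulating the weighted chi-square mixture replaces a
# full parametric bootstrap of the mixed model.
eigvals = np.array([3.1, 1.4, 0.6, 0.2, 0.05])
nsim = 200_000
null = (rng.chisquare(1, size=(nsim, eigvals.size)) * eigvals).sum(axis=1)

observed = 9.8                            # hypothetical test statistic
pval = np.mean(null >= observed)
crit = np.quantile(null, 0.95)
print(f"p = {pval:.4f}, 5% critical value = {crit:.2f}")
```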
A Unified Framework for Bounded and Unbounded Numerical Estimation
ERIC Educational Resources Information Center
Kim, Dan; Opfer, John E.
2017-01-01
Representations of numerical value have been assessed by using bounded (e.g., 0-1,000) and unbounded (e.g., 0-∞) number-line tasks, with considerable debate regarding whether 1 or both tasks elicit unique cognitive strategies (e.g., addition or subtraction) and require unique cognitive models. To test this, we examined how well a mixed log-linear…
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
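Fitting a gamma pdf to UH ordinates by nonlinear least squares can be sketched as follows; the ordinates are invented, not the Lighvan catchment data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma as gamma_dist

# Treat the unit hydrograph as a scaled gamma pdf and fit its shape and
# scale by nonlinear least squares (illustrative ordinates only).
t  = np.array([1, 2, 3, 4, 5, 6, 8, 10, 12, 15], dtype=float)   # hours
uh = np.array([5, 18, 30, 28, 21, 14, 6, 2.5, 1, 0.2])          # m^3/s

def gamma_uh(t, k, theta, A):
    # A scales the runoff volume; k and theta are gamma shape and scale.
    return A * gamma_dist.pdf(t, a=k, scale=theta)

(k, theta, A), _ = curve_fit(gamma_uh, t, uh, p0=[3.0, 1.5, 100.0])
print(f"shape k = {k:.2f}, scale = {theta:.2f} h, volume A = {A:.1f}")
print(f"time to peak ≈ {(k - 1) * theta:.2f} h")  # mode of the gamma pdf
```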
Correlation between Gas Bubble Formation and Hydrogen Evolution Reaction Kinetics at Nanoelectrodes.
Chen, Qianjin; Luo, Long
2018-04-17
We report the correlation between H2 gas bubble formation potential and hydrogen evolution reaction (HER) activity for Au and Pt nanodisk electrodes (NEs). Microkinetic models were formulated to obtain the HER kinetic information for individual Au and Pt NEs. We found that the rate-determining steps for the HER at Au and Pt NEs were the Volmer step and the Heyrovsky step, respectively. More interestingly, the standard rate constant (k0) of the rate-determining step was found to vary over 2 orders of magnitude for the same type of NEs. The observed variations indicate the HER activity heterogeneity at the nanoscale. Furthermore, we discovered a linear relationship between bubble formation potential (Ebubble) and log(k0) with a slope of 125 mV/decade for both Au and Pt NEs. As log(k0) increases, Ebubble shifts linearly to more positive potentials, meaning NEs with higher HER activities form H2 bubbles at less negative potentials. Our theoretical model suggests that such a linear relationship is caused by the similar critical bubble formation condition for Au and Pt NEs with varied sizes. Our results have potential implications for using gas bubble formation to evaluate the HER activity distribution of nanoparticles in an ensemble.
A Spreadsheet for a 2 x 3 x 2 Log-Linear Analysis. AIR 1991 Annual Forum Paper.
ERIC Educational Resources Information Center
Saupe, Joe L.
This paper describes a personal computer spreadsheet set up to carry out hierarchical log-linear analyses, a type of analysis useful for institutional research into multidimensional frequency tables formed from categorical variables such as faculty rank, student class level, gender, or retention status. The spreadsheet provides a concrete vehicle…
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Nagy, P; Faye, B; Marko, O; Thomas, S; Wernery, U; Juhasz, J
2013-09-01
The objectives of the present study were to monitor the microbiological quality and somatic cell count (SCC) of bulk tank milk at the world's first large-scale camel dairy farm for a 2-yr period, to compare the results of 2 methods for the enumeration of SCC, to evaluate correlation among milk quality indicators, and to determine the effect of specific factors (year, season, stage of lactation, and level of production) on milk quality indicators. The study was conducted from January 2008 to January 2010. Total viable count (TVC), coliform count (CC), California Mastitis Test (CMT) score, and SCC were determined from daily bulk milk samples. Somatic cell count was measured by using a direct microscopic method and with an automatic cell counter. In addition, production parameters [total daily milk production (TDM, kg), number of milking camels (NMC), average milk per camel (AMC, kg)] and stage of lactation (average postpartum days, PPD) were recorded for each test day. A positive correlation (r=0.33) was found between the 2 methods for SCC enumeration; however, values derived using the microscopic method were higher. The geometric means of SCC and TVC were 394×10(3) cells/mL and 5,157 cfu/mL during the observation period, respectively. Somatic cell count was >500×10(3) cells/mL on 14.6% (106/725) and TVC was >10×10(3) cfu/mL on 4.0% (30/742) of the test days. Both milk quality indicators had a distinct seasonal pattern. For log SCC, the mean was lowest in summer and highest in autumn. The seasonal pattern of log TVC was slightly different, with the lowest values being recorded during the spring. The monthly mean TVC pattern showed a clear difference between years. Coliform count was <10 cfu/mL in most of the samples (709/742, 95.6%). A positive correlation was found between log SCC and log TVC (r=0.32), between log SCC and CMT score (r=0.26), and between log TVC and CC in yr 1 (r=0.30). All production parameters and stage of lactation showed strong seasonal variation. Log SCC was negatively correlated with TDM (r=-0.35), AMC (r=-0.37), and NMC (r=-0.15) and positively correlated with PPD (r=0.40). Log TVC had a negative correlation with AMC (r=-0.40) but a positive correlation with NMC (r=0.32), TDM (r=0.16), and PPD (r=0.45). The linear mixed model with stepwise variable selection showed that the main sources of log SCC variation were PPD, TDM, PPD × season, and season. For log TVC, the same factors and year contributed to the variation. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
How does abundance scale with body size in coupled size-structured food webs?
Blanchard, Julia L; Jennings, Simon; Law, Richard; Castle, Matthew D; McCloghrie, Paul; Rochet, Marie-Joëlle; Benoît, Eric
2009-01-01
1. Widely observed macro-ecological patterns in log abundance vs. log body mass of organisms can be explained by simple scaling theory based on food (energy) availability across a spectrum of body sizes. The theory predicts that when food availability falls with body size (as in most aquatic food webs where larger predators eat smaller prey), the scaling between log N vs. log m is steeper than when organisms of different sizes compete for a shared unstructured resource (e.g. autotrophs, herbivores and detritivores; hereafter dubbed 'detritivores'). 2. In real communities, the mix of feeding characteristics gives rise to complex food webs. Such complexities make empirical tests of scaling predictions prone to error if: (i) the data are not disaggregated in accordance with the assumptions of the theory being tested, or (ii) the theory does not account for all of the trophic interactions within and across the communities sampled. 3. We disaggregated whole community data collected in the North Sea into predator and detritivore components and report slopes of log abundance vs. log body mass relationships. Observed slopes for fish and epifaunal predator communities (-1.2 to -2.25) were significantly steeper than those for infaunal detritivore communities (-0.56 to -0.87). 4. We present a model describing the dynamics of coupled size spectra, to explain how coupling of predator and detritivore communities affects the scaling of log N vs. log m. The model captures the trophic interactions and recycling of material that occur in many aquatic ecosystems. 5. Our simulations demonstrate that the biological processes underlying growth and mortality in the two distinct size spectra lead to patterns consistent with data. Slopes of log N vs. log m were steeper and growth rates faster for predators compared to detritivores. Size spectra were truncated when primary production was too low for predators and when detritivores experienced predation pressure. 6. The approach also allows us to assess the effects of external sources of mortality (e.g. harvesting). Removal of large predators resulted in steeper predator spectra and increases in their prey (small fish and detritivores). The model predictions are remarkably consistent with observed patterns of exploited ecosystems.
Convective Mixing in Distal Pipes Exacerbates Legionella pneumophila Growth in Hot Water Plumbing.
Rhoads, William J; Pruden, Amy; Edwards, Marc A
2016-03-12
Legionella pneumophila is known to proliferate in hot water plumbing systems, but little is known about the specific physicochemical factors that contribute to its regrowth. Here, L. pneumophila trends were examined in controlled, replicated pilot-scale hot water systems with continuous recirculation lines subject to two water heater settings (40 °C and 58 °C) and three distal tap water use frequencies (high, medium, and low) with two pipe configurations (oriented upward to promote convective mixing with the recirculating line and downward to prevent it). Water heater temperature setting determined where L. pneumophila regrowth occurred in each system, with an increase of up to 4.4 log gene copies/mL in the 40 °C system tank and recirculating line relative to influent water compared to only 2.5 log gene copies/mL regrowth in the 58 °C system. Distal pipes without convective mixing cooled to room temperature (23-24 °C) during periods of no water use, but pipes with convective mixing equilibrated to 30.5 °C in the 40 °C system and 38.8 °C in the 58 °C system. Corresponding with known temperature effects on L. pneumophila growth and enhanced delivery of nutrients, distal pipes with convective mixing had on average 0.2 log more gene copies/mL in the 40 °C system and 0.8 log more gene copies/mL in the 58 °C system. Importantly, this work demonstrated the potential for thermal control strategies to be undermined by distal taps in general, and convective mixing in particular.
A combined QSAR and partial order ranking approach to risk assessment.
Carlsen, L
2006-04-01
QSAR generated data appear as an attractive alternative to experimental data as foreseen in the proposed new chemicals legislation REACH. A preliminary risk assessment for the aquatic environment can be based on few factors, i.e. the octanol-water partition coefficient (Kow), the vapour pressure (VP) and the potential biodegradability of the compound in combination with the predicted no-effect concentration (PNEC) and the actual tonnage in which the substance is produced. Application of partial order ranking, allowing simultaneous inclusion of several parameters leads to a mutual prioritisation of the investigated substances, the prioritisation possibly being further analysed through the concept of linear extensions and average ranks. The ranking uses endpoint values (log Kow and log VP) derived from strictly linear 'noise-deficient' QSAR models as input parameters. Biodegradation estimates were adopted from the BioWin module of the EPI Suite. The population growth impairment of Tetrahymena pyriformis was used as a surrogate for fish lethality.
A hybrid probabilistic/spectral model of scalar mixing
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance
2002-11-01
In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent "transfer" while scalar exchanges between particles represent "mixing." The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
Saxena, Aditi R; Seely, Ellen W; Rich-Edwards, Janet W; Wilkins-Haug, Louise E; Karumanchi, S Ananth; McElrath, Thomas F
2013-04-04
First trimester Pregnancy Associated Plasma Protein A (PAPP-A) levels, routinely measured for aneuploidy screening, may predict development of preeclampsia. This study tests the hypothesis that first trimester PAPP-A levels correlate with soluble fms-like tyrosine kinase-1 (sFlt-1) levels, an angiogenic marker associated with preeclampsia, throughout pregnancy. sFlt-1 levels were measured longitudinally in 427 women with singleton pregnancies in all three trimesters. First trimester PAPP-A and PAPP-A Multiples of Median (MOM) were measured. Student's t and Wilcoxon tests compared preeclamptic and normal pregnancies. A linear mixed model assessed the relationship between log PAPP-A and serial log sFlt-1 levels. PAPP-A and PAPP-A MOM levels were significantly lower in preeclamptic (n = 19), versus normal pregnancies (p = 0.02). Although mean third trimester sFlt-1 levels were significantly higher in preeclampsia (p = 0.002), first trimester sFlt-1 levels were lower in women who developed preeclampsia, compared with normal pregnancies (p = 0.03). PAPP-A levels correlated significantly with serial sFlt-1 levels. Importantly, low first trimester PAPP-A MOM predicted decreased odds of normal pregnancy (OR 0.2, p = 0.002). Low first trimester PAPP-A levels suggest increased future risk of preeclampsia and correlate with serial sFlt-1 levels throughout pregnancy. Furthermore, low first trimester PAPP-A status significantly predicted decreased odds of normal pregnancy.
Kim, Jee Young; Hauser, Russ; Wand, Matthew P; Herrick, Robert F; Houk, R S; Aeschliman, David B; Woodin, Mark A; Christiani, David C
2003-11-01
Exposure to metal-containing particulate matter has been associated with adverse pulmonary responses. Metals in particulate matter are soluble, hence are readily recovered in urine of exposed individuals. This study investigated the association between urinary metal concentrations and the fractional concentration of expired nitric oxide (F(E)NO) in boilermakers (N = 32) exposed to residual oil fly ash (ROFA). Subjects were monitored at a boiler overhaul site located in the New England area, USA. F(E)NO and urine samples were collected pre- and post-workshift for 5 consecutive workdays. Metals investigated included vanadium (V), chromium (Cr), manganese (Mn), nickel (Ni), copper (Cu), and lead (Pb). The median F(E)NO was 7.5 ppb (95% CI: 7.4-8.0), and the median creatinine-adjusted urinary metal concentrations (μg/g creatinine) were: vanadium, 1.37; chromium, 0.48; manganese, 0.30; nickel, 1.52; copper, 3.70; and lead, 2.32. Linear mixed-effects models indicated significant inverse exposure-response relationships between log F(E)NO and the log-transformed urinary concentrations of vanadium, manganese, nickel, copper, and lead at several lag times, after adjusting for smoking status. Urine samples may be utilized as a biomarker of occupational metal exposure. The inverse association between F(E)NO and urinary metal concentrations suggests that exposure to metals in particulate matter may have an adverse effect on respiratory health. Copyright 2003 Wiley-Liss, Inc.
Zhang, Zhiping; Ji, Hairui; Gong, Guiping; Zhang, Xu; Tan, Tianwei
2014-07-01
The optimal mixed culture model of oleaginous yeast Rhodotorula glutinis and microalga Chlorella vulgaris was confirmed to enhance lipid production. A double system bubble column photo-bioreactor was designed and used for demonstrating the relationship of yeast and alga in mixed culture. The results showed that using the log-phase cultures of yeast and alga as seeds for mixed culture, the improvements of biomass and lipid yields reached 17.3% and 70.9%, respectively, compared with those of monocultures. Growth curves of two species were confirmed in the double system bubble column photo-bioreactor, and the second growth of yeast was observed during 36-48 h of mixed culture. Synergistic effects of two species for cell growth and lipid accumulation were demonstrated on O2/CO2 balance, substance exchange, dissolved oxygen and pH adjustment in mixed culture. This study provided a theoretical basis and culture model for producing lipids by mixed culture in place of monoculture. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Wen Deng; Chen, Guang De; Yuan, Zhao Lin; Yang, Chuang Hua; Ye, Hong Gang; Wu, Ye Long
2016-02-01
The theoretical investigations of the interface optical phonons, electron-phonon couplings and their ternary mixed effects in zinc-blende spherical quantum dots are carried out by using the dielectric continuum model and the modified random-element isodisplacement model. The features of the dispersion curves, electron-phonon coupling strengths, and ternary mixed effects for interface optical phonons in a single zinc-blende GaN/AlxGa1-xN spherical quantum dot are calculated and discussed in detail. The numerical results show that there are three branches of interface optical phonons: one branch lies in the low-frequency region, and the other two lie in the high-frequency region. The interface optical phonons with small quantum number l make more important contributions to the electron-phonon interactions. It is also found that ternary mixed effects have important influences on the interface optical phonon properties in a single zinc-blende GaN/AlxGa1-xN quantum dot. With increasing Al content, the interface optical phonon frequencies vary linearly and the electron-phonon coupling strengths vary non-linearly in the high-frequency region, whereas in the low-frequency region the frequencies vary non-linearly and the coupling strengths vary linearly.
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
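A compact illustration of the estimation-and-inference steps the chapter describes, using simulated data with an assumed slope:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical microbiology-flavored example: optical density versus
# incubation time, illustrating estimation and inference for the slope.
x = np.linspace(0, 10, 25)
y = 0.1 + 0.08 * x + rng.normal(0, 0.05, x.size)

r, p_corr = stats.pearsonr(x, y)
fit = stats.linregress(x, y)
print(f"r = {r:.3f} (p = {p_corr:.2g})")
print(f"slope = {fit.slope:.3f} ± {fit.stderr:.3f}, "
      f"intercept = {fit.intercept:.3f}, R^2 = {fit.rvalue**2:.3f}")
```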
Ragaert, P; Devlieghere, F; Devuyst, E; Dewulf, J; Van Langenhove, H; Debevere, J
2006-11-01
This paper describes the volatile metabolite production of spoilage bacteria (Pantoea agglomerans and Rahnella aquatilis) and spoilage yeasts (Pichia fermentans and Cryptococcus laurentii), previously isolated from mixed lettuce, on a simulation medium of shredded mixed lettuce (mixed-lettuce agar) both under air conditions and modified atmosphere (MA)-conditions at 7 degrees C. These latter conditions simulated equilibrium modified atmosphere packaging, which is used to extend the shelf-life of shredded mixed lettuce. Besides volatile metabolites, organic acid metabolites and consumption of sugars were measured. Microbiological growth on the mixed-lettuce agar resulted in metabolite production and consumption of sugars. Bacteria and yeasts produced a range of volatile organic compounds both under air conditions and MA-conditions: ethanol, ethyl acetate, 2-methyl-1-propanol, 2-methyl-1-butanol, 3-methyl-1-butanol, 2,3-butanedione, 3-methyl-1-pentanol, 1-butanol and 1-hexanol. Under MA-conditions, 2-methyl-1-butanol, 3-methyl-1-butanol and ethanol were the first compounds produced by the inoculated micro-organisms to be detected in the headspace. In the case of the yeast P. fermentans, production of these compounds was detected from a count of 5.0+/-0.1 log cfu/cm(2), with a fast increase when exceeding 6.0-6.5 log cfu/cm(2). Unlike P. fermentans, the yeast C. laurentii showed a slow metabolism under MA-conditions compared to air conditions. In the case of the bacteria, production of 2-methyl-1-butanol and 3-methyl-1-butanol was detected starting from a count of 6.7+/-0.1 log cfu/cm(2) for R. aquatilis and from a count of 7.1+/-0.4 log cfu/cm(2) for P. agglomerans, with a fast increase when exceeding 8 log cfu/cm(2). No production of ethanol by the bacteria was detected under MA-conditions, in contrast to air conditions. It can be concluded that, if these counts are reached on the cut surfaces of shredded mixed lettuce, which are simulated by the mixed-lettuce agar, the sensorial quality of shredded mixed lettuce could be influenced by microbiological production of metabolites.
Vu, Cung; Nihei, Kurt T.; Schmitt, Denis P.; Skelt, Christopher; Johnson, Paul A.; Guyer, Robert; TenCate, James A.; Le Bas, Pierre-Yves
2013-01-01
In some aspects of the disclosure, a method for creating three-dimensional images of non-linear properties and the compressional to shear velocity ratio in a region remote from a borehole using a conveyed logging tool is disclosed. In some aspects, the method includes arranging a first source in the borehole and generating a steered beam of elastic energy at a first frequency; arranging a second source in the borehole and generating a steerable beam of elastic energy at a second frequency, such that the steerable beam at the first frequency and the steerable beam at the second frequency intercept at a location away from the borehole; receiving at the borehole by a sensor a third elastic wave, created by a three wave mixing process, with a frequency equal to a difference between the first and second frequencies and a direction of propagation towards the borehole; determining a location of a three wave mixing region based on the arrangement of the first and second sources and on properties of the third wave signal; and creating three-dimensional images of the non-linear properties using data recorded by repeating the generating, receiving and determining at a plurality of azimuths, inclinations and longitudinal locations within the borehole. The method is additionally used to generate three dimensional images of the ratio of compressional to shear acoustic velocity of the same volume surrounding the borehole.
Zhang, Z; Guillaume, F; Sartelet, A; Charlier, C; Georges, M; Farnir, F; Druet, T
2012-10-01
In many situations, genome-wide association studies are performed in populations presenting stratification. Mixed models including a kinship matrix accounting for genetic relatedness among individuals have been shown to correct for population and/or family structure. Here we extend this methodology to generalized linear mixed models, which properly model data under various distributions. In addition, we perform association with ancestral haplotypes inferred using a hidden Markov model. The method was shown to properly account for stratification under various simulated scenarios presenting population and/or family structure. Use of ancestral haplotypes resulted in higher power than SNPs on simulated datasets. Application to real data demonstrates the usefulness of the developed model. Full analysis of a dataset with 4600 individuals and 500 000 SNPs was performed in 2 h 36 min and required 2.28 Gb of RAM. The software GLASCOW can be freely downloaded from www.giga.ulg.ac.be/jcms/prod_381171/software. francois.guillaume@jouy.inra.fr Supplementary data are available at Bioinformatics online.
Psychometric functions for pure-tone frequency discrimination.
Dai, Huanping; Micheyl, Christophe
2011-07-01
The form of the psychometric function (PF) for auditory frequency discrimination is of theoretical interest and practical importance. In this study, PFs for pure-tone frequency discrimination were measured for several standard frequencies (200-8000 Hz) and levels [35-85 dB sound pressure level (SPL)] in normal-hearing listeners. The proportion-correct data were fitted using a cumulative-Gaussian function of the sensitivity index, d', computed as a power transformation of the frequency difference, Δf. The exponent of the power function corresponded to the slope of the PF on log(d')-log(Δf) coordinates. The influence of attentional lapses on PF-slope estimates was investigated. When attentional lapses were not taken into account, the estimated PF slopes on log(d')-log(Δf) coordinates were found to be significantly lower than 1, suggesting a nonlinear relationship between d' and Δf. However, when lapse rate was included as a free parameter in the fits, PF slopes were found not to differ significantly from 1, consistent with a linear relationship between d' and Δf. This was the case across the wide ranges of frequencies and levels tested in this study. Therefore, spectral and temporal models of frequency discrimination must account for a linear relationship between d' and Δf across a wide range of frequencies and levels. © 2011 Acoustical Society of America
Flow-covariate prediction of stream pesticide concentrations.
Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin
2018-01-01
Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.
The prisoner's dilemma as a cancer model.
West, Jeffrey; Hasnain, Zaki; Mason, Jeremy; Newton, Paul K
2016-09-01
Tumor development is an evolutionary process in which a heterogeneous population of cells with different growth capabilities compete for resources in order to gain a proliferative advantage. What are the minimal ingredients needed to recreate some of the emergent features of such a developing complex ecosystem? What is a tumor doing before we can detect it? We outline a mathematical model, driven by a stochastic Moran process, in which cancer cells and healthy cells compete for dominance in the population. Each are assigned payoffs according to a Prisoner's Dilemma evolutionary game where the healthy cells are the cooperators and the cancer cells are the defectors. With point mutational dynamics, heredity, and a fitness landscape controlling birth and death rates, natural selection acts on the cell population and simulated 'cancer-like' features emerge, such as Gompertzian tumor growth driven by heterogeneity, the log-kill law which (linearly) relates therapeutic dose density to the (log) probability of cancer cell survival, and the Norton-Simon hypothesis which (linearly) relates tumor regression rates to tumor growth rates. We highlight the utility, clarity, and power that such models provide, despite (and because of) their simplicity and built-in assumptions.
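A bare-bones version of such a Moran process can be simulated directly; the payoff values and population size below are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(5)
# Prisoner's Dilemma payoffs: healthy cells cooperate, cancer cells
# defect. Values satisfy T > R > P > S and are illustrative only.
R, S, T, P = 3.0, 0.0, 5.0, 1.0
N = 1000          # total cell population
i = 10            # initial number of cancer cells (defectors)

for _ in range(200_000):
    if i in (0, N):
        break     # absorption: extinction or fixation
    # Mean payoffs against the other N-1 individuals.
    f_d = ((i - 1) * P + (N - i) * T) / (N - 1)   # cancer cell
    f_c = (i * S + (N - i - 1) * R) / (N - 1)     # healthy cell
    # Moran step: birth proportional to payoff, death uniform at random.
    total = i * f_d + (N - i) * f_c
    birth_is_cancer = rng.random() < i * f_d / total
    death_is_cancer = rng.random() < i / N
    i += int(birth_is_cancer) - int(death_is_cancer)

print("final cancer-cell count:", i)   # defectors tend to sweep
```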
NASA Astrophysics Data System (ADS)
Aleardi, Mattia
2018-01-01
We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In the synthetic and field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two RPMs for reservoir characterization in the investigated area.
NASA Astrophysics Data System (ADS)
Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.
2014-07-01
Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model, the log odds of the dichotomous outcome is modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression provides a description of the probabilistic relationship between the variables and the outcome. In conducting logistic regression, selection procedures are used to choose important predictor variables; diagnostics are used to check that assumptions are valid, including independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers; and a test statistic is calculated to determine the aptness of the model. This study used the binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity among students are influenced by obesity in the family and by the interaction between a student's ethnicity and routine meal intake. The odds of a student being overweight or obese are higher for a student with a family history of obesity and for a non-Malay student who frequently takes routine meals, as compared to a Malay student.
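A sketch of the described logit model with an ethnicity-by-meals interaction, fitted to simulated records; all variable names and coefficients are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
# Simulated student records mirroring the variables described above
# (family obesity history, ethnicity, routine meal frequency).
n = 800
fam = rng.integers(0, 2, n)                 # family history of obesity
malay = rng.integers(0, 2, n)               # 1 = Malay, 0 = non-Malay
meals = rng.integers(0, 2, n)               # frequently takes routine meals
logit = -1.5 + 0.9 * fam + 0.7 * (1 - malay) * meals
y = rng.random(n) < 1 / (1 + np.exp(-logit))
df = pd.DataFrame({"obese": y.astype(int), "fam": fam,
                   "malay": malay, "meals": meals})

# Log odds modeled as a linear combination, with an ethnicity-by-meals
# interaction as in the reported model.
fit = smf.logit("obese ~ fam + malay * meals", data=df).fit(disp=0)
print(np.exp(fit.params))   # odds ratios
```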
ERIC Educational Resources Information Center
Denham, Bryan E.
2009-01-01
Grounded conceptually in social cognitive theory, this research examines how personal, behavioral, and environmental factors are associated with risk perceptions of anabolic-androgenic steroids. Ordinal logistic regression and logit log-linear models applied to data gathered from high-school seniors (N = 2,160) in the 2005 Monitoring the Future…
Using Configural Frequency Analysis as a Person-Centered Analytic Approach with Categorical Data
ERIC Educational Resources Information Center
Stemmler, Mark; Heine, Jörg-Henrik
2017-01-01
Configural frequency analysis and log-linear modeling are presented as person-centered analytic approaches for the analysis of categorical or categorized data in multi-way contingency tables. Person-centered developmental psychology, based on the holistic interactionistic perspective of the Stockholm working group around David Magnusson and Lars…
Full analogue electronic realisation of the Hodgkin-Huxley neuronal dynamics in weak-inversion CMOS.
Lazaridis, E; Drakakis, E M; Barahona, M
2007-01-01
This paper presents a non-linear analog synthesis path towards the modeling and full implementation of the Hodgkin-Huxley neuronal dynamics in silicon. The proposed circuits have been realized in weak-inversion CMOS technology and take advantage of both log-domain and translinear transistor-level techniques.
ERIC Educational Resources Information Center
Zwick, Rebecca; Lenaburg, Lubella
2009-01-01
In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…
Milloy, M-J; Marshall, Brandon; Kerr, Thomas; Richardson, Lindsey; Hogg, Robert; Guillemi, Silvia; Montaner, Julio S G; Wood, Evan
2015-03-01
Cannabis use is common among people who are living with human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS). While there is growing pre-clinical evidence of the immunomodulatory and anti-viral effects of cannabinoids, their possible effects on HIV disease parameters in humans are largely unknown. Thus, we sought to investigate the possible effects of cannabis use on plasma HIV-1 RNA viral loads (pVLs) among recently seroconverted illicit drug users. We used data from two linked longitudinal observational cohorts of people who use injection drugs. Using multivariable linear mixed-effects modelling, we analysed the relationship between pVL and high-intensity cannabis use among participants who seroconverted following recruitment. Between May 1996 and March 2012, 88 individuals seroconverted after recruitment and were included in these analyses. Median pVL in the first 365 days among all seroconverters was 4.66 log10 copies/mL. In a multivariable model, at least daily cannabis use was associated with 0.51 log10 copies/mL lower pVL (β = -0.51, standard error = 0.170, P value = 0.003). Consistent with the findings from recent in vitro and in vivo studies, including one conducted among lentiviral-infected primates, we observed a strong association between cannabis use and lower pVL following seroconversion among illicit drug-using participants. Our findings support the further investigation of the immunomodulatory or antiviral effects of cannabinoids among individuals living with HIV/AIDS. © 2014 Australasian Professional Society on Alcohol and other Drugs.
Mixed models, linear dependency, and identification in age-period-cohort models.
O'Brien, Robert M
2017-07-20
This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified under the traditional fixed-effects regression approach because of a linear dependency between the ages, periods, and cohorts. However, they can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages, periods, and cohorts as random effects are identified without introducing an additional constraint. I label this statistical model identification, show how it comes about in mixed models, and show why the choice of which effects are treated as fixed and which as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
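A minimal sketch in R (not the paper's code) of the random-effects identification described above: ages and cohorts enter as fixed effects while periods enter as random intercepts, so the exact dependency cohort = period - age no longer defeats estimation. The data are simulated and all names are hypothetical.

```r
# Sketch, assuming lme4 is installed; effect sizes are made up.
library(lme4)

set.seed(1)
d <- expand.grid(age = 20:60, period = 1990:2010)
d$cohort <- d$period - d$age
pe <- rnorm(length(unique(d$period)), sd = 0.3)      # true period effects
d$y <- 0.05 * d$age - 0.02 * d$cohort + pe[factor(d$period)] +
  rnorm(nrow(d), sd = 1)

# Fixed effects for age and cohort; periods treated as random intercepts.
fit <- lmer(y ~ age + cohort + (1 | period), data = d)
summary(fit)
```

Swapping which of the three dimensions is random changes the estimated fixed effects, which is the sensitivity the abstract warns about.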
Interrelation of creep and relaxation: a modeling approach for ligaments.
Lakes, R S; Vanderby, R
1999-12-01
Experimental data (Thornton et al., 1997) show that relaxation proceeds more rapidly (a greater slope on a log-log scale) than creep in ligament, a fact not explained by linear viscoelasticity. An interrelation between creep and relaxation is therefore developed for ligaments based on a single-integral nonlinear superposition model. This interrelation differs from the convolution relation obtained by Laplace transforms for linear materials. We demonstrate via continuum concepts of nonlinear viscoelasticity that such a difference in rate between creep and relaxation phenomenologically occurs when the nonlinearity is of a strain-stiffening type, i.e., the stress-strain curve is concave up as observed in ligament. We also show that it is inconsistent to assume a Fung-type constitutive law (Fung, 1972) for both creep and relaxation. Using the published data of Thornton et al. (1997), the nonlinear interrelation developed herein predicts creep behavior from relaxation data well (R ≥ 0.998). Although data are limited and the causal mechanisms associated with viscoelastic tissue behavior are complex, continuum concepts demonstrated here appear capable of interrelating creep and relaxation with fidelity.
Kriss, A B; Paul, P A; Madden, L V
2012-09-01
A multilevel analysis of heterogeneity of disease incidence was conducted based on observations of Fusarium head blight (caused by Fusarium graminearum) in Ohio during the 2002-11 growing seasons. Sampling consisted of counting the number of diseased and healthy wheat spikes per 0.3 m of row at 10 sites (about 30 m apart) in a total of 67 to 159 sampled fields in 12 to 32 sampled counties per year. Incidence was then determined as the proportion of diseased spikes at each site. Spatial heterogeneity of incidence among counties, fields within counties, and sites within fields and counties was characterized by fitting a generalized linear mixed model to the data, using a complementary log-log link function, with the assumption that the disease status of spikes was binomially distributed conditional on the effects of county, field, and site. Based on the estimated variance terms, there was highly significant spatial heterogeneity among counties and among fields within counties each year; magnitude of the estimated variances was similar for counties and fields. The lowest level of heterogeneity was among sites within fields, and the site variance was either 0 or not significantly greater than 0 in 3 of the 10 years. Based on the variances, the intracluster correlation of disease status of spikes within sites indicated that spikes from the same site were somewhat more likely to share the same disease status relative to spikes from other sites, fields, or counties. The estimated best linear unbiased predictor (EBLUP) for each county was determined, showing large differences across the state in disease incidence (as represented by the link function of the estimated probability that a spike was diseased) but no consistency between years for the different counties. The effects of geographical location, corn and wheat acreage per county, and environmental conditions on the EBLUP for each county were not significant in the majority of years.
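A minimal sketch (not the authors' code) of the model class described above: per-site counts of diseased and healthy spikes, a complementary log-log link, and nested random effects for county, field within county, and site within field. The data are simulated stand-ins.

```r
library(lme4)

set.seed(10)
d <- expand.grid(county = 1:6, field = 1:4, site = 1:5)
d$cf  <- interaction(d$county, d$field)            # field within county
d$cfs <- interaction(d$county, d$field, d$site)    # site within field
eta <- -1 + rnorm(6, sd = 0.7)[d$county] +
  rnorm(24, sd = 0.7)[as.integer(d$cf)] +
  rnorm(nrow(d), sd = 0.3)[as.integer(d$cfs)]
p <- 1 - exp(-exp(eta))                            # inverse cloglog
d$diseased <- rbinom(nrow(d), 30, p)               # 30 spikes per site
d$healthy  <- 30 - d$diseased

fit <- glmer(cbind(diseased, healthy) ~ 1 + (1 | county) + (1 | cf) + (1 | cfs),
             family = binomial(link = "cloglog"), data = d)
VarCorr(fit)   # variance at each spatial level; ranef(fit)$county gives EBLUPs
```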
Wu, Zilan; Lin, Tian; Li, Zhongxia; Jiang, Yuqing; Li, Yuanyuan; Yao, Xiaohong; Gao, Huiwang; Guo, Zhigang
2017-11-01
We measured 15 parent polycyclic aromatic hydrocarbons (PAHs) in the atmosphere and water during a research cruise from the East China Sea (ECS) to the northwestern Pacific Ocean (NWP) in the spring of 2015 to investigate the occurrence, air-sea gas exchange, and gas-particle partitioning of PAHs, with a particular focus on the influence of East Asian continental outflow. The gaseous PAH composition and identification of sources were consistent with PAHs from the upwind area, indicating that the gaseous PAHs (three- to five-ring PAHs) were influenced by upwind land pollution. In addition, air-sea exchange fluxes of gaseous PAHs were estimated to be -54.2 to 107.4 ng m^-2 d^-1, indicative of variations in land-based PAH inputs. The logarithmic gas-particle partition coefficient (logK_p) of PAHs regressed linearly against the logarithmic subcooled liquid vapor pressure (logP_L^0), with a slope of -0.25. This was significantly larger than the theoretical value (-1), implying disequilibrium between the gaseous and particulate PAHs over the NWP. The non-equilibrium of PAH gas-particle partitioning was linked to the volatilization of three-ring gaseous PAHs from seawater and to lower soot concentrations, in particular when oceanic air masses prevailed. Modeling PAH absorption into organic matter and adsorption onto soot carbon revealed that the status of PAH gas-particle partitioning deviated more from the modeled K_p for oceanic air masses than for continental air masses, which coincided with higher volatilization of three-ring PAHs and confirmed the influence of air-sea exchange. Meanwhile, significant linear regressions between logK_p and logK_oa (logK_sa) were observed for continental air masses, suggesting the dominant effect of East Asian continental outflow on atmospheric PAHs over the NWP during the sampling campaign. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tang, Ronggui; Ding, Changfeng; Ma, Yibing; Wan, Mengxue; Zhang, Taolin; Wang, Xingxiang
2018-06-02
To explore the main controlling factors in soil and build a predictive model between the lead concentration in earthworms (Pb_earthworm) and soil physicochemical parameters, 13 soils with low levels of lead contamination were used to conduct toxicity experiments with earthworms. The results indicated that relatively high bioaccumulation factors appeared in soils with low pH values. The lead concentrations in earthworms and soils after log transformation had a significantly positive correlation (R² = 0.46, P < 0.0001, n = 39). Stepwise multiple linear regression analysis yielded a fitted empirical model between Pb_earthworm and the soil physicochemical properties: log(Pb_earthworm) = 0.96 log(Pb_soil) - 0.74 log(OC) - 0.22 pH + 0.95 (R² = 0.66, n = 39). Furthermore, path analysis confirmed that the Pb concentration in the soil (Pb_soil), soil pH, and soil organic carbon (OC) were the primary controlling factors of Pb_earthworm, with high path coefficients (0.71, -0.51, and -0.49, respectively). The predictive model, based on Pb_earthworm in a nationwide range of soils with low-level lead contamination, could provide a reference for the establishment of safety thresholds in Pb-contaminated soils from the perspective of soil-animal systems.
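The fitted equation above can be wrapped directly as a prediction function. A minimal sketch, assuming base-10 logarithms (usual for such regressions) and the study's concentration units; the example inputs are made up.

```r
# Empirical model from the abstract:
# log(Pb_earthworm) = 0.96 log(Pb_soil) - 0.74 log(OC) - 0.22 pH + 0.95
predict_pb_earthworm <- function(pb_soil, oc, ph) {
  10^(0.96 * log10(pb_soil) - 0.74 * log10(oc) - 0.22 * ph + 0.95)
}

# Hypothetical soil: 50 mg/kg Pb, 1.2% organic carbon, pH 6.5.
predict_pb_earthworm(pb_soil = 50, oc = 1.2, ph = 6.5)
```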
Estradiol and inflammatory markers in older men.
Maggio, Marcello; Ceda, Gian Paolo; Lauretani, Fulvio; Bandinelli, Stefania; Metter, E Jeffrey; Artoni, Andrea; Gatti, Elisa; Ruggiero, Carmelinda; Guralnik, Jack M; Valenti, Giorgio; Ling, Shari M; Basaria, Shehzad; Ferrucci, Luigi
2009-02-01
Aging is characterized by a mild proinflammatory state. In older men, low testosterone levels have been associated with increasing levels of proinflammatory cytokines. It is still unclear whether estradiol (E2), which generally has biological activities complementary to testosterone, affects inflammation. We analyzed data obtained from 399 men aged 65-95 yr enrolled in the Invecchiare in Chianti study with complete data on body mass index (BMI), serum E2, testosterone, IL-6, soluble IL-6 receptor, TNF-alpha, IL-1 receptor antagonist, and C-reactive protein. The relationship between E2 and inflammatory markers was examined using multivariate linear models adjusted for age, BMI, smoking, physical activity, chronic disease, and total testosterone. In age-adjusted analysis, log (E2) was positively associated with log (IL-6) (r = 0.19; P = 0.047), and the relationship was statistically significant (P = 0.032) after adjustments for age, BMI, smoking, physical activity, chronic disease, and serum testosterone levels. Log (E2) was not significantly associated with log (C-reactive protein), log (soluble IL-6 receptor), or log (TNF-alpha) in both age-adjusted and fully adjusted analyses. In older men, E2 is weakly positively associated with IL-6, independent of testosterone and other confounders including BMI.
Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin
2010-01-01
Nonlinear mixed-modeling methods were used to estimate parameters in an individual-tree basal area growth model for shortleaf pine (Pinus echinata Mill.). Shortleaf pine individual-tree growth data were available from over 200 permanently established 0.2-acre fixed-radius plots located in naturally-occurring even-aged shortleaf pine forests on the...
Interpretation of a compositional time series
NASA Astrophysics Data System (ADS)
Tolosana-Delgado, R.; van den Boogaart, K. G.
2012-04-01
Common methods for multivariate time series analysis use linear operations, from the definition of a time-lagged covariance/correlation to the prediction of new outcomes. However, when the time series response is a composition (a vector of positive components showing the relative importance of a set of parts in a total, like percentages and proportions), linear operations are afflicted by several problems. For instance, it has long been recognised that (auto/cross-)correlations between raw percentages are spurious, more dependent on which other components are being considered than on any natural link between the components of interest. Also, a long-term forecast of a composition in models with a linear trend will ultimately predict negative components. In general terms, compositional data should not be treated on a raw scale, but after a log-ratio transformation (Aitchison, 1986: The Statistical Analysis of Compositional Data. Chapman and Hall). This is so because the information conveyed by compositional data is relative, as stated in their definition. The principle of working in coordinates makes it possible to apply any sort of multivariate analysis to a log-ratio transformed composition, as long as this transformation is invertible. This principle is fully applicable to time series analysis. We discuss how results (both auto/cross-correlation functions and predictions) can be back-transformed, viewed, and interpreted in a meaningful way. One view is to use the exhaustive set of all possible pairwise log-ratios, which expresses the results as D(D - 1)/2 separate, interpretable sets of one-dimensional models showing the behaviour of each possible pairwise log-ratio. Another view is the interpretation of estimated coefficients or correlations back-transformed in terms of compositions. These two views are compatible and complementary. These issues are illustrated with time series of seasonal precipitation patterns at different rain gauges of the USA. In this data set, the proportion of annual precipitation falling in winter, spring, summer and autumn is considered a 4-component time series. Three invertible log-ratios are defined for the calculations, balancing rainfall in autumn vs. winter, in summer vs. spring, and in autumn-winter vs. spring-summer. Results suggest a 2-year correlation range and a certain oscillatory behaviour in the last balance, which does not occur in the other two.
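A minimal sketch of the balance (isometric log-ratio) coordinates described above, applied to one hypothetical 4-part seasonal composition; the numeric values are invented for illustration.

```r
# Balance between two groups of parts: sqrt(rs/(r+s)) * log(gm(num)/gm(den)).
balance <- function(x, num, den) {
  r <- length(num); s <- length(den)
  gm <- function(v) exp(mean(log(v)))          # geometric mean
  sqrt(r * s / (r + s)) * log(gm(x[num]) / gm(x[den]))
}

x <- c(autumn = 0.30, winter = 0.35, spring = 0.20, summer = 0.15)
b1 <- balance(x, "autumn", "winter")                          # autumn vs. winter
b2 <- balance(x, "summer", "spring")                          # summer vs. spring
b3 <- balance(x, c("autumn", "winter"), c("spring", "summer"))
c(b1, b2, b3)   # three invertible coordinates for a 4-part composition
```

Any standard time series machinery can then be applied to (b1, b2, b3) and the results mapped back to the simplex, which is the "working in coordinates" principle.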
ERIC Educational Resources Information Center
Puhan, Gautam; Moses, Tim P.; Yu, Lei; Dorans, Neil J.
2007-01-01
The purpose of the current study was to examine whether log-linear smoothing of observed score distributions in small samples results in more accurate differential item functioning (DIF) estimates under the simultaneous item bias test (SIBTEST) framework. Data from a teacher certification test were analyzed using White candidates in the reference…
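Log-linear smoothing of a score distribution, as used above, amounts to fitting a Poisson log-linear model with a polynomial in the score; a degree-C polynomial preserves the first C moments of the raw distribution. A minimal sketch on simulated frequencies (not the study's data):

```r
set.seed(4)
scores <- 0:40
freq <- rpois(length(scores), 500 * dnorm(scores, mean = 22, sd = 7))

# Cubic log-linear model: smoothed frequencies preserve the first 3 moments.
fit <- glm(freq ~ poly(scores, 3), family = poisson)
smoothed <- fitted(fit)
cbind(scores, freq, round(smoothed, 1))[1:10, ]
```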
Using nonlinear quantile regression to estimate the self-thinning boundary curve
Quang V. Cao; Thomas J. Dean
2015-01-01
The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
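A minimal sketch of the quantile-regression boundary idea mentioned above: on the log-log scale, a high conditional quantile of tree density against quadratic mean diameter approximates the self-thinning line. The data are simulated below a known boundary; `quantreg` is assumed to be installed.

```r
library(quantreg)

set.seed(5)
logdq  <- runif(200, log(4), log(16))           # log quadratic mean diameter
logtpa <- 11 - 1.6 * logdq - rexp(200, rate = 2)  # observations below a boundary

fit <- rq(logtpa ~ logdq, tau = 0.99)   # upper quantile tracks the boundary
coef(fit)                               # slope near Reineke's -1.605
```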
Yang, James J; Williams, L Keoki; Buu, Anne
2017-08-24
A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
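A minimal sketch of the second step described above: Fisher's combination of per-phenotype p-values, with the null distribution of the statistic calibrated by simulation from the residual correlation of the phenotypes. The correlation value and p-values are hypothetical.

```r
fisher_stat <- function(p) -2 * sum(log(p))

K <- 4                                  # number of phenotypes
R <- diag(K); R[R == 0] <- 0.4          # hypothetical residual correlation
U <- chol(R)

set.seed(2)
null_stats <- replicate(1e4, {
  z <- drop(rnorm(K) %*% U)             # correlated null z-scores
  fisher_stat(2 * pnorm(-abs(z)))
})

obs <- fisher_stat(c(0.01, 0.04, 0.20, 0.03))  # made-up marginal p-values
mean(null_stats >= obs)                        # empirical combined p-value
```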
Estimates of Social Contact in a Middle School Based on Self-Report and Wireless Sensor Data.
Leecaster, Molly; Toth, Damon J A; Pettey, Warren B P; Rainey, Jeanette J; Gao, Hongjiang; Uzicanin, Amra; Samore, Matthew
2016-01-01
Estimates of contact among children, used for infectious disease transmission models and understanding social patterns, historically rely on self-report logs. Recently, wireless sensor technology has enabled objective measurement of proximal contact and comparison of data from the two methods. These are mostly small-scale studies, and knowledge gaps remain in understanding contact and mixing patterns and also in the advantages and disadvantages of data collection methods. We collected contact data from a middle school, with 7th and 8th grades, for one day using self-report contact logs and wireless sensors. The data were linked for students with unique initials, gender, and grade within the school. This paper presents the results of a comparison of two approaches to characterize school contact networks, wireless proximity sensors and self-report logs. Accounting for incomplete capture and lack of participation, we estimate that "sensor-detectable", proximal contacts longer than 20 seconds during lunch and class-time occurred at 2 fold higher frequency than "self-reportable" talk/touch contacts. Overall, 55% of estimated talk-touch contacts were also sensor-detectable whereas only 15% of estimated sensor-detectable contacts were also talk-touch. Contacts detected by sensors and also in self-report logs had longer mean duration than contacts detected only by sensors (6.3 vs 2.4 minutes). During both lunch and class-time, sensor-detectable contacts demonstrated substantially less gender and grade assortativity than talk-touch contacts. Hallway contacts, which were ascertainable only by proximity sensors, were characterized by extremely high degree and short duration. We conclude that the use of wireless sensors and self-report logs provide complementary insight on in-school mixing patterns and contact frequency.
Analysis of lithology: Vegetation mixes in multispectral images
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M.; Adams, J. D.
1982-01-01
Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
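A minimal sketch of the simple linear ("checkerboard") mixing model named above: a measured pixel spectrum modeled as a least-squares combination of end-member spectra. The 5-band reflectances are invented for illustration.

```r
# Hypothetical end-member spectra (columns) over 5 bands.
E <- cbind(rock = c(0.30, 0.35, 0.40, 0.45, 0.50),
           soil = c(0.20, 0.25, 0.35, 0.40, 0.42),
           veg  = c(0.05, 0.08, 0.06, 0.45, 0.50))
pixel <- 0.5 * E[, "rock"] + 0.2 * E[, "soil"] + 0.3 * E[, "veg"]

f <- coef(lm(pixel ~ E - 1))   # unconstrained end-member fractions
f / sum(f)                     # renormalize so fractions sum to one
```

Removing the vegetation fraction before rock/soil identification then corresponds to subtracting `f["Eveg"] * E[, "veg"]` from the pixel spectrum.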
Song, Won-Jae; Kang, Dong-Hyun
2016-12-01
This study evaluated the efficacy of a 915 MHz microwave at 3 different electric power levels to inactivate three pathogens in peanut butter of different water activity (a_w). Peanut butter inoculated with Escherichia coli O157:H7, Salmonella enterica serovar Typhimurium, and Listeria monocytogenes (0.3, 0.4, and 0.5 a_w) was treated with a 915 MHz microwave at 2, 4, and 6 kW for up to 5 min. Treatment at 6 kW for 5 min reduced these three pathogens by 1.97 to >5.17 log CFU/g. Treatment at 4 kW for 5 min reduced these pathogens by 0.41-1.98 log CFU/g, and 2 kW microwave heating did not inactivate pathogens in peanut butter. Weibull and Log-Linear + Shoulder models were used to describe the survival curves of the three pathogens because they exhibited shouldering behavior. Td and T5d values (times to 1-log and 5-log reductions) were calculated based on the Weibull and Log-Linear + Shoulder models. Td values of the three pathogens were similar to D-values of Salmonella subjected to conventional heating at 90 °C, but T5d values were much shorter than those for conventional heating at 90 °C. Generally, increased a_w resulted in shorter T5d values for the pathogens, but not shorter Td values. The results of this study can be used to optimize a microwave heating pasteurization system for peanut butter. Copyright © 2016. Published by Elsevier Ltd.
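A minimal sketch (not the study's code) of fitting the standard Weibull survival model, log10(N/N0) = -(t/delta)^p, to hypothetical log-reduction data; the closing comment shows how Td-type values follow from the fitted parameters.

```r
t      <- c(0, 1, 2, 3, 4, 5)                   # treatment time, min
logred <- c(0, -0.2, -0.9, -1.8, -3.0, -4.3)    # hypothetical log10(N/N0)

fit <- nls(logred ~ -(t / delta)^p, start = list(delta = 2, p = 1.5))
coef(fit)
# Time to a d-log reduction: t_d = delta * d^(1/p), so T5d uses d = 5.
```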
On the validity of effective formulations for transport through heterogeneous porous media
NASA Astrophysics Data System (ADS)
de Dreuzy, Jean-Raynald; Carrera, Jesus
2016-04-01
Geological heterogeneity enhances spreading of solutes and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through heterogeneous porous media (HPM), it should honor mean advection, mixing, and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the multi-rate mass transfer (MRMT) model to reproduce mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not reproduce the local concentration structures, it induces smaller non-dispersive mixing than HPM. However, long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, in general non-dispersive mixing cannot.
Rowe, Annette R; Mansfeldt, Cresten B; Heavner, Gretchen L; Richardson, Ruth E
2013-01-02
Molecular biomarkers hold promise for inferring rates of key metabolic activities in complex microbial systems. However, few studies have assessed biomarker levels for simultaneously occurring (and potentially competing) respirations. In this study, methanogenesis biomarkers for Methanospirillum hungatei were developed, tested, and compared to Dehalococcoides mccartyi biomarkers in a well-characterized mixed culture. Proteomic analyses of mixed culture samples (n = 4) confirmed expression of many M. hungatei methanogenesis enzymes. The mRNAs for two oxidoreductases detected were explored as quantitative biomarkers of hydrogenotrophic methanogenesis: a coenzyme F(420)-reducing hydrogenase (FrcA) and an iron sulfur protein (MvrD). As shown previously in D. mccartyi, M. hungatei transcript levels correlated linearly with measured (R = 0.97 for FrcA, R = 0.91 for MvrD; n = 7) or calculated respiration rate (R = 0.81 for FrcA, R = 0.62 for MvrD; n = 35) across two orders of magnitude on a log-log scale. The average abundance of MvrD transcripts was consistently two orders of magnitude lower than FrcA, regardless of experimental condition. In experiments where M. hungatei was competing for hydrogen with D. mccartyi, transcripts for the key respiratory hydrogenase HupL were generally less abundant per mL than FrcA and more abundant than MvrD. With no chlorinated electron acceptor added, HupL transcripts fell below both targets. These biomarkers hold promise for the prediction of in situ rates of respiration for these microbes, even when growing in mixed culture and utilizing a shared substrate which has important implications for both engineered and environmental systems. However, the differences in overall biomarker abundances suggest that the strength of any particular mRNA biomarker relies upon empirically established quantitative trends under a range of pertinent conditions.
Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam
2016-01-01
Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo
2010-11-01
Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E^k) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES^k, is proposed here as a new molecular representation easily calculated from E^k. The kth stochastic bond linear indices are then calculated using ES^k as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability for QSPR by analyzing regressions of the novel indices against selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl alcohols (second round), and for modeling the specific rate constant (log k), the partition coefficient (log P), and the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). Comparison with other approaches (edge- and vertex-based connectivity indices, total and local spectral moments, and quantum chemical descriptors, as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies. Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity/diversity analysis and drug discovery protocols.
Coliforms removal in full-scale activated sludge plants in India.
Kazmi, A A; Tyagi, V K; Trivedi, R C; Kumar, Arvind
2008-05-01
This paper investigates the removal of coliforms in full-scale activated sludge plants (ASP) operating in northern regions of India. Removals of 2.2 log and 2.4 log were observed for total coliforms (TC) and fecal coliforms (FC), respectively. However, the effluent still contained a significant number of TC and FC, greater than the permissible limit for unrestricted irrigation prescribed by WHO. The observations also suggest that the extended aeration (EA) process operating at high mixed liquor suspended solids (MLSS) and long sludge retention time (SRT) is more efficient in the removal of coliforms. Further attempts were made to establish the relationship between two key wastewater parameters, i.e., biochemical oxygen demand (BOD) and suspended solids (SS), and fecal and total coliforms. The relationships were observed to be linear, with good coefficients of correlation. The interrelationship of BOD and SS with coliforms indicates that improvement of the microbiological quality of wastewater could be linked with the removal of SS. Therefore, SS can serve as a regulatory tool in lieu of an explicit coliform standard.
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
NASA Astrophysics Data System (ADS)
Tian, Xiang-Dong
The purpose of this research is to simulate induction and measuring-while-drilling (MWD) logs. In the simulation of logs, there are two tasks. The first task, the forward modeling procedure, is to compute the logs from a known formation. The second task, the inversion procedure, is to determine the unknown properties of the formation from the measured field logs. In general, the inversion procedure requires the solution of a forward model. In this study, a stable numerical method to simulate induction and MWD logs is presented. The proposed algorithm is based on a horizontal eigenmode expansion method. Vertical propagation of modes is modeled by a three-layer module, and multilayer cases are treated as a cascade of these modules. The mode tracing algorithm possesses stability characteristics that are superior to other methods. This method is applied to simulate logs in formations with both vertical and horizontal layers, and also to study the groove effects of the MWD tool, with very good results. Two-dimensional inversion of induction logs is a nonlinear problem. Nonlinear functions of the apparent conductivity are expanded into a Taylor series; after truncating the high-order terms, the nonlinear functions are linearized. An iterative procedure is then devised to solve the inversion problem. In each iteration, the Jacobian matrix is calculated, and a small variation computed using the least-squares method is used to modify the background medium, yielding the inverted medium. The horizontal eigenstate method is used to solve the forward problem. It is found that a good inverted formation can be obtained from the measurements. In order to help the user simulate induction logs conveniently, a Wellog Simulator, based on the X Window System, was developed. The application software (FORTRAN codes) embedded in the Simulator is designed to simulate the responses of induction tools in layered formations with dipping beds. The graphical user interface of the Wellog Simulator is implemented with C and Motif. Through the user interface, the user can prepare the simulation data, select the tools, simulate the logs, and plot the results.
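The iterative linearized inversion described above is a Gauss-Newton loop: compute the residual, compute the Jacobian, take a least-squares update, repeat. A minimal sketch in R (for illustration only; the original work is in FORTRAN and the induction-log forward model is not reproduced here), with a toy forward model standing in for the tool response:

```r
gauss_newton <- function(f, m, d_obs, n_iter = 8, h = 1e-6) {
  for (i in seq_len(n_iter)) {
    r <- d_obs - f(m)                          # data residual
    J <- sapply(seq_along(m), function(j) {    # finite-difference Jacobian
      mj <- m; mj[j] <- mj[j] + h
      (f(mj) - f(m)) / h
    })
    m <- m + qr.solve(J, r)                    # least-squares model update
  }
  m
}

# Toy nonlinear forward model (3 data, 2 model parameters).
f <- function(m) c(m[1]^2 + m[2], m[1] * m[2], m[1] + m[2]^3)
gauss_newton(f, m = c(2, 2), d_obs = f(c(1.5, 0.8)))  # should recover c(1.5, 0.8)
```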
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized, and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated, and our method is applied to the KIRBY21 test-retest dataset.
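A minimal sketch of the intra-class correlation idea that GICC generalizes: the between-subject share of variance from a linear mixed model fit to test-retest measurements. The data are simulated; this is the scalar analogue, not the graphical estimator of the paper.

```r
library(lme4)

set.seed(8)
d <- data.frame(subject = rep(1:30, each = 2),                 # 2 scans each
                y = rep(rnorm(30), each = 2) + rnorm(60, sd = 0.5))
fit <- lmer(y ~ 1 + (1 | subject), data = d)

vc  <- as.data.frame(VarCorr(fit))
icc <- vc$vcov[1] / sum(vc$vcov)   # reproducibility in [0, 1]
icc
```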
Micklesfield, Lisa K; Norris, Shane A; Nelson, Dorothy A; Lambert, Estelle V; van der Merwe, Lize; Pettifor, John M
2007-12-01
We compared whole body BMC of 811 black, white, and mixed ancestral origin children from Detroit, MI; Johannesburg, South Africa; and Cape Town, South Africa. Our findings support the role of genetic and environmental influences in the determination of bone mass in prepubertal children. Higher bone mass and lower fracture rates have been shown in black compared with white children and adults in North America. We compared whole body BMC (WBBMC), whole body fat mass (WBFM), and whole body fat-free soft tissue (WBFFST) data between three ethnic groups of children from Detroit, MI (n = 181 white, USW; n = 230 black, USB), Johannesburg, South Africa (n = 73 white, SAW; n = 263 black, SAB), and Cape Town, South Africa (n = 64 mixed ancestral origin, SAM). The SAB and SAW groups were slightly older than the USW and USB groups (9.5 ± 0.3 versus 9.3 ± 0.1 yr); however, USB and USW boys were significantly taller, were heavier, and had a higher BMI than SAM and SAB boys. USB girls were significantly taller than SAB girls and heavier than SAB and SAM girls. In both South Africa and the United States, black children had a significantly higher WBBMC than white children after adjusting for selected best predictors. After adjusting for age, weight, and height, WBBMC was significantly higher in the SAB and SAW boys than in the USW and USB boys, and in the SAM group compared with the USW and USB groups. WBFFST and WBFM made significant contributions to a best linear model for log(WBBMC), together with age, height, and ethnicity. The best model accounted for 79% of the WBBMC variance. When included separately in the model, the model containing WBFFST accounted for 76%, and the model containing WBFM for 70%, of the variance in WBBMC. WBBMC is lower in children of European ancestry compared with African ancestry, irrespective of geographical location; however, South African children have significantly higher WBBMC compared with the USB and USW groups, thereby acknowledging the possible contribution of environmental factors. Reasons for the significantly higher WBBMC in the children of mixed ancestral origin compared with the other groups need to be studied further.
Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.
2011-01-01
The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance, including the quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA), have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. The data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, log HOMA, 1/HOMA, and 1/fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic analysis. Most surrogates had modest linear correlations with SI_Clamp (r ≈ 0.4–0.64), with comparable correlation coefficients. Calibration model analysis demonstrated better predictive accuracy for QUICKI than for HOMA and log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies.
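The surrogate indexes compared above follow standard published formulas (these are the conventional definitions, not code from the study): with fasting glucose G0 in mg/dL and fasting insulin I0 in microU/mL, QUICKI = 1/[log10(I0) + log10(G0)] and HOMA-IR = G0 x I0 / 405. A minimal sketch:

```r
quicki  <- function(G0, I0) 1 / (log10(G0) + log10(I0))
homa_ir <- function(G0, I0) G0 * I0 / 405   # mg/dL units

G0 <- 95; I0 <- 12                          # hypothetical fasting values
c(QUICKI  = quicki(G0, I0),
  HOMA    = homa_ir(G0, I0),
  logHOMA = log10(homa_ir(G0, I0)),
  invHOMA = 1 / homa_ir(G0, I0),
  invFI   = 1 / I0)
```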
Ozone and Ozone By-Products in the Cabins of Commercial Aircraft
Weisel, Clifford; Weschler, Charles J.; Mohan, Kris; Vallarino, Jose; Spengler, John D.
2013-01-01
The aircraft cabin represents a unique indoor environment due to its high surface-to-volume ratio, high occupant density, and the potential for high ozone concentrations at cruising altitudes. Ozone was continuously measured and air was sampled on sorbent traps, targeting carbonyl compounds, on 52 transcontinental U.S. or international flights between 2008 and 2010. The sampling was predominantly on planes that did not have ozone scrubbers (catalytic converters). Peak ozone levels on aircraft without catalytic converters exceeded 100 ppb, with some flights having periods of more than an hour when ozone levels were > 75 ppb. Ozone was greatly reduced on relatively new aircraft with catalytic converters, but ozone levels on two flights whose aircraft had older converters were similar to those on planes without catalytic converters. Hexanal, heptanal, octanal, nonanal, decanal, and 6-methyl-5-hepten-2-one (6-MHO) were detected in the aircraft cabin at sub- to low-ppb levels. Linear regression models that included the log-transformed mean ozone concentration, percent occupancy, and plane type were statistically significant and explained between 18 and 25% of the variance in the mixing ratio of these carbonyls. Occupancy was also a significant factor for 6-MHO, but not the linear aldehydes, consistent with 6-MHO's formation from the reaction between ozone and squalene, which is present in human skin oils.
Patrick H. Brose
2009-01-01
A field guide of 45 pairs of photographs depicting ericaceous shrub, leaf litter, and logging slash fuel types of eastern oak forests and the fire behavior observed in these fuel types during prescribed burning. The guide contains instructions on how to use the photographs to choose appropriate fuel models for prescribed fire planning.
Kim, Do-Kyun; Kim, Soo-Ji; Kang, Dong-Hyun
2017-01-01
In order to assure the microbial safety of drinking water, UVC-LED treatment has emerged as a possible technology to replace conventional low-pressure (LP) mercury vapor UV lamps. In this investigation, inactivation of human enteric virus (HuEV) surrogates with UVC-LEDs was investigated in a water disinfection system, and kinetic model equations were applied to depict the surviving infectivities of the viruses. MS2, Qβ, and ΦX174 bacteriophages were inoculated into sterile distilled water (DW) and irradiated with UVC-LED printed circuit boards (PCBs) (266 nm and 279 nm) or conventional LP lamps. Infectivities of the bacteriophages were effectively reduced by up to 7 log after a 9 mJ/cm² treatment for MS2 and Qβ, and 1 mJ/cm² for ΦX174. UVC-LEDs showed a superior viral inactivation effect compared to conventional LP lamps at the same dose (1 mJ/cm²). Non-log-linear plot patterns were observed, so Weibull, Biphasic, Log linear-tail, and Weibull-tail model equations were used to fit the virus survival curves. For MS2 and Qβ, the Weibull and Biphasic models fit well, with R² values approximately equal to 0.97-0.99, and the Weibull-tail equation accurately described survival of ΦX174. The level of UV susceptibility among the coliphages, measured by the inactivation rate constant k, was statistically different (ΦX174 (ssDNA) > MS2, Qβ (ssRNA)), and indicated that sensitivity to UV was attributable to the viral genetic material. Copyright © 2016 Elsevier Ltd. All rights reserved.
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves, or growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain, we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included.
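For readers outside Mplus and SAS, the same kind of model can be sketched in R with `nlme`. A minimal example, with simulated data in place of the achievement study, fitting a Gompertz curve with a child-specific random asymptote:

```r
library(nlme)

set.seed(6)
d <- expand.grid(id = 1:40, t = 0:6)
a_i <- 100 + rnorm(40, sd = 8)                          # subject asymptotes
d$y <- a_i[d$id] * exp(-2 * exp(-0.7 * d$t)) + rnorm(nrow(d), sd = 2)

# Gompertz: y = a * exp(-b * exp(-g * t)); random effect on the asymptote a.
fit <- nlme(y ~ a * exp(-b * exp(-g * t)),
            fixed = a + b + g ~ 1, random = a ~ 1 | id,
            start = c(a = 90, b = 1.5, g = 0.5), data = d)
summary(fit)
```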
Gerba, Charles P; Riley, Kelley R; Nwachuku, Nena; Ryu, Hodon; Abbaszadegan, Morteza
2003-07-01
The removal of the microsporidian Encephalitozoon intestinalis, feline calicivirus, and coliphages MS-2, PRD-1, and Fr was evaluated during conventional drinking water treatment in a pilot plant. The treatment consisted of coagulation, sedimentation, and mixed-media filtration. Fr coliphage was removed the most (3.21 log), followed by feline calicivirus (3.05 log), E. coli (2.67 log), MS-2 (2.51 log), E. intestinalis (2.47 log), and PRD-1 (1.85 log). With the exception of PRD-1, the greatest removal of the viruses occurred during the flocculation step of the water treatment process.
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
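One ingredient of the framework above is Tobit-style handling of antibody titers censored at an assay detection limit. A minimal sketch, assuming an exponential decay mean (one disease, no random effects, simulated data), fit by maximum likelihood with a censored-observation contribution:

```r
# Negative log-likelihood: normal density for observed titers, normal CDF
# (Tobit contribution) for titers at or below the limit of detection.
negll <- function(par, t, y, lod) {
  mu <- par["A"] * exp(-par["k"] * t)
  s  <- exp(par["logs"])
  cens <- y <= lod
  -sum(dnorm(y[!cens], mu[!cens], s, log = TRUE)) -
    sum(pnorm(lod, mu[cens], s, log.p = TRUE))
}

set.seed(2)
t <- runif(80, 0, 12)                                     # age in months
y <- pmax(8 * exp(-0.4 * t) + rnorm(80, sd = 0.5), 0.5)   # censored at 0.5
optim(c(A = 5, k = 0.2, logs = 0), negll, t = t, y = y, lod = 0.5)$par
```

The paper's full model additionally couples four such decays through correlated infant-level random effects.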
March, Jordon K; Pratt, Michael D; Lowe, Chinn-Woan; Cohen, Marissa N; Satterfield, Benjamin A; Schaalje, Bruce; O'Neill, Kim L; Robison, Richard A
2015-10-01
This study investigated (1) the susceptibility of Bacillus anthracis (Ames strain), Bacillus subtilis (ATCC 19659), and Clostridium sporogenes (ATCC 3584) spores to commercially available peracetic acid (PAA)- and glutaraldehyde (GA)-based disinfectants, (2) the effects that heat-shocking spores after treatment with these disinfectants has on spore recovery, and (3) the timing of heat-shocking after disinfectant treatment that promotes the optimal recovery of spores deposited on carriers. Suspension tests were used to obtain inactivation kinetics for the disinfectants against three spore types. The effects of heat-shocking spores after disinfectant treatment were also determined. Generalized linear mixed models were used to estimate 6-log reduction times for each spore type, disinfectant, and heat treatment combination. Reduction times were compared statistically using the delta method. Carrier tests were performed according to AOAC Official Method 966.04 and a modified version that employed immediate heat-shocking after disinfectant treatment. Carrier test results were analyzed using Fisher's exact test. PAA-based disinfectants had significantly shorter 6-log reduction times than the GA-based disinfectant. Heat-shocking B. anthracis spores after PAA treatment resulted in significantly shorter 6-log reduction times. Conversely, heat-shocking B. subtilis spores after PAA treatment resulted in significantly longer 6-log reduction times. Significant interactions were also observed between spore type, disinfectant, and heat treatment combinations. Immediately heat-shocking spore carriers after disinfectant treatment produced greater spore recovery. Sporicidal activities of disinfectants were not consistent across spore species. The effects of heat-shocking spores after disinfectant treatment were dependent on both disinfectant and spore species. Caution must be used when extrapolating sporicidal data of disinfectants from one spore species to another. Heat-shocking provides a more accurate picture of spore survival for only some disinfectant/spore combinations. Collaborative studies should be conducted to further examine a revision of AOAC Official Method 966.04 relative to heat-shocking. © 2015 The Authors. MicrobiologyOpen published by John Wiley & Sons Ltd.
Convective Mixing in Distal Pipes Exacerbates Legionella pneumophila Growth in Hot Water Plumbing
Rhoads, William J.; Pruden, Amy; Edwards, Marc A.
2016-01-01
Legionella pneumophila is known to proliferate in hot water plumbing systems, but little is known about the specific physicochemical factors that contribute to its regrowth. Here, L. pneumophila trends were examined in controlled, replicated pilot-scale hot water systems with continuous recirculation lines subject to two water heater settings (40 °C and 58 °C) and three distal tap water use frequencies (high, medium, and low) with two pipe configurations (oriented upward to promote convective mixing with the recirculating line and downward to prevent it). Water heater temperature setting determined where L. pneumophila regrowth occurred in each system, with an increase of up to 4.4 log gene copies/mL in the 40 °C system tank and recirculating line relative to influent water compared to only 2.5 log gene copies/mL regrowth in the 58 °C system. Distal pipes without convective mixing cooled to room temperature (23–24 °C) during periods of no water use, but pipes with convective mixing equilibrated to 30.5 °C in the 40 °C system and 38.8 °C in the 58 °C system. Corresponding with known temperature effects on L. pneumophila growth and enhanced delivery of nutrients, distal pipes with convective mixing had on average 0.2 log more gene copies/mL in the 40 °C system and 0.8 log more gene copies/mL in the 58 °C system. Importantly, this work demonstrated the potential for thermal control strategies to be undermined by distal taps in general, and convective mixing in particular.
Liang, Chao; Qiao, Jun-Qin; Lian, Hong-Zhen
2017-12-15
Reversed-phase liquid chromatography (RPLC) based octanol-water partition coefficient (logP) or distribution coefficient (logD) determination methods were revisited and assessed comprehensively. Classic isocratic and some gradient RPLC methods were conducted and evaluated for neutral, weakly acidic, and basic compounds. Different lipophilicity indexes for logP or logD determination were discussed in detail, including the retention factor logk_w corresponding to neat water as mobile phase, extrapolated via the linear solvent strength (LSS) model from isocratic runs or calculated with software from gradient runs; the chromatographic hydrophobicity index (CHI); the apparent gradient capacity factor (k_g'); and the gradient retention time (t_g). Among the lipophilicity indexes discussed, logk_w from either isocratic or gradient elution methods correlated best with logP or logD. Therefore, logk_w is recommended as the preferred lipophilicity index for logP or logD determination. logk_w easily calculated from methanol gradient runs might be the main candidate to replace logk_w calculated from classic isocratic runs as the ideal lipophilicity index. These revisited RPLC methods are not applicable to strongly ionized compounds that are hardly ion-suppressed. A previously reported, imperfect ion-pair RPLC (IP-RPLC) method was attempted and further explored for studying the distribution coefficients (logD) of sulfonic acids that are totally ionized in the mobile phase. Notably, experimental logD values of sulfonic acids are given for the first time. The IP-RPLC method provides a distinct way to explore logD values of ionized compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
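The LSS extrapolation named above is a straight line: log k = logk_w - S * phi, so logk_w is simply the intercept of isocratic log k regressed on the organic-modifier fraction phi. A minimal sketch with invented retention data:

```r
phi  <- c(0.40, 0.50, 0.60, 0.70)   # methanol volume fraction
logk <- c(1.35, 0.92, 0.51, 0.08)   # hypothetical isocratic retention factors

fit   <- lm(logk ~ phi)             # LSS model: logk = logk_w - S * phi
logkw <- coef(fit)[1]               # extrapolated retention in neat water
logkw
```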
NASA Astrophysics Data System (ADS)
Sıdır, Yadigar Gülseven; Sıdır, İsa
2013-08-01
In this study, twelve newly modeled N-substituted 6-acylbenzothiazolone derivatives with analgesic-analog structures have been investigated by quantum chemical methods using a range of electronic and structure-activity parameters: molecular polarizability (α), dipole moment (μ), E_HOMO, E_LUMO, q−, qH+, molecular volume (Vm), ionization potential (IP), electron affinity (EA), electronegativity (χ), molecular hardness (η), molecular softness (S), electrophilicity index (ω), heat of formation (HOF), molar refractivity (MR), octanol-water partition coefficient (log P), and thermochemical properties (entropy (S) and heat capacity (Cv)), in order to investigate relationships between activity and molecular structure. Correlations of log P with Vm, MR, ω, EA, the E_HOMO - E_LUMO gap (ΔE), HOF in the aqueous phase, χ, μ, S, and η, respectively, are obtained, while linear relations of log P with IP, Cv, and HOF in the gas phase are not observed. The log P parameter is found to depend on different properties of the compounds owing to their complexity.
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for assessing the relative importance of the genetic or heritable component against the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staebler, G. M.; Candy, J.; Howard, N. T.
2016-06-15
The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) E×B flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate by the zonal E×B velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.
Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.
2011-01-01
The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
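The reproducibility issue described above is easy to demonstrate. Below is a minimal sketch in R (not the authors' code; the data are simulated and all names are illustrative) that fits the same random-intercept logistic GLMM under two likelihood approximations, the Laplace method and adaptive Gauss-Hermite quadrature, one source of the between-procedure differences the report discusses.

```r
# Sketch: one binary GLMM, two likelihood approximations (lme4).
library(lme4)

set.seed(1)
n_subj <- 100; n_rep <- 5
subj <- factor(rep(1:n_subj, each = n_rep))
x    <- rnorm(n_subj * n_rep)
b    <- rnorm(n_subj, sd = 1.5)          # subject-level random intercepts
y    <- rbinom(n_subj * n_rep, 1, plogis(-0.5 + x + b[subj]))

fit_laplace <- glmer(y ~ x + (1 | subj), family = binomial, nAGQ = 1)
fit_agq     <- glmer(y ~ x + (1 | subj), family = binomial, nAGQ = 25)

# The fixed-effect estimates typically differ between approximations,
# and the gap grows with the random-effect variance.
cbind(laplace = fixef(fit_laplace), agq25 = fixef(fit_agq))
```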
Fitzsimmons, Eric J; Kvam, Vanessa; Souleyrette, Reginald R; Nambisan, Shashi S; Bonett, Douglas G
2013-01-01
Despite recent improvements in highway safety in the United States, serious crashes on curves remain a significant problem. To assist in better understanding causal factors leading to this problem, this article presents and demonstrates a methodology for collection and analysis of vehicle trajectory and speed data for rural and urban curves using Z-configured road tubes. For a large number of vehicle observations at 2 horizontal curves located in Dexter and Ames, Iowa, the article develops vehicle speed and lateral position prediction models for multiple points along these curves. Linear mixed-effects models were used to predict vehicle lateral position and speed along the curves as explained by operational, vehicle, and environmental variables. Behavior was visually represented for an identified subset of "risky" drivers. Linear mixed-effect regression models provided the means to predict vehicle speed and lateral position while taking into account repeated observations of the same vehicle along horizontal curves. Speed and lateral position at point of entry were observed to influence trajectory and speed profiles. Rural horizontal curve site models are presented that indicate that the following variables were significant and influenced both vehicle speed and lateral position: time of day, direction of travel (inside or outside lane), and type of vehicle.
NASA Astrophysics Data System (ADS)
Faulkner, B. R.; Lyon, W. G.
2001-12-01
We present a probabilistic model for predicting virus attenuation. The solution employs the assumption of complete mixing. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve 4-log attenuation. We tabulated data from related studies to develop probability density functions for input parameters, and utilized a database of soil hydraulic parameters based on the 12 USDA soil categories. Regulators can use the model based on limited information such as boring logs, climate data, and soil survey reports for a particular site of interest. Plackett-Burman sensitivity analysis indicated that the most important main effects on the probability of failure to achieve 4-log attenuation in our model were the mean logarithm of saturated hydraulic conductivity (+0.396), mean water content (+0.203), mean solid-water mass transfer coefficient (-0.147), and mean solid-water equilibrium partitioning coefficient (-0.144). Using the model, we predicted the probability of failure for a proposed one-meter-thick hydrogeologic barrier with a water content of 0.3. With the currently available data and the associated uncertainty, we predicted that soils classified as sand would fail (p=0.999), silt loams would also fail (p=0.292), but soils classified as clays would provide the required 4-log attenuation (p=0.001). The model is extendible in the sense that the probability density functions of parameters can be modified as future studies refine the uncertainty, and the lightweight object-oriented design of the computer model (implemented in Java) will facilitate reuse with modified classes. This is an abstract of a proposed presentation and does not necessarily reflect EPA policy.
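As a rough illustration of the Monte Carlo logic (not the authors' model: the travel-time formula, the distributions, and every parameter value below are invented for the sketch), uncertainty in soil and virus parameters can be propagated to a probability of failing the 4-log target:

```r
# Sketch: ensemble simulation of log10 virus attenuation across a 1-m barrier.
set.seed(42)
n <- 1e5

log_Ksat <- rnorm(n, mean = -1, sd = 1)        # log10 hydraulic conductivity, m/day
theta    <- pmax(rnorm(n, 0.30, 0.05), 0.05)   # volumetric water content
k_att    <- rlnorm(n, meanlog = log(2), sdlog = 0.5)  # attenuation rate, 1/day

t_travel    <- theta / 10^log_Ksat             # days to cross a 1-m barrier (crude)
log_removal <- k_att * t_travel / log(10)      # first-order decay in log10 units

mean(log_removal < 4)   # probability of failing to achieve 4-log attenuation
```

Consistent with the reported sensitivities, raising the conductivity (faster travel) increases the failure probability in this toy version, while a larger attenuation rate decreases it.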
Mixed Integer Linear Programming model for Crude Palm Oil Supply Chain Planning
NASA Astrophysics Data System (ADS)
Sembiring, Pasukat; Mawengkang, Herman; Sadyadharma, Hendaru; Bu'ulolo, F.; Fajriana
2018-01-01
The production process of crude palm oil (CPO) can be described as the milling of the raw material, called fresh fruit bunches (FFB), into the end product, palm oil. The process usually runs through a series of steps producing and consuming intermediate products. The CPO milling industry considered in this paper does not have its own oil palm plantation; therefore, the FFB are supplied by several public oil palm plantations. Due to the limited availability of FFB, it is necessary to choose which plantations would be appropriate. This paper proposes a mixed integer linear programming model for the integrated supply chain problem, which includes waste processing. The mathematical programming model is solved using a neighborhood search approach.
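The model class can be illustrated with a toy plantation-selection MILP in R's lpSolve package (all data are invented, and the paper's actual formulation, including waste processing, is not reproduced here):

```r
# Sketch: pick which public plantations supply FFB to the mill.
library(lpSolve)

cost   <- c(12, 10, 15)     # transport cost per tonne from 3 plantations
fixed  <- c(200, 250, 180)  # fixed cost of contracting each plantation
supply <- c(400, 300, 500)  # FFB available at each plantation, tonnes
demand <- 600               # FFB the mill must receive, tonnes

# Decision variables: x1..x3 = tonnes shipped, y1..y3 = 0/1 contract choices.
obj <- c(cost, fixed)
con <- rbind(
  c(1, 1, 1, 0, 0, 0),          # x1 + x2 + x3 >= demand
  cbind(diag(3), -diag(supply)) # x_i - supply_i * y_i <= 0 (linking)
)
dir <- c(">=", rep("<=", 3))
rhs <- c(demand, rep(0, 3))

sol <- lp("min", obj, con, dir, rhs, binary.vec = 4:6)
sol$solution   # optimal tonnage per plantation and contract decisions
```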
Joint T1 and brain fiber log-demons registration using currents to model geometry.
Siless, Viviana; Glaunès, Joan; Guevara, Pamela; Mangin, Jean-François; Poupon, Cyril; Le Bihan, Denis; Thirion, Bertrand; Fillard, Pierre
2012-01-01
We present an extension of the diffeomorphic Geometric Demons algorithm which combines iconic registration with geometric constraints. Our algorithm works in the log-domain space, so that one can efficiently compute the deformation field of the geometry. We represent the shape of objects of interest in the space of currents, which is sensitive to both the location and the geometric structure of objects. Currents provide a distance between geometric structures that can be defined without specifying explicit point-to-point correspondences. We demonstrate this framework by simultaneously registering T1 images and 65 fiber bundles consistently extracted in 12 subjects and compare it against non-linear T1, tensor, and multi-modal T1 + Fractional Anisotropy (FA) registration algorithms. Results show the superiority of the Log-domain Geometric Demons over their purely iconic counterparts.
Accuracy and precision of Legionella isolation by US laboratories in the ELITE program pilot study.
Lucas, Claressa E; Taylor, Thomas H; Fields, Barry S
2011-10-01
A pilot study for the Environmental Legionella Isolation Techniques Evaluation (ELITE) Program, a proficiency testing scheme for US laboratories that culture Legionella from environmental samples, was conducted September 1, 2008 through March 31, 2009. Participants (n=20) processed panels consisting of six sample types: pure and mixed positive, pure and mixed negative, pure and mixed variable. The majority (93%) of all samples (n=286) were correctly characterized, with 88.5% of samples positive for Legionella and 100% of negative samples identified correctly. Variable samples were incorrectly identified as negative in 36.9% of reports. For all samples reported positive (n=128), participants underestimated the cfu/ml by a mean of 1.25 logs with standard deviation of 0.78 logs, standard error of 0.07 logs, and a range of 3.57 logs compared to the CDC re-test value. Centering results around the interlaboratory mean yielded a standard deviation of 0.65 logs, standard error of 0.06 logs, and a range of 3.22 logs. Sampling protocol, treatment regimen, culture procedure, and laboratory experience did not significantly affect the accuracy or precision of reported concentrations. Qualitative and quantitative results from the ELITE pilot study were similar to reports from a corresponding proficiency testing scheme available in the European Union, indicating these results are probably valid for most environmental laboratories worldwide. The large enumeration error observed suggests that the need for remediation of a water system should not be determined solely by the concentration of Legionella observed in a sample since that value is likely to underestimate the true level of contamination. Published by Elsevier Ltd.
Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L
2014-01-01
Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and the quality of results and information reported from GLMMs in the field of clinical medicine. A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models", and "multilevel generalized linear model", refined by the research domain science and technology. Papers reporting methodological considerations without application, and those that were not involved in clinical medicine or not written in English, were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel designs, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). However, most of the useful information about the fitted GLMMs was not reported. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection, and the method of goodness of fit were reported in only 8.0%, 36.8%, and 14.9% of the articles, respectively. During recent years, the use of GLMMs in the medical literature has increased to take into account the correlation of the data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the characteristics of the analysis, the estimation method, validation, and selection of the model.
Ultrafast CT scanning of an oak log for internal defects
Francis G. Wagner; Fred W. Taylor; Douglas S. Ladd; Charles W. McMillin; Fredrick L. Roder
1989-01-01
Detecting internal defects in sawlogs and veneer logs with computerized tomographic (CT) scanning is possible, but has been impractical due to the long scanning time required. This research investigated a new scanner able to acquire 34 cross-sectional log scans per second. This scanning rate translates to a linear log feed rate of 85 feet (25.91 m) per minute at one...
Relationship between vitamin D and inflammatory markers in older individuals.
De Vita, Francesca; Lauretani, Fulvio; Bauer, Juergen; Bautmans, Ivan; Shardell, Michelle; Cherubini, Antonio; Bondi, Giuliana; Zuliani, Giovanni; Bandinelli, Stefania; Pedrazzoni, Mario; Dall'Aglio, Elisabetta; Ceda, Gian Paolo; Maggio, Marcello
2014-01-01
In older persons, vitamin D insufficiency and a subclinical chronic inflammatory status frequently coexist. Vitamin D has immune-modulatory and in vitro anti-inflammatory properties. However, there is inconclusive evidence about the anti-inflammatory role of vitamin D in older subjects. Thus, we investigated the hypothesis of an inverse relationship between 25-hydroxyvitamin D (25(OH)D) and inflammatory markers in a population-based study of older individuals. After excluding participants with high-sensitivity C-reactive protein (hsCRP) ≥ 10 mg/dl and those who were on chronic anti-inflammatory treatment, we evaluated 867 older adults ≥65 years from the InCHIANTI Study. Participants had complete data on serum concentrations of 25(OH)D, hsCRP, tumor necrosis factor (TNF)-α, soluble TNF-α receptors 1 and 2, interleukin (IL)-1β, IL-1 receptor antagonist, IL-10, IL-18, IL-6, and soluble IL-6 receptors (sIL6r and sgp130). Two general linear models were fit (model 1: adjusted for age, sex, and parathyroid hormone (PTH); model 2: including the covariates of model 1 plus dietary and smoking habits, physical activity, ADL disability, season, osteoporosis, depressive status, and comorbidities). The mean age was 75.1 ± 17.1 (SD) years. In model 1, log(25(OH)D) was significantly and inversely associated with log(IL-6) (β ± SE = -0.11 ± 0.03, p < 0.0001) and log(hsCRP) (β ± SE = -0.04 ± 0.02, p = 0.04) and positively associated with log(sIL6r) (β ± SE = 0.11 ± 0.04, p = 0.003), but not with other inflammatory markers. In model 2, log(25(OH)D) remained negatively associated with log(IL-6) (β ± SE = -0.10 ± 0.03, p = 0.0001) and positively associated with log(sIL6r) (β ± SE = 0.11 ± 0.03, p = 0.004), but not with log(hsCRP) (β ± SE = -0.01 ± 0.03, p = 0.07). 25(OH)D is independently and inversely associated with IL-6 and positively with sIL6r, suggesting a potential anti-inflammatory role for vitamin D in older individuals.
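The form of these models can be sketched in R on simulated data (variable names and coefficients are illustrative, not the InCHIANTI data); model 1's adjustment set fits in a single call:

```r
# Sketch: model 1, log-transformed outcome adjusted for age, sex, and PTH.
set.seed(7)
n <- 300
d <- data.frame(
  age  = runif(n, 65, 95),
  sex  = factor(sample(c("F", "M"), n, replace = TRUE)),
  PTH  = rlnorm(n, 3, 0.3),
  vitD = rlnorm(n, 3, 0.5)
)
d$IL6 <- with(d, exp(1.2 - 0.11 * log(vitD) + 0.01 * (age - 75) +
                     rnorm(n, 0, 0.4)))

m1 <- lm(log(IL6) ~ log(vitD) + age + sex + PTH, data = d)
summary(m1)$coefficients["log(vitD)", ]   # beta, SE, t value, p value
```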
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-04-01
To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI, -0.03 to 0.32 D; p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of the standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
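Why ignoring the pairing misleads can be reproduced in a few lines; the sketch below uses R's lme4 rather than the SAS procedures in the article, and every number is invented:

```r
# Sketch: naive regression vs mixed model for paired (two-eye) data.
library(lme4)

set.seed(3)
n <- 200
person <- factor(rep(1:n, each = 2))
eye    <- factor(rep(c("fellow", "affected"), n),
                 levels = c("fellow", "affected"))
u      <- rnorm(n, sd = 1.2)                      # person-level effect
refr   <- 0.15 * (eye == "affected") + u[person] + rnorm(2 * n, sd = 0.8)

fit_naive <- lm(refr ~ eye)                   # ignores inter-eye correlation
fit_mixed <- lmer(refr ~ eye + (1 | person))  # person as random effect

c(naive_SE = summary(fit_naive)$coefficients["eyeaffected", "Std. Error"],
  mixed_SE = coef(summary(fit_mixed))["eyeaffected", "Std. Error"])
```

With a positive intra-person correlation, the within-pair comparison in the mixed model gives the smaller, and here more honest, standard error, mirroring the narrower confidence interval reported above.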
Wave models for turbulent free shear flows
NASA Technical Reports Server (NTRS)
Liou, W. W.; Morris, P. J.
1991-01-01
New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large-scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time-dependent motion of the large-scale structure of the mixing region are made. The predictions show good agreement with experimental observations.
Finite mixture models for the computation of isotope ratios in mixed isotopic samples
NASA Astrophysics Data System (ADS)
Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas
2013-04-01
Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and depend on the judgement of the analyst, so isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models fit several linear models (regression lines) to subgroups of the data, taking the respective slopes as estimates of the isotope ratios. The finite mixture models are parameterised by the number of different ratios, the number of points belonging to each ratio group, and the ratios (i.e. slopes) of each group. Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only sets some control parameters of the algorithm, i.e. the maximum number of ratios and the minimum relative group size of data points belonging to each ratio. Computation of the models can be done with statistical software. In this study, Leisch and Grün's flexmix package [2] for the statistical open-source software R was applied. A code example is available in the electronic supplementary material of Kappel et al. [1]. In order to demonstrate the usefulness of finite mixture models in fields dealing with the computation of multiple isotope ratios in mixed samples, a transparent example based on simulated data is presented and problems regarding small group sizes are illustrated. In addition, the application of finite mixture models to isotope ratio data measured in uranium oxide particles is shown. The results indicate that finite mixture models perform well in computing isotope ratios relative to traditional estimation procedures and can be recommended as a more objective and straightforward way to calculate isotope ratios in geochemistry than current practice. [1] S. Kappel, S. Boulyga, L. Dorta, D. Günther, B. Hattendorf, D. Koffler, G. Laaha, F. Leisch and T. Prohaska: Evaluation Strategies for Isotope Ratio Measurements of Single Particles by LA-MC-ICPMS, Analytical and Bioanalytical Chemistry, 2013, accepted for publication on 2012-12-18 (doi: 10.1007/s00216-012-6674-3). [2] B. Grün and F. Leisch: Fitting finite mixtures of generalized linear regressions in R. Computational Statistics & Data Analysis, 51(11), 5247-5252, 2007 (doi: 10.1016/j.csda.2006.08.014).
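Following the description above, a minimal flexmix sketch on simulated two-ratio data (the signal intensities, ratios, and noise level are invented; see the supplementary code of Kappel et al. [1] for the real workflow):

```r
# Sketch: mixture of two regressions through the origin; each component's
# slope estimates one isotope ratio.
library(flexmix)

set.seed(11)
d <- data.frame(i238 = runif(300, 0.1, 1))              # major-isotope signal
true_ratio <- sample(c(0.0073, 0.034), 300, replace = TRUE)
d$i235 <- true_ratio * d$i238 + rnorm(300, sd = 0.0015) # minor-isotope signal

fit <- flexmix(i235 ~ i238 - 1, data = d, k = 2)
parameters(fit)   # per-component slopes ~ the two isotope ratios
```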
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared to existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, not requiring any transformation of the probabilities or any link function, having a closed-form expression of the likelihood function, and imposing no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of the bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model through simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
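One ingredient of the approach, the marginal beta-binomial likelihood for a single outcome across studies, can be maximized directly in a few lines of R (a hedged sketch on simulated data; the paper's composite likelihood additionally links the two outcomes, which is omitted here):

```r
# Sketch: marginal beta-binomial fit for one outcome (e.g., sensitivity).
dbetabinom_log <- function(y, n, mu, rho) {
  # mean/correlation parameterisation: mu = a/(a+b), rho = 1/(a+b+1)
  a <- mu * (1 - rho) / rho
  b <- (1 - mu) * (1 - rho) / rho
  lchoose(n, y) + lbeta(y + a, n - y + b) - lbeta(a, b)
}

set.seed(5)
n <- rpois(12, 80)            # study sizes
p <- rbeta(12, 8, 2)          # study-specific true probabilities
y <- rbinom(12, n, p)         # observed events

nll <- function(par) -sum(dbetabinom_log(y, n, plogis(par[1]), plogis(par[2])))
fit <- optim(c(qlogis(0.8), qlogis(0.1)), nll)
c(mu = plogis(fit$par[1]), rho = plogis(fit$par[2]))
```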
ERIC Educational Resources Information Center
Si, Yajuan; Reiter, Jerome P.
2013-01-01
In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Context Effects in Multi-Alternative Decision Making: Empirical Data and a Bayesian Model
ERIC Educational Resources Information Center
Hawkins, Guy; Brown, Scott D.; Steyvers, Mark; Wagenmakers, Eric-Jan
2012-01-01
For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates--sometimes error rates increase with the number of choice alternatives, and…
Reassessing the Economic Value of Advanced Level Mathematics
ERIC Educational Resources Information Center
Adkins, Michael; Noyes, Andrew
2016-01-01
In the late 1990s, the economic return to Advanced level (A-level) mathematics was examined. The analysis was based upon a series of log-linear models of earnings in the 1958 National Child Development Survey (NCDS) and the National Survey of 1980 Graduates and Diplomates. The core finding was that A-level mathematics had a unique earnings premium…
Interracial and Intraracial Patterns of Mate Selection among America's Diverse Black Populations
ERIC Educational Resources Information Center
Batson, Christie D.; Qian, Zhenchao; Lichter, Daniel T.
2006-01-01
Despite recent immigration from Africa and the Caribbean, Blacks in America are still viewed as a monolith in many previous studies. In this paper, we use newly released 2000 census data to estimate log-linear models that highlight patterns of interracial and intraracial marriage and cohabitation among African Americans, West Indians, Africans,…
A mixed-methods analysis of logging injuries in Montana and Idaho.
Lagerstrom, Elise; Magzamen, Sheryl; Rosecrance, John
2017-12-01
Despite advances in mechanization, logging continues to be one of the most dangerous occupations in the United States. Logging in the Intermountain West region (Montana and Idaho) is especially hazardous due to steep terrain, extreme weather, and remote work locations. We implemented a mixed-methods approach combining analyses of workers' compensation claims and focus groups to identify factors associated with injuries and fatalities in the logging industry. Inexperienced workers (<6 months of experience) accounted for over 25% of claims. Sprain/strain injuries were the most common, accounting for 36% of claims, while fatalities had the highest median claim cost ($274,411). Focus groups identified job tasks involving felling trees, skidding, and truck driving as having the highest risk. Injury prevention efforts should focus on training related to safe work methods (especially for inexperienced workers), the development of a safety culture and safety leadership, and the implementation of engineering controls. © 2017 Wiley Periodicals, Inc.
General-Purpose Software For Computer Graphics
NASA Technical Reports Server (NTRS)
Rogers, Joseph E.
1992-01-01
NASA Device Independent Graphics Library (NASADIG) is general-purpose computer-graphics package for computer-based engineering and management applications which gives opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.
Microbiological examination of vegetable seed sprouts in Korea.
Kim, Hoikyung; Lee, Youngjun; Beuchat, Larry R; Yoon, Bong-June; Ryu, Jee-Hoon
2009-04-01
Sprouted vegetable seeds used as food have been implicated as sources of outbreaks of Salmonella and Escherichia coli O157:H7 infections. We profiled the microbiological quality of sprouts and seeds sold at retail shops in Seoul, Korea. Ninety samples of radish sprouts and mixed sprouts purchased at department stores, supermarkets, and traditional markets and 96 samples of radish, alfalfa, and turnip seeds purchased from online stores were analyzed to determine the number of total aerobic bacteria (TAB) and molds or yeasts (MY) and the incidence of Salmonella, E. coli O157:H7, and Enterobacter sakazakii. Significantly higher numbers of TAB (7.52 log CFU/g) and MY (7.36 log CFU/g) were present on mixed sprouts than on radish sprouts (6.97 and 6.50 log CFU/g, respectively). Populations of TAB and MY on the sprouts were not significantly affected by location of purchase. Radish seeds contained TAB and MY populations of 4.08 and 2.42 log CFU/g, respectively, whereas populations of TAB were only 2.54 to 2.84 log CFU/g and populations of MY were 0.82 to 1.69 log CFU/g on alfalfa and turnip seeds, respectively. Salmonella and E. coli O157:H7 were not detected on any of the sprout and seed samples tested. E. sakazakii was not found on seeds, but 13.3% of the mixed sprout samples contained this potentially pathogenic bacterium.
The Apollo 16 regolith - A petrographically-constrained chemical mixing model
NASA Technical Reports Server (NTRS)
Kempa, M. J.; Papike, J. J.; White, C.
1980-01-01
A mixing model for Apollo 16 regolith samples has been developed, which differs from other A-16 mixing models in that it is both petrographically constrained and statistically sound. The model was developed using three components representative of rock types present at the A-16 site, plus a representative mare basalt. A linear least-squares fitting program employing the chi-squared test and sum of components was used to determine goodness of fit. Results for surface soils indicate that either there are no significant differences between Cayley and Descartes material at the A-16 site or, if differences do exist, they have been obscured by meteoritic reworking and mixing of the lithologies.
Chiem, N H; Harrison, D J
1998-03-01
A glass microchip is described in which reagents and serum samples for competitive immunoassay of serum theophylline can be mixed, reacted, separated, and analyzed. The device functions as an automated microfluidic immunoassay system, creating a lab-on-a-chip. Electroosmotic pumping was used to control first the mixing of 50-fold-diluted serum sample with labeled theophylline tracer in a 1:1 ratio, followed by 1:1 mixing and reaction with anti-theophylline antibody. The 51-nL on-chip mixer gave the same concentration as dilution performed off-chip, within 3%. A 100-pL plug of the reacted solution was then injected into an electrophoresis separation channel integrated within the same chip. Measurements of free and bound tracer by fluorescence detection gave linear calibration curves of signal vs log[theophylline] between 0 and 40 mg/L, with a slope of 0.52 +/- 0.03 and an intercept of -0.04 +/- 0.04 after a 90-s reaction time. A detection limit of 0.26 mg/L in serum (expressed before the dilution step, actual concentration of 1.3 micrograms/L at the detector) was obtained. Recovery values were 107% +/- 8% for 15 mg/L serum samples.
ERIC Educational Resources Information Center
Kelderman, Henk
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
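For context, base R's loglin() fits hierarchical log-linear models by iterative proportional fitting on the fitted margins; a minimal sketch (the table is invented, and this does not reproduce the paper's minimal-sufficient-statistics variant):

```r
# Sketch: IPF fit of the no-three-way-interaction model to a 2x2x2 table.
tab <- array(c(30, 10, 20, 40,
               25, 15, 18, 35),
             dim = c(2, 2, 2),
             dimnames = list(A = c("a1", "a2"),
                             B = c("b1", "b2"),
                             C = c("c1", "c2")))

fit <- loglin(tab, margin = list(c(1, 2), c(1, 3), c(2, 3)))
c(lrt = fit$lrt, df = fit$df)   # likelihood-ratio test of the model
```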
Yoon, Hyunjoo; Lee, Joo-Yeon; Suk, Hee-Jin; Lee, Sunah; Lee, Heeyoung; Lee, Soomin; Yoon, Yohan
2012-12-01
This study developed models to predict the growth probabilities and kinetic behavior of Salmonella enterica strains on cutting boards. Polyethylene coupons (3 by 5 cm) were rubbed with pork belly, and pork purge was then sprayed on the coupon surface, followed by inoculation of a five-strain Salmonella mixture onto the surface of the coupons. These coupons were stored at 13 to 35°C for 12 h, and total bacterial and Salmonella cell counts were enumerated on tryptic soy agar and xylose lysine deoxycholate (XLD) agar, respectively, every 2 h, which produced 56 combinations. The combinations that had growth of ≥0.5 log CFU/cm(2) of Salmonella bacteria recovered on XLD agar were given the value 1 (growth), and the combinations that had growth of <0.5 log CFU/cm(2) were assigned the value 0 (no growth). These growth response data from XLD agar were analyzed by logistic regression for producing growth/no growth interfaces of Salmonella bacteria. In addition, a linear model was fitted to the Salmonella cell counts to calculate the growth rate (log CFU per square centimeter per hour) and initial cell count (log CFU per square centimeter), following secondary modeling with the square root model. All of the models developed were validated with observed data, which were not used for model development. Growth of total bacteria and Salmonella cells was observed at 28, 30, 33, and 35°C, but there was no growth detected below 20°C within the time frame investigated. Moreover, various indices indicated that the performance of the developed models was acceptable. The results suggest that the models developed in this study may be useful in predicting the growth/no growth interface and kinetic behavior of Salmonella bacteria on polyethylene cutting boards.
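The two modeling stages described above can be sketched in R on simulated data (the temperature-time grid, coefficients, and rates below are invented, not the study's 56 combinations):

```r
# Stage 1: logistic growth/no-growth interface over temperature and time.
set.seed(9)
temp   <- rep(seq(13, 35, by = 2), each = 6)
time   <- rep(seq(2, 12, by = 2), times = 12)
growth <- rbinom(length(temp), 1, plogis(-20 + 0.6 * temp + 0.5 * time))

interface <- glm(growth ~ temp + time, family = binomial)

# Stage 2: square-root secondary model, sqrt(rate) linear in temperature.
obs_t <- 28:35
rate  <- (0.04 * (obs_t - 20))^2 + rnorm(8, 0, 0.002)  # log CFU/cm2/h

sqrt_fit <- lm(sqrt(rate) ~ obs_t)
coef(sqrt_fit)   # slope b; implied T_min is -intercept / b
```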
Comparison of bacteriophage and enteric virus removal in pilot scale activated sludge plants.
Arraj, A; Bohatier, J; Laveran, H; Traore, O
2005-01-01
The aim of this experimental study was to determine comparatively the removal of two types of bacteriophages, a somatic coliphage and an F-specific RNA phage, and of three types of enteric viruses, hepatitis A virus (HAV), poliovirus, and rotavirus, during sewage treatment by activated sludge using laboratory pilot plants. The cultivable simian rotavirus SA11, the HAV HM 175/18f cytopathic strain, and poliovirus were quantified by cell culture. The bacteriophages were quantified by plaque formation on the host bacterium in agar medium. In each experiment, two pilots simulating full-scale activated sludge plants were inoculated with viruses at known concentrations, and mixed liquor and effluent samples were analysed regularly. In the mixed liquor, liquid and solid fractions were analysed separately. The viral behaviour in both the liquid and solid phases was similar between the pilots of each experiment. Viral concentrations decreased rapidly following viral injection in the pilots. Ten minutes after the injections, viral concentrations in the liquid phase had decreased from 1.0 +/- 0.4 log to 2.2 +/- 0.3 log. Poliovirus and HAV were predominantly adsorbed on the solid matter of the mixed liquor, while rotavirus was not detectable in the solid phase. In our model, the estimated mean log viral reductions after the 3-day experiments were 9.2 +/- 0.4 for rotavirus, 6.6 +/- 2.4 for poliovirus, 5.9 +/- 3.5 for HAV, 3.2 +/- 1.2 for MS2, and 2.3 +/- 0.5 for PhiX174. This study demonstrates that the pilots are useful models to assess the removal of infectious enteric viruses and bacteriophages by activated sludge treatment. Our results show the efficacy of the activated sludge treatment on the five viruses and suggest that coliphages could be an acceptable indicator of viral removal in this treatment system.
On the validity of effective formulations for transport through heterogeneous porous media
NASA Astrophysics Data System (ADS)
de Dreuzy, J.-R.; Carrera, J.
2015-11-01
Geological heterogeneity enhances spreading of solutes and causes transport to be anomalous (i.e., non-Fickian), with much less mixing than suggested by dispersion. This implies that modeling transport requires adopting either stochastic approaches that model heterogeneity explicitly or effective transport formulations that acknowledge the effects of heterogeneity. A number of such formulations have been developed and tested as upscaled representations of enhanced spreading. However, their ability to represent mixing has not been formally tested, which is required for proper reproduction of chemical reactions and which motivates our work. We propose that, for an effective transport formulation to be considered a valid representation of transport through Heterogeneous Porous Media (HPM), it should honor mean advection, mixing and spreading. It should also be flexible enough to be applicable to real problems. We test the capacity of the Multi-Rate Mass Transfer (MRMT) formulation to reproduce the mixing observed in HPM, as represented by the classical multi-Gaussian log-permeability field with a Gaussian correlation pattern. Non-dispersive mixing comes from heterogeneity structures in the concentration fields that are not captured by macrodispersion. These fine structures limit mixing initially, but eventually enhance it. Numerical results show that, relative to HPM, MRMT models display a much stronger memory of initial conditions on mixing than on dispersion because of the sensitivity of the mixing state to the actual values of concentration. Because MRMT does not restitute the local concentration structures, it induces smaller non-dispersive mixing than HPM. However, long-lived trapping in the immobile zones may sustain the deviation from dispersive mixing over much longer times. While spreading can be well captured by MRMT models, non-dispersive mixing cannot.
Logging costs and production rates for the group selection cutting method
Philip M. McDonald
1965-01-01
Young-growth, mixed-conifer stands were logged by a group-selection method designed to create openings 30, 60, and 90 feet in diameter. Total costs for felling, limbing, bucking, and skidding on these openings ranged from $7.04 to $7.99 per thousand board feet. Cost differences between openings were not statistically significant. Logging costs for group selection...
van Os-Medendorp, Harmieke; van Leent-de Wit, Ilse; de Bruin-Weller, Marjolein; Knulst, André
2015-05-23
Two online self-management programs for patients with atopic dermatitis (AD) or food allergy (FA) were developed with the aim of helping patients cope with their condition, follow the prescribed treatment regimen, and deal with the consequences of their illness in daily life. Both programs consist of several modules containing information, personal stories by fellow patients, videos, and exercises with feedback. Health care professionals can refer their patients to the programs. However, the use of the programs in daily practice is unknown. The aim of this study was to explore the use and the characteristics of users of the online self-management programs "Living with eczema" and "Living with food allergy", and to investigate factors related to the use of the trainings. A cross-sectional design was carried out in which the outcome parameters were the number of log-ins by patients, the number of hits on the system's core features, disease severity, quality of life, and domains of self-management. Descriptive statistics were used to summarize sample characteristics and to describe the number of log-ins and hits per module and per functionality. Correlation and regression analyses were used to explore the relation between the number of log-ins and patient characteristics. Since the start, 299 adult patients have been referred to the online AD program; 173 logged in on at least one occasion. Data from 75 AD patients were available for analyses. The mean number of log-ins was 3.1 (range 1-11). Linear regression with the number of log-ins as the dependent variable showed that age and quality of life contributed most to the model, with betas of .35 (P=.002) and .26 (P=.05), respectively, and an R(2) of .23. Two hundred fourteen adult FA patients were referred to the online FA training; 124 logged in on at least one occasion, and data from 45 patients were available for analysis. The mean number of log-ins was 3.0 (range 1-11). Linear regression with the number of log-ins as the dependent variable revealed that adding the self-management domain "social integration and support" to the model led to an R(2) of .13. The modules with information about the disease, diagnosis, and treatment were the most visited. Most hits were on the information parts of the modules (55-58%), followed by exercises (30-32%). The online self-management programs "Living with eczema" and "Living with food allergy" were used by patients in addition to the usual face-to-face care. Almost 60% of all referred patients logged in, with an average of three log-ins. All modules seemed to be relevant, but there is room for improvement in the use of the training. Age, quality of life, and lower social integration and support were related to the use of the training, but only part of the variance in use could be explained by these variables.
Identifying ontogenetic, environmental and individual components of forest tree growth
Chaubert-Pereira, Florence; Caraglio, Yves; Lavergne, Christian; Guédon, Yann
2009-01-01
Background and Aims This study aimed to identify and characterize the ontogenetic, environmental and individual components of forest tree growth. In the proposed approach, the tree growth data typically correspond to the retrospective measurement of annual shoot characteristics (e.g. length) along the trunk. Methods Dedicated statistical models (semi-Markov switching linear mixed models) were applied to data sets of Corsican pine and sessile oak. In the semi-Markov switching linear mixed models estimated from these data sets, the underlying semi-Markov chain represents both the succession of growth phases and their lengths, while the linear mixed models represent both the influence of climatic factors and the inter-individual heterogeneity within each growth phase. Key Results On the basis of these integrative statistical models, it is shown that growth phases are not only defined by average growth level but also by growth fluctuation amplitudes in response to climatic factors and inter-individual heterogeneity and that the individual tree status within the population may change between phases. Species plasticity affected the response to climatic factors while tree origin, sampling strategy and silvicultural interventions impacted inter-individual heterogeneity. Conclusions The transposition of the proposed integrative statistical modelling approach to cambial growth in relation to climatic factors and the study of the relationship between apical growth and cambial growth constitute the next steps in this research. PMID:19684021
Smooth individual level covariates adjustment in disease mapping.
Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise
2018-05-01
Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregresssive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to track such nonlinearity between individual level covariate and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregresssive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
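The penalized-spline ingredient can be illustrated with mgcv (a sketch of the model class only: the CAR spatial term and the paper's two-step distributed estimation are omitted, and all data are simulated):

```r
# Sketch: nonlinear individual-level effect plus linear group-level effect.
library(mgcv)

set.seed(13)
n     <- 500
x_ind <- runif(n)                    # individual-level covariate
x_grp <- rep(rnorm(25), each = 20)   # group-level covariate (25 groups)
y     <- rpois(n, exp(0.2 + sin(2 * pi * x_ind) + 0.3 * x_grp))

fit <- gam(y ~ s(x_ind) + x_grp, family = poisson)
summary(fit)$s.table   # the smooth is clearly non-log-linear
```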
Shimony, Maya K; Schliep, Karen C; Schisterman, Enrique F; Ahrens, Katherine A; Sjaarda, Lindsey A; Rotman, Yaron; Perkins, Neil J; Pollack, Anna Z; Wactawski-Wende, Jean; Mumford, Sunni L
2016-03-01
To prospectively assess the association between sugar-sweetened beverages (SSB), added sugar, and total fructose and serum concentrations of liver enzymes among healthy, reproductive-age women. A prospective cohort of 259 premenopausal women (average age 27.3 ± 8.2 years; BMI 24.1 ± kg/m(2)) were followed up for up to two menstrual cycles, providing up to eight fasting blood specimens/cycle and four 24-h dietary recalls/cycle. Women with a history of chronic disease were excluded. Alanine and aspartate aminotransferases (ALT and AST, respectively) were measured in serum samples. Linear mixed models estimated associations between average SSB, added sugar, and total fructose intake and log-transformed liver enzymes adjusting for age, race, body mass index, total energy and alcohol intake, and Mediterranean diet score. For every 1 cup/day increase in SSB consumption and 10 g/day increase in added sugar and total fructose, log ALT increased by 0.079 U/L (95 % CI 0.022, 0.137), 0.012 U/L (95 % CI 0.002, 0.022), and 0.031 (0.012, 0.050), respectively, and log AST increased by 0.029 U/L (-0.011, 0.069), 0.007 U/L (0.000, 0.014), and 0.017 U/L (0.004, 0.030), respectively. Women who consumed ≥1.50 cups/day (12 oz can) SSB versus less had 0.127 U/L (95 % CI 0.001, 0.254) higher ALT [percent change 13.5 % (95 % CI 0.1, 28.9)] and 0.102 (95 % CI 0.015, 0.190) higher AST [percent change 10.8 % (95 % CI 1.5, 20.9)]. Sugar-sweetened beverages were associated with higher serum ALT and AST concentrations among healthy premenopausal women, indicating that habitual consumption of even moderate SSB may elicit hepatic lipogenesis.
Heavy neutrino mixing and single production at linear collider
NASA Astrophysics Data System (ADS)
Gluza, J.; Maalampi, J.; Raidal, M.; Zrałek, M.
1997-02-01
We study the single production of heavy neutrinos via the processes e- e+ -> νN and e- γ -> W- N at future linear colliders. As a basis for our considerations we take a wide class of models, both with vanishing and non-vanishing left-handed Majorana neutrino mass matrix mL. We perform a model-independent analysis of the existing experimental data and find connections between the characteristics of heavy neutrinos (masses, mixings, CP eigenvalues) and the mL parameters. We show that with the present experimental constraints, heavy neutrino masses almost up to the collision energy can be tested in the future experiments.
A CORRELATION BETWEEN RADIATION TOLERANCE AND NUCLEAR SURFACE AREA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iversen, S.
1962-09-22
Sparrow and Miksche (Science, 134:282) determined the dose (r/day) required to produce severe growth inhibition in 23 species of plants and found a linear relationship between log nuclear volume and log dose. The following equations hold for 6 species: log(nuclear volume) = 4.42 - 0.82 log(dose) and log(nuclear volume) = 1.66 + 0.66 log(DNA content). If all the nuclear DNA is distributed in two peripheral zones, the following also hold: 2 log(nuclear surface area) = 1.33 log(nuclear volume) = 2.21 + 0.88 log(DNA content) = 5.88 - 1.09 log(dose). For the 23 species, the equation 2 log(nuclear surface area) = 5.41 - 0.97 log(dose) was obtained. All the slopes are close to the expected value of 1.00. (D.L.C.)
Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases
Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.
2007-01-01
The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
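The vector-space operations the paper defines have direct probabilistic readings that can be checked in a couple of lines (values illustrative):

```r
# Sketch: addition and scalar multiplication in LogOdds space.
logit     <- function(p) log(p / (1 - p))
inv_logit <- function(t) 1 / (1 + exp(-t))

p1 <- 0.7; p2 <- 0.4

# Vector addition = normalized product of the two probabilities:
inv_logit(logit(p1) + logit(p2))  # 0.609 = p1*p2 / (p1*p2 + (1-p1)*(1-p2))

# Scalar multiplication = powering of the odds:
inv_logit(2 * logit(p1))          # 0.845 = p1^2 / (p1^2 + (1-p1)^2)
```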
NASA Technical Reports Server (NTRS)
MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.
2005-01-01
Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR, respectively, yield the best performance and avoid model discontinuity over day/night data boundaries.
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from different disorders such as Prader-Willi, DiGeorge, or autism, showing the best performance. Conclusion Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
VENVAL : a plywood mill cost accounting program
Henry Spelter
1991-01-01
This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...
Estradiol and Inflammatory Markers in Older Men
Maggio, Marcello; Ceda, Gian Paolo; Lauretani, Fulvio; Bandinelli, Stefania; Metter, E. Jeffrey; Artoni, Andrea; Gatti, Elisa; Ruggiero, Carmelinda; Guralnik, Jack M.; Valenti, Giorgio; Ling, Shari M.; Basaria, Shehzad; Ferrucci, Luigi
2009-01-01
Background: Aging is characterized by a mild proinflammatory state. In older men, low testosterone levels have been associated with increasing levels of proinflammatory cytokines. It is still unclear whether estradiol (E2), which generally has biological activities complementary to testosterone, affects inflammation. Methods: We analyzed data obtained from 399 men aged 65–95 yr enrolled in the Invecchiare in Chianti study with complete data on body mass index (BMI), serum E2, testosterone, IL-6, soluble IL-6 receptor, TNF-α, IL-1 receptor antagonist, and C-reactive protein. The relationship between E2 and inflammatory markers was examined using multivariate linear models adjusted for age, BMI, smoking, physical activity, chronic disease, and total testosterone. Results: In age-adjusted analysis, log (E2) was positively associated with log (IL-6) (r = 0.19; P = 0.047), and the relationship was statistically significant (P = 0.032) after adjustments for age, BMI, smoking, physical activity, chronic disease, and serum testosterone levels. Log (E2) was not significantly associated with log (C-reactive protein), log (soluble IL-6 receptor), or log (TNF-α) in both age-adjusted and fully adjusted analyses. Conclusions: In older men, E2 is weakly positively associated with IL-6, independent of testosterone and other confounders including BMI. PMID:19050054
Ma, Qiuyun; Jiao, Yan; Ren, Yiping
2017-01-01
In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total, 3,275 individuals were collected during six years (2008, 2011-2015). One generalized linear model, two simple linear models, and nine linear mixed effect models that applied effects from regions and/or years to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed effect model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178~0.0308, and the estimated exponent b was 2.917, with 95% confidence interval 2.731~2.945. Estimates for a and b with the random effects in intercept and coefficient from region and year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on parameters a and b, although the effects from years were much larger than those from regions. Except for the Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to the reference years 1960, 1986, 2005, 2007, 2008~2009, and 2010 revealed that the body shape of Yellow Croaker has become thinner in recent years. Furthermore, relative condition factors varied among months, years, regions, and lengths. The values of a and the relative condition factors decreased as environmental pollution worsened; therefore, length-weight relationships could serve as an indicator of environmental quality. Results from this study provide a basic description of the current condition of Yellow Croaker along the north coast of China.
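The best-fitting model class, log W = log a + b log L with random region and year effects on both parameters, can be sketched with lme4 (simulated data; the authors' software is not stated in the abstract):

```r
# Sketch: length-weight mixed model with random intercepts and slopes.
library(lme4)

set.seed(21)
n      <- 1000
region <- factor(sample(paste0("R", 1:6), n, replace = TRUE))
year   <- factor(sample(2008:2015, n, replace = TRUE))
L      <- runif(n, 5, 30)   # length, cm

u0r <- rnorm(6, 0, 0.10); u1r <- rnorm(6, 0, 0.03)   # region effects
u0y <- rnorm(8, 0, 0.15); u1y <- rnorm(8, 0, 0.05)   # year effects
logW <- log(0.019) + u0r[region] + u0y[year] +
        (2.92 + u1r[region] + u1y[year]) * log(L) + rnorm(n, 0, 0.10)

fit <- lmer(logW ~ log(L) + (log(L) | region) + (log(L) | year))
fixef(fit)   # intercept = log(a), slope = b; exp(intercept) recovers a
```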
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. MLEs were obtained successfully from nonpostmortem data for mixed-Weibull distributions with up to 14 parameters (a five-subpopulation mixed Weibull). Numerical examples indicate that some of the log-likelihood functions of mixed-Weibull distributions have multiple local maxima; the algorithm should therefore be started from several initial guesses of the parameter set.
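A minimal EM sketch for a Weibull mixture in Python illustrates the two alternating steps. For brevity it assumes complete (uncensored) data, whereas the paper handles censoring, and the M-step here is solved numerically with scipy rather than in closed form:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def em_weibull_mixture(t, k=2, iters=50, seed=0):
    """EM for a k-component Weibull mixture on uncensored data (sketch)."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                               # mixing weights
    shapes = rng.uniform(0.5, 3.0, k)
    scales = np.quantile(t, (np.arange(k) + 1) / (k + 1))
    for _ in range(iters):
        # E-step: posterior membership probabilities of each observation
        dens = np.array([w[j] * weibull_min.pdf(t, shapes[j], scale=scales[j])
                         for j in range(k)])
        r = dens / dens.sum(axis=0)
        # M-step: weights in closed form; Weibull parameters numerically
        w = r.mean(axis=1)
        for j in range(k):
            def nll(p, rj=r[j]):   # responsibility-weighted negative log-lik.
                return -(rj * weibull_min.logpdf(t, np.exp(p[0]),
                                                 scale=np.exp(p[1]))).sum()
            res = minimize(nll, np.log([shapes[j], scales[j]]),
                           method="Nelder-Mead")
            shapes[j], scales[j] = np.exp(res.x)
    return w, shapes, scales

t = np.concatenate([weibull_min.rvs(1.2, scale=100, size=300, random_state=1),
                    weibull_min.rvs(3.0, scale=500, size=300, random_state=2)])
print(em_weibull_mixture(t))
```

As the abstract notes, the weighted log-likelihood cannot decrease across iterations, but runs from several starting points are advisable because of local maxima.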
Song, Lijie; Aryana, Kayanush J
2014-10-01
For manufacture of commercial yogurt powder, yogurt has to go through a drying process, which substantially lowers the yogurt culture counts, so the potential health benefits of the yogurt culture bacteria are reduced. Also, upon reconstitution, commercial yogurt powder does not taste like yogurt and has an off-flavor. The objective was to study the microbial, physicochemical, and sensory characteristics of reconstituted yogurt from yogurt cultured milk powder (YCMP) mix and reconstituted yogurt from commercial yogurt powder (CYP). The CYP reconstituted yogurt was the control and YCMP mix reconstituted yogurt was the treatment. Microbial and physicochemical characteristics of the CYP reconstituted yogurt and YCMP mix reconstituted yogurt were analyzed daily for the first week and then weekly for a period of 8 wk. Sensory consumer testing of CYP reconstituted yogurt and YCMP mix reconstituted yogurt was conducted with 100 consumers. At 56 d, YCMP mix reconstituted yogurt had 5 log cfu/mL higher counts of Streptococcus thermophilus than the control (CYP reconstituted yogurt). Also, Lactobacillus bulgaricus counts of YCMP mix reconstituted yogurt were 6.55 log cfu/mL at 28 d and were 5.35 log cfu/mL at 56 d, whereas the CYP reconstituted yogurt from 28 d onwards had a count of <10 cfu/mL. The YCMP mix reconstituted yogurt also had significantly higher apparent viscosity and sensory scores for appearance, color, aroma, taste, thickness, overall liking, consumer acceptability, and purchase intent than CYP reconstituted yogurt. Overall, YCMP mix reconstituted yogurt had more desirable characteristics than CYP reconstituted yogurt. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
A comparison of methods for estimating the random effects distribution of a linear mixed model.
Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert
2010-12-01
This article reviews various recently suggested approaches to estimate the random effects distribution in a linear mixed model: (1) the smoothing by roughening approach of Shen and Louis, (2) the semi-non-parametric approach of Zhang and Davidian, (3) the heterogeneity model of Verbeke and Lesaffre, and (4) the flexible approach of Ghidey et al. These four approaches are compared via an extensive simulation study. We conclude that, for the considered cases, the approach of Ghidey et al. often has the smallest integrated mean squared error for estimating the random effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.
Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of unmixing techniques when coarse spatial resolution data are used for global studies.
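Constrained least squares unmixing solves, per pixel, min ||Mf − r||² subject to f ≥ 0 and Σf = 1, where the columns of M are endmember spectra. A sketch in Python with scipy (hypothetical three-band endmembers; the heavily weighted sum-to-one row is a common way to impose the equality constraint with a nonnegative solver):

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(endmembers, pixel, delta=1e3):
    """Fully constrained least squares: nonnegative fractions summing to one.
    The sum-to-one constraint is enforced via a heavily weighted extra row."""
    M = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    r = np.append(pixel, delta)
    f, _ = nnls(M, r)
    return f

# Hypothetical 3-band reflectances; columns: vegetation, soil, shade
E = np.array([[0.05, 0.30, 0.20],
              [0.40, 0.45, 0.10],
              [0.30, 0.60, 0.05]])
pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]   # synthetic mixture
print(fcls_unmix(E, pixel))                             # ~ [0.5, 0.3, 0.2]
```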
McDonald, S A; Hutchinson, S J; Schnier, C; McLeod, A; Goldberg, D J
2014-01-01
In countries maintaining national hepatitis C virus (HCV) surveillance systems, a substantial proportion of individuals report no risk factors for infection. Our goal was to estimate the proportion of diagnosed HCV antibody-positive persons in Scotland (1991-2010) who probably acquired infection through injecting drug use (IDU), by combining data on IDU risk from four linked data sources using log-linear capture-recapture methods. Of 25,521 HCV-diagnosed individuals, 14,836 (58%) reported IDU risk with their HCV diagnosis. Log-linear modelling estimated a further 2484 HCV-diagnosed individuals with IDU risk, giving an estimated prevalence of 83. Stratified analyses indicated variation across birth cohort, with estimated prevalence as low as 49% in persons born before 1960 and greater than 90% for those born since 1960. These findings provide public-health professionals with a more complete profile of Scotland's HCV-infected population in terms of transmission route, which is essential for targeting educational, prevention and treatment interventions.
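For intuition, the two-source version of the log-linear estimator has a closed form under independence (illustrative counts, not the Scottish data; the study's four-source analysis fits a Poisson log-linear model to the 15 observable overlap cells of the 2^4 table):

```python
# Two-source capture-recapture: n11 seen in both sources,
# n10 only in source A, n01 only in source B (made-up numbers).
n11, n10, n01 = 500, 1200, 800
n00_hat = n10 * n01 / n11          # unobserved cell under independence
N_hat = n11 + n10 + n01 + n00_hat  # estimated total population with the risk
print(f"estimated unobserved: {n00_hat:.0f}, estimated total: {N_hat:.0f}")
```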
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operational research and a mathematical method that supports scientific management. GAMS is an advanced simulation and optimization modeling language that combines large-scale mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP) and mixed-integer programming (MIP), with system simulation. In this paper, the optimal investment decision-making for generation is simulated and analyzed based on a linear programming model. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
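The same style of model can be sketched with any LP solver. A toy generation-investment LP in Python with scipy (all capacities, costs and the capacity-credit row are made-up illustrative numbers, not from the paper):

```python
from scipy.optimize import linprog

# Choose installed capacity x (MW) of coal, wind, gas to minimise total
# cost, meeting a 1000 MW peak with per-technology capacity credits and
# an 800 MW build limit on any single technology.
cost = [1.2, 1.8, 0.9]              # cost per MW, arbitrary units
A_ub = [[-1.0, -0.3, -1.0]]         # -(capacity credit) . x <= -peak
b_ub = [-1000.0]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 800)] * 3)
print(res.x, res.fun)               # e.g. [200, 0, 800] at cost 960
```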
Grumetto, Lucia; Russo, Giacomo; Barbato, Francesco
2016-08-01
The affinity indexes for phospholipids (log kW(IAM)) for 42 compounds were measured by high performance liquid chromatography (HPLC) on two different phospholipid-based stationary phases (immobilized artificial membrane, IAM), i.e., IAM.PC.MG and IAM.PC.DD2. The polar/electrostatic interaction forces between analytes and membrane phospholipids (Δlog kW(IAM)) were calculated as the differences between the experimental values of log kW(IAM) and those expected for isolipophilic neutral compounds having polar surface area (PSA) = 0. The values of passage through a porcine brain lipid extract (PBLE) artificial membrane for 36 of the 42 compounds considered, measured by the so-called PAMPA-BBB technique, were taken from the literature (P0(PAMPA-BBB)). The values of blood-brain barrier (BBB) passage measured in situ, P0(in situ), for 38 of the 42 compounds considered, also taken from the literature, represented the permeability of the neutral forms in "efflux minimized" rodent models. The present work aimed at verifying the soundness of Δlog kW(IAM) at describing the potential of passage through the BBB as compared to data achieved by the PAMPA-BBB technique. First, the values of log P0(PAMPA-BBB) (32 data points) were found to be significantly related to the n-octanol lipophilicity values of the neutral forms (log P(N)) (r(2) = 0.782), whereas no significant relationship (r(2) = 0.246) was found with the lipophilicity values of the mixtures of ionized and neutral forms existing at the experimental pH 7.4 (log D(7.4)), or with either log kW(IAM) or Δlog kW(IAM) values. log P0(PAMPA-BBB) related moderately to log P0(in situ) values (r(2) = 0.604). The latter did not correlate with either n-octanol lipophilicity indexes (log P(N) and log D(7.4)) or phospholipid affinity indexes (log kW(IAM)). In contrast, significant inverse linear relationships were observed between log P0(in situ) (38 data points) and Δlog kW(IAM) values for all the compounds except ibuprofen and chlorpromazine, which behaved as moderate outliers (r(2) = 0.656 and r(2) = 0.757 for values achieved on IAM.PC.MG and IAM.PC.DD2, respectively). Since log P0(in situ) values refer to the "intrinsic permeability" of the analytes regardless of their ionization degree, no correction of Δlog kW(IAM) values for ionization was needed. Furthermore, log P0(in situ) values were found to be roughly linearly related to log BB values (i.e., the logarithm of the ratio of brain concentration to blood concentration measured in vivo) for all the analytes except those predominantly present at the experimental pH 7.4 as anions. These results suggest that, at least for the data set considered, Δlog kW(IAM) parameters are more effective than log P0(PAMPA-BBB) at predicting log P0(in situ) values. Furthermore, ionization appears to affect the BBB passage of acids (yielding anions) differently, and much more markedly, than that of the other ionizable compounds.
Morin, Roger H.; Williams, Trevor; Henry, Stuart; Hansaraj, Dhiresh
2010-01-01
The Antarctic Drilling Program (ANDRILL) successfully drilled and cored a borehole, AND-1B, beneath the McMurdo Ice Shelf and into a flexural moat basin that surrounds Ross Island. Total drilling depth reached 1285 m below seafloor (mbsf) with 98 percent core recovery for the detailed study of glacier dynamics. With the goal of obtaining complementary information regarding heat flow and permeability, which is vital to understanding the nature of marine hydrogeologic systems, a succession of three temperature logs was recorded over a five-day span to monitor the gradual thermal recovery toward equilibrium conditions. These data were extrapolated to true, undisturbed temperatures, and they define a linear geothermal gradient of 76.7 K/km from the seafloor to 647 mbsf. Bulk thermal conductivities of the sedimentary rocks were derived from empirical mixing models and density measurements performed on core, and an average value of 1.5 W/mK ± 10 percent was determined. The corresponding estimate of heat flow at this site is 115 mW/m2. This value is relatively high but is consistent with other elevated heat-flow data associated with the Erebus Volcanic Province. Information regarding the origin and frequency of pathways for subsurface fluid flow is gleaned from drillers' records, complementary geophysical logs, and core descriptions. Only two prominent permeable zones are identified and these correspond to two markedly different features within the rift basin; one is a distinct lithostratigraphic subunit consisting of a thin lava flow and the other is a heavily fractured interval within a single thick subunit.
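The reported heat flow is simply Fourier's law applied to the stated gradient and conductivity (a quick consistency check of the abstract's numbers):

```latex
q = k \frac{dT}{dz}
  = 1.5\ \mathrm{W\,m^{-1}\,K^{-1}} \times 0.0767\ \mathrm{K\,m^{-1}}
  \approx 0.115\ \mathrm{W\,m^{-2}} = 115\ \mathrm{mW\,m^{-2}}
```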
Paule-Mercado, M A; Ventura, J S; Memon, S A; Jahng, D; Kang, J-H; Lee, C-H
2016-04-15
While urban runoff is increasingly being studied as a source of fecal indicator bacteria (FIB), less is known about the occurrence of FIB in watersheds with mixed land use and ongoing land use and land cover (LULC) change. In this study, Escherichia coli (EC) and fecal streptococcus (FS) were monitored from 2012 to 2013 in agricultural, mixed and urban LULC and analyzed according to the most probable number (MPN). Pearson correlation was used to determine the relationship between FIB and environmental parameters (physicochemical and hydrometeorological). Multiple linear regression (MLR) was used to identify the significant parameters that affect the FIB concentrations and to predict the response of FIB to LULC change. Overall, the FIB concentrations were higher in urban LULC (EC=3.33-7.39; FS=3.30-7.36 log10 MPN/100 mL), possibly because of runoff from a commercial market and 100% impervious cover (IC), and during the early-summer season, reflecting greater persistence and growth of FIB in a warmer environment. During intra-event sampling, however, the FIB concentrations varied according to site condition. Anthropogenic activities and IC influenced the correlation between the FIB concentrations and environmental parameters. Stormwater temperature (TEMP), turbidity, and TSS correlated positively with the FIB concentrations (p<0.01) as IC increased, implying an accumulation of bacterial sources from urban activities. TEMP, BOD5, turbidity, TSS, and antecedent dry days (ADD) were the most significant explanatory variables for FIB in the MLR, possibly because they promote FIB growth and survival. The models fit the observed FIB concentrations well (EC: R(2)=0.71-0.85, NSE=0.72-0.86; FS: R(2)=0.65-0.83, NSE=0.66-0.84), and the FIB concentrations are predicted to increase with urbanization. These findings will therefore help in stormwater monitoring strategies, in designing best management practices for FIB removal, and as input data for stormwater models. Copyright © 2016 Elsevier B.V. All rights reserved.
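A regression of this shape is straightforward to reproduce. A sketch with statsmodels on simulated storm-event data (all variable names and coefficients are invented placeholders, not the study's values):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "temp_c": rng.uniform(10, 28, n),            # stormwater temperature
    "turbidity_ntu": rng.lognormal(3, 0.6, n),
    "tss_mgL": rng.lognormal(4, 0.5, n),
    "bod5_mgL": rng.lognormal(1.5, 0.4, n),
    "add_days": rng.integers(1, 15, n),          # antecedent dry days
})
df["log_ec"] = (0.08 * df["temp_c"] + 0.4 * np.log10(df["turbidity_ntu"])
                + 0.05 * df["add_days"] + rng.normal(3, 0.3, n))

fit = smf.ols("log_ec ~ temp_c + np.log10(turbidity_ntu) + np.log10(tss_mgL)"
              " + bod5_mgL + add_days", data=df).fit()
print(fit.summary())    # coefficients, R-squared for the log10 MPN response
```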
Fast Mix Table Construction for Material Discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Seth R
2013-01-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a "mix table," which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels × log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
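A sketch of the idea: deduplicate voxel compositions by keying a table on a quantized volume-fraction vector. A hash map (as below) gives expected O(1) lookups; the paper's O(V log M) bound corresponds to an ordered map. The tolerance and data layout are illustrative, not ADVANTG's actual implementation:

```python
def build_mix_table(voxel_fractions, tol=1e-6):
    """Map each voxel's material volume fractions to a shared mixture ID."""
    mix_table = {}        # key: quantized fraction tuple -> mixture id
    voxel_to_mix = []
    for fracs in voxel_fractions:
        key = tuple(round(f / tol) for f in fracs)   # quantize to dedupe
        if key not in mix_table:
            mix_table[key] = len(mix_table)          # new mixture
        voxel_to_mix.append(mix_table[key])
    return mix_table, voxel_to_mix

voxels = [(0.5, 0.5, 0.0), (0.5, 0.5, 0.0), (0.2, 0.0, 0.8)]
table, ids = build_mix_table(voxels)
print(len(table), ids)    # 2 distinct mixtures, voxel ids [0, 0, 1]
```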
Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco
2017-10-01
The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably from the PFU counts determined by the ISO method after only 3 hours of incubation. When the number of plaques detected after 3 hours was between 4 and 26 PFU, the linear fit was 1.48 × (3 h count) + 1.97; for values >26 PFU, the fit was 1.18 × (3 h count) + 2.95. If fewer than 4 PFU were detected after 3 hours, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
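The two fitted lines are trivial to apply; a small helper using the thresholds and coefficients quoted in the abstract:

```python
def predict_18h_pfu(count_3h):
    """Predict the standard (overnight) PFU count from a 3 h reading,
    using the paper's two fitted linear models."""
    if count_3h < 4:
        raise ValueError("fewer than 4 PFU at 3 h: incubate (18 +/- 3) h")
    if count_3h <= 26:
        return 1.48 * count_3h + 1.97
    return 1.18 * count_3h + 2.95

print(predict_18h_pfu(10), predict_18h_pfu(50))   # 16.77, 61.95
```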
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2016-01-15
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and imposing no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Zhao, H.; Hao, Y.; Liu, X.; Hou, M.; Zhao, X.
2018-04-01
Hyperspectral remote sensing is a completely non-invasive technology for the measurement of cultural relics, and has been successfully applied to the identification and analysis of pigments in Chinese historical paintings. Although mixed pigments are very common in Chinese historical paintings, the quantitative analysis of pigment mixtures in ancient paintings remains unsolved. In this research, we took two typical mineral pigments, vermilion and stone yellow, as examples, prepared precisely proportioned mixed samples of the two pigments, and measured their spectra in the laboratory. For the mixed spectra, both the fully constrained least squares (FCLS) method and derivative of ratio spectroscopy (DRS) were performed. Experimental results showed that the mixed spectra of vermilion and stone yellow had strongly nonlinear mixing characteristics, but at some bands linear unmixing could also achieve satisfactory results. DRS using strongly linear bands can reach much higher accuracy than FCLS using all bands.
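A sketch of the derivative-of-ratio idea for a two-endmember linear mixture, assuming the textbook formulation (synthetic spectra, not the measured pigment data): for m = a·s1 + (1−a)·s2, dividing by one endmember turns its abundance into an additive constant, which differentiation with respect to wavelength removes.

```python
import numpy as np

wl = np.linspace(400, 1000, 301)                 # wavelength grid (nm)
s1 = 0.2 + 0.5 / (1 + np.exp(-(wl - 600) / 30))  # vermilion-like red edge
s2 = 0.1 + 0.6 / (1 + np.exp(-(wl - 500) / 40))  # stone-yellow-like edge
a_true = 0.7
m = a_true * s1 + (1 - a_true) * s2              # linear mixture

# Ratio spectrum: m/s1 = a + (1-a)*(s2/s1); the constant a drops out
# under differentiation, leaving (1-a) as a ratio of derivatives.
num = np.gradient(m / s1, wl)
den = np.gradient(s2 / s1, wl)
band = np.argmax(np.abs(den))                    # band with strong signal
print("estimated abundance of s1:", 1 - num[band] / den[band])   # ~0.7
```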
Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX
2015-07-01
exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of ... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear ... Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs
Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data
Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2017-01-01
Purpose: To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods: We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results: When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI, −0.03 to 0.32 D; P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D; P=0.03). Standard regression for visual field data from both eyes provided biased (generally underestimated) standard errors and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion: In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
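Both modelling routes are available outside SAS as well. A sketch in Python with statsmodels on simulated paired-eye data (column names and effect sizes are illustrative, loosely mirroring the CNV example):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80                                             # patients, two eyes each
patient = np.repeat(np.arange(n), 2)
cnv = np.tile([1, 0], n)                           # CNV eye vs fellow eye
subj = np.repeat(rng.normal(0, 1.0, n), 2)         # shared between-eye term
refraction = 0.15 * cnv + subj + rng.normal(0, 0.8, 2 * n)
df = pd.DataFrame({"patient": patient, "cnv": cnv, "refraction": refraction})

# Marginal model: GEE with exchangeable working correlation between eyes
gee = smf.gee("refraction ~ cnv", groups="patient", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())

# Mixed-effects alternative: random intercept per patient
mlm = smf.mixedlm("refraction ~ cnv", df, groups=df["patient"]).fit()
print(mlm.summary())
```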
Linear models for assessing mechanisms of sperm competition: the trouble with transformations.
Eggert, Anne-Katrin; Reinhardt, Klaus; Sakaluk, Scott K
2003-01-01
Although sperm competition is a pervasive selective force shaping the reproductive tactics of males, the mechanisms underlying different patterns of sperm precedence remain obscure. Parker et al. (1990) developed a series of linear models designed to identify two of the more basic mechanisms: sperm lotteries and sperm displacement; the models can be tested experimentally by manipulating the relative numbers of sperm transferred by rival males and determining the paternity of offspring. Here we show that tests of the model derived for sperm lotteries can result in misleading inferences about the underlying mechanism of sperm precedence because the required inverse transformations may lead to a violation of fundamental assumptions of linear regression. We show that this problem can be remedied by reformulating the model using the actual numbers of offspring sired by each male, and log-transforming both sides of the resultant equation. Reassessment of data from a previous study (Sakaluk and Eggert 1996) using the corrected version of the model revealed that we should not have excluded a simple sperm lottery as a possible mechanism of sperm competition in decorated crickets, Gryllodes sigillatus.
Kipka, Undine; Di Toro, Dominic M
2011-09-01
Predicting the association of contaminants with both particulate and dissolved organic matter is critical in determining the fate and bioavailability of chemicals in environmental risk assessment. To date, the association of a contaminant with particulate organic matter is considered in many multimedia transport models, but the effect of dissolved organic matter is typically ignored due to a lack of either reliable models or experimental data. The partition coefficient to dissolved organic carbon (K(DOC)) may be used to estimate the fraction of a contaminant that is associated with dissolved organic matter. Models relating K(DOC) to the octanol-water partition coefficient (K(OW)) have not been successful for many types of dissolved organic carbon in the environment. Instead, linear solvation energy relationships are proposed to model the association of chemicals with dissolved organic matter. However, more chemically diverse K(DOC) data are needed to produce a more robust model. For humic acid dissolved organic carbon, the linear solvation energy relationship predicts log K(DOC) with a root mean square error of 0.43. Copyright © 2011 SETAC.
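Linear solvation energy relationships of this kind are usually written in the Abraham solvation-parameter form, shown here as a generic sketch (the paper's fitted coefficients are not reproduced):

```latex
\log K_{DOC} = c + eE + sS + aA + bB + vV
```

where E is the excess molar refraction, S the dipolarity/polarizability, A and B the hydrogen-bond acidity and basicity, and V the McGowan characteristic volume of the solute; c, e, s, a, b, v are regression coefficients characterizing the organic-carbon phase.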
Nguyen, N H; Whatmore, P; Miller, A; Knibb, W
2016-02-01
The main aim of this study was to estimate the heritability of four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The major deformities observed in 480 individuals from 22 families at Clean Seas Tuna Ltd were lower-jaw deformity, nasal erosion, deformed operculum and skinny fish. They were recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating simple Pearson correlations of the breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale from 0.01 to 0.23 (SE 0.03-0.16). When the estimates on the underlying liability scale were transformed to the observed (0, 1) scale, they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). The estimated genetic correlations of body and carcass traits with the other deformity measures were not significant owing to their relatively high standard errors. Our results show that there are prospects for genetic selection to improve deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index of practical selective breeding programmes, owing to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
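Converting heritability between the observed binary scale and the underlying liability scale is usually done with the Dempster-Lerner transformation (the standard formula for this conversion, assumed here to be the one applied):

```latex
h^2_{\text{liability}} = h^2_{\text{observed}} \,\frac{p(1-p)}{z^2}
```

where p is the incidence of the trait and z is the height of the standard normal density at the threshold corresponding to p.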
Multilaboratory comparison of hepatitis C virus viral load assays.
Caliendo, A M; Valsamakis, A; Zhou, Y; Yen-Lieberman, B; Andersen, J; Young, S; Ferreira-Gonzalez, A; Tsongalis, G J; Pyles, R; Bremer, J W; Lurain, N S
2006-05-01
We report a multilaboratory evaluation of hepatitis C virus (HCV) viral load assays to determine their linear range, reproducibility, subtype detection, and agreement. A panel of HCV RNA samples ranging in nominal concentration from 1.0 to 7.0 log10 IU/ml was constructed by diluting a clinical specimen (genotype 1b). Replicates of the panel were tested in multiple laboratories using the Abbott TaqMan analyte-specific reagent (Abbott reverse transcription-PCR [RT-PCR]), Roche TaqMan RUO (Roche RT-PCR), Roche Amplicor Monitor HCV 2.0 (Roche Monitor), and Bayer VERSANT HCV RNA 3.0 (Bayer bDNA) assays. Bayer bDNA-negative specimens were tested reflexively using the Bayer VERSANT HCV RNA qualitative assay (Bayer TMA). Abbott RT-PCR and Roche RT-PCR detected all 28 replicates with a concentration of 1.0 log10 IU/ml and were linear to 7.0 log10 IU/ml. Roche Monitor and Bayer bDNA detected 27 out of 28 and 13 out of 28 replicates, respectively, of 3.0 log10 IU/ml. Bayer TMA detected all seven replicates with 1.0 log10 IU/ml. Bayer bDNA was the most reproducible of the four assays. The mean viral load values for panel members in the linear ranges of the assays were within 0.5 log10 for the different tests. Eighty-nine clinical specimens of various genotypes (1 through 4) were tested in the Bayer bDNA, Abbott RT-PCR, and Roche RT-PCR assays. For Abbott RT-PCR, mean viral load values were 0.61 to 0.96 log10 greater than the values for the Bayer bDNA assay for genotype 1, 2, or 3 samples and 0.08 log10 greater for genotype 4 specimens. The Roche RT-PCR assay gave mean viral load values that were 0.28 to 0.82 log10 greater than those obtained with the Bayer bDNA assay for genotype 1, 2, and 3 samples. However, for genotype 4 samples the mean viral load value obtained with the Roche RT-PCR assay was, on average, 0.15 log10 lower than that of the Bayer bDNA assay. Based on these data, we conclude that the sensitivity and linear range of the Abbott and Roche RT-PCR assays enable them to be used for HCV diagnostics and therapeutic monitoring. However, the differences in the viral load values obtained with the different assays underscore the importance of using one assay when monitoring response to therapy.
Fatigue shifts and scatters heart rate variability in elite endurance athletes.
Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire
2013-01-01
This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in a 'fatigue' or in a 'no-fatigue' state under 'real life' conditions. 57 elite Nordic skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was identified using a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms(2) and normalized units (nu)] and fatigue status. Variables not distributed normally were transformed by taking their common logarithm (log10). 172 trials were identified as being in a 'fatigue' state and 891 in a 'no-fatigue' state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27±0.61 bpm), logTPSU (-0.36±0.04), logLFSU (-0.27±0.04), logHFSU (-0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (-9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (-0.28±0.03), logLFST (-0.29±0.03), logHFST (-0.32±0.04). Also, the intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). HRV was significantly lower in 'fatigue' vs. 'no-fatigue', but accompanied by larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from the no-fatigue state, possibly reflecting different fatigue-induced alterations of the HRV pattern.
Knot probabilities in random diagrams
NASA Astrophysics Data System (ADS)
Cantarella, Jason; Chapman, Harrison; Mastin, Matt
2016-10-01
We consider a natural model of random knotting—choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10 crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams which are unknots for any assignment of over/under information at crossings. The data shows a roughly linear relationship between the log of knot type probability and the log of the frequency rank of the knot type, analogous to Zipf’s law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.
NASA Astrophysics Data System (ADS)
Jarzyna, Jadwiga A.; Krakowska, Paulina I.; Puskarczyk, Edyta; Wawrzyniak-Guz, Kamila; Zych, Marcin
2018-03-01
More than 70 rock samples from so-called sweet spots, i.e. the Ordovician Sa Formation and the Silurian Ja Member of the Pa Formation from the Baltic Basin (North Poland), were examined in the laboratory to determine bulk and grain density, total and effective/dynamic porosity, absolute permeability, pore diameter size, total surface area, and natural radioactivity. Results of pyrolysis, i.e., TOC (Total Organic Carbon) together with S1 and S2, parameters used to determine the hydrocarbon generation potential of rocks, were also considered. Elemental composition from chemical analyses and mineral composition from XRD measurements were also included. SCAL analysis, NMR experiments, and Pressure Decay Permeability measurements, together with water immersion porosimetry and the adsorption/desorption of nitrogen vapors method, were carried out along with a comprehensive interpretation of the outcomes. Simple and multiple linear statistical regressions were used to recognize mutual relationships between parameters. The observed correlations, and in some cases the considerable dispersion of the data and discrepancies in the property values obtained from different methods, were the basis for building a shale gas rock model for well logging interpretation. The model was verified by the results of Monte Carlo modelling of the spectral neutron-gamma log response in comparison with GEM log results.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have been developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated, so statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open source software R, but they can only be accessed through the command line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based or Graphical User Interfaces (GUI). We develop, using the Shiny framework, a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to run and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM and HGLM gave very similar results.
Statistics of Advective Stretching in Three-dimensional Incompressible Flows
NASA Astrophysics Data System (ADS)
Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.
2009-09-01
We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of the Finite Time Lyapunov Exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalised cumulative stretching (log S) for a globally chaotic flow, with spatially heterogeneous stretching behavior, is not Gaussian, and that the coefficient of variation of the distribution does not decrease with time as t^{-1/2}. However, stretching becomes exponential (log S ~ t) and the probability density function of log S becomes Gaussian when the time dependence of the flow and its three-dimensionality are increased to make the stretching behaviour of the flow more spatially uniform. We term these behaviors weak and strong chaotic mixing, respectively. We find that for strong chaotic mixing, the coefficient of variation of the Gaussian distribution decreases with time as t^{-1/2}. This behavior is consistent with a random multiplicative stretching process.
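The random multiplicative picture is easy to check numerically: if log S is a sum of independent increments, its mean grows like t and its standard deviation like sqrt(t), so the coefficient of variation decays as t^(-1/2). A small sketch with made-up increment statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 20000, 200
# Random multiplicative stretching: S(t) is a product of iid positive
# factors, so log S is a sum of iid terms (Gaussian by the CLT).
log_factors = rng.normal(0.05, 0.3, size=(n, steps))
logS = np.cumsum(log_factors, axis=1)
for t in (50, 100, 200):
    cv = logS[:, t - 1].std() / logS[:, t - 1].mean()
    print(f"t={t:4d}  CV={cv:.3f}  CV*sqrt(t)={cv * np.sqrt(t):.3f}")
# CV*sqrt(t) stays roughly constant, i.e. CV ~ t^(-1/2), as in the
# strong chaotic mixing regime described above.
```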
Salmonella Inactivation During Extrusion of an Oat Flour Model Food.
Anderson, Nathan M; Keller, Susanne E; Mishra, Niharika; Pickens, Shannon; Gradl, Dana; Hartter, Tim; Rokey, Galen; Dohl, Christopher; Plattner, Brian; Chirtel, Stuart; Grasso-Kelley, Elizabeth M
2017-03-01
Little research exists on Salmonella inactivation during extrusion processing, yet many outbreaks associated with low water activity foods since 2006 were linked to extruded foods. The aim of this research was to study Salmonella inactivation during extrusion of a model cereal product. Oat flour was inoculated with Salmonella enterica serovar Agona, an outbreak strain isolated from puffed cereals, and processed using a single-screw extruder at a feed rate of 75 kg/h and a screw speed of 500 rpm. Extrudate samples were collected from the barrel outlet in sterile bags and immediately cooled in an ice-water bath. Populations were determined using standard plate count methods or a modified most probable number method when populations were low. Reductions in population were determined and analyzed using a general linear model. The regression model obtained for the response surface tested was log(N_R/N_0) = 20.50 + 0.82T − 141.16a_w − 0.0039T² + 87.91a_w² (R² = 0.69). The model showed significant (p < 0.05) linear and quadratic effects of a_w and temperature and enabled an assessment of critical control parameters. Reductions of 0.67 ± 0.14 to 7.34 ± 0.02 log CFU/g were observed over the ranges of a_w (0.72 to 0.96) and temperature (65 to 100 °C) tested. Processing conditions above 82 °C and 0.89 a_w achieved on average greater than a 5-log reduction of Salmonella. Results indicate that extrusion is an effective means of reducing Salmonella, as most processes commonly employed to produce cereals and other low water activity foods exceed these parameters. Thus, contamination of an extruded food product would most likely occur postprocessing as a result of environmental contamination or through the addition of coatings and flavorings. © 2017 Institute of Food Technologists®.
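The fitted surface can be evaluated directly to reproduce the quoted operating threshold (a sketch; coefficients as printed in the abstract):

```python
def log_reduction(T, aw):
    """Fitted response surface from the abstract (T in deg C, aw water
    activity); returns the predicted log10 reduction."""
    return 20.50 + 0.82 * T - 141.16 * aw - 0.0039 * T**2 + 87.91 * aw**2

print(log_reduction(82, 0.89))   # ~5.5, consistent with the >5-log claim
print(log_reduction(65, 0.96))   # ~2.8 at the mild end of the tested range
```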
Jenkins, Marion W; Tiwari, Sangam K; Darby, Jeannie
2011-11-15
A two-factor three-block experimental design was developed to permit rigorous evaluation and modeling of the main effects and interactions of sand size (d(10) of 0.17 and 0.52 mm) and hydraulic head (10, 20, and 30 cm) on removal of fecal coliform (FC) bacteria, MS2 bacteriophage virus, and turbidity, under two batch operating modes ('long' and 'short') in intermittent slow sand filters (ISSFs). Long operation involved an overnight pause time between feeding of two successive 20 L batches (16 h average batch residence time (RT)). Short operation involved no pause between two 20 L batch feeds (5 h average batch RT). Conditions tested were representative of those encountered in developing country field settings. Over a ten week period, the 18 experimental filters were fed river water augmented with wastewater (influent turbidity of 5.4-58.6 NTU) and maintained with the wet harrowing method. Linear mixed modeling allowed systematic estimates of the independent marginal effects of each independent variable on each performance outcome of interest while controlling for the effects of variations in a batch's actual residence time, days since maintenance, and influent turbidity. This is the first study in which simultaneous measurement of bacteria, viruses and turbidity removal at the batch level over an extended duration has been undertaken with a large number of replicate units to permit rigorous modeling of ISSF performance variability within and across a range of likely filter design configurations and operating conditions. On average, the experimental filters removed 1.40 log fecal coliform CFU (SD 0.40 log, N=249), 0.54 log MS2 PFU (SD 0.42 log, N=245) and 89.0 percent turbidity (SD 6.9 percent, N=263). Effluent turbidity averaged 1.24 NTU (SD 0.53 NTU, N=263) and always remained below 3 NTU. Under the best performing design configuration and operating mode (fine sand, 10 cm head, long operation, initial HLR of 0.01-0.03 m/h), mean 1.82 log removal of bacteria (98.5%) and mean 0.94 log removal of MS2 viruses (88.5%) were achieved. Results point to new recommendations regarding filter design, manufacture, and operation for implementing ISSFs in local settings in developing countries. Sand size emerged as a critical design factor for performance. A single layer of river sand used in this investigation demonstrated removals comparable to those reported for 2 layers of crushed sand. Pause time and increased residence time each emerged as highly beneficial for improving removal performance on all four outcomes. A relatively large and significant negative effect of influent turbidity on MS2 viral removal in the ISSF was measured in parallel with a much weaker positive effect of influent turbidity on FC bacterial removal. Disturbance of the schmutzdecke by wet harrowing showed no effect on virus removal and a modest reductive effect on bacterial and turbidity removal as measured 7 days or more after the disturbance. For existing coarse sand ISSFs, this research indicates that a reduction in batch feed volume, effectively reducing the operating head and increasing the pore:batch volume ratio, could improve their removal performance by increasing batch residence time. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zozulya, A. A.
1988-12-01
A theoretical model is constructed for four-wave mixing in a photorefractive crystal where a transmission grating is formed by the drift-diffusion nonlinearity mechanism in the absence of an external electrostatic field and the response of the medium is nonlinear in respect of the modulation parameter. A comparison is made with a model in which the response of the medium is linear in respect of the modulation parameter. Theoretical models of four-wave and two-wave mixing are also compared with experiments.
Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki
2014-05-01
A single cell can form a colony, and ionizing irradiation has long been known to reduce such a cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on a log-log plot, and that Monte Carlo simulation using the RCD probability estimated from such a linear relationship reproduces the experimentally determined surviving fraction and the relative biological effectiveness (RBE) well. We identified a short-term phase and a long-term phase for the persistent RCD following carbon-ion irradiation, which were similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to the determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
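A minimal Galton-Watson sketch of the abortive-colony idea (the per-generation death probability and generation cap are illustrative, not the paper's fitted RCD probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)

def colony_size(p_rcd, max_gen=16):
    """Each cell either undergoes reproductive cell death (prob. p_rcd)
    or divides; returns (total cells produced, cells at final generation)."""
    cells, total = 1, 1
    for _ in range(max_gen):
        survivors = rng.binomial(cells, 1 - p_rcd)
        cells = 2 * survivors          # surviving cells divide
        total += cells
        if cells == 0:
            break
    return total, cells

sizes = [colony_size(0.4) for _ in range(5000)]
abortive = [total for total, final in sizes if final == 0]
print(f"abortive fraction: {len(abortive) / len(sizes):.2f}")
# A log-log histogram of `abortive` can be compared with the linear
# relationship reported for the experimental abortive-colony distribution.
```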
Optimal sensor placement for control of a supersonic mixed-compression inlet with variable geometry
NASA Astrophysics Data System (ADS)
Moore, Kenneth Thomas
A method of using fluid dynamics models for the generation of models that are usable for control design and analysis is investigated. The problem considered is the control of the normal shock location in the VDC inlet, which is a mixed-compression, supersonic, variable-geometry inlet of a jet engine. A quasi-one-dimensional set of fluid equations incorporating bleed and moving walls is developed. An object-oriented environment is developed for simulation of flow systems under closed-loop control. A public interface between the controller and fluid classes is defined. A linear model representing the dynamics of the VDC inlet is developed from the finite difference equations, and its eigenstructure is analyzed. The order of this model is reduced using the square root balanced model reduction method to produce a reduced-order linear model that is suitable for control design and analysis tasks. A modification to this method that improves the accuracy of the reduced-order linear model for the purpose of sensor placement is presented and analyzed. The reduced-order linear model is used to develop a sensor placement method that quantifies, as a function of the sensor location, the ability of a sensor to provide information on the variable of interest for control. This method is used to develop a sensor placement metric for the VDC inlet. The reduced-order linear model is also used to design a closed-loop control system to control the shock position in the VDC inlet. The object-oriented simulation code is used to simulate the nonlinear fluid equations under closed-loop control.
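Square-root balanced truncation can be sketched directly with scipy (a generic stable test system, not the VDC inlet model; the method keeps the states with the largest Hankel singular values):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system to order r."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                        # Hankel singular values
    S_half = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S_half                       # projection matrices
    Ti = S_half @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

# Hypothetical 4-state stable system reduced to 2 states
A = np.diag([-1.0, -2.0, -50.0, -80.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
print(hsv)    # small trailing singular values justify the truncation
```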
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
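The weighted ℓ1 penalty is easy to try with off-the-shelf tools: rescaling each column of the design matrix by its weight turns a weighted Lasso into an ordinary one. A sketch of the adaptive Lasso with OLS pilot weights (sklearn assumed available; penalty level is illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))   # sparse truth
y = X @ beta + rng.normal(size=n)

# Adaptive Lasso: weights w_j ~ 1/|pilot estimate|, so strong signals
# are penalized less; fit an ordinary Lasso on rescaled columns.
beta_init = LinearRegression().fit(X, y).coef_
w = 1.0 / (np.abs(beta_init) + 1e-8)
lasso = Lasso(alpha=0.1).fit(X / w, y)
beta_adaptive = lasso.coef_ / w        # undo the column scaling
print(np.round(beta_adaptive, 2))
```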
Bioconcentration of lipophilic compounds by some aquatic organisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawker, D.W.; Connell, D.W.
1986-04-01
With nondegradable, lipophilic compounds having log P values ranging from 2 to 6, direct linear relationships have been found between the logarithms of the equilibrium bioconcentration factors, and also reciprocal clearance rate constants, and log P for daphnids and molluscs. These relationships permit calculation of the times required for equilibrium and significant bioconcentration of lipophilic chemicals. Compared with fish, these time periods are successively shorter for molluscs, then daphnids. The equilibrium biotic concentration was found to decrease with increasing chemical hydrophobicity for both molluscs and daphnids. Also, new linear relationships between the logarithm of the bioconcentration factor and log P were found for compounds not attaining equilibrium within finite exposure times.
Parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Amin-Javaheri, Masoud; Orin, David E.
1989-01-01
The development of an O(log₂ N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix; this results in O(log₂ N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log₂ N) algorithm is presented in both equation and graphic forms, which clearly show the parallelism inherent in the algorithm.
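Recursive doubling turns a first-order recurrence x_i = a_i·x_{i-1} + b_i into log₂ N rounds of pairwise composition of affine maps. A vectorised sketch (serial numpy code, but each while-iteration corresponds to one parallel level):

```python
import numpy as np

def recursive_doubling(a, b, x0=0.0):
    """Solve x[i] = a[i]*x[i-1] + b[i] in O(log2 N) doubling steps by
    composing affine maps f_i(x) = a_i*x + b_i with identity padding."""
    a = a.astype(float).copy()
    b = b.astype(float).copy()
    n, s = len(a), 1
    while s < n:
        a_prev = np.concatenate([np.ones(s), a[:-s]])    # shifted maps
        b_prev = np.concatenate([np.zeros(s), b[:-s]])
        a, b = a * a_prev, a * b_prev + b                # compose pairwise
        s *= 2
    return a * x0 + b        # x[i] for every i at once

a = np.array([0.5, 2.0, 1.0, 3.0])
b = np.array([1.0, 1.0, 2.0, 0.0])
print(recursive_doubling(a, b))    # [1, 3, 5, 15], matching serial evaluation
```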
Juliano, Pablo; Knoerzer, Kai; Fryer, Peter J; Versteeg, Cornelis
2009-01-01
High-pressure, high-temperature (HPHT) processing is effective for microbial spore inactivation using mild preheating, followed by rapid volumetric compression heating and cooling on pressure release, enabling much shorter processing times than conventional thermal processing for many food products. A computational thermal fluid dynamic (CTFD) model has been developed to model all processing steps, including the vertical pressure vessel, an internal polymeric carrier, and food packages in an axis-symmetric geometry. Heat transfer and fluid dynamic equations were coupled to four selected kinetic models for the inactivation of C. botulinum; the traditional first-order kinetic model, the Weibull model, an nth-order model, and a combined discrete log-linear nth-order model. The models were solved to compare the resulting microbial inactivation distributions. The initial temperature of the system was set to 90 degrees C and pressure was selected at 600 MPa, holding for 220 s, with a target temperature of 121 degrees C. A representation of the extent of microbial inactivation throughout all processing steps was obtained for each microbial model. Comparison of the models showed that the conventional thermal processing kinetics (not accounting for pressure) required shorter holding times to achieve a 12D reduction of C. botulinum spores than the other models. The temperature distribution inside the vessel resulted in a more uniform inactivation distribution when using a Weibull or an nth-order kinetics model than when using log-linear kinetics. The CTFD platform could illustrate the inactivation extent and uniformity provided by the microbial models. The platform is expected to be useful to evaluate models fitted into new C. botulinum inactivation data at varying conditions of pressure and temperature, as an aid for regulatory filing of the technology as well as in process and equipment design.
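The kinetic models named here have standard textbook forms, shown as a sketch (the paper's pressure- and temperature-dependent parameterizations are not reproduced):

```latex
\text{log-linear (first order): } \log_{10}\frac{N}{N_0} = -\frac{t}{D}, \qquad
\text{Weibull: } \log_{10}\frac{N}{N_0} = -\left(\frac{t}{\delta}\right)^{p}, \qquad
n\text{th order: } \frac{dN}{dt} = -k N^{n}
```

where D is the decimal reduction time, δ and p the Weibull scale and shape, and k and n the rate constant and reaction order.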
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
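The mixed model underlying these packages is usually written as follows (the standard GWAS formulation, not any single package's notation):

```latex
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \mathbf{e},
\qquad \mathbf{u} \sim N(\mathbf{0}, \mathbf{K}\sigma_g^2),
\qquad \mathbf{e} \sim N(\mathbf{0}, \mathbf{I}\sigma_e^2)
```

where y is the phenotype, X the fixed effects (including the tested marker), K the kinship matrix estimated from genome-wide markers, and σ²_g, σ²_e the genetic and residual variance components.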
Advantages and pitfalls in the application of mixed-model association methods.
Yang, Jian; Zaitlen, Noah A; Goddard, Michael E; Visscher, Peter M; Price, Alkes L
2014-02-01
Mixed linear models are emerging as a method of choice for conducting genetic association studies in humans and other organisms. The advantages of the mixed-linear-model association (MLMA) method include the prevention of false positive associations due to population or relatedness structure and an increase in power obtained through the application of a correction that is specific to this structure. An underappreciated point is that MLMA can also increase power in studies without sample structure by implicitly conditioning on associated loci other than the candidate locus. Numerous variations on the standard MLMA approach have recently been published, with a focus on reducing computational cost. These advances provide researchers applying MLMA methods with many options to choose from, but we caution that MLMA methods are still subject to potential pitfalls. Here we describe and quantify the advantages and pitfalls of MLMA methods as a function of study design and provide recommendations for the application of these methods in practical settings.
Xi, Zemin; Chen, Baoliang
2014-04-01
Removal of polycyclic aromatic hydrocarbons (PAHs), e.g., naphthalene, acenaphthene, phenanthrene and pyrene, from aqueous solution by raw and modified plant residues was investigated to develop low cost biosorbents for organic pollutant abatement. Bamboo wood, pine wood, pine needles and pine bark were selected as plant residues, and acid hydrolysis was used as a simple modification method. The raw and modified biosorbents were characterized by elemental analysis, Fourier transform infrared spectroscopy and scanning electron microscopy. The sorption isotherms of PAHs to raw biosorbents were apparently linear, and were dominated by a partitioning process. In comparison, the isotherms of the hydrolyzed biosorbents displayed nonlinearity, which was controlled by partitioning and a specific interaction mechanism. The sorption kinetic curves of PAHs to the raw and modified plant residues fit well with the pseudo second-order kinetics model. The sorption rates were faster for the raw biosorbents than for the corresponding hydrolyzed biosorbents, which was attributed to the latter having more condensed domains (i.e., exposed aromatic cores). Owing to the consumption of the amorphous cellulose component under acid hydrolysis, the sorption capability of the hydrolyzed biosorbents was notably enhanced, i.e., 6-18 fold for phenanthrene, 6-8 fold for naphthalene and pyrene and 5-8 fold for acenaphthene. The sorption coefficients (Kd) were negatively correlated with the polarity index [(O+N)/C], and positively correlated with the aromaticity of the biosorbents. For a given biosorbent, a positive linear correlation between logKoc and logKow for different PAHs was observed. Interestingly, the linear plots of logKoc-logKow were parallel for different biosorbents. These observations suggest that the raw and modified plant residues have great potential as biosorbents to remove PAHs from wastewater. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
de Andrés, Javier; Landajo, Manuel; Lorca, Pedro; Labra, Jose; Ordóñez, Patricia
Artificial neural networks have proven to be useful tools for solving financial analysis problems such as financial distress prediction and audit risk assessment. In this paper we focus on the performance of robust (least absolute deviation-based) neural networks in measuring the liquidity of firms. The problem of learning the bivariate relationship between the components (namely, current liabilities and current assets) of the so-called current ratio is analyzed, and the predictive performance of several modelling paradigms (namely, linear and log-linear regressions, classical ratios and neural networks) is compared. An empirical analysis is conducted on a representative database from the Spanish economy. Results indicate that classical ratio models are largely inadequate as a realistic description of the studied relationship, especially when used for predictive purposes. In a number of cases, especially when the analyzed firms are microenterprises, the linear specification is improved upon by the flexible non-linear structures provided by neural networks.
Infrared weak corrections to strongly interacting gauge boson scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciafaloni, Paolo; Urbano, Alfredo
2010-04-15
We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.
New method for calculating a mathematical expression for streamflow recession
Rutledge, Albert T.
1991-01-01
An empirical method has been devised to calculate the master recession curve, which is a mathematical expression for streamflow recession during times of negligible direct runoff. The method is based on the assumption that the storage-delay factor, which is the time per log cycle of streamflow recession, varies linearly with the logarithm of streamflow. The resulting master recession curve can be nonlinear. The method can be executed by a computer program that reads a data file of daily mean streamflow, then allows the user to select several near-linear segments of streamflow recession. The storage-delay factor for each segment is one of the coefficients of the equation that results from linear least-squares regression. Using results for each recession segment, a mathematical expression of the storage-delay factor as a function of the log of streamflow is determined by linear least-squares regression. The master recession curve, which is a second-order polynomial expression for time as a function of log of streamflow, is then derived using the coefficients of this function.
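The procedure lends itself to a compact implementation. The sketch below, with invented recession segments, follows the steps described above; in practice the segment selection would be interactive.

```python
# Sketch of the master-recession-curve construction. Each "segment" is a
# user-selected near-linear recession period, given as (days since segment
# start, daily mean streamflow). Data are invented.
import numpy as np

segments = [
    (np.array([0, 2, 4, 6, 8.0]), np.array([90, 74, 61, 50, 41.0])),
    (np.array([0, 3, 6, 9.0]),    np.array([30, 22, 16, 12.0])),
    (np.array([0, 4, 8, 12.0]),   np.array([8.0, 5.3, 3.5, 2.3])),
]

K, logQ_mid = [], []
for t, q in segments:
    slope = np.polyfit(np.log10(q), t, 1)[0]  # days per log cycle (negative)
    K.append(-slope)                          # segment storage-delay factor
    logQ_mid.append(np.log10(q).mean())

b, a = np.polyfit(logQ_mid, K, 1)             # K(logQ) = a + b*logQ

# Integrating dt/dlogQ = -(a + b*logQ) yields the master recession curve,
# a second-order polynomial for time as a function of log streamflow:
logQ0 = np.log10(90.0)                        # reference starting flow
days = lambda lq: a * (logQ0 - lq) + 0.5 * b * (logQ0**2 - lq**2)
print(f"days to recede from 90 to 10: {days(np.log10(10.0)):.1f}")
```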
Neurobehavioral Function in School-Age Children Exposed to Manganese in Drinking Water
Oulhote, Youssef; Mergler, Donna; Barbeau, Benoit; Bellinger, David C.; Bouffard, Thérèse; Brodeur, Marie-Ève; Saint-Amour, Dave; Legrand, Melissa; Sauvé, Sébastien
2014-01-01
Background: Manganese neurotoxicity is well documented in individuals occupationally exposed to airborne particulates, but few data are available on risks from drinking-water exposure. Objective: We examined associations of exposure from concentrations of manganese in water and hair with memory, attention, motor function, and parent- and teacher-reported hyperactive behaviors. Methods: We recruited 375 children and measured manganese in home tap water (MnW) and hair (MnH). We estimated manganese intake from water ingestion. Using structural equation modeling, we estimated associations between neurobehavioral functions and MnH, MnW, and manganese intake from water. We evaluated exposure–response relationships using generalized additive models. Results: After adjusting for potential confounders, a 1-SD increase in log10 MnH was associated with a significant difference of –24% (95% CI: –36, –12%) SD in memory and –25% (95% CI: –41, –9%) SD in attention. The relations between log10 MnH and poorer memory and attention were linear. A 1-SD increase in log10 MnW was associated with a significant difference of –14% (95% CI: –24, –4%) SD in memory, and this relation was nonlinear, with a steeper decline in performance at MnW > 100 μg/L. A 1-SD increase in log10 manganese intake from water was associated with a significant difference of –11% (95% CI: –21, –0.4%) SD in motor function. The relation between log10 manganese intake and poorer motor function was linear. There was no significant association between manganese exposure and hyperactivity. Conclusion: Exposure to manganese in water was associated with poorer neurobehavioral performances in children, even at low levels commonly encountered in North America. Citation: Oulhote Y, Mergler D, Barbeau B, Bellinger DC, Bouffard T, Brodeur ME, Saint-Amour D, Legrand M, Sauvé S, Bouchard MF. 2014. Neurobehavioral function in school-age children exposed to manganese in drinking water. Environ Health Perspect 122:1343–1350; http://dx.doi.org/10.1289/ehp.1307918 PMID:25260096
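The reporting scale used here (percent of an SD in the outcome per 1-SD increase in log10 exposure) can be reproduced with a simple standardized regression. The following is a crude stand-in for the study's structural equation models, with synthetic data and a single placeholder confounder.

```python
# Crude stand-in for the study's SEM analysis, shown only to make the
# reporting scale concrete: standardize log10 exposure and the outcome so
# the slope is the SD-difference per 1-SD exposure increase. Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 375
confounder = rng.normal(size=n)                  # placeholder adjustment set
log_mnw = rng.normal(size=n) + 0.3 * confounder  # log10 water manganese
memory = -0.14 * log_mnw + 0.5 * confounder + rng.normal(size=n)

z = lambda v: (v - v.mean()) / v.std()           # standardization helper
X = sm.add_constant(np.column_stack([z(log_mnw), confounder]))
fit = sm.OLS(z(memory), X).fit()
print(fit.params[1], fit.conf_int()[1])          # slope and its 95% CI
```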
Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu
2016-06-01
Octanol/water (KOW) and octanol/air (KOA) partition coefficients are two important physicochemical properties of organic substances. In current practice, KOW and KOA values of some polychlorinated biphenyls (PCBs) are measured using the generator-column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative, replacing or reducing experimental steps in the determination of KOW and KOA. In this paper, two different methods, i.e., multiple linear regression based on Dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log KOW and log KOA values of PCBs. The predictive ability of the developed models was validated using a test set, and the performances of all generated models were compared with those of three previously reported models. All results indicated that the proposed models were robust and satisfactory and can thus be used as alternative models for the rapid assessment of the KOW and KOA of PCBs. Copyright © 2016 Elsevier Inc. All rights reserved.
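A hedged outline of the MLR-with-external-validation workflow follows; the four "descriptors" are random placeholders (real ones would come from descriptor software such as Dragon), and 209 is simply the number of PCB congeners.

```python
# Sketch only: multiple linear regression QSPR with test-set validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(209, 4))                 # placeholder descriptor matrix
log_kow = X @ np.array([0.9, -0.4, 0.2, 0.1]) + rng.normal(0, 0.2, 209)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_kow, test_size=0.25,
                                          random_state=0)
model = LinearRegression().fit(X_tr, y_tr)    # fit on training congeners
print("external R2:", r2_score(y_te, model.predict(X_te)))
```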
Wang, S; Martinez-Lage, M; Sakai, Y; Chawla, S; Kim, S G; Alonso-Basanta, M; Lustig, R A; Brem, S; Mohan, S; Wolf, R L; Desai, A; Poptani, H
2016-01-01
Early assessment of treatment response is critical in patients with glioblastomas. A combination of DTI and DSC perfusion imaging parameters was evaluated to distinguish glioblastomas with true progression from mixed response and pseudoprogression. Forty-one patients with glioblastomas exhibiting enhancing lesions within 6 months after completion of chemoradiation therapy were retrospectively studied. All patients underwent surgery after MR imaging and were histologically classified as having true progression (>75% tumor), mixed response (25%-75% tumor), or pseudoprogression (<25% tumor). Mean diffusivity, fractional anisotropy, linear anisotropy coefficient, planar anisotropy coefficient, spheric anisotropy coefficient, and maximum relative cerebral blood volume values were measured from the enhancing tissue. A multivariate logistic regression analysis was used to determine the best model for classification of true progression from mixed response or pseudoprogression. Significantly elevated maximum relative cerebral blood volume, fractional anisotropy, linear anisotropy coefficient, and planar anisotropy coefficient and decreased spheric anisotropy coefficient were observed in true progression compared with pseudoprogression (P < .05). There were also significant differences in maximum relative cerebral blood volume, fractional anisotropy, planar anisotropy coefficient, and spheric anisotropy coefficient measurements between mixed response and true progression groups. The best model to distinguish true progression from non-true progression (pseudoprogression and mixed) consisted of fractional anisotropy, linear anisotropy coefficient, and maximum relative cerebral blood volume, resulting in an area under the curve of 0.905. This model also differentiated true progression from mixed response with an area under the curve of 0.901. A combination of fractional anisotropy and maximum relative cerebral blood volume differentiated pseudoprogression from nonpseudoprogression (true progression and mixed) with an area under the curve of 0.807. DTI and DSC perfusion imaging can improve accuracy in assessing treatment response and may aid in individualized treatment of patients with glioblastomas. © 2016 by American Journal of Neuroradiology.
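The classification step can be illustrated as follows. The feature values are simulated stand-ins for fractional anisotropy, linear anisotropy coefficient, and maximum relative cerebral blood volume, so the resulting AUC is not comparable to the reported 0.905.

```python
# Illustrative multivariate logistic regression + ROC-AUC on simulated
# imaging features; not the study's data or pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 41
true_prog = rng.integers(0, 2, n)                 # 1 = true progression
fa   = 0.20 + 0.05 * true_prog + rng.normal(0, 0.03, n)
cl   = 0.10 + 0.04 * true_prog + rng.normal(0, 0.03, n)
rcbv = 2.00 + 1.50 * true_prog + rng.normal(0, 0.80, n)

X = np.column_stack([fa, cl, rcbv])
proba = LogisticRegression().fit(X, true_prog).predict_proba(X)[:, 1]
print("in-sample AUC:", roc_auc_score(true_prog, proba))
```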
Dolan, Anthony; Burgess, Catherine M; Barry, Thomas B; Fanning, Seamus; Duffy, Geraldine
2009-04-01
A sensitive quantitative reverse-transcription PCR (qRT-PCR) method was developed for enumeration of total bacteria, using two sets of primers to separately target the ribonuclease P (RNase P) RNA transcripts of gram-positive and gram-negative bacteria. Standard curves were generated using SYBR Green I kits on the LightCycler 2.0 instrument (Roche Diagnostics) to allow quantification of mixed microflora in liquid media. RNA standards were extracted from known cell equivalents and subsequently converted to cDNA for the construction of standard curves. The number of mixed bacteria in culture was determined by qRT-PCR, and the results correlated (r² = 0.88, RSD = 0.466) with the total viable count over the range from approximately log10 3 to log10 7 CFU/ml. The rapid nature of this assay (8 h) and its potential as an alternative to the standard plate count method for predicting total viable counts and shelf life are discussed.
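Quantification against a standard curve reduces to reading unknowns off a straight line of quantification cycle (Cq) versus log10 starting quantity. A minimal sketch with invented calibration values:

```python
# Standard-curve quantification sketch: Cq is linear in log10 starting
# quantity. Calibration values are invented, not the study's data.
import numpy as np

log10_cfu = np.array([3, 4, 5, 6, 7.0])          # standards, log10 CFU/ml
cq = np.array([30.1, 26.8, 23.4, 20.2, 16.9])    # observed cycles

slope, intercept = np.polyfit(log10_cfu, cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1            # ~1.0 means ~100% efficiency
unknown_cq = 24.9                                # hypothetical sample
print("estimated log10 CFU/ml:", (unknown_cq - intercept) / slope)
print("amplification efficiency:", round(efficiency, 2))
```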
Tara N. Jennings; Jane E. Smith; Kermit Cromack; Elizabeth W. Sulzman; Donaraye McKay; Bruce A. Caldwell; Sarah I. Beldin
2012-01-01
Postfire logging recoups the economic value of timber killed by wildfire, but whether such forest management activity supports or impedes forest recovery in stands differing in structure from historic conditions remains unclear. The aim of this study was to determine the impact of mechanical logging after wildfire on soil bacterial and fungal communities and other...
Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.
2017-01-01
Dispersal can impact population dynamics and geographic variation, and thus genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information criteria or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life-history characteristics (greater sage-grouse; eastern foxsnake). We determined that in our simulated scenarios, AIC and BIC were the best model selection indices, and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model, with confidence intervals that did not overlap zero across the entire model set. When we controlled for geographic distance, variables not in the underlying dispersal models (i.e., nontrue variables) typically had confidence intervals overlapping zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.
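The model-selection machinery being evaluated can be sketched compactly. Below, a random intercept per population is used as a crude stand-in for the MLPE error structure of pairwise data (names and data invented); the models are refitted by maximum likelihood because information criteria from REML fits are not comparable across fixed-effect structures.

```python
# Information-criterion selection sketch for linked pairwise data;
# a random intercept per "from" population stands in for MLPE.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({"pop1": rng.integers(0, 25, n),
                   "geo_dist": rng.uniform(0, 1, n),
                   "resist_a": rng.uniform(0, 1, n)})
df["gen_dist"] = 0.5 * df.geo_dist + 0.3 * df.resist_a + rng.normal(0, 0.1, n)

for f in ["gen_dist ~ geo_dist", "gen_dist ~ geo_dist + resist_a"]:
    fit = smf.mixedlm(f, df, groups=df["pop1"]).fit(reml=False)  # ML fit
    print(f, "| AIC:", round(fit.aic, 1), "BIC:", round(fit.bic, 1))
```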
Alternative mathematical programming formulations for FSS synthesis
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.
1986-01-01
A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.
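The first objective has a simple linear-programming core once the left-to-right ordering of the satellites is fixed: minimize the total absolute deviation from desired longitudes subject to minimum-separation constraints. A toy sketch, with all numbers invented:

```python
# Toy LP core of objective (1): minimize total absolute deviation from
# desired longitudes with a minimum separation and a fixed ordering.
import numpy as np
from scipy.optimize import linprog

desired = np.array([10.0, 12.0, 13.0, 20.0])   # desired longitudes (deg)
sep = 3.0                                      # required separation (deg)
k = len(desired)

c = np.concatenate([np.zeros(k), np.ones(k)])  # variables [x, e]; min sum(e)
rows, rhs = [], []
for i in range(k):                             # linearize |x_i - d_i| <= e_i
    up = np.zeros(2 * k); up[i] = 1.0;  up[k + i] = -1.0
    lo = np.zeros(2 * k); lo[i] = -1.0; lo[k + i] = -1.0
    rows += [up, lo]; rhs += [desired[i], -desired[i]]
for i in range(k - 1):                         # x_i + sep <= x_{i+1}
    r = np.zeros(2 * k); r[i] = 1.0; r[i + 1] = -1.0
    rows.append(r); rhs.append(-sep)

res = linprog(c, A_ub=np.vstack(rows), b_ub=rhs,
              bounds=[(None, None)] * k + [(0, None)] * k)
print("positions (deg):", res.x[:k].round(2))
```

Letting the solver choose the ordering, rather than fixing it in advance, is what introduces the integer variables of the mixed-integer formulations.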
Coarse-Grained Models for Automated Fragmentation and Parametrization of Molecular Databases.
Fraaije, Johannes G E M; van Male, Jan; Becherer, Paul; Serral Gracià, Rubèn
2016-12-27
We calibrate coarse-grained interaction potentials suitable for screening large data sets in top-down fashion. Three new algorithms are introduced: (i) automated decomposition of molecules into coarse-grained units (fragmentation); (ii) Coarse-Grained Reference Interaction Site Model-Hypernetted Chain (CG RISM-HNC) as an intermediate proxy for dissipative particle dynamics (DPD); and (iii) a simple top-down coarse-grained interaction potential/model based on activity-coefficient theories from engineering (using COSMO-RS). We find that the fragment distribution follows the Zipf and Heaps scaling laws. The accuracy in Gibbs energy of mixing calculations is a few tenths of a kilocalorie per mole. As a final proof of principle, we use full coarse-grained sampling through DPD thermodynamic integration to calculate log POW for 4627 compounds with an average error of 0.84 log unit. The computational speeds per calculation are a few seconds for CG RISM-HNC and a few minutes for DPD thermodynamic integration.
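The Zipf claim is easy to check in practice: on a log-log scale, fragment frequency should fall roughly linearly with rank. A quick sketch with simulated counts (in the study they would come from the fragmentation step):

```python
# Hedged Zipf-law check on simulated counts; slope of log frequency vs.
# log rank estimates the Zipf exponent.
import numpy as np

rng = np.random.default_rng(4)
counts = np.sort(rng.zipf(a=2.0, size=5000))[::-1].astype(float)
ranks = np.arange(1, counts.size + 1)

slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
print("Zipf exponent estimate:", round(-slope, 2))  # ~1 for classic Zipf
```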
NASA Technical Reports Server (NTRS)
Herskovits, E. H.; Itoh, R.; Melhem, E. R.
2001-01-01
OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on the sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher-order interactions were added. CONCLUSION: We developed software with which we can generate brain images for a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.
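The chi-square step corresponds to a standard test of independence on a detection-by-sequence table; the log-linear models then add and test interaction terms. A minimal sketch with invented counts:

```python
# Invented counts illustrating the chi-square test of independence
# between MR sequence and detection outcome; not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

#                  hit  miss
table = np.array([[170,  10],    # FLAIR
                  [140,  40],    # proton density-weighted
                  [135,  45]])   # T2-weighted
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```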
Statistical methodology for the analysis of dye-switch microarray experiments
Mary-Huard, Tristan; Aubert, Julie; Mansouri-Attia, Nadera; Sandra, Olivier; Daudin, Jean-Jacques
2008-01-01
Background In individually dye-balanced microarray designs, each biological sample is hybridized on two different slides, once with Cy3 and once with Cy5. While this strategy ensures an automatic correction of the gene-specific labelling bias, it also induces dependencies between log-ratio measurements that must be taken into account in the statistical analysis. Results We present two original statistical procedures for the statistical analysis of individually balanced designs. These procedures are compared with the usual ML and REML mixed model procedures proposed in most statistical toolboxes, on both simulated and real data. Conclusion The UP procedure we propose as an alternative to usual mixed model procedures is more efficient and significantly faster to compute. This result provides some useful guidelines for the analysis of complex designs. PMID:18271965
ɛ-mechanism driven pulsations in hot subdwarf stars with mixed H-He atmospheres
NASA Astrophysics Data System (ADS)
Battich, Tiara; Miller Bertolami, Marcelo M.; Córsico, Alejandro H.; Althaus, Leandro G.
2017-12-01
The ɛ mechanism is a self-excitation mechanism of stellar pulsations which acts in regions where nuclear burning takes place. It has been shown that the ɛ mechanism can excite pulsations in hot pre-horizontal branch stars before they settle into the stable helium core-burning phase, and that the shortest periods of LS IV-14°116 could be explained that way. We aim to study the ɛ mechanism in stellar models appropriate for hot pre-horizontal branch stars and to predict their pulsational properties. We perform detailed computations of non-adiabatic non-radial pulsations on such stellar models. We predict a new instability domain of long-period gravity modes in the log g - log Teff plane at roughly 22000 K ≲ Teff ≲ 50000 K and 4.67 ≲ log g ≲ 6.15, with a period range from 200 to 2000 s. Comparison with the three known pulsating He-rich subdwarfs shows that the ɛ mechanism can excite pulsations in models with similar surface properties, except for the modes with the shortest observed periods. Based on simple estimates, we expect at least 3 stars in the current samples of hot-subdwarf stars to be pulsating by the ɛ mechanism. Our results could constitute a theoretical basis for future searches of pulsators in the Galactic field.