Sample records for log-linear models

  1. Latent log-linear models for handwritten digit classification.

    PubMed

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and their model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Although our models use significantly fewer parameters, they obtain results competitive with models proposed in the literature.
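
    The latent log-linear idea above is straightforward to prototype. The sketch below, a toy stand-in rather than the authors' implementation, fits a two-class log-linear mixture by gradient ascent on the marginal log-likelihood, with per-component responsibilities playing the role of the alternating (E-like) step; all data and hyperparameters are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-class data standing in for digit features; bias column appended.
    X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    y = np.repeat([0, 1], 100)
    C, M, D = 2, 3, X.shape[1]            # classes, latent components, feature dim
    W = rng.normal(0.0, 0.01, (C, M, D))  # one log-linear scorer per (class, component)

    def joint_posterior(W, X):
        """p(c, m | x) under the latent log-linear model."""
        s = np.einsum('cmd,nd->ncm', W, X)
        s -= s.max(axis=(1, 2), keepdims=True)          # numerical stability
        e = np.exp(s)
        return e / e.sum(axis=(1, 2), keepdims=True)

    for _ in range(300):
        p = joint_posterior(W, X)                       # shape (N, C, M)
        q = p[np.arange(len(y)), y]                     # responsibilities q(m | x, y)
        q /= q.sum(axis=1, keepdims=True)
        grad = np.empty_like(W)
        for c in range(C):                              # exact gradient of log p(y | x)
            coef = q * (y == c)[:, None] - p[:, c, :]
            grad[c] = coef.T @ X
        W += 0.5 * grad / len(y)                        # ascent step

    pred = joint_posterior(W, X).sum(axis=2).argmax(axis=1)  # marginalize over m
    print("training accuracy:", (pred == y).mean())
    ```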

  2. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    ERIC Educational Resources Information Center

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  3. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  4. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    PubMed Central

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  5. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and implemented in the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  6. Log-Multiplicative Association Models as Item Response Models

    ERIC Educational Resources Information Center

    Anderson, Carolyn J.; Yu, Hsiu-Ting

    2007-01-01

    Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Bockenholt,…

  7. The Impact of Model Misspecification on Parameter Estimation and Item-Fit Assessment in Log-Linear Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver

    2012-01-01

    Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…

  8. Testing the dose-response specification in epidemiology: public health and policy consequences for lead.

    PubMed

    Rothenberg, Stephen J; Rothenberg, Jesse C

    2005-09-01

    Statistical evaluation of the dose-response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose-response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear-linear dose response) and natural-log-transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose-response relationship. We found that a log-linear lead-IQ relationship was a significantly better fit than was a linear-linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead-IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 microg/dL to 2.0 microg/dL) was 2.2 times (319 billion dollars) that calculated using a linear-linear dose-response function (149 billion dollars). The Centers for Disease Control and Prevention action limit of 10 microg/dL for children fails to protect against most damage and economic cost attributable to lead exposure.
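
    The linear-linear versus log-linear comparison in this record reduces to fitting OLS with the lead term entered raw or log-transformed. A minimal sketch with statsmodels on synthetic data (the coefficients and noise level are assumptions, not the pooled study's values):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the pooled data: IQ declines with log(blood lead).
    lead = rng.uniform(1, 30, 500)                      # blood lead, ug/dL
    iq = 100 - 2.7 * np.log(lead) + rng.normal(0, 5, 500)

    X_lin = sm.add_constant(lead)                       # linear-linear specification
    X_log = sm.add_constant(np.log(lead))               # log-linear specification

    fit_lin = sm.OLS(iq, X_lin).fit()
    fit_log = sm.OLS(iq, X_log).fit()

    # Compare the two (non-nested) specifications, e.g. by AIC.
    print("linear-linear AIC:", fit_lin.aic)
    print("log-linear    AIC:", fit_log.aic)

    # Predicted IQ gain for a population lead decline from 17.1 to 2.0 ug/dL:
    b = fit_log.params[1]
    print("predicted IQ gain:", b * (np.log(2.0) - np.log(17.1)))
    ```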

  9. A comparison of methods to handle skew distributed cost variables in the analysis of the resource consumption in schizophrenia treatment.

    PubMed

    Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C

    2002-03-01

    Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. This study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSEs were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM again showed the weakest model fit. None of the differences between the RMSEs resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSEs were not significant.
Due to the small number of cases in the study, the lack of significance does not suffice to prove that the differences between the RMSEs for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a sample size inadequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by non-parametric methods, which are robust against deviations from normality and homoscedasticity of the residuals, is a suitable alternative to transformation of the skew-distributed dependent variable.
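
    For readers who want to reproduce this kind of comparison, the sketch below contrasts two of the three estimators on synthetic skewed costs: OLS on log costs retransformed with Duan's smearing estimator, and a gamma GLM with a log link. It illustrates the general approach, not the study's analysis; all numbers are invented.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)

    # Synthetic right-skewed costs driven by a single severity score.
    n = 254
    severity = rng.normal(0, 1, n)
    cost = np.exp(7 + 0.5 * severity + rng.normal(0, 0.8, n))   # log-normal costs
    X = sm.add_constant(severity)

    # (1) OLS on log costs, retransformed with Duan's smearing estimator.
    log_fit = sm.OLS(np.log(cost), X).fit()
    smear = np.exp(log_fit.resid).mean()               # nonparametric smearing factor
    pred_logols = smear * np.exp(log_fit.fittedvalues)

    # (2) Gamma GLM with a log link: predicts E[cost] directly, no retransformation.
    glm_fit = sm.GLM(cost, X,
                     family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    pred_glm = glm_fit.fittedvalues

    for name, pred in [("log-OLS + smearing", pred_logols), ("Gamma GLM", pred_glm)]:
        rmse = np.sqrt(np.mean((cost - pred) ** 2))
        print(f"{name}: RMSE = {rmse:,.0f}")
    ```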

  10. Delineating chalk sand distribution of Ekofisk formation using probabilistic neural network (PNN) and stepwise regression (SWR): Case study Danish North Sea field

    NASA Astrophysics Data System (ADS)

    Haris, A.; Nafian, M.; Riyanto, A.

    2017-07-01

    The Danish North Sea field comprises several formations (Ekofisk, Tor, and Cromer Knoll) deposited from the Paleocene to the Miocene. In this study, the integration of seismic and well log data sets is carried out to determine the chalk sand distribution in the Danish North Sea field. The integration is performed using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm used to derive acoustic impedance (AI) is a model-based technique. The derived AI is then used as an external attribute for the input of the multi-attribute analysis. The multi-attribute analysis is used to generate linear and non-linear transformations among well log properties. In the linear case, the transformation is selected by weighted step-wise linear regression (SWR), while the non-linear model is obtained using probabilistic neural networks (PNN). The porosity estimated by the PNN fits the well log data better than the SWR result. This can be understood since the PNN performs non-linear regression, so the relationship between the attribute data and the predicted log data can be optimized. The chalk sand distribution has been successfully identified and characterized by porosity values ranging from 23% up to 30%.

  11. Functional forms and price elasticities in a discrete continuous choice model of the residential water demand

    NASA Astrophysics Data System (ADS)

    Vásquez Lavín, F. A.; Hernandez, J. I.; Ponce, R. D.; Orrego, S. A.

    2017-07-01

    During recent decades, water demand estimation has gained considerable attention from scholars. From an econometric perspective, the most used functional forms include log-log and linear specifications. Despite the advances in this field and the relevance for policymaking, little attention has been paid to the functional forms used in these estimations, and most authors have not provided justifications for their selection of functional forms. A discrete continuous choice model of the residential water demand is estimated using six functional forms (log-log, full-log, log-quadratic, semilog, linear, and Stone-Geary), and the expected consumption and price elasticity are evaluated. From a policy perspective, our results highlight the relevance of functional form selection for both the expected consumption and price elasticity.

  12. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
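
    The algebraic equivalence the abstract re-derives is easy to check numerically: for exponentially distributed sojourn times, Poisson regression of the event indicator with a log(time) offset recovers the log-rate coefficients of the survival model. A minimal sketch with simulated data and assumed coefficients:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)

    # Exponentially distributed sojourn times with a covariate effect on the log rate.
    n = 2000
    x = rng.binomial(1, 0.5, n)                      # e.g. diseased vs non-diseased
    rate = np.exp(-1.0 + 0.7 * x)                    # transition rate per unit time
    t = rng.exponential(1 / rate)                    # observed sojourn times
    event = np.ones(n)                               # all transitions observed here

    # Poisson regression of the event indicator with a log(time) offset
    # recovers the same log-rate coefficients as the exponential survival model.
    X = sm.add_constant(x)
    fit = sm.GLM(event, X, family=sm.families.Poisson(), offset=np.log(t)).fit()
    print(fit.params)                                # approx. [-1.0, 0.7]
    ```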

  13. Use of simulation tools to illustrate the effect of data management practices for low and negative plate counts on the estimated parameters of microbial reduction models.

    PubMed

    Garcés-Vega, Francisco; Marks, Bradley P

    2014-08-01

    In the last 20 years, the use of microbial reduction models has expanded significantly, including inactivation (linear and nonlinear), survival, and transfer models. However, a major constraint for model development is the impossibility of directly quantifying the number of viable microorganisms below the limit of detection (LOD) for a given study. Different approaches have been used to manage this challenge, including ignoring negative plate counts, using statistical estimations, or applying data transformations. Our objective was to illustrate and quantify the effect of negative plate count data management approaches on parameter estimation for microbial reduction models. Because it is impossible to obtain accurate plate counts below the LOD, we performed simulated experiments to generate synthetic data for both log-linear and Weibull-type microbial reductions. We then applied five different, previously reported data management practices and fit log-linear and Weibull models to the resulting data. The results indicated a significant effect (α = 0.05) of the data management practices on the estimated model parameters and performance indicators. For example, when the negative plate counts were replaced by the LOD for log-linear data sets, the slope of the subsequent log-linear model was, on average, 22% smaller than for the original data, the resulting model underpredicted lethality by up to 2.0 log, and the Weibull model was erroneously selected as the most likely correct model for those data. The results demonstrate that it is important to explicitly report LODs and related data management protocols, which can significantly affect model results, interpretation, and utility. Ultimately, we recommend using only the positive plate counts to estimate model parameters for microbial reduction curves and avoiding any data value substitutions or transformations when managing negative plate counts to yield the most accurate model parameters.
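
    The effect described here can be reproduced with a few lines of simulation. The sketch below generates a log-linear inactivation curve, then compares the fitted slope when negative plates are replaced by the LOD versus when only positive counts are used; the curve parameters and LOD are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Simulate a true log-linear inactivation curve: log10 N(t) = 7 - 0.5 t.
    t = np.arange(0, 25, 1.0)
    true_log10 = 7 - 0.5 * t
    counts = 10 ** (true_log10 + rng.normal(0, 0.2, t.size))   # plate-count noise
    LOD = 10.0                                                 # limit of detection, CFU

    below = counts < LOD

    def slope(tt, yy):
        return np.polyfit(tt, yy, 1)[0]

    # Practice 1: substitute the LOD for negative plates (flattens the tail,
    # so the fitted slope magnitude shrinks).
    y_sub = np.where(below, LOD, counts)
    print("slope, LOD substitution:", slope(t, np.log10(y_sub)))

    # Practice 2: drop censored points and fit positives only (recommended above).
    print("slope, positives only  :", slope(t[~below], np.log10(counts[~below])))
    print("true slope             : -0.5")
    ```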

  14. Effect of the shape of the exposure-response function on estimated hospital costs in a study on non-elective pneumonia hospitalizations related to particulate matter.

    PubMed

    Devos, Stefanie; Cox, Bianca; van Lier, Tom; Nawrot, Tim S; Putman, Koen

    2016-09-01

    We used log-linear and log-log exposure-response (E-R) functions to model the association between PM2.5 exposure and non-elective hospitalizations for pneumonia, and estimated the attributable hospital costs by using the effect estimates obtained from both functions. We used hospital discharge data on 3519 non-elective pneumonia admissions from UZ Brussels between 2007 and 2012 and we combined a case-crossover design with distributed lag models. The annual averted pneumonia hospitalization costs for a reduction in PM2.5 exposure from the mean (21.4μg/m(3)) to the WHO guideline for annual mean PM2.5 (10μg/m(3)) were estimated and extrapolated for Belgium. Non-elective hospitalizations for pneumonia were significantly associated with PM2.5 exposure in both models. Using a log-linear E-R function, the estimated risk reduction for pneumonia hospitalization associated with a decrease in mean PM2.5 exposure to 10μg/m(3) was 4.9%. The corresponding estimate for the log-log model was 10.7%. These estimates translate to an annual pneumonia hospital cost saving in Belgium of €15.5 million and almost €34 million for the log-linear and log-log E-R function, respectively. Although further research is required to assess the shape of the association between PM2.5 exposure and pneumonia hospitalizations, we demonstrated that estimates for health effects and associated costs heavily depend on the assumed E-R function. These results are important for policy making, as supra-linear E-R associations imply that significant health benefits may still be obtained from additional pollution control measures in areas where PM levels have already been reduced. Copyright © 2016 Elsevier Ltd. All rights reserved.
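
    The two risk estimates translate into relative risks by different formulas: under the log-linear form RR = exp(beta * (x1 - x0)), under the log-log form RR = (x1 / x0)^beta. The sketch below back-solves illustrative coefficients from the reported 4.9% and 10.7% reductions; the betas are our reconstructions, not values from the paper.

    ```python
    import numpy as np

    x0, x1 = 21.4, 10.0            # mean exposure -> WHO guideline, ug/m3

    # Illustrative coefficients back-solved from the reported risk reductions.
    beta_loglin = -np.log(1 - 0.049) / (x0 - x1)        # log-linear E-R slope
    beta_loglog = np.log(1 - 0.107) / np.log(x1 / x0)   # log-log E-R elasticity

    rr_loglin = np.exp(beta_loglin * (x1 - x0))         # relative risk after the cut
    rr_loglog = (x1 / x0) ** beta_loglog

    print(f"log-linear: {100 * (1 - rr_loglin):.1f}% fewer admissions")  # ~4.9%
    print(f"log-log   : {100 * (1 - rr_loglog):.1f}% fewer admissions")  # ~10.7%
    ```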

  15. Deformation-Aware Log-Linear Models

    NASA Astrophysics Data System (ADS)

    Gass, Tobias; Deselaers, Thomas; Ney, Hermann

    In this paper, we present a novel deformation-aware discriminative model for handwritten digit recognition. Unlike previous approaches, our model directly considers image deformations and allows discriminative training of all parameters, including those accounting for non-linear transformations of the image. This is achieved by extending a log-linear framework to incorporate a latent deformation variable. The resulting model has an order of magnitude fewer parameters than competing approaches to handling image deformations. We tune and evaluate our approach on the USPS task and show its generalization capabilities by applying the tuned model to the MNIST task. We gain interesting insights and achieve highly competitive results on both tasks.

  16. Evaluation of drought using SPEI drought class transitions and log-linear models for different agro-ecological regions of India

    NASA Astrophysics Data System (ADS)

    Alam, N. M.; Sharma, G. C.; Moreira, Elsa; Jana, C.; Mishra, P. K.; Sharma, N. K.; Mandal, D.

    2017-08-01

    Markov chain and 3-dimensional log-linear models were used to model drought class transitions derived from the newly developed Standardized Precipitation Evapotranspiration Index (SPEI) at a 12-month time scale for six major drought-prone areas of India. The log-linear modelling approach was used to investigate differences relative to drought class transitions using SPEI-12 time series derived from 48 years of monthly rainfall and temperature data. In this study, the probabilities of drought class transition, the mean residence time, the 1-, 2- or 3-month-ahead prediction of average transition time between drought classes, and the drought severity class have been derived. Seasonality of precipitation has been derived for non-homogeneous Markov chains, which could be used to explain the effect of the potential retreat of drought. Quasi-association and quasi-symmetry log-linear models have been fitted to the drought class transitions derived from the SPEI-12 time series. The estimates of odds, along with their confidence intervals, were obtained to explain the progression of drought and to estimate drought class transition probabilities. For the initial months, the calculated odds decrease as the drought severity increases, and they decrease further for the succeeding months. This indicates that the ratio of expected frequencies of transition from a drought class to the non-drought class, compared with transition to any drought class, decreases as the drought severity of the present class increases. The 3-dimensional log-linear model makes clear that during the last 24 years the drought probability has increased for almost all six regions. The findings from the present study will help to assess the impact of drought on gross primary production and to develop future contingency planning in similar regions worldwide.
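
    As a companion to this record, the sketch below estimates a first-order drought class transition matrix and mean residence times from a stand-in SPEI-12 series; the class cut points and the synthetic series are illustrative assumptions, and the log-linear modelling of the transition table is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Stand-in SPEI-12 series; a real analysis would use the drought index itself.
    spei = rng.normal(0, 1, 48 * 12)
    # Classes: 0 non-drought, 1 moderate, 2 severe, 3 extreme (illustrative cuts).
    classes = np.digitize(-spei, [1.0, 1.5, 2.0])

    # First-order Markov chain: count transitions and row-normalize.
    K = 4
    counts = np.zeros((K, K))
    for a, b in zip(classes[:-1], classes[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True).clip(min=1)   # guard empty rows
    P = counts / row_sums
    print(np.round(P, 3))

    # Mean residence time of each class under the fitted chain: 1 / (1 - p_ii).
    print("mean residence (months):", np.round(1 / (1 - np.diag(P)), 2))
    ```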

  17. The mathematical formulation of a generalized Hooke's law for blood vessels.

    PubMed

    Zhang, Wei; Wang, Chong; Kassab, Ghassan S

    2007-08-01

    It is well known that the stress-strain relationship of blood vessels is highly nonlinear. To linearize the relationship, the Hencky strain tensor is generalized to a logarithmic-exponential (log-exp) strain tensor to absorb the nonlinearity. A quadratic nominal strain potential is proposed to derive the second Piola-Kirchhoff stresses by differentiating the potential with respect to the log-exp strains. The resulting constitutive equation is a generalized Hooke's law. Ten material constants are needed for the three-dimensional orthotropic model. The nondimensional constant used in the log-exp strain definition is interpreted as a nonlinearity parameter. The other nine constants are the elastic moduli with respect to the log-exp strains. In this paper, the proposed linear stress-strain relation is shown to represent the pseudoelastic Fung model very well.

  18. Comparison of statistical models to estimate parasite growth rate in the induced blood stage malaria model.

    PubMed

    Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise

    2017-08-25

    The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine compared to controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data was fitted using 12 statistical models: log-linear, sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24) compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
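
    The fixed-versus-free intercept effect reported here is easy to demonstrate: if the effective intercept sits below log10(inoculum size), forcing the regression through the inoculum biases the slope downward. A sketch with invented IBSM-style numbers:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic IBSM-style parasitaemia growing exponentially on the log10 scale.
    inoculum = 300.0                      # parasites per mL (assumed, for illustration)
    days = np.arange(4, 9, 0.5)           # qPCR sampling days post-inoculation
    true_rate = 0.55                      # log10 parasites per mL per day
    # Assume part of the inoculum is cleared early, so the effective intercept
    # sits 1 log10 below log10(inoculum).
    y = np.log10(inoculum) - 1.0 + true_rate * days + rng.normal(0, 0.3, days.size)

    # (1) Intercept estimated in the log-linear regression (recommended model).
    slope_free, intercept_free = np.polyfit(days, y, 1)

    # (2) Intercept fixed to the inoculum size: regress (y - log10(inoculum))
    # on days with no constant term.
    slope_fixed = np.linalg.lstsq(days[:, None], y - np.log10(inoculum),
                                  rcond=None)[0][0]

    print(f"free intercept : {slope_free:.3f} log10 parasites/mL/day")
    print(f"fixed intercept: {slope_fixed:.3f} log10 parasites/mL/day (biased low)")
    ```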

  19. Linear and nonlinear methods in modeling the aqueous solubility of organic compounds.

    PubMed

    Catana, Cornel; Gao, Hua; Orrenius, Christian; Stouten, Pieter F W

    2005-01-01

    Solubility data for 930 diverse compounds have been analyzed using linear partial least squares (PLS) and nonlinear PLS methods, continuum regression (CR), and neural networks (NN). 1D and 2D descriptors from the MOE package in combination with E-state or ISIS keys have been used. The best model was obtained using linear PLS on a combination of 22 MOE descriptors and 65 ISIS keys. It has a correlation coefficient (r2) of 0.935 and a root-mean-square error (RMSE) of 0.468 log molar solubility (log S(w)). The model, validated on a test set of 177 compounds not included in the training set, has an r2 of 0.911 and an RMSE of 0.475 log S(w). The descriptors were ranked according to their importance, and the 22 MOE descriptors were found at the top of the list. The CR model produced results as good as PLS, and because of the way in which cross-validation was done, it is expected to be a valuable prediction tool alongside the PLS model. The statistics obtained using nonlinear methods did not surpass those obtained with linear ones. The good statistics obtained for linear PLS and CR recommend these models for prediction when it is difficult or impossible to make experimental measurements, for virtual screening, combinatorial library design, and efficient lead optimization.

  20. The word frequency effect during sentence reading: A linear or nonlinear effect of log frequency?

    PubMed

    White, Sarah J; Drieghe, Denis; Liversedge, Simon P; Staub, Adrian

    2016-10-20

    The effect of word frequency on eye movement behaviour during reading has been reported in many experimental studies. However, the vast majority of these studies compared only two levels of word frequency (high and low). Here we assess whether the effect of log word frequency on eye movement measures is linear, in an experiment in which a critical target word in each sentence was at one of three approximately equally spaced log frequency levels. Separate analyses treated log frequency as a categorical or a continuous predictor. Both analyses showed only a linear effect of log frequency on the likelihood of skipping a word, and on first fixation duration. Ex-Gaussian analyses of first fixation duration showed similar effects on distributional parameters in comparing high- and medium-frequency words, and medium- and low-frequency words. Analyses of gaze duration and the probability of a refixation suggested a nonlinear pattern, with a larger effect at the lower end of the log frequency scale. However, the nonlinear effects were small, and Bayes Factor analyses favoured the simpler linear models for all measures. The possible roles of lexical and post-lexical factors in producing nonlinear effects of log word frequency during sentence reading are discussed.

  1. A Linearized Model for Flicker and Contrast Thresholds at Various Retinal Illuminances

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Watson, Andrew

    2015-01-01

    We previously proposed a flicker visibility metric for bright displays, based on psychophysical data collected at a high mean luminance. Here we extend the metric to other mean luminances. This extension relies on a linear relation between log sensitivity and critical fusion frequency, and a linear relation between critical fusion frequency and log retinal illuminance. Consistent with our previous metric, the extended flicker visibility metric is measured in just-noticeable differences (JNDs).

  2. Log-Linear Modeling of Agreement among Expert Exposure Assessors

    PubMed Central

    Hunt, Phillip R.; Friesen, Melissa C.; Sama, Susan; Ryan, Louise; Milton, Donald

    2015-01-01

    Background: Evaluation of expert assessment of exposure depends, in the absence of a validation measurement, upon measures of agreement among the expert raters. Agreement is typically measured using Cohen's kappa statistic; however, there are some well-known limitations to this approach. We demonstrate an alternate method that uses log-linear models designed to model agreement. These models contain parameters that distinguish between exact agreement (diagonals of the agreement matrix) and non-exact associations (off-diagonals). In addition, they can incorporate covariates to examine whether agreement differs across strata. Methods: We applied these models to evaluate agreement among expert ratings of exposure to sensitizers (none, likely, high) in a study of occupational asthma. Results: Traditional analyses using weighted kappa suggested potential differences in agreement by blue/white collar jobs and office/non-office jobs, but not case/control status. However, the evaluation of the covariates and their interaction terms in log-linear models found no differences in agreement with these covariates and provided evidence that the differences observed using kappa were the result of marginal differences in the distribution of ratings rather than differences in agreement. Differences in agreement were predicted across the exposure scale, with the intermediate "likely" exposed category more difficult for the experts to differentiate from the highly exposed category than from the unexposed category. Conclusions: The log-linear models provided valuable information about patterns of agreement and the structure of the data that was not revealed in analyses using kappa. The models' lack of dependence on marginal distributions and the ease of evaluating covariates allow reliable detection of observational bias in exposure data. PMID:25748517

  3. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
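
    The "explode" step mentioned above, splitting each subject's follow-up at the cut points of the piecewise baseline hazard, is the only data manipulation the approach needs before an off-the-shelf Poisson mixed model can be fitted. Since %PCFrailty is a SAS macro, here is a language-neutral sketch of that step in Python; the column names are ours.

    ```python
    import numpy as np
    import pandas as pd

    def explode(times, events, cuts):
        """Split each subject's follow-up at the cut points, one row per
        (subject, piece), with the piece exposure time and event indicator.
        This is the data layout needed to fit a piecewise-constant-hazard
        (frailty) model as a Poisson GLMM with offset log(exposure)."""
        rows = []
        edges = np.concatenate([[0.0], np.asarray(cuts, float)])
        for i, (t, d) in enumerate(zip(times, events)):
            for j in range(len(edges) - 1):
                lo, hi = edges[j], edges[j + 1]
                if t <= lo:
                    break                      # follow-up ended before this piece
                rows.append({"id": i, "piece": j,
                             "exposure": min(t, hi) - lo,
                             "event": int(d and t <= hi)})
        return pd.DataFrame(rows)

    # Three subjects: times 2.5 and 4.2 end in events, 7.0 is censored.
    df = explode(times=[2.5, 7.0, 4.2], events=[1, 0, 1], cuts=[3.0, 6.0, 10.0])
    print(df)
    ```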

  4. Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters

    NASA Astrophysics Data System (ADS)

    Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun

    2015-05-01

    We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters Teff, log g, and [Fe/H]. "Linearly supporting" means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method LARSbs, based on the least absolute shrinkage and selection operator; third, estimate the atmospheric parameters Teff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate Teff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log Teff (83 K for Teff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log Teff (32 K for Teff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].

  5. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    PubMed

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with an RMSE < 10.3%. The external data evaluation showed that the models predicted AUCinf with an RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single-time-point strategy of using Cmax (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
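
    The single-time-point idea amounts to simple regressions of AUCinf on Cmax. The sketch below fits the linear, log-linear and power forms on synthetic pairs (fitted on log scales, the log-log and power forms coincide, so three distinct shapes are shown); the data-generating numbers are invented, not dalbavancin parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic stand-in for the 21 subject pairs of (Cmax, AUCinf).
    cmax = rng.uniform(200, 450, 21)
    auc = 28 * cmax ** 0.95 * np.exp(rng.normal(0, 0.05, 21))

    # Linear:     AUC = a + b * Cmax
    b_lin = np.polyfit(cmax, auc, 1)
    # Log-linear: log(AUC) = a + b * Cmax
    b_loglin = np.polyfit(cmax, np.log(auc), 1)
    # Power (equivalently log-log): log(AUC) = log(a) + b * log(Cmax)
    b_pow = np.polyfit(np.log(cmax), np.log(auc), 1)

    def predict_all(c):
        return {"linear": np.polyval(b_lin, c),
                "log-linear": np.exp(np.polyval(b_loglin, c)),
                "power": np.exp(np.polyval(b_pow, np.log(c)))}

    # Single-time-point strategy: predict AUCinf from an end-of-infusion Cmax.
    print(predict_all(300.0))
    ```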

  6. Use of biopartitioning micellar chromatography and RP-HPLC for the determination of blood-brain barrier penetration of α-adrenergic/imidazoline receptor ligands, and QSPR analysis.

    PubMed

    Vucicevic, J; Popovic, M; Nikolic, K; Filipic, S; Obradovic, D; Agbaba, D

    2017-03-01

    For this study, 31 compounds, including 16 imidazoline/α-adrenergic receptor (IRs/α-ARs) ligands and 15 central nervous system (CNS) drugs, were characterized in terms of the retention factors (k) obtained using biopartitioning micellar and classical reversed-phase chromatography (log kBMC and log kwRP, respectively). Based on the retention factor (log kwRP) and the slope of the linear curve (S), the isocratic parameter (φ0) was calculated. The obtained retention factors were correlated with experimental log BB values for the group of examined compounds. High correlations were obtained between the logarithm of the biopartitioning micellar chromatography (BMC) retention factor and effective permeability (r(log kBMC/log BB): 0.77), while for the RP-HPLC system the correlations were lower (r(log kwRP/log BB): 0.58; r(S/log BB): -0.50; r(φ0/Pe): 0.61). Based on the log kBMC retention data and calculated molecular parameters of the examined compounds, quantitative structure-permeability relationship (QSPR) models were developed using partial least squares, stepwise multiple linear regression, support vector machine and artificial neural network methodologies. A high degree of structural diversity of the analysed IRs/α-ARs ligands and CNS drugs provides a wide applicability domain of the QSPR models for estimation of blood-brain barrier penetration of related compounds.

  7. Testing the Dose–Response Specification in Epidemiology: Public Health and Policy Consequences for Lead

    PubMed Central

    Rothenberg, Stephen J.; Rothenberg, Jesse C.

    2005-01-01

    Statistical evaluation of the dose–response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose–response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear–linear dose response) and natural-log–transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose–response relationship. We found that a log-linear lead–IQ relationship was a significantly better fit than was a linear–linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead–IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 μg/dL to 2.0 μg/dL) was 2.2 times ($319 billion) that calculated using a linear–linear dose–response function ($149 billion). The Centers for Disease Control and Prevention action limit of 10 μg/dL for children fails to protect against most damage and economic cost attributable to lead exposure. PMID:16140626

  8. Log-linear human chorionic gonadotropin elimination in cases of retained placenta percreta.

    PubMed

    Stitely, Michael L; Gerard Jackson, M; Holls, William H

    2014-02-01

    To describe the human chorionic gonadotropin (hCG) elimination rate in patients with intentionally retained placenta percreta, medical records for cases of placenta percreta with intentional retention of the placenta were reviewed. The natural log of the hCG levels was plotted versus time and the elimination rate equations were then derived. The hCG elimination rate equations were log-linear in the three cases individually (R(2) = 0.96-0.99) and in aggregate (R(2) = 0.92). The mean half-life of hCG elimination was 146.3 h (6.1 days). The elimination of hCG in patients with intentionally retained placenta percreta is consistent with a two-compartment elimination model. The hCG elimination in retained placenta percreta is predictable in a log-linear manner that is similar to other reports of retained, abnormally adherent placentae treated with or without methotrexate.
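
    The half-life calculation implied here is a one-line consequence of the log-linear fit: regress ln(hCG) on time and take ln(2) divided by the magnitude of the slope. A sketch with made-up serial values chosen to land near the reported ~6-day half-life:

    ```python
    import numpy as np

    # Illustrative serial hCG values (mIU/mL) with the placenta left in situ;
    # times in hours. The values are invented for the sketch.
    t = np.array([0.0, 72.0, 168.0, 336.0, 504.0])
    hcg = np.array([45000.0, 32000.0, 20500.0, 9200.0, 4100.0])

    # Log-linear elimination: fit ln(hCG) = a + k * t.
    k, a = np.polyfit(t, np.log(hcg), 1)
    half_life_h = np.log(2) / -k

    print(f"elimination rate k = {k:.5f} per hour")
    print(f"half-life = {half_life_h:.0f} h ({half_life_h / 24:.1f} days)")
    ```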

  9. Minimizing bias in biomass allometry: Model selection and log transformation of data

    Treesearch

    Joseph Mascaro; Flint Hughes; Amanda Uowolo; Stefan A. Schnitzer

    2011-01-01

    Nonlinear regression is increasingly used to develop allometric equations for forest biomass estimation (i.e., as opposed to the traditional approach of log-transformation followed by linear regression). Most statistical software packages, however, assume additive errors by default, violating a key assumption of allometric theory and possibly producing spurious models....

  10. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intraspot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
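
    The transformation at the core of the reformulated model is elementary: each spot's red/green intensities become an M-value (log-ratio) and an A-value (average log-expression). A minimal sketch of that step on synthetic intensities:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Background-corrected red/green channel intensities for 5 spots.
    R = rng.uniform(500, 5000, 5)
    G = rng.uniform(500, 5000, 5)

    M = np.log2(R) - np.log2(G)          # within-spot log-ratio
    A = 0.5 * (np.log2(R) + np.log2(G))  # average log-expression per spot

    # A log-ratio-only analysis uses M and discards A; the separate-channel
    # analysis models both, recovering the inter-spot information carried by A.
    print(np.column_stack([M, A]))
    ```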

  11. Mechanisms of action of (meth)acrylates in hemolytic activity, in vivo toxicity and dipalmitoylphosphatidylcholine (DPPC) liposomes determined using NMR spectroscopy.

    PubMed

    Fujisawa, Seiichiro; Kadoma, Yoshinori

    2012-01-01

    We investigated the quantitative structure-activity relationships between hemolytic activity (log 1/H(50)) or in vivo mouse intraperitoneal (ip) LD(50) using reported data for α,β-unsaturated carbonyl compounds such as (meth)acrylate monomers and their (13)C-NMR β-carbon chemical shift (δ). The log 1/H(50) value for methacrylates was linearly correlated with the δC(β) value. That for (meth)acrylates was linearly correlated with log P, an index of lipophilicity. The ipLD(50) for (meth)acrylates was linearly correlated with δC(β) but not with log P. For (meth)acrylates, the δC(β) value, which is dependent on the π-electron density on the β-carbon, was linearly correlated with PM3-based theoretical parameters (chemical hardness, η; electronegativity, χ; electrophilicity, ω), whereas log P was linearly correlated with heat of formation (HF). Also, the interaction between (meth)acrylates and DPPC liposomes in cell membrane molecular models was investigated using (1)H-NMR spectroscopy and differential scanning calorimetry (DSC). The log 1/H(50) value was related to the difference in chemical shift (ΔδHa) (Ha: H (trans) attached to the β-carbon) between the free monomer and the DPPC liposome-bound monomer. Monomer-induced DSC phase transition properties were related to HF for monomers. NMR chemical shifts may represent a valuable parameter for investigating the biological mechanisms of action of (meth)acrylates.

  12. Mechanisms of Action of (Meth)acrylates in Hemolytic Activity, in Vivo Toxicity and Dipalmitoylphosphatidylcholine (DPPC) Liposomes Determined Using NMR Spectroscopy

    PubMed Central

    Fujisawa, Seiichiro; Kadoma, Yoshinori

    2012-01-01

    We investigated the quantitative structure-activity relationships between hemolytic activity (log 1/H50) or in vivo mouse intraperitoneal (ip) LD50 using reported data for α,β-unsaturated carbonyl compounds such as (meth)acrylate monomers and their 13C-NMR β-carbon chemical shift (δ). The log 1/H50 value for methacrylates was linearly correlated with the δCβ value. That for (meth)acrylates was linearly correlated with log P, an index of lipophilicity. The ipLD50 for (meth)acrylates was linearly correlated with δCβ but not with log P. For (meth)acrylates, the δCβ value, which is dependent on the π-electron density on the β-carbon, was linearly correlated with PM3-based theoretical parameters (chemical hardness, η; electronegativity, χ; electrophilicity, ω), whereas log P was linearly correlated with heat of formation (HF). Also, the interaction between (meth)acrylates and DPPC liposomes in cell membrane molecular models was investigated using 1H-NMR spectroscopy and differential scanning calorimetry (DSC). The log 1/H50 value was related to the difference in chemical shift (ΔδHa) (Ha: H (trans) attached to the β-carbon) between the free monomer and the DPPC liposome-bound monomer. Monomer-induced DSC phase transition properties were related to HF for monomers. NMR chemical shifts may represent a valuable parameter for investigating the biological mechanisms of action of (meth)acrylates. PMID:22312284

  13. Comparison of two indices of exposure to polycyclic aromatic hydrocarbons in a retrospective aluminium smelter cohort.

    PubMed

    Friesen, Melissa C; Demers, Paul A; Spinelli, John J; Lorenzi, Maria F; Le, Nhu D

    2007-04-01

    The association between coal tar-derived substances, a complex mixture of polycyclic aromatic hydrocarbons, and cancer is well established. However, the specific aetiological agents are unknown. To compare the dose-response relationships for two common measures of coal tar-derived substances, benzene-soluble material (BSM) and benzo(a)pyrene (BaP), and to evaluate which of these is more strongly related to the health outcomes. The study population consisted of 6423 men with ≥3 years of work experience at an aluminium smelter (1954-97). Three health outcomes identified from national mortality and cancer databases were evaluated: incidence of bladder cancer (n = 90), incidence of lung cancer (n = 147) and mortality due to acute myocardial infarction (AMI, n = 184). The shape, magnitude and precision of the dose-response relationships and cumulative exposure levels for BSM and BaP were evaluated. Two model structures were assessed, where ln(relative risk) increased with cumulative exposure (log-linear model) or with log-transformed cumulative exposure (log-log model). The BaP and BSM cumulative exposure metrics were highly correlated (r = 0.94). The increase in model precision using BaP over BSM was 14% for bladder cancer and 5% for lung cancer; no difference was observed for AMI. The log-linear BaP model provided the best fit for bladder cancer. The log-log dose-response models, where risk of disease plateaus at high exposure levels, were the best-fitting models for lung cancer and AMI. BaP and BSM were both strongly associated with bladder and lung cancer and modestly associated with AMI. Similar conclusions regarding the associations could be made regardless of the exposure metric.

  14. USING LINEAR AND POLYNOMIAL MODELS TO EXAMINE THE ENVIRONMENTAL STABILITY OF VIRUSES

    EPA Science Inventory

    The article presents the development of model equations for describing the fate of viral infectivity in environmental samples. Most of the models were based upon the use of a two-step linear regression approach. The first step employs regression of log base 10 transformed viral t...

  15. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic, originally developed for logistic GLMCCs (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2)) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2). © 2015 John Wiley & Sons Ltd/London School of Economics.

  16. Determination of the n-octanol/water partition coefficients of weakly ionizable basic compounds by reversed-phase high-performance liquid chromatography with neutral model compounds.

    PubMed

    Liang, Chao; Han, Shu-ying; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin

    2014-11-01

    A strategy to utilize neutral model compounds for lipophilicity measurement of ionizable basic compounds by reversed-phase high-performance liquid chromatography is proposed in this paper. The applicability of the novel protocol was justified by theoretical derivation. Meanwhile, the linear relationships between the logarithm of apparent n-octanol/water partition coefficients (logKow'') and the logarithm of retention factors corresponding to the 100% aqueous fraction of mobile phase (logkw) were established for a basic training set, a neutral training set and a mixed training set of these two. As proved in theory, the good linearity and external validation results indicated that the logKow''-logkw relationships obtained from a neutral model training set were always reliable regardless of mobile phase pH. Afterwards, the above relationships were adopted to determine the logKow of harmaline, a weakly dissociable alkaloid. As far as we know, this is the first report on experimental logKow data for harmaline (logKow = 2.28 ± 0.08). Introducing neutral compounds into a basic model training set or using neutral model compounds alone is recommended to measure the lipophilicity of weakly ionizable basic compounds, especially those with high hydrophobicity, for the advantages of more suitable model compound choices and convenient mobile phase pH control. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  18. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    PubMed

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of fishes is an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. The work makes three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
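
    A log-linear behavior selector of the general kind described reduces to a softmax over behavior scores. The sketch below, with an invented feature matrix and weights, shows the selection step only; it is not the authors' full algorithm.

    ```python
    import numpy as np

    def select_behavior(weights, features, rng):
        """Pick a behavior with probability proportional to exp(w . f(b))."""
        scores = features @ weights           # one row of features per behavior
        p = np.exp(scores - scores.max())     # numerically stabilized softmax
        p /= p.sum()
        return rng.choice(len(p), p=p)

    rng = np.random.default_rng(0)
    features = np.array([[1.0, 0.2],   # e.g. preying behavior
                         [0.4, 0.9],   # e.g. swarming behavior
                         [0.1, 0.5]])  # e.g. following behavior
    weights = np.array([0.8, 0.3])
    print(select_behavior(weights, features, rng))
    ```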

  19. A log-linear model approach to estimation of population size using the line-transect sampling method

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.

  20. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated from its log-linearization by a Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of the mappings specified by the model, and by estimating how closely the effects of internal and external shocks on macroeconomic indicators agree between the estimated nonlinear model and its log-linearization. On the basis of the nonlinear model, parametric control problems for economic growth and for the volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  1. QSPR study of polychlorinated diphenyl ethers by molecular electronegativity distance vector (MEDV-4).

    PubMed

    Sun, Lili; Zhou, Liping; Yu, Yu; Lan, Yukun; Li, Zhiliang

    2007-01-01

    Polychlorinated diphenyl ethers (PCDEs) have attracted increasing concern as a group of ubiquitous potential persistent organic pollutants (POPs). Using the molecular electronegativity distance vector (MEDV-4), multiple linear regression (MLR) models are developed for the sub-cooled liquid vapor pressures (P(L)), n-octanol/water partition coefficients (K(OW)) and sub-cooled liquid water solubilities (S(W,L)) of 209 PCDEs and diphenyl ether. The correlation coefficients (R) and the leave-one-out (LOO) cross-validation correlation coefficients (R(CV)) of all the 6-descriptor models for logP(L), logK(OW) and logS(W,L) exceed 0.98. Using stepwise multiple regression (SMR), descriptors are selected, yielding a 5-descriptor model for logP(L), a 4-descriptor model for logK(OW), and a 6-descriptor model for logS(W,L). All these models exhibit excellent estimation capability for the internal sample set and good predictive capability for the external sample set. The agreement between observed and estimated/predicted values is best for logP(L) (R=0.996, R(CV)=0.996), followed by logK(OW) (R=0.992, R(CV)=0.992) and logS(W,L) (R=0.983, R(CV)=0.980). With MEDV-4 descriptors, the QSPR models can be used for prediction, and the model predictions can hence extend the current database of experimental values.
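
    The modeling and validation pattern here (MLR on descriptors, reported as a fitted R and a leave-one-out R(CV)) can be sketched with scikit-learn; the descriptor matrix below is a random placeholder, not MEDV-4 values.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(2)
    X = rng.normal(size=(50, 6))               # 50 compounds x 6 descriptors
    y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=50)  # stand-in logP(L)

    model = LinearRegression().fit(X, y)
    R = np.corrcoef(y, model.predict(X))[0, 1]           # fitted correlation R

    y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    R_cv = np.corrcoef(y, y_loo)[0, 1]                   # LOO cross-validated R(CV)
    print(R, R_cv)
    ```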

  2. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  3. Model for multi-filamentary conduction in graphene/hexagonal-boron-nitride/graphene based resistive switching devices

    NASA Astrophysics Data System (ADS)

    Pan, Chengbin; Miranda, Enrique; Villena, Marco A.; Xiao, Na; Jing, Xu; Xie, Xiaoming; Wu, Tianru; Hui, Fei; Shi, Yuanyuan; Lanza, Mario

    2017-06-01

    Despite the enormous interest raised by graphene and related materials, global concern about their real usefulness in industry has grown recently, as there is a worrying scarcity of 2D-materials-based electronic devices on the market. Moreover, analytical tools capable of describing and predicting the behavior of such devices (which are necessary before facing mass production) are very scarce. In this work we synthesize a resistive random access memory (RRAM) using graphene/hexagonal-boron-nitride/graphene (G/h-BN/G) van der Waals structures, and we develop a compact model that accurately describes its functioning. The devices were fabricated using scalable methods (i.e. CVD for material growth and shadow masks for electrode patterning), and they show reproducible resistive switching (RS). The measured characteristics during the forming, set and reset processes were fitted using the model developed. The model is based on the nonlinear Landauer approach for mesoscopic conductors, in this case atomic-sized filaments formed within the 2D materials system. Besides providing excellent overall fitting results (corroborated in log-log, log-linear and linear-linear plots), the model is able to explain the cycle-to-cycle dispersion of the data in terms of the particular features of the filamentary paths, mainly the height of their confinement potential barrier.

  4. Aircraft Airframe Cost Estimation Using a Random Coefficients Model

    DTIC Science & Technology

    1979-12-01

    approach will also be used here. 2 Model Formulation Several different types of equations could be used for the basic form of the CER, such as linear ... 5) Marcotte developed several CERs for fighter aircraft airframes using the log-linear model. A plot of the residuals from the CER for recurring ... of the natural logarithm. Ordinary Least Squares The ordinary least squares procedure starts with the equation for the general linear model.

  5. Chromatographic behaviour predicts the ability of potential nootropics to permeate the blood-brain barrier.

    PubMed

    Farsa, Oldřich

    2013-01-01

    The log BB parameter is the logarithm of the ratio of a compound's equilibrium concentrations in the brain tissue versus the blood plasma. This parameter is a useful descriptor in assessing the ability of a compound to permeate the blood-brain barrier. The aim of this study was to develop a Hansch-type linear regression QSAR model that correlates the parameter log BB with the retention of drugs and other organic compounds on a reversed-phase HPLC column containing an embedded amide moiety. Retention was expressed by the capacity factor log k'. The second aim was to estimate the brain absorption of 2-(azacycloalkyl)acetamidophenoxyacetic acids, which are analogues of piracetam, nefiracetam, and meclofenoxate. Notably, these acids may be novel nootropics. Two simple regression models relating log BB and log k' were developed from an assay performed using a reversed-phase HPLC column that contained an embedded amide moiety. Both the quadratic and linear models yielded statistical parameters comparable to previously published models of log BB dependence on various structural characteristics. The models predict that four members of the substituted phenoxyacetic acid series have a strong chance of permeating the barrier and being absorbed in the brain. The results of this study show that a reversed-phase HPLC system containing an embedded amide moiety is a functional in vitro surrogate of the blood-brain barrier. These results suggest that racetam-type nootropic drugs containing a carboxylic moiety could be more poorly absorbed than analogues devoid of the carboxyl group, especially if the compounds penetrate the barrier by a simple diffusion mechanism.
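
    The two model forms compared (linear and quadratic in log k') amount to one-variable polynomial fits; a numpy sketch with invented data points:

    ```python
    import numpy as np

    log_k = np.array([-0.30, 0.05, 0.41, 0.77, 1.12, 1.48])     # capacity factors
    log_bb = np.array([-0.82, -0.45, -0.11, 0.20, 0.38, 0.49])  # invented log BB

    lin = np.polyfit(log_k, log_bb, 1)     # log BB = a*log k' + b
    quad = np.polyfit(log_k, log_bb, 2)    # quadratic alternative

    print(np.poly1d(lin))
    print(np.poly1d(quad)(0.9))            # predicted log BB for a candidate compound
    ```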

  6. Kinetics of Hydrothermal Inactivation of Endotoxins

    PubMed Central

    Li, Lixiong; Wilbur, Chris L.; Mintz, Kathryn L.

    2011-01-01

    A kinetic model was established for the inactivation of endotoxins in water at temperatures ranging from 210°C to 270°C and a pressure of 6.2 × 10(6) Pa. Data were generated using a bench scale continuous-flow reactor system to process feed water spiked with endotoxin standard (Escherichia coli O113:H10). Product water samples were collected and quantified by the Limulus amebocyte lysate assay. At 250°C, 5-log endotoxin inactivation was achieved in about 1 s of exposure, followed by a lower inactivation rate. This non-log-linear pattern is similar to reported trends in microbial survival curves. Predictions and parameters of several non-log-linear models are presented. In the fast-reaction zone (3- to 5-log reduction), the Arrhenius rate constant fits well at temperatures ranging from 120°C to 250°C on the basis of data from this work and the literature. Both biphasic and modified Weibull models are comparable in accounting for the high and low rates of inactivation in terms of prediction accuracy and the number of parameters used. A unified representation of thermal resistance curves for a 3-log reduction and a 3D value, associated with endotoxin inactivation and microbial survival respectively, is presented. PMID:21193667
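
    One of the non-log-linear forms mentioned, a Weibull survival model, can be fitted in a few lines with scipy; the time and log-reduction data below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull(t, delta, p):
        """Mafart parameterization: log10(N/N0) = -(t/delta)**p."""
        return -(t / delta) ** p

    t = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])              # exposure, s
    logred = np.array([-1.1, -2.4, -4.0, -5.0, -5.6, -6.1])   # log10(N/N0)

    (delta, p), _ = curve_fit(weibull, t, logred, p0=[1.0, 1.0],
                              bounds=(0, np.inf))
    print(delta, p)   # p < 1 reproduces the tailing seen in survival curves
    ```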

  7. The Effects of Q-Matrix Design on Classification Accuracy in the Log-Linear Cognitive Diagnosis Model.

    PubMed

    Madison, Matthew J; Bradshaw, Laine P

    2015-06-01

    Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.

  8. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    PubMed

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.

  11. On the equivalence of case-crossover and time series methods in environmental epidemiology.

    PubMed

    Lu, Yun; Zeger, Scott L

    2007-04-01

    The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.

  12. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
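
    A hedged statsmodels sketch of the two estimators being compared, fit to simulated data with a log-linear true risk model; all names and data are illustrative.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    x = rng.uniform(size=1000)
    y = rng.binomial(1, np.exp(-1.5 + 0.8 * x))   # true log-linear risk
    X = sm.add_constant(x)

    # Robust ("modified") Poisson: Poisson family plus sandwich standard errors.
    rp = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

    # Log-binomial: binomial family with a log link; may fail to converge when
    # fitted risks approach 1. (links.Log() is links.log() in older statsmodels.)
    lb = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()

    print(np.exp(rp.params[1]), np.exp(lb.params[1]))   # estimated risk ratios
    ```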

  13. Use of Log-Linear Models in Classification Problems.

    DTIC Science & Technology

    1981-12-01

    polynomials. The second example involves infant hypoxic trauma, and many cells are empty. The existence conditions are used to find a model for which estimates of cell frequencies exist and are in good agreement with the observed data. Key Words: Classification problem, log-difference models, minimum 8 ... variates define k states, which are labeled consecutively. Thus, while MB define cells in their tables by an I-vector Z, we simply take Z to be a

  15. Reliability Analysis of the Gradual Degradation of Semiconductor Devices.

    DTIC Science & Technology

    1983-07-20

    under the heading of linear models or linear statistical models.(3,4) We have not used this material in this report. Assuming catastrophic failure when ... assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF ... [flattened table of failure times T1, ..., Tn omitted] ... and are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation

  16. Low-Cost Evaluation of EO-1 Hyperion and ALI for Detection and Biophysical Characterization of Forest Logging in Amazonia (NCC5-481)

    NASA Technical Reports Server (NTRS)

    Asner, Gregory P.; Keller, Michael M.; Silva, Jose Natalino; Zweede, Johan C.; Pereira, Rodrigo, Jr.

    2002-01-01

    Major uncertainties exist regarding the rate and intensity of logging in tropical forests worldwide: these uncertainties severely limit economic, ecological, and biogeochemical analyses of these regions. Recent sawmill surveys in the Amazon region of Brazil show that the area logged is nearly equal to the total area deforested annually, but conversion of survey data to forest area, forest structural damage, and biomass estimates requires multiple assumptions about logging practices. Remote sensing could provide an independent means to monitor logging activity and to estimate the biophysical consequences of this land use. Previous studies have demonstrated that the detection of logging in Amazon forests is difficult, and no studies have developed either the quantitative physical basis or the remote sensing approaches needed to estimate the effects of various logging regimes on forest structure. A major reason for these limitations has been a lack of sufficient, well-calibrated optical satellite data, which in turn has impeded the development and use of physically-based, quantitative approaches for detection and structural characterization of forest logging regimes. We propose to use data from the EO-1 Hyperion imaging spectrometer to greatly increase our ability to estimate the presence and structural attributes of selective logging in the Amazon Basin. Our approach is based on four "biogeophysical indicators" not yet derived simultaneously from any satellite sensor: 1) green canopy leaf area index; 2) degree of shadowing; 3) presence of exposed soil; and 4) non-photosynthetic vegetation material. Airborne, field and modeling studies have shown that the optical reflectance continuum (400-2500 nm) contains sufficient information to derive estimates of each of these indicators. Our ongoing studies in the eastern Amazon basin also suggest that these four indicators are sensitive to logging intensity. Satellite-based estimates of these indicators should provide a means to quantify both the presence and degree of structural disturbance caused by various logging regimes. Our quantitative assessment of Hyperion hyperspectral and ALI multispectral data for the detection and structural characterization of selective logging in Amazonia will benefit from data collected through an ongoing project run by the Tropical Forest Foundation, within which we have developed a study of the canopy and landscape biophysics of conventional and reduced-impact logging. We will add to our base of forest structural information in concert with an EO-1 overpass. Using a photon transport model inversion technique that accounts for non-linear mixing of the four biogeophysical indicators, we will estimate these parameters across a gradient of selective logging intensity provided by conventional and reduced-impact logging sites. We will also compare our physically-based approach to both conventional (e.g., NDVI) and novel (e.g., SWIR-channel) vegetation indices as well as to linear mixture modeling methods. We will cross-compare these approaches using the Hyperion and ALI imagers to determine the strengths and limitations of these two sensors for applications of forest biophysics. This effort will yield the first physically-based, quantitative analysis of the detection and intensity of selective logging in Amazonia, comparing hyperspectral and improved multispectral approaches as well as inverse modeling, linear mixture modeling, and vegetation index techniques.

  17. Permeability-porosity relationships in sedimentary rocks

    USGS Publications Warehouse

    Nelson, Philip H.

    1994-01-01

    In many consolidated sandstone and carbonate formations, plots of core data show that the logarithm of permeability (k) is often linearly proportional to porosity (φ). The slope, intercept, and degree of scatter of these log(k)-φ trends vary from formation to formation, and these variations are attributed to differences in initial grain size and sorting, diagenetic history, and compaction history. In unconsolidated sands, better sorting systematically increases both permeability and porosity. In sands and sandstones, an increase in gravel and coarse grain size content causes k to increase even while decreasing φ. Diagenetic minerals in the pore space of sandstones, such as cement and some clay types, tend to decrease log(k) proportionately as φ decreases. Models to predict permeability from porosity and other measurable rock parameters fall into three classes based on either grain, surface area, or pore dimension considerations. (Models that directly incorporate well log measurements but have no particular theoretical underpinnings form a fourth class.) Grain-based models show permeability proportional to the square of grain size times porosity raised to (roughly) the fifth power, with grain sorting as an additional parameter. Surface-area models show permeability proportional to the inverse square of pore surface area times porosity raised to (roughly) the fourth power; measures of surface area include irreducible water saturation and nuclear magnetic resonance. Pore-dimension models show permeability proportional to the square of a pore dimension times porosity raised to a power of (roughly) two and produce curves of constant pore size that transgress the linear data trends on a log(k)-φ plot. The pore dimension is obtained from mercury injection measurements and is interpreted as the pore opening size of some interconnected fraction of the pore system. The linear log(k)-φ data trends cut the curves of constant pore size from the pore-dimension models, which shows that porosity reduction is always accompanied by a reduction in characteristic pore size. The high powers of porosity of the grain-based and surface-area models are required to compensate for the inclusion of the small end of the pore size spectrum.
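
    The basic log-linear permeability-porosity trend is a one-line fit; a numpy sketch with invented core-plug values (k in millidarcies, φ as a fraction):

    ```python
    import numpy as np

    phi = np.array([0.08, 0.12, 0.15, 0.19, 0.23, 0.27])      # porosity
    k = np.array([0.4, 3.1, 12.0, 95.0, 410.0, 2100.0])       # permeability, mD

    slope, intercept = np.polyfit(phi, np.log10(k), 1)
    print(f"log10(k) = {slope:.1f}*phi + {intercept:.1f}")
    # Slope and intercept vary by formation with grain size, sorting, diagenesis.
    ```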

  18. Decomposition and model selection for large contingency tables.

    PubMed

    Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter

    2010-04-01

    Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables, some or all of which may have many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.
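
    The standard computational route for a log-linear fit on a (small) contingency table is a Poisson GLM on the cell counts; a statsmodels sketch with invented 2x2 counts:

    ```python
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "count": [35, 15, 20, 30],
        "a": ["low", "low", "high", "high"],
        "b": ["neg", "pos", "neg", "pos"],
    })

    # Independence model log(mu) = A + B vs saturated model with the A:B term.
    indep = smf.glm("count ~ a + b", data=df, family=sm.families.Poisson()).fit()
    sat = smf.glm("count ~ a * b", data=df, family=sm.families.Poisson()).fit()
    print(indep.deviance)   # deviance against the saturated fit tests the interaction
    ```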

  19. The Mid-Canada Radar Line and First Nations' people of the James Bay region, Canada: an evaluation using log-linear contingency modelling to analyze organochlorine frequency data.

    PubMed

    Tsuji, Leonard J S; Wainman, Bruce C; Martin, Ian D; Weber, Jean-Philippe; Sutherland, Celine; Elliott, J Richard; Nieboer, Evert

    2005-09-01

    Abandoned radar line stations in the North American arctic and sub-arctic regions are point sources of contamination, especially for PCBs. Few data exist with respect to human body burden of organochlorines (OCs) in residents of communities located in close proximity to these radar line sites. We compared plasma OC concentration (unadjusted for total lipids) frequency distribution data using log-linear contingency modelling for Fort Albany First Nation, the site of an abandoned Mid-Canada Radar Line station, and two comparison populations (the neighbouring community of Kashechewan First Nation without such a radar installation, and Hamilton, a city in southern Ontario, Canada). This type of analysis is important as it allows for an initial investigation of contaminant data without imputing any values. The two-state log-linear model (employing both non-detectable and detectable concentration frequencies and applicable to PCB congeners 28 and 105 and cis-nonachlor) and the four-state log-linear model (using quartile concentration frequencies for Aroclor 1260, PCB congeners [99,118,138,153,156,170,180,183,187], beta-HCH, p,p'-DDT +p,p'-DDE, HCB, mirex, oxychlordane, and trans-nonachlor) revealed that the effects of subject gender were inconsequential. Significant differences (p < 0.05) between the groups examined were attributable to the effect of location on the frequency of detection of OCs or on their differential distribution among the concentration quartiles. In general, people from Hamilton had higher frequencies of non-detections and of concentrations in the first quartile (p < 0.05) for most OCs compared to people from Fort Albany and Kashechewan (who consume a traditional diet of wild meats that does not include marine mammals). An unexpected finding was that, for Kashechewan males, the frequency of many OCs was significantly higher (p < 0.05) in the 4th concentration quartile than that predicted by the four-state log-linear model, but significantly lower than expected in the 1st quartile for beta-HCH. The levels of PCBs found for women in Fort Albany and Kashechewan were greater than those reported for Dene (First Nation people) and Métis (mixed heritage) of the western Northwest Territories (NWT) who did not consume marine mammals, and for Inuit living in the central NWT (occasional consumers of marine mammals). Moreover, the levels of total p,p'-DDT were greater for Fort Albany and Kashechewan women compared to these same aboriginal groups.

  20. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  1. UV-C light inactivation and modeling kinetics of Alicyclobacillus acidoterrestris spores in white grape and apple juices.

    PubMed

    Baysal, Ayse Handan; Molva, Celenk; Unluturk, Sevcan

    2013-09-16

    In the present study, the effect of short wave ultraviolet light (UV-C) on the inactivation of Alicyclobacillus acidoterrestris DSM 3922 spores in commercial pasteurized white grape and apple juices was investigated. The inactivation of A. acidoterrestris spores in juices was examined by evaluating the effects of UV light intensity (1.31, 0.71 and 0.38 mW/cm²) and exposure time (0, 3, 5, 7, 10, 12 and 15 min) at constant depth (0.15 cm). The best reduction (5.5-log) was achieved in grape juice when the UV intensity was 1.31 mW/cm². The maximum inactivation was approximately 2-log CFU/mL in apple juice under the same conditions. The results showed that first-order kinetics were not suitable for the estimation of spore inactivation in grape juice treated with UV-light. Since tailing was observed in the survival curves, the log-linear plus tail and Weibull models were compared. The results showed that the log-linear plus tail model was satisfactorily fitted to estimate the reductions. As a non-thermal technology, UV-C treatment could be an alternative to thermal treatment for grape juices or combined with other preservation methods for the pasteurization of apple juice. © 2013 Elsevier B.V. All rights reserved.
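
    The preferred log-linear-plus-tail survival shape (a Geeraerd-type model) can be sketched with scipy on hypothetical survivor counts; all values below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def loglin_tail(t, k, log_nres, log_n0=6.0):
        """log10 N(t): first-order decay toward a resistant residual level."""
        return np.log10((10**log_n0 - 10**log_nres) * np.exp(-k * t) + 10**log_nres)

    t = np.array([0, 3, 5, 7, 10, 12, 15], dtype=float)     # minutes of UV-C
    logN = np.array([6.0, 4.1, 2.9, 1.6, 0.9, 0.7, 0.6])    # log10 CFU/mL

    (k, log_nres), _ = curve_fit(loglin_tail, t, logN, p0=[0.5, 1.0],
                                 bounds=([0.0, 0.0], [5.0, 5.9]))
    print(k, log_nres)   # decay rate and the tail level causing the plateau
    ```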

  2. QSAR models for predicting octanol/water and organic carbon/water partition coefficients of polychlorinated biphenyls.

    PubMed

    Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J

    2016-04-01

    Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints such as octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs) are easier to predict and various models have been already developed. In this paper, two different methods, which are multiple linear regression based on the descriptors generated using Dragon software and hologram quantitative structure-activity relationships, were employed to predict suspended particulate matter (SPM) derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performances of all these models were compared with EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of PCBs.

  3. Predicting outgrowth and inactivation of Clostridium perfringens in meat products during low temperature long time heat treatment.

    PubMed

    Duan, Zhi; Hansen, Terese Holst; Hansen, Tina Beck; Dalgaard, Paw; Knøchel, Susanne

    2016-08-02

    With low temperature long time (LTLT) cooking it can take hours for meat to reach a final core temperature above 53°C, and germination followed by growth of Clostridium perfringens is a concern. Available and new growth data in meats, including 154 lag times (tlag), 224 maximum specific growth rates (μmax) and 25 maximum population densities (Nmax), were used to develop a model to predict growth of C. perfringens during the coming-up time of LTLT cooking. New data were generated in 26 challenge tests with chicken (pH 6.8) and pork (pH 5.6) at two different slowly increasing temperature (SIT) profiles (10°C to 53°C) followed by 53°C, in up to 30 h in total. Three inoculum types were studied, including vegetative cells, non-heated spores and heat-activated (75°C, 20 min) spores of C. perfringens strain 790-94. Concentrations of vegetative cells in chicken increased 2 to 3 log CFU/g during the SIT profiles. Similar results were found for non-heated and heated spores in chicken, whereas in pork C. perfringens 790-94 increased less than 1 log CFU/g. At 53°C, C. perfringens 790-94 was inactivated log-linearly. Observed and predicted concentrations of C. perfringens at the time when 53°C was reached (log(N53)) were used to evaluate the new growth model and three available predictive models previously published for C. perfringens growth during cooling rather than during SIT profiles. Model performance was evaluated by using mean deviation (MD), mean absolute deviation (MAD) and the acceptable simulation zone (ASZ) approach with a zone of ±0.5 log CFU/g. The new model showed the best performance, with MD = 0.27 log CFU/g, MAD = 0.66 log CFU/g and ASZ = 67%. The two growth models that performed best were used together with a log-linear inactivation model and D53-values from the present study to simulate the behaviour of C. perfringens under the fast and slow SIT profiles investigated in the present study. Observed and predicted concentrations were compared using a new fail-safe acceptable zone (FSAZ) method. FSAZ was defined as the predicted concentration of C. perfringens plus 0.5 log CFU/g. If at least 85% of the observed log-counts were below the FSAZ, the model was considered fail-safe. The two models showed similar performance, but none of them performed satisfactorily for all conditions. It is recommended to use the models without a lag phase until more precise lag time models become available. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Lidar-Based Estimates of Above-Ground Biomass in the Continental US and Mexico Using Ground, Airborne, and Satellite Observations

    NASA Technical Reports Server (NTRS)

    Nelson, Ross; Margolis, Hank; Montesano, Paul; Sun, Guoqing; Cook, Bruce; Corp, Larry; Andersen, Hans-Erik; DeJong, Ben; Pellat, Fernando Paz; Fickel, Thaddeus

    2016-01-01

    Existing national forest inventory plots, an airborne lidar scanning (ALS) system, and a space profiling lidar system (ICESat-GLAS) are used to generate circa 2005 estimates of total aboveground dry biomass (AGB) in forest strata, by state, in the continental United States (CONUS) and Mexico. The airborne lidar is used to link ground observations of AGB to space lidar measurements. Two sets of models are generated, the first relating ground estimates of AGB to airborne laser scanning (ALS) measurements and the second set relating ALS estimates of AGB (generated using the first model set) to GLAS measurements. GLAS, then, is used as a sampling tool within a hybrid estimation framework to generate stratum-, state-, and national-level AGB estimates. A two-phase variance estimator is employed to quantify GLAS sampling variability and, additively, ALS-GLAS model variability in this current, three-phase (ground-ALS-space lidar) study. The model variance component characterizes the variability of the regression coefficients used to predict ALS-based estimates of biomass as a function of GLAS measurements. Three different types of predictive models are considered in CONUS to determine which produced biomass totals closest to ground-based national forest inventory estimates - (1) linear (LIN), (2) linear-no-intercept (LNI), and (3) log-linear. For CONUS at the national level, the GLAS LNI model estimate (23.95 +/- 0.45 Gt AGB) agreed most closely with the US national forest inventory ground estimate, 24.17 +/- 0.06 Gt, i.e., within 1%. The national biomass total based on linear ground-ALS and ALS-GLAS models (25.87 +/- 0.49 Gt) overestimated the national ground-based estimate by 7.5%. The comparable log-linear model result (63.29 +/- 1.36 Gt) overestimated ground results by 261%. All three national biomass GLAS estimates, LIN, LNI, and log-linear, are based on 241,718 pulses collected on 230 orbits. The US national forest inventory (ground) estimates are based on 119,414 ground plots. At the US state level, the average absolute value of the deviation of LNI GLAS estimates from the comparable ground estimate of total biomass was 18.8% (range: Oregon, -40.8% to North Dakota, 128.6%). Log-linear models produced gross overestimates in the continental US, i.e., ~2.6x, and the use of this model to predict regional biomass using GLAS data in temperate, western hemisphere forests is not appropriate. The best model form, LNI, is used to produce biomass estimates in Mexico. The average biomass density in Mexican forests is 53.10 +/- 0.88 t/ha, and the total biomass for the country, given a total forest area of 688,096 sq km, is 3.65 +/- 0.06 Gt. In Mexico, our GLAS biomass total underestimated a 2005 FAO estimate (4.152 Gt) by 12% and overestimated a 2007/8 radar study's figure (3.06 Gt) by 19%.
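
    The best-performing form, linear-no-intercept (LNI), is ordinary least squares through the origin; a numpy sketch with placeholder ALS/GLAS values:

    ```python
    import numpy as np

    glas_metric = np.array([4.2, 7.9, 11.5, 15.8, 20.3])      # GLAS measurement
    als_agb = np.array([38.0, 72.0, 101.0, 149.0, 188.0])     # ALS AGB, t/ha

    # Least squares with no intercept term: AGB = b * metric.
    b, *_ = np.linalg.lstsq(glas_metric[:, None], als_agb, rcond=None)
    print(b[0])
    ```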

  5. Generating log-normal mock catalog of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of the galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree precisely with the input. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space; that is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
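
    The core sampling step described (exponentiate a Gaussian field, then Poisson-sample galaxies) is compact; here is a toy 1-D numpy sketch, whereas the released code works in 3-D with a target power spectrum and a velocity field.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    sigma_G = 0.6
    g = rng.normal(0.0, sigma_G, size=1024)        # Gaussian field on a grid
    delta = np.exp(g - sigma_G**2 / 2) - 1.0       # log-normal overdensity, mean 0

    nbar = 5.0                                     # mean galaxies per cell
    counts = rng.poisson(nbar * (1.0 + delta))     # Poisson-sample the galaxies
    print(counts.mean(), counts.var())             # super-Poisson variance from delta
    ```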

  6. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    PubMed Central

    Sánchez-Pérez, Inma; Ibern, Pere; Coderch, Jordi; Inoriza, José María

    2016-01-01

    Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia-Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals. PMID:28316542
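
    The two candidate forms named in the Results (a linear model on log costs versus a GLM with a log link) can be sketched with statsmodels on simulated skewed costs; the Gamma family below is an assumption made for illustration, not the paper's final choice.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    age = rng.uniform(20, 80, size=800)
    X = sm.add_constant(age)
    cost = np.exp(2.0 + 0.03 * age + rng.normal(scale=0.8, size=800))

    ols_log = sm.OLS(np.log(cost), X).fit()         # linear model on log(cost)
    glm_log = sm.GLM(cost, X, family=sm.families.Gamma(
        link=sm.families.links.Log())).fit()        # GLM with a log link
    print(ols_log.params[1], glm_log.params[1])
    ```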

  7. Regional variability among nonlinear chlorophyll-phosphorus relationships in lakes

    USGS Publications Warehouse

    Filstrup, Christopher T.; Wagner, Tyler; Soranno, Patricia A.; Stanley, Emily H.; Stow, Craig A.; Webster, Katherine E.; Downing, John A.

    2014-01-01

    The relationship between chlorophyll a (Chl a) and total phosphorus (TP) is a fundamental relationship in lakes that reflects multiple aspects of ecosystem function and is also used in the regulation and management of inland waters. The exact form of this relationship has substantial implications on its meaning and its use. We assembled a spatially extensive data set to examine whether nonlinear models are a better fit for Chl a—TP relationships than traditional log-linear models, whether there were regional differences in the form of the relationships, and, if so, which regional factors were related to these differences. We analyzed a data set from 2105 temperate lakes across 35 ecoregions by fitting and comparing two different nonlinear models and one log-linear model. The two nonlinear models fit the data better than the log-linear model. In addition, the parameters for the best-fitting model varied among regions: the maximum and lower Chl aasymptotes were positively and negatively related to percent regional pasture land use, respectively, and the rate at which chlorophyll increased with TP was negatively related to percent regional wetland cover. Lakes in regions with more pasture fields had higher maximum chlorophyll concentrations at high TP concentrations but lower minimum chlorophyll concentrations at low TP concentrations. Lakes in regions with less wetland cover showed a steeper Chl a—TP relationship than wetland-rich regions. Interpretation of Chl a—TP relationships depends on regional differences, and theory and management based on a monolithic relationship may be inaccurate.

  8. Statistical method to compare massive parallel sequencing pipelines.

    PubMed

    Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P

    2017-03-01

    Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted, treating the biological variability as a random effect. Among the 390,339 base-pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Similar trends were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.

  9. Response Strength in Extreme Multiple Schedules

    PubMed Central

    McLean, Anthony P; Grace, Randolph C; Nevin, John A

    2012-01-01

    Four pigeons were trained in a series of two-component multiple schedules. Reinforcers were scheduled with random-interval schedules. The ratio of arranged reinforcer rates in the two components was varied over 4 log units, a much wider range than previously studied. When performance appeared stable, prefeeding tests were conducted to assess resistance to change. Contrary to the generalized matching law, logarithms of response ratios in the two components were not a linear function of log reinforcer ratios, implying a failure of parameter invariance. Over a 2 log unit range, the function appeared linear and indicated undermatching, but in conditions with more extreme reinforcer ratios, approximate matching was observed. A model suggested by McLean (1991), originally for local contrast, predicts these changes in sensitivity to reinforcer ratios somewhat better than models by Herrnstein (1970) and by Williams and Wixted (1986). Prefeeding tests of resistance to change were conducted at each reinforcer ratio, and relative resistance to change was also a nonlinear function of log reinforcer ratios, again contrary to conclusions from previous work. Instead, the function suggests that resistance to change in a component may be determined partly by the rate of reinforcement and partly by the ratio of reinforcers to responses. PMID:22287804
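
    The generalized matching law referenced here is a straight-line fit on log-log coordinates, log(B1/B2) = a*log(R1/R2) + log b; the paper's point is that a single (a, b) pair did not hold over the full 4 log unit range. A numpy sketch with invented session data:

    ```python
    import numpy as np

    log_rft = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])    # log reinforcer ratios
    log_resp = np.array([-1.5, -0.8, 0.0, 0.9, 1.6])   # log response ratios

    a, log_b = np.polyfit(log_rft, log_resp, 1)        # sensitivity a, bias log b
    print(a, log_b)                                    # a < 1 indicates undermatching
    ```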

  10. ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.

    PubMed

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.
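
    A one-off elastic-net-penalized Cox fit (not the exact solution path the paper derives) can be sketched with the lifelines package, assuming its bundled example data set:

    ```python
    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    df = load_rossi()
    # penalizer scales the penalty; l1_ratio=0.5 mixes L1 and L2 (elastic net).
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
    cph.fit(df, duration_col="week", event_col="arrest")
    cph.print_summary()
    ```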

  11. Improving linear accelerator service response with a real- time electronic event reporting system.

    PubMed

    Hoisak, Jeremy D P; Pawlicki, Todd; Kim, Gwe-Ya; Fletcher, Richard; Moore, Kevin L

    2014-09-08

    To track linear accelerator performance issues, an online event recording system was developed in-house for use by therapists and physicists to log the details of technical problems arising on our institution's four linear accelerators. In use since October 2010, the system was designed so that all clinical physicists would receive email notification when an event was logged. Starting in October 2012, we initiated a pilot project in collaboration with our linear accelerator vendor to explore a new model of service and support, in which event notifications were also sent electronically directly to dedicated engineers at the vendor's technical help desk, who then initiated a response to technical issues. Previously, technical issues were reported by telephone to the vendor's call center, which then disseminated information and coordinated a response with the Technical Support help desk and local service engineers. The purpose of this work was to investigate the improvements to clinical operations resulting from this new service model. The new and old service models were quantitatively compared by reviewing event logs and the oncology information system database in the nine months prior to and after initiation of the project. Here, we focus on events that resulted in an inoperative linear accelerator ("down" machine). Machine downtime, vendor response time, treatment cancellations, and event resolution were evaluated and compared over two equivalent time periods. In 389 clinical days, there were 119 machine-down events: 59 events before and 60 after introduction of the new model. In the new model, median time to service response decreased from 45 to 8 min, service engineer dispatch time decreased 44%, downtime per event decreased from 45 to 20 min, and treatment cancellations decreased 68%. The decreased vendor response time and reduced number of on-site visits by a service engineer resulted in decreased downtime and decreased patient treatment cancellations.

  12. Three-parameter modeling of the soil sorption of acetanilide and triazine herbicide derivatives.

    PubMed

    Freitas, Mirlaine R; Matias, Stella V B G; Macedo, Renato L G; Freitas, Matheus P; Venturin, Nelson

    2014-02-01

    Herbicides have widely variable toxicity and many of them are persistent soil contaminants. The acetanilide and triazine families of herbicides have widespread use, but there is increasing interest in developing new herbicides to increase effectiveness and diminish environmental hazard. The environmental risk of new herbicides can be assessed by estimating their soil sorption (logKoc), which is usually correlated to the octanol/water partition coefficient (logKow). However, earlier findings have shown that this correlation is not valid for some acetanilide and triazine herbicides. Thus, easily accessible quantitative structure-property relationship models are required to predict logKoc for analogues of these compounds. The octanol/water partition coefficient, molecular weight and molecular volume were calculated and then regressed against logKoc for two series of acetanilide and triazine herbicides using multiple linear regression, resulting in predictive and validated models.

  13. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
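
    As one concrete example of the seven curves compared, the Wood model is y(t) = a*t^b*exp(-c*t); fitting it to hypothetical monthly FPR records with scipy (fixed effects only, not the full non-linear mixed model used in the study):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Wood lactation curve: y = a * t**b * exp(-c * t)."""
        return a * t**b * np.exp(-c * t)

    t = np.arange(1, 11, dtype=float)     # test month 1..10
    fpr = np.array([1.35, 1.22, 1.14, 1.10, 1.09, 1.10, 1.12, 1.15, 1.19, 1.24])

    (a, b, c), _ = curve_fit(wood, t, fpr, p0=[1.3, -0.05, -0.01])
    print(a, b, c)   # b, c < 0 gives the U-shaped FPR trajectory in these data
    ```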

  14. Model evaluation of plant metal content and biomass yield for the phytoextraction of heavy metals by switchgrass.

    PubMed

    Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei

    2012-06-01

    To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for phytoremediation of heavy-metal-contaminated sites. In the present study, linear and exponential decay models are more suitable for presenting the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot⁻¹, respectively. The log normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot⁻¹. In addition, the exponential decay and log normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log normal models, were about 358 and 254 μg pot⁻¹, respectively. To meet the maximum removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg⁻¹, 266 mg kg⁻¹, and 3022 and 5000 mg kg⁻¹, respectively. Due to the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ.

  15. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    PubMed Central

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011

  16. Characterizing Sleep Structure Using the Hypnogram

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian; Bandeen-Roche, Karen; Punjabi, Naresh M.

    2008-01-01

    Objectives: Research on the effects of sleep-disordered breathing (SDB) on sleep structure has traditionally been based on composite sleep-stage summaries. The primary objective of this investigation was to demonstrate the utility of log-linear and multistate analysis of the sleep hypnogram in evaluating differences in nocturnal sleep structure in subjects with and without SDB. Methods: A community-based sample of middle-aged and older adults with and without SDB matched on age, sex, race, and body mass index was identified from the Sleep Heart Health Study. Sleep was assessed with home polysomnography and categorized into rapid eye movement (REM) and non-REM (NREM) sleep. Log-linear and multistate survival analysis models were used to quantify the frequency and hazard rates of transitioning, respectively, between wakefulness, NREM sleep, and REM sleep. Results: Whereas composite sleep-stage summaries were similar between the two groups, subjects with SDB had higher frequencies and hazard rates for transitioning between the three states. Specifically, log-linear models showed that subjects with SDB had more wake-to-NREM sleep and NREM sleep-to-wake transitions, compared with subjects without SDB. Multistate survival models revealed that subjects with SDB transitioned more quickly from wake-to-NREM sleep and NREM sleep-to-wake than did subjects without SDB. Conclusions: The description of sleep continuity with log-linear and multistate analysis of the sleep hypnogram suggests that such methods can identify differences in sleep structure that are not evident with conventional sleep-stage summaries. Detailed characterization of nocturnal sleep evolution with event history methods provides additional means for testing hypotheses on how specific conditions impact sleep continuity and whether sleep disruption is associated with adverse health outcomes. Citation: Swihart BJ; Caffo B; Bandeen-Roche K; Punjabi NM. Characterizing sleep structure using the hypnogram. J Clin Sleep Med 2008;4(4):349–355. PMID:18763427

  17. Demonstration of the Web-based Interspecies Correlation Estimation (Web-ICE) modeling application

    EPA Science Inventory

    The Web-based Interspecies Correlation Estimation (Web-ICE) modeling application is available to the risk assessment community through a user-friendly internet platform (http://epa.gov/ceampubl/fchain/webice/). ICE models are log-linear least square regressions that predict acute...

  18. Identifying Plant Part Composition of Forest Logging Residue Using Infrared Spectral Data and Linear Discriminant Analysis

    PubMed Central

    Acquah, Gifty E.; Via, Brian K.; Billor, Nedret; Fasina, Oladiran O.; Eckhardt, Lori G.

    2016-01-01

    As new markets, technologies and economies evolve in the low carbon bioeconomy, forest logging residue, a largely untapped renewable resource, will play a vital role. The feedstock can, however, be variable depending on plant species and plant part component. This heterogeneity can influence the physical, chemical and thermochemical properties of the material, and thus the final yield and quality of products. Although it is challenging to control the compositional variability of a batch of feedstock, it is feasible to monitor this heterogeneity and make the necessary changes in process parameters. Such a system would be a first step towards optimization, quality assurance and cost-effectiveness of processes in the emerging biofuel/chemical industry. The objective of this study was therefore to qualitatively classify forest logging residue made up of different plant parts using both near infrared spectroscopy (NIRS) and Fourier transform infrared spectroscopy (FTIRS) together with linear discriminant analysis (LDA). Forest logging residue harvested from several Pinus taeda (loblolly pine) plantations in Alabama, USA, was classified into three plant part components: clean wood, wood and bark, and slash (i.e., limbs and foliage). Five-fold cross-validated linear discriminant functions had classification accuracies of over 96% for both NIRS and FTIRS based models. An extra factor/principal component (PC) was, however, needed to achieve this in FTIRS modeling. Analysis of factor loadings of both NIR and FTIR spectra showed that the statistically different amounts of cellulose in the three plant part components of logging residue contributed to their initial separation. This study demonstrated that NIR or FTIR spectroscopy coupled with PCA and LDA has the potential to be used as a high throughput tool in classifying the plant part makeup of a batch of forest logging residue feedstock. Thus, NIR/FTIR could be employed as a tool to rapidly probe/monitor the variability of forest biomass so that the appropriate online adjustments to parameters can be made in time to ensure process optimization and product quality. PMID:27618901
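
    A hedged sketch of the cross-validated PCA-plus-LDA workflow with scikit-learn; the spectra below are random placeholders standing in for NIR/FTIR absorbance vectors, with class means shifted slightly so the three plant-part classes are separable:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_samples, n_wavelengths = 90, 400
        X = rng.normal(size=(n_samples, n_wavelengths))   # fake spectra
        y = np.repeat(["clean wood", "wood+bark", "slash"], 30)
        X[y == "wood+bark"] += 0.1     # separate the class means a little
        X[y == "slash"] -= 0.1

        model = make_pipeline(PCA(n_components=10),
                              LinearDiscriminantAnalysis())
        scores = cross_val_score(model, X, y, cv=5)   # 5-fold CV accuracy
        print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))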

  19. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

    ERIC Educational Resources Information Center

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-01-01

    Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…

  20. A FORTRAN program for multivariate survival analysis on the personal computer.

    PubMed

    Mulder, P G

    1988-01-01

    In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained by the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.

  1. Generating log-normal mock catalog of galaxies in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
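
    A drastically simplified one-dimensional illustration of the log-normal + Poisson sampling step, assuming a plain Gaussian field; the public code additionally imposes a target power spectrum and computes velocities from the linearised continuity equation, which this sketch omits:

        import numpy as np

        rng = np.random.default_rng(42)
        n_cells, nbar = 1024, 5.0         # grid size, mean galaxies/cell

        g = rng.normal(0.0, 0.6, n_cells)        # Gaussian field
        # exponentiate and de-bias so the overdensity has mean ~0
        delta = np.exp(g - g.var() / 2.0) - 1.0
        counts = rng.poisson(nbar * (1.0 + delta).clip(min=0.0))

        print("mean count per cell:", counts.mean())
        print("count variance vs Poisson expectation:", counts.var(), nbar)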

  2. Linear Equations with the Euler Totient Function

    DTIC Science & Technology

    2007-02-13

    Luca, Florian; Stănică, Pantelimon. ...of positive integers n such that φ(n) = φ(n+1), and that the set of Phibonacci numbers is A(1,1,−1) + 2. Theorem 2.1. Let C(t, a) = t³ log H(a). Then the estimate #A_a(x) ≪ C(t, a) · x · (log log log x)/√(log log x) holds uniformly in a and 1 ≤ t < y.

  3. Bayesian Model Comparison for the Order Restricted RC Association Model

    ERIC Educational Resources Information Center

    Iliopoulos, G.; Kateri, M.; Ntzoufras, I.

    2009-01-01

    Association models constitute an attractive alternative to the usual log-linear models for modeling the dependence between classification variables. They impose special structure on the underlying association by assigning scores on the levels of each classification variable, which can be fixed or parametric. Under the general row-column (RC)…

  4. The Umov effect in application to an optically thin two-component cloud of cosmic dust

    NASA Astrophysics Data System (ADS)

    Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy

    2018-04-01

    The Umov effect is an inverse correlation between linear polarization of the sunlight scattered by an object and its geometric albedo. The Umov effect has been observed in particulate surfaces, such as planetary regoliths, and recently it also was found in single-scattering small dust particles. Using numerical modeling, we study the Umov effect in a two-component mixture of small irregularly shaped particles. Such a complex chemical composition is suggested in cometary comae and other types of optically thin clouds of cosmic dust. We find that the two-component mixtures of small particles also reveal the Umov effect regardless of the chemical composition of their end-member components. The interrelation between log(Pmax) and log(A) in a two-component mixture of small irregularly shaped particles appears either in a straight linear form or in a slightly curved form. This curvature tends to decrease while the index n in a power-law size distribution r-n grows; at n > 2.5, the log(Pmax)-log(A) diagrams are almost straight linear in appearance. The curvature also noticeably decreases with the packing density of constituent material in irregularly shaped particles forming the mixture. That such a relation exists suggests the Umov effect may also be observed in more complex mixtures.

  5. The Umov effect in application to an optically thin two-component cloud of cosmic dust

    NASA Astrophysics Data System (ADS)

    Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy

    2018-07-01

    The Umov effect is an inverse correlation between linear polarization of the sunlight scattered by an object and its geometric albedo. The Umov effect has been observed in particulate surfaces, such as planetary regoliths, and recently it also was found in single-scattering small dust particles. Using numerical modelling, we study the Umov effect in a two-component mixture of small irregularly shaped particles. Such a complex chemical composition is suggested in cometary comae and other types of optically thin clouds of cosmic dust. We find that the two-component mixtures of small particles also reveal the Umov effect regardless of the chemical composition of their end-member components. The interrelation between log(Pmax) and log(A) in a two-component mixture of small irregularly shaped particles appears either in a straight linear form or in a slightly curved form. This curvature tends to decrease while the index n in a power-law size distribution r-n grows; at n > 2.5, the log(Pmax)-log(A) diagrams are almost straight linear in appearance. The curvature also noticeably decreases with the packing density of constituent material in irregularly shaped particles forming the mixture. That such a relation exists suggests the Umov effect may also be observed in more complex mixtures.

  6. Prediction of passive blood-brain partitioning: straightforward and effective classification models based on in silico derived physicochemical descriptors

    PubMed Central

    Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano

    2010-01-01

    The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of log BB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (log P), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental log BB data had been determined in vivo. In particular, since molecules with log BB > 0.3 cross the blood-brain barrier (BBB) readily while molecules with log BB < −1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the log BB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. PMID:20427217

  7. Prediction of passive blood-brain partitioning: straightforward and effective classification models based on in silico derived physicochemical descriptors.

    PubMed

    Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano

    2010-06-01

    The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of logBB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (logP), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental logBB data had been determined in vivo. In particular, since molecules with logBB>0.3 cross the blood-brain barrier (BBB) readily while molecules with logBB<-1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the logBB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings.

  8. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    PubMed Central

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

    Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  9. In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data

    ERIC Educational Resources Information Center

    Yi, Yeon-Sook

    2017-01-01

    This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…

  10. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  11. Factors Influencing M.S.W. Students' Interest in Clinical Practice

    ERIC Educational Resources Information Center

    Perry, Robin

    2009-01-01

    This study utilizes linear and log-linear stochastic models to examine the impact that a variety of variables (including graduate education) have on M.S.W. students' desires to work in clinical practice. Data were collected biannually (between 1992 and 1998) from a complete population sample of all students entering and exiting accredited graduate…

  12. The Effects of Q-Matrix Design on Classification Accuracy in the Log-Linear Cognitive Diagnosis Model

    ERIC Educational Resources Information Center

    Madison, Matthew J.; Bradshaw, Laine P.

    2015-01-01

    Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other…

  13. Modeling of inactivation of surface borne microorganisms occurring on seeds by cold atmospheric plasma (CAP)

    NASA Astrophysics Data System (ADS)

    Mitra, Anindita; Li, Y.-F.; Shimizu, T.; Klämpfl, Tobias; Zimmermann, J. L.; Morfill, G. E.

    2012-10-01

    Cold atmospheric plasma (CAP) is a fast, low-cost, simple, easy-to-handle technology for biological applications. Our group has developed a number of different CAP devices using microwave technology and surface micro discharge (SMD) technology. In this study, FlatPlaSter2.0 is used for microbial inactivation at treatment times from 0.5 to 5 min. There is a continuous demand for deactivation of microorganisms associated with raw foods/seeds without losing their properties. This research focuses on the kinetics of CAP-induced inactivation of naturally growing surface microorganisms on seeds. The data were assessed against log-linear and non-log-linear models for survivor curves as a function of time. The Weibull model showed the best fitting performance; no shoulder or tail was observed. The models are expressed in terms of the number of log-cycle reductions rather than classical D-values, with statistical measures of fit. The viability of seeds was not affected for CAP treatment times up to 3 min with our device. The optimum result was observed at 1 min, with the percentage of germination increasing from 60.83% to 89.16% compared with the control. This result suggests an advantageous and promising role for CAP in the food industry.
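
    A minimal sketch of the Weibull survivor model commonly used for such non-log-linear inactivation kinetics, log10(N/N0) = −(t/δ)^p, fitted with scipy to fabricated survivor data (not the CAP measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_log_survivors(t, delta, p):
            return -(t / delta) ** p

        t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])   # minutes
        log_red = np.array([-0.4, -0.9, -1.6, -2.1, -2.5, -2.8])  # fake

        (delta, p), _ = curve_fit(weibull_log_survivors, t, log_red,
                                  p0=(1.0, 1.0))
        print("scale delta = %.2f min, shape p = %.2f" % (delta, p))
        # p < 1 gives upward concavity (tailing); p > 1 gives a shoulder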

  14. Power calculations for likelihood ratio tests for offspring genotype risks, maternal effects, and parent-of-origin (POO) effects in the presence of missing parental genotypes when unaffected siblings are available.

    PubMed

    Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R

    2007-01-01

    Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.

  15. Retention equations of nonionic organic chemicals in soil column chromatography with methanol-water eluents.

    PubMed

    Xu, Feng; Liang, Xinmiao; Lin, Bingcheng

    2002-01-01

    Research efforts dealing with chemical transport in soils are needed to prevent damage to ground water. Methanol-containing solvents can increase the translocation of nonionic organic chemicals (NOCs). In this study, a general log-linear retention equation, log k' = log k'w - Sφ (Eq. [1]), was developed to describe the mobilities of NOCs in soil column chromatography (SCC). The term φ denotes the volume fraction of methanol in the eluent, k' is the capacity factor of a solute at a given φ value, and log k'w and -S are the intercept and slope of the log k' vs. φ plot. Two reference soils (GSE 17204 and GSE 17205) were used as packing materials and were eluted by isocratic methanol-water mixtures. A linear solvation energy relationship (LSER) model was applied to analyze k' in terms of molecular interactions. The most important factor determining transport was found to be the solute hydrophobic partition in soils; the second was the solute hydrogen-bond basicity (hydrogen-bond accepting ability), while a less important factor was the solute dipolarity-polarizability. The solute hydrogen-bond acidity (hydrogen-bond donating ability) was statistically unimportant and could be deleted. Eq. [1] can also be obtained from the LSER model. The experimental k' data of 121 NOCs can be accurately explained by Eq. [1]. The equation is promising for estimating solute mobility in pure water by extrapolation from the lower capacity factors obtained in methanol-water mixed eluents.
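
    A minimal sketch of the extrapolation that Eq. [1] permits, assuming fabricated retention data: fit log k' against φ by least squares and read log k'w off the intercept:

        import numpy as np

        phi = np.array([0.3, 0.4, 0.5, 0.6, 0.7])   # methanol fraction
        log_k = np.array([1.85, 1.42, 0.98, 0.55, 0.12])  # fake log k'

        slope, intercept = np.polyfit(phi, log_k, 1)
        S, log_kw = -slope, intercept
        print("S = %.2f, log k'w = %.2f (pure-water value)" % (S, log_kw))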

  16. ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM

    PubMed Central

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
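
    For comparison, a hedged sketch of elastic net regularization for Cox's model using the lifelines package: a single penalized fit at one point on the path, not the exact solution-path algorithm developed in the paper; the data are simulated:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(9)
        n = 200
        df = pd.DataFrame(rng.normal(size=(n, 3)),
                          columns=["x1", "x2", "x3"])
        hazard = np.exp(0.8 * df["x1"] - 0.5 * df["x2"])   # x3 is noise
        df["T"] = rng.exponential(1.0 / hazard)            # event times
        df["E"] = 1                                        # all observed

        # penalizer * (l1_ratio * ||b||_1 + (1 - l1_ratio)/2 * ||b||_2^2)
        cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)     # elastic net
        cph.fit(df, duration_col="T", event_col="E")
        print(cph.params_)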

  17. Modeling the Geographic Consequence and Pattern of Dengue Fever Transmission in Thailand.

    PubMed

    Bekoe, Collins; Pansombut, Tatdow; Riyapan, Pakwan; Kakchapati, Sampurna; Phon-On, Aniruth

    2017-05-04

    Dengue fever is one of the infectious diseases that is still a public health problem in Thailand. This study examines in detail the geographic distribution and seasonal patterns of dengue fever transmission among the 76 provinces of Thailand from 2003 to 2015. This was a cross-sectional study. The data were from the Department of Disease Control under the Bureau of Epidemiology, Thailand. The effects of quarter and location on dengue transmission were modeled using an alternative additive log-linear model. The model fitted well, as illustrated by the residual plots. The model showed that dengue fever is high in the second quarter of every year, from May to August. There was evidence of an increasing annual trend in dengue from 2003 to 2015. There was a difference in the distribution of dengue fever within and between provinces. The areas of high risk were the central and southern regions of Thailand. The log-linear model provided a simple means of modeling dengue fever transmission. The results are very important for understanding the geographic distribution of dengue fever patterns.

  18. voom: precision weights unlock linear model analysis tools for RNA-seq read counts

    PubMed Central

    2014-01-01

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods. PMID:24485249

  19. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    PubMed

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.

  20. A class of non-linear exposure-response models suitable for health impact assessment applicable to large cohort studies of ambient air pollution.

    PubMed

    Nasari, Masoud M; Szyszkowicz, Mieczysław; Chen, Hong; Crouse, Daniel; Turner, Michelle C; Jerrett, Michael; Pope, C Arden; Hubbell, Bryan; Fann, Neal; Cohen, Aaron; Gapstur, Susan M; Diver, W Ryan; Stieb, David; Forouzanfar, Mohammad H; Kim, Sun-Young; Olives, Casey; Krewski, Daniel; Burnett, Richard T

    2016-01-01

    The effectiveness of regulatory actions designed to improve air quality is often assessed by predicting changes in public health resulting from their implementation. Risk of premature mortality from long-term exposure to ambient air pollution is the single most important contributor to such assessments and is estimated from observational studies generally assuming a log-linear, no-threshold association between ambient concentrations and death. There has been only limited assessment of this assumption in part because of a lack of methods to estimate the shape of the exposure-response function in very large study populations. In this paper, we propose a new class of variable coefficient risk functions capable of capturing a variety of potentially non-linear associations which are suitable for health impact assessment. We construct the class by defining transformations of concentration as the product of either a linear or log-linear function of concentration multiplied by a logistic weighting function. These risk functions can be estimated using hazard regression survival models with currently available computer software and can accommodate large population-based cohorts which are increasingly being used for this purpose. We illustrate our modeling approach with two large cohort studies of long-term concentrations of ambient air pollution and mortality: the American Cancer Society Cancer Prevention Study II (CPS II) cohort and the Canadian Census Health and Environment Cohort (CanCHEC). We then estimate the number of deaths attributable to changes in fine particulate matter concentrations over the 2000 to 2010 time period in both Canada and the USA using both linear and non-linear hazard function models.
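
    A small sketch of one member of this transformation family, assuming illustrative parameter values (not estimates from CPS II or CanCHEC): a log-linear function of concentration multiplied by a logistic weight, with the hazard ratio given by exp(β·f(z)):

        import numpy as np

        def f(z, alpha=1.0, mu=10.0, tau=2.0):
            """Log-linear concentration term times a logistic weight."""
            weight = 1.0 / (1.0 + np.exp(-(z - mu) / tau))
            return np.log1p(z / alpha) * weight

        beta = 0.08
        z = np.linspace(0.0, 30.0, 7)               # e.g. PM2.5, ug/m3
        hr = np.exp(beta * f(z))                    # hazard ratios
        print(np.column_stack([z, hr]))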

  1. Community air pollution and mortality: Analysis of 1980 data from US metropolitan areas. 1: Particulate air pollution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipfert, F.W.

    1992-11-01

    1980 data from up to 149 metropolitan areas were used to define cross-sectional associations between community air pollution and excess human mortality. The regression model proposed by Oezkaynak and Thurston, which accounted for age, race, education, poverty, and population density, was evaluated and several new models were developed. The new models also accounted for population change, drinking water hardness, and smoking, and included a more detailed description of race. Cause-of-death categories analyzed include all causes, all non-external causes, major cardiovascular diseases, and chronic obstructive pulmonary diseases (COPD). Both annual mortality rates and their logarithms were analyzed. The data on particulates were averaged across all monitoring stations available for each SMSA, and the TSP data were restricted to the year 1980. The associations between mortality and air pollution were found to depend on the socioeconomic factors included in the models, the specific locations included in the data set, and the type of statistical model used. Statistically significant associations were found between TSP and mortality due to non-external causes with log-linear models, but not with a linear model, and between TSP and COPD mortality for both linear and log-linear models. When the sulfate contribution to TSP was subtracted, the relationship with COPD mortality was strengthened. Scatter plots and quintile analyses suggested a TSP threshold for COPD mortality at around 65 μg/m³ (annual average). SO₄²⁻, Mn, PM15, and PM2.5 were not significantly associated with mortality using the new models.

  2. The allometry of coarse root biomass: log-transformed linear regression or nonlinear regression?

    PubMed

    Lai, Jiangshan; Yang, Bo; Lin, Dunmei; Kerkhoff, Andrew J; Ma, Keping

    2013-01-01

    Precise estimation of root biomass is important for understanding carbon stocks and dynamics in forests. Traditionally, biomass estimates are based on allometric scaling relationships between stem diameter and coarse root biomass calculated using linear regression (LR) on log-transformed data. Recently, it has been suggested that nonlinear regression (NLR) is a preferable fitting method for scaling relationships. But while this claim has been contested on both theoretical and empirical grounds, and statistical methods have been developed to aid in choosing between the two methods in particular cases, few studies have examined the ramifications of erroneously applying NLR. Here, we use direct measurements of 159 trees belonging to three locally dominant species in east China to compare the LR and NLR models of diameter-root biomass allometry. We then contrast model predictions by estimating stand coarse root biomass based on census data from the nearby 24-ha Gutianshan forest plot and by testing the ability of the models to predict known root biomass values measured on multiple tropical species at the Pasoh Forest Reserve in Malaysia. Based on likelihood estimates for model error distributions, as well as the accuracy of extrapolative predictions, we find that LR on log-transformed data is superior to NLR for fitting diameter-root biomass scaling models. More importantly, inappropriately using NLR leads to grossly inaccurate stand biomass estimates, especially for stands dominated by smaller trees.
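
    A compact sketch of the comparison on synthetic data with multiplicative (log-normal) error, the situation in which LR on log-transformed data is the appropriate estimator:

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(7)
        D = rng.uniform(5, 60, 159)                    # stem diameters, cm
        M = 0.02 * D**2.4 * np.exp(rng.normal(0, 0.4, D.size))  # biomass

        # linear regression on the log-log scale
        b_lr, log_a_lr = np.polyfit(np.log(D), np.log(M), 1)

        # non-linear regression on the original scale
        (a_nlr, b_nlr), _ = curve_fit(lambda d, a, b: a * d**b,
                                      D, M, p0=(0.1, 2.0))

        print("LR : a=%.3f b=%.2f" % (np.exp(log_a_lr), b_lr))
        print("NLR: a=%.3f b=%.2f" % (a_nlr, b_nlr))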

  3. Analytical methods in multivariate highway safety exposure data estimation

    DOT National Transportation Integrated Search

    1984-01-01

    Three general analytical techniques which may be of use in : extending, enhancing, and combining highway accident exposure data are : discussed. The techniques are log-linear modelling, iterative propor : tional fitting and the expectation maximizati...

  4. An empirical model for estimating annual consumption by freshwater fish populations

    USGS Publications Warehouse

    Liao, H.; Pierce, C.L.; Larscheid, J.G.

    2005-01-01

    Population consumption is an important process linking predator populations to their prey resources. Simple tools are needed to enable fisheries managers to estimate population consumption. We assembled 74 individual estimates of annual consumption by freshwater fish populations and their mean annual population size, 41 of which also included estimates of mean annual biomass. The data set included 14 freshwater fish species from 10 different bodies of water. From this data set we developed two simple linear regression models predicting annual population consumption. Log-transformed population size explained 94% of the variation in log-transformed annual population consumption. Log-transformed biomass explained 98% of the variation in log-transformed annual population consumption. We quantified the accuracy of our regressions and three alternative consumption models as the mean percent difference from observed (bioenergetics-derived) estimates in a test data set. Predictions from our population-size regression matched observed consumption estimates poorly (mean percent difference = 222%). Predictions from our biomass regression matched observed consumption reasonably well (mean percent difference = 24%). The biomass regression was superior to an alternative model, similar in complexity, and comparable to two alternative models that were more complex and difficult to apply. Our biomass regression model, log10(consumption) = 0.5442 + 0.9962·log10(biomass), will be a useful tool for fishery managers, enabling them to make reasonably accurate annual population consumption predictions from mean annual biomass estimates.
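
    Back-transforming the reported biomass regression gives a one-line predictor; the biomass input below is a made-up example, not a value from the paper:

        import math

        def annual_consumption(biomass):
            """Predict annual population consumption from mean annual
            biomass, per log10(C) = 0.5442 + 0.9962 * log10(B)."""
            return 10 ** (0.5442 + 0.9962 * math.log10(biomass))

        print(annual_consumption(120.0))   # ~413 for a biomass of 120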

  5. Predicting trace organic compound breakthrough in granular activated carbon using fluorescence and UV absorbance as surrogates.

    PubMed

    Anumol, Tarun; Sgroi, Massimiliano; Park, Minkyu; Roccaro, Paolo; Snyder, Shane A

    2015-06-01

    This study investigated the applicability of bulk organic parameters such as dissolved organic carbon (DOC), UV absorbance at 254 nm (UV254), and total fluorescence (TF) as surrogates for predicting trace organic compound (TOrC) removal by granular activated carbon (GAC) in water reuse applications. Using rapid small-scale column testing, empirical linear correlations for thirteen TOrCs were determined with DOC, UV254, and TF in four wastewater effluents. Linear correlations (R² > 0.7) were obtained for eight TOrCs in each water quality in the UV254 model, while ten TOrCs had R² > 0.7 in the TF model. Conversely, DOC was shown to be a poor surrogate for TOrC breakthrough prediction. When the data from all four water qualities were combined, good linear correlations were still obtained, with TF having higher R² than UV254, especially for TOrCs with log Dow > 1. An excellent linear relationship (R² > 0.9) between log Dow and the removal of TOrC at 0% surrogate removal (y-intercept) was obtained for the five neutral TOrCs tested in this study. Positively charged TOrCs had enhanced removals due to electrostatic interactions with negatively charged GAC, which caused them to deviate from the removals that would be expected from their log Dow. Application of the empirical linear correlation models to full-scale samples gave good results for six of seven TOrCs tested (except meprobamate) when comparing TOrC removals predicted from UV254 and TF with actual removals for GAC in all five samples tested. Surrogate predictions using UV254 and TF provide valuable tools for rapid or on-line monitoring of GAC performance and can result in cost savings by extending GAC run times compared with using DOC breakthrough to trigger regeneration or replacement.

  6. Weighted log-linear models for service delivery points in Ethiopia: a case of modern contraceptive users at health facilities.

    PubMed

    Workie, Demeke Lakew; Zike, Dereje Tesfaye; Fenta, Haile Mekonnen; Mekonnen, Mulusew Admasu

    2018-05-10

    Ethiopia is among the countries with a low prevalence of contraceptive use, resulting in a high total fertility rate and unwanted pregnancies, which in turn affect maternal and child health. This study aimed to investigate the major factors that affect the number of modern contraceptive users at service delivery points in Ethiopia. The Performance Monitoring and Accountability 2020/Ethiopia data, collected between March and April 2016 at round 4 from 461 eligible service delivery points, were used in this study. A weighted log-linear negative binomial model was applied to analyze the service delivery point data. The median service delivery point in Ethiopia served 61 modern contraceptive users, with an interquartile range of 0.62. The expected log number of modern contraceptive users at rural sites was 1.05 (95% Wald CI: -1.42 to -0.68) lower than at urban sites. In addition, the expected log count of modern contraceptive users at other facility types was 0.58 lower than at health centers. The number of nurses/midwives also affected the number of modern contraceptive users, with the incidence rate of modern contraceptive users increasing with each additional nurse at the delivery point. Among the different factors considered in this study, residence, region, facility type, the number of days per week family planning is offered, the number of nurses/midwives and the number of medical assistants were associated with the number of modern contraceptive users. Thus, the Government of Ethiopia should take immediate steps to address the causes of low numbers of modern contraceptive users in Ethiopia.
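
    A hedged sketch of a weighted log-linear negative binomial regression in the same spirit, using statsmodels with fabricated counts, covariates, and survey weights (the NegativeBinomial family uses a log link by default):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 200
        urban = rng.integers(0, 2, n)        # 1 = urban, 0 = rural
        nurses = rng.poisson(3, n)           # nurses/midwives on site
        X = sm.add_constant(np.column_stack([urban, nurses]))

        mu = np.exp(2.0 + 1.05 * urban + 0.1 * nurses)   # log-linear mean
        y = rng.negative_binomial(n=5, p=5 / (5 + mu))   # overdispersed
        w = rng.uniform(0.5, 2.0, n)                     # fake weights

        model = sm.GLM(y, X,
                       family=sm.families.NegativeBinomial(alpha=0.2),
                       var_weights=w)
        print(model.fit().summary())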

  7. Partitioning of polar and non-polar neutral organic chemicals into human and cow milk.

    PubMed

    Geisler, Anett; Endo, Satoshi; Goss, Kai-Uwe

    2011-10-01

    The aim of this work was to develop a predictive model for milk/water partition coefficients of neutral organic compounds. Batch experiments were performed for 119 diverse organic chemicals in human milk and raw and processed cow milk at 37°C. No differences (<0.3 log units) in the partition coefficients of these types of milk were observed. The polyparameter linear free energy relationship model fit the calibration data well (SD=0.22 log units). An experimental validation data set including hormones and hormone active compounds was predicted satisfactorily by the model. An alternative modelling approach based on log K(ow) revealed a poorer performance. The model presented here provides a significant improvement in predicting enrichment of potentially hazardous chemicals in milk. In combination with physiologically based pharmacokinetic modelling this improvement in the estimation of milk/water partitioning coefficients may allow a better risk assessment for a wide range of neutral organic chemicals.

  8. Analysis of amyotrophic lateral sclerosis as a multistep process: a population-based modelling study.

    PubMed

    Al-Chalabi, Ammar; Calvo, Andrea; Chio, Adriano; Colville, Shuna; Ellis, Cathy M; Hardiman, Orla; Heverin, Mark; Howard, Robin S; Huisman, Mark H B; Keren, Noa; Leigh, P Nigel; Mazzini, Letizia; Mora, Gabriele; Orrell, Richard W; Rooney, James; Scott, Kirsten M; Scotton, William J; Seelen, Meinie; Shaw, Christopher E; Sidle, Katie S; Swingler, Robert; Tsuda, Miho; Veldink, Jan H; Visser, Anne E; van den Berg, Leonard H; Pearce, Neil

    2014-11-01

    Amyotrophic lateral sclerosis shares characteristics with some cancers, such as onset being more common in later life, progression usually being rapid, the disease affecting a particular cell type, and showing complex inheritance. We used a model originally applied to cancer epidemiology to investigate the hypothesis that amyotrophic lateral sclerosis is a multistep process. We generated incidence data by age and sex from amyotrophic lateral sclerosis population registers in Ireland (registration dates 1995-2012), the Netherlands (2006-12), Italy (1995-2004), Scotland (1989-98), and England (2002-09), and calculated age and sex-adjusted incidences for each register. We regressed the log of age-specific incidence against the log of age with least squares regression. We did the analyses within each register, and also did a combined analysis, adjusting for register. We identified 6274 cases of amyotrophic lateral sclerosis from a catchment population of about 34 million people. We noted a linear relationship between log incidence and log age in all five registers: England r²=0·95, Ireland r²=0·99, Italy r²=0·95, the Netherlands r²=0·99, and Scotland r²=0·97; overall r²=0·99. All five registers gave similar estimates of the linear slope ranging from 4·5 to 5·1, with overlapping confidence intervals. The combination of all five registers gave an overall slope of 4·8 (95% CI 4·5-5·0), with similar estimates for men (4·6, 4·3-4·9) and women (5·0, 4·5-5·5). A linear relationship between the log incidence and log age of onset of amyotrophic lateral sclerosis is consistent with a multistage model of disease. The slope estimate suggests that amyotrophic lateral sclerosis is a six-step process. Identification of these steps could lead to preventive and therapeutic avenues. UK Medical Research Council; UK Economic and Social Research Council; Ireland Health Research Board; The Netherlands Organisation for Health Research and Development (ZonMw); the Ministry of Health and Ministry of Education, University, and Research in Italy; the Motor Neurone Disease Association of England, Wales, and Northern Ireland; and the European Commission (Seventh Framework Programme).
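
    The multistep inference reduces to one regression: if k rate-limiting steps are required, incidence rises roughly as age^(k−1), so the slope of log incidence on log age estimates k − 1. A sketch on synthetic counts mimicking a six-step process:

        import numpy as np

        rng = np.random.default_rng(11)
        age = np.arange(45, 80, 5, dtype=float)   # age-band mid-points
        # fabricated incidence ~ age**5 with small log-scale noise
        incidence = 1e-12 * age**5 * np.exp(rng.normal(0, 0.05, age.size))

        slope, intercept = np.polyfit(np.log(age), np.log(incidence), 1)
        print("slope = %.2f -> estimated steps = %.0f" % (slope, slope + 1))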

  9. The Log-Linear Cognitive Diagnostic Model (LCDM) as a Special Case of The General Diagnostic Model (GDM). Research Report. ETS RR-14-40

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2014-01-01

    Diagnostic models combine multiple binary latent variables in an attempt to produce a latent structure that provides more information about test takers' performance than do unidimensional latent variable models. Recent developments in diagnostic modeling emphasize the possibility that multiple skills may interact in a conjunctive way within the…

  10. Environmental factors and flow paths related to Escherichia coli concentrations at two beaches on Lake St. Clair, Michigan, 2002–2005

    USGS Publications Warehouse

    Holtschlag, David J.; Shively, Dawn; Whitman, Richard L.; Haack, Sheridan K.; Fogarty, Lisa R.

    2008-01-01

    Regression analyses and hydrodynamic modeling were used to identify environmental factors and flow paths associated with Escherichia coli (E. coli) concentrations at Memorial and Metropolitan Beaches on Lake St. Clair in Macomb County, Mich. Lake St. Clair is part of the binational waterway between the United States and Canada that connects Lake Huron with Lake Erie in the Great Lakes Basin. Linear regression, regression-tree, and logistic regression models were developed from E. coli concentration and ancillary environmental data. Linear regression models on log10 E. coli concentrations indicated that rainfall prior to sampling, water temperature, and turbidity were positively associated with bacteria concentrations at both beaches. Flow from Clinton River, changes in water levels, wind conditions, and log10 E. coli concentrations 2 days before or after the target bacteria concentrations were statistically significant at one or both beaches. In addition, various interaction terms were significant at Memorial Beach. Linear regression models for both beaches explained only about 30 percent of the variability in log10 E. coli concentrations. Regression-tree models were developed from data from both Memorial and Metropolitan Beaches but were found to have limited predictive capability in this study. The results indicate that too few observations were available to develop reliable regression-tree models. Linear logistic models were developed to estimate the probability of E. coli concentrations exceeding 300 most probable number (MPN) per 100 milliliters (mL). Rainfall amounts before bacteria sampling were positively associated with exceedance probabilities at both beaches. Flow of Clinton River, turbidity, and log10 E. coli concentrations measured before or after the target E. coli measurements were related to exceedances at one or both beaches. The linear logistic models were effective in estimating bacteria exceedances at both beaches. A receiver operating characteristic (ROC) analysis was used to determine cut points for maximizing the true positive rate prediction while minimizing the false positive rate. A two-dimensional hydrodynamic model was developed to simulate horizontal current patterns on Lake St. Clair in response to wind, flow, and water-level conditions at model boundaries. Simulated velocity fields were used to track hypothetical massless particles backward in time from the beaches along flow paths toward source areas. Reverse particle tracking for idealized steady-state conditions shows changes in expected flow paths and traveltimes with wind speeds and directions from 24 sectors. The results indicate that three to four sets of contiguous wind sectors have similar effects on flow paths in the vicinity of the beaches. In addition, reverse particle tracking was used for transient conditions to identify expected flow paths for 10 E. coli sampling events in 2004. These results demonstrate the ability to track hypothetical particles from the beaches, backward in time, to likely source areas. This ability, coupled with a greater frequency of bacteria sampling, may provide insight into changes in bacteria concentrations between source and sink areas.
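
    A compact sketch of the exceedance-modelling and ROC steps with scikit-learn, using simulated stand-ins for the rainfall, turbidity, and exceedance data, and Youden's J as one common cut-point criterion:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(5)
        n = 300
        rain = rng.gamma(1.0, 10.0, n)        # antecedent rainfall, mm
        turb = rng.gamma(2.0, 3.0, n)         # turbidity, NTU
        X = np.column_stack([rain, turb])
        logit = -3.0 + 0.05 * rain + 0.15 * turb
        y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

        clf = LogisticRegression().fit(X, y)
        p = clf.predict_proba(X)[:, 1]
        fpr, tpr, thresh = roc_curve(y, p)
        best = np.argmax(tpr - fpr)           # Youden's J cut point
        print("cut %.2f: TPR=%.2f FPR=%.2f"
              % (thresh[best], tpr[best], fpr[best]))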

  11. Effects of solvent composition in the normal-phase liquid chromatography of alkylphenols and naphthols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurtubise, R.J.; Hussain, A.; Silver, H.F.

    1981-11-01

    The normal-phase liquid chromatographic models of Scott, Snyder, and Soczewinski were considered for a μ-Bondapak NH2 stationary phase. n-Heptane:2-propanol and n-heptane:ethyl acetate mobile phases of different compositions were used. Linear relationships were obtained from graphs of log K' vs. log mole fraction of the strong solvent for both n-heptane:2-propanol and n-heptane:ethyl acetate mobile phases. A linear relationship was obtained between the reciprocal of corrected retention volume and % wt/v of 2-propanol but not between the reciprocal of corrected retention volume and % wt/v of ethyl acetate. The slopes and intercept terms from the Snyder and Soczewinski models were found to approximately describe interactions with μ-Bondapak NH2. Capacity factors can be predicted for the compounds by using the equations obtained from mobile phase composition variation experiments.

  12. Difference equation model for isothermal gas chromatography expresses retention behavior of homologues of n-alkanes excluding the influence of holdup time

    PubMed Central

    Wu, Liejun; Chen, Yongli; Caccamise, Sarah A.L.; Li, Qing X.

    2012-01-01

    A difference equation (DE) model is developed using the methylene retention increment (Δtz) of n-alkanes to avoid the influence of gas holdup time (tM). The effects of the equation order (1st–5th) on the accuracy of curve fitting show that a linear equation (LE) is less satisfactory and that it is not necessary to use a complicated cubic or higher-order equation. The relationship between the logarithm of Δtz and the carbon number (z) of the n-alkanes under isothermal conditions closely follows a quadratic equation for C3–C30 n-alkanes at column temperatures of 24–260 °C. The first and second order forward differences of the expression (Δlog Δtz and Δ²log Δtz, respectively) are linear and constant, respectively, which validates the DE model. This DE model lays a necessary foundation for further developing a retention model to accurately describe the relationship between the adjusted retention time and z of n-alkanes. PMID:22939376
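
    A short sketch of the validation logic on fabricated retention increments: fit a quadratic in z and confirm that the first forward differences of log Δtz are linear and the second differences constant:

        import numpy as np

        z = np.arange(3, 15)
        log_dt = 0.002 * z**2 - 0.12 * z + 2.0   # fake quadratic log10(dtz)

        coeffs = np.polyfit(z, log_dt, 2)        # quadratic fit in z
        d1 = np.diff(log_dt)                     # first forward difference
        d2 = np.diff(log_dt, n=2)                # second forward difference
        print("quadratic coefficients:", np.round(coeffs, 4))
        print("first differences (linear):", np.round(d1, 4))
        print("second differences (constant):", np.round(d2, 5))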

  13. Stochastic theory of log-periodic patterns

    NASA Astrophysics Data System (ADS)

    Canessa, Enrique

    2000-12-01

    We introduce an analytical model based on birth-death clustering processes to help in understanding the empirical log-periodic corrections to power law scaling and the finite-time singularity as reported in several domains including rupture, earthquakes, world population and financial systems. In our stochastic theory log-periodicities are a consequence of transient clusters induced by an entropy-like term that may reflect the amount of co-operative information carried by the state of a large system of different species. The clustering completion rates for the system are assumed to be given by a simple linear death process. The singularity at t0 is derived in terms of birth-death clustering coefficients.

  14. Locomotor ecology of wild orangutans (Pongo pygmaeus abelii) in the Gunung Leuser Ecosystem, Sumatra, Indonesia: a multivariate analysis using log-linear modelling.

    PubMed

    Thorpe, Susannah K S; Crompton, Robin H

    2005-05-01

    The large body mass and exclusively arboreal lifestyle of Sumatran orangutans identify them as a key species in understanding the dynamic between primates and their environment. Increased knowledge of primate locomotor ecology, coupled with recent developments in the standardization of positional mode classifications (Hunt et al. [1996] Primates 37:363-387), opened the way for sophisticated multivariate statistical approaches, clarifying complex associations between multiple influences on locomotion. In this study we present a log-linear modelling approach used to identify key associations between orangutan locomotion, canopy level, support use, and contextual behavior. Log-linear modelling is particularly appropriate because it is designed for categorical data, provides a systematic method for testing alternative hypotheses regarding interactions between variables, and allows interactions to be ranked numerically in terms of relative importance. Support diameter and type were found to have the strongest associations with locomotor repertoire, suggesting that orangutans have evolved distinct locomotor modes to solve a variety of complex habitat problems. However, height in the canopy and contextual behavior do not directly influence locomotion: instead, their effect is modified by support type and support diameter, respectively. Contrary to classic predictions, age-sex category has only limited influence on orangutan support use and locomotion, perhaps reflecting the presence of arboreal pathways which individuals of all age-sex categories follow. Effects are primarily related to a tendency for adult, parous females to adopt a more cautious approach to locomotion than adult males and immature subjects.

  15. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three- and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation is specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450

  16. Novel hybrid linear stochastic with non-linear extreme learning machine methods for forecasting monthly rainfall in a tropical climate.

    PubMed

    Zeynoddin, Mohammad; Bonakdari, Hossein; Azari, Arash; Ebtehaj, Isa; Gharabaghi, Bahram; Riahi Madavar, Hossein

    2018-09-15

    A novel hybrid approach is presented that more accurately predicts monthly rainfall in a tropical climate by integrating a linear stochastic model with a powerful non-linear extreme learning machine method. This new hybrid method was evaluated under four general scenarios. In the first scenario, the modeling process is initiated without preprocessing the input data, as a base case. In the other three scenarios, one-step and two-step procedures are utilized to make the model predictions more precise. These scenarios are based on combinations of stationarization techniques (i.e., differencing, seasonal and non-seasonal standardization, and spectral analysis) and normality transforms (i.e., Box-Cox, John and Draper, Yeo and Johnson, Johnson, Box-Cox-Mod, log, log standard, and Manly). In scenario 2, a one-step scenario, the stationarization methods are employed as preprocessing approaches. In scenarios 3 and 4, different combinations of normality transforms and stationarization methods are considered as preprocessing techniques. In total, 61 sub-scenarios are evaluated, resulting in 11013 models (10785 linear, 4 nonlinear, and 224 hybrid). The uncertainty of the linear, nonlinear, and hybrid models is examined by the Monte Carlo technique. The best preprocessing technique is the Johnson normality transform followed by seasonal standardization (R2 = 0.99; RMSE = 0.6; MAE = 0.38; RMSRE = 0.1; MARE = 0.06; UI = 0.03; UII = 0.05). The results of the uncertainty analysis indicate the good performance of the proposed technique (d-factor = 0.27; 95PPU = 83.57). Moreover, the results of the proposed methodology were compared with an evolutionary hybrid of the adaptive neuro-fuzzy inference system with the firefly algorithm (ANFIS-FFA), demonstrating that the new hybrid methods outperformed ANFIS-FFA. Copyright © 2018 Elsevier Ltd. All rights reserved.
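    A sketch of one such preprocessing sub-scenario, assuming monthly data and using the Yeo-Johnson transform (one of the normality transforms listed) followed by seasonal standardization; the rainfall series and all values are hypothetical, and this is not the authors' code.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 40.0, size=240)        # hypothetical monthly rainfall, 20 years

# Normality transform (here Yeo-Johnson, one of the listed options)
pt = PowerTransformer(method="yeo-johnson")
y = pt.fit_transform(rain.reshape(-1, 1)).ravel()

# Seasonal standardization: remove month-specific mean and std
y = y.reshape(-1, 12)                        # rows = years, cols = months
y_std = (y - y.mean(axis=0)) / y.std(axis=0)
pre = y_std.ravel()                          # input for the linear stochastic model
```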

  17. COSOLVENCY AND SORPTION OF HYDROPHOBIC ORGANIC CHEMICALS

    EPA Science Inventory

    Sorption of hydrophobic organic chemicals (HOCs) by two soils was measured from mixed solvents containing water plus completely miscible organic solvents (CMOSs) and partially miscible organic solvents (PMOSs). The utility of the log-linear cosolvency model for predicting HOC sor...

  18. Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control

    DTIC Science & Technology

    2015-11-10

    the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes: 1998], is a popular... sampled values h(k), k = 0, ..., 2M-1, of the exponential sum: 1. Solve a linear system for the coefficients of the Prony polynomial. 2. Compute all zeros z_j in D, j = 1, ..., M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1, ..., M, where log is the

  19. Prediction of octanol-water partition coefficients of organic compounds by multiple linear regression, partial least squares, and artificial neural network.

    PubMed

    Golmohammadi, Hassan

    2009-11-30

    A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol-water partition coefficients (log P(o/w)). A genetic algorithm was applied as a variable selection tool. Modeling of log P(o/w) of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and artificial neural network (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of the molecule (mu), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds, and revealed the superiority of the ANN over the MLR and PLS models. Copyright 2009 Wiley Periodicals, Inc.

  20. (Draft) Community air pollution and mortality: Analysis of 1980 data from US metropolitan areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipfert, F.W.

    1992-11-01

    1980 data from up to 149 metropolitan areas were used to define cross-sectional associations between community air pollution and "excess" human mortality. The regression model proposed by Ozkaynak and Thurston (1987), which accounted for age, race, education, poverty, and population density, was evaluated and several new models were developed. The new models also accounted for migration, drinking water hardness, and smoking, and included a more detailed description of race. Cause-of-death categories analyzed include all causes, all "non-external" causes, major cardiovascular diseases, and chronic obstructive pulmonary diseases (COPD). Both annual mortality rates and their logarithms were analyzed. Air quality data were obtained from the EPA AIRS database (TSP, SO4=, Mn, and ozone) and from the inhalable particulate network (PM15, PM2.5, and SO4=, for 63 locations). The data on particulates were averaged across all monitoring stations available for each SMSA and the TSP data were restricted to the year 1980. The associations between mortality and air pollution were found to be dependent on the socioeconomic factors included in the models, the specific locations included in the data set, and the type of statistical model used. Statistically significant associations were found as follows: between TSP and mortality due to non-external causes with log-linear models, but not with a linear model; between estimated 10-year average (1980-90) ozone levels and 1980 non-external and cardiovascular deaths; and between TSP and COPD mortality for both linear and log-linear models. When the sulfate contribution to TSP was subtracted, the relationship with COPD mortality was strengthened.

  2. Linear separability in superordinate natural language concepts.

    PubMed

    Ruts, Wim; Storms, Gert; Hampton, James

    2004-01-01

    Two experiments are reported in which linear separability was investigated in superordinate natural language concept pairs (e.g., toiletry-sewing gear). Representations of the exemplars of semantically related concept pairs were derived in two to five dimensions using multidimensional scaling (MDS) of similarities based on possession of the concept features. Next, category membership, obtained from an exemplar generation study (in Experiment 1) and from a forced-choice classification task (in Experiment 2), was predicted from the coordinates of the MDS representation using log linear analysis. The results showed that all natural kind concept pairs were perfectly linearly separable, whereas artifact concept pairs showed several violations. Clear linear separability of natural language concept pairs is in line with independent cue models. The violations in the artifact pairs, however, yield clear evidence against the independent cue models.

  3. Atmospheric concentrations, sources and gas-particle partitioning of PAHs in Beijing after the 29th Olympic Games.

    PubMed

    Ma, Wan-Li; Sun, De-Zhi; Shen, Wei-Guo; Yang, Meng; Qi, Hong; Liu, Li-Yan; Shen, Ji-Min; Li, Yi-Fan

    2011-07-01

    A comprehensive sampling campaign was carried out to study atmospheric concentrations of polycyclic aromatic hydrocarbons (PAHs) in Beijing and to evaluate the effectiveness of source control strategies in reducing PAH pollution after the 29th Olympic Games. The sub-cooled liquid vapor pressure (log P_L°)-based model and the octanol-air partition coefficient (log K_oa)-based model were applied to each seasonal dataset. Regression analysis among log K_P, log P_L°, and log K_oa exhibited highly significant correlations for all four seasons. Source factors were identified by principal component analysis and their contributions were further estimated by multiple linear regression. Pyrogenic sources and coke oven emission were identified as major sources for both the non-heating and heating seasons. Compared with the literature, the mean PAH concentrations before and after the 29th Olympic Games were reduced by more than 60%, indicating that the source control measures were effective for reducing PAH pollution in Beijing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators.

    PubMed

    Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon

    2015-05-01

    The aim of this paper was to evaluate the ratios of electrical impedance measurements reported in previous studies through a correlation analysis, in order to establish them as the contributing factor to the accuracy of electronic apex locators (EALs). The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All data acquired were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes of impedance ratios for various frequencies were evaluated for a variety of file positions. Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones were significantly different in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Using the ratio method, the APC was located within a linear interval. Therefore, using the ratio between electrical impedance measurements at different frequencies is a robust method for detection of the APC.

  5. Experimental and statistical study on fracture boundary of non-irradiated Zircaloy-4 cladding tube under LOCA conditions

    NASA Astrophysics Data System (ADS)

    Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki

    2018-02-01

    To estimate the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions in light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The resulting binary data on fracture or non-fracture of each cladding tube specimen were then analyzed statistically. A method is proposed to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) using Bayesian inference for generalized linear models: probit, logit, and log-probit. Model selection was then performed in terms of physical characteristics and information criteria, namely the widely applicable information criterion and the widely applicable Bayesian information criterion. The log-probit model proved the best of the three at estimating the fracture probability, in terms of predictive accuracy both for future data and with respect to the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability of the cladding tube specimens at a 95% confidence level.
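    A frequentist sketch of the log-probit dose-response form the abstract selects (the paper itself fits these models by Bayesian inference); the fracture data below are hypothetical and statsmodels is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical binary outcomes: 1 = fracture, 0 = no fracture
ecr = np.array([5., 8., 12., 15., 18., 20., 22., 25., 28., 32.])   # % ECR
frac = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

# Log-probit model: P(fracture) = Phi(b0 + b1 * log(ECR))
X = sm.add_constant(np.log(ecr))
fit = sm.GLM(frac, X, family=sm.families.Binomial(
    link=sm.families.links.Probit())).fit()

b0, b1 = fit.params
p20 = norm.cdf(b0 + b1 * np.log(20.0))   # fracture probability at 20% ECR
```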

  6. Investigating the Metallicity–Mixing-length Relation

    NASA Astrophysics Data System (ADS)

    Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.

    2018-05-01

    Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, T_eff, and [Fe/H]. The relationship between the value of α required and the properties of the star is then investigated. For Eddington-atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model of the form α/α_⊙ = 5.426 - 0.101 log(g) - 1.071 log(T_eff) + 0.437 [Fe/H]. This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
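    A small helper implementing the quoted linear fit, assuming log(T_eff) is the base-10 logarithm (consistent with the log g convention); the example inputs are arbitrary.

```python
import math

def alpha_ratio(logg: float, teff: float, feh: float) -> float:
    """Mixing-length parameter relative to solar, from the linear fit quoted
    in the abstract for Eddington-atmosphere, non-diffusion models."""
    return 5.426 - 0.101 * logg - 1.071 * math.log10(teff) + 0.437 * feh

# Example: a star slightly cooler and more metal-poor than the Sun
print(alpha_ratio(logg=4.3, teff=5500.0, feh=-0.2))
```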

  8. Measures of model performance based on the log accuracy ratio

    DOE PAGES

    Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.

    2018-01-03

    Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature; we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
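    A numpy sketch of the two derived metrics, assuming the standard definitions built on the log accuracy ratio ln(prediction/observation) and strictly positive data; the sample values are hypothetical.

```python
import numpy as np

def median_symmetric_accuracy(obs, pred):
    """100*(exp(median|ln(pred/obs)|) - 1): robust, symmetric accuracy in percent."""
    q = np.log(np.asarray(pred) / np.asarray(obs))      # log accuracy ratio
    return 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)

def symmetric_signed_percentage_bias(obs, pred):
    """100*sgn(M)*(exp|M| - 1), with M the median log accuracy ratio."""
    m = np.median(np.log(np.asarray(pred) / np.asarray(obs)))
    return 100.0 * np.sign(m) * (np.exp(np.abs(m)) - 1.0)

obs = np.array([1.0, 2.0, 5.0, 10.0])
pred = np.array([1.2, 1.8, 6.0, 12.0])
print(median_symmetric_accuracy(obs, pred),
      symmetric_signed_percentage_bias(obs, pred))
```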

  9. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications.

    PubMed

    Austin, Peter C

    2017-08-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log-log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data on patients hospitalised with a heart attack, with implementations in three statistical programming languages (R, SAS and Stata).

  10. Neurobehavioral function in school-age children exposed to manganese in drinking water.

    PubMed

    Oulhote, Youssef; Mergler, Donna; Barbeau, Benoit; Bellinger, David C; Bouffard, Thérèse; Brodeur, Marie-Ève; Saint-Amour, Dave; Legrand, Melissa; Sauvé, Sébastien; Bouchard, Maryse F

    2014-12-01

    Manganese neurotoxicity is well documented in individuals occupationally exposed to airborne particulates, but few data are available on risks from drinking-water exposure. We examined associations of exposure from concentrations of manganese in water and hair with memory, attention, motor function, and parent- and teacher-reported hyperactive behaviors. We recruited 375 children and measured manganese in home tap water (MnW) and hair (MnH). We estimated manganese intake from water ingestion. Using structural equation modeling, we estimated associations between neurobehavioral functions and MnH, MnW, and manganese intake from water. We evaluated exposure-response relationships using generalized additive models. After adjusting for potential confounders, a 1-SD increase in log10 MnH was associated with a significant difference of -24% (95% CI: -36, -12%) SD in memory and -25% (95% CI: -41, -9%) SD in attention. The relations between log10 MnH and poorer memory and attention were linear. A 1-SD increase in log10 MnW was associated with a significant difference of -14% (95% CI: -24, -4%) SD in memory, and this relation was nonlinear, with a steeper decline in performance at MnW > 100 μg/L. A 1-SD increase in log10 manganese intake from water was associated with a significant difference of -11% (95% CI: -21, -0.4%) SD in motor function. The relation between log10 manganese intake and poorer motor function was linear. There was no significant association between manganese exposure and hyperactivity. Exposure to manganese in water was associated with poorer neurobehavioral performances in children, even at low levels commonly encountered in North America.

  11. Single point dilution method for the quantitative analysis of antibodies to the gag24 protein of HIV-1.

    PubMed

    Palenzuela, D O; Benítez, J; Rivero, J; Serrano, R; Ganzó, O

    1997-10-13

    In the present work a concept proposed in 1992 by Dopotka and Giesendorf was applied to the quantitative analysis of antibodies to the p24 protein of HIV-1 in infected asymptomatic individuals and AIDS patients. Two approaches were analyzed: a linear model, OD = b0 + b1·log(titer), and a nonlinear model, log(titer) = α·OD^β, similar to the Dopotka-Giesendorf model. Both proposed models adequately fit the dependence between the optical density values at a single point dilution and the titers obtained by the end point dilution method (EPDM). Nevertheless, the nonlinear model fits the experimental data better, according to residual analysis. Classical EPDM was compared with the new single point dilution method (SPDM) using both models. The best correlation between titers calculated using both models and titers obtained by EPDM was achieved with the nonlinear model. The correlation coefficients for the nonlinear and linear models were r = 0.85 and r = 0.77, respectively. A new correction factor was introduced into the nonlinear model, which reduced the day-to-day variation of titer values. In general, SPDM saves time and reagents, is more precise and sensitive to changes in antibody levels, and therefore has a higher resolution than EPDM.
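    A sketch of fitting both functional forms with scipy; the OD/titer pairs are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical paired data: single-dilution OD readings and EPDM titers
od = np.array([0.25, 0.40, 0.65, 0.90, 1.30, 1.75])
log_titer = np.log10([200, 400, 1600, 3200, 12800, 51200])

# Linear model: OD = b0 + b1*log(titer); invert it to predict log titer
b1, b0 = np.polyfit(log_titer, od, 1)

# Nonlinear model: log(titer) = alpha * OD**beta
nonlin = lambda x, alpha, beta: alpha * x**beta
(alpha, beta), _ = curve_fit(nonlin, od, log_titer, p0=(3.0, 0.5))

pred_lin = (0.8 - b0) / b1           # predicted log titer at OD = 0.8
pred_nl = nonlin(0.8, alpha, beta)
```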

  12. Functional form and risk adjustment of hospital costs: Bayesian analysis of a Box-Cox random coefficients model.

    PubMed

    Hollenbeak, Christopher S

    2005-10-15

    While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the Midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic, but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
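    For reference, the one-parameter Box-Cox family the study searches over, showing the log (λ = 0) and reciprocal-scale (λ = -1) special cases; the cost values are invented for illustration.

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 gives log(y), lam = -1 gives 1 - 1/y."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

costs = np.array([8200., 11500., 9400., 25300., 14800.])  # hypothetical CABG costs
y_log = boxcox(costs, 0)     # semi-log specification
y_inv = boxcox(costs, -1)    # transformation favored by the Bayes factors here
```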

  13. Linear modeling of the soil-water partition coefficient normalized to organic carbon content by reversed-phase thin-layer chromatography.

    PubMed

    Andrić, Filip; Šegan, Sandra; Dramićanin, Aleksandra; Majstorović, Helena; Milojković-Opsenica, Dušanka

    2016-08-05

    Soil-water partition coefficient normalized to the organic carbon content (KOC) is one of the crucial properties influencing the fate of organic compounds in the environment. Chromatographic methods are a well-established alternative to the direct sorption techniques used for KOC determination. The present work proposes reversed-phase thin-layer chromatography (RP-TLC) as a simpler, yet equally accurate, method as the officially recommended HPLC technique. Several TLC systems were studied, including octadecyl-(RP18) and cyano-(CN) modified silica layers in combination with methanol-water and acetonitrile-water mixtures as mobile phases. In total, 50 compounds of different molecular shape and size and varying ability to establish specific interactions were selected (phenols, benzodiazepines, triazine herbicides, and polyaromatic hydrocarbons). A calibration set of 29 compounds with known logKOC values determined by sorption experiments was used to build simple univariate calibrations, Principal Component Regression (PCR) and Partial Least Squares (PLS) models between logKOC and TLC retention parameters. The models exhibit good statistical performance, indicating that CN-layers contribute better to logKOC modeling than RP18-silica. The most promising TLC methods, the officially recommended HPLC method, and four in silico estimation approaches were compared by the non-parametric Sum of Ranking Differences (SRD) approach. The best estimates of logKOC values were achieved by simple univariate calibration of TLC retention data involving CN-silica layers and a moderate content of methanol (40-50% v/v); they ranked far better than the officially recommended HPLC method, which ranked in the middle. The worst estimates were obtained from in silico computations based on the octanol-water partition coefficient. A Linear Solvation Energy Relationship study revealed that the increased polarity of CN-layers over RP18, in combination with methanol-water mixtures, is the key to better modeling of logKOC, through a significant reduction of the dipolar and proton-accepting influence of the mobile phase and an enhanced excess molar refractivity of the chromatographic systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Quantification of spore resistance for assessment and optimization of heating processes: a never-ending story.

    PubMed

    Mafart, P; Leguérinel, I; Couvert, O; Coroller, L

    2010-08-01

    The assessment and optimization of food heating processes require knowledge of the thermal resistance of target spores. Although the concept of spore resistance may seem simple, the establishment of a reliable quantification system for characterizing the heat resistance of spores has proven far more complex than imagined by early researchers. This paper points out the main difficulties encountered by reviewing the historical works on the subject. During an early period, the concept of individual spore resistance had not yet been considered and the resistance of a strain of spore-forming bacterium was related to a global population regarded as alive or dead. A second period was opened by the introduction of the well-known D parameter (decimal reduction time) associated with the previously introduced z-concept. The present period has introduced three new sources of complexity: consideration of non log-linear survival curves, consideration of environmental factors other than temperature, and awareness of the variability of resistance parameters. The occurrence of non log-linear survival curves makes spore resistance dependent on heating time. Consequently, spore resistance characterisation requires at least two parameters. While early resistance models took only heating temperature into account, new models consider other environmental factors such as pH and water activity ("horizontal extension"). Similarly the new generation of models also considers certain environmental factors of the recovery medium for quantifying "apparent heat resistance" ("vertical extension"). Because the conventional F-value is no longer additive in cases of non log-linear survival curves, the decimal reduction ratio should be preferred for assessing the efficiency of a heating process. Copyright 2010 Elsevier Ltd. All rights reserved.
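    As an illustration of a non log-linear survival curve of the kind discussed above, here is a sketch of a two-parameter Weibull-type model often used in this literature, log10(N/N0) = -(t/δ)^p; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def log10_survivors(t, delta, p):
    """Weibull-type survival curve, log10(N/N0) = -(t/delta)**p.
    p = 1 recovers classical log-linear kinetics with D = delta;
    p > 1 gives shoulders, p < 1 gives tails (non log-linear curves)."""
    return -(np.asarray(t, dtype=float) / delta) ** p

t = np.linspace(0, 12, 7)                     # heating time, min (hypothetical)
print(log10_survivors(t, delta=3.0, p=1.0))   # log-linear reference
print(log10_survivors(t, delta=3.0, p=2.0))   # shouldered, non log-linear curve
```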

  15. Partition of volatile organic compounds from air and from water into plant cuticular matrix: An LFER analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platts, J.A.; Abraham, M.H.

    The partitioning of organic compounds between air and foliage and between water and foliage is of considerable environmental interest. The purpose of this work is to show that partitioning into the cuticular matrix of one particular species can be satisfactorily modeled by general equations the authors have previously developed and, hence, that the same general equations could be used to model partitioning into other plant materials of the same or different species. The general equations are linear free energy relationships that employ descriptors for polarity/polarizability, hydrogen bond acidity and basicity, dispersive effects, and volume. They have been applied to the partition of 62 very varied organic compounds between the cuticular matrix of the tomato fruit, Lycopersicon esculentum, and either air (MX_a) or water (MX_w). Values of log MX_a covering a range of 12.4 log units are correlated with a standard deviation of 0.232 log unit, and values of log MX_w covering a range of 7.6 log units are correlated with an SD of 0.236 log unit. Possibilities are discussed for the prediction of new air-plant cuticular matrix and water-plant cuticular matrix partition values on the basis of the equations developed.

  16. Straightening Beta: Overdispersion of Lethal Chromosome Aberrations following Radiotherapeutic Doses Leads to Terminal Linearity in the Alpha–Beta Model

    PubMed Central

    Shuryak, Igor; Loucas, Bradford D.; Cornforth, Michael N.

    2017-01-01

    Recent technological advances allow precise radiation delivery to tumor targets. As opposed to more conventional radiotherapy—where multiple small fractions are given—in some cases, the preferred course of treatment may involve only a few (or even one) large dose(s) per fraction. Under these conditions, the choice of appropriate radiobiological model complicates the tasks of predicting radiotherapy outcomes and designing new treatment regimens. The most commonly used model for this purpose is the venerable linear-quadratic (LQ) formalism as it applies to cell survival. However, predictions based on the LQ model are frequently at odds with data following very high acute doses. In particular, although the LQ predicts a continuously bending dose–response relationship for the logarithm of cell survival, empirical evidence over the high-dose region suggests that the survival response is instead log-linear with dose. Here, we show that the distribution of lethal chromosomal lesions among individual human cells (lymphocytes and fibroblasts) exposed to gamma rays and X rays is somewhat overdispersed, compared with the Poisson distribution. Further, we show that such overdispersion affects the predicted dose response for cell survival (the fraction of cells with zero lethal lesions). This causes the dose response to approximate log-linear behavior at high doses, even when the mean number of lethal lesions per cell is well fitted by the continuously curving LQ model. Accounting for overdispersion of lethal lesions provides a novel, mechanistically based explanation for the observed shapes of cell survival dose responses that, in principle, may offer a tractable and clinically useful approach for modeling the effects of high doses per fraction. PMID:29312888
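    A numerical sketch of the mechanism described: survival is the zero-lesion probability, and an overdispersed (here negative-binomial) lesion distribution lifts high-dose survival above the Poisson prediction, straightening the log-survival curve. The LQ coefficients and shape parameter are illustrative, not fitted values from the paper.

```python
import numpy as np

dose = np.linspace(0.0, 20.0, 101)                   # dose in Gy
alpha_lq, beta_lq = 0.2, 0.05                        # illustrative LQ coefficients
mean_lesions = alpha_lq * dose + beta_lq * dose**2   # mean lethal lesions per cell

# Survival = probability of zero lethal lesions.
surv_poisson = np.exp(-mean_lesions)           # Poisson: log-survival bends continuously
k = 5.0                                        # negative-binomial shape (overdispersion)
surv_nb = (1.0 + mean_lesions / k) ** (-k)     # overdispersed lesion counts

# At high dose the overdispersed zero-class probability exceeds the Poisson one.
print(np.log10(surv_nb[-1]) - np.log10(surv_poisson[-1]))
```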

  17. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  18. A review of statistical estimators for risk-adjusted length of stay: analysis of the Australian and new Zealand Intensive Care Adult Patient Data-Base, 2008-2009.

    PubMed

    Moran, John L; Solomon, Patricia J

    2012-05-16

    For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of firstly: log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and secondly: unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using Bayesian information criterion [BIC: lower values preferred] and residual analysis as well as predictive performance (R2, concordance correlation coefficient (CCC), mean absolute error [MAE]) were established for each estimator. The data-set consisted of 111663 patients from 131 ICUs; with mean(SD) age 60.6(18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length-of-stay was 3.4(5.1) (median 1.8, range (0.17-60)) days and demonstrated marked kurtosis and right skew (29.4 and 4.4 respectively). BIC showed considerable spread, from a maximum of 509801 (OLS-raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17 and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators. There was a general tendency for over-prediction (negative residuals) and for over-fitting, the exception being the GLM negative binomial estimator. The mean-variance function was best approximated by a quadratic function, consistent with log-scale estimation; the link function was estimated (EEE) as 0.152(0.019, 0.285), consistent with a fractional-root function. For ICU length of stay, log-scale estimation, in particular the LMM, appeared to be the most consistently performing estimator(s). Neither the GLM variants nor the skew-regression estimators dominated.
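    A sketch contrasting two of the estimator families compared in the review, OLS on log LOS versus a gamma GLM with log link on raw LOS, on synthetic right-skewed data (statsmodels assumed; not the authors' code or data).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
severity = rng.normal(size=n)               # hypothetical severity covariate
mu = np.exp(0.8 + 0.5 * severity)           # mean LOS on the raw scale
los = rng.gamma(shape=2.0, scale=mu / 2.0)  # right-skewed lengths of stay

X = sm.add_constant(severity)

# (a) log-scale estimation: OLS on log LOS
ols_log = sm.OLS(np.log(los), X).fit()

# (b) raw-scale estimation: gamma GLM with log link
glm_gamma = sm.GLM(los, X, family=sm.families.Gamma(
    link=sm.families.links.Log())).fit()

# Slopes are comparable; intercepts differ because OLS models E[log LOS]
# while the GLM models log E[LOS].
print(ols_log.params, glm_gamma.params)
```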

  19. Recall of past use of mobile phone handsets.

    PubMed

    Parslow, R C; Hepworth, S J; McKinney, P A

    2003-01-01

    Previous studies investigating health effects of mobile phones have based their estimation of exposure on self-reported levels of phone use. This UK validation study assesses the accuracy of reported voice calls made from mobile handsets. Data collected by postal questionnaire from 93 volunteers were compared with records obtained prospectively over 6 months from four network operators. Agreement was measured for outgoing calls using the kappa statistic, log-linear modelling, the Spearman correlation coefficient and graphical methods. Agreement for the number of calls was moderate (kappa = 0.39), with better agreement for duration (kappa = 0.50). Log-linear modelling produced similar results. The Spearman correlation coefficient was 0.48 for number of calls and 0.60 for duration. Graphical agreement methods demonstrated patterns of over-reporting of call numbers (by a factor of 1.7) and duration (by a factor of 2.8). These results suggest that self-reported mobile phone use may not fully represent patterns of actual use. This has implications for calculating exposures from questionnaire data.
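    A minimal example of the kappa statistic used above for chance-corrected agreement between categorized self-reports and operator records; the labels are hypothetical, not study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorized phone use (e.g., tertiles of number of calls):
# self-reported vs operator-recorded, one pair per volunteer
reported = ["low", "mid", "high", "high", "mid", "low", "high", "mid"]
recorded = ["low", "low", "high", "mid", "mid", "low", "high", "high"]

kappa = cohen_kappa_score(reported, recorded)   # chance-corrected agreement
print(f"kappa = {kappa:.2f}")                   # ~0.4 here: 'moderate' agreement
```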

  20. Global QSAR modeling of logP values of phenethylamines acting as adrenergic alpha-1 receptor agonists.

    PubMed

    Yadav, Mukesh; Joshi, Shobha; Nayarisseri, Anuraj; Jain, Anuja; Hussain, Aabid; Dubey, Tushar

    2013-06-01

    Global QSAR models predict the biological response of molecular structures that are generic within a particular class. A global QSAR dataset admits structural features drawn from a larger chemical space, which makes it harder to model but more applicable in medicinal chemistry. The present work is global in both senses: structural diversity of the QSAR dataset and a large number of descriptor inputs. Forty phenethylamine structure derivatives were selected from a large pool (904) of similar phenethylamines available in the Pubchem database. LogP values of the selected candidates were collected from the physical properties database (PHYSPROP), determined under an identical set of conditions. Attempts to model the logP value produced significant QSAR models. MLR-aided linear one-variable and two-variable QSAR models, with respective R(2) (0.866, 0.937), R(2)A (0.862, 0.932), F-stat (181.936, 199.812) and standard error (0.365, 0.255), are statistically fit and were found predictive after internal and external validation. The descriptors chosen after improvisation and optimization reveal the mechanistic part of the work: the Verhaar model of fish baseline toxicity from MLOGP (BLTF96) and the 3D-MoRSE signal 15 unweighted molecular descriptor, calculated by summing atom weights viewed by a different angular scattering function (Mor15u), are crucial in the regulation of logP values of phenethylamines.

  1. Finite difference modelling of dipole acoustic logs in a poroelastic formation with anisotropic permeability

    NASA Astrophysics Data System (ADS)

    He, Xiao; Hu, Hengshan; Wang, Xiuming

    2013-01-01

    Sedimentary rocks can exhibit strong permeability anisotropy due to layering, pre-stresses and the presence of aligned microcracks or fractures. In this paper, we develop a modified cylindrical finite-difference algorithm to simulate the borehole acoustic wavefield in a saturated poroelastic medium with transverse isotropy of permeability and tortuosity. A linear interpolation process is proposed to guarantee the leapfrog finite difference scheme for the generalized dynamic equations and Darcy's law for anisotropic porous media. First, the modified algorithm is validated by comparison against the analytical solution when the borehole axis is parallel to the symmetry axis of the formation. The same algorithm is then used to numerically model the dipole acoustic log in a borehole with its axis being arbitrarily deviated from the symmetry axis of transverse isotropy. The simulation results show that the amplitudes of flexural modes vary with the dipole orientation because the permeability tensor of the formation is dependent on the wellbore azimuth. It is revealed that the attenuation of the flexural wave increases approximately linearly with the radial permeability component in the direction of the transmitting dipole. Particularly, when the borehole axis is perpendicular to the symmetry axis of the formation, it is possible to estimate the anisotropy of permeability by evaluating attenuation of the flexural wave using a cross-dipole sonic logging tool according to the results of sensitivity analyses. Finally, the dipole sonic logs in a deviated borehole surrounded by a stratified porous formation are modelled using the proposed finite difference code. Numerical results show that the arrivals and amplitudes of transmitted flexural modes near the layer interface are sensitive to the wellbore inclination.

  2. Distribution of Animal Drugs between Skim Milk and Milk Fat Fractions in Spiked Whole Milk: Understanding the Potential Impact on Commercial Milk Products.

    PubMed

    Hakk, Heldur; Shappell, Nancy W; Lupton, Sara J; Shelver, Weilin L; Fanaselle, Wendy; Oryang, David; Yeung, Chi Yuen; Hoelzer, Karin; Ma, Yinqing; Gaalswyk, Dennis; Pouillot, Régis; Van Doren, Jane M

    2016-01-13

    Seven animal drugs [penicillin G (PENG), sulfadimethoxine (SDMX), oxytetracycline (OTET), erythromycin (ERY), ketoprofen (KETO), thiabendazole (THIA), and ivermectin (IVR)] were used to evaluate the drug distribution between milk fat and skim milk fractions of cow milk. More than 90% of the radioactivity was distributed into the skim milk fraction for ERY, KETO, OTET, PENG, and SDMX, approximately 80% for THIA, and 13% for IVR. The distribution of drug between milk fat and skim milk fractions was significantly correlated with the drug's lipophilicity (partition coefficient, log P, or distribution coefficient, log D, which includes ionization). Data were fit with linear mixed effects models; the best fit was obtained within this data set with log D versus observed drug distribution ratios. These candidate empirical models can assist in predicting the distribution and concentration of these drugs in a variety of milk and milk products.

  3. Using Log Linear Analysis for Categorical Family Variables.

    ERIC Educational Resources Information Center

    Moen, Phyllis

    The Goodman technique of log linear analysis is ideal for family research, because it is designed for categorical (non-quantitative) variables. Variables are dichotomized (for example, married/divorced, childless/with children) or otherwise categorized (for example, level of permissiveness, life cycle stage). Contingency tables are then…

  4. Sample Introduction Using the Hildebrand Grid Nebulizer for Plasma Spectrometry

    DTIC Science & Technology

    1988-01-01

    Detection limits, linear dynamic ranges, precision, and peak width were determined for elements in methanol and acetonitrile solutions. The grid nebulizer was... flow injection analysis (FIA) with ICP-OES detection were evaluated... [figure-list residue: log concentration vs. log peak area plots for Mn, Cd, Zn, Au, and Ni in methanol and acetonitrile]

  5. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.

  6. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for non-optimum threshold detection systems were also investigated.
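    A Monte Carlo sketch of the setting these two reports describe: Poisson photon counts with unit-mean log-normal intensity fading on the "on" bit, and a threshold chosen to minimize the bit error probability. All parameter values are hypothetical and the formulation is generic, not the report's own.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

mean_signal = 50.0   # mean signal count for an "on" bit (hypothetical)
mean_noise = 5.0     # background/dark count for an "off" bit
sigma_chi = 0.3      # log-amplitude standard deviation (scintillation)

# Log-normal intensity fading I = exp(2*chi), chi ~ N(-sigma^2, sigma^2),
# so the mean received intensity is normalized to 1.
chi = rng.normal(-sigma_chi**2, sigma_chi, size=100_000)
fade = np.exp(2.0 * chi)

def bit_error_prob(threshold):
    # miss: an "on" bit whose count falls below threshold, averaged over fading
    p_miss = poisson.cdf(threshold - 1, mean_noise + mean_signal * fade).mean()
    # false alarm: an "off" bit whose count reaches the threshold
    p_fa = 1.0 - poisson.cdf(threshold - 1, mean_noise)
    return 0.5 * (p_miss + p_fa)       # equiprobable bits

thresholds = np.arange(5, 40)
pe = [bit_error_prob(t) for t in thresholds]
best = thresholds[int(np.argmin(pe))]  # optimum detection threshold
```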

  7. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
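    A sketch of the two recommended weighting schemes, assuming the usual fixed-effects formulas (effective sample size n_eff = 4/(1/n_case + 1/n_ctrl) for Z-scores; inverse-variance weights for effect sizes already converted onto the log-odds scale, a conversion that is study-specific and omitted here). All values are hypothetical.

```python
import numpy as np

def meta_z_effective_n(z, n_case, n_ctrl):
    """Scheme (i): combine per-study Z-scores weighted by sqrt of the
    effective sample size, n_eff = 4 / (1/n_case + 1/n_ctrl)."""
    n_eff = 4.0 / (1.0 / np.asarray(n_case) + 1.0 / np.asarray(n_ctrl))
    w = np.sqrt(n_eff)
    return np.sum(w * np.asarray(z)) / np.sqrt(np.sum(w**2))

def meta_ivw_logodds(beta, se):
    """Scheme (ii): fixed-effects inverse-variance meta-analysis of allelic
    effect sizes on the log-odds scale."""
    w = 1.0 / np.asarray(se) ** 2
    b = np.sum(w * np.asarray(beta)) / np.sum(w)
    return b, np.sqrt(1.0 / np.sum(w))

print(meta_z_effective_n([1.8, 2.4, -0.3],
                         n_case=[900, 300, 1200], n_ctrl=[1100, 2700, 1300]))
print(meta_ivw_logodds([0.11, 0.15, -0.02], [0.04, 0.06, 0.05]))
```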

  8. Strange mode instabilities and mass loss in evolved massive primordial stars

    NASA Astrophysics Data System (ADS)

    Yadav, Abhay Pratap; Kühnrich Biavatti, Stefan Henrique; Glatzel, Wolfgang

    2018-04-01

    A linear stability analysis of models for evolved primordial stars with masses between 150 and 250 M⊙ is presented. Strange mode instabilities with growth rates in the dynamical range are identified for stellar models with effective temperatures below log Teff = 4.5. For selected models, the final fate of the instabilities is determined by numerical simulation of their evolution into the non-linear regime. As a result, the instabilities lead to finite amplitude pulsations. Associated with them are acoustic energy fluxes capable of driving stellar winds with mass-loss rates in the range between 7.7 × 10-7 and 3.5 × 10-4 M⊙ yr-1.

  9. WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarpelli, M; Eickhoff, J; Perlman, S

    Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: -0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to the log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and the underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions, which motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
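    A scipy sketch of the optimal-transformation search described in the Methods: scan the one-parameter Box-Cox family and keep the parameter maximizing the Shapiro-Wilk statistic. The SUV sample is simulated, not study data, and scipy is assumed to be available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
suv = np.exp(rng.normal(0.5, 0.4, size=40))   # hypothetical log-normal SUVmean values

# Scan one-parameter Box-Cox transforms; pick lambda maximizing Shapiro-Wilk W
lams = np.linspace(-1.0, 2.0, 61)
W = [stats.shapiro(stats.boxcox(suv, lmbda=lam)).statistic for lam in lams]
lam_opt = lams[int(np.argmax(W))]

# lambda near 0 corresponds to the log transform, as found in the abstract
print(lam_opt, stats.shapiro(np.log(suv)).pvalue)
```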

  10. Evaluating and improving the representation of heteroscedastic errors in hydrological models

    NASA Astrophysics Data System (ADS)

    McInerney, D. J.; Thyer, M. A.; Kavetski, D.; Kuczera, G. A.

    2013-12-01

    Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic predictions. In particular, residual errors of hydrological models are often heteroscedastic, with large errors associated with high rainfall and runoff events. Recent studies have shown that using a weighted least squares (WLS) approach - where the magnitude of residuals is assumed to be linearly proportional to the magnitude of the flow - captures some of this heteroscedasticity. In this study we explore a range of Bayesian approaches for improving the representation of heteroscedasticity in residual errors. We compare several improved formulations of the WLS approach, the well-known Box-Cox transformation and the more recent log-sinh transformation. Our results confirm that these approaches are able to stabilize the residual error variance, and that it is possible to improve the representation of heteroscedasticity compared with the linear WLS approach. We also find generally good performance of the Box-Cox and log-sinh transformations, although as indicated in earlier publications, the Box-Cox transform sometimes produces unrealistically large prediction limits. Our work explores the trade-offs between these different uncertainty characterization approaches, investigates how their performance varies across diverse catchments and models, and recommends practical approaches suitable for large-scale applications.
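    A sketch of the log-sinh transformation mentioned above, assuming its usual form z = (1/b)·ln(sinh(a + b·q)); the a and b values are illustrative, and in practice they are calibrated to the data.

```python
import numpy as np

def log_sinh(q, a, b):
    """Log-sinh variance-stabilizing transform, z = (1/b)*ln(sinh(a + b*q)).
    Behaves like a log transform for small flows and approaches a linear
    transform for large flows, avoiding the over-compression of a pure log."""
    return np.log(np.sinh(a + b * np.asarray(q))) / b

flows = np.array([0.5, 2.0, 10.0, 50.0, 200.0])   # hypothetical daily flows
z = log_sinh(flows, a=0.1, b=0.02)
```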

  11. Symmetric log-domain diffeomorphic Registration: a demons-based approach.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2008-01-01

    Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.
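    A toy 2-D illustration of why working in the log-domain (with a stationary velocity field v) gives easy access to the inverse: the deformation is exp(v), computed by scaling and squaring, and its inverse is simply exp(-v). This is a bare sketch of the idea, not the authors' demons implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(phi, psi):
    """Compose 2-D displacement fields: (phi o psi)(x) = phi(x + psi(x)) + psi(x)."""
    grid = np.mgrid[0:phi.shape[1], 0:phi.shape[2]].astype(float)
    coords = grid + psi
    warped = np.stack([map_coordinates(phi[i], coords, order=1, mode="nearest")
                       for i in range(2)])
    return warped + psi

def exp_velocity(v, n_steps=6):
    """Scaling and squaring: exp(v) for a stationary velocity field v."""
    phi = v / (2.0 ** n_steps)       # scale so the field is small
    for _ in range(n_steps):         # square: phi <- phi o phi
        phi = compose(phi, phi)
    return phi

v = np.zeros((2, 64, 64))
v[0, 20:40, 20:40] = 2.0             # toy stationary velocity field
disp = exp_velocity(v)               # displacement field exp(v)
inv = exp_velocity(-v)               # its inverse, exp(-v)
```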

  12. Reduced density gradient as a novel approach for estimating QSAR descriptors, and its application to 1, 4-dihydropyridine derivatives with potential antihypertensive effects.

    PubMed

    Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G

    2016-12-01

    The relationship between the chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis methods. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (E_HOMO), molecular volume (V), partition coefficient (log P), non-covalent interactions NCI(H4-G) and the dual descriptor [Δf(r)]. The model yielded values of R2 = 79.57 and Q2 = 69.67 that were validated with four internal analytical validations (DK = 0.076, DQ = -0.006, RP = 0.056, and RN = 0.000) and the external validation Q2_boot = 64.26. The QSAR model found can be used to estimate biological activity with high reliability in new compounds based on the DHP series. Graphical abstract: The good correlation between log IC50 and the NCI(H4-G) estimated by the reduced density gradient approach for the DHP derivatives.

  13. Reflectance of micron-sized dust particles retrieved with the Umov law

    NASA Astrophysics Data System (ADS)

    Zubko, Evgenij; Videen, Gorden; Zubko, Nataliya; Shkuratov, Yuriy

    2017-03-01

    The maximum positive polarization Pmax that initially unpolarized light acquires when scattered from a particulate surface inversely correlates with its geometric albedo A. In the literature, this phenomenon is known as the Umov law. We investigate the Umov law in application to single-scattering submicron and micron-sized agglomerated debris particles, model particles that have highly irregular morphology. We find that if the complex refractive index m is constrained to Re(m)=1.4-1.7 and Im(m)=0-0.15, model particles of a given size distribution have a linear inverse correlation between log(Pmax) and log(A). This correlation resembles what is measured in particulate surfaces, suggesting a similar mechanism governing the Umov law in both systems. We parameterize the dependence of log(A) on log(Pmax) of single-scattering particles and analyze the airborne polarimetric measurements of atmospheric aerosols reported by Dolgos & Martins in [1]. We conclude that Pmax ≈ 50% measured by Dolgos & Martins corresponds to very dark aerosols having geometric albedo A=0.019 ± 0.005.
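
    The parameterization described above amounts to a straight line in log-log space; the short sketch below fits such a line and inverts it to retrieve albedo from a measured Pmax. The data points and fitted coefficients are invented for illustration and are not the paper's calibration.

        import numpy as np

        log_pmax = np.log10([0.08, 0.15, 0.30, 0.50])   # assumed maximum polarizations
        log_a    = np.log10([0.25, 0.12, 0.05, 0.02])   # assumed geometric albedos

        slope, intercept = np.polyfit(log_pmax, log_a, 1)   # linear fit in log-log space

        def albedo_from_pmax(pmax):
            # invert the Umov-law regression: albedo retrieved from Pmax
            return 10**(intercept + slope * np.log10(pmax))

        print(albedo_from_pmax(0.50))   # high polarization -> dark particles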

  14. An approach to checking case-crossover analyses based on equivalence with time-series methods.

    PubMed

    Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L

    2008-03-01

    The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.

  15. Comparison of three strong ion models used for quantifying the acid-base status of human plasma with special emphasis on the plasma weak acids.

    PubMed

    Anstey, Chris M

    2005-06-01

    Currently, three strong ion models exist for the determination of plasma pH. Mathematically, they vary in their treatment of weak acids, and this study was designed to determine whether any significant differences exist in the simulated performance of these models. The models were subjected to a "metabolic" stress either in the form of variable strong ion difference and fixed weak acid effect, or vice versa, and compared over the range 25 ≤ PCO2 ≤ 135 Torr. The predictive equations for each model were iteratively solved for pH at each PCO2 step, and the results were plotted as a series of log(PCO2)-pH titration curves. The results were analyzed for linearity by using ordinary least squares regression and for collinearity by using correlation. In every case, the results revealed a linear relationship between log(PCO2) and pH over the range 6.8 ≤ pH ≤ 7.8, and no significant difference between the curve predictions under metabolic stress. The curves were statistically collinear. Ultimately, their clinical utility will be determined both by acceptance of the strong ion framework for describing acid-base physiology and by the ease of measurement of the independent model parameters.

  16. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282

  17. Quasi-equilibrium analysis of the ion-pair mediated membrane transport of low-permeability drugs.

    PubMed

    Miller, Jonathan M; Dahan, Arik; Gupta, Deepak; Varghese, Sheeba; Amidon, Gordon L

    2009-07-01

    The aim of this research was to gain a mechanistic understanding of ion-pair mediated membrane transport of low-permeability drugs. Quasi-equilibrium mass transport analyses were developed to describe the ion-pair mediated octanol-buffer partitioning and hydrophobic membrane permeation of the model basic drug phenformin. Three lipophilic counterions were employed: p-toluenesulfonic acid, 2-naphthalenesulfonic acid, and 1-hydroxy-2-naphthoic acid (HNAP). Association constants and intrinsic octanol-buffer partition coefficients (Log P(AB)) of the ion-pairs were obtained by fitting a transport model to double reciprocal plots of apparent octanol-buffer distribution coefficients versus counterion concentration. All three counterions enhanced the lipophilicity of phenformin, with HNAP providing the greatest increase in Log P(AB), 3.7 units over phenformin alone. HNAP also enhanced the apparent membrane permeability of phenformin, 27-fold in the PAMPA model, and 4.9-fold across Caco-2 cell monolayers. As predicted from a quasi-equilibrium analysis of ion-pair mediated membrane transport, an order of magnitude increase in phenformin flux was observed per log increase in counterion concentration, such that log-log plots of phenformin flux versus HNAP concentration gave linear relationships. These results provide increased understanding of the underlying mechanisms of ion-pair mediated membrane transport, emphasizing the potential of this approach to enable oral delivery of low-permeability drugs.

  18. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example.

  19. A Comparison of Strategies for Estimating Conditional DIF

    ERIC Educational Resources Information Center

    Moses, Tim; Miao, Jing; Dorans, Neil J.

    2010-01-01

    In this study, the accuracies of four strategies were compared for estimating conditional differential item functioning (DIF), including raw data, logistic regression, log-linear models, and kernel smoothing. Real data simulations were used to evaluate the estimation strategies across six items, DIF and No DIF situations, and four sample size…

  20. An MCMC method for the evaluation of the Fisher information matrix for non-linear mixed effect models.

    PubMed

    Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France

    2016-10-01

    Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and is difficult to use in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation with respect to the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance, with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs.

  1. Inflammation, homocysteine and carotid intima-media thickness.

    PubMed

    Baptista, Alexandre P; Cacdocar, Sanjiva; Palmeiro, Hugo; Faísca, Marília; Carrasqueira, Herménio; Morgado, Elsa; Sampaio, Sandra; Cabrita, Ana; Silva, Ana Paula; Bernardo, Idalécio; Gome, Veloso; Neves, Pedro L

    2008-01-01

    Cardiovascular disease is the main cause of morbidity and mortality in chronic renal patients. Carotid intima-media thickness (CIMT) is one of the most accurate markers of atherosclerosis risk. In this study, the authors set out to evaluate a population of chronic renal patients to determine which factors are associated with an increase in intima-media thickness. We included 56 patients (F=22, M=34), with a mean age of 68.6 years, and an estimated glomerular filtration rate of 15.8 ml/min (calculated by the MDRD equation). Various laboratory and inflammatory parameters (hsCRP, IL-6 and TNF-alpha) were evaluated. All subjects underwent measurement of internal carotid artery intima-media thickness by high-resolution real-time B-mode ultrasonography using a 10 MHz linear transducer. Intima-media thickness was used as a dependent variable in a simple linear regression model, with the various laboratory parameters as independent variables. Only parameters showing a significant correlation with CIMT were evaluated in a multiple regression model: age (p=0.001), hemoglobin (p=0.03), logCRP (p=0.042), logIL-6 (p=0.004) and homocysteine (p=0.002). In the multiple regression model we found that age (p=0.001) and homocysteine (p=0.027) were independently correlated with CIMT. LogIL-6 did not reach statistical significance (p=0.057), probably due to the small population size. The authors conclude that age and homocysteine correlate with carotid intima-media thickness, and thus can be considered as markers/risk factors in chronic renal patients.

  2. Linear solvation energy relationships regarding sorption and retention properties of hydrophobic organic compounds in soil leaching column chromatography.

    PubMed

    Xu, Feng; Liang, Xinmiao; Lin, Bingcheng; Su, Fan; Schramm, Karl-Werner; Kettrup, Antonius

    2002-08-01

    The capacity factors of a series of hydrophobic organic compounds (HOCs) were measured in soil leaching column chromatography (SLCC) on a soil column, and in reversed-phase liquid chromatography on a C18 column with different volumetric fractions (φ) of methanol in methanol-water mixtures. A general equation of linear solvation energy relationships, log(XYZ) = XYZ0 + mV(I)/100 + sπ* + bβm + aαm, was applied to analyze capacity factors (k'), soil organic partition coefficients (Koc) and octanol-water partition coefficients (P). The analyses exhibited high accuracy. The chief solute factors that control log Koc, log P, and log k' (on soil and on C18) are the solute size (V(I)/100) and hydrogen-bond basicity (βm). Less important solute factors are the dipolarity/polarizability (π*) and hydrogen-bond acidity (αm). Log k' on soil and log Koc have similar signs in the four fitting coefficients (m, s, b and a) and similar ratios (m:s:b:a), while log k' on C18 and log P have similar signs in coefficients (m, s, b and a) and similar ratios (m:s:b:a). Consequently, log k' values on C18 have good correlations with log P (r > 0.97), while log k' values on soil have good correlations with log Koc (r > 0.98). Two Koc estimation methods were developed, one through solute solvatochromic parameters, and the other through correlations with k' on soil. For HOCs, a linear relationship between the logarithmic capacity factor and the methanol composition of methanol-water mixtures could also be derived in SLCC.

  3. Kinetic Behavior of Escherichia coli on Various Cheeses under Constant and Dynamic Temperature.

    PubMed

    Kim, K; Lee, H; Gwak, E; Yoon, Y

    2014-07-01

    In this study, we developed kinetic models to predict the growth of pathogenic Escherichia coli on cheeses during storage at constant and changing temperatures. A five-strain mixture of pathogenic E. coli was inoculated onto natural cheeses (Brie and Camembert) and processed cheeses (sliced Mozzarella and sliced Cheddar) at 3 to 4 log CFU/g. The inoculated cheeses were stored at 4, 10, 15, 25, and 30°C for 1 to 320 h, with a different storage time being used for each temperature. Total bacteria and E. coli cells were enumerated on tryptic soy agar and MacConkey sorbitol agar, respectively. E. coli growth data were fitted to the Baranyi model to calculate the maximum specific growth rate (μmax; log CFU/g/h), lag phase duration (LPD; h), lower asymptote (log CFU/g), and upper asymptote (log CFU/g). The kinetic parameters were then analyzed as a function of storage temperature, using the square root model, polynomial equation, and linear equation. A dynamic model was also developed for varying temperature. The model performance was evaluated against observed data, and the root mean square error (RMSE) was calculated. At 4°C, E. coli cell growth was not observed on any cheese. However, E. coli growth was observed at 10°C to 30°C with a μmax of 0.01 to 1.03 log CFU/g/h, depending on the cheese. The μmax values increased as temperature increased, while LPD values decreased, and μmax and LPD values were different among the four types of cheese. The developed models showed adequate performance (RMSE = 0.176-0.337), indicating that these models should be useful for describing the growth kinetics of E. coli on various cheeses.
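
    As a sketch of the secondary-modeling step, the snippet below fits a square-root (Ratkowsky-type) model, sqrt(μmax) = b(T - Tmin), to growth rates. The data points are invented within the abstract's reported range, and this model form is one of the temperature functions named above.

        import numpy as np
        from scipy.optimize import curve_fit

        def sqrt_model(T, b, Tmin):
            # square-root secondary model: sqrt(mu_max) = b * (T - Tmin)
            return b * (T - Tmin)

        T = np.array([10.0, 15.0, 25.0, 30.0])        # storage temperatures (deg C)
        mu = np.array([0.01, 0.10, 0.55, 1.03])       # assumed mu_max values (log CFU/g/h)

        (b, Tmin), _ = curve_fit(sqrt_model, T, np.sqrt(mu), p0=[0.03, 5.0])
        print(f"b = {b:.4f} per deg C, Tmin = {Tmin:.1f} deg C")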

  4. Statistical analysis of dendritic spine distributions in rat hippocampal cultures

    PubMed Central

    2013-01-01

    Background Dendritic spines serve as key computational structures in brain plasticity. Much remains to be learned about their spatial and temporal distribution among neurons. Our aim in this study was to perform exploratory analyses based on the population distributions of dendritic spines with regard to their morphological characteristics and period of growth in dissociated hippocampal neurons. We fit a log-linear model to the contingency table of spine features such as spine type and distance from the soma to first determine which features were important in modeling the spines, as well as the relationships between such features. A multinomial logistic regression was then used to predict the spine types using the features suggested by the log-linear model, along with neighboring spine information. Finally, an important variant of Ripley’s K-function applicable to linear networks was used to study the spatial distribution of spines along dendrites. Results Our study indicated that in the culture system, (i) dendritic spine densities were "completely spatially random", (ii) spine type and distance from the soma were independent quantities, and most importantly, (iii) spines had a tendency to cluster with other spines of the same type. Conclusions Although these results may vary with other systems, our primary contribution is the set of statistical tools for morphological modeling of spines which can be used to assess neuronal cultures following gene manipulation such as RNAi, and to study induced pluripotent stem cells differentiated to neurons. PMID:24088199
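
    For readers who want to reproduce the first analysis step, the sketch below fits a log-linear independence model to a small spine-type by distance contingency table. The categories and counts are hypothetical; statsmodels expresses the log-linear model as a Poisson GLM on the cell counts.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        counts = pd.DataFrame({
            "spine_type": ["stubby", "stubby", "mushroom", "mushroom", "thin", "thin"],
            "distance":   ["near", "far"] * 3,
            "n":          [40, 35, 55, 60, 80, 75],
        })

        # independence model: log E[n] = intercept + spine-type effect + distance effect
        fit = smf.glm("n ~ spine_type + distance", data=counts,
                      family=sm.families.Poisson()).fit()

        # a small residual deviance relative to its df supports independence of
        # spine type and distance from the soma, as reported in the abstract
        print(fit.deviance, fit.df_resid)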

  5. Organic Carbon/Water and Dissolved Organic Carbon/Water Partitioning of Cyclic Volatile Methylsiloxanes: Measurements and Polyparameter Linear Free Energy Relationships.

    PubMed

    Panagopoulos, Dimitri; Jahnke, Annika; Kierkegaard, Amelie; MacLeod, Matthew

    2015-10-20

    The sorption of cyclic volatile methyl siloxanes (cVMS) to organic matter has a strong influence on their fate in the aquatic environment. We report new measurements of the partition ratios between freshwater sediment organic carbon and water (KOC) and between Aldrich humic acid dissolved organic carbon and water (KDOC) for three cVMS, and for three polychlorinated biphenyls (PCBs) that were used as reference chemicals. Our measurements were made using a purge-and-trap method that employs benchmark chemicals to calibrate mass transfer at the air/water interface in a fugacity-based multimedia model. The measured log KOC of octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5), and dodecamethylcyclohexasiloxane (D6) were 5.06, 6.12, and 7.07, and log KDOC were 5.05, 6.13, and 6.79. To our knowledge, our measurements for KOC of D6 and KDOC of D4 and D6 are the first reported. Polyparameter linear free energy relationships (PP-LFERs) derived from training sets of empirical data that did not include cVMS generally did not predict our measured partition ratios of cVMS accurately (root-mean-square error (RMSE) for log KOC 0.76 and for log KDOC 0.73). We constructed new PP-LFERs that accurately describe partition ratios for the cVMS as well as for other chemicals by including our new measurements in the existing training sets (log KOC RMSEcVMS: 0.09, log KDOC RMSEcVMS: 0.12). The PP-LFERs we have developed here should be further evaluated and perhaps recalibrated when experimental data for other siloxanes become available.

  6. A time series analysis of the relationship of ambient temperature and common bacterial enteric infections in two Canadian provinces

    NASA Astrophysics Data System (ADS)

    Fleury, Manon; Charron, Dominique F.; Holt, John D.; Allen, O. Brian; Maarouf, Abdel R.

    2006-07-01

    The incidence of enteric infections in the Canadian population varies seasonally, and may be expected to change in response to global climate change. To better understand any potential impact of warmer temperature on enteric infections in Canada, we investigated the relationship between ambient temperature and weekly reports of confirmed cases of three pathogens, Salmonella, pathogenic Escherichia coli and Campylobacter, between 1992 and 2000 in two Canadian provinces. We used generalized linear models (GLMs) and generalized additive models (GAMs) to estimate the effect of seasonal adjustments on the estimated models. We found a strong non-linear association between ambient temperature and the occurrence of all three enteric pathogens in Alberta, Canada, and of Campylobacter in Newfoundland-Labrador. Threshold models were used to quantify the relationship of disease and temperature, with thresholds chosen from 0 to -10°C depending on the pathogen modeled. For Alberta, the log relative risk of Salmonella weekly case counts increased by 1.2%, Campylobacter weekly case counts increased by 2.2%, and E. coli weekly case counts increased by 6.0% for every degree increase in weekly mean temperature. For Newfoundland-Labrador the log relative risk increased by 4.5% for Campylobacter for every degree increase in weekly mean temperature.
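
    A minimal version of the threshold log-linear (Poisson) regression behind the reported percentage increases per degree is sketched below on simulated weekly counts. The threshold of 0 deg C, the simulated series, and the effect size are assumptions for illustration only.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        temp = rng.uniform(-10, 25, size=468)                    # weekly mean temperature
        exposure = np.clip(temp, 0.0, None)                      # 0 deg C threshold
        cases = rng.poisson(np.exp(2.0 + 0.022 * exposure))      # ~2.2% per degree, assumed

        X = sm.add_constant(exposure)
        fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
        pct = 100 * (np.exp(fit.params[1]) - 1)                  # % change in relative risk
        print(f"{pct:.1f}% increase per degree above the threshold")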

  7. Lagrangian Mixing in an Axisymmetric Hurricane Model

    DTIC Science & Technology

    2010-07-23

    The MMR r is found by taking the log of the time series δρ(t) − A1, where A1 is 90% of the minimum value of δρ(t), and the slope of the linear func... Advective mixing in a nondivergent barotropic hurricane model, Atmos. Chem. Phys., 10, 475-497, doi:10.5194/acp-10-475-2010, 2010. Salman, H., Ide, K

  8. A hierarchical model for estimating change in American Woodcock populations

    USGS Publications Warehouse

    Sauer, J.R.; Link, W.A.; Kendall, W.L.; Kelley, J.R.; Niven, D.K.

    2008-01-01

    The Singing-Ground Survey (SGS) is a primary source of information on population change for American woodcock (Scolopax minor). We analyzed the SGS using a hierarchical log-linear model and compared the estimates of change and annual indices of abundance to a route regression analysis of SGS data. We also grouped SGS routes into Bird Conservation Regions (BCRs) and estimated population change and annual indices using BCRs within states and provinces as strata. Based on the hierarchical model-based estimates, we concluded that woodcock populations were declining in North America between 1968 and 2006 (trend = -0.9%/yr, 95% credible interval: -1.2, -0.5). Singing-Ground Survey results are generally similar between analytical approaches, but the hierarchical model has several important advantages over the route regression. Hierarchical models better accommodate changes in survey efficiency over time and space by treating strata, years, and observers as random effects in the context of a log-linear model, providing trend estimates that are derived directly from the annual indices. We also conducted a hierarchical model analysis of woodcock data from the Christmas Bird Count and the North American Breeding Bird Survey. All surveys showed general consistency in patterns of population change, but the SGS had the shortest credible intervals. We suggest that population management and conservation planning for woodcock involving interpretation of the SGS use estimates provided by the hierarchical model.

  9. White noise analysis of Phycomyces light growth response system. I. Normal intensity range.

    PubMed Central

    Lipson, E D

    1975-01-01

    The Wiener-Lee-Schetzen method for the identification of a nonlinear system through white gaussian noise stimulation was applied to the transient light growth response of the sporangiophore of Phycomyces. In order to cover a moderate dynamic range of light intensity I, the input variable was defined to be log I. The experiments were performed in the normal range of light intensity, centered about I0 = 10(-6) W/cm2. The kernels of the Wiener functionals were computed up to second order. Within the range of a few decades the system is reasonably linear with log I. The main nonlinear feature of the second-order kernel corresponds to the property of rectification. Power spectral analysis reveals that the slow dynamics of the system are of at least fifth order. The system can be represented approximately by a linear transfer function, including a first-order high-pass (adaptation) filter with a 4 min time constant and an underdamped fourth-order low-pass filter. Accordingly, a linear electronic circuit was constructed to simulate the small scale response characteristics. In terms of the adaptation model of Delbrück and Reichardt (1956, in Cellular Mechanisms in Differentiation and Growth, Princeton University Press), kernels were deduced for the dynamic dependence of the growth velocity (output) on the "subjective intensity", a presumed internal variable. Finally the linear electronic simulator above was generalized to accommodate the large scale nonlinearity of the adaptation model and to serve as a tool for a deeper test of the model. PMID:1203444

  10. Estimation of transformation parameters for microarray data.

    PubMed

    Durbin, Blythe; Rocke, David M

    2003-07-22

    Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
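
    One common parameterization of the generalized-log family is glog(z) = log(z + sqrt(z^2 + c^2)), which behaves like log for large z and stays nearly linear near zero. The sketch below picks c by a simple variance-stability grid search on simulated intensities; this stands in for, and is cruder than, the likelihood-based estimation described above.

        import numpy as np

        def glog(z, c):
            # generalized log; equals arcsinh(z/c) + log(c)
            return np.log(z + np.sqrt(z**2 + c**2))

        rng = np.random.default_rng(3)
        mu = rng.uniform(50, 5000, size=300)                  # true intensities
        y = mu + rng.normal(0.0, 30 + 0.15 * mu)              # additive + proportional noise

        def spread_of_groupwise_sd(c):
            t = glog(y, c)
            groups = np.array_split(t[np.argsort(mu)], 10)    # bins of increasing intensity
            return np.std([g.std() for g in groups])          # 0 => perfectly stabilized

        cs = np.geomspace(1.0, 1e4, 60)
        print("chosen c:", cs[np.argmin([spread_of_groupwise_sd(c) for c in cs])])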

  11. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
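
    The first optimization route, fitting a pdf to unit-hydrograph ordinates by nonlinear least squares, can be sketched as follows for the two-parameter gamma case. The ordinates are invented for illustration; the study's catchment data are not used.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import gamma

        t = np.arange(1, 13, dtype=float)                       # time (h)
        uh = np.array([0.02, 0.10, 0.19, 0.21, 0.17, 0.12,      # UH ordinates, sum ~ 1
                       0.08, 0.05, 0.03, 0.02, 0.007, 0.003])

        def gamma_uh(t, shape, scale):
            return gamma.pdf(t, shape, scale=scale)

        (shape, scale), _ = curve_fit(gamma_uh, t, uh, p0=[3.0, 1.5])
        t_peak = (shape - 1.0) * scale                          # mode of the gamma pdf
        print(f"shape = {shape:.2f}, scale = {scale:.2f}, time to peak = {t_peak:.2f} h")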

  12. Correlation between Gas Bubble Formation and Hydrogen Evolution Reaction Kinetics at Nanoelectrodes.

    PubMed

    Chen, Qianjin; Luo, Long

    2018-04-17

    We report the correlation between H2 gas bubble formation potential and hydrogen evolution reaction (HER) activity for Au and Pt nanodisk electrodes (NEs). Microkinetic models were formulated to obtain the HER kinetic information for individual Au and Pt NEs. We found that the rate-determining steps for the HER at Au and Pt NEs were the Volmer step and the Heyrovsky step, respectively. More interestingly, the standard rate constant (k0) of the rate-determining step was found to vary over 2 orders of magnitude for the same type of NEs. The observed variations indicate the HER activity heterogeneity at the nanoscale. Furthermore, we discovered a linear relationship between bubble formation potential (Ebubble) and log(k0) with a slope of 125 mV/decade for both Au and Pt NEs. As log(k0) increases, Ebubble shifts linearly to more positive potentials, meaning NEs with higher HER activities form H2 bubbles at less negative potentials. Our theoretical model suggests that such a linear relationship is caused by the similar critical bubble formation condition for Au and Pt NEs with varied sizes. Our results have potential implications for using gas bubble formation to evaluate the HER activity distribution of nanoparticles in an ensemble.

  13. A Spreadsheet for a 2 x 3 x 2 Log-Linear Analysis. AIR 1991 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Saupe, Joe L.

    This paper describes a personal computer spreadsheet set up to carry out hierarchical log-linear analyses, a type of analysis useful for institutional research into multidimensional frequency tables formed from categorical variables such as faculty rank, student class level, gender, or retention status. The spreadsheet provides a concrete vehicle…

  14. Standard Errors of Equating for the Percentile Rank-Based Equipercentile Equating with Log-Linear Presmoothing

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2009-01-01

    Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…

  15. A combined QSAR and partial order ranking approach to risk assessment.

    PubMed

    Carlsen, L

    2006-04-01

    QSAR generated data appear as an attractive alternative to experimental data as foreseen in the proposed new chemicals legislation REACH. A preliminary risk assessment for the aquatic environment can be based on few factors, i.e. the octanol-water partition coefficient (Kow), the vapour pressure (VP) and the potential biodegradability of the compound in combination with the predicted no-effect concentration (PNEC) and the actual tonnage in which the substance is produced. Application of partial order ranking, allowing simultaneous inclusion of several parameters leads to a mutual prioritisation of the investigated substances, the prioritisation possibly being further analysed through the concept of linear extensions and average ranks. The ranking uses endpoint values (log Kow and log VP) derived from strictly linear 'noise-deficient' QSAR models as input parameters. Biodegradation estimates were adopted from the BioWin module of the EPI Suite. The population growth impairment of Tetrahymena pyriformis was used as a surrogate for fish lethality.

  16. Psychometric functions for pure-tone frequency discrimination.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2011-07-01

    The form of the psychometric function (PF) for auditory frequency discrimination is of theoretical interest and practical importance. In this study, PFs for pure-tone frequency discrimination were measured for several standard frequencies (200-8000 Hz) and levels [35-85 dB sound pressure level (SPL)] in normal-hearing listeners. The proportion-correct data were fitted using a cumulative-Gaussian function of the sensitivity index, d', computed as a power transformation of the frequency difference, Δf. The exponent of the power function corresponded to the slope of the PF on log(d')-log(Δf) coordinates. The influence of attentional lapses on PF-slope estimates was investigated. When attentional lapses were not taken into account, the estimated PF slopes on log(d')-log(Δf) coordinates were found to be significantly lower than 1, suggesting a nonlinear relationship between d' and Δf. However, when lapse rate was included as a free parameter in the fits, PF slopes were found not to differ significantly from 1, consistent with a linear relationship between d' and Δf. This was the case across the wide ranges of frequencies and levels tested in this study. Therefore, spectral and temporal models of frequency discrimination must account for a linear relationship between d' and Δf across a wide range of frequencies and levels.

  17. Flow-covariate prediction of stream pesticide concentrations.

    PubMed

    Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin

    2018-01-01

    Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273.
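
    The target quantities themselves are easy to state in code: given a daily concentration series, the maximum m-day rolling averages and the 95th percentile follow in a few lines of pandas. The synthetic series below is only a stand-in for the monitored atrazine data.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)
        days = pd.date_range("2010-01-01", periods=365, freq="D")
        conc = pd.Series(np.exp(rng.normal(0.0, 1.0, 365)), index=days)  # daily concentrations

        targets = {f"max {m}-day avg": conc.rolling(m).mean().max() for m in (1, 7, 14, 30, 60)}
        targets["95th percentile"] = conc.quantile(0.95)
        print(targets)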

  18. The prisoner's dilemma as a cancer model.

    PubMed

    West, Jeffrey; Hasnain, Zaki; Mason, Jeremy; Newton, Paul K

    2016-09-01

    Tumor development is an evolutionary process in which a heterogeneous population of cells with different growth capabilities compete for resources in order to gain a proliferative advantage. What are the minimal ingredients needed to recreate some of the emergent features of such a developing complex ecosystem? What is a tumor doing before we can detect it? We outline a mathematical model, driven by a stochastic Moran process, in which cancer cells and healthy cells compete for dominance in the population. Each are assigned payoffs according to a Prisoner's Dilemma evolutionary game where the healthy cells are the cooperators and the cancer cells are the defectors. With point mutational dynamics, heredity, and a fitness landscape controlling birth and death rates, natural selection acts on the cell population and simulated 'cancer-like' features emerge, such as Gompertzian tumor growth driven by heterogeneity, the log-kill law which (linearly) relates therapeutic dose density to the (log) probability of cancer cell survival, and the Norton-Simon hypothesis which (linearly) relates tumor regression rates to tumor growth rates. We highlight the utility, clarity, and power that such models provide, despite (and because of) their simplicity and built-in assumptions.

  19. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In both the synthetic and the field data tests, the very minor differences between the results obtained with the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two different RPMs for reservoir characterization in the investigated area.

  20. Modelling of binary logistic regression for obesity among secondary students in a rural area of Kedah

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model the log odds of the dichotomous outcome is modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression provides a description of the probabilistic relationship of the variables and the outcome. In conducting logistic regression, selection procedures are used in selecting important predictor variables; diagnostics are used to check that assumptions are valid (independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers); and a test statistic is calculated to determine the aptness of the model. This study used the binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity of students are influenced by obesity in the family and by the interaction between a student's ethnicity and routine meal intake. The odds of a student being overweight or obese are higher for a student with a family history of obesity and for a non-Malay student who frequently takes routine meals, as compared to a Malay student.
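
    A compact version of the model form described above, with a main effect for family obesity and an ethnicity-by-meals interaction, is sketched below on fabricated records. Exponentiated coefficients give the odds ratios the abstract interprets; none of the numbers correspond to the study's data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        df = pd.DataFrame({
            "obese": rng.integers(0, 2, 400),                       # 1 = overweight/obese
            "family_obesity": rng.integers(0, 2, 400),
            "routine_meals": rng.integers(0, 2, 400),               # 1 = frequent routine meals
            "ethnicity": rng.choice(["malay", "non_malay"], 400),
        })

        # log odds of obesity as a linear combination of the predictors
        fit = smf.logit("obese ~ family_obesity + ethnicity * routine_meals", data=df).fit(disp=0)
        print(np.exp(fit.params))                                   # odds ratios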

  1. Determinants of Anabolic-Androgenic Steroid Risk Perceptions in Youth Populations: A Multivariate Analysis

    ERIC Educational Resources Information Center

    Denham, Bryan E.

    2009-01-01

    Grounded conceptually in social cognitive theory, this research examines how personal, behavioral, and environmental factors are associated with risk perceptions of anabolic-androgenic steroids. Ordinal logistic regression and logit log-linear models applied to data gathered from high-school seniors (N = 2,160) in the 2005 Monitoring the Future…

  2. Using Configural Frequency Analysis as a Person-Centered Analytic Approach with Categorical Data

    ERIC Educational Resources Information Center

    Stemmler, Mark; Heine, Jörg-Henrik

    2017-01-01

    Configural frequency analysis and log-linear modeling are presented as person-centered analytic approaches for the analysis of categorical or categorized data in multi-way contingency tables. Person-centered developmental psychology, based on the holistic interactionistic perspective of the Stockholm working group around David Magnusson and Lars…

  3. Full analogue electronic realisation of the Hodgkin-Huxley neuronal dynamics in weak-inversion CMOS.

    PubMed

    Lazaridis, E; Drakakis, E M; Barahona, M

    2007-01-01

    This paper presents a non-linear analog synthesis path towards the modeling and full implementation of the Hodgkin-Huxley neuronal dynamics in silicon. The proposed circuits have been realized in weak-inversion CMOS technology and take advantage of both log-domain and translinear transistor-level techniques.

  4. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…

  5. Interrelation of creep and relaxation: a modeling approach for ligaments.

    PubMed

    Lakes, R S; Vanderby, R

    1999-12-01

    Experimental data (Thornton et al., 1997) show that relaxation proceeds more rapidly (a greater slope on a log-log scale) than creep in ligament, a fact not explained by linear viscoelasticity. An interrelation between creep and relaxation is therefore developed for ligaments based on a single-integral nonlinear superposition model. This interrelation differs from the convolution relation obtained by Laplace transforms for linear materials. We demonstrate via continuum concepts of nonlinear viscoelasticity that such a difference in rate between creep and relaxation phenomenologically occurs when the nonlinearity is of a strain-stiffening type, i.e., the stress-strain curve is concave up as observed in ligament. We also show that it is inconsistent to assume a Fung-type constitutive law (Fung, 1972) for both creep and relaxation. Using the published data of Thornton et al. (1997), the nonlinear interrelation developed herein predicts creep behavior from relaxation data well (R ≥ 0.998). Although data are limited and the causal mechanisms associated with viscoelastic tissue behavior are complex, continuum concepts demonstrated here appear capable of interrelating creep and relaxation with fidelity.

  6. Air-sea exchange and gas-particle partitioning of polycyclic aromatic hydrocarbons over the northwestern Pacific Ocean: Role of East Asian continental outflow.

    PubMed

    Wu, Zilan; Lin, Tian; Li, Zhongxia; Jiang, Yuqing; Li, Yuanyuan; Yao, Xiaohong; Gao, Huiwang; Guo, Zhigang

    2017-11-01

    We measured 15 parent polycyclic aromatic hydrocarbons (PAHs) in atmosphere and water during a research cruise from the East China Sea (ECS) to the northwestern Pacific Ocean (NWP) in the spring of 2015 to investigate the occurrence, air-sea gas exchange, and gas-particle partitioning of PAHs, with a particular focus on the influence of East Asian continental outflow. The gaseous PAH composition and identification of sources were consistent with PAHs from the upwind area, indicating that the gaseous PAHs (three- to five-ring PAHs) were influenced by upwind land pollution. In addition, air-sea exchange fluxes of gaseous PAHs were estimated to be -54.2 to 107.4 ng m⁻² d⁻¹, indicative of variations in land-based PAH inputs. The logarithmic gas-particle partition coefficient (log Kp) of PAHs regressed linearly against the logarithmic subcooled liquid vapor pressure (log PL0), with a slope of -0.25. This was significantly larger than the theoretical value (-1), implying disequilibrium between the gaseous and particulate PAHs over the NWP. The non-equilibrium of PAH gas-particle partitioning was shielded from the volatilization of three-ring gaseous PAHs from seawater and lower soot concentrations, in particular when the oceanic air masses prevailed. Modeling PAH absorption into organic matter and adsorption onto soot carbon revealed that the status of PAH gas-particle partitioning deviated more from the modeled Kp for oceanic air masses than for continental air masses, which coincided with higher volatilization of three-ring PAHs and confirmed the influence of air-sea exchange. Meanwhile, significant linear regressions between log Kp and log Koa (log Ksa) for PAHs were observed for continental air masses, suggesting the dominant effect of East Asian continental outflow on atmospheric PAHs over the NWP during the sampling campaign.

  7. Main controlling factors and forecasting models of lead accumulation in earthworms based on low-level lead-contaminated soils.

    PubMed

    Tang, Ronggui; Ding, Changfeng; Ma, Yibing; Wan, Mengxue; Zhang, Taolin; Wang, Xingxiang

    2018-06-02

    To explore the main controlling factors in soil and build a predictive model between the lead concentrations in earthworms (Pbearthworm) and the soil physicochemical parameters, 13 soils with low levels of lead contamination were used to conduct toxicity experiments using earthworms. The results indicated that a relatively high bioaccumulation factor appeared in the soils with low pH values. The lead concentrations between earthworms and soils after log transformation had a significantly positive correlation (R² = 0.46, P < 0.0001, n = 39). Stepwise multiple linear regression analysis derived a fitting empirical model between Pbearthworm and the soil physicochemical properties: log(Pbearthworm) = 0.96 log(Pbsoil) - 0.74 log(OC) - 0.22 pH + 0.95 (R² = 0.66, n = 39). Furthermore, path analysis confirmed that the Pb concentrations in the soil (Pbsoil), soil pH, and soil organic carbon (OC) were the primary controlling factors of Pbearthworm, with high pathway parameters (0.71, -0.51, and -0.49, respectively). The predictive model based on Pbearthworm in a nationwide range of soils with low-level lead contamination could provide a reference for the establishment of safety thresholds in Pb-contaminated soils from the perspective of soil-animal systems.

  8. Estradiol and inflammatory markers in older men.

    PubMed

    Maggio, Marcello; Ceda, Gian Paolo; Lauretani, Fulvio; Bandinelli, Stefania; Metter, E Jeffrey; Artoni, Andrea; Gatti, Elisa; Ruggiero, Carmelinda; Guralnik, Jack M; Valenti, Giorgio; Ling, Shari M; Basaria, Shehzad; Ferrucci, Luigi

    2009-02-01

    Aging is characterized by a mild proinflammatory state. In older men, low testosterone levels have been associated with increasing levels of proinflammatory cytokines. It is still unclear whether estradiol (E2), which generally has biological activities complementary to testosterone, affects inflammation. We analyzed data obtained from 399 men aged 65-95 yr enrolled in the Invecchiare in Chianti study with complete data on body mass index (BMI), serum E2, testosterone, IL-6, soluble IL-6 receptor, TNF-alpha, IL-1 receptor antagonist, and C-reactive protein. The relationship between E2 and inflammatory markers was examined using multivariate linear models adjusted for age, BMI, smoking, physical activity, chronic disease, and total testosterone. In age-adjusted analysis, log(E2) was positively associated with log(IL-6) (r = 0.19; P = 0.047), and the relationship was statistically significant (P = 0.032) after adjustments for age, BMI, smoking, physical activity, chronic disease, and serum testosterone levels. Log(E2) was not significantly associated with log(C-reactive protein), log(soluble IL-6 receptor), or log(TNF-alpha) in both age-adjusted and fully adjusted analyses. In older men, E2 is weakly positively associated with IL-6, independent of testosterone and other confounders including BMI.

  9. Interpretation of a compositional time series

    NASA Astrophysics Data System (ADS)

    Tolosana-Delgado, R.; van den Boogaart, K. G.

    2012-04-01

    Common methods for multivariate time series analysis use linear operations, from the definition of a time-lagged covariance/correlation to the prediction of new outcomes. However, when the time series response is a composition (a vector of positive components showing the relative importance of a set of parts in a total, like percentages and proportions), then linear operations are afflicted by several problems. For instance, it has long been recognised that (auto/cross-)correlations between raw percentages are spurious, more dependent on which other components are being considered than on any natural link between the components of interest. Also, a long-term forecast of a composition in models with a linear trend will ultimately predict negative components. In general terms, compositional data should not be treated on the raw scale, but after a log-ratio transformation (Aitchison, 1986: The statistical analysis of compositional data. Chapman and Hall). This is so because the information conveyed by compositional data is relative, as stated in their definition. The principle of working in coordinates allows any sort of multivariate analysis to be applied to a log-ratio transformed composition, as long as this transformation is invertible. This principle is of full application to time series analysis. We will discuss how results (both auto/cross-correlation functions and predictions) can be back-transformed, viewed and interpreted in a meaningful way. One view is to use the exhaustive set of all possible pairwise log-ratios, which allows the results to be expressed as D(D - 1)/2 separate, interpretable sets of one-dimensional models showing the behaviour of each possible pairwise log-ratio. Another view is the interpretation of estimated coefficients or correlations back-transformed in terms of compositions. These two views are compatible and complementary. These issues are illustrated with time series of seasonal precipitation patterns at different rain gauges of the USA. In this data set, the proportion of annual precipitation falling in winter, spring, summer and autumn is considered a 4-component time series. Three invertible log-ratios are defined for calculations, balancing rainfall in autumn vs. winter, in summer vs. spring, and in autumn-winter vs. spring-summer. Results suggest a 2-year correlation range, and certain oscillatory behaviour in the last balance, which does not occur in the other two.
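
    The principle of working in coordinates can be made concrete in a few lines: below, a 4-part seasonal composition is mapped to three balance coordinates like those described (autumn vs. winter, summer vs. spring, autumn-winter vs. spring-summer), which can then be modeled with ordinary time-series tools and back-transformed. The proportions are invented, not the USA rain-gauge data.

        import numpy as np

        # rows = years; columns = [autumn, winter, spring, summer] proportions
        x = np.array([[0.30, 0.28, 0.18, 0.24],
                      [0.33, 0.25, 0.20, 0.22],
                      [0.28, 0.30, 0.19, 0.23]])
        aut, win, spr, smr = x.T

        b1 = np.sqrt(1/2) * np.log(aut / win)                     # autumn vs. winter
        b2 = np.sqrt(1/2) * np.log(smr / spr)                     # summer vs. spring
        b3 = np.log(np.sqrt(aut * win) / np.sqrt(spr * smr))      # aut-win vs. spr-sum

        coords = np.column_stack([b1, b2, b3])   # analyze with standard multivariate tools
        print(coords)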

  10. Small-Sample DIF Estimation Using Log-Linear Smoothing: A SIBTEST Application. Research Report. ETS RR-07-10

    ERIC Educational Resources Information Center

    Puhan, Gautam; Moses, Tim P.; Yu, Lei; Dorans, Neil J.

    2007-01-01

    The purpose of the current study was to examine whether log-linear smoothing of observed score distributions in small samples results in more accurate differential item functioning (DIF) estimates under the simultaneous item bias test (SIBTEST) framework. Data from a teacher certification test were analyzed using White candidates in the reference…

  11. Using nonlinear quantile regression to estimate the self-thinning boundary curve

    Treesearch

    Quang V. Cao; Thomas J. Dean

    2015-01-01

    The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...

  12. Competing regression models for longitudinal data.

    PubMed

    Alencar, Airlane P; Singer, Julio M; Rocha, Francisco Marcelo M

    2012-03-01

    The choice of an appropriate family of linear models for the analysis of longitudinal data is often a matter of concern for practitioners. To attenuate such difficulties, we discuss some issues that emerge when analyzing this type of data via a practical example involving pretest-posttest longitudinal data. In particular, we consider log-normal linear mixed models (LNLMM), generalized linear mixed models (GLMM), and models based on generalized estimating equations (GEE). We show how some special features of the data, like a nonconstant coefficient of variation, may be handled in the three approaches and evaluate their performance with respect to the magnitude of standard errors of interpretable and comparable parameters. We also show how different diagnostic tools may be employed to identify outliers and comment on available software. We conclude by noting that the results are similar, but that GEE-based models may be preferable when the goal is to compare the marginal expected responses.

  13. Influence of water activity on inactivation of Escherichia coli O157:H7, Salmonella Typhimurium and Listeria monocytogenes in peanut butter by microwave heating.

    PubMed

    Song, Won-Jae; Kang, Dong-Hyun

    2016-12-01

    This study evaluated the efficacy of a 915 MHz microwave at 3 different electric power levels to inactivate three pathogens in peanut butter with different aw. Peanut butter inoculated with Escherichia coli O157:H7, Salmonella enterica serovar Typhimurium and Listeria monocytogenes (0.3, 0.4, and 0.5 aw) was treated with a 915 MHz microwave at 2, 4, and 6 kW for up to 5 min. Six kW 915 MHz microwave treatment for 5 min reduced these three pathogens by 1.97 to >5.17 log CFU/g. Four kW 915 MHz microwave processing for 5 min reduced these pathogens by 0.41-1.98 log CFU/g. Two kW microwave heating did not inactivate pathogens in peanut butter. Weibull and Log-Linear + Shoulder models were used to describe the survival curves of the three pathogens because they exhibited shouldering behavior. Td and T5d values were calculated based on the Weibull and Log-Linear + Shoulder models. Td values of the three pathogens were similar to D-values of Salmonella subjected to conventional heating at 90 °C, but T5d values were much shorter than those of conventional heating at 90 °C. Generally, increased aw resulted in shorter T5d values of pathogens, but not shorter Td values. The results of this study can be used to optimize microwave heating pasteurization systems for peanut butter.
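
    The Weibull survival model named above, log10(N/N0) = -(t/delta)^p, can be fit by least squares as sketched below. The survival points are invented, and Td and T5d are taken here as the times to 1 and 5 decimal reductions implied by the fitted curve.

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])              # treatment time (min)
        logS = np.array([0.0, -0.1, -0.6, -1.6, -3.0, -4.8])      # log10(N/N0), assumed

        def weibull(t, delta, p):
            return -(t / delta) ** p

        (delta, p), _ = curve_fit(weibull, t, logS, p0=[3.0, 1.5])
        Td = delta                         # -(Td/delta)^p = -1  =>  Td = delta
        T5d = delta * 5 ** (1.0 / p)       # time to a 5-log reduction
        print(f"delta = {delta:.2f} min, p = {p:.2f}, Td = {Td:.2f}, T5d = {T5d:.2f}")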

  14. Bond-based linear indices of the non-stochastic and stochastic edge-adjacency matrix. 1. Theory and modeling of ChemPhys properties of organic molecules.

    PubMed

    Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo

    2010-11-01

    Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is here proposed as a new molecular representation easily calculated from E(k). Then, the kth stochastic bond linear indices are calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability for QSPR studies by analyzing regressions of the novel indices for selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl-alcohols (second round), and for the modeling of the specific rate constant (log k), partition coefficient (log P), as well as the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertex-based connectivity indices, total and local spectral moments, and quantum chemical descriptors, as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies. Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity/diversity analysis and drug discovery protocols.

  15. A method of improving sensitivity of carbon/oxygen well logging for low porosity formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Juntao; Zhang, Feng; Zhang, Quanying

    2016-12-01

    The Carbon/Oxygen (C/O) spectral logging technique has been widely used to determine residual oil saturation and to evaluate water-flooded layers. In order to improve the sensitivity of the technique for low-porosity formations, Gaussian and linear models are applied to fit the peaks of measured spectra to obtain the characteristic coefficients. Standard spectra of carbon and oxygen are combined to establish a new carbon/oxygen value calculation method, and the robustness of the new method is cross-validated with a known mixed gamma-ray spectrum. Formation models for different porosities and saturations are built using the Monte Carlo method. Carbon/oxygen responses are calculated by both the conventional energy-window method and the new method, and the new method is applied to oil saturation under low-porosity conditions. The results show the new method can reduce the effects of gamma rays contaminated by the interaction between neutrons and other elements on the carbon/oxygen ratio, and therefore can significantly improve the response sensitivity of carbon/oxygen well logging to oil saturation. The new method greatly improves carbon/oxygen well logging in low-porosity conditions.
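
    A compact sketch of the peak-fitting step described above: a Gaussian photopeak on a linear background, fitted by nonlinear least squares. The 4.44 MeV carbon line and all numbers are illustrative placeholders, not measured spectra.

      # Sketch: Gaussian peak + linear background fit with scipy.
      import numpy as np
      from scipy.optimize import curve_fit

      def peak(E, A, mu, sigma, a, b):
          return A * np.exp(-0.5 * ((E - mu) / sigma) ** 2) + a * E + b

      E = np.linspace(3.0, 6.0, 120)  # MeV
      counts = peak(E, 500, 4.44, 0.15, -20, 300) \
               + np.random.default_rng(1).normal(0, 5, E.size)

      p0 = (400, 4.4, 0.2, 0, 250)    # rough initial guesses
      params, _ = curve_fit(peak, E, counts, p0=p0)
      print(dict(zip(["A", "mu", "sigma", "a", "b"], params)))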

  16. Numerical simulations of induction and MWD logging tools and data inversion method with X-window interface on a UNIX workstation

    NASA Astrophysics Data System (ADS)

    Tian, Xiang-Dong

    The purpose of this research is to simulate induction and measuring-while-drilling (MWD) logs. In the simulation of logs, there are two tasks. The first task, the forward modeling procedure, is to compute the logs from a known formation. The second task, the inversion procedure, is to determine the unknown properties of the formation from the measured field logs. In general, the inversion procedure requires the solution of a forward model. In this study, a stable numerical method to simulate induction and MWD logs is presented. The proposed algorithm is based on a horizontal eigenmode expansion method. Vertical propagation of modes is modeled by a three-layer module. The multilayer cases are treated as a cascade of these modules. The mode tracing algorithm possesses stable characteristics that are superior to other methods. This method is applied to simulate the logs in formations with both vertical and horizontal layers, and is also used to study the groove effects of the MWD tool. The results are very good. Two-dimensional inversion of induction logs is a nonlinear problem. Nonlinear functions of the apparent conductivity are expanded into a Taylor series. After truncating the high order terms in this Taylor series, the nonlinear functions are linearized. An iterative procedure is then devised to solve the inversion problem. In each iteration, the Jacobian matrix is calculated, and a small variation computed using the least-squares method is used to modify the background medium. Finally, the inverted medium is obtained. The horizontal eigenstate method is used to solve the forward problem. It is found that a good inverted formation can be obtained from the measurements. In order to help the user simulate induction logs conveniently, a Wellog Simulator, based on the X-window system, is developed. The application software (FORTRAN codes) embedded in the Simulator is designed to simulate the responses of the induction tools in layered formations with dipping beds. The graphic user-interface part of the Wellog Simulator is implemented with C and Motif. Through the user interface, the user can prepare the simulation data, select the tools, simulate the logs and plot the results.

  17. Comparison between Surrogate Indexes of Insulin Sensitivity/Resistance and Hyperinsulinemic Euglycemic Glucose Clamps in Rhesus Monkeys

    PubMed Central

    Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.

    2011-01-01

    The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance including quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA) have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. Data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, Log HOMA, 1/HOMA, and 1/Fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic. Most surrogates had modest linear correlations with SIClamp (r ≈ 0.4–0.64) with comparable correlation coefficients. Predictive accuracy determined by calibration model analysis demonstrated better predictive accuracy of QUICKI than HOMA and Log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies. PMID:21209021
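
    For reference, the standard formulas behind the surrogate indexes compared above (fasting insulin I0 in microU/mL, fasting glucose G0 in mg/dL); the example values are arbitrary.

      # Surrogate insulin sensitivity/resistance indexes.
      import math

      def quicki(I0, G0):
          return 1.0 / (math.log10(I0) + math.log10(G0))

      def homa_ir(I0, G0):
          return I0 * G0 / 405.0  # use 22.5 when glucose is in mmol/L

      I0, G0 = 12.0, 95.0
      print(quicki(I0, G0),
            homa_ir(I0, G0),
            math.log10(homa_ir(I0, G0)),  # log HOMA
            1.0 / homa_ir(I0, G0),        # 1/HOMA
            1.0 / I0)                     # 1/fasting insulin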

  18. Inactivation modeling of human enteric virus surrogates, MS2, Qβ, and ΦX174, in water using UVC-LEDs, a novel disinfecting system.

    PubMed

    Kim, Do-Kyun; Kim, Soo-Ji; Kang, Dong-Hyun

    2017-01-01

    In order to assure the microbial safety of drinking water, UVC-LED treatment has emerged as a possible technology to replace the use of conventional low pressure (LP) mercury vapor UV lamps. In this investigation, inactivation of Human Enteric Virus (HuEV) surrogates with UVC-LEDs was investigated in a water disinfection system, and kinetic model equations were applied to depict the surviving infectivities of the viruses. MS2, Qβ, and ΦX174 bacteriophages were inoculated into sterile distilled water (DW) and irradiated with UVC-LED printed circuit boards (PCBs) (266 nm and 279 nm) or conventional LP lamps. Infectivities of bacteriophages were effectively reduced by up to 7 log after 9 mJ/cm² treatment for MS2 and Qβ, and 1 mJ/cm² for ΦX174. UVC-LEDs showed a superior viral inactivation effect compared to conventional LP lamps at the same dose (1 mJ/cm²). Non-log linear plot patterns were observed, so that Weibull, Biphasic, Log linear-tail, and Weibull-tail model equations were used to fit the virus survival curves. For MS2 and Qβ, Weibull and Biphasic models fit well with R² values approximately equal to 0.97-0.99, and the Weibull-tail equation accurately described survival of ΦX174. The level of UV susceptibility among coliphages measured by the inactivation rate constant, k, was statistically different (ΦX174 (ssDNA) > MS2, Qβ (ssRNA)), and indicated that sensitivity to UV was attributable to the viral genetic material. Copyright © 2016 Elsevier Ltd. All rights reserved.
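
    The biphasic and Weibull-tail forms named above can be written as model functions and fitted exactly like the survival models sketched earlier; the parameterizations below are common conventions, assumed rather than taken from the paper.

      # Sketch: biphasic and Weibull-tail survival model functions.
      import numpy as np

      def biphasic(t, logN0, f, k1, k2):
          # two subpopulations inactivated log-linearly at rates k1 and k2
          return logN0 + np.log10(f * np.exp(-k1 * t) + (1 - f) * np.exp(-k2 * t))

      def weibull_tail(t, logN0, logNres, delta, p):
          # Weibull decay toward a resistant residual tail at logNres
          return np.log10((10 ** logN0 - 10 ** logNres) * 10 ** (-(t / delta) ** p)
                          + 10 ** logNres)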

  19. Walking training and cortisol to DHEA-S ratio in postmenopause: An intervention study.

    PubMed

    Di Blasio, Andrea; Izzicupo, Pascal; Di Baldassarre, Angela; Gallina, Sabina; Bucci, Ines; Giuliani, Cesidio; Di Santo, Serena; Di Iorio, Angelo; Ripari, Patrizio; Napolitano, Giorgio

    2018-04-01

    The literature indicates that the plasma cortisol-to-dehydroepiandrosterone-sulfate (DHEA-S) ratio is a marker of health status after menopause, when a decline in both estrogen and DHEA-S and an increase in cortisol occur. An increase in the cortisol-to-DHEA-S ratio has been positively correlated with metabolic syndrome, all-cause mortality, cancer, and other diseases. The aim of this study was to investigate the effects of a walking program on the plasma cortisol-to-DHEA-S ratio in postmenopausal women. Fifty-one postmenopausal women participated in a 13-week supervised walking program, in the metropolitan area of Pescara (Italy), from June to September 2013. Participants were evaluated in April-May and September-October of the same year. The linear mixed model showed that the variation of the log10 Cortisol-to-log10 DHEA-S ratio was associated with the volume of exercise (p = .03). Participants having lower adherence to the walking program did not have a significantly modified log10 Cortisol or log10 DHEA-S, while those having the highest adherence had a significant reduction in log10 Cortisol (p = .016) and a nearly significant increase in log10 DHEA-S (p = .084). Walking training appeared to reduce the plasma log10 Cortisol-to-log10 DHEA-S ratio, although a minimum level of training was necessary to achieve this significant reduction.

  1. Determination of reversed-phase high performance liquid chromatography based octanol-water partition coefficients for neutral and ionizable compounds: Methodology evaluation.

    PubMed

    Liang, Chao; Qiao, Jun-Qin; Lian, Hong-Zhen

    2017-12-15

    Reversed-phase liquid chromatography (RPLC) based octanol-water partition coefficient (logP) or distribution coefficient (logD) determination methods were revisited and assessed comprehensively. Classic isocratic and some gradient RPLC methods were conducted and evaluated for neutral, weak acid and basic compounds. Different lipophilicity indexes in logP or logD determination were discussed in detail, including the retention factor logkw corresponding to neat water as mobile phase, extrapolated via the linear solvent strength (LSS) model from isocratic runs and calculated with software from gradient runs, the chromatographic hydrophobicity index (CHI), the apparent gradient capacity factor (kg'), and the gradient retention time (tg). Among the lipophilicity indexes discussed, logkw from either isocratic or gradient elution methods correlated best with logP or logD. Therefore, logkw is recommended as the preferred lipophilicity index for logP or logD determination. logkw easily calculated from methanol gradient runs might be the main candidate to replace logkw calculated from the classic isocratic runs as the ideal lipophilicity index. These revisited RPLC methods were not applicable to strongly ionized compounds that can hardly be ion-suppressed. A previously reported imperfect ion-pair RPLC (IP-RPLC) method was attempted and further explored for studying distribution coefficients (logD) of sulfonic acids that are totally ionized in the mobile phase. Notably, experimental logD values of sulfonic acids were given for the first time. The IP-RPLC method provided a distinct way to explore logD values of ionized compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
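
    A minimal sketch of the isocratic extrapolation step: under the linear solvent strength model, log k = log kw - S * phi, so log kw is the intercept of a straight-line fit against the organic modifier fraction. The retention factors below are invented.

      # Sketch: extrapolating retention factors to neat water (phi = 0).
      import numpy as np

      phi = np.array([0.40, 0.50, 0.60, 0.70])   # methanol fraction
      logk = np.array([1.52, 1.05, 0.61, 0.18])  # measured log k values

      slope, intercept = np.polyfit(phi, logk, 1)
      logkw, S = intercept, -slope
      print(logkw, S)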

  2. Structure-activity relationships for novel drug precursor N-substituted-6-acylbenzothiazolon derivatives: A theoretical approach

    NASA Astrophysics Data System (ADS)

    Sıdır, Yadigar Gülseven; Sıdır, İsa

    2013-08-01

    In this study, twelve newly modeled N-substituted-6-acylbenzothiazolon derivatives having an analgesic analog structure have been investigated by quantum chemical methods using a number of electronic and structure-activity parameters, such as molecular polarizability (α), dipole moment (μ), EHOMO, ELUMO, q-, qH+, molecular volume (Vm), ionization potential (IP), electron affinity (EA), electronegativity (χ), molecular hardness (η), molecular softness (S), electrophilic index (ω), heat of formation (HOF), molar refractivity (MR), octanol-water partition coefficient (log P), and thermochemical properties (entropy (S), heat capacity (Cv)), in order to investigate activity relationships with molecular structure. Correlations of log P with the Vm, MR, ω, EA, EHOMO - ELUMO (ΔE), HOF in aqueous phase, χ, μ, S, and η parameters, respectively, are obtained, while a linear relation of log P with IP, Cv, and HOF in the gas phase is not observed. The log P parameter is found to depend on different properties of the compounds owing to their complexity.

  3. Joint T1 and brain fiber log-demons registration using currents to model geometry.

    PubMed

    Siless, Viviana; Glaunès, Joan; Guevara, Pamela; Mangin, Jean-François; Poupon, Cyril; Le Bihan, Denis; Thirion, Bertrand; Fillard, Pierre

    2012-01-01

    We present an extension of the diffeomorphic Geometric Demons algorithm which combines iconic registration with geometric constraints. Our algorithm works in the log-domain space, so that one can efficiently compute the deformation field of the geometry. We represent the shape of objects of interest in the space of currents, which is sensitive to both the location and the geometric structure of objects. Currents provide a distance between geometric structures that can be defined without specifying explicit point-to-point correspondences. We demonstrate this framework by registering simultaneously T1 images and 65 fiber bundles consistently extracted in 12 subjects and compare it against non-linear T1, tensor, and multi-modal T1 + Fractional Anisotropy (FA) registration algorithms. Results show the superiority of the Log-domain Geometric Demons over their purely iconic counterparts.

  4. Ultrafast CT scanning of an oak log for internal defects

    Treesearch

    Francis G. Wagner; Fred W. Taylor; Douglas S. Ladd; Charles W. McMillin; Fredrick L. Roder

    1989-01-01

    Detecting internal defects in sawlogs and veneer logs with computerized tomographic (CT) scanning is possible, but has been impractical due to the long scanning time required. This research investigated a new scanner able to acquire 34 cross-sectional log scans per second. This scanning rate translates to a linear log feed rate of 85 feet (25.91 m) per minute at one...

  5. Relationship between vitamin D and inflammatory markers in older individuals.

    PubMed

    De Vita, Francesca; Lauretani, Fulvio; Bauer, Juergen; Bautmans, Ivan; Shardell, Michelle; Cherubini, Antonio; Bondi, Giuliana; Zuliani, Giovanni; Bandinelli, Stefania; Pedrazzoni, Mario; Dall'Aglio, Elisabetta; Ceda, Gian Paolo; Maggio, Marcello

    2014-01-01

    In older persons, vitamin D insufficiency and a subclinical chronic inflammatory status frequently coexist. Vitamin D has immune-modulatory and in vitro anti-inflammatory properties. However, there is inconclusive evidence about the anti-inflammatory role of vitamin D in older subjects. Thus, we investigated the hypothesis of an inverse relationship between 25-hydroxyvitamin D (25(OH)D) and inflammatory markers in a population-based study of older individuals. After excluding participants with high-sensitivity C-reactive protein (hsCRP) ≥ 10 mg/dl and those who were on chronic anti-inflammatory treatment, we evaluated 867 older adults ≥65 years from the InCHIANTI Study. Participants had complete data on serum concentrations of 25(OH)D, hsCRP, tumor necrosis factor (TNF)-α, soluble TNF-α receptors 1 and 2, interleukin (IL)-1β, IL-1 receptor antagonist, IL-10, IL-18, IL-6, and soluble IL-6 receptors (sIL6r and sgp130). Two general linear models were fit (model 1, adjusted for age, sex, and parathyroid hormone (PTH); model 2, including the covariates of model 1 plus dietary and smoking habits, physical activity, ADL disability, season, osteoporosis, depressive status, and comorbidities). The mean age was 75.1 ± 17.1 (SD) years. In model 1, log(25(OH)D) was significantly and inversely associated with log(IL-6) (β ± SE = -0.11 ± 0.03, p < 0.0001) and log(hsCRP) (β ± SE = -0.04 ± 0.02, p = 0.04) and positively associated with log(sIL6r) (β ± SE = 0.11 ± 0.04, p = 0.003), but not with other inflammatory markers. In model 2, log(25(OH)D) remained negatively associated with log(IL-6) (β ± SE = -0.10 ± 0.03, p = 0.0001) and positively associated with log(sIL6r) (β ± SE = 0.11 ± 0.03, p = 0.004), but not with log(hsCRP) (β ± SE = -0.01 ± 0.03, p = 0.07). 25(OH)D is independently and inversely associated with IL-6 and positively with sIL6r, suggesting a potential anti-inflammatory role for vitamin D in older individuals.

  6. Nonparametric Bayesian Multiple Imputation for Incomplete Categorical Variables in Large-Scale Assessment Surveys

    ERIC Educational Resources Information Center

    Si, Yajuan; Reiter, Jerome P.

    2013-01-01

    In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…

  7. An Empirical Comparison of DDF Detection Methods for Understanding the Causes of DIF in Multiple-Choice Items

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Talley, Anna E.

    2015-01-01

    This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…

  8. Context Effects in Multi-Alternative Decision Making: Empirical Data and a Bayesian Model

    ERIC Educational Resources Information Center

    Hawkins, Guy; Brown, Scott D.; Steyvers, Mark; Wagenmakers, Eric-Jan

    2012-01-01

    For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates--sometimes error rates increase with the number of choice alternatives, and…

  9. Reassessing the Economic Value of Advanced Level Mathematics

    ERIC Educational Resources Information Center

    Adkins, Michael; Noyes, Andrew

    2016-01-01

    In the late 1990s, the economic return to Advanced level (A-level) mathematics was examined. The analysis was based upon a series of log-linear models of earnings in the 1958 National Child Development Survey (NCDS) and the National Survey of 1980 Graduates and Diplomates. The core finding was that A-level mathematics had a unique earnings premium…

  10. Interracial and Intraracial Patterns of Mate Selection among America's Diverse Black Populations

    ERIC Educational Resources Information Center

    Batson, Christie D.; Qian, Zhenchao; Lichter, Daniel T.

    2006-01-01

    Despite recent immigration from Africa and the Caribbean, Blacks in America are still viewed as a monolith in many previous studies. In this paper, we use newly released 2000 census data to estimate log-linear models that highlight patterns of interracial and intraracial marriage and cohabitation among African Americans, West Indians, Africans,…

  11. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…

  12. Modeling to predict growth/no growth boundaries and kinetic behavior of Salmonella on cutting board surfaces.

    PubMed

    Yoon, Hyunjoo; Lee, Joo-Yeon; Suk, Hee-Jin; Lee, Sunah; Lee, Heeyoung; Lee, Soomin; Yoon, Yohan

    2012-12-01

    This study developed models to predict the growth probabilities and kinetic behavior of Salmonella enterica strains on cutting boards. Polyethylene coupons (3 by 5 cm) were rubbed with pork belly, and pork purge was then sprayed on the coupon surface, followed by inoculation of a five-strain Salmonella mixture onto the surface of the coupons. These coupons were stored at 13 to 35°C for 12 h, and total bacterial and Salmonella cell counts were enumerated on tryptic soy agar and xylose lysine deoxycholate (XLD) agar, respectively, every 2 h, which produced 56 combinations. The combinations that had growth of ≥0.5 log CFU/cm² of Salmonella bacteria recovered on XLD agar were given the value 1 (growth), and the combinations that had growth of <0.5 log CFU/cm² were assigned the value 0 (no growth). These growth response data from XLD agar were analyzed by logistic regression for producing growth/no growth interfaces of Salmonella bacteria. In addition, a linear model was fitted to the Salmonella cell counts to calculate the growth rate (log CFU per square centimeter per hour) and initial cell count (log CFU per square centimeter), following secondary modeling with the square root model. All of the models developed were validated with observed data, which were not used for model development. Growth of total bacteria and Salmonella cells was observed at 28, 30, 33, and 35°C, but there was no growth detected below 20°C within the time frame investigated. Moreover, various indices indicated that the performance of the developed models was acceptable. The results suggest that the models developed in this study may be useful in predicting the growth/no growth interface and kinetic behavior of Salmonella bacteria on polyethylene cutting boards.
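
    A sketch of the two-stage approach described above, with invented data: a logistic model for the growth/no-growth boundary and a Ratkowsky-type square-root secondary model for the growth rate.

      # Sketch: (1) growth/no-growth logistic regression on temperature,
      # (2) square-root secondary model sqrt(mu) = b * (T - Tmin).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      T = np.array([13, 15, 17, 20, 24, 28, 30, 33, 35], float)
      grew = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1])      # invented responses
      clf = LogisticRegression().fit(T.reshape(-1, 1), grew)
      print(clf.intercept_, clf.coef_)                  # boundary parameters

      Tg = np.array([28, 30, 33, 35], float)            # temps with growth
      mu = np.array([0.05, 0.12, 0.22, 0.30])           # log CFU/cm^2/h
      b, c = np.polyfit(Tg, np.sqrt(mu), 1)             # sqrt(mu) = b*T + c
      print("Tmin =", -c / b)                           # notional minimum temp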

  13. Usage and users of online self-management programs for adult patients with atopic dermatitis and food allergy: an explorative study.

    PubMed

    van Os-Medendorp, Harmieke; van Leent-de Wit, Ilse; de Bruin-Weller, Marjolein; Knulst, André

    2015-05-23

    Two online self-management programs for patients with atopic dermatitis (AD) or food allergy (FA) were developed with the aim of helping patients cope with their condition, follow the prescribed treatment regimen, and deal with the consequences of their illness in daily life. Both programs consist of several modules containing information, personal stories by fellow patients, videos, and exercises with feedback. Health care professionals can refer their patients to the programs. However, the use of the programs in daily practice is unknown. The aim of this study was to explore the use and characteristics of users of the online self-management programs "Living with eczema" and "Living with food allergy," and to investigate factors related to their use. A cross-sectional design was carried out in which the outcome parameters were the number of log-ins by patients, the number of hits on the system's core features, disease severity, quality of life, and domains of self-management. Descriptive statistics were used to summarize sample characteristics and to describe the number of log-ins and hits per module and per functionality. Correlation and regression analyses were used to explore the relation between the number of log-ins and patient characteristics. Since the start, 299 adult patients have been referred to the online AD program; 173 logged in on at least one occasion. Data from 75 AD patients were available for analyses. The mean number of log-ins was 3.1 (range 1-11). Linear regression with the number of log-ins as dependent variable showed that age and quality of life contributed most to the model, with betas of .35 (P = .002) and .26 (P = .05), respectively, and an R² of .23. Two hundred fourteen adult FA patients were referred to the online FA training; 124 logged in on at least one occasion, and data from 45 patients were available for analysis. The mean number of log-ins was 3.0 (range 1-11). Linear regression with the number of log-ins as dependent variable revealed that adding the self-management domain "social integration and support" to the model led to an R² of .13. The modules with information about the disease, diagnosis, and treatment were most visited. Most hits were on the information parts of the modules (55-58%), followed by exercises (30-32%). The online self-management programs "Living with eczema" and "Living with food allergy" were used by patients in addition to the usual face-to-face care. Almost 60% of all referred patients logged in, with an average of three log-ins. All modules seemed to be relevant, but there is room for improvement in the use of the training. Age, quality of life, and lower social integration and support were related to the use of the training, but only part of the variance in use could be explained by these variables.

  14. Smooth individual level covariates adjustment in disease mapping.

    PubMed

    Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise

    2018-05-01

    Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to track such nonlinearity between individual level covariates and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. A CORRELATION BETWEEN RADIATION TOLERANCE AND NUCLEAR SURFACE AREA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iversen, S.

    1962-09-22

    Sparrow and Miksche (Science, 134:282) determined the dose (r/day) required to produce severe growth inhibition in 23 species of plants and found a linear relationship between log nuclear volume and log dose. The following equations hold for 6 species: log nuclear volume = 4.42 - 0.82 log dose and log nuclear volume = 1.66 + 0.66 log (DNA content). If all the nuclear DNA is distributed in two peripheral zones, the following also hold: 2(log nuclear surface area) = 1.33(log nuclear volume) = 2.21 + 0.88 log(DNA content) = 5.88 - 1.09 log dose. For the 23 species, the equation 2(log nuclear surface area) = 5.41 - 0.97 log dose was obtained. All the slopes are close to the expected value of 1.00. (D.L.C.)
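
    A quick arithmetic check of the record above: if the nuclear DNA sits near the nuclear surface, surface area scales as volume^(2/3), so 2*log(surface area) = 1.33*log(volume); substituting the two volume regressions reproduces the reported coefficients.

      # Verify the derived coefficients in the abstract.
      coeff = 1.33  # 2*log S = 1.33*log V (sphere: S ~ V^(2/3))
      print(round(coeff * 1.66, 2), round(coeff * 0.66, 2))  # 2.21, 0.88
      print(round(coeff * 4.42, 2), round(coeff * 0.82, 2))  # 5.88, 1.09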

  16. Using the Logarithm of Odds to Define a Vector Space on Probabilistic Atlases

    PubMed Central

    Pohl, Kilian M.; Fisher, John; Bouix, Sylvain; Shenton, Martha; McCarley, Robert W.; Grimson, W. Eric L.; Kikinis, Ron; Wells, William M.

    2007-01-01

    The Logarithm of the Odds ratio (LogOdds) is frequently used in areas such as artificial neural networks, economics, and biology, as an alternative representation of probabilities. Here, we use LogOdds to place probabilistic atlases in a linear vector space. This representation has several useful properties for medical imaging. For example, it not only encodes the shape of multiple anatomical structures but also captures some information concerning uncertainty. We demonstrate that the resulting vector space operations of addition and scalar multiplication have natural probabilistic interpretations. We discuss several examples for placing label maps into the space of LogOdds. First, we relate signed distance maps, a widely used implicit shape representation, to LogOdds and compare it to an alternative that is based on smoothing by spatial Gaussians. We find that the LogOdds approach better preserves shapes in a complex multiple object setting. In the second example, we capture the uncertainty of boundary locations by mapping multiple label maps of the same object into the LogOdds space. Third, we define a framework for non-convex interpolations among atlases that capture different time points in the aging process of a population. We evaluate the accuracy of our representation by generating a deformable shape atlas that captures the variations of anatomical shapes across a population. The deformable atlas is the result of a principal component analysis within the LogOdds space. This atlas is integrated into an existing segmentation approach for MR images. We compare the performance of the resulting implementation in segmenting 20 test cases to a similar approach that uses a more standard shape model that is based on signed distance maps. On this data set, the Bayesian classification model with our new representation outperformed the other approaches in segmenting subcortical structures. PMID:17698403
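
    A minimal sketch of the binary LogOdds representation: the logit maps probabilities to an unbounded vector space where addition and scalar multiplication are well defined, and the logistic function maps back. (The paper's multi-label construction generalizes this; the arrays here are toy values.)

      # Sketch: vector-space operations on probabilities via LogOdds.
      import numpy as np

      def logodds(p):
          return np.log(p / (1 - p))

      def inv_logodds(x):
          return 1 / (1 + np.exp(-x))

      p1 = np.array([0.9, 0.2])   # two toy probabilistic atlases
      p2 = np.array([0.6, 0.4])
      print(inv_logodds(logodds(p1) + logodds(p2)))  # addition
      print(inv_logodds(0.5 * logodds(p1)))          # scalar multiplication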

  17. Linear Multivariable Regression Models for Prediction of Eddy Dissipation Rate from Available Meteorological Data

    NASA Technical Reports Server (NTRS)

    MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.

    2005-01-01

    Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield best performance and avoid model discontinuity over day/night data boundaries.

  18. Estradiol and Inflammatory Markers in Older Men

    PubMed Central

    Maggio, Marcello; Ceda, Gian Paolo; Lauretani, Fulvio; Bandinelli, Stefania; Metter, E. Jeffrey; Artoni, Andrea; Gatti, Elisa; Ruggiero, Carmelinda; Guralnik, Jack M.; Valenti, Giorgio; Ling, Shari M.; Basaria, Shehzad; Ferrucci, Luigi

    2009-01-01

    Background: Aging is characterized by a mild proinflammatory state. In older men, low testosterone levels have been associated with increasing levels of proinflammatory cytokines. It is still unclear whether estradiol (E2), which generally has biological activities complementary to testosterone, affects inflammation. Methods: We analyzed data obtained from 399 men aged 65–95 yr enrolled in the Invecchiare in Chianti study with complete data on body mass index (BMI), serum E2, testosterone, IL-6, soluble IL-6 receptor, TNF-α, IL-1 receptor antagonist, and C-reactive protein. The relationship between E2 and inflammatory markers was examined using multivariate linear models adjusted for age, BMI, smoking, physical activity, chronic disease, and total testosterone. Results: In age-adjusted analysis, log (E2) was positively associated with log (IL-6) (r = 0.19; P = 0.047), and the relationship was statistically significant (P = 0.032) after adjustments for age, BMI, smoking, physical activity, chronic disease, and serum testosterone levels. Log (E2) was not significantly associated with log (C-reactive protein), log (soluble IL-6 receptor), or log (TNF-α) in both age-adjusted and fully adjusted analyses. Conclusions: In older men, E2 is weakly positively associated with IL-6, independent of testosterone and other confounders including BMI. PMID:19050054

  19. Estimating the number of injecting drug users in Scotland's HCV-diagnosed population using capture-recapture methods.

    PubMed

    McDonald, S A; Hutchinson, S J; Schnier, C; McLeod, A; Goldberg, D J

    2014-01-01

    In countries maintaining national hepatitis C virus (HCV) surveillance systems, a substantial proportion of individuals report no risk factors for infection. Our goal was to estimate the proportion of diagnosed HCV antibody-positive persons in Scotland (1991-2010) who probably acquired infection through injecting drug use (IDU), by combining data on IDU risk from four linked data sources using log-linear capture-recapture methods. Of 25,521 HCV-diagnosed individuals, 14,836 (58%) reported IDU risk with their HCV diagnosis. Log-linear modelling estimated a further 2484 HCV-diagnosed individuals with IDU risk, giving an estimated prevalence of 83%. Stratified analyses indicated variation across birth cohort, with estimated prevalence as low as 49% in persons born before 1960 and greater than 90% for those born since 1960. These findings provide public-health professionals with a more complete profile of Scotland's HCV-infected population in terms of transmission route, which is essential for targeting educational, prevention and treatment interventions.
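
    The simplest version of the log-linear capture-recapture idea, sketched below for two sources with invented counts: fit a Poisson log-linear model to the observed overlap cells and read off the fitted count for the unobserved cell. (The study itself used four linked sources.)

      # Sketch: two-source log-linear capture-recapture.
      import numpy as np
      import statsmodels.api as sm

      A = np.array([1, 1, 0])            # seen by source A
      B = np.array([1, 0, 1])            # seen by source B
      n = np.array([400, 900, 300.0])    # counts for (1,1), (1,0), (0,1)

      X = sm.add_constant(np.column_stack([A, B]))
      fit = sm.GLM(n, X, family=sm.families.Poisson()).fit()
      n00 = np.exp(fit.params[0])        # fitted (0,0) cell: 900*300/400 = 675
      print(n00, n.sum() + n00)          # hidden cases, total population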

  20. Immobilized Artificial Membrane HPLC Derived Parameters vs PAMPA-BBB Data in Estimating in Situ Measured Blood-Brain Barrier Permeation of Drugs.

    PubMed

    Grumetto, Lucia; Russo, Giacomo; Barbato, Francesco

    2016-08-01

    The affinity indexes for phospholipids (log kW(IAM)) for 42 compounds were measured by high performance liquid chromatography (HPLC) on two different phospholipid-based stationary phases (immobilized artificial membrane, IAM), i.e., IAM.PC.MG and IAM.PC.DD2. The polar/electrostatic interaction forces between analytes and membrane phospholipids (Δlog kW(IAM)) were calculated as the differences between the experimental values of log kW(IAM) and those expected for isolipophilic neutral compounds having polar surface area (PSA) = 0. The values of passage through a porcine brain lipid extract (PBLE) artificial membrane for 36 out of the 42 compounds considered, measured by the so-called PAMPA-BBB technique, were taken from the literature (P0(PAMPA-BBB)). The values of blood-brain barrier (BBB) passage measured in situ, P0(in situ), for 38 out of the 42 compounds considered, taken from the literature, represented the permeability of the neutral forms on "efflux minimized" rodent models. The present work was aimed at verifying the soundness of Δlog kW(IAM) at describing the potential of passage through the BBB as compared to data achieved by the PAMPA-BBB technique. First, the values of log P0(PAMPA-BBB) (32 data points) were found to be significantly related to the n-octanol lipophilicity values of the neutral forms (log P(N)) (r² = 0.782), whereas no significant relationship (r² = 0.246) was found with lipophilicity values of the mixtures of ionized and neutral forms existing at the experimental pH 7.4 (log D(7.4)), or with either log kW(IAM) or Δlog kW(IAM) values. log P0(PAMPA-BBB) correlated moderately with log P0(in situ) values (r² = 0.604). The latter did not correlate with either n-octanol lipophilicity indexes (log P(N) and log D(7.4)) or phospholipid affinity indexes (log kW(IAM)). In contrast, significant inverse linear relationships were observed between log P0(in situ) (38 data points) and Δlog kW(IAM) values for all the compounds but ibuprofen and chlorpromazine, which behaved as moderate outliers (r² = 0.656 and r² = 0.757 for values achieved on IAM.PC.MG and IAM.PC.DD2, respectively). Since log P0(in situ) values refer to the "intrinsic permeability" of the analytes regardless of their ionization degree, no correction for ionization of Δlog kW(IAM) values was needed. Furthermore, log P0(in situ) values were found to be roughly linearly related to log BB values (i.e., the logarithm of the ratio of brain concentration to blood concentration measured in vivo) for all the analytes but those predominantly present at the experimental pH 7.4 as anions. These results suggest that, at least for the data set considered, Δlog kW(IAM) parameters are more effective than log P0(PAMPA-BBB) at predicting log P0(in situ) values for all the analytes. Furthermore, ionization appears to affect BBB passage differently, and much more markedly, for acids (yielding anions) than for the other ionizable compounds.

  1. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX

    DTIC Science & Technology

    2015-07-01

    exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear...Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs

  2. Effect of stimulus configuration on crowding in strabismic amblyopia.

    PubMed

    Norgett, Yvonne; Siderov, John

    2017-11-01

    Foveal vision in strabismic amblyopia can show increased levels of crowding, akin to typical peripheral vision. Target-flanker similarity and visual-acuity test configuration may cause the magnitude of crowding to vary in strabismic amblyopia. We used custom-designed visual acuity tests to investigate crowding in observers with strabismic amblyopia. LogMAR was measured monocularly in both eyes of 11 adults with strabismic or mixed strabismic/anisometropic amblyopia using custom-designed letter tests. The tests used single-letter and linear formats with either bar or letter flankers to introduce crowding. Tests were presented monocularly on a high-resolution display at a test distance of 4 m, using standardized instructions. For each condition, five letters of each size were shown; testing continued until three letters of a given size were named incorrectly. Uncrowded logMAR was subtracted from logMAR in each of the crowded tests to highlight the crowding effect. Repeated-measures ANOVA showed that letter flankers and linear presentation individually resulted in poorer performance in the amblyopic eyes (respectively, mean normalized logMAR = 0.29, SE = 0.07, mean normalized logMAR = 0.27, SE = 0.07; p < 0.05) and together had an additive effect (mean = 0.42, SE = 0.09, p < 0.001). There was no difference across the tests in the fellow eyes (p > 0.05). Both linear presentation and letter rather than bar flankers increase crowding in the amblyopic eyes of people with strabismic amblyopia. These results suggest the influence of more than one mechanism contributing to crowding in linear visual-acuity charts with letter flankers.

  3. An experimental loop design for the detection of constitutional chromosomal aberrations by array CGH

    PubMed Central

    Allemeersch, Joke; Van Vooren, Steven; Hannes, Femke; De Moor, Bart; Vermeesch, Joris Robert; Moreau, Yves

    2009-11-19

    Background: Comparative genomic hybridization (CGH) microarrays for the detection of constitutional chromosomal aberrations are the application of microarray technology coming fastest into routine clinical use. Through genotype-phenotype association, it is also an important technique towards the discovery of disease causing genes and genomewide functional annotation in human. When using a two-channel microarray of genomic DNA probes for array CGH, the basic setup consists in hybridizing a patient against a normal reference sample. Two major disadvantages of this setup are (1) the use of half of the resources to measure a (little informative) reference sample and (2) the possibility that deviating signals are caused by benign copy number variation in the "normal" reference instead of a patient aberration. Instead, we apply an experimental loop design that compares three patients in three hybridizations. Results: We develop and compare two statistical methods (linear models of log ratios and mixed models of absolute measurements). In an analysis of 27 patients seen at our genetics center, we observed that the linear models of the log ratios are advantageous over the mixed models of the absolute intensities. Conclusion: The loop design and the performance of the statistical analysis contribute to the quick adoption of array CGH as a routine diagnostic tool. They lower the detection limit of mosaicisms and improve the assignment of copy number variation for genetic association studies. PMID:19925645

  4. Linear models for assessing mechanisms of sperm competition: the trouble with transformations.

    PubMed

    Eggert, Anne-Katrin; Reinhardt, Klaus; Sakaluk, Scott K

    2003-01-01

    Although sperm competition is a pervasive selective force shaping the reproductive tactics of males, the mechanisms underlying different patterns of sperm precedence remain obscure. Parker et al. (1990) developed a series of linear models designed to identify two of the more basic mechanisms: sperm lotteries and sperm displacement; the models can be tested experimentally by manipulating the relative numbers of sperm transferred by rival males and determining the paternity of offspring. Here we show that tests of the model derived for sperm lotteries can result in misleading inferences about the underlying mechanism of sperm precedence because the required inverse transformations may lead to a violation of fundamental assumptions of linear regression. We show that this problem can be remedied by reformulating the model using the actual numbers of offspring sired by each male, and log-transforming both sides of the resultant equation. Reassessment of data from a previous study (Sakaluk and Eggert 1996) using the corrected version of the model revealed that we should not have excluded a simple sperm lottery as a possible mechanism of sperm competition in decorated crickets, Gryllodes sigillatus.
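
    A sketch of the reformulated test: regress the log ratio of offspring sired on the log ratio of sperm transferred, instead of inverse-transforming proportions. Under a fair sperm lottery the expected intercept is 0 and the slope 1. Data are invented.

      # Sketch: log-log regression form of the sperm lottery model.
      import numpy as np
      import statsmodels.api as sm

      sperm_ratio = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # S2/S1
      offspring_ratio = np.array([0.6, 0.9, 2.3, 3.6, 9.1])  # O2/O1

      X = sm.add_constant(np.log(sperm_ratio))
      fit = sm.OLS(np.log(offspring_ratio), X).fit()
      print(fit.params)  # lottery predicts intercept ~ 0, slope ~ 1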

  5. A linear solvation energy relationship model of organic chemical partitioning to dissolved organic carbon.

    PubMed

    Kipka, Undine; Di Toro, Dominic M

    2011-09-01

    Predicting the association of contaminants with both particulate and dissolved organic matter is critical in determining the fate and bioavailability of chemicals in environmental risk assessment. To date, the association of a contaminant to particulate organic matter is considered in many multimedia transport models, but the effect of dissolved organic matter is typically ignored due to a lack of either reliable models or experimental data. The partition coefficient to dissolved organic carbon (K(DOC)) may be used to estimate the fraction of a contaminant that is associated with dissolved organic matter. Models relating K(DOC) to the octanol-water partition coefficient (K(OW)) have not been successful for many types of dissolved organic carbon in the environment. Instead, linear solvation energy relationships are proposed to model the association of chemicals with dissolved organic matter. However, more chemically diverse K(DOC) data are needed to produce a more robust model. For humic acid dissolved organic carbon, the linear solvation energy relationship predicts log K(DOC) with a root mean square error of 0.43. Copyright © 2011 SETAC.

  6. Multilaboratory comparison of hepatitis C virus viral load assays.

    PubMed

    Caliendo, A M; Valsamakis, A; Zhou, Y; Yen-Lieberman, B; Andersen, J; Young, S; Ferreira-Gonzalez, A; Tsongalis, G J; Pyles, R; Bremer, J W; Lurain, N S

    2006-05-01

    We report a multilaboratory evaluation of hepatitis C virus (HCV) viral load assays to determine their linear range, reproducibility, subtype detection, and agreement. A panel of HCV RNA samples ranging in nominal concentration from 1.0 to 7.0 log10 IU/ml was constructed by diluting a clinical specimen (genotype 1b). Replicates of the panel were tested in multiple laboratories using the Abbott TaqMan analyte-specific reagent (Abbott reverse transcription-PCR [RT-PCR]), Roche TaqMan RUO (Roche RT-PCR), Roche Amplicor Monitor HCV 2.0 (Roche Monitor), and Bayer VERSANT HCV RNA 3.0 (Bayer bDNA) assays. Bayer bDNA-negative specimens were tested reflexively using the Bayer VERSANT HCV RNA qualitative assay (Bayer TMA). Abbott RT-PCR and Roche RT-PCR detected all 28 replicates with a concentration of 1.0 log10 IU/ml and were linear to 7.0 log10 IU/ml. Roche Monitor and Bayer bDNA detected 27 out of 28 and 13 out of 28 replicates, respectively, of 3.0 log10 IU/ml. Bayer TMA detected all seven replicates with 1.0 log10 IU/ml. Bayer bDNA was the most reproducible of the four assays. The mean viral load values for panel members in the linear ranges of the assays were within 0.5 log10 for the different tests. Eighty-nine clinical specimens of various genotypes (1 through 4) were tested in the Bayer bDNA, Abbott RT-PCR, and Roche RT-PCR assays. For Abbott RT-PCR, mean viral load values were 0.61 to 0.96 log10 greater than the values for Bayer bDNA assay for samples with genotype 1, 2, or 3 samples and 0.08 log10 greater for genotype 4 specimens. The Roche RT-PCR assay gave mean viral load values that were 0.28 to 0.82 log10 greater than those obtained with the Bayer bDNA assay for genotype 1, 2, and 3 samples. However, for genotype 4 samples the mean viral load value obtained with the Roche RT-PCR assay was, on average, 0.15 log10 lower than that of the Bayer bDNA. Based on these data, we conclude that the sensitivity and linear range of the Abbott and Roche RT-PCR assays enable them to be used for HCV diagnostics and therapeutic monitoring. However, the differences in the viral load values obtained with the different assays underscore the importance of using one assay when monitoring response to therapy.

  7. Fatigue shifts and scatters heart rate variability in elite endurance athletes.

    PubMed

    Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire

    2013-01-01

    This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in a 'fatigue' or in a 'no-fatigue' state in 'real life' conditions. 57 elite Nordic-skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was assessed with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms² and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). 172 trials were identified as in a 'fatigue' and 891 as in a 'no-fatigue' state. All supine HR and HRV parameters (β ± SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27±0.61 bpm), logTPSU (-0.36±0.04), logLFSU (-0.27±0.04), logHFSU (-0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (-9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (-0.28±0.03), logLFST (-0.29±0.03), logHFST (-0.32±0.04). Also, the intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied by larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from the no-fatigue state, possibly reflecting different fatigue-induced alterations of the HRV pattern.

  8. Knot probabilities in random diagrams

    NASA Astrophysics Data System (ADS)

    Cantarella, Jason; Chapman, Harrison; Mastin, Matt

    2016-10-01

    We consider a natural model of random knotting—choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10 crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams which are unknots for any assignment of over/under information at crossings. The data shows a roughly linear relationship between the log of knot type probability and the log of the frequency rank of the knot type, analogous to Zipf’s law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.
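
    The Zipf-like check described above amounts to a straight-line fit of log probability against log frequency rank; the probabilities below are placeholders (only the first matches the quoted unknot fraction).

      # Sketch: slope of log probability vs. log frequency rank.
      import numpy as np

      probs = np.array([0.78, 0.12, 0.04, 0.02, 0.01])  # by frequency rank
      rank = np.arange(1, probs.size + 1)
      slope, intercept = np.polyfit(np.log(rank), np.log(probs), 1)
      print(slope, intercept)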

  9. Comprehensive Interpretation of the Laboratory Experiments Results to Construct Model of the Polish Shale Gas Rocks

    NASA Astrophysics Data System (ADS)

    Jarzyna, Jadwiga A.; Krakowska, Paulina I.; Puskarczyk, Edyta; Wawrzyniak-Guz, Kamila; Zych, Marcin

    2018-03-01

    More than 70 rock samples from so-called sweet spots, i.e. the Ordovician Sa Formation and the Silurian Ja Member of the Pa Formation from the Baltic Basin (North Poland), were examined in the laboratory to determine bulk and grain density, total and effective/dynamic porosity, absolute permeability, pore diameter size, total surface area, and natural radioactivity. Results of pyrolysis, i.e. TOC (Total Organic Carbon) together with S1 and S2, parameters used to determine the hydrocarbon generation potential of rocks, were also considered. Elemental composition from chemical analyses and mineral composition from XRD measurements were also included. SCAL analysis, NMR experiments, and Pressure Decay Permeability measurements, together with the water immersion porosimetry and adsorption/desorption of nitrogen vapors methods, were carried out along with a comprehensive interpretation of the outcomes. Simple and multiple linear statistical regressions were used to recognize mutual relationships between parameters. The observed correlations, and in some cases the large dispersion of data and discrepancies in the property values obtained from different methods, were the basis for building a shale gas rock model for well logging interpretation. The model was verified by the results of Monte Carlo modelling of the spectral neutron-gamma log response in comparison with GEM log results.

  11. Salmonella Inactivation During Extrusion of an Oat Flour Model Food.

    PubMed

    Anderson, Nathan M; Keller, Susanne E; Mishra, Niharika; Pickens, Shannon; Gradl, Dana; Hartter, Tim; Rokey, Galen; Dohl, Christopher; Plattner, Brian; Chirtel, Stuart; Grasso-Kelley, Elizabeth M

    2017-03-01

    Little research exists on Salmonella inactivation during extrusion processing, yet many outbreaks associated with low water activity foods since 2006 were linked to extruded foods. The aim of this research was to study Salmonella inactivation during extrusion of a model cereal product. Oat flour was inoculated with Salmonella enterica serovar Agona, an outbreak strain isolated from puffed cereals, and processed using a single-screw extruder at a feed rate of 75 kg/h and a screw speed of 500 rpm. Extrudate samples were collected from the barrel outlet in sterile bags and immediately cooled in an ice-water bath. Populations were determined using standard plate count methods or a modified most probable number when populations were low. Reductions in population were determined and analyzed using a general linear model. The regression model obtained for the response surface tested was log(N_R/N_O) = 20.50 + 0.82T − 141.16a_w − 0.0039T² + 87.91a_w² (R² = 0.69). The model showed significant (p < 0.05) linear and quadratic effects of a_w and temperature and enabled an assessment of critical control parameters. Reductions of 0.67 ± 0.14 to 7.34 ± 0.02 log CFU/g were observed over the ranges of a_w (0.72 to 0.96) and temperature (65 to 100 °C) tested. Processing conditions above 82 °C and 0.89 a_w achieved on average greater than a 5-log reduction of Salmonella. Results indicate that extrusion is an effective means for reducing Salmonella, as most processes commonly employed to produce cereals and other low water activity foods exceed these parameters. Thus, contamination of an extruded food product would most likely occur postprocessing as a result of environmental contamination or through the addition of coatings and flavorings. © 2017 Institute of Food Technologists®.
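
    The fitted response surface can be evaluated directly; the sketch below transcribes the equation from the abstract (the sign convention and validity ranges follow the paper, and with R² = 0.69 the outputs are approximate):

      def log_reduction(temp_c, aw):
          # Response surface from the abstract: log(N_R/N_O) as a function of
          # extrusion temperature (65-100 deg C) and water activity (0.72-0.96).
          return (20.50 + 0.82 * temp_c - 141.16 * aw
                  - 0.0039 * temp_c**2 + 87.91 * aw**2)

      # Near the reported >5-log operating boundary (82 deg C, 0.89 a_w):
      print(log_reduction(82, 0.89))  # ~5.5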

  12. A branching process model for the analysis of abortive colony size distributions in carbon ion-irradiated normal human fibroblasts.

    PubMed

    Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki

    2014-05-01

    A single cell can form a colony, and ionizing irradiation has long been known to reduce such cellular clonogenic potential. Analysis of abortive colonies unable to continue to grow should provide important information on the reproductive cell death (RCD) following irradiation. Our previous analysis with a branching process model showed that the RCD in normal human fibroblasts can persist over 16 generations following irradiation with low linear energy transfer (LET) γ-rays. Here we further set out to evaluate the RCD persistency in abortive colonies arising from normal human fibroblasts exposed to high-LET carbon ions (18.3 MeV/u, 108 keV/µm). We found that the abortive colony size distribution determined by biological experiments follows a linear relationship on the log-log plot, and that a Monte Carlo simulation using the RCD probability estimated from this linear relationship reproduces the experimentally determined surviving fraction and relative biological effectiveness (RBE) well. We identified short-term and long-term phases for the persistent RCD following carbon-ion irradiation, similar to those previously identified following γ-irradiation. Taken together, our results suggest that subsequent secondary or tertiary colony formation would be invaluable for understanding the long-lasting RCD. Altogether, our framework for analysis with a branching process model and a colony formation assay is applicable to the determination of cellular responses to low- and high-LET radiation, and suggests that the long-lasting RCD is a pivotal determinant of the surviving fraction and the RBE.
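
    A toy sketch of the branching-process idea: each live cell either divides or undergoes reproductive cell death (RCD) and arrests. The RCD probability and division rule below are illustrative assumptions, not the paper's fitted generation-dependent values:

      import numpy as np

      rng = np.random.default_rng(1)

      def colony_size(p_rcd, max_generations=16):
          # Galton-Watson-style growth: arrested cells persist but stop dividing.
          live, arrested = 1, 0
          for _ in range(max_generations):
              dividing = rng.binomial(live, 1.0 - p_rcd)
              arrested += live - dividing
              live = 2 * dividing
              if live == 0:
                  break
          return live + arrested

      sizes = np.array([colony_size(0.3) for _ in range(10_000)])
      print(np.median(sizes), sizes.max())  # abortive colonies dominate the low end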

  13. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

    PubMed Central

    Huang, Jian; Zhang, Cun-Hui

    2013-01-01

    The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
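
    A compact sketch of the multistage idea under stated assumptions: a weighted Lasso is obtained from a standard solver by rescaling columns, and the adaptive weights are refreshed from the previous stage's estimate. The function names and the weight rule (1/|beta| with a small floor) are illustrative:

      import numpy as np
      from sklearn.linear_model import Lasso

      def weighted_lasso(X, y, w, alpha=0.1):
          # min ||y - Xb||^2/(2n) + alpha * sum_j w_j |b_j|, via column rescaling:
          # fit on X_j / w_j, then map coefficients back as b_j = b'_j / w_j.
          model = Lasso(alpha=alpha).fit(X / w, y)
          return model.coef_ / w

      def multistage_adaptive_lasso(X, y, alpha=0.1, stages=3, eps=1e-4):
          beta = None
          w = np.ones(X.shape[1])
          for _ in range(stages):
              beta = weighted_lasso(X, y, w, alpha)
              w = 1.0 / (np.abs(beta) + eps)  # adaptive weights from current fit
          return beta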

  14. Bioconcentration of lipophilic compounds by some aquatic organisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawker, D.W.; Connell, D.W.

    1986-04-01

    With nondegradable, lipophilic compounds having log P values ranging from 2 to 6, direct linear relationships have been found between log P and the logarithms of both the equilibrium bioconcentration factors and the reciprocal clearance rate constants for daphnids and molluscs. These relationships permit calculation of the times required for equilibrium and significant bioconcentration of lipophilic chemicals. Compared with fish, these time periods are successively shorter for molluscs, then daphnids. The equilibrium biotic concentration was found to decrease with increasing chemical hydrophobicity for both molluscs and daphnids. Also, new linear relationships between the logarithm of the bioconcentration factor and log P were found for compounds not attaining equilibrium within finite exposure times.

  15. Parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log₂ N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log₂ N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log₂ N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.
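
    Recursive doubling in general collapses a length-N linear recurrence into O(log₂ N) data-parallel passes by composing affine maps; a small sketch (vectorized with numpy as a stand-in for parallel hardware, and not specific to the inertia-matrix formulation):

      import numpy as np

      def solve_recurrence_doubling(a, b):
          # Solves x[i] = a[i]*x[i-1] + b[i] with x[-1] = 0. Each pass composes
          # the affine maps at distance `step`, doubling the reach every time.
          A, B = np.array(a, float), np.array(b, float)
          step = 1
          while step < len(A):
              newA, newB = A.copy(), B.copy()
              newA[step:] = A[:-step] * A[step:]
              newB[step:] = A[step:] * B[:-step] + B[step:]
              A, B, step = newA, newB, step * 2
          return B  # B[i] now holds x[i]

      # Check against the sequential recurrence.
      a, b = np.random.rand(17), np.random.rand(17)
      x, prev = np.empty(17), 0.0
      for i in range(17):
          prev = a[i] * prev + b[i]
          x[i] = prev
      assert np.allclose(solve_recurrence_doubling(a, b), x)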

  16. C. botulinum inactivation kinetics implemented in a computational model of a high-pressure sterilization process.

    PubMed

    Juliano, Pablo; Knoerzer, Kai; Fryer, Peter J; Versteeg, Cornelis

    2009-01-01

    High-pressure, high-temperature (HPHT) processing is effective for microbial spore inactivation using mild preheating, followed by rapid volumetric compression heating and cooling on pressure release, enabling much shorter processing times than conventional thermal processing for many food products. A computational thermal fluid dynamic (CTFD) model has been developed to model all processing steps, including the vertical pressure vessel, an internal polymeric carrier, and food packages, in an axisymmetric geometry. Heat transfer and fluid dynamic equations were coupled to four selected kinetic models for the inactivation of C. botulinum: the traditional first-order kinetic model, the Weibull model, an nth-order model, and a combined discrete log-linear nth-order model. The models were solved to compare the resulting microbial inactivation distributions. The initial temperature of the system was set to 90 degrees C and pressure was selected at 600 MPa, holding for 220 s, with a target temperature of 121 degrees C. A representation of the extent of microbial inactivation throughout all processing steps was obtained for each microbial model. Comparison of the models showed that the conventional thermal processing kinetics (not accounting for pressure) required shorter holding times to achieve a 12D reduction of C. botulinum spores than the other models. The temperature distribution inside the vessel resulted in a more uniform inactivation distribution when using a Weibull or an nth-order kinetics model than when using log-linear kinetics. The CTFD platform could illustrate the inactivation extent and uniformity provided by the microbial models. The platform is expected to be useful for evaluating models fitted to new C. botulinum inactivation data at varying conditions of pressure and temperature, as an aid for regulatory filing of the technology as well as in process and equipment design.
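
    For reference, the two simplest of the four kinetic forms compared above can be written in a few lines; the D-value and Weibull parameters below are hypothetical placeholders, not the study's fitted values:

      import numpy as np

      def log10_survival_loglinear(t, d_value):
          # Traditional first-order (log-linear) kinetics: one decade per D units of time.
          return -t / d_value

      def log10_survival_weibull(t, delta, p):
          # Weibull model: concave (p < 1) or convex (p > 1) survival curves.
          return -(t / delta) ** p

      t = np.linspace(0.0, 220.0, 12)  # holding time, s
      print(log10_survival_loglinear(t, d_value=18.0))
      print(log10_survival_weibull(t, delta=25.0, p=0.7))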

  17. Removal of polycyclic aromatic hydrocarbons from aqueous solution by raw and modified plant residue materials as biosorbents.

    PubMed

    Xi, Zemin; Chen, Baoliang

    2014-04-01

    Removal of polycyclic aromatic hydrocarbons (PAHs), e.g., naphthalene, acenaphthene, phenanthrene and pyrene, from aqueous solution by raw and modified plant residues was investigated to develop low cost biosorbents for organic pollutant abatement. Bamboo wood, pine wood, pine needles and pine bark were selected as plant residues, and acid hydrolysis was used as a simple modification method. The raw and modified biosorbents were characterized by elemental analysis, Fourier transform infrared spectroscopy and scanning electron microscopy. The sorption isotherms of PAHs to raw biosorbents were apparently linear and were dominated by a partitioning process. In comparison, the isotherms of the hydrolyzed biosorbents displayed nonlinearity, which was controlled by partitioning and a specific interaction mechanism. The sorption kinetic curves of PAHs to the raw and modified plant residues were well fitted by the pseudo-second-order kinetic model. The sorption rates were faster for the raw biosorbents than for the corresponding hydrolyzed biosorbents, which was attributed to the latter having more condensed domains (i.e., exposed aromatic cores). Through consumption of the amorphous cellulose component during acid hydrolysis, the sorption capability of the hydrolyzed biosorbents was notably enhanced, i.e., 6-18 fold for phenanthrene, 6-8 fold for naphthalene and pyrene and 5-8 fold for acenaphthene. The sorption coefficients (Kd) were negatively correlated with the polarity index [(O+N)/C] and positively correlated with the aromaticity of the biosorbents. For a given biosorbent, a positive linear correlation between logKoc and logKow for different PAHs was observed. Interestingly, the linear plots of logKoc-logKow were parallel for different biosorbents. These observations suggest that the raw and modified plant residues have great potential as biosorbents to remove PAHs from wastewater. Copyright © 2014 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
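
    The pseudo-second-order fit mentioned above is usually done on the linearized form t/q_t = 1/(k₂q_e²) + t/q_e; a short sketch with hypothetical kinetic data:

      import numpy as np

      # Hypothetical contact times (min) and uptakes q_t (mg/g).
      t = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
      qt = np.array([2.1, 3.4, 4.8, 5.9, 6.6, 6.9])

      # Linearized pseudo-second-order model: t/qt = 1/(k2*qe**2) + t/qe.
      slope, intercept = np.polyfit(t, t / qt, 1)
      qe = 1.0 / slope                # equilibrium uptake, mg/g
      k2 = 1.0 / (intercept * qe**2)  # rate constant, g/(mg*min)
      print(qe, k2)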

  18. Assessing the Liquidity of Firms: Robust Neural Network Regression as an Alternative to the Current Ratio

    NASA Astrophysics Data System (ADS)

    de Andrés, Javier; Landajo, Manuel; Lorca, Pedro; Labra, Jose; Ordóñez, Patricia

    Artificial neural networks have proven to be useful tools for solving financial analysis problems such as financial distress prediction and audit risk assessment. In this paper we focus on the performance of robust (least absolute deviation-based) neural networks for measuring the liquidity of firms. The problem of learning the bivariate relationship between the components (namely, current liabilities and current assets) of the so-called current ratio is analyzed, and the predictive performance of several modelling paradigms (namely, linear and log-linear regressions, classical ratios and neural networks) is compared. An empirical analysis is conducted on a representative database from the Spanish economy. Results indicate that classical ratio models are largely inadequate as a realistic description of the studied relationship, especially when used for predictive purposes. In a number of cases, especially when the analyzed firms are microenterprises, the linear specification is improved upon by the flexible non-linear structures provided by neural networks.

  19. Infrared weak corrections to strongly interacting gauge boson scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciafaloni, Paolo; Urbano, Alfredo

    2010-04-15

    We evaluate the impact of electroweak corrections of infrared origin on strongly interacting longitudinal gauge boson scattering, calculating all-order resummed expressions at the double log level. As a working example, we consider the standard model with a heavy Higgs. At energies typical of forthcoming experiments (LHC, International Linear Collider, Compact Linear Collider), the corrections are in the 10%-40% range, with the relative sign depending on the initial state considered and on whether or not additional gauge boson emission is included. We conclude that the effect of radiative electroweak corrections should be included in the analysis of longitudinal gauge boson scattering.

  20. New method for calculating a mathematical expression for streamflow recession

    USGS Publications Warehouse

    Rutledge, Albert T.

    1991-01-01

    An empirical method has been devised to calculate the master recession curve, which is a mathematical expression for streamflow recession during times of negligible direct runoff. The method is based on the assumption that the storage-delay factor, which is the time per log cycle of streamflow recession, varies linearly with the logarithm of streamflow. The resulting master recession curve can be nonlinear. The method can be executed by a computer program that reads a data file of daily mean streamflow, then allows the user to select several near-linear segments of streamflow recession. The storage-delay factor for each segment is one of the coefficients of the equation that results from linear least-squares regression. Using results for each recession segment, a mathematical expression of the storage-delay factor as a function of the log of streamflow is determined by linear least-squares regression. The master recession curve, which is a second-order polynomial expression for time as a function of log of streamflow, is then derived using the coefficients of this function.
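
    A sketch of the procedure under stated assumptions (the segment data are hypothetical, and linear least squares stands in for the program's interactive segment selection):

      import numpy as np

      def storage_delay(q):
          # Days per log cycle for one near-linear recession segment:
          # the negative slope of time regressed on log10 of daily flow.
          t = np.arange(len(q), dtype=float)
          slope, _ = np.polyfit(np.log10(q), t, 1)
          return -slope

      segments = [np.array([50.0, 46.0, 42.5, 39.2, 36.2]),
                  np.array([8.0, 7.2, 6.5, 5.9, 5.3])]
      K = [storage_delay(q) for q in segments]
      logq = [np.mean(np.log10(q)) for q in segments]

      # Storage-delay factor as a linear function of log Q; integrating it
      # gives the master recession curve, a quadratic in log of streamflow.
      c1, c0 = np.polyfit(logq, K, 1)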

  1. Neurobehavioral Function in School-Age Children Exposed to Manganese in Drinking Water

    PubMed Central

    Oulhote, Youssef; Mergler, Donna; Barbeau, Benoit; Bellinger, David C.; Bouffard, Thérèse; Brodeur, Marie-Ève; Saint-Amour, Dave; Legrand, Melissa; Sauvé, Sébastien

    2014-01-01

    Background: Manganese neurotoxicity is well documented in individuals occupationally exposed to airborne particulates, but few data are available on risks from drinking-water exposure. Objective: We examined associations of exposure from concentrations of manganese in water and hair with memory, attention, motor function, and parent- and teacher-reported hyperactive behaviors. Methods: We recruited 375 children and measured manganese in home tap water (MnW) and hair (MnH). We estimated manganese intake from water ingestion. Using structural equation modeling, we estimated associations between neurobehavioral functions and MnH, MnW, and manganese intake from water. We evaluated exposure–response relationships using generalized additive models. Results: After adjusting for potential confounders, a 1-SD increase in log10 MnH was associated with a significant difference of –24% (95% CI: –36, –12%) SD in memory and –25% (95% CI: –41, –9%) SD in attention. The relations between log10 MnH and poorer memory and attention were linear. A 1-SD increase in log10 MnW was associated with a significant difference of –14% (95% CI: –24, –4%) SD in memory, and this relation was nonlinear, with a steeper decline in performance at MnW > 100 μg/L. A 1-SD increase in log10 manganese intake from water was associated with a significant difference of –11% (95% CI: –21, –0.4%) SD in motor function. The relation between log10 manganese intake and poorer motor function was linear. There was no significant association between manganese exposure and hyperactivity. Conclusion: Exposure to manganese in water was associated with poorer neurobehavioral performances in children, even at low levels commonly encountered in North America. Citation: Oulhote Y, Mergler D, Barbeau B, Bellinger DC, Bouffard T, Brodeur ME, Saint-Amour D, Legrand M, Sauvé S, Bouchard MF. 2014. Neurobehavioral function in school-age children exposed to manganese in drinking water. Environ Health Perspect 122:1343–1350; http://dx.doi.org/10.1289/ehp.1307918 PMID:25260096

  2. QSPR models for predicting generator-column-derived octanol/water and octanol/air partition coefficients of polychlorinated biphenyls.

    PubMed

    Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-06-01

    Octanol/water (K(OW)) and octanol/air (K(OA)) partition coefficients are two important physicochemical properties of organic substances. In current practice, K(OW) and K(OA) values of some polychlorinated biphenyls (PCBs) are measured using the generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative, replacing or reducing experimental steps in the determination of K(OW) and K(OA). In this paper, two different methods, i.e., multiple linear regression based on Dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log K(OW) and log K(OA) values of PCBs. The predictive ability of the developed models was validated using a test set, and the performances of all generated models were compared with those of three previously reported models. All results indicated that the proposed models were robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K(OW) and K(OA) of PCBs. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density-weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.
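
    Log-linear analysis of such a contingency table is commonly run as a Poisson regression on the cell counts; a minimal sketch with hypothetical counts (statsmodels, with the interaction term left as the comparison of interest):

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical true-positive counts by sequence and lesion region.
      df = pd.DataFrame({
          "sequence": ["FLAIR"] * 3 + ["PD"] * 3 + ["T2"] * 3,
          "region": ["cortical", "periventricular", "deep"] * 3,
          "count": [88, 84, 79, 71, 74, 66, 73, 75, 69],
      })

      # Main-effects log-linear model; adding "sequence:region" and comparing
      # deviances tests the sequence-location interaction reported above.
      fit = smf.glm("count ~ sequence + region", data=df,
                    family=sm.families.Poisson()).fit()
      print(fit.deviance)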

  4. Forest fragmentation and selective logging have inconsistent effects on multiple animal-mediated ecosystem processes in a tropical forest.

    PubMed

    Schleuning, Matthias; Farwig, Nina; Peters, Marcell K; Bergsdorf, Thomas; Bleher, Bärbel; Brandl, Roland; Dalitz, Helmut; Fischer, Georg; Freund, Wolfram; Gikungu, Mary W; Hagen, Melanie; Garcia, Francisco Hita; Kagezi, Godfrey H; Kaib, Manfred; Kraemer, Manfred; Lung, Tobias; Naumann, Clas M; Schaab, Gertrud; Templin, Mathias; Uster, Dana; Wägele, J Wolfgang; Böhning-Gaese, Katrin

    2011-01-01

    Forest fragmentation and selective logging are two main drivers of global environmental change and modify biodiversity and environmental conditions in many tropical forests. The consequences of these changes for the functioning of tropical forest ecosystems have rarely been explored in a comprehensive approach. In a Kenyan rainforest, we studied six animal-mediated ecosystem processes and recorded species richness and community composition of all animal taxa involved in these processes. We used linear models and a formal meta-analysis to test whether forest fragmentation and selective logging affected ecosystem processes and biodiversity and used structural equation models to disentangle direct from biodiversity-related indirect effects of human disturbance on multiple ecosystem processes. Fragmentation increased decomposition and reduced antbird predation, while selective logging consistently increased pollination, seed dispersal and army-ant raiding. Fragmentation modified species richness or community composition of five taxa, whereas selective logging did not affect any component of biodiversity. Changes in the abundance of functionally important species were related to lower predation by antbirds and higher decomposition rates in small forest fragments. The positive effects of selective logging on bee pollination, bird seed dispersal and army-ant raiding were direct, i.e. not related to changes in biodiversity, and were probably due to behavioural changes of these highly mobile animal taxa. We conclude that animal-mediated ecosystem processes respond in distinct ways to different types of human disturbance in Kakamega Forest. Our findings suggest that forest fragmentation affects ecosystem processes indirectly by changes in biodiversity, whereas selective logging influences processes directly by modifying local environmental conditions and resource distributions. The positive to neutral effects of selective logging on ecosystem processes show that the functionality of tropical forests can be maintained in moderately disturbed forest fragments. Conservation concepts for tropical forests should thus include not only remaining pristine forests but also functionally viable forest remnants.

  6. Evaluation of third-degree and fourth-degree laceration rates as quality indicators.

    PubMed

    Friedman, Alexander M; Ananth, Cande V; Prendergast, Eri; D'Alton, Mary E; Wright, Jason D

    2015-04-01

    To examine the patterns and predictors of third-degree and fourth-degree laceration in women undergoing vaginal delivery. We identified a population-based cohort of women in the United States who underwent a vaginal delivery between 1998 and 2010 using the Nationwide Inpatient Sample. Multivariable log-linear regression models were developed to account for patient, obstetric, and hospital factors related to lacerations. Between-hospital variability of laceration rates was calculated using generalized log-linear mixed models. Among 7,096,056 women who underwent vaginal delivery in 3,070 hospitals, 3.3% (n=232,762) had a third-degree laceration and 1.1% (n=76,347) had a fourth-degree laceration. In an adjusted model for fourth-degree lacerations, important risk factors included shoulder dystocia and forceps and vacuum deliveries with and without episiotomy. Other demographic, obstetric, medical, and hospital variables, although statistically significant, were not major determinants of lacerations. Risk factors in a multivariable model for third-degree lacerations were similar to those in the fourth-degree model. Regression analysis of hospital rates (n=3,070) of lacerations demonstrated limited between-hospital variation. Risk of third-degree and fourth-degree laceration was most strongly related to operative delivery and shoulder dystocia. Between-hospital variation was limited. Given these findings and that the most modifiable practice related to lacerations would be reduction in operative vaginal deliveries (and a possible increase in cesarean delivery), third-degree and fourth-degree laceration rates may be a quality metric of limited utility.

  7. Information-geometric measures as robust estimators of connection strengths and external inputs.

    PubMed

    Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi

    2009-08-01

    Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
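
    For the two-neuron case, the information-geometric coordinates have closed forms in the joint firing probabilities; a sketch (the probability table is hypothetical):

      import numpy as np

      def theta_measures(p):
          # Two-neuron log-linear model:
          #   log p(x1, x2) = th1*x1 + th2*x2 + th12*x1*x2 - psi.
          # p is a 2x2 array with p[x1, x2] the joint probabilities.
          th1 = np.log(p[1, 0] / p[0, 0])
          th2 = np.log(p[0, 1] / p[0, 0])
          th12 = np.log(p[1, 1] * p[0, 0] / (p[1, 0] * p[0, 1]))
          return th1, th2, th12

      # Independent neurons give th12 = 0; excess coincident firing makes it positive.
      p = np.array([[0.5, 0.2],
                    [0.2, 0.1]])
      print(theta_measures(p))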

  8. Entropy Conservation of Linear Dilaton Black Holes in Quantum Corrected Hawking Radiation

    NASA Astrophysics Data System (ADS)

    Sakalli, I.; Halilsoy, M.; Pasaoglu, H.

    2011-10-01

    It has been shown recently that information is lost in the Hawking radiation of the linear dilaton black holes in various theories when applying the tunneling formalism of Parikh and Wilczek without considering quantum gravity effects. In this paper, we recalculate the emission probability by taking into account the log-area correction to the Bekenstein-Hawking entropy and the statistical correlation between quanta emitted. The crucial role of the quantum gravity effects on the information leakage and black hole remnant is highlighted. The entropy conservation of the linear dilaton black holes is discussed in detail. We also model the remnant as an extreme linear dilaton black hole with a pointlike horizon in order to show that such a remnant cannot radiate and its temperature becomes zero. In summary, we show that the information can also leak out of the linear dilaton black holes together with preserving unitarity in quantum mechanics.

  9. Gender and single nucleotide polymorphisms in MTHFR, BHMT, SPTLC1, CRBP2R, and SCARB1 are significant predictors of plasma homocysteine normalized by RBC folate in healthy adults.

    USDA-ARS?s Scientific Manuscript database

    Using linear regression models, we studied the main and two-way interaction effects of the predictor variables gender, age, BMI, and 64 folate/vitamin B-12/homocysteine/lipid/cholesterol-related single nucleotide polymorphisms (SNP) on log-transformed plasma homocysteine normalized by red blood cell...

  10. Selected vitamin D metabolic gene variants and risk for autism spectrum disorder in the CHARGE Study.

    PubMed

    Schmidt, Rebecca J; Hansen, Robin L; Hartiala, Jaana; Allayee, Hooman; Sconberg, Jaime L; Schmidt, Linda C; Volk, Heather E; Tassone, Flora

    2015-08-01

    Vitamin D is essential for proper neurodevelopment and cognitive and behavioral function. We examined associations between autism spectrum disorder (ASD) and common, functional polymorphisms in vitamin D pathways. Children aged 24-60 months enrolled from 2003 to 2009 in the population-based CHARGE case-control study were evaluated clinically and confirmed to have ASD (n=474) or typical development (TD, n=281). Maternal, paternal, and child DNA samples for 384 (81%) families of children with ASD and 234 (83%) families of TD children were genotyped for: TaqI, BsmI, FokI, and Cdx2 in the vitamin D receptor (VDR) gene, and CYP27B1 rs4646536, GC rs4588, and CYP2R1 rs10741657. Case-control logistic regression, family-based log-linear, and hybrid log-linear analyses were conducted to produce risk estimates and 95% confidence intervals (CI) for each allelic variant. Paternal VDR TaqI homozygous variant genotype was significantly associated with ASD in case-control analysis (odds ratio [OR] [CI]: 6.3 [1.9-20.7]) and there was a trend towards increased risk associated with VDR BsmI (OR [CI]: 4.7 [1.6-13.4]). Log-linear triad analyses detected parental imprinting, with greater effects of paternally-derived VDR alleles. Child GC AA-genotype/A-allele was associated with ASD in log-linear and ETDT analyses. A significant association between decreased ASD risk and child CYP2R1 AA-genotype was found in hybrid log-linear analysis. There were limitations of low statistical power for less common alleles due to missing paternal genotypes. This study provides preliminary evidence that paternal and child vitamin D metabolism could play a role in the etiology of ASD; further research in larger study populations is warranted. Copyright © 2015. Published by Elsevier Ireland Ltd.

  11. A Model for Predicting Spring Emergence of Monochamus saltuarius (Coleoptera: Cerambycidae) from Korean white pine, Pinus koraiensis.

    PubMed

    Jung, Chan Sik; Koh, Sang-Hyun; Nam, Youngwoo; Ahn, Jeong Joon; Lee, Cha Young; Choi, Won I L

    2015-08-01

    Monochamus saltuarius Gebler is a vector that transmits the pine wood nematode, Bursaphelenchus xylophilus, to Korean white pine, Pinus koraiensis, in Korea. To reduce the damage caused by this nematode in pine forests, timely control measures are needed to suppress the cerambycid beetle population. This study sought to construct a forecasting model to predict beetle emergence based on spring temperature. Logs of Korean white pine were infested with M. saltuarius in 2009, and the infested logs were overwintered. In February 2010, the infested logs were moved into incubators held at constant temperatures of 16, 20, 23, 25, 27, 30 or 34°C until all adults had emerged. The developmental rate of the beetles was estimated by linear and nonlinear equations, and a forecasting model for emergence of the beetle was constructed by pooling data based on normalized developmental rate. The lower threshold temperature for development was 8.3°C. The forecasting model predicted the emergence pattern of M. saltuarius collected from four areas in the northern Republic of Korea relatively well. The median emergence dates predicted by the model were 2.2-5.9 d earlier than the observed median dates. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
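
    A minimal degree-day sketch of such a forecasting model, using the paper's 8.3°C lower threshold; the thermal constant of 400 degree-days is a hypothetical stand-in for the fitted requirement:

      def predicted_emergence_day(daily_mean_temp_c, t_base=8.3, thermal_constant=400.0):
          # Accumulate heat units above the lower developmental threshold and
          # report the first day the (assumed) requirement is met.
          degree_days = 0.0
          for day, temp in enumerate(daily_mean_temp_c, start=1):
              degree_days += max(0.0, temp - t_base)
              if degree_days >= thermal_constant:
                  return day
          return None  # requirement not met within the record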

  12. A Simple Model for the Viscosity of Rhyolites as a Function of Temperature, Pressure and Water Content: Implications for Obsidian Flow Emplacement

    NASA Astrophysics Data System (ADS)

    Whittington, A. G.; Romine, W. L.

    2014-12-01

    Understanding the dynamics of rhyolitic conduits and lava flows requires precise knowledge of how viscosity (η) varies with temperature (T), pressure (P) and volatile content (X). In order to address the paucity of viscosity data for high-silica rhyolite at low water contents, which represent water saturation at near-surface conditions, we made 245 viscosity measurements on Mono Craters (California) rhyolites containing between 0.01 and 1.1 wt.% H2O, at temperatures between 796 and 1774 K, using parallel plate and concentric cylinder methods at atmospheric pressure. We then developed and calibrated a new empirical model for the log of the viscosity of rhyolitic melts, where non-linear variations due to temperature and water content are nested within a linear dependence of log η on P. The model was fitted to a total of 563 data points: our 245 new data, 255 published data from rhyolites across a wide P-T-X space, and 63 data on haplogranitic and granitic melts under high P-T conditions. Statistically insignificant parameters were eliminated from the model in an effort to increase parsimony, and the final model is simple enough for use in numerical models of conduit or lava flow dynamics: log η = −5.142 + (13080 − 2982 log(w + 0.229)) / (T − (98.9 − 175.9 log(w + 0.229))) − P(0.0007 − 0.76/T), where η is in Pa s, w is water content in wt.%, P is in MPa and T is in K. The root mean square deviation (rmsd) between the model predictions and the 563 data points used in calibration is 0.39 log units. Experimental constraints have led previously to spurious correlations between P, T, X and η in viscosity data sets, so that predictive models may struggle to correctly resolve the individual effects of P, T and X, and especially their cross-correlations. The increasing water solubility with depth inside a simple isothermal sheet of obsidian suggests that viscosity should decrease by ~1 order of magnitude at ~20 m depth and by ~2 orders of magnitude at ~100 m depth. If equilibrium water contents are maintained, then deformation in spreading obsidian flows should be strongly partitioned into the deeper parts of the flow. Kinetically inhibited degassing, or recycling of degassed crust into a flow interior (e.g. by caterpillar-tread motion), could lead to strong lateral variations in viscosity within a flow, affecting flow evolution and morphology.
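
    The final model transcribes directly into code; a sketch assuming, as is conventional for melt viscosity models, that the logs are base 10:

      import math

      def log10_viscosity(T, w, P):
          # Rhyolite melt viscosity model from the abstract.
          # T in K, w in wt.% H2O, P in MPa; returns log10 of viscosity in Pa s.
          lw = math.log10(w + 0.229)
          return (-5.142
                  + (13080.0 - 2982.0 * lw) / (T - (98.9 - 175.9 * lw))
                  - P * (0.0007 - 0.76 / T))

      print(log10_viscosity(T=1100.0, w=0.5, P=0.1))  # ~8.7 for a damp near-surface melt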

  13. Factors relating to windblown dust in associations between ...

    EPA Pesticide Factsheets

    Introduction: In effect estimates of city-specific PM2.5-mortality associations across the United States (US), there exists a substantial amount of spatial heterogeneity. Some of this heterogeneity may be due to the mass distribution of PM; areas where PM2.5 is likely to be dominated by large size fractions (above 1 micron; e.g., the contribution of windblown dust) may have a weaker association with mortality. Methods: Log rate ratios (betas) for the PM2.5-mortality association—derived from a model adjusting for time, an interaction with age-group, day of week, and natural splines of current temperature, current dew point, and unconstrained temperature at lags 1, 2, and 3, for 313 core-based statistical areas (CBSA) and their metropolitan divisions (MD) over 1999-2005—were used as the outcome. Using inverse variance weighted linear regression, we examined the change in log rate ratios in association with the PM10-PM2.5 correlation as a marker of windblown dust/higher PM size fraction; linearity of associations was assessed in models using splines with knots at quintile values. Results: The weighted mean PM2.5 association (0.96 percent increase in total non-accidental mortality for a 10 ug/m3 increment in PM2.5) increased by 0.34 (95% confidence interval: 0.20, 0.48) per interquartile change (0.25) in the PM10-PM2.5 correlation, and explained approximately 8% of the observed heterogeneity; the association was linear based on spline analysis. Conclusions: Preliminary results pro

  14. Air-sea exchange and gas-particle partitioning of polycyclic aromatic hydrocarbons over the northwestern Pacific Ocean: Role of East Asian continental outflow

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Guo, Z.

    2017-12-01

    We measured 15 parent polycyclic aromatic hydrocarbons (PAHs) in the atmosphere and water during a research cruise from the East China Sea (ECS) to the northwestern Pacific Ocean (NWP) in the spring of 2015 to investigate the occurrence, air-sea gas exchange, and gas-particle partitioning of PAHs, with a particular focus on the influence of East Asian continental outflow. The gaseous PAH composition and identification of sources were consistent with PAHs from the upwind area, indicating that the gaseous PAHs (three- to five-ring PAHs) were influenced by upwind land pollution. In addition, air-sea exchange fluxes of gaseous PAHs were estimated to be -54.2 to 107.4 ng m⁻² d⁻¹, indicative of variations in land-based PAH inputs. The logarithmic gas-particle partition coefficient (logKp) of PAHs regressed linearly against the logarithmic subcooled liquid vapor pressure, with a slope of -0.25. This was significantly larger than the theoretical value (-1), implying disequilibrium between the gaseous and particulate PAHs over the NWP. The non-equilibrium of PAH gas-particle partitioning was attributed to the volatilization of three-ring gaseous PAHs from seawater and to lower soot concentrations, particularly when oceanic air masses prevailed. Modeling PAH absorption into organic matter and adsorption onto soot carbon revealed that the status of PAH gas-particle partitioning deviated more from the modeled Kp for oceanic air masses than for continental air masses, which coincided with higher volatilization of three-ring PAHs and confirmed the influence of air-sea exchange. Meanwhile, significant linear regressions between logKp and logKoa (logKsa) were observed for continental air masses, suggesting the dominant effect of East Asian continental outflow on atmospheric PAHs over the NWP during the sampling campaign.

  15. Integrating models that depend on variable data

    NASA Astrophysics Data System (ADS)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
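
    The two approaches can be contrasted on synthetic data with a constant coefficient of variation, loosely mirroring the setup above (the data-generating process and numbers are illustrative):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)

      # Dependent variable spans ~4 orders of magnitude; sd proportional to the true value.
      x = 10 ** rng.uniform(-2, 2, 200)
      y = 3.0 * x * (1 + 0.2 * rng.standard_normal(200))

      # Option 1: log-transform and use ordinary least squares.
      fit_log = sm.OLS(np.log10(y), sm.add_constant(np.log10(x))).fit()

      # Option 2: raw units with error-based weights (1/variance, variance ~ y^2).
      fit_wls = sm.WLS(y, sm.add_constant(x), weights=1.0 / (3.0 * x) ** 2).fit()

      print(fit_log.params)  # ~[log10(3), 1]
      print(fit_wls.params)  # ~[0, 3]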

  16. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychophysical data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.

  17. Self assembly of rectangular shapes on concentration programming and probabilistic tile assembly models

    PubMed Central

    Rajasekaran, Sanguthevar

    2013-01-01

    Efficient tile sets for self assembling rectilinear shapes are of critical importance in algorithmic self assembly. A lower bound on the tile complexity of any deterministic self assembly system for an n × n square is Ω(log(n)/log(log(n))) (inferred from Kolmogorov complexity). Deterministic self assembly systems with an optimal tile complexity have been designed for squares and related shapes in the past. However, designing Θ(log(n)/log(log(n))) unique tiles specific to a shape is still an intensive task in the laboratory. On the other hand, copies of a tile can be made rapidly using PCR (polymerase chain reaction) experiments. This led to the study of self assembly on tile concentration programming models. We present two major results in this paper on the concentration programming model. First we show how to self assemble rectangles with a fixed aspect ratio (α:β), with high probability, using Θ(α + β) tiles. This result is much stronger than the existing results by Kao et al. (Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008) and Doty (Randomized self-assembly for exact shapes. In: proceedings of the 50th annual IEEE symposium on foundations of computer science (FOCS), IEEE, Atlanta. pp 85–94, 2009)—which can only self assemble squares and rely on tiles which perform binary arithmetic. On the other hand, our result is based on a technique called staircase sampling. This technique eliminates the need for sub-tiles which perform binary arithmetic, reduces the constant in the asymptotic bound, and eliminates the need for approximate frames (Kao et al. Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008). Our second result applies staircase sampling on the equimolar concentration programming model (The tile complexity of linear assemblies. In: proceedings of the 36th international colloquium automata, languages and programming: Part I on ICALP ’09, Springer-Verlag, pp 235–253, 2009), to self assemble rectangles (of fixed aspect ratio) with high probability. The tile complexity of our algorithm is Θ(log(n)) and is optimal on the probabilistic tile assembly model (PTAM)—n being an upper bound on the dimensions of a rectangle. PMID:24311993

  18. Hawaiian forest bird trends: using log-linear models to assess long-term trends is supported by model diagnostics and assumptions (reply to Freed and Cann 2013)

    USGS Publications Warehouse

    Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.

    2014-01-01

    Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.

  19. Predicting the bioconcentration factor of highly hydrophobic organic chemicals.

    PubMed

    Garg, Rajni; Smith, Carr J

    2014-07-01

    Bioconcentration refers to the process of uptake and buildup of chemicals in living organisms. Experimental measurement of the bioconcentration factor (BCF) is time-consuming and expensive, and is not feasible for the large number of chemicals of regulatory concern. Quantitative structure-activity relationship (QSAR) models are used for estimating BCF values to help in the risk assessment of a chemical. This paper presents the results of a QSAR study conducted to address an important problem encountered in the prediction of the BCF of highly hydrophobic chemicals. A new QSAR model is derived using a dataset of diverse organic chemicals previously tested in a United States Environmental Protection Agency laboratory. It is noted that the linear relationship between the BCF and the hydrophobic parameter, i.e., the calculated octanol-water partition coefficient (ClogP), breaks down for highly hydrophobic chemicals. The parabolic QSAR equation, log BCF = 3.036 ClogP − 0.197 ClogP² − 0.808 MgVol (n=28, r²=0.817, q²=0.761, s=0.558) (experimental log BCF range = 0.44-5.29, ClogP range = 3.16-11.27), suggests that a non-linear relationship between BCF and the hydrophobic parameter, along with inclusion of additional molecular size, weight and/or volume parameters, should be considered while developing a QSAR model for more reliable prediction of the BCF of highly hydrophobic chemicals. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
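
    The parabolic equation is easy to exercise; a sketch (the MgVol inputs are hypothetical McGowan volumes):

      def log_bcf(clogp, mgvol):
          # Parabolic QSAR from the abstract; the quadratic term rolls the curve
          # over near ClogP of about 7.7, capturing the breakdown of the linear trend.
          return 3.036 * clogp - 0.197 * clogp**2 - 0.808 * mgvol

      print(log_bcf(6.0, 1.8))   # rising limb
      print(log_bcf(10.0, 2.6))  # past the maximum, prediction turns down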

  20. A quantitative structure-activity relationship to predict efficacy of granular activated carbon adsorption to control emerging contaminants.

    PubMed

    Kennicutt, A R; Morkowchuk, L; Krein, M; Breneman, C M; Kilduff, J E

    2016-08-01

    A quantitative structure-activity relationship was developed to predict the efficacy of carbon adsorption as a control technology for endocrine-disrupting compounds, pharmaceuticals, and components of personal care products, as a tool for water quality professionals to protect public health. Here, we expand previous work to investigate a broad spectrum of molecular descriptors including subdivided surface areas, adjacency and distance matrix descriptors, electrostatic partial charges, potential energy descriptors, conformation-dependent charge descriptors, and Transferable Atom Equivalent (TAE) descriptors that characterize the regional electronic properties of molecules. We compare the efficacy of linear (Partial Least Squares) and non-linear (Support Vector Machine) machine learning methods to describe a broad chemical space and produce a user-friendly model. We employ cross-validation, y-scrambling, and external validation for quality control. The recommended Support Vector Machine model trained on 95 compounds having 23 descriptors offered a good balance between good performance statistics, low error, and low probability of over-fitting while describing a wide range of chemical features. The cross-validated model using a log-uptake (q_e) response calculated at an aqueous equilibrium concentration (C_e) of 1 μM described the training dataset with an r² of 0.932, had a cross-validated r² of 0.833, and an average residual of 0.14 log units.

  1. Measurement of Setschenow constants for six hydrophobic compounds in simulated brines and use in predictive modeling for oil and gas systems.

    PubMed

    Burant, Aniela; Lowry, Gregory V; Karamalidis, Athanasios K

    2016-02-01

    Treatment and reuse of brines produced from energy extraction activities requires aqueous solubility data for organic compounds in saline solutions. The presence of salts decreases the aqueous solubility of organic compounds (i.e. the salting-out effect) and can be modeled using the Setschenow Equation, the validity of which had not been assessed at high salt concentrations. In this study, we used solid-phase microextraction to determine Setschenow constants for selected organic compounds in aqueous solutions up to 2-5 M NaCl, 1.5-2 M CaCl2, and in Na-Ca binary electrolyte solutions to assess additivity of the constants. These compounds exhibited log-linear behavior up to these high NaCl concentrations. Log-linear decreases in solubility with increasing salt concentration were observed up to 1.5-2 M CaCl2 for all compounds, adding to a sparse database of CaCl2 Setschenow constants. Setschenow constants were additive in binary electrolyte mixtures. New models to predict CaCl2 and KCl Setschenow constants from NaCl Setschenow constants were developed, which successfully predicted the solubility of the compounds measured in this study. Overall, the data show that the Setschenow Equation is valid for the wide range of salinity conditions typically found in energy-related technologies. Copyright © 2015 Elsevier Ltd. All rights reserved.
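
    A worked form of the Setschenow Equation for a single salt, with illustrative numbers:

      def solubility_in_brine(s0, ks, c_salt):
          # Setschenow Equation: log10(S0 / S) = Ks * Cs, so S = S0 / 10**(Ks*Cs).
          # s0: solubility in pure water; ks: Setschenow constant (L/mol);
          # c_salt: molar salt concentration.
          return s0 / 10 ** (ks * c_salt)

      # A hypothetical Ks(NaCl) of 0.3 L/mol in a 2 M brine gives a ~4-fold
      # solubility reduction: 10**(0.3 * 2) is about 3.98.
      print(solubility_in_brine(100.0, 0.3, 2.0))  # ~25.1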

  2. Concentration Effects of Polymer Electrolyte Membrane Degradation Products on Oxygen Reduction Activity for Three Platinum Catalysts

    DOE PAGES

    Christ, J. M.; Neyerlin, K. C.; Richards, R.; ...

    2014-10-04

    A rotating disk electrode (RDE), along with cyclic voltammetry (CV) and linear sweep voltammetry (LSV), was used to investigate the impact of two model compounds representing degradation products of Nafion and 3M perfluorinated sulfonic acid membranes on the electrochemical surface area (ECA) and oxygen reduction reaction (ORR) activity of polycrystalline Pt, nano-structured thin film (NSTF) Pt (3M), and Pt/Vulcan carbon (Pt/Vu) (TKK) electrodes. ORR kinetic currents (measured at 0.9 V and transport corrected) were found to decrease linearly with the log of concentration for both model compounds on all Pt surfaces studied. Ultimately, model compound adsorption effects on ECA were more abstruse, owing to competitive organic anion adsorption on Pt surfaces superimposing on the hydrogen underpotential deposition (HUPD) region.

  3. Normal reference values for bladder wall thickness on CT in a healthy population.

    PubMed

    Fananapazir, Ghaneh; Kitich, Aleksandar; Lamba, Ramit; Stewart, Susan L; Corwin, Michael T

    2018-02-01

    To determine normal bladder wall thickness on CT in patients without bladder disease. Four hundred and nineteen patients presenting for trauma with normal CTs of the abdomen and pelvis were included in our retrospective study. Bladder wall thickness was assessed, and bladder volume was measured using both the ellipsoid formula and an automated technique. Patient age, gender, and body mass index were recorded. Linear regression models were created to account for bladder volume, age, gender, and body mass index, and the multiple correlation coefficient with bladder wall thickness was computed. Bladder volume and bladder wall thickness were log-transformed to achieve approximate normality and homogeneity of variance. Variables that did not contribute substantively to the model were excluded, a parsimonious model was created, and the multiple correlation coefficient was calculated. Expected bladder wall thickness was estimated for different bladder volumes, and 1.96 standard deviations above the expected value provided the upper limit of normal on the log scale. Age, gender, and bladder volume were associated with bladder wall thickness (p = 0.049, 0.024, and < 0.001, respectively). The linear regression model had an R² of 0.52. Age and gender were negligible in contribution to the model, and a parsimonious model using only volume was created for both the ellipsoid and automated volumes (R² = 0.52 and 0.51, respectively). Bladder wall thickness correlates with bladder volume. The study provides reference bladder wall thicknesses on CT utilizing both the ellipsoid formula and automated bladder volumes.

  4. Effect of initial microbial density on inactivation of Giardia muris by ozone.

    PubMed

    Haas, Charles N; Kaymak, Baris

    2003-07-01

    Inactivation of microorganisms by disinfectants frequently shows non-linear behavior on a semilogarithmic plot of log survival ratio versus time. A number of models have been developed to depict these deviations from Chick's Law. Some of the models predict that the log survival ratio (at a particular disinfectant dose and contact time, even in absence of demand) would be a function of the initial concentration of microorganisms (N0), while other models do not predict such an effect. The effect of N0 on the survival ratio had not previously been deliberately tested. This work examined the inactivation of Giardia muris by ozone in batch systems, deliberately varying the disinfectant dose and N0. It was found that the models predicting a dependency of survival on N0 gave a better description of the data than models that did not predict such a dependency. Hence there is an apparent decrease in disinfection efficiency of ozone against Giardia muris (at pH 8 and 15 degrees C) as the initial microorganism concentration decreases. This phenomenon should be taken into account both by disinfection researchers and by process design engineers.

  5. Estimating Grass-Soil Bioconcentration of Munitions Compounds from Molecular Structure.

    PubMed

    Torralba Sanchez, Tifany L; Liang, Yuzhen; Di Toro, Dominic M

    2017-10-03

    A partitioning-based model is presented to estimate the bioconcentration of five munitions compounds and two munition-like compounds in grasses. The model uses polyparameter linear free energy relationships (pp-LFERs) to estimate the partition coefficients between soil organic carbon and interstitial water and between interstitial water and the plant cuticle, a lipid-like plant component. Inputs for the pp-LFERs are a set of numerical descriptors, computed from molecular structure only, that characterize the molecular properties determining the interaction with soil organic carbon, interstitial water, and plant cuticle. The model is validated by predicting concentrations measured in whole plants during independent uptake experiments, with a root-mean-square error between log predicted and log observed plant concentrations of 0.429. This highlights the dominant role of partitioning between the exposure medium and the plant cuticle in the bioconcentration of these compounds. The pp-LFERs can be used to assess the environmental risk of munitions compounds and munition-like compounds using only their molecular structure as input.

  6. Mineral content prediction for unconventional oil and gas reservoirs based on logging data

    NASA Astrophysics Data System (ADS)

    Maojin, Tan; Youlong, Zou; Guoyue

    2012-09-01

    Coalbed methane and shale oil & gas are both important unconventional oil and gas resources. Their reservoirs are typically non-linear, with complex and variable mineral components, so logging-data interpretation models for calculating mineral contents are difficult to establish, and empirical formulas cannot be constructed because of the varied mineralogy. Radial basis function (RBF) network analysis is a method developed in recent years; the technique can generate a smooth continuous function of several variables to approximate an unknown forward model. First, the basic principles of the RBF network are discussed, including the network construction and basis functions, and the network training by an adjacent clustering algorithm is described in detail. The RBF interpolation method is then used to predict multiple mineral component contents from well-logging data. For coalbed methane reservoirs, the RBF method is used to calculate contents such as ash, volatile matter, and carbon, achieving a mapping from various logging data to multiple minerals. For shale gas reservoirs, the RBF method can be used to predict clay, quartz, feldspar, carbonate, and pyrite contents. Various tests in coalbed and gas shale show the method is effective and applicable for predicting mineral component contents.
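
    As a concrete illustration of the approach, an RBF interpolator can be fit to map several normalized log responses to a mineral fraction. This sketch uses SciPy's general-purpose RBFInterpolator on synthetic data; the log channels, kernel choice, and training values are assumptions for illustration, not the authors' setup.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Hypothetical training set: each row is a vector of normalized log
    # responses (e.g. density, neutron, gamma ray); the target is a
    # synthetic clay fraction.
    rng = np.random.default_rng(1)
    logs = rng.uniform(0.0, 1.0, size=(200, 3))
    clay = (0.5 * logs[:, 0] + 0.3 * np.sin(3.0 * logs[:, 1])
            + 0.05 * rng.standard_normal(200))

    # Gaussian kernel with light smoothing; epsilon sets the kernel width.
    model = RBFInterpolator(logs, clay, kernel="gaussian",
                            epsilon=1.0, smoothing=1e-3)
    print(model(rng.uniform(0.0, 1.0, size=(5, 3))))  # predicted fractions
    ```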

  7. Quantifying Differences in the Impact of Variable Chemistry on Equilibrium Uranium(VI) Adsorption Properties of Aquifer Sediments

    PubMed Central

    2011-01-01

    Uranium adsorption–desorption on sediment samples collected from the Hanford 300-Area, Richland, WA varied extensively over a range of field-relevant chemical conditions, complicating assessment of possible differences in equilibrium adsorption properties. Adsorption equilibrium was achieved in 500–1000 h although dissolved uranium concentrations increased over thousands of hours owing to changes in aqueous chemical composition driven by sediment-water reactions. A nonelectrostatic surface complexation reaction, >SOH + UO₂²⁺ + 2CO₃²⁻ = >SOUO₂(CO₃HCO₃)²⁻, provided the best fit to experimental data for each sediment sample resulting in a range of conditional equilibrium constants (logKc) from 21.49 to 21.76. Potential differences in uranium adsorption properties could be assessed in plots based on the generalized mass-action expressions yielding linear trends displaced vertically by differences in logKc values. Using this approach, logKc values for seven sediment samples were not significantly different. However, a significant difference in adsorption properties between one sediment sample and the fines (<0.063 mm) of another could be demonstrated despite the fines requiring a different reaction stoichiometry. Estimates of logKc uncertainty were improved by capturing all data points within experimental errors. The mass-action expression plots demonstrate that applying models outside the range of conditions used in model calibration greatly increases potential errors. PMID:21923109

  8. Quantifying differences in the impact of variable chemistry on equilibrium Uranium(VI) adsorption properties of aquifer sediments.

    PubMed

    Stoliker, Deborah L; Kent, Douglas B; Zachara, John M

    2011-10-15

    Uranium adsorption-desorption on sediment samples collected from the Hanford 300-Area, Richland, WA varied extensively over a range of field-relevant chemical conditions, complicating assessment of possible differences in equilibrium adsorption properties. Adsorption equilibrium was achieved in 500-1000 h although dissolved uranium concentrations increased over thousands of hours owing to changes in aqueous chemical composition driven by sediment-water reactions. A nonelectrostatic surface complexation reaction, >SOH + UO₂²⁺ + 2CO₃²⁻ = >SOUO₂(CO₃HCO₃)²⁻, provided the best fit to experimental data for each sediment sample resulting in a range of conditional equilibrium constants (logKc) from 21.49 to 21.76. Potential differences in uranium adsorption properties could be assessed in plots based on the generalized mass-action expressions yielding linear trends displaced vertically by differences in logKc values. Using this approach, logKc values for seven sediment samples were not significantly different. However, a significant difference in adsorption properties between one sediment sample and the fines (<0.063 mm) of another could be demonstrated despite the fines requiring a different reaction stoichiometry. Estimates of logKc uncertainty were improved by capturing all data points within experimental errors. The mass-action expression plots demonstrate that applying models outside the range of conditions used in model calibration greatly increases potential errors.

  9. Modeling near wall effects in second moment closures by elliptic relaxation

    NASA Technical Reports Server (NTRS)

    Laurence, D.; Durbin, P.

    1994-01-01

    The elliptic relaxation model of Durbin (1993) for modeling near-wall turbulence using second moment closures (SMC) is compared to DNS data for a channel flow at Re_t = 395. The agreement for second order statistics and even the terms in their balance equation is quite satisfactory, confirming that very little viscous effects (via Kolmogoroff scales) need to be added to the high Reynolds versions of SMC for near-wall-turbulence. The essential near-wall feature is thus the kinematic blocking effect that a solid wall exerts on the turbulence through the fluctuating pressure, which is best modeled by an elliptic operator. Above the transition layer, the effect of the original elliptic operator decays rapidly, and it is suggested that the log-layer is better reproduced by adding a non-homogeneous reduction of the return to isotropy, the gradient of the turbulent length scale being used as a measure of the inhomogeneity of the log-layer. The elliptic operator was quite easily applied to the non-linear Craft & Launder pressure-strain model yielding an improved distinction between the spanwise and wall normal stresses, although at higher Reynolds number (Re) and away from the wall, the streamwise component is severely underpredicted, as well as the transition in the mean velocity from the log to the wake profiles. In this area a significant change of behavior was observed in the DNS pressure-strain term, entirely ignored in the models.

  10. Modeling near wall effects in second moment closures by elliptic relaxation

    NASA Astrophysics Data System (ADS)

    Laurence, D.; Durbin, P.

    1994-12-01

    The elliptic relaxation model of Durbin (1993) for modeling near-wall turbulence using second moment closures (SMC) is compared to DNS data for a channel flow at Re_t = 395. The agreement for second order statistics and even the terms in their balance equation is quite satisfactory, confirming that very little viscous effects (via Kolmogoroff scales) need to be added to the high Reynolds versions of SMC for near-wall-turbulence. The essential near-wall feature is thus the kinematic blocking effect that a solid wall exerts on the turbulence through the fluctuating pressure, which is best modeled by an elliptic operator. Above the transition layer, the effect of the original elliptic operator decays rapidly, and it is suggested that the log-layer is better reproduced by adding a non-homogeneous reduction of the return to isotropy, the gradient of the turbulent length scale being used as a measure of the inhomogeneity of the log-layer. The elliptic operator was quite easily applied to the non-linear Craft & Launder pressure-strain model yielding an improved distinction between the spanwise and wall normal stresses, although at higher Reynolds number (Re) and away from the wall, the streamwise component is severely underpredicted, as well as the transition in the mean velocity from the log to the wake profiles. In this area a significant change of behavior was observed in the DNS pressure-strain term, entirely ignored in the models.

  11. Partitioning of Aromatic Constituents into Water from Jet Fuels.

    PubMed

    Tien, Chien-Jung; Shu, Youn-Yuen; Ciou, Shih-Rong; Chen, Colin S

    2015-08-01

    A comprehensive study of the most commonly used jet fuels (i.e., Jet A-1 and JP-8) was performed to properly assess potential contamination of the subsurface environment from a leaking underground storage tank at an airport. The objectives of this study were to evaluate the concentration ranges of the major components in the water-soluble fraction of jet fuels and to estimate the jet fuel-water partition coefficients (K_fw) for target compounds using partitioning experiments and a polyparameter linear free-energy relationship (PP-LFER) approach. The average molecular weight of Jet A-1 and JP-8 was estimated to be 161 and 147 g/mol, respectively. The density of Jet A-1 and JP-8 was measured to be 786 and 780 g/L, respectively. The distribution of nonpolar target compounds between the fuel and water phases was described using a two-phase liquid-liquid equilibrium model. Models were derived using Raoult's law convention for the activity coefficients and the liquid solubility. The observed inverse, log-log linear dependence of the K_fw values on the aqueous solubility was well predicted by assuming jet fuel to be an ideal solvent mixture. The experimental partition coefficients were generally well reproduced by the PP-LFER.
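
    The ideal-mixture (Raoult's law) estimate implied by this abstract reduces to a one-line calculation: for an ideal fuel, K_fw = (rho_fuel / MW_fuel) / S_L, which yields the reported inverse log-log linear dependence on liquid solubility with slope -1. A sketch under that assumption; the solute solubility below is a rough, toluene-like illustrative value.

    ```python
    def kfw_ideal(s_liquid_mol_per_l, fuel_density_g_per_l=786.0,
                  fuel_mw_g_per_mol=161.0):
        """Ideal-mixture estimate: Kfw = (rho_fuel / MW_fuel) / S_L, where
        S_L is the (subcooled) liquid aqueous solubility in mol/L. Density
        and average MW default to the Jet A-1 values reported above."""
        molar_conc_fuel = fuel_density_g_per_l / fuel_mw_g_per_mol  # mol/L fuel
        return molar_conc_fuel / s_liquid_mol_per_l

    # Toluene-like solute (S_L ~ 6e-3 mol/L, illustrative only):
    print(kfw_ideal(6e-3))  # ~ 8e2
    ```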

  12. Design and analysis of linear oscillatory single-phase permanent magnet generator for free-piston stirling engine systems

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Choi, Jang-Young; Lee, Kyu-Seok; Lee, Sung-Ho

    2017-05-01

    This study focuses on the design and analysis of a linear oscillatory single-phase permanent magnet generator for free-piston Stirling engine (FPSE) systems. To design a linear oscillatory generator (LOG) suitable for FPSEs, we conducted electromagnetic analysis of LOGs with varying design parameters. Detent force analysis was then conducted using an assisted PM; using the assisted PM provides the advantage of added mechanical support from the detent force. To improve efficiency, we analyzed the eddy-current loss characteristics with respect to the PM segmentation. Finally, experimental results were analyzed to confirm the predictions of the FEA.

  13. The stability of gadolinium-based contrast agents in human serum: A reanalysis of literature data and association with clinical outcomes.

    PubMed

    Prybylski, John P; Semelka, Richard C; Jay, Michael

    2017-05-01

    To reanalyze literature data on gadolinium (Gd)-based contrast agents (GBCAs) in plasma with a kinetic model of dissociation, providing a comprehensive assessment of equilibrium conditions for linear GBCAs. Data for the release of Gd from GBCAs in human serum were extracted from a previous report in the literature and fit to a kinetic dissociation/association model. The conditional stabilities (logK_cond) and percent intact over time were calculated using the model rate constants. The correlations between clinical outcomes and logK_cond or other stability indices were determined. The release curves for Omniscan® (gadodiamide), OptiMARK® (gadoversetamide), Magnevist®, and Multihance® were extracted, and all fit well to the kinetic model. The logK_cond values calculated from the rate constants were on the order of ~4-6 and were not significantly altered by excess ligand or phosphate. The stability constant based on the amount intact by the initial elimination half-life of GBCAs in plasma provided good correlation with outcomes observed in patients. Estimation of the kinetic constants for GBCA dissociation/association revealed that their stability in physiological fluid is much lower than previous approaches would suggest, which correlates well with deposition and pharmacokinetic observations of GBCAs in human patients.

  14. Can we predict uranium bioavailability based on soil parameters? Part 1: effect of soil parameters on soil solution uranium concentration.

    PubMed

    Vandenhove, H; Van Hees, M; Wouters, K; Wannijn, J

    2007-01-01

    The present study aims to quantify the influence of soil parameters on soil solution uranium concentration for ²³⁸U-spiked soils. Eighteen soils collected under pasture were selected such that they covered a wide range for those parameters hypothesised as being potentially important in determining U sorption. Maximum soil solution uranium concentrations were observed at alkaline pH, high inorganic carbon content, and low cation exchange capacity, organic matter content, clay content, amorphous Fe, and phosphate levels. Except for the significant correlations between the solid-liquid distribution coefficient (K_d, L kg⁻¹) and the organic matter content (R² = 0.70) and amorphous Fe content (R² = 0.63), there was no single soil parameter that significantly explained the soil solution uranium concentration (which varied 100-fold). Above pH 6, log(K_d) was linearly related to pH [log(K_d) = -1.18 pH + 10.8, R² = 0.65]. Multiple linear regression analysis did result in improved predictions of the soil solution uranium concentration, but the model was complex.
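
    The reported pH regression is directly usable as a rough screening estimate. A minimal sketch of the published fit; the validity range follows the abstract, and the function is illustrative rather than a general sorption model.

    ```python
    def uranium_kd(ph):
        """Published fit (valid above pH 6): log10(Kd) = -1.18*pH + 10.8,
        with Kd in L/kg. Illustrative use of the reported regression only."""
        if ph <= 6.0:
            raise ValueError("regression reported only for pH > 6")
        return 10.0 ** (-1.18 * ph + 10.8)

    print(uranium_kd(7.0))  # roughly 3.5e2 L/kg
    ```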

  15. Quantifying differences in the impact of variable chemistry on equilibrium uranium(VI) adsorption properties of aquifer sediments

    USGS Publications Warehouse

    Stoliker, Deborah L.; Kent, Douglas B.; Zachara, John M.

    2011-01-01

    Uranium adsorption-desorption on sediment samples collected from the Hanford 300-Area, Richland, WA varied extensively over a range of field-relevant chemical conditions, complicating assessment of possible differences in equilibrium adsorption properties. Adsorption equilibrium was achieved in 500-1000 h although dissolved uranium concentrations increased over thousands of hours owing to changes in aqueous chemical composition driven by sediment-water reactions. A nonelectrostatic surface complexation reaction, >SOH + UO₂²⁺ + 2CO₃²⁻ = >SOUO₂(CO₃HCO₃)²⁻, provided the best fit to experimental data for each sediment sample resulting in a range of conditional equilibrium constants (logKc) from 21.49 to 21.76. Potential differences in uranium adsorption properties could be assessed in plots based on the generalized mass-action expressions yielding linear trends displaced vertically by differences in logKc values. Using this approach, logKc values for seven sediment samples were not significantly different. However, a significant difference in adsorption properties between one sediment sample and the fines (<0.063 mm) of another could be demonstrated despite the fines requiring a different reaction stoichiometry. Estimates of logKc uncertainty were improved by capturing all data points within experimental errors. The mass-action expression plots demonstrate that applying models outside the range of conditions used in model calibration greatly increases potential errors.

  16. Linear and non-linear quantitative structure-activity relationship models on indole substitution patterns as inhibitors of HIV-1 attachment.

    PubMed

    Nirouei, Mahyar; Ghasemi, Ghasem; Abdolmaleki, Parviz; Tavakoli, Abdolreza; Shariati, Shahab

    2012-06-01

    The antiviral drugs that inhibit human immunodeficiency virus (HIV) entry to the target cells are already in different phases of clinical trials. They prevent viral entry and have a highly specific mechanism of action with a low toxicity profile. Few QSAR studies have been performed on this group of inhibitors. This study was performed to develop a quantitative structure-activity relationship (QSAR) model of the biological activity of indole glyoxamide derivatives as inhibitors of the interaction between HIV glycoprotein gp120 and host cell CD4 receptors. Forty different indole glyoxamide derivatives were selected as a sample set and geometrically optimized using Gaussian 98W. Different combinations of multiple linear regression (MLR), genetic algorithms (GA) and artificial neural networks (ANN) were then utilized to construct the QSAR models. These models were also used to select the most efficient subsets of descriptors in a cross-validation procedure for non-linear log(1/EC50) prediction. The results obtained using GA-ANN were compared with MLR-MLR and MLR-ANN models. A high predictive ability was observed for the MLR, MLR-ANN and GA-ANN models, with root mean square errors (RMSE) of 0.99, 0.91 and 0.67, respectively (N = 40). In summary, machine learning methods were highly effective in designing QSAR models compared with purely statistical methods.

  17. Fatigue Shifts and Scatters Heart Rate Variability in Elite Endurance Athletes

    PubMed Central

    Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire

    2013-01-01

    Purpose This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in 'fatigue' or in 'no-fatigue' state in 'real life' conditions. Methods 57 elite Nordic-skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was quoted with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms² and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). Results 172 trials were identified as in a 'fatigue' and 891 as in 'no-fatigue' state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HR_SU (+6.27±0.61 bpm), logTP_SU (−0.36±0.04), logLF_SU (−0.27±0.04), logHF_SU (−0.46±0.05), logLF/HF_SU (+0.19±0.03), HF_SU(nu) (−9.55±1.33). Differences were also significant (P<0.0001) in standing: HR_ST (+8.83±0.89), logTP_ST (−0.28±0.03), logLF_ST (−0.29±0.03), logHF_ST (−0.32±0.04). Also, intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTP_SU: 0.26 vs. 0.07, logLF_SU: 0.28 vs. 0.11, logHF_SU: 0.32 vs. 0.08, logTP_ST: 0.13 vs. 0.07, logLF_ST: 0.16 vs. 0.07, logHF_ST: 0.25 vs. 0.14). Conclusion HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied with larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from the no-fatigue state, possibly reflecting different fatigue-induced alterations of the HRV pattern. PMID:23951198

  18. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.

    PubMed

    Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L

    2011-10-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
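
    The comparison the authors describe is straightforward to reproduce in miniature. A sketch of one Monte Carlo draw under multiplicative lognormal error, the regime in which the log-log linear regression is the better-specified estimator; all parameter values are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(42)
    a_true, b_true = 2.0, 0.75
    x = rng.uniform(1.0, 100.0, 200)

    # Multiplicative, heteroscedastic, lognormal error.
    y = a_true * x ** b_true * rng.lognormal(0.0, 0.3, x.size)

    # Method 1: linear regression (LR) on log-transformed data.
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    print("LR :", np.exp(intercept), slope)

    # Method 2: nonlinear regression (NLR) on the original scale, which
    # instead assumes additive, homoscedastic, normal error.
    (a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x ** b, x, y,
                                  p0=(1.0, 1.0))
    print("NLR:", a_nlr, b_nlr)
    ```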

  19. Linear-log counting-rate meter uses transconductance characteristics of a silicon planar transistor

    NASA Technical Reports Server (NTRS)

    Eichholz, J. J.

    1969-01-01

    The counting-rate meter compresses a wide range of data values spanning several decades of current. A silicon planar transistor, operating in the zero collector-base voltage mode, is used as a feedback element in an operational amplifier to obtain the log response.

  20. Associations of Genetically Determined Continental Ancestry with CD4+ Count and Plasma HIV-1 RNA beyond Self-Reported Race and Ethnicity

    PubMed Central

    Brummel, Sean S; Singh, Kumud K; Maihofer, Adam X.; Farhad, Mona; Qin, Min; Fenton, Terry; Nievergelt, Caroline M.; Spector, Stephen A.

    2015-01-01

    Background Ancestry informative markers (AIMs) measure genetic admixtures within an individual beyond self-reported racial/ethnic (SRR) groups. Here, we used genetically determined ancestry (GDA) across SRR groups and examined associations between GDA and HIV-1 RNA and CD4+ counts in HIV-positive children in the US. Methods 41 AIMs, developed to distinguish 7 continental regions, were detected by real-time-PCR in 994 HIV-positive, antiretroviral naïve children. GDA was estimated comparing each individual's genotypes to allele frequencies found in a large set of reference individuals originating from global populations using STRUCTURE. The means of GDA were calculated for each category of SRR. Linear regression was used to model GDA on CD4+ count and log10 RNA, adjusting for SRR and age. Results Subjects were 61% Black, 25% Hispanic, 13% White and 1.3% Unknown. The mean age was 2.3 years (45% male), mean CD4+ count 981 cells/mm³, and mean log10 RNA 5.11. Marked heterogeneity was found for all SRR groups with high admixture for Hispanics. In adjusted linear regression models, subjects with 100% European ancestry were estimated to have 0.33 higher log10 RNA levels (95% CI: (0.03, 0.62), p=0.028) and 253 CD4+ cells/mm³ lower (95% CI: (−517, 11), p = 0.06) in CD4+ count, compared to subjects with 100% African ancestry. Conclusion Marked continental admixture was found among this cohort of HIV-infected children from the US. GDA contributed to differences in RNA and CD4+ counts beyond SRR, and should be considered when outcomes associated with HIV infection are likely to have a genetic component. PMID:26536313

  1. Associations of Genetically Determined Continental Ancestry With CD4+ Count and Plasma HIV-1 RNA Beyond Self-Reported Race and Ethnicity.

    PubMed

    Brummel, Sean S; Singh, Kumud K; Maihofer, Adam X; Farhad, Mona; Qin, Min; Fenton, Terry; Nievergelt, Caroline M; Spector, Stephen A

    2016-04-15

    Ancestry informative markers (AIMs) measure genetic admixtures within an individual beyond self-reported racial/ethnic (SRR) groups. Here, we used genetically determined ancestry (GDA) across SRR groups and examined associations between GDA and HIV-1 RNA and CD4 counts in HIV-positive children in the United States. Forty-one AIMs, developed to distinguish 7 continental regions, were detected by real-time PCR in 994 HIV-positive, antiretroviral naive children. GDA was estimated comparing each individual's genotypes to allele frequencies found in a large set of reference individuals originating from global populations using STRUCTURE. The means of GDA were calculated for each category of SRR. Linear regression was used to model GDA on CD4 count and log10 RNA, adjusting for SRR and age. Subjects were 61% black, 25% Hispanic, 13% white, and 1.3% Unknown. The mean age was 2.3 years (45% male), mean CD4 count of 981 cells per cubic millimeter, and mean log10 RNA of 5.11. Marked heterogeneity was found for all SRR groups with high admixture for Hispanics. In adjusted linear regression models, subjects with 100% European ancestry were estimated to have 0.33 higher log10 RNA levels (95% CI: 0.03 to 0.62, P = 0.028) and 253 CD4 cells per cubic millimeter lower (95% CI: -517 to 11, P = 0.06) in CD4 count, compared to subjects with 100% African ancestry. Marked continental admixture was found among this cohort of HIV-infected children from the United States. GDA contributed to differences in RNA and CD4 counts beyond SRR and should be considered when outcomes associated with HIV infection are likely to have a genetic component.

  2. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  3. Application of high pressure processing to reduce verotoxigenic E. coli in two types of dry-fermented sausage.

    PubMed

    Omer, M K; Alvseike, O; Holck, A; Axelsson, L; Prieto, M; Skjerve, E; Heir, E

    2010-12-01

    The effect of high pressure processing (HPP) on the survival of verotoxigenic Escherichia coli (VTEC) in two Norwegian-type dry-fermented sausages was studied. Two different recipes for each sausage type were produced. The sausage batter was inoculated with 6.8 log10 CFU/g of VTEC O103:H25. After fermentation, drying and maturation, slices of finished sausages were vacuum packed and subjected to two HPP treatment regimes: one group was treated at 600 MPa for 10 min and another at three cycles of 600 MPa for 200 s per cycle. A generalized linear model split by recipe type showed that these two HPP treatments on the standard-recipe sausages reduced E. coli by 2.9 log10 CFU/g and 3.3 log10 CFU/g, respectively. In the recipe with higher levels of dextrose, sodium chloride and sodium nitrite, the E. coli reduction was 2.7 log10 CFU/g for both treatments. The data show that HPP has the potential to make the sausages safer and that the effect depends somewhat on the recipe.

  4. Relationship between bone turnover markers and the heel stiffness index measured by quantitative ultrasound in middle-aged and elderly Japanese men

    PubMed Central

    Nishimura, Takayuki; Arima, Kazuhiko; Abe, Yasuyo; Kanagae, Mitsuo; Mizukami, Satoshi; Okabe, Takuhiro; Tomita, Yoshihito; Goto, Hisashi; Horiguchi, Itsuko; Aoyagi, Kiyoshi

    2018-01-01

    Abstract The aim of the present study was to investigate the age-related patterns and the relationships between serum levels of tartrate-resistant acid phosphatase-5b (TRACP-5b) or bone-specific alkaline phosphatase (BAP), and the heel stiffness index measured by quantitative ultrasound (QUS) in 429 Japanese men, with special emphasis on 2 age groups (40–59 years and 60 years or over). The heel stiffness index (bone mass) was measured by QUS. Serum samples were collected, and TRACP-5b and BAP levels were measured. The stiffness index was significantly decreased with age. Log (TRACP-5b) was significantly increased with age, but Log (BAP) was stable. Generalized linear models showed that higher levels of Log (TRACP-5b) and Log (BAP) were correlated with a lower stiffness index after adjusting for covariates in men aged 60 years or over, but not in men aged 40 to 59 years. In conclusion, higher rates of bone turnover markers were associated with a lower stiffness index only in elderly men. These results may indicate a different mechanism of low bone mass among different age groups of men. PMID:29465590

  5. Papain hydrolysis of X-phenyl-N-methanesulfonyl glycinates: a quantitative structure-activity relationship and molecular graphics analysis.

    PubMed

    Carotti, A; Smith, R N; Wong, S; Hansch, C; Blaney, J M; Langridge, R

    1984-02-15

    The hydrolysis of 32 X-phenyl-N-methanesulfonyl glycinates by papain was investigated. It was found that the variation in the Michaelis constants could be rationalized by the following correlation equation: log 1/Km = 0.61 π'3 + 0.46 MR4 + 0.55 σ + 2.00, with a correlation coefficient of 0.945. In this expression, π'3 is the hydrophobic constant for the more lipophilic of the two possible meta substituents, MR4 is the molar refractivity of 4-substituents, and σ is the Hammett constant summed over all substituents. Using this equation, we designed, synthesized, and successfully predicted Km for a new congener intended to maximize binding (1/Km). The interactions involved in enzyme-substrate binding, as characterized by the correlation equation, are interpreted using a computer-constructed color three-dimensional-graphics molecular model of the enzyme active site. The nonenzymatic hydrolysis (both acid and basic) of phenyl hippurates yields rate constants that are well correlated by Hammett equations; however, log k for both acid and alkaline hydrolysis is not linearly related to log 1/Km or log kcat/Km.
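
    The reported correlation equation can be applied directly once the substituent constants are known. A minimal sketch; the substituent values below are hypothetical, chosen only to show the calculation.

    ```python
    def log_inv_km(pi3_prime, mr4, sigma):
        """Correlation equation reported above:
        log(1/Km) = 0.61*pi'3 + 0.46*MR4 + 0.55*sigma + 2.00."""
        return 0.61 * pi3_prime + 0.46 * mr4 + 0.55 * sigma + 2.00

    # Hypothetical substituent constants for an illustrative congener:
    print(log_inv_km(pi3_prime=0.71, mr4=0.79, sigma=0.10))
    ```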

  6. A model for size- and rotation-invariant pattern processing in the visual system.

    PubMed

    Reitboeck, H J; Altmann, J

    1984-01-01

    The mapping of retinal space onto the striate cortex of some mammals can be approximated by a log-polar function. It has been proposed that this mapping is of functional importance for scale- and rotation-invariant pattern recognition in the visual system. An exact log-polar transform converts centered scaling and rotation into translations. A subsequent translation-invariant transform, such as the absolute value of the Fourier transform, thus generates overall size- and rotation-invariance. In our model, the translation-invariance is realized via the R-transform. This transform can be executed by simple neural networks, and it does not require the complex computations of the Fourier transform used in Mellin-transform size-invariance models. The logarithmic space distortion and differentiation in the first processing stage of the model is realized via "Mexican hat" filters whose diameter increases linearly with eccentricity, similar to the characteristics of the receptive fields of retinal ganglion cells. Except for some special cases, the model can explain object recognition independent of size, orientation and position. Some general problems of Mellin-type size-invariance models, which also apply to our model, are discussed.
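
    The key property, that centered scaling and rotation become translations, is easy to demonstrate with a toy log-polar resampler. A minimal nearest-neighbour sketch (not the authors' filter-based implementation); grid sizes and the test pattern are arbitrary.

    ```python
    import numpy as np

    def log_polar_transform(image, n_rho=64, n_theta=64):
        """Resample a square image onto a log-polar grid centred on the image
        centre. Scaling about the centre becomes a shift along the rho axis,
        rotation a shift along the theta axis."""
        h, w = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        rho = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_rho))
        theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        r, t = np.meshgrid(rho, theta, indexing="ij")
        ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, w - 1)
        return image[ys, xs]

    img = np.zeros((65, 65))
    img[20:45, 20:45] = 1.0                   # toy pattern
    print(log_polar_transform(img).shape)     # (64, 64)
    ```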

  7. Short communication: Genetic variation of saturated fatty acids in Holsteins in the Walloon region of Belgium.

    PubMed

    Arnould, V M-R; Hammami, H; Soyeurt, H; Gengler, N

    2010-09-01

    Random regression test-day models using Legendre polynomials are commonly used for the estimation of genetic parameters and genetic evaluation for test-day milk production traits. However, some researchers have reported that these models present undesirable properties, such as the overestimation of variances at the edges of lactation. Describing the genetic variation of saturated fatty acids expressed in milk fat might therefore require testing different models. Three different functions were used and compared to take the lactation curve into account: (1) Legendre polynomials with the same order as currently applied in the genetic model for production traits; (2) linear splines with 10 knots; and (3) linear splines with the same 10 knots reduced to 3 parameters. The criteria used were Akaike's information criterion, the Bayesian information criterion, percentage square biases, and the log-likelihood function. These criteria identified the Legendre polynomial model and the linear-spline model with 10 knots reduced to 3 parameters as the most useful. Reducing more complex models using eigenvalues seems appealing because the resulting models are less time demanding and can reduce convergence difficulties, since convergence properties also seemed to be improved. Finally, the results showed that the reduced spline model was very similar to the Legendre polynomial model.

  8. Clinical performance of the LCx HCV RNA quantitative assay.

    PubMed

    Bertuzis, Rasa; Hardie, Alison; Hottentraeger, Barbara; Izopet, Jacques; Jilg, Wolfgang; Kaesdorf, Barbara; Leckie, Gregor; Leete, Jean; Perrin, Luc; Qiu, Chunfu; Ran, Iris; Schneider, George; Simmonds, Peter; Robinson, John

    2005-02-01

    This study was conducted to assess the performance of the Abbott Laboratories LCx HCV RNA Quantitative Assay (LCx assay) in the clinical setting. Four clinical laboratories measured LCx assay precision, specificity, and linearity. In addition, a method comparison was conducted between the LCx assay and the Roche HCV Amplicor Monitor, version 2.0 (Roche Monitor 2.0) and the Bayer VERSANT HCV RNA 3.0 Assay (Bayer bDNA 3.0) quantitative assays. For precision, the observed LCx assay intra-assay standard deviation (S.D.) was 0.060-0.117 log IU/ml, the inter-assay S.D. was 0.083-0.133 log IU/ml, the inter-lot S.D. was 0.105-0.177 log IU/ml, the inter-site S.D. was 0.099-0.190 log IU/ml, and the total S.D. was 0.113-0.190 log IU/ml. The specificity of the LCx assay was 99.4% (542/545; 95% CI, 98.4-99.9%). For linearity, the mean pooled LCx assay results were linear (r=0.994) over the range of the panel (2.54-5.15 log IU/ml). A method comparison demonstrated a correlation coefficient of 0.881 between the LCx assay and Roche Monitor 2.0, 0.872 between the LCx assay and Bayer bDNA 3.0, and 0.870 between Roche Monitor 2.0 and Bayer bDNA 3.0. The mean LCx assay result was 0.04 log IU/ml (95% CI, -0.08, 0.01) lower than the mean Roche Monitor 2.0 result, but 0.57 log IU/ml (95% CI, 0.53, 0.61) higher than the mean Bayer bDNA 3.0 result. The mean Roche Monitor 2.0 result was 0.60 log IU/ml (95% CI, 0.56, 0.65) higher than the mean Bayer bDNA 3.0 result. The LCx assay quantitated genotypes 1-4 with statistical equivalency. The vast majority (98.9%, 278/281) of paired LCx assay-Roche Monitor 2.0 specimen results were within 1 log IU/ml. Similarly, 86.6% (240/277) of paired LCx assay and Bayer bDNA 3.0 specimen results were within 1 log, as were 85.6% (237/277) of paired Roche Monitor 2.0 and Bayer specimen results. These data demonstrate that the LCx assay may be used for quantitation of HCV RNA in HCV-infected individuals.

  9. Performance evaluation of the QIAGEN EZ1 DSP Virus Kit with Abbott RealTime HIV-1, HBV and HCV assays.

    PubMed

    Schneider, George J; Kuper, Kevin G; Abravaya, Klara; Mullen, Carolyn R; Schmidt, Marion; Bunse-Grassmann, Astrid; Sprenger-Haussels, Markus

    2009-04-01

    Automated sample preparation systems must meet the demands of routine diagnostics laboratories with regard to performance characteristics and compatibility with downstream assays. In this study, the performance of the QIAGEN EZ1 DSP Virus Kit on the BioRobot EZ1 DSP was evaluated in combination with the Abbott RealTime HIV-1, HCV, and HBV assays, followed by thermal cycling and detection on the Abbott m2000rt platform. The following performance characteristics were evaluated: linear range and precision, sensitivity, cross-contamination, effects of interfering substances and correlation. Linearity was observed within the tested ranges (for HIV-1: 2.0-6.0 log copies/ml, HCV: 1.3-6.9 log IU/ml, HBV: 1.6-7.6 log copies/ml). Excellent precision was obtained (inter-assay standard deviation for HIV-1: 0.06-0.17 log copies/ml (>2.17 log copies/ml), HCV: 0.05-0.11 log IU/ml (>2.09 log IU/ml), HBV: 0.03-0.07 log copies/ml (>2.55 log copies/ml)), with good sensitivity (95% hit rates for HIV-1: 50 copies/ml, HCV: 12.5 IU/ml, HBV: 10 IU/ml). No cross-contamination was observed, as well as no negative impact of elevated levels of various interfering substances. In addition, HCV and HBV viral load measurements after BioRobot EZ1 DSP extraction correlated well with those obtained after Abbott m2000sp extraction. This evaluation demonstrates that the QIAGEN EZ1 DSP Virus Kit provides an attractive solution for fully automated, low throughput sample preparation for use with the Abbott RealTime HIV-1, HCV, and HBV assays.

  10. GEMAS: prediction of solid-solution phase partitioning coefficients (Kd) for oxoanions and boric acid in soils using mid-infrared diffuse reflectance spectroscopy.

    PubMed

    Janik, Leslie J; Forrester, Sean T; Soriano-Disla, José M; Kirby, Jason K; McLaughlin, Michael J; Reimann, Clemens

    2015-02-01

    The authors' aim was to develop rapid and inexpensive regression models for the prediction of partitioning coefficients (Kd), defined as the ratio of the total or surface-bound metal/metalloid concentration of the solid phase to the total concentration in the solution phase. Values of Kd were measured for boric acid (B[OH]3(0)) and selected added soluble oxoanions: molybdate (MoO4(2-)), antimonate (Sb[OH]6(-)), selenate (SeO4(2-)), tellurate (TeO4(2-)) and vanadate (VO4(3-)). Models were developed using approximately 500 spectrally representative soils of the Geochemical Mapping of Agricultural Soils of Europe (GEMAS) program. These calibration soils represented the major properties of the entire 4813 soils of the GEMAS project. Multiple linear regression (MLR) from soil properties, partial least-squares regression (PLSR) using mid-infrared diffuse reflectance Fourier-transformed (DRIFT) spectra, and models using DRIFT spectra plus analytical pH values (DRIFT + pH) were compared for predicting log(Kd + 1) values. Apart from selenate (R² = 0.43), the DRIFT + pH calibrations resulted in marginally better models for predicting log(Kd + 1) values (R² = 0.62-0.79) compared with those from PLSR-DRIFT (R² = 0.61-0.72) and MLR (R² = 0.54-0.79). The DRIFT + pH calibrations were applied to the prediction of log(Kd + 1) values in the remaining 4313 soils. An example map of predicted log(Kd + 1) values for added soluble MoO4(2-) in soils across Europe is presented. The DRIFT + pH PLSR models provided a rapid and inexpensive tool to assess the risk of mobility and potential availability of boric acid and selected oxoanions in European soils. For these models to be used in the prediction of log(Kd + 1) values in soils globally, additional research will be needed to determine whether soil variability is accounted for in the calibration.

  11. A model for the flux-r.m.s. correlation in blazar variability or the minijets-in-a-jet statistical model

    NASA Astrophysics Data System (ADS)

    Biteau, J.; Giebels, B.

    2012-12-01

    Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that the variability originates in the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability in multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
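
    The kinematic model is easy to explore numerically. A minimal sketch, assuming isotropically oriented zones and the conventional delta^4 boosting index for a discrete moving blob; the Lorentz factor and zone count are arbitrary choices, not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_zones, n_samples = 100, 10000
    gamma = 10.0                              # bulk Lorentz factor per zone
    beta = np.sqrt(1.0 - 1.0 / gamma ** 2)

    # Isotropic orientations: cos(theta) uniform in [-1, 1].
    cos_t = rng.uniform(-1.0, 1.0, size=(n_samples, n_zones))
    delta = 1.0 / (gamma * (1.0 - beta * cos_t))   # Doppler factor per zone
    flux = (delta ** 4).sum(axis=1)                # boosted flux, summed

    # A near-zero skewness of log(flux) is consistent with a log-normal-like
    # total flux, despite the additive sum over zones.
    log_flux = np.log(flux)
    skew = ((log_flux - log_flux.mean()) ** 3).mean() / log_flux.std() ** 3
    print("log-flux skewness:", skew)
    ```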

  12. Correlations between chromatographic parameters and bioactivity predictors of potential herbicides.

    PubMed

    Janicka, Małgorzata

    2014-08-01

    Different liquid chromatography techniques, including reversed-phase liquid chromatography on Purosphere RP-18e, IAM.PC.DD2 and Cosmosil Cholester columns and micellar liquid chromatography with a Purosphere RP-8e column using buffered sodium dodecyl sulfate-acetonitrile as the mobile phase, were applied to study the lipophilic properties of 15 newly synthesized phenoxyacetic and carbamic acid derivatives, which are potential herbicides. Chromatographic lipophilicity descriptors were used, including log k parameters extrapolated to pure water (log kw and log km) and measured log k values. Partitioning lipophilicity descriptors, i.e., log P coefficients in an n-octanol-water system, were computed from the molecular structures of the tested compounds. Bioactivity descriptors, including partition coefficients in a water-plant cuticle system and a water-human serum albumin system and coefficients for human skin partition and permeation, were calculated in silico by ACD/ADME software using the linear solvation energy relationship of Abraham. Principal component analysis was applied to describe similarities between the various chromatographic and partitioning lipophilicities. Highly significant, predictive linear relationships were found between chromatographic parameters and bioactivity descriptors.

  13. Linear solvation energy relationship for the adsorption of synthetic organic compounds on single-walled carbon nanotubes in water.

    PubMed

    Ding, H; Chen, C; Zhang, X

    2016-01-01

    The linear solvation energy relationship (LSER) was applied to predict the adsorption coefficient (K) of synthetic organic compounds (SOCs) on single-walled carbon nanotubes (SWCNTs). A total of 40 log K values were used to develop and validate the LSER model. The adsorption data for 34 SOCs were collected from 13 published articles and the other six were obtained in our experiment. The optimal model, composed of four descriptors, was developed by a stepwise multiple linear regression (MLR) method. The adjusted r² (r²_adj) and root mean square error (RMSE) were 0.84 and 0.49, respectively, indicating good fitness. The leave-one-out cross-validation Q² was 0.79, suggesting the robustness of the model was satisfactory. The external Q² and RMSE (RMSE_ext) were 0.72 and 0.50, respectively, showing the model's strong predictive ability. The hydrogen bond donating interaction (bB) and the cavity formation and dispersion interactions (vV) stood out as the two most influential factors controlling the adsorption of SOCs onto SWCNTs. The equilibrium concentration would affect the fitness and predictive ability of the model, while the coefficients varied only slightly.
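
    An LSER calibration of this kind is an ordinary least-squares problem in the Abraham descriptors. A minimal sketch fitting the full form log K = c + eE + sS + aA + bB + vV (the paper's final model kept four descriptors after stepwise selection); both the descriptor rows and the log K targets below are illustrative placeholders, not the paper's data set.

    ```python
    import numpy as np

    # Approximate Abraham descriptors [E, S, A, B, V] for a few example
    # solutes, paired with invented log K targets (illustration only).
    X = np.array([
        [0.61, 0.52, 0.00, 0.14, 0.716],   # benzene-like
        [0.60, 0.52, 0.00, 0.14, 0.857],   # toluene-like
        [0.81, 0.89, 0.60, 0.30, 0.775],   # phenol-like
        [0.96, 0.96, 0.26, 0.41, 0.816],   # aniline-like
        [0.87, 1.11, 0.00, 0.28, 0.891],   # nitrobenzene-like
        [1.34, 0.92, 0.00, 0.20, 1.085],   # naphthalene-like
        [2.06, 1.29, 0.00, 0.29, 1.454],   # phenanthrene-like
    ])
    log_k = np.array([3.1, 3.5, 3.4, 3.3, 3.9, 4.5, 5.3])

    # Least-squares fit of log K = c + e*E + s*S + a*A + b*B + v*V.
    A = np.column_stack([np.ones(len(log_k)), X])
    coef, *_ = np.linalg.lstsq(A, log_k, rcond=None)
    print(dict(zip(["c", "e", "s", "a", "b", "v"], coef.round(3))))
    ```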

  14. A kinetic energy model of two-vehicle crash injury severity.

    PubMed

    Sobhani, Amir; Young, William; Logan, David; Bahrololoom, Sareh

    2011-05-01

    An important part of any model of vehicle crashes is a procedure to estimate crash injury severity. After reviewing existing models of crash severity, this paper outlines the development of a modelling approach aimed at measuring the injury severity of people in two-vehicle road crashes. This model can be incorporated into a discrete event traffic simulation model, using the simulation outputs as its input, and can then serve as an integral part of a simulation model estimating the crash potential of components of the traffic system. The model is developed using Newtonian Mechanics and Generalised Linear Regression. The factors contributing to the speed change (ΔV_s) of a subject vehicle are identified using the law of conservation of momentum. A Log-Gamma regression model is fitted to estimate the speed change (ΔV_s) of the subject vehicle from the identified crash characteristics. The kinetic energy applied to the subject vehicle is calculated by the model, which in turn uses a Log-Gamma regression model to estimate the Injury Severity Score of the crash from the calculated kinetic energy, crash impact type, presence of airbag and/or seat belt, and occupant age.
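
    A Gamma GLM with a log link is the standard way to fit a regression of this type. A minimal sketch on simulated crash records; the predictors, the momentum-inspired mean function, and all numbers are hypothetical stand-ins, not the paper's specification.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical crash records: closing speed (km/h) and mass ratio of the
    # two vehicles; the subject vehicle's speed change dV is the outcome.
    rng = np.random.default_rng(3)
    closing_speed = rng.uniform(20.0, 120.0, 300)
    mass_ratio = rng.uniform(0.5, 2.0, 300)

    # Crude momentum-conservation-style mean: dV scales with closing speed
    # times the other vehicle's mass fraction (illustrative only).
    mu = 0.4 * closing_speed * mass_ratio / (1.0 + mass_ratio)
    dv = rng.gamma(shape=8.0, scale=mu / 8.0)      # Gamma-distributed dV

    X = sm.add_constant(np.column_stack([np.log(closing_speed),
                                         np.log(mass_ratio)]))
    fit = sm.GLM(dv, X,
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    print(fit.summary().tables[1])
    ```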

  15. Spatially resolved regression analysis of pre-treatment FDG, FLT and Cu-ATSM PET from post-treatment FDG PET: an exploratory study

    PubMed Central

    Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert

    2012-01-01

    Purpose To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R². Hypothesis testing of coefficients over the patient population was performed. Results Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost ~ 0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost ~ FDGpre^0.93, p<0.001). Univariate mixture model fits of FDGpre improved R² from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent. PMID:22682748

  16. Sensory Information Processing and Symbolic Computation

    DTIC Science & Technology

    1973-12-31

    [Only fragments of this report survive extraction.] A problem that plagues all image deblurring methods when working with high signal-to-noise ratios is a ringing or ghost-image phenomenon which surrounds high... [truncated]. Recovered figure captions: Figure 11, "The Impulse Response of an All-Pass Random Phase Filter"; Figure 12(a), "Unsmoothed Log Spectra of the Sentence 'The pipe began to...'". The report covers automatic deblurring of images, linear predictive coding of speech, and the refinement and application of mathematical models of human vision.

  17. Three-Dimensional City Determinants of the Urban Heat Island: A Statistical Approach

    NASA Astrophysics Data System (ADS)

    Chun, Bum Seok

    There is no doubt that the Urban Heat Island (UHI) is a mounting problem in built-up environments, due to the energy retention by the surface materials of dense buildings, leading to increased temperatures, air pollution, and energy consumption. Much of the earlier research on the UHI has used two-dimensional (2-D) information, such as land uses and the distribution of vegetation. In the case of homogeneous land uses, it is possible to predict surface temperatures with reasonable accuracy from 2-D information. However, three-dimensional (3-D) information is necessary to analyze more complex sites, including dense building clusters. Recent research on the UHI has started to consider multi-dimensional models. The purpose of this research is to explore the urban determinants of the UHI, using 2-D/3-D urban information with statistical modeling. The research includes the following stages: (a) estimating urban temperature using satellite images, (b) developing a 3-D city model from LiDAR data, (c) generating geometric parameters from 2-D/3-D geospatial information, and (d) conducting different statistical analyses: OLS and spatial regressions. The research area is part of the City of Columbus, Ohio. To effectively and systematically analyze the UHI, hierarchical grid scales (480 m, 240 m, 120 m, 60 m, and 30 m) are proposed, together with linear and log-linear regression models. The non-linear OLS models with Log(AST) as the dependent variable have the highest R² among all the OLS-estimated models. However, both SAR and GSM models are estimated for the 480 m, 240 m, 120 m, and 60 m grids to reduce their spatial dependency. Most GSM models have R² values higher than 0.9, except for the 240 m grid. Overall, the urban characteristics having high impacts in all grids are embodied in solar radiation, 3-D open space, greenery, and water streams. These results demonstrate that it is possible to mitigate the UHI and provide guidelines for policies aiming to reduce it.

  18. Discrete dynamical system modelling for gene regulatory networks of 5-hydroxymethylfurfural tolerance for ethanologenic yeast.

    PubMed

    Song, M; Ouyang, Z; Liu, Z L

    2009-05-01

    Composed of linear difference equations, a discrete dynamical system (DDS) model was designed to reconstruct transcriptional regulations in gene regulatory networks (GRNs) for the ethanologenic yeast Saccharomyces cerevisiae in response to 5-hydroxymethylfurfural (HMF), a bioethanol conversion inhibitor. The modelling aims at identifying a system of linear difference equations that represents temporal interactions among significantly expressed genes. Power stability is imposed on the system model under the normal condition, in the absence of the inhibitor. Non-uniform sampling, typical of a time-course experimental design, is addressed by interpolation in the log-time domain. A statistically significant DDS model of the yeast GRN, derived from time-course gene expression measurements under HMF exposure, revealed several verified transcriptional regulation events. These events implicate Yap1 and Pdr3, transcription factors consistently known for their regulatory roles from other studies or postulated by independent sequence motif analysis, suggesting their involvement in yeast tolerance and detoxification of the inhibitor.
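
    Two ingredients of the approach, power stability of a linear difference system and log-time interpolation of non-uniform samples, are easy to illustrate. A minimal sketch; the coefficient matrix and time points are hypothetical, not from the study.

    ```python
    import numpy as np

    # Linear difference-equation GRN model x[t+1] = A x[t]; power stability
    # holds when the spectral radius of A is below 1 (the normal,
    # inhibitor-free condition in the abstract).
    A = np.array([[0.50, 0.10, 0.00],
                  [0.05, 0.60, 0.20],
                  [0.00, 0.15, 0.40]])   # hypothetical gene-gene coefficients

    spectral_radius = max(abs(np.linalg.eigvals(A)))
    print("power stable:", spectral_radius < 1.0)

    # Non-uniform sampling handled by interpolating expression in log-time.
    t_obs = np.array([5.0, 10.0, 30.0, 60.0])     # minutes, non-uniform
    x_obs = np.array([1.0, 1.4, 2.1, 2.3])        # one gene's expression
    t_grid = np.exp(np.linspace(np.log(5.0), np.log(60.0), 8))
    print(np.interp(np.log(t_grid), np.log(t_obs), x_obs))
    ```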

  19. Bottomland hardwood forest recovery following tornado disturbance and salvage logging

    Treesearch

    John L. Nelson; John W. Groninger; Loretta L. Battaglia; Charles M. Ruffner

    2008-01-01

    Catastrophic wind events, including tornadoes, hurricanes, and linear winds, are significant disturbances in temperate forested wetlands. Information is lacking on how post-disturbance salvage logging may impact short- and long-term objectives in conservation areas where natural stands are typically managed passively. Woody regeneration and herbaceous cover were assessed...

  20. Comments Regarding the Binary Power Law for Heterogeneity of Disease Incidence

    USDA-ARS?s Scientific Manuscript database

    The binary power law (BPL) has been successfully used to characterize heterogeneity (overdispersion, or small-scale aggregation) of disease incidence for many plant pathosystems. With the BPL, the log of the observed variance is a linear function of the log of the theoretical variance for a binomial...
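
    The BPL fit is a simple log-log regression across data sets. A minimal sketch on simulated beta-binomial (aggregated) incidence data; the counts and parameters are invented for illustration, and a fitted slope above 1 signals aggregation.

    ```python
    import numpy as np

    def fit_binary_power_law(datasets, n):
        """Each dataset is an array of diseased-plant counts out of n plants
        per sampling unit. Regress log(observed variance of incidence) on
        log(binomial variance p*(1-p)/n) across datasets; returns (a, b)
        from log(V_obs) = log(a) + b*log(V_bin)."""
        v_obs, v_bin = [], []
        for counts in datasets:
            p = np.asarray(counts, dtype=float) / n
            v_obs.append(np.var(p, ddof=1))
            v_bin.append(p.mean() * (1.0 - p.mean()) / n)
        b, log_a = np.polyfit(np.log(v_bin), np.log(v_obs), 1)
        return np.exp(log_a), b

    # Simulated aggregated incidence: beta-binomial counts inflate variance.
    rng = np.random.default_rng(11)
    sims = [rng.binomial(10, rng.beta(2.0, 2.0 / q - 2.0, size=50))
            for q in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6)]
    print(fit_binary_power_law(sims, n=10))  # expect slope b > 1
    ```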

  1. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  2. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution for that item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346

  3. Quantitative structure-property relationship (correlation analysis) of phosphonic acid-based chelates in design of MRI contrast agent.

    PubMed

    Tiwari, Anjani K; Ojha, Himanshu; Kaul, Ankur; Dutta, Anupama; Srivastava, Pooja; Shukla, Gauri; Srivastava, Rakesh; Mishra, Anil K

    2009-07-01

    Nuclear magnetic resonance imaging is a very useful tool in modern medical diagnostics, especially when gadolinium(III)-based contrast agents are administered to the patient with the aim of increasing the image contrast between normal and diseased tissues. Using soft modelling techniques such as quantitative structure-activity relationship/quantitative structure-property relationship analysis after a suitable description of their molecular structure, we have studied a series of phosphonic acids for designing new MRI contrast agents. Quantitative structure-property relationship studies with multiple linear regression analysis were applied to find correlations between different calculated molecular descriptors of the phosphonic acid-based chelating agents and their stability constants. The final quantitative structure-property relationship models were: Model 1 (phosphonic acid series), log K(ML) = 5.00243(±0.7102) − 0.0263(±0.540)·MR, with n = 12, |r| = 0.942, s = 0.183, F = 99.165; Model 2 (phosphonic acid series), log K(ML) = 5.06280(±0.3418) − 0.0252(±0.198)·MR, with n = 12, |r| = 0.956, s = 0.186, F = 99.256.
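
    For illustration only, Model 1 above can be evaluated directly once a molar refractivity is given; the MR value used here is an assumed input, not one from the study.

      # QSPR Model 1 from the abstract: log K_ML = 5.00243 - 0.0263 * MR
      def log_kml(mr):
          return 5.00243 - 0.0263 * mr

      print(f"predicted log K_ML at MR = 90 (hypothetical): {log_kml(90):.2f}")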

  4. Bioconcentration model for non-ionic, polar, and ionizable organic compounds in amphipod.

    PubMed

    Chen, Ciara Chun; Kuo, Dave Ta Fu

    2018-05-01

    The present study presents a bioconcentration model for non-ionic, polar, and ionizable organic compounds in amphipod based on first-order kinetics. The uptake rate constant k1 is modeled as log k1 = 0.81·log KOW + 0.15 (root mean square error [RMSE] = 0.52). The biotransformation rate constant kM is estimated using an existing polyparameter linear free energy relationship model. Respiratory elimination k2 is calculated as the modeled k1 over the theoretical biota-water partition coefficient Kbiow, considering the contributions of lipid, protein, carbohydrate, and water. With negligible contributions of growth and egestion over a typical amphipod bioconcentration experiment, the bioconcentration factor (BCF) is modeled as BCF = k1/(kM + k2) (RMSE = 0.68). The proposed model performs well for non-ionic organic compounds (log KOW range = 3.3-7.62) within a 1 log-unit error margin. Approximately 12% of the BCFs are underpredicted for polar and ionizable compounds. However, >50% of the estimated k2 values are found to exceed the total depuration rate constants. Analyses suggest that these excessive k2 values and underpredicted BCFs reflect underestimation of Kbiow, which may be improved by incorporating the exoskeleton as a relevant partitioning component and refining the membrane-water partitioning model. The immediate needs to build up high-quality experimental kM values, explore the sorptive role of the exoskeleton, and investigate the prevalence of k2 overestimation in other bioconcentration models are also identified. The resulting BCF model can support, within its limitations, the ecotoxicological and risk assessment of emerging polar and ionizable organic contaminants in aquatic environments and advance the science of invertebrate bioaccumulation. Environ Toxicol Chem 2018;37:1378-1386. © 2018 SETAC.
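
    A minimal sketch of the first-order rate structure described above, BCF = k1/(kM + k2) with k2 = k1/Kbiow. All numerical inputs below, including the example compound's properties, are hypothetical illustrations rather than values from the study.

      import numpy as np

      def bcf(log_kow, log_k_m, log_k_biow):
          """Sketch of the first-order BCF model: BCF = k1 / (kM + k2)."""
          k1 = 10 ** (0.81 * log_kow + 0.15)   # uptake rate constant (as quoted above)
          k2 = k1 / 10 ** log_k_biow           # respiratory elimination
          k_m = 10 ** log_k_m                  # biotransformation rate constant
          return k1 / (k_m + k2)

      # Hypothetical compound: log KOW = 4.5, log kM = -1.0, log Kbiow = 3.8
      print(f"log BCF = {np.log10(bcf(4.5, -1.0, 3.8)):.2f}")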

  5. No chiral truncation of quantum log gravity?

    NASA Astrophysics Data System (ADS)

    Andrade, Tomás; Marolf, Donald

    2010-03-01

    At the classical level, chiral gravity may be constructed as a consistent truncation of a larger theory called log gravity by requiring that left-moving charges vanish. In turn, log gravity is the limit of topologically massive gravity (TMG) at a special value of the coupling (the chiral point). We study the situation at the level of linearized quantum fields, focussing on a unitary quantization. While the TMG Hilbert space is continuous at the chiral point, the left-moving Virasoro generators become ill-defined and cannot be used to define a chiral truncation. In a sense, the left-moving asymptotic symmetries are spontaneously broken at the chiral point. In contrast, in a non-unitary quantization of TMG, both the Hilbert space and charges are continuous at the chiral point and define a unitary theory of chiral gravity at the linearized level.

  6. Dual permeability flow behavior for modeling horizontal well production in fractured-vuggy carbonate reservoirs

    NASA Astrophysics Data System (ADS)

    Guo, Jian-Chun; Nie, Ren-Shi; Jia, Yong-Lu

    2012-09-01

    Fractured-vuggy carbonate reservoirs are composed of matrix, fracture, and vug systems. This paper is the first investigation of the dual permeability flow issue for horizontal well production in a fractured-vuggy carbonate reservoir. Considering dispersed vugs in carbonate reservoirs and treating the media directly connected with the horizontal wellbore as the matrix and fracture systems, a test analysis model of a horizontal well was created, and triple-porosity, dual-permeability flow behavior was modeled. Standard log-log type curves were drawn up by numerical simulation and the flow behavior characteristics were thoroughly analyzed. Numerical simulations showed that the type curves are dominated by the external boundary conditions as well as by the permeability ratio of the fracture system to the sum of the fracture and matrix systems. The parameter κ is only relevant to the dual permeability model; if κ is one, the dual permeability model reduces to the single permeability model. There are seven main flow regimes under constant-rate horizontal well production and five flow regimes under constant-wellbore-pressure production; different flow regimes have different flow behavior characteristics. Early radial flow and linear flow regimes are typical characteristics of horizontal well production; the duration of the early radial flow regime is usually short because formation thickness is generally less than 100 m. Derivative curves are W-shaped, reflecting inter-porosity flows between the matrix, fracture, and vug systems. A distorted W-shape, which can be produced in certain situations, such as one involving an erroneously low time of inter-porosity flow, would handicap the recognition of a linear flow regime. A real case application was successfully implemented, and some useful reservoir parameters (e.g., permeability and inter-porosity flow factor) were obtained from well test interpretation.

  7. Effect of Malmquist bias on correlation studies with IRAS data base

    NASA Technical Reports Server (NTRS)

    Verter, Frances

    1993-01-01

    The relationships between galaxy properties in the sample of Trinchieri et al. (1989) are reexamined with corrections for Malmquist bias. The linear correlations are tested and linear regressions are fit for log-log plots of L(FIR), L(H-alpha), and L(B), as well as ratios of these quantities. The linear correlations are corrected for Malmquist bias using the method of Verter (1988), in which each galaxy observation is weighted by the inverse of its sampling volume. The linear regressions are corrected for Malmquist bias by a new method introduced here, in which each galaxy observation is weighted by its sampling volume. The results of correlations and regressions among the sample are significantly changed in the anticipated sense: the corrected correlation confidences are lower and the corrected slopes of the linear regressions are lower. The elimination of Malmquist bias removes the nonlinear rise in luminosity that has caused some authors to hypothesize additional components of FIR emission.
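
    A minimal sketch of a volume-weighted correlation in the spirit of the inverse-sampling-volume correction described above; the luminosities and sampling volumes below are synthetic, and the weighting scheme is only illustrative.

      import numpy as np

      def weighted_corr(x, y, w):
          """Correlation with per-galaxy weights (e.g., w = 1/V_sampling
          to correct a flux-limited sample for Malmquist bias)."""
          mx, my = np.average(x, weights=w), np.average(y, weights=w)
          cov = np.average((x - mx) * (y - my), weights=w)
          sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
          sy = np.sqrt(np.average((y - my) ** 2, weights=w))
          return cov / (sx * sy)

      rng = np.random.default_rng(6)
      log_lfir = rng.uniform(8, 11, 200)                 # hypothetical luminosities
      log_lb = 0.8 * log_lfir + rng.normal(0, 0.4, 200)
      v_sample = 10 ** (1.5 * (log_lfir - 8))            # brighter -> larger volume
      print(weighted_corr(log_lfir, log_lb, 1 / v_sample))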

  8. Searching for Genotype-Phenotype Structure: Using Hierarchical Log-Linear Models in Crohn Disease

    PubMed Central

    Chapman, Juliet M.; Onnie, Clive M.; Prescott, Natalie J.; Fisher, Sheila A.; Mansfield, John C.; Mathew, Christopher G.; Lewis, Cathryn M.; Verzilli, Claudio J.; Whittaker, John C.

    2009-01-01

    There has been considerable recent success in the detection of gene-disease associations. We consider here the development of tools that facilitate the more detailed characterization of the effect of a genetic variant on disease. We replace the simplistic classification of individuals according to a single binary disease indicator with classification according to a number of subphenotypes. This more accurately reflects the underlying biological complexity of the disease process, but it poses additional analytical difficulties. Notably, the subphenotypes that make up a particular disease are typically highly associated, and it becomes difficult to distinguish which genes might be causing which subphenotypes. Such problems arise in many complex diseases. Here, we concentrate on an application to Crohn disease (CD). We consider this problem as one of model selection based upon log-linear models, fitted in a Bayesian framework via a reversible-jump Metropolis-Hastings approach. We evaluate the performance of our suggested approach with a simple simulation study and then apply the method to a real data example in CD, revealing a sparse disease structure. Most notably, the associated NOD2.908G→R mutation appears to be directly related to more severe disease behaviors, whereas the other two associated NOD2 variants, 1007L→FS and 702R→W, are more generally related to disease in the small bowel (ileum and jejunum). The ATG16L1.300T→A variant appears to be directly associated with only disease of the small bowel. PMID:19185283

  9. Mountain plover population responses to black-tailed prairie dogs in Montana

    USGS Publications Warehouse

    Dinsmore, S.J.; White, Gary C.; Knopf, F.L.

    2005-01-01

    We studied a local population of mountain plovers (Charadrius montanus) in southern Phillips County, Montana, USA, from 1995 to 2000 to estimate annual rates of recruitment (f) and population change (λ). We used Pradel models, and we modeled λ as a constant across years, as a linear time trend, as year-specific, and with an additive effect of area occupied by prairie dogs (Cynomys ludovicianus). We modeled recruitment rate (f) as a function of area occupied by prairie dogs, with the remaining model structure identical to the best model used to estimate λ. Our results indicated a strong negative effect of area occupied by prairie dogs on both λ (slope coefficient on a log scale was -0.11; 95% CI was -0.17, -0.05) and f (slope coefficient on a logit scale was -0.23; 95% CI was -0.36, -0.10). We also found good evidence for a negative time trend in λ; this model had substantial weight (wi = 0.31), and the slope coefficient of the linear trend on a log scale was -0.10 (95% CI was -0.15, -0.05). Yearly estimates of λ were >1 in all years except 1999, indicating that the population initially increased and then stabilized in the last year of the study. We found weak evidence for year-specific estimates of λ; the best model with year-specific estimates had a low weight (wi = 0.02), although the pattern of yearly estimates of λ closely matched those estimated with a linear time trend. In southern Phillips County, the population trend of mountain plovers closely matched the trend in the area occupied by black-tailed prairie dogs. Black-tailed prairie dogs declined sharply in the mid-1990s in response to an outbreak of sylvatic plague, but their numbers have steadily increased since 1996 in concert with increases in plovers. The results of this study (1) increase our understanding of the dynamics of this population and how they relate to the area occupied by prairie dogs, and (2) will be useful for planning plover conservation in a prairie dog ecosystem.

  10. Prediction of Compressional Wave Velocity Using Regression and Neural Network Modeling and Estimation of Stress Orientation in Bokaro Coalfield, India

    NASA Astrophysics Data System (ADS)

    Paul, Suman; Ali, Muhammad; Chatterjee, Rima

    2018-01-01

    The compressional wave velocity (Vp) of coal and non-coal lithologies is predicted for five wells from the Bokaro coalfield (CF), India. Shear sonic travel time logs were not recorded for all wells in the study area; shear wave velocity (Vs) is available for only two wells, one from east and one from west Bokaro CF. The major lithologies of this CF are dominated by coal and shaly coal of the Barakar formation. This paper focuses on (a) the relationship between Vp and Vs, (b) the prediction of Vp using regression and neural network modeling, and (c) the estimation of maximum horizontal stress from image logs. Coal is characterized by low acoustic impedance (AI) compared to the overlying and underlying strata. The cross-plot between AI and Vp/Vs is able to distinguish coal, shaly coal, shale, and sandstone in wells of the Bokaro CF. The relationship between Vp and Vs is obtained with excellent goodness of fit (R2) ranging from 0.90 to 0.93. Linear multiple regression and multi-layered feed-forward neural network (MLFN) models are developed for predicting Vp in two wells using four input log parameters: gamma ray, resistivity, bulk density, and neutron porosity. The regression-predicted Vp shows poor to good fit with the observed velocity (R2 = 0.28 to 0.79), while the MLFN-predicted Vp shows satisfactory to good fit (R2 = 0.62 to 0.92). The maximum horizontal stress orientation in a well at west Bokaro CF is studied from the Formation Micro-Imager (FMI) log. Breakouts and drilling-induced fractures (DIFs) are identified from the FMI log. A breakout length of 4.5 m is oriented towards N60°W, whereas the orientation of DIFs, for a cumulative length of 26.5 m, varies from N15°E to N35°E. The mean maximum horizontal stress in this CF is oriented towards N28°E.
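
    A sketch of the MLFN idea using a generic single-hidden-layer network from scikit-learn. The four predictor logs and the target Vp below are synthetic stand-ins, and the architecture (10 hidden units) is an assumption, not the network size used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      # Synthetic stand-ins for gamma ray, resistivity, bulk density, and
      # neutron porosity, plus a made-up target Vp (km/s).
      X = rng.uniform([20, 1, 1.8, 0.05], [150, 200, 2.8, 0.45], size=(500, 4))
      vp = 2.0 + 0.004 * X[:, 0] + 0.6 * X[:, 2] + rng.normal(0, 0.1, 500)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                         random_state=0))
      model.fit(X[:400], vp[:400])
      print(f"R^2 on held-out samples: {model.score(X[400:], vp[400:]):.2f}")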

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiedemeier, Heribert, E-mail: wiedeh@rpi.edu

    The observed linear (Na-, K-halides) and near-linear (Mg-, Sr-, Zn-, Cd-, and Hg-chalcogenides) dependences of Schottky constants on reciprocal interatomic distances yield the relation log K_S = (s_s/T + i_s)·(1/d_(A−B)) + s_i/T + i_i, where K_S is the product of metal and non-metal thermal equilibrium vacancy concentrations, and s_s, i_s, s_i, and i_i are the group-specific slope and intercept values obtained from an extended analysis of the above log K_S versus 1/d_(A−B) data. The previously reported linear dependences of log K_S on the Born-Haber lattice energies [1] are the basis for combining the earlier results [1] with the Born-Mayer lattice energy equation to yield a new thermodynamic relationship, namely log K_S = −(2.303nRT)^(−1)·(c_(B−M)/d_(A−B) − I_e), where c_(B−M) is the product of the constants of the Born-Mayer equation and I_e is the metal ionization energy of the above compounds. These results establish a correlation between point defect concentrations and basic thermodynamic, coulombic, and structural solid state properties for selected I-VII and II-VI semiconductor materials.

  12. Analysis of albedo versus cloud fraction relationships in liquid water clouds using heuristic models and large eddy simulation

    NASA Astrophysics Data System (ADS)

    Feingold, Graham; Balsells, Joseph; Glassmeier, Franziska; Yamaguchi, Takanobu; Kazil, Jan; McComiskey, Allison

    2017-07-01

    The relationship between the albedo of a cloudy scene A and cloud fraction fc is studied with the aid of heuristic models of stratocumulus and cumulus clouds. Existing work has shown that scene albedo increases monotonically with increasing cloud fraction but that the relationship varies from linear to superlinear. The reasons for these differences in functional dependence are traced to the relationship between cloud deepening and cloud widening. When clouds deepen with no significant increase in fc (e.g., in solid stratocumulus), the relationship between A and fc is linear. When clouds widen as they deepen, as in cumulus cloud fields, the relationship is superlinear. A simple heuristic model of a cumulus cloud field with a power law size distribution shows that the superlinear A-fc behavior is traced out either through random variation in cloud size distribution parameters or as the cloud field oscillates between a relative abundance of small clouds (steep slopes on a log-log plot) and a relative abundance of large clouds (flat slopes). Oscillations of this kind manifest in large eddy simulation of trade wind cumulus where the slope and intercept of the power law fit to the cloud size distribution are highly correlated. Further analysis of the large eddy model-generated cloud fields suggests that cumulus clouds grow larger and deeper as their underlying plumes aggregate; this is followed by breakup of large plumes and a tendency to smaller clouds. The cloud and thermal size distributions oscillate back and forth approximately in unison.

  13. A Binomial Modeling Approach for Upscaling Colloid Transport Under Unfavorable Attachment Conditions: Emergent Prediction of Nonmonotonic Retention Profiles

    NASA Astrophysics Data System (ADS)

    Hilpert, Markus; Johnson, William P.

    2018-01-01

    We used a recently developed simple mathematical network model to upscale pore-scale colloid transport information determined under unfavorable attachment conditions. Classical log-linear and nonmonotonic retention profiles, well reported under favorable and unfavorable attachment conditions, respectively, emerged from our upscaling. The primary attribute of the network is colloid transfer between the bulk pore fluid, the near-surface fluid domain (NSFD), and attachment (treated as irreversible). The network model accounts for colloid transfer to the NSFD of downgradient grains and for reentrainment to the bulk pore fluid via diffusion or via expulsion at rear flow stagnation zones (RFSZs). The model describes colloid transport by a sequence of random trials in a one-dimensional (1-D) network of Happel cells, which contain a grain and a pore. Using combinatorial analysis that capitalizes on the binomial coefficient, we derived from the pore-scale information the theoretical residence time distribution of colloids in the network. The transition from log-linear to nonmonotonic retention profiles occurs when the conditions underlying classical filtration theory are not fulfilled, i.e., when an NSFD colloid population is maintained. Then, nonmonotonic retention profiles result potentially for both attached and NSFD colloids. The concentration maxima shift downgradient depending on the specific parameter choice. The concentration maxima were also shown to shift downgradient temporally (with continued elution) under conditions where attachment is negligible, explaining experimentally observed downgradient transport of retained concentration maxima of adhesion-deficient bacteria. For the case of zero reentrainment, we develop closed-form analytical expressions for the shape and maximum of the colloid retention profile.

  14. Sieve analysis using the number of infecting pathogens.

    PubMed

    Follmann, Dean; Huang, Chiung-Yu

    2017-12-14

    Assessment of vaccine efficacy as a function of the similarity of the infecting pathogen to the vaccine is an important scientific goal. Characterization of pathogen strains for which vaccine efficacy is low can increase understanding of the vaccine's mechanism of action and offer targets for vaccine improvement. Traditional sieve analysis estimates differential vaccine efficacy using a single identifiable pathogen for each subject. The similarity between this single entity and the vaccine immunogen is quantified, for example, by exact match or number of mismatched amino acids. With new technology, we can now obtain the actual count of genetically distinct pathogens that infect an individual. Let F be the number of distinct features of a species of pathogen. We assume a log-linear model for the expected number of infecting pathogens with feature "f," f=1,…,F. The model can be used directly in studies with passive surveillance of infections where the count of each type of pathogen is recorded at the end of some interval, or active surveillance where the time of infection is known. For active surveillance, we additionally assume that a proportional intensity model applies to the time of potentially infectious exposures and derive product and weighted estimating equation (WEE) estimators for the regression parameters in the log-linear model. The WEE estimator explicitly allows for waning vaccine efficacy and time-varying distributions of pathogens. We give conditions where sieve parameters have a per-exposure interpretation under passive surveillance. We evaluate the methods by simulation and analyze a phase III trial of a malaria vaccine. © 2017, The International Biometric Society.

  15. Dynamic Predictive Model for Growth of Bacillus cereus from Spores in Cooked Beans.

    PubMed

    Juneja, Vijay K; Mishra, Abhinav; Pradhan, Abani K

    2018-02-01

    Kinetic growth data for Bacillus cereus grown from spores were collected in cooked beans under several isothermal conditions (10 to 49°C). Samples were inoculated with approximately 2 log CFU/g heat-shocked (80°C for 10 min) spores and stored at isothermal temperatures. B. cereus populations were determined at appropriate intervals by plating on mannitol-egg yolk-polymyxin agar and incubating at 30°C for 24 h. Data were fitted to the Baranyi, Huang, modified Gompertz, and three-phase linear primary growth models. All four models were fitted to the experimental growth data collected at 13 to 46°C. The performance of these models was evaluated based on accuracy and bias factors, the coefficient of determination (R2), and the root mean square error. Based on these criteria, the Baranyi model best described the growth data, followed by the Huang, modified Gompertz, and three-phase linear models. The maximum growth rates of each primary model were fitted as a function of temperature using the modified Ratkowsky model. The high R2 values (0.95 to 0.98) indicate that the modified Ratkowsky model can be used to describe the effect of temperature on the growth rates for all four primary models. The acceptable prediction zone (APZ) approach was also used for validation of the model with observed data collected during single- and two-step dynamic cooling temperature protocols. When the predictions of the Baranyi model were compared with the observed data using the APZ analysis, all 24 observations for the exponential single-rate cooling were within the APZ, which was set between -0.5 and 1 log CFU/g; 26 of 28 predictions for the two-step cooling profiles were also within the APZ limits. The developed dynamic model can be used to predict potential B. cereus growth from spores in beans under various temperature conditions or during extended chilling of cooked beans.
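
    A sketch of fitting the modified Ratkowsky secondary model, sqrt(mu_max) = a·(T − Tmin)·(1 − exp(b·(T − Tmax))), to hypothetical maximum growth rates; the data points and starting values below are illustrative only, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def ratkowsky(T, a, Tmin, b, Tmax):
          """Modified Ratkowsky model: sqrt(mu_max) = a*(T-Tmin)*(1-exp(b*(T-Tmax)))."""
          return a * (T - Tmin) * (1 - np.exp(b * (T - Tmax)))

      # Hypothetical maximum growth rates at the tested temperatures (°C).
      T = np.array([13, 19, 25, 31, 37, 43, 46])
      mu = np.array([0.02, 0.08, 0.22, 0.45, 0.70, 0.55, 0.15])

      popt, _ = curve_fit(ratkowsky, T, np.sqrt(mu), p0=[0.02, 8, 0.3, 50])
      print(dict(zip(["a", "Tmin", "b", "Tmax"], np.round(popt, 2))))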

  16. Estimating residual fault hitting rates by recapture sampling

    NASA Technical Reports Server (NTRS)

    Lee, Larry; Gupta, Rajan

    1988-01-01

    For the recapture debugging design introduced by Nayak (1988), the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed-sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated, such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log-linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.

  17. Historical HIV incidence modelling in regional subgroups: use of flexible discrete models with penalized splines based on prior curves.

    PubMed

    Greenland, S

    1996-03-15

    This paper presents an approach to back-projection (back-calculation) of human immunodeficiency virus (HIV) person-year infection rates in regional subgroups, based on combining a log-linear model for subgroup differences with a penalized spline model for trends. The penalized spline approach allows flexible trend estimation but requires far fewer parameters than fully non-parametric smoothers, thus saving parameters that can be used in estimating subgroup effects. Use of a reasonable prior curve to construct the penalty function minimizes the degree of smoothing needed beyond model specification. The approach is illustrated in an application to acquired immunodeficiency syndrome (AIDS) surveillance data from Los Angeles County.

  18. The Dangers of Estimating V˙O2max Using Linear, Nonexercise Prediction Models.

    PubMed

    Nevill, Alan M; Cooke, Carlton B

    2017-05-01

    This study aimed to compare the accuracy and goodness of fit of two competing models (linear vs allometric) when estimating V˙O2max (mL·kg⁻¹·min⁻¹) using nonexercise prediction models. The two competing models were fitted to the V˙O2max (mL·kg⁻¹·min⁻¹) data taken from two previously published studies. Study 1 (the Allied Dunbar National Fitness Survey) recruited 1732 randomly selected healthy participants, 16 yr and older, from 30 English parliamentary constituencies. Estimates of V˙O2max were obtained using a progressive incremental test on a motorized treadmill. In study 2, maximal oxygen uptake was measured directly during a fatigue-limited treadmill test in older men (n = 152) and women (n = 146) 55 to 86 yr old. In both studies, the quality of fit associated with estimating V˙O2max (mL·kg⁻¹·min⁻¹) was superior using allometric rather than linear (additive) models based on all criteria (R, maximum log-likelihood, and Akaike information criteria). Results suggest that linear models will systematically overestimate V˙O2max for participants in their 20s and underestimate V˙O2max for participants in their 60s and older. The residuals saved from the linear models were neither normally distributed nor independent of the predicted values nor age. This probably explains the absence of a key quadratic age term in the linear models, crucially identified using allometric models. Not only does the curvilinear age decline within an exponential function follow a more realistic age decline (the right-hand side of a bell-shaped curve), but the allometric models identified either a stature-to-body mass ratio (study 1) or a fat-free mass-to-body mass ratio (study 2), both associated with leanness when estimating V˙O2max. Adopting allometric models will provide more accurate predictions of V˙O2max (mL·kg⁻¹·min⁻¹) using plausible, biologically sound, and interpretable models.
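
    The contrast between the two model families can be sketched on synthetic data: a linear additive model fitted to V˙O2max directly versus an allometric model fitted to ln(V˙O2max) with log body-size terms and a quadratic age decline. Everything below (sample size, coefficients, units) is made up for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 300
      age = rng.uniform(20, 80, n)
      mass = rng.uniform(50, 100, n)
      stature = rng.uniform(1.5, 2.0, n)
      # Hypothetical "true" allometric process with a quadratic age decline.
      vo2 = (120 * (stature / mass ** 0.33)
             * np.exp(-0.0004 * (age - 25) ** 2) * rng.lognormal(0, 0.08, n))

      # Linear (additive) model: VO2max = b0 + b1*age + b2*mass + b3*stature
      X_lin = np.column_stack([np.ones(n), age, mass, stature])
      beta_lin, *_ = np.linalg.lstsq(X_lin, vo2, rcond=None)

      # Allometric model: ln(VO2max) = b0 + b1*ln(mass) + b2*ln(stature)
      #                                + b3*age + b4*age^2
      X_allo = np.column_stack([np.ones(n), np.log(mass), np.log(stature),
                                age, age ** 2])
      beta_allo, *_ = np.linalg.lstsq(X_allo, np.log(vo2), rcond=None)

      def r2(y, yhat):
          return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

      print(f"linear R^2     = {r2(vo2, X_lin @ beta_lin):.3f}")
      print(f"allometric R^2 = {r2(np.log(vo2), X_allo @ beta_allo):.3f}")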

  19. Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2013-01-01

    Recent studies have demonstrated theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to the existing algorithms that use local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss, but does not use any approximation to the penalty. This strategy makes it possible to avoid the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish theoretical convergence property of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for the penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
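
    The MMCD strategy described above can be sketched compactly: with standardized covariates, the logistic negative log-likelihood admits the uniform coordinate-wise curvature bound v = 1/4, so each coordinate update is a closed-form MCP thresholding step with no per-update scaling factor. This is a minimal illustration under stated assumptions, not the authors' implementation; the data, lambda, and gamma below are arbitrary (gamma must exceed 1/v = 4 for the shrunk-region update to be well defined).

      import numpy as np

      def mmcd_logistic_mcp(X, y, lam, gamma=8.0, n_iter=200):
          """Sketch of majorization-minimization by coordinate descent for
          MCP-penalized logistic regression. Columns of X are assumed
          standardized so that ||x_j||^2 / n = 1; the logistic loss is then
          majorized coordinate-wise by a quadratic with curvature v = 1/4."""
          n, p = X.shape
          v = 0.25                                        # uniform curvature bound
          beta, b0 = np.zeros(p), 0.0
          for _ in range(n_iter):
              for j in range(p):
                  prob = 1 / (1 + np.exp(-(b0 + X @ beta)))
                  u = v * beta[j] + X[:, j] @ (y - prob) / n
                  if abs(u) <= lam:                       # MCP: zero region
                      beta[j] = 0.0
                  elif abs(u) <= v * gamma * lam:         # MCP: shrunk region
                      beta[j] = np.sign(u) * (abs(u) - lam) / (v - 1 / gamma)
                  else:                                   # MCP: unpenalized region
                      beta[j] = u / v
              prob = 1 / (1 + np.exp(-(b0 + X @ beta)))
              b0 += np.mean(y - prob) / v                 # unpenalized intercept
          return b0, beta

      rng = np.random.default_rng(2)
      n, p = 200, 50
      X = rng.standard_normal((n, p))
      X = (X - X.mean(0)) / X.std(0)                      # standardize columns
      eta = 1.5 * X[:, 0] - 1.0 * X[:, 1]
      y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
      b0, beta = mmcd_logistic_mcp(X, y, lam=0.1)
      print("nonzero coefficients:", np.flatnonzero(beta))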

  20. Determination of real-time polymerase chain reaction uncertainty of measurement using replicate analysis and a graphical user interface with Fieller’s theorem

    PubMed Central

    Stuart, James Ian; Delport, Johan; Lannigan, Robert; Zahariadis, George

    2014-01-01

    BACKGROUND: Disease monitoring of viruses using real-time polymerase chain reaction (PCR) requires knowledge of the precision of the test to determine what constitutes a significant change. Calculation of quantitative PCR confidence limits requires bivariate statistical methods. OBJECTIVE: To develop a simple-to-use graphical user interface to determine the uncertainty of measurement (UOM) of BK virus, cytomegalovirus (CMV) and Epstein-Barr virus (EBV) real-time PCR assays. METHODS: Thirty positive clinical samples for each of the three viral assays were repeated once. A graphical user interface was developed using a spreadsheet (Excel, Microsoft Corporation, USA) to enable data entry and calculation of the UOM (according to Fieller’s theorem) and PCR efficiency. RESULTS: The confidence limits for the BK virus, CMV and EBV tests were ∼0.5 log, 0.5 log to 1.0 log, and 0.5 log to 1.0 log, respectively. The efficiencies of these assays, in the same order, were 105%, 119%, and 90%. The confidence limits remained stable over the linear range of all three tests. DISCUSSION: A >5-fold (0.7 log) and a >3-fold (0.5 log) change in viral load were significant for CMV and EBV when the results were ≤1000 copies/mL and >1000 copies/mL, respectively. A >3-fold (0.5 log) change in viral load was significant for BK virus over its entire linear range. PCR efficiency was ideal for BK virus and EBV but not CMV. Standardized international reference materials and shared reporting of UOM among laboratories are required for the development of treatment guidelines for BK virus, CMV and EBV in the context of changes in viral load. PMID:25285125

  1. Determination of real-time polymerase chain reaction uncertainty of measurement using replicate analysis and a graphical user interface with Fieller's theorem.

    PubMed

    Stuart, James Ian; Delport, Johan; Lannigan, Robert; Zahariadis, George

    2014-07-01

    Disease monitoring of viruses using real-time polymerase chain reaction (PCR) requires knowledge of the precision of the test to determine what constitutes a significant change. Calculation of quantitative PCR confidence limits requires bivariate statistical methods. To develop a simple-to-use graphical user interface to determine the uncertainty of measurement (UOM) of BK virus, cytomegalovirus (CMV) and Epstein-Barr virus (EBV) real-time PCR assays. Thirty positive clinical samples for each of the three viral assays were repeated once. A graphical user interface was developed using a spreadsheet (Excel, Microsoft Corporation, USA) to enable data entry and calculation of the UOM (according to Fieller's theorem) and PCR efficiency. The confidence limits for the BK virus, CMV and EBV tests were ∼0.5 log, 0.5 log to 1.0 log, and 0.5 log to 1.0 log, respectively. The efficiencies of these assays, in the same order, were 105%, 119%, and 90%. The confidence limits remained stable over the linear range of all three tests. A >5-fold (0.7 log) and a >3-fold (0.5 log) change in viral load were significant for CMV and EBV when the results were ≤1000 copies/mL and >1000 copies/mL, respectively. A >3-fold (0.5 log) change in viral load was significant for BK virus over its entire linear range. PCR efficiency was ideal for BK virus and EBV but not CMV. Standardized international reference materials and shared reporting of UOM among laboratories are required for the development of treatment guidelines for BK virus, CMV and EBV in the context of changes in viral load.
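
    A sketch of a Fieller-type confidence interval for the ratio of means of paired replicate measurements, in the spirit of the replicate-analysis UOM described above; the viral-load values below are synthetic and the pairing scheme is an assumption for illustration.

      import numpy as np
      from scipy import stats

      def fieller_ci(x, y, alpha=0.05):
          """Fieller's theorem: confidence limits for the ratio of the means of
          paired measurements x and y (e.g., replicate vs. original runs)."""
          n = len(x)
          mx, my = x.mean(), y.mean()
          vxx = x.var(ddof=1) / n            # variance of the mean of x
          vyy = y.var(ddof=1) / n
          vxy = np.cov(x, y, ddof=1)[0, 1] / n
          t2 = stats.t.ppf(1 - alpha / 2, n - 1) ** 2
          # Roots of (mx - rho*my)^2 = t2*(vxx - 2*rho*vxy + rho^2*vyy)
          a = my ** 2 - t2 * vyy
          b = -2 * (mx * my - t2 * vxy)
          c = mx ** 2 - t2 * vxx
          disc = b ** 2 - 4 * a * c
          if a <= 0 or disc < 0:
              raise ValueError("denominator not significantly nonzero; "
                               "Fieller interval is unbounded")
          return ((-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a))

      rng = np.random.default_rng(3)
      orig = rng.normal(5.0, 0.3, 30)        # hypothetical log10 viral loads
      repl = orig + rng.normal(0.0, 0.2, 30) # replicate run of the same samples
      print(fieller_ci(repl, orig))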

  2. Highly fault-tolerant parallel computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spielman, D.A.

    We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w·log^O(1) w processors and time t·log^O(1) w. The failure probability of the computation will be at most t·exp(−w^(1/4)). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n·log^O(1) n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.

  3. Localized massive halo properties in BAHAMAS and MACSIS simulations: scalings, log-normality, and covariance

    NASA Astrophysics Data System (ADS)

    Farahi, Arya; Evrard, August E.; McCarthy, Ian; Barnes, David J.; Kay, Scott T.

    2018-05-01

    Using tens of thousands of halos realized in the BAHAMAS and MACSIS simulations produced with a consistent astrophysics treatment that includes AGN feedback, we validate a multi-property statistical model for the stellar and hot gas mass behavior in halos hosting groups and clusters of galaxies. The large sample size allows us to extract fine-scale mass-property relations (MPRs) by performing local linear regression (LLR) on individual halo stellar mass (Mstar) and hot gas mass (Mgas) as a function of total halo mass (Mhalo). We find that: (1) both the local slope and variance of the MPRs run with mass (primarily) and redshift (secondarily); (2) the conditional likelihood, p(Mstar, Mgas | Mhalo, z), is accurately described by a multivariate, log-normal distribution; and (3) the covariance of Mstar and Mgas at fixed Mhalo is generally negative, reflecting a partially closed baryon box model for high mass halos. We validate the analytical population model of Evrard et al. (2014), finding sub-percent accuracy in the log-mean halo mass selected at fixed property, ⟨ln Mhalo|Mgas⟩ or ⟨ln Mhalo|Mstar⟩, when scale-dependent MPR parameters are employed. This work highlights the potential importance of allowing for running in the slope and scatter of MPRs when modeling cluster counts for cosmological studies. We tabulate LLR fit parameters as a function of halo mass at z = 0, 0.5 and 1 for two popular mass conventions.
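
    Local linear regression as used above amounts to a kernel-weighted least-squares line fit at each target mass; the sketch below uses a Gaussian kernel and a synthetic gas-mass relation whose slope runs with mass (all numbers are illustrative, not simulation values).

      import numpy as np

      def llr(x, y, x0, width=0.3):
          """Local linear regression: weighted least-squares line at x0 with
          Gaussian weights of the given width (in dex here). Returns the
          local mean relation at x0 and the local slope."""
          w = np.exp(-0.5 * ((x - x0) / width) ** 2)
          X = np.column_stack([np.ones_like(x), x - x0])
          WX = X * w[:, None]
          coef = np.linalg.solve(X.T @ WX, X.T @ (w * y))
          return coef  # [local mean at x0, local slope]

      rng = np.random.default_rng(4)
      log_mhalo = rng.uniform(13, 15, 2000)
      # Hypothetical gas-mass relation with a slope that runs with mass.
      log_mgas = (12 + 1.1 * (log_mhalo - 14) + 0.05 * (log_mhalo - 14) ** 2
                  + rng.normal(0, 0.1, 2000))

      for m in (13.0, 14.0, 15.0):
          mean, slope = llr(log_mhalo, log_mgas, m)
          print(f"Mhalo = 1e{m:.0f}: local slope = {slope:.2f}")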

  4. Analyzing Response Times in Tests with Rank Correlation Approaches

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jorg-Tobias

    2013-01-01

    It is common practice to log-transform response times before analyzing them with standard factor analytical methods. However, sometimes the log-transformation is not capable of linearizing the relation between the response times and the latent traits. Therefore, a more general approach to response time analysis is proposed in the current…

  5. Predicting landslides in clearcut patches

    Treesearch

    Raymond M. Rice; Norman H. Pillsbury

    1982-01-01

    Accelerated erosion in the form of landslides can be an undesirable consequence of clearcut logging on steep slopes. Forest managers need a method of predicting the risk of such erosion. Data collected after logging in a granitic area of northwestern California were used to develop a predictive equation. A linear discriminant function was developed that...

  6. Fractal geometry of music.

    PubMed Central

    Hsü, K J; Hsü, A J

    1990-01-01

    Music critics have compared Bach's music to the precision of mathematics. What "mathematics" and what "precision" are the questions for a curious scientist. The purpose of this short note is to suggest that the mathematics is, at least in part, Mandelbrot's fractal geometry and the precision is the deviation from a log-log linear plot. PMID:11607061

  7. Using Artificial Neural Networks to Predict the Presence of Overpressured Zones in the Anadarko Basin, Oklahoma

    NASA Astrophysics Data System (ADS)

    Cranganu, Constantin

    2007-10-01

    Many sedimentary basins throughout the world exhibit areas with abnormal pore-fluid pressures (higher or lower than normal, or hydrostatic, pressure). Predicting pore pressure and other parameters (depth, extension, magnitude, etc.) in such areas is a challenging task. The compressional acoustic (sonic) log (DT) is often used as a predictor because it responds to changes in porosity or compaction produced by abnormal pore-fluid pressures. Unfortunately, the sonic log is not commonly recorded in most oil and/or gas wells. We propose using an artificial neural network to synthesize sonic logs by identifying the mathematical dependency between DT and commonly available logs, such as normalized gamma ray (GR) and deep resistivity logs (REID). The artificial neural network process can be divided into three steps: (1) supervised training of the neural network; (2) confirmation and validation of the model by blind-testing the results in wells that contain both the predictor (GR, REID) and the target values (DT) used in the supervised training; and (3) applying the predictive model to all wells containing the required predictor data and verifying the accuracy of the synthetic DT data by comparing the back-predicted synthetic predictor curves (GRNN, REIDNN) to the recorded predictor curves used in training (GR, REID). Artificial neural networks offer significant advantages over traditional deterministic methods. They do not require a precise mathematical model equation that describes the dependency between the predictor values and the target values and, unlike linear regression techniques, neural network methods do not overpredict mean values and thereby preserve original data variability. One of their most important advantages is that their predictions can be validated and confirmed through back-prediction of the input data. This procedure was applied to predict the presence of overpressured zones in the Anadarko Basin, Oklahoma. The results are promising and encouraging.

  8. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.

  9. Testing concordance of instrumental variable effects in generalized linear models with application to Mendelian randomization

    PubMed Central

    Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li

    2014-01-01

    Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, considerable work has recently been developed for consistent estimation of causal relative risks and causal odds ratios. Such models can sometimes suffer from identification issues with weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158

  10. Estimation of octanol/water partition coefficients using LSER parameters

    USGS Publications Warehouse

    Luehrs, Dean C.; Hickey, James P.; Godbole, Kalpana A.; Rogers, Tony N.

    1998-01-01

    The logarithms of octanol/water partition coefficients, log Kow, were regressed against linear solvation energy relationship (LSER) parameters for a training set of 981 diverse organic chemicals. The standard deviation for log Kow was 0.49. The regression equation was then used to estimate log Kow for a test set of 146 chemicals that included pesticides and other diverse polyfunctional compounds. Thus the octanol/water partition coefficient may be estimated from LSER parameters without elaborate software, though only moderate accuracy should be expected.

  11. Factors Associated with Post-traumatic Stress Symptoms in Students Who Survived 20 Months after the Sewol Ferry Disaster in Korea

    PubMed Central

    2018-01-01

    Background The Sewol ferry disaster caused national shock and grief in Korea. The present study examined the prevalence and associated factors of post-traumatic stress disorder (PTSD) symptoms among the surviving students 20 months after the disaster. Methods This study was conducted using a cross-sectional design and a sample of 57 students (29 boys and 28 girls) who survived the Sewol ferry disaster. Data were collected using a questionnaire, including instruments that assessed psychological status. A generalized linear model using a log link and Poisson distribution was fitted to identify factors associated with PTSD symptoms. Results The results showed that 26.3% of participants were classified in the clinical group by the Child Report of Post-traumatic Symptoms score. Based on the generalized linear model with Poisson distribution and log link, PTSD symptoms were positively correlated with the number of traumatic events experienced, peer and social support, peri-traumatic dissociation, post-traumatic negative beliefs, and emotional difficulties. On the other hand, PTSD symptoms were negatively correlated with psychological well-being, family cohesion, post-traumatic social support, receiving care at a psychiatry clinic, and female gender. Conclusion This study uncovered risk and protective factors of PTSD in disaster-exposed adolescents. The implications of these findings are considered in relation to determining assessment and interventional strategies aimed at helping survivors following similar traumatic experiences. PMID:29495137
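
    A sketch of a Poisson GLM with log link of the kind described above, fitted on synthetic symptom counts (the predictors, coefficients, and sample size are made up); under the log link, exponentiated coefficients read as rate ratios.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 57
      trauma_events = rng.integers(0, 6, n)       # hypothetical predictors
      social_support = rng.normal(0, 1, n)
      rate = np.exp(1.0 + 0.25 * trauma_events - 0.30 * social_support)
      symptoms = rng.poisson(rate)                # hypothetical symptom counts

      X = sm.add_constant(np.column_stack([trauma_events, social_support]))
      fit = sm.GLM(symptoms, X, family=sm.families.Poisson()).fit()
      # Exponentiated coefficients are rate ratios under the log link.
      print(np.exp(fit.params))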

  12. Kepler eclipsing binaries with δ Scuti components and tidally induced heartbeat stars

    NASA Astrophysics Data System (ADS)

    Guo, Zhao; Gies, Douglas R.; Matson, Rachel A.

    δ Scuti stars are generally fast rotators and their pulsations are not in the asymptotic regime, so the interpretation of their pulsation spectra is a very difficult task. Binary stars, especially eclipsing systems, offer us the opportunity to constrain the space of fundamental stellar parameters. First, we show the results for KIC9851944 and KIC4851217 as two case studies. We found the signature of the large frequency separation in the pulsation spectra of both stars. The observed mean stellar density and the large frequency separation obey the linear relation in log-log space found by Suarez et al. (2014) and García Hernández et al. (2015). Second, we apply the simple 'one-layer model' of Moreno & Koenigsberger (1999) to the prototype heartbeat star KOI-54. The model naturally reproduces the tidally induced high-frequency oscillations, and their frequencies are very close to the observed frequencies at 90 and 91 times the orbital frequency.

  13. The Use of Crow-AMSAA Plots to Assess Mishap Trends

    NASA Technical Reports Server (NTRS)

    Dawson, Jeffrey W.

    2011-01-01

    Crow-AMSAA (CA) plots are used to model reliability growth. Use of CA plots has expanded into other areas, such as tracking events of interest to management, maintenance problems, and safety mishaps. Safety mishaps can often be successfully modeled using a Poisson probability distribution. CA plots show a Poisson process in log-log space. If the safety mishaps are a stable homogenous Poisson process, a linear fit to the points in a CA plot will have a slope of one. Slopes of greater than one indicate a nonhomogenous Poisson process, with increasing occurrence. Slopes of less than one indicate a nonhomogenous Poisson process, with decreasing occurrence. Changes in slope, known as "cusps," indicate a change in process, which could be an improvement or a degradation. After presenting the CA conceptual framework, examples are given of trending slips, trips and falls, and ergonomic incidents at NASA (from Agency-level data). Crow-AMSAA plotting is a robust tool for trending safety mishaps that can provide insight into safety performance over time.
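
    A minimal sketch of reading a trend off a CA plot: regress log cumulative mishap count on log cumulative time and inspect the slope (the mishap times below are hypothetical). Note that a maximum-likelihood Crow-AMSAA fit would differ slightly from this graphical regression.

      import numpy as np

      # Hypothetical cumulative times (days) at which successive mishaps occurred.
      t = np.array([12, 30, 41, 70, 95, 140, 160, 220, 260, 330])
      n_cum = np.arange(1, len(t) + 1)

      # Crow-AMSAA: N(t) = lambda * t**beta, linear in log-log space.
      beta, log_lambda = np.polyfit(np.log(t), np.log(n_cum), 1)
      trend = ("increasing" if beta > 1 else
               "decreasing" if beta < 1 else "stable")
      print(f"slope beta = {beta:.2f} -> {trend} occurrence rate")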

  14. Signal detection in power-law noise: effect of spectrum exponents.

    PubMed

    Burgess, Arthur E; Judy, Philip F

    2007-12-01

    Many natural backgrounds have approximately isotropic power spectra of the power-law form P(f) = K/f^β, where f is radial frequency. For natural scenes and mammograms, the values of the exponent β range from 1.5 to 3.5. The ideal observer model predicts that for signals with certain properties and backgrounds that can be treated as random noise, a plot of log(contrast threshold) versus log(signal size) will be linear with slope m given by m = (β − 2)/2. This plot is referred to as a contrast-detail (CD) diagram. It is interesting that this predicts a detection threshold that is independent of signal size for β equal to 2. We present two-alternative forced-choice (2AFC) detection results for human and channelized model observers of a simple signal in filtered noise with exponents from 1.5 to 3.5. The CD diagram results are in good agreement with the prediction of this equation.

  15. Neural network prediction of carbonate lithofacies from well logs, Big Bow and Sand Arroyo Creek fields, Southwest Kansas

    USGS Publications Warehouse

    Qi, L.; Carr, T.R.

    2006-01-01

    In the Hugoton Embayment of southwestern Kansas, St. Louis Limestone reservoirs have relatively low recovery efficiencies, attributed to the heterogeneous nature of the oolitic deposits. This study establishes quantitative relationships between digital well logs and core description data, and applies these relationships in a probabilistic sense to predict lithofacies in 90 uncored wells across the Big Bow and Sand Arroyo Creek fields. In 10 wells, a single-hidden-layer neural network based on digital well logs and core-described lithofacies of the limestone depositional texture was used to train and establish a non-linear relationship between lithofacies assignments from detailed core descriptions and selected log curves. Neural network models were optimized by selecting six predictor variables and automated cross-validation of the network parameters, and then used to predict lithofacies on the whole data set of 2023 half-foot intervals from the 10 cored wells, with a selected network size of 35 and a damping parameter of 0.01. Predicted lithofacies compared to actual lithofacies display absolute accuracies of 70.37-90.82%. Incorporating adjoining lithofacies (within-one classification) improves accuracy slightly (93.72%). Digital logs from uncored wells were batch processed to predict lithofacies and the probabilities related to each lithofacies at half-foot resolution corresponding to log units. The results were used to construct interpolated cross-sections, and useful depositional patterns of St. Louis lithofacies were illustrated, e.g., the concentration of oolitic deposits (including lithofacies 5 and 6) along local highs and the relative dominance of quartz-rich carbonate grainstone (lithofacies 1) in zones A and B of the St. Louis Limestone. Neural network techniques are applicable to other complex reservoirs, in which facies geometry and distribution are the key factors controlling heterogeneity and the distribution of rock properties. Future work involves extension of the neural network to predict reservoir properties and construction of three-dimensional geo-models. © 2005 Elsevier Ltd. All rights reserved.

  16. Compositional data analysis as a robust tool to delineate hydrochemical facies within and between gas-bearing aquifers

    NASA Astrophysics Data System (ADS)

    Owen, D. Des. R.; Pawlowsky-Glahn, V.; Egozcue, J. J.; Buccianti, A.; Bradd, J. M.

    2016-08-01

    Isometric log ratios of proportions of major ions, derived from intuitive sequential binary partitions, are used to characterize hydrochemical variability within and between coal seam gas (CSG) and surrounding aquifers in a number of sedimentary basins in the USA and Australia. These isometric log ratios are the coordinates corresponding to an orthonormal basis in the sample space (the simplex). The characteristic proportions of ions, as described by linear models of isometric log ratios, can be used for a mathematical-descriptive classification of water types. This is a more informative and robust method of describing water types than simply classifying a water type based on the dominant ions. The approach allows (a) compositional distinctions between very similar water types to be made and (b) large data sets with a high degree of variability to be rapidly assessed with respect to particular relationships/compositions that are of interest. A major advantage of these techniques is that major and minor ion components can be comprehensively assessed and subtle processes—which may be masked by conventional techniques such as Stiff diagrams, Piper plots, and classic ion ratios—can be highlighted. Results show that while all CSG groundwaters are dominated by Na, HCO3, and Cl ions, the proportions of other ions indicate they can evolve via different means and the particular proportions of ions within total or subcompositions can be unique to particular basins. Using isometric log ratios, subtle differences in the behavior of Na, K, and Cl between CSG water types and very similar Na-HCO3 water types in adjacent aquifers are also described. A complementary pair of isometric log ratios, derived from a geochemically-intuitive sequential binary partition that is designed to reflect compositional variability within and between CSG groundwater, is proposed. These isometric log ratios can be used to model a hydrochemical pathway associated with methanogenesis and/or to delineate groundwater associated with high gas concentrations.
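
    A sketch of isometric log ratios (balances) from a hypothetical sequential binary partition of a 3-part composition; the partition and the compositions below are illustrative, not those used in the study. Each balance takes the form sqrt(r·s/(r+s))·ln(gmean(numerator)/gmean(denominator)) for a split into r numerator and s denominator parts.

      import numpy as np

      def balance(parts_num, parts_den):
          """Isometric log-ratio (balance) for one step of a sequential
          binary partition."""
          r, s = parts_num.shape[-1], parts_den.shape[-1]
          gm = lambda a: np.exp(np.mean(np.log(a), axis=-1))  # geometric mean
          return np.sqrt(r * s / (r + s)) * np.log(gm(parts_num) / gm(parts_den))

      # Hypothetical 3-part major-ion composition (proportions of Na, Cl, HCO3).
      comp = np.array([[0.45, 0.15, 0.40],
                       [0.50, 0.05, 0.45]])
      # SBP step 1: {Na, HCO3} vs {Cl}; step 2: {Na} vs {HCO3}.
      z1 = balance(comp[:, [0, 2]], comp[:, [1]])
      z2 = balance(comp[:, [0]], comp[:, [2]])
      print(np.column_stack([z1, z2]))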

  17. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to obtain two basis-material fractions without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem; this transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials, though accurate spectrum information about the source-detector system is necessary. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative log. Compared to approaches based on linear forward models and to BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.

  18. LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.

    PubMed

    Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu

    2005-01-01

    Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
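    The final log-linear combination step is simple to illustrate. A minimal sketch follows; the knowledge-source names and weights are illustrative placeholders, not the workshop's tuned values.

```python
# Each lattice hypothesis receives a weighted sum of log scores from the
# different knowledge sources; hypotheses are then re-ranked by this value.
def combined_score(log_scores, weights):
    return sum(weights[k] * s for k, s in log_scores.items())

hyp = {"first_pass_lm": -42.1, "landmark_svm": -17.3, "pron_model": -5.2}
w   = {"first_pass_lm": 1.0, "landmark_svm": 0.6, "pron_model": 0.8}
print(combined_score(hyp, w))
```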

  19. Body size and hosts of Triatoma infestans populations affect the size of bloodmeal contents and female fecundity in rural northwestern Argentina

    PubMed Central

    Fernández, María del Pilar; Cecere, María Carla; Cohen, Joel E.

    2017-01-01

    Human sleeping quarters (domiciles) and chicken coops are key source habitats of Triatoma infestans—the principal vector of the infection that causes Chagas disease—in rural communities in northern Argentina. Here we investigated the links among individual bug bloodmeal contents (BMC, mg), female fecundity, body length (L, mm), host blood sources and habitats. We tested whether L, habitat and host blood conferred relative fitness advantages using generalized linear mixed-effects models and a multimodel inference approach with model averaging. The data analyzed include 769 late-stage triatomines collected in 120 sites from six habitats in 87 houses in Figueroa, Santiago del Estero, during austral spring. L correlated positively with other body-size surrogates and was modified by habitat type, bug stage and recent feeding. Bugs from chicken coops were significantly larger than pig-corral and kitchen bugs. The best-fitting model of log BMC included habitat, a recent feeding, bug stage, log Lc (mean-centered log L) and all two-way interactions including log Lc. Human- and chicken-fed bugs had significantly larger BMC than bugs fed on other hosts whereas goat-fed bugs ranked last, consistent with average blood-feeding rates. Fecundity was maximal in chicken-fed bugs from chicken coops, submaximal in human- and pig-fed bugs, and minimal in goat-fed bugs. This study is the first to reveal the allometric effects of body-size surrogates on BMC and female fecundity in a large set of triatomine populations occupying multiple habitats, and discloses the links between body size, microsite temperatures and various fitness components that affect the risks of transmission of Trypanosoma cruzi. PMID:29211791
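    A minimal sketch of the kind of linear mixed-effects model described above, with collection site as a random effect, is shown below. The data are simulated and the column names are hypothetical; the authors' full analysis also included two-way interactions and model averaging.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "habitat": rng.choice(["domicile", "chicken_coop", "kitchen"], n),
    "fed_recently": rng.integers(0, 2, n),
    "log_Lc": rng.normal(0, 0.1, n),            # mean-centered log body length
    "site": rng.integers(0, 20, n).astype(str),  # collection site (random effect)
})
df["log_BMC"] = (1.5 + 0.8 * df["log_Lc"] + 0.3 * df["fed_recently"]
                 + rng.normal(0, 0.2, n))

# Random intercept per site; fixed effects for habitat, feeding, and size.
fit = smf.mixedlm("log_BMC ~ habitat + fed_recently + log_Lc", df,
                  groups=df["site"]).fit()
print(fit.summary())
```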

  20. Body size and hosts of Triatoma infestans populations affect the size of bloodmeal contents and female fecundity in rural northwestern Argentina.

    PubMed

    Gürtler, Ricardo E; Fernández, María Del Pilar; Cecere, María Carla; Cohen, Joel E

    2017-12-01

    Human sleeping quarters (domiciles) and chicken coops are key source habitats of Triatoma infestans—the principal vector of the infection that causes Chagas disease—in rural communities in northern Argentina. Here we investigated the links among individual bug bloodmeal contents (BMC, mg), female fecundity, body length (L, mm), host blood sources and habitats. We tested whether L, habitat and host blood conferred relative fitness advantages using generalized linear mixed-effects models and a multimodel inference approach with model averaging. The data analyzed include 769 late-stage triatomines collected in 120 sites from six habitats in 87 houses in Figueroa, Santiago del Estero, during austral spring. L correlated positively with other body-size surrogates and was modified by habitat type, bug stage and recent feeding. Bugs from chicken coops were significantly larger than pig-corral and kitchen bugs. The best-fitting model of log BMC included habitat, a recent feeding, bug stage, log Lc (mean-centered log L) and all two-way interactions including log Lc. Human- and chicken-fed bugs had significantly larger BMC than bugs fed on other hosts whereas goat-fed bugs ranked last, consistent with average blood-feeding rates. Fecundity was maximal in chicken-fed bugs from chicken coops, submaximal in human- and pig-fed bugs, and minimal in goat-fed bugs. This study is the first to reveal the allometric effects of body-size surrogates on BMC and female fecundity in a large set of triatomine populations occupying multiple habitats, and discloses the links between body size, microsite temperatures and various fitness components that affect the risks of transmission of Trypanosoma cruzi.

  1. Could LogP be a principal determinant of biological activity in 18-crown-6 ethers? Synthesis of biologically active adamantane-substituted diaza-crowns.

    PubMed

    Supek, Fran; Ramljak, Tatjana Šumanovac; Marjanović, Marko; Buljubašić, Maja; Kragol, Goran; Ilić, Nataša; Smuc, Tomislav; Zahradka, Davor; Mlinarić-Majerski, Kata; Kralj, Marijeta

    2011-08-01

    18-Crown-6 ethers are known to exert their biological activity by transporting K⁺ ions across cell membranes. Using nonlinear support vector machine regression, we searched for structural features that influence antiproliferative activity in a diverse set of 19 known oxa-, monoaza- and diaza-18-crown-6 ethers. Here, we show that the logP of the molecule is the most important molecular descriptor, among ∼1300 tested descriptors, in determining biological potency (cross-validated R² = 0.704). The optimal logP was at 5.5 (Ghose-Crippen ALOGP estimate), while both higher and lower values were detrimental to biological potency. After controlling for logP, we found that the antiproliferative activity of the molecule was generally not affected by side chain length, molecular symmetry, or presence of side chain amide links. To validate this QSAR model, we synthesized six novel, highly lipophilic diaza-18-crown-6 derivatives with adamantane moieties attached to the side arms. These compounds have near-optimal logP values and consequently exhibit strong growth inhibition in various human cancer cell lines and a bacterial system. The bioactivities of different diaza-18-crown-6 analogs in Bacillus subtilis and cancer cells were correlated, suggesting conserved molecular features may be mediating the cytotoxic response. We conclude that relying primarily on the logP is a sensible strategy in preparing future 18-crown-6 analogs with optimized biological activity. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
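    A rough sketch of the descriptor-based nonlinear SVM regression described above follows. The descriptor matrix and activities are randomly generated stand-ins for the ~1300 computed descriptors, so only the workflow, not the chemistry, is meaningful here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(19, 50))          # 19 crown ethers x 50 fake descriptors
logp = 1.5 * X[:, 0] + 5.5             # pretend descriptor 0 drives logP
y = -(logp - 5.5) ** 2 + rng.normal(scale=0.1, size=19)  # optimum near logP 5.5

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.score(X, y))  # in practice, use cross-validation (R²cv) instead
```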

  2. A Systematic Review and Meta-Regression Analysis of Lung Cancer Risk and Inorganic Arsenic in Drinking Water.

    PubMed

    Lamm, Steven H; Ferdosi, Hamid; Dissen, Elisabeth K; Li, Ji; Ahn, Jaeil

    2015-12-07

    High levels (> 200 µg/L) of inorganic arsenic in drinking water are known to be a cause of human lung cancer, but the evidence at lower levels is uncertain. We have sought the epidemiological studies that have examined the dose-response relationship between arsenic levels in drinking water and the risk of lung cancer over a range that includes both high and low levels of arsenic. Regression analysis, based on six studies identified from an electronic search, examined the relationship between the log of the relative risk and the log of the arsenic exposure over a range of 1-1000 µg/L. The best-fitting continuous meta-regression model was sought and found to be a no-constant linear-quadratic analysis where both the risk and the exposure had been logarithmically transformed. This yielded both a statistically significant positive coefficient for the quadratic term and a statistically significant negative coefficient for the linear term. Sub-analyses by study design yielded results that were similar for both ecological studies and non-ecological studies. Statistically significant X-intercepts consistently found no increased level of risk at approximately 100-150 µg/L arsenic.
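    The no-constant linear-quadratic form in log-log space is easy to reproduce. The sketch below uses invented dose-response numbers purely to show the fit; it is not the study's data.

```python
import numpy as np

dose = np.array([1.0, 10.0, 50.0, 100.0, 300.0, 1000.0])  # arsenic, µg/L
rr   = np.array([1.0, 0.95, 0.90, 1.00, 1.60, 4.00])      # relative risks

x = np.log(dose)
X = np.column_stack([x, x**2])                 # no intercept column
beta, *_ = np.linalg.lstsq(X, np.log(rr), rcond=None)
print(beta)  # pattern reported above: negative linear, positive quadratic term
```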

  3. A novel approach for characterizing broad-band radio spectral energy distributions

    NASA Astrophysics Data System (ADS)

    Harvey, V. M.; Franzen, T.; Morgan, J.; Seymour, N.

    2018-05-01

    We present a new broad-band radio frequency catalogue across 0.12 GHz ≤ ν ≤ 20 GHz created by combining data from the Murchison Widefield Array Commissioning Survey, the Australia Telescope 20 GHz survey, and the literature. Our catalogue consists of 1285 sources limited by S20 GHz > 40 mJy at 5σ, and contains flux density measurements (or estimates) and uncertainties at 0.074, 0.080, 0.119, 0.150, 0.180, 0.408, 0.843, 1.4, 4.8, 8.6, and 20 GHz. We fit a second-order polynomial in log-log space to the spectral energy distributions of all these sources in order to characterize their broad-band emission. For the 994 sources that are well described by a linear or quadratic model we present a new diagnostic plot arranging sources by the linear and curvature terms. We demonstrate the advantages of such a plot over the traditional radio colour-colour diagram. We also present astrophysical descriptions of the sources found in each segment of this new parameter space and discuss the utility of these plots in the upcoming era of large area, deep, broad-band radio surveys.
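    The second-order fit in log-log space reduces to a one-liner. The flux densities below are invented; np.polyfit returns the curvature, slope, and intercept of log S as a quadratic in log ν.

```python
import numpy as np

nu = np.array([0.15, 0.408, 0.843, 1.4, 4.8, 8.6, 20.0])  # GHz
s  = np.array([2.1, 1.4, 1.0, 0.8, 0.45, 0.33, 0.20])     # Jy (hypothetical)

c, b, a = np.polyfit(np.log10(nu), np.log10(s), deg=2)
print(f"curvature={c:.3f}, spectral slope={b:.3f}")
# Sources can then be arranged on a (slope, curvature) diagnostic plane.
```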

  4. A Systematic Review and Meta-Regression Analysis of Lung Cancer Risk and Inorganic Arsenic in Drinking Water

    PubMed Central

    Lamm, Steven H.; Ferdosi, Hamid; Dissen, Elisabeth K.; Li, Ji; Ahn, Jaeil

    2015-01-01

    High levels (> 200 µg/L) of inorganic arsenic in drinking water are known to be a cause of human lung cancer, but the evidence at lower levels is uncertain. We have sought the epidemiological studies that have examined the dose-response relationship between arsenic levels in drinking water and the risk of lung cancer over a range that includes both high and low levels of arsenic. Regression analysis, based on six studies identified from an electronic search, examined the relationship between the log of the relative risk and the log of the arsenic exposure over a range of 1–1000 µg/L. The best-fitting continuous meta-regression model was sought and found to be a no-constant linear-quadratic analysis where both the risk and the exposure had been logarithmically transformed. This yielded both a statistically significant positive coefficient for the quadratic term and a statistically significant negative coefficient for the linear term. Sub-analyses by study design yielded results that were similar for both ecological studies and non-ecological studies. Statistically significant X-intercepts consistently found no increased level of risk at approximately 100–150 µg/L arsenic. PMID:26690190

  5. Automating linear accelerator quality assurance.

    PubMed

    Eckhause, Tobias; Al-Hallaq, Hania; Ritter, Timothy; DeMarco, John; Farrey, Karl; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Perez, Mario; Park, SungYong; Booth, Jeremy T; Thorwarth, Ryan; Moran, Jean M

    2015-10-01

    The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac, including jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold-off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The standard deviation in MLC position, as determined by EPID measurements, across the consortium was 0.33 mm for IMRT fields. With respect to the log files, the deviations between expected and actual positions for parameters were small (<0.12 mm) for all Linacs. Considering both log files and EPID measurements, all parameters were well within published tolerance values. Variations in collimator angle, MLC position, and gantry sag were also evaluated for all Linacs. The performance of the TrueBeam Linac model was shown to be consistent based on automated analysis of trajectory log files and EPID images acquired during delivery of a standardized test suite. The results can be compared directly to tolerance thresholds. In addition, sharing of results from standard tests across institutions can facilitate the identification of QA process and Linac changes. These reference values are presented along with the standard deviation for common tests so that the test suite can be used by other centers to evaluate their Linac performance against those in this consortium.

  6. Assessment of microplastic-sorbed contaminant bioavailability through analysis of biomarker gene expression in larval zebrafish.

    PubMed

    Sleight, Victoria A; Bakir, Adil; Thompson, Richard C; Henry, Theodore B

    2017-03-15

    Microplastics (MPs) are prevalent in marine ecosystems. Because toxicants (termed here "co-contaminants") can sorb to MPs, there is potential for MPs to alter co-contaminant bioavailability. Our objective was to demonstrate sorption of two co-contaminants with different physicochemistries [phenanthrene (Phe), log10 Kow = 4.57; and 17α-ethinylestradiol (EE2), log10 Kow = 3.67] to MPs, and to assess whether co-contaminant bioavailability was increased after MP settlement. Bioavailability was indicated by gene expression in larval zebrafish. Both Phe and EE2 sorbed to MPs, which reduced bioavailability by a maximum of 33% and 48%, respectively. Sorption occurred, but was not consistent with predictions based on co-contaminant physicochemistry (Phe, having the higher log10 Kow, was expected to show higher sorption). Contaminated MPs that settled to the bottom of the exposures did not lead to increased bioavailability of Phe or EE2. Phe was 48% more bioavailable than predicted by a linear sorption model; organism-based measurements therefore contribute unique insight into MP co-contaminant bioavailability. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Prediction of S-wave velocity using complete ensemble empirical mode decomposition and neural networks

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2017-04-01

    One of the key elements in hydrocarbon reservoir characterization is the S-wave velocity (Vs). Since traditional estimation methods often fail to accurately predict this physical parameter, a new approach that accounts for its non-stationary and non-linear properties is needed. To this end, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multiple layer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from the P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data are decomposed into a high-frequency (HF) component, a low-frequency (LF) component and a trend component. Different combinations of these components are then used as inputs to the MLP ANN algorithm for estimating the Vs log. Applications to well logs from different geological settings illustrate that the Vs values predicted using the MLP ANN with the HF, LF and trend components as inputs are more accurate than those obtained with traditional estimation methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.
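    A rough sketch of this decomposition-then-regression workflow is shown below, assuming the PyEMD package (pip install EMD-signal) for the CEEMDAN decomposition. The synthetic Vp/Vs logs and the simple split of IMFs into HF, LF, and trend components are stand-ins for the paper's fine-to-coarse reconstruction.

```python
import numpy as np
from PyEMD import CEEMDAN
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
vp = np.cumsum(rng.normal(size=500)) + 3000.0   # fake Vp log (m/s)
vs = 0.55 * vp + 100.0                          # fake Vs log to recover

imfs = CEEMDAN()(vp)                            # ensemble EMD of the Vp log
hf, lf, trend = imfs[0], imfs[1:-1].sum(axis=0), imfs[-1]
X = np.column_stack([hf, lf, trend])            # decomposed inputs

mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X, vs)
print(mlp.score(X, vs))                         # in-sample fit of the sketch
```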

  8. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws

    USGS Publications Warehouse

    Xiao, X.; White, E.P.; Hooten, M.B.; Durham, S.L.

    2011-01-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain. © 2011 by the Ecological Society of America.
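    The core comparison can be reproduced in a few lines. The sketch below fits y = a·xᵇ both by LR on logs and by NLR, under multiplicative lognormal error, the regime in which LR should do better; parameter values and sample size are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
a, b = 2.0, 0.75
x = rng.uniform(1, 100, 200)
y = a * x**b * rng.lognormal(0.0, 0.3, 200)   # multiplicative, lognormal error

# (1) linear regression on log-transformed data
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
# (2) nonlinear regression on the original scale
(a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x**b, x, y, p0=(1.0, 1.0))

print("LR :", np.exp(intercept), slope)
print("NLR:", a_nlr, b_nlr)
```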

  9. Fractal attractors in economic growth models with random pollution externalities

    NASA Astrophysics Data System (ADS)

    La Torre, Davide; Marsiglio, Simone; Privileggi, Fabio

    2018-05-01

    We analyze a discrete time two-sector economic growth model where the production technologies in the final and human capital sectors are affected by random shocks both directly (via productivity and factor shares) and indirectly (via a pollution externality). We determine the optimal dynamics in the decentralized economy and show how these dynamics can be described in terms of a two-dimensional affine iterated function system with probability. This allows us to identify a suitable parameter configuration capable of generating exactly the classical Barnsley fern as the attractor of the log-linearized optimal dynamical system.
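    For reference, the classical Barnsley fern is itself a two-dimensional affine iterated function system with probabilities; the sketch below iterates it with Barnsley's standard coefficients (the economic model's parameterization is a separate matter).

```python
import numpy as np

maps = [  # (A, b, probability) for x_{t+1} = A x_t + b
    (np.array([[0.00, 0.00], [0.00, 0.16]]), np.array([0.0, 0.00]), 0.01),
    (np.array([[0.85, 0.04], [-0.04, 0.85]]), np.array([0.0, 1.60]), 0.85),
    (np.array([[0.20, -0.26], [0.23, 0.22]]), np.array([0.0, 1.60]), 0.07),
    (np.array([[-0.15, 0.28], [0.26, 0.24]]), np.array([0.0, 0.44]), 0.07),
]
probs = [m[2] for m in maps]
rng = np.random.default_rng(0)

pts, x = [], np.zeros(2)
for _ in range(50_000):
    A, b, _ = maps[rng.choice(4, p=probs)]   # pick a map with its probability
    x = A @ x + b                            # apply the affine map
    pts.append(x)
# Plotting the points in `pts` reveals the fern-shaped attractor.
```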

  10. Growth kinetics of Staphylococcus aureus on Brie and Camembert cheeses.

    PubMed

    Lee, Heeyoung; Kim, Kyungmi; Lee, Soomin; Han, Minkyung; Yoon, Yohan

    2014-05-01

    In this study, we developed mathematical models to describe the growth kinetics of Staphylococcus aureus on natural cheeses. A five-strain mixture of Staph. aureus was inoculated onto 15 g of Brie and Camembert cheeses at 4 log CFU/g. The samples were then stored at 4, 10, 15, 25, and 30 °C for 2-60 d, with a different storage time being used for each temperature. Total bacterial and Staph. aureus cells were enumerated on tryptic soy agar and mannitol salt agar, respectively. The Baranyi model was fitted to the growth data of Staph. aureus to calculate kinetic parameters such as the maximum growth rate in log CFU units (r_max; log CFU/g/h) and the lag phase duration (λ; h). The effects of temperature on the square root of r_max and on the natural logarithm of λ were modelled in the second stage (secondary model). Independent experimental data (observed data) were compared with predictions, and the respective root mean square error (RMSE) was compared with the RMSE of the fit on the original data as a measure of model performance. Growth of total bacteria was observed at 10, 15, 25, and 30 °C on both cheeses. The r_max values increased with storage temperature (P<0·05), but a significant effect of storage temperature on λ values was only observed between 4 and 15 °C (P<0·05). The square root model and a linear equation were found to be appropriate for describing the effect of storage temperature on growth kinetics (R² = 0·894-0·983). Our results indicate that the models developed in this study should be useful for describing the growth kinetics of Staph. aureus on Brie and Camembert cheeses.
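    The secondary-model step is easy to sketch: a square-root (Ratkowsky-type) model relates the maximum growth rate to temperature as sqrt(r_max) = b·(T − T0). The data points below are invented, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

T     = np.array([10.0, 15.0, 25.0, 30.0])   # storage temperature, °C
r_max = np.array([0.02, 0.06, 0.25, 0.40])   # log CFU/g/h (hypothetical)

sqrt_model = lambda T, b, T0: b * (T - T0)
(b, T0), _ = curve_fit(sqrt_model, T, np.sqrt(r_max), p0=(0.02, 0.0))
print(f"b={b:.4f}, T0={T0:.1f} °C (notional minimum growth temperature)")
```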

  11. Statistical Models for the Analysis and Design of Digital Polymerase Chain Reaction (dPCR) Experiments.

    PubMed

    Dorazio, Robert M; Hunter, Margaret E

    2015-11-03

    Statistical methods for the analysis and design of experiments using digital PCR (dPCR) have received only limited attention and have been misused in many instances. To address this issue and to provide a more general approach to the analysis of dPCR data, we describe a class of statistical models for the analysis and design of experiments that require quantification of nucleic acids. These models are mathematically equivalent to generalized linear models of binomial responses that include a complementary log-log link function and an offset that depends on the dPCR partition volume. These models are both versatile and easy to fit using conventional statistical software. Covariates can be used to specify different sources of variation in nucleic acid concentration, and a model's parameters can be used to quantify the effects of these covariates. For purposes of illustration, we analyzed dPCR data from different types of experiments, including serial dilution, evaluation of copy number variation, and quantification of gene expression. We also showed how these models can be used to help design dPCR experiments, as in selection of sample sizes needed to achieve desired levels of precision in estimates of nucleic acid concentration or to detect differences in concentration among treatments with prescribed levels of statistical power.
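    This model class fits readily in conventional software. Below is a minimal sketch for a serial-dilution experiment, assuming recent statsmodels versions (where the link class is sm.families.links.CLogLog); the counts, dilution factors, and partition volume are illustrative.

```python
import numpy as np
import statsmodels.api as sm

positives = np.array([520, 260, 130, 66])      # positive partitions
total     = np.array([20000] * 4)              # partitions per reaction
dilution  = np.array([1.0, 0.5, 0.25, 0.125])  # serial dilution factors
volume_ul = 0.00085                            # partition volume (µL)

# Binomial GLM with complementary log-log link and offset log(partition volume):
# cloglog(p) = log(lambda) + log(V), so the intercept estimates log concentration.
y = np.column_stack([positives, total - positives])
X = sm.add_constant(np.log(dilution))
offset = np.full(4, np.log(volume_ul))

fit = sm.GLM(y, X,
             family=sm.families.Binomial(link=sm.families.links.CLogLog()),
             offset=offset).fit()
print(fit.params)  # exp(intercept) ~ copies per µL at dilution 1; slope ~ 1
```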

  12. Analyzing chromatographic data using multilevel modeling.

    PubMed

    Wiczling, Paweł

    2018-06-01

    It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract: Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P (CI, credible interval; PSA, polar surface area).

  13. High-performance computing on GPUs for resistivity logging of oil and gas wells

    NASA Astrophysics Data System (ADS)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving the system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.

  14. Assessing Competencies Needed to Engage With Digital Health Services: Development of the eHealth Literacy Assessment Toolkit.

    PubMed

    Karnoe, Astrid; Furstrand, Dorthe; Christensen, Karl Bang; Norgaard, Ole; Kayser, Lars

    2018-05-10

    To achieve full potential in user-oriented eHealth projects, we need to ensure a match between the eHealth technology and the user's eHealth literacy, described as knowledge and skills. However, there is a lack of multifaceted eHealth literacy assessment tools suitable for screening purposes. The objective of our study was to develop and validate an eHealth literacy assessment toolkit (eHLA) that assesses individuals' health literacy and digital literacy using a mix of existing and newly developed scales. From 2011 to 2015, scales were continuously tested and developed in an iterative process, which led to 7 tools being included in the validation study. The eHLA validation version consisted of 4 health-related tools (tool 1: "functional health literacy," tool 2: "health literacy self-assessment," tool 3: "familiarity with health and health care," and tool 4: "knowledge of health and disease") and 3 digitally related tools (tool 5: "technology familiarity," tool 6: "technology confidence," and tool 7: "incentives for engaging with technology") that were tested in 475 respondents from a general population sample and an outpatient clinic. Statistical analyses examined floor and ceiling effects, interitem correlations, item-total correlations, and the Cronbach coefficient alpha (CCA). Rasch models (RM) were used to examine the fit of the data. Tools were shortened to yield robust instruments fit for screening purposes. Reductions were made based on psychometrics, face validity, and content validity. Tool 1 was not shortened; it consists of 10 items. The overall fit to the RM was acceptable (Anderson conditional likelihood ratio, CLR=10.8; df=9; P=.29), and CCA was .67. Tool 2 was reduced from 20 to 9 items. The overall fit to a log-linear RM was acceptable (Anderson CLR=78.4, df=45, P=.002), and CCA was .85. Tool 3 was reduced from 23 to 5 items. The final version showed excellent fit to a log-linear RM (Anderson CLR=47.7, df=40, P=.19), and CCA was .90. Tool 4 was reduced from 12 to 6 items. The fit to a log-linear RM was acceptable (Anderson CLR=42.1, df=18, P=.001), and CCA was .59. Tool 5 was reduced from 20 to 6 items. The fit to the RM was acceptable (Anderson CLR=30.3, df=17, P=.02), and CCA was .94. Tool 6 was reduced from 5 to 4 items. The fit to a log-linear RM taking local dependency (LD) into account was acceptable (Anderson CLR=26.1, df=21, P=.20), and CCA was .91. Tool 7 was reduced from 6 to 4 items. The fit to a log-linear RM taking LD and differential item functioning into account was acceptable (Anderson CLR=23.0, df=29, P=.78), and CCA was .90. The eHLA consists of 7 short, robust scales that assess individuals' knowledge and skills related to digital literacy and health literacy. ©Astrid Karnoe, Dorthe Furstrand, Karl Bang Christensen, Ole Norgaard, Lars Kayser. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.05.2018.

  15. A new concept for the environmental risk assessment of poorly water soluble compounds and its application to consumer products.

    PubMed

    Tolls, Johannes; Müller, Martin; Willing, Andreas; Steber, Josef

    2009-07-01

    Many consumer products contain lipophilic, poorly soluble ingredients representing large-volume substances whose aquatic toxicity cannot be adequately determined with standard methods for a number of reasons. In such cases, a recently developed approach can be used to define an aquatic exposure threshold of no concern (ETNCaq; i.e., a concentration below which no adverse effects on the environment are to be expected). A risk assessment can be performed by comparing the ETNCaq value with the aquatic exposure levels of poorly soluble substances. Accordingly, the aquatic exposure levels of substances with water solubility below the ETNCaq will not exceed the ecotoxicological no-effect concentration; therefore, their risk can be assessed as being negligible. The ETNCaq value relevant for substances with a narcotic mode of action is 1.9 µg/L. To apply the above risk assessment strategy, the solubility in water needs to be known. Most frequently, this parameter is estimated by means of quantitative structure/activity relationships based on the log octanol-water partition coefficient (log Kow). The predictive value of several calculation models for water solubility was investigated using recent experimental solubility data for lipophilic compounds. A linear regression model was shown to be the most suitable for providing correct predictions without underestimation of real water solubility. To define a log Kow threshold suitable for reliably predicting a water solubility of less than 1.9 µg/L, a confidence limit was established by statistical comparison of the experimental solubility data with their log Kow. It was found that a threshold of log Kow = 7 generally allows discrimination between substances with solubility greater than and less than 1.9 µg/L. Accordingly, organic substances with baseline toxicity and log Kow > 7 do not require further testing to prove that they have low environmental risk. In applying this concept, the uncertainty of the prediction of water solubility can be accounted for: if the predicted solubility in water is to be below the ETNCaq with a probability of 95%, the corresponding log Kow value is 8.

  16. Seeing through the Canopy: Relationship between Coarse Woody Debris and Forest Structure measured by Airborne Lidar in the Brazilian Amazon

    NASA Astrophysics Data System (ADS)

    Scaranello, M. A., Sr.; Keller, M. M.; dos-Santos, M. N.; Longo, M.; Pinagé, E. R.; Leitold, V.

    2016-12-01

    Coarse woody debris is an important but infrequently quantified carbon pool in tropical forests. Based on studies at 12 sites spread across the Brazilian Amazon, we quantified coarse woody debris stocks in intact forests and in forests affected by different intensities of degradation from logging and/or fire. Measurements were made in situ and, for the first time, field measurements of coarse woody debris were related to structural metrics derived from airborne lidar. Using the line-intercept method, we established 84 transects for sampling fallen coarse woody debris and associated inventory plots for sampling standing dead wood in intact, conventional logging, reduced impact logging, burned, and burned after logging forests. Overall mean and standard deviation of total coarse woody debris were 50.0 Mg ha-1 and 26.4 Mg ha-1, respectively. Forest degradation increased coarse woody debris stocks compared to intact forests by a factor of 1.7 in reduced impact logging forests and up to 3-fold in burned forests, in a side-by-side comparison of nearby areas. The ratio between coarse woody debris and biomass increased linearly with the number of degradation events (R²: 0.67, p<0.01). Individual lidar-derived structural variables correlated strongly with coarse woody debris in intact and reduced impact logging forests: the 5th percentile of last returns in intact forests (R²: 0.78, p<0.01) and forest gap area, mapped using a lidar-derived canopy height model, in reduced impact logging forests (R²: 0.63, p<0.01). Individual gap area also played a weak but significant role in determining coarse woody debris in burned forests (R²: 0.21, p<0.05), but with a contrasting trend. Both degradation-specific and general multiple models using lidar-derived variables were good predictors of coarse woody debris stocks at different degradation levels in the Brazilian Amazon. The strong relation of coarse woody debris to lidar-derived structural variables suggests an approach for quantifying this infrequently measured pool over large areas.

  17. An hourly PM10 diagnosis model for the Bilbao metropolitan area using a linear regression methodology.

    PubMed

    González-Aparicio, I; Hidalgo, J; Baklanov, A; Padró, A; Santa-Coloma, O

    2013-07-01

    There is extensive evidence of the negative health impacts linked to rising regional background levels of particulate matter (PM10). These levels are often elevated over urban areas, making PM10 one of the main air pollution concerns. This is the case in the Bilbao metropolitan area, Spain. This study describes a data-driven model to diagnose PM10 levels in Bilbao at hourly intervals. The model is built with a training period of 7 years of historical data covering different urban environments (inland, city centre, and coastal sites). The explanatory variables are quantitative (log[NO2], temperature, short-wave incoming radiation, wind speed and direction, specific humidity, hour, and vehicle intensity) and qualitative (working day/weekend, season (winter/summer), hour (from 00 to 23 UTC), and precipitation/no precipitation). Three different linear regression models are compared: simple linear regression; linear regression with interaction terms (INT); and linear regression with interaction terms selected following Sawa's Bayesian information criterion (INT-BIC). Each model is fitted on a training dataset (6 years) and evaluated on a testing dataset (1 year). The results show that the INT-BIC-based model (R² = 0.42) is the best. Correlations (R) were 0.65, 0.63, and 0.60 for the city centre, inland, and coastal sites, respectively, a level of confidence similar to state-of-the-art methodology. The associated error diminished significantly when calculated over longer time intervals (R of 0.75-0.80 for monthly means and 0.80-0.98 for seasonal means) with respect to shorter periods.
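    The comparison of a plain linear model against one with interaction terms can be sketched as follows. Note the caveat: statsmodels reports the standard (Schwarz) BIC, not Sawa's BIC used in the study, and the variable names and simulated data are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "log_no2": rng.normal(3.0, 0.4, n),
    "wind":    rng.gamma(2.0, 1.5, n),
    "weekend": rng.integers(0, 2, n),
})
df["pm10"] = (8 + 6 * df.log_no2 - 1.2 * df.wind
              - 0.8 * df.weekend * df.log_no2 + rng.normal(0, 2, n))

slr = smf.ols("pm10 ~ log_no2 + wind + weekend", df).fit()
# "(a + b + c) ** 2" expands to all main effects plus two-way interactions.
int_ = smf.ols("pm10 ~ (log_no2 + wind + weekend) ** 2", df).fit()
print(f"SLR BIC={slr.bic:.1f}  INT BIC={int_.bic:.1f}")  # lower is better
```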

  18. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  19. New World Vistas: New Models of Computation Lattice Based Quantum Computation

    DTIC Science & Technology

    1996-07-25

    Digital devices (ENIAC, 18,000 vacuum tubes; UNIVAC II, core memory; magnetostrictive delay lines; the Intel 1103 integrated circuit; the IBM 3340 disk) ... in areal size of a bit for the last fifty years since the 1946 ENIAC computer. Planned Research: I propose to consider the feasibility of implementing ... technology. Figure 1 is a log-linear plot of data for the areal size of a bit over the last fifty years (from 18,000 bits in the 1946 ENIAC computer).

  20. Combined Log Inventory and Process Simulation Models for the Planning and Control of Sawmill Operations

    Treesearch

    Guillermo A. Mendoza; Roger J. Meimban; Philip A. Araman; William G. Luppold

    1991-01-01

    A log inventory model and a real-time hardwood process simulation model were developed and combined into an integrated production planning and control system for hardwood sawmills. The log inventory model was designed to monitor and periodically update the status of the logs in the log yard. The process simulation model was designed to estimate various sawmill...

  1. Cortical circuitry implementing graphical models.

    PubMed

    Litvak, Shai; Ullman, Shimon

    2009-11-01

    In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
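    The sum-and-maximization regime in the log domain is ordinary max-sum message passing. The toy example below runs one message on a two-variable chain; all potentials are illustrative, and the neural implementation above approximates exactly this computation.

```python
import numpy as np

log_phi_x  = np.log(np.array([0.6, 0.4]))   # unary log potential on x
log_phi_y  = np.log(np.array([0.3, 0.7]))   # unary log potential on y
log_psi_xy = np.log(np.array([[0.8, 0.2],   # pairwise log potential psi(x, y)
                              [0.1, 0.9]]))

# Message from x to y: maximize over x of (log phi_x + log psi), i.e. max-sum.
m_xy = np.max(log_phi_x[:, None] + log_psi_xy, axis=0)
belief_y = log_phi_y + m_xy
print("MAP state of y:", np.argmax(belief_y))
```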

  2. A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt

    NASA Astrophysics Data System (ADS)

    Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.

    2014-12-01

    The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods relying on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but suited to the unique cases of non-Gaussian model and measurement errors and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We describe the algorithm, the method of generating ensemble members, and our choice of minimizing differences in instrument counts rather than in phase space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We also report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.

  3. Developing and applying metamodels of high resolution ...

    EPA Pesticide Factsheets

    As defined by Wikipedia (https://en.wikipedia.org/wiki/Metamodeling), “(a) metamodel or surrogate model is a model of a model, and metamodeling is the process of generating such metamodels.” The goals of metamodeling include, but are not limited to, (1) developing functional or statistical relationships between a model’s input and output variables for model analysis, interpretation, or information consumption by users’ clients; (2) quantifying a model’s sensitivity to alternative or uncertain forcing functions, initial conditions, or parameters; and (3) characterizing the model’s response or state space. Using five existing models developed by the US Environmental Protection Agency, we generate a metamodeling database of the expected environmental and biological concentrations of 644 organic chemicals released into nine US rivers from wastewater treatment works (WTWs) assuming multiple loading rates and sizes of populations serviced. The chemicals of interest have log n-octanol/water partition coefficients (log KOW) ranging from 3 to 14, and the rivers of concern have mean annual discharges ranging from 1.09 to 3240 m3/s. Log-linear regression models are derived to predict mean annual dissolved and total water concentrations and total sediment concentrations of chemicals of concern based on their log KOW, Henry’s Law constant, and WTW loading rate and on the mean annual discharges of the receiving rivers. Metamodels are also derived to predict mean annual chemical

  4. Temporal Decay in Timber Species Composition and Value in Amazonian Logging Concessions.

    PubMed

    Richardson, Vanessa A; Peres, Carlos A

    2016-01-01

    Throughout human history, slow-renewal biological resource populations have been predictably overexploited, often to the point of economic extinction. We assess whether and how this has occurred with timber resources in the Brazilian Amazon. The asynchronous advance of industrial-scale logging frontiers has left regional-scale forest landscapes with varying histories of logging. Initial harvests in unlogged forests can be highly selective, targeting slow-growing, high-grade, shade-tolerant hardwood species, while later harvests tend to focus on fast-growing, light-wooded, long-lived pioneer trees. Brazil accounts for 85% of all native neotropical forest roundlog production, and the State of Pará for almost half of all timber production in Brazilian Amazonia, the largest old-growth tropical timber reserve controlled by any country. Yet the degree to which timber harvests beyond the first-cut can be financially profitable or demographically sustainable remains poorly understood. Here, we use data on legally planned logging of ~17.3 million cubic meters of timber across 314 species extracted from 824 authorized harvest areas in private and community-owned forests, 446 of which reported volumetric composition data by timber species. We document patterns of timber extraction by volume, species composition, and monetary value along aging eastern Amazonian logging frontiers, which are then explained on the basis of historical and environmental variables. Generalized linear models indicate that relatively recent logging operations farthest from heavy-traffic roads are the most selective, concentrating gross revenues on few high-value species. We find no evidence that the post-logging timber species composition and total value of forest stands recovers beyond the first-cut, suggesting that the commercially most valuable timber species become predictably rare or economically extinct in old logging frontiers. In avoiding even more destructive land-use patterns, managing yields of selectively-logged forests is crucial for the long-term integrity of forest biodiversity and financial viability of local industries. The logging history of eastern Amazonian old-growth forests likely mirrors unsustainable patterns of timber depletion over time in Brazil and other tropical countries.

  5. Temporal Decay in Timber Species Composition and Value in Amazonian Logging Concessions

    PubMed Central

    Peres, Carlos A.

    2016-01-01

    Throughout human history, slow-renewal biological resource populations have been predictably overexploited, often to the point of economic extinction. We assess whether and how this has occurred with timber resources in the Brazilian Amazon. The asynchronous advance of industrial-scale logging frontiers has left regional-scale forest landscapes with varying histories of logging. Initial harvests in unlogged forests can be highly selective, targeting slow-growing, high-grade, shade-tolerant hardwood species, while later harvests tend to focus on fast-growing, light-wooded, long-lived pioneer trees. Brazil accounts for 85% of all native neotropical forest roundlog production, and the State of Pará for almost half of all timber production in Brazilian Amazonia, the largest old-growth tropical timber reserve controlled by any country. Yet the degree to which timber harvests beyond the first-cut can be financially profitable or demographically sustainable remains poorly understood. Here, we use data on legally planned logging of ~17.3 million cubic meters of timber across 314 species extracted from 824 authorized harvest areas in private and community-owned forests, 446 of which reported volumetric composition data by timber species. We document patterns of timber extraction by volume, species composition, and monetary value along aging eastern Amazonian logging frontiers, which are then explained on the basis of historical and environmental variables. Generalized linear models indicate that relatively recent logging operations farthest from heavy-traffic roads are the most selective, concentrating gross revenues on few high-value species. We find no evidence that the post-logging timber species composition and total value of forest stands recovers beyond the first-cut, suggesting that the commercially most valuable timber species become predictably rare or economically extinct in old logging frontiers. In avoiding even more destructive land-use patterns, managing yields of selectively-logged forests is crucial for the long-term integrity of forest biodiversity and financial viability of local industries. The logging history of eastern Amazonian old-growth forests likely mirrors unsustainable patterns of timber depletion over time in Brazil and other tropical countries. PMID:27410029

  6. Disentangling road network impacts: The need for a holistic approach

    USDA-ARS?s Scientific Manuscript database

    Traditional and alternative energy development, logging and mining activities, together with off-highway vehicles (OHV) and exurban development, have increased the density of linear disturbances on public and private lands throughout the world. We argue that the dramatic increase in linear disturba...

  7. Probing star formation relations of mergers and normal galaxies across the CO ladder

    NASA Astrophysics Data System (ADS)

    Greve, Thomas R.

    We examine integrated luminosity relations between the IR continuum and the CO rotational ladder observed for local (ultra-)luminous infrared galaxies ((U)LIRGs, L_IR ≥ 10^11 L_⊙) and normal star-forming galaxies in the context of the radiation pressure regulated star formation proposed by Andrews & Thompson (2011). This can account for the normalization and linear slopes of the luminosity relations (log L_IR = α log L'_CO + β) of both low- and high-J CO lines observed for normal galaxies. Super-linear slopes occur for galaxy samples with significantly different dense gas fractions. Local (U)LIRGs are observed to have sub-linear high-J (J_up > 6) slopes or, equivalently, increasing L'_CO(high-J)/L_IR with L_IR. In the extreme ISM conditions of local (U)LIRGs, the high-J CO lines no longer trace individual hot spots of star formation (which gave rise to the linear slopes for normal galaxies) but a more widespread warm and dense gas phase mechanically heated by powerful supernovae-driven turbulence and shocks.

  8. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
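    The point is easy to demonstrate by simulation. In the sketch below (illustrative values only), the raw response is strongly bimodal and fails a normality test, while the residuals of the corresponding one-factor linear model are well behaved.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group = np.repeat([0, 1], 500)             # two-level factor (as in a t test)
y = 10 * group + rng.normal(0, 1, 1000)    # bimodal raw response

# Residuals of the group-means model (equivalent to one-way ANOVA residuals):
resid = y - np.array([y[group == g].mean() for g in group])

print("raw data  normality p =", stats.shapiro(y)[1])      # tiny: rejected
print("residuals normality p =", stats.shapiro(resid)[1])  # large: fine
```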

  9. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit and can be used to check whether the given data set qualifies for linear regression in the first place.

  10. Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.

    PubMed

    Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe

    2013-04-01

    Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model based on the finite-difference time-domain solving of the linearized Euler equations in quantitatively reproducing the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study where weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuations strengths. Hence, this model captures these many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.

  11. Relationship between neighbourhood socioeconomic position and neighbourhood public green space availability: An environmental inequality analysis in a large German city applying generalized linear models.

    PubMed

    Schüle, Steffen Andreas; Gabriel, Katharina M A; Bolte, Gabriele

    2017-06-01

    The environmental justice framework states that, besides environmental burdens, resources may also be socially unequally distributed, both at the individual and at the neighbourhood level. This ecological study investigated whether neighbourhood socioeconomic position (SEP) was associated with neighbourhood public green space availability in a large German city with more than 1 million inhabitants. Two different measures were defined for green space availability. Firstly, the percentage of green space within neighbourhoods was calculated, additionally considering various buffers around the boundaries. Secondly, the percentage of green space was calculated based on various radii around the neighbourhood centroid. An index of neighbourhood SEP was calculated with principal component analysis. Log-gamma regression from the group of generalized linear models was applied in order to account for the non-normal distribution of the response variable. All models were adjusted for population density. Low neighbourhood SEP was associated with decreasing neighbourhood green space availability for 200 m up to 1000 m buffers around the neighbourhood boundaries. Low neighbourhood SEP was also associated with decreasing green space availability based on catchment areas measured from neighbourhood centroids with different radii (1000 m up to 3000 m). With increasing radius, the strength of the associations decreased. Socially unequally distributed green space may amplify environmental health inequalities in an urban context. Thus, the identification of vulnerable neighbourhoods and population groups plays an important role for epidemiological research and healthy city planning. As a methodological aspect, log-gamma regression offers an adequate parametric modelling strategy for positively distributed environmental variables. Copyright © 2017 Elsevier GmbH. All rights reserved.
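    A minimal sketch of this modelling strategy, a gamma-family GLM with a log link adjusted for population density, is shown below; the data are simulated placeholders, not the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
sep_index = rng.normal(0, 1, n)               # neighbourhood SEP (PCA index)
pop_dens  = rng.gamma(3.0, 1.0, n)            # population density covariate
mu = np.exp(2.0 + 0.25 * sep_index - 0.05 * pop_dens)
green = rng.gamma(shape=5.0, scale=mu / 5.0)  # positive, gamma response

X = sm.add_constant(np.column_stack([sep_index, pop_dens]))
fit = sm.GLM(green, X,
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.params)  # coefficients act multiplicatively on the log scale
```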

  12. Patterns of out-of-home placement decision-making in child welfare.

    PubMed

    Chor, Ka Ho Brian; McClelland, Gary M; Weiner, Dana A; Jordan, Neil; Lyons, John S

    2013-10-01

    Out-of-home placement decision-making in child welfare is founded on the best interest of the child in the least restrictive setting. After a child is removed from home, however, little is known about the mechanism of placement decision-making. This study aims to systematically examine the patterns of out-of-home placement decisions made in a state's child welfare system by comparing two models of placement decision-making: a multidisciplinary team decision-making model and a clinically based decision support algorithm. Based on records of 7816 placement decisions representing 6096 children over a 4-year period, hierarchical log-linear modeling characterized concordance (agreement) and discordance (disagreement) between the two models, accounting for age-appropriate placement options. Children aged below 16 had an overall concordance rate of 55.7%, most apparent in the least restrictive (20.4%) and the most restrictive placement (18.4%). Older youth showed greater discordant distributions (62.9%). Log-linear analysis confirmed the overall robustness of concordance (odds ratios [ORs] range: 2.9-442.0), though discordance was most evident in small deviations from the decision support algorithm, such as one-level under-placement in group home (OR=5.3) and one-level over-placement in residential treatment center (OR=4.8). Concordance should be further explored using child-level clinical and placement stability outcomes. Discordance might be explained by dynamic factors such as availability of placements, caregiver preferences, or policy changes, and could be justified by positive child-level outcomes. Empirical placement decision-making is critical to a child's journey in child welfare and should be continuously improved to effect positive child welfare outcomes. Copyright © 2013 Elsevier Ltd. All rights reserved.
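    A log-linear analysis of this kind can be sketched as a Poisson GLM on the cells of the team-by-algorithm contingency table; under the independence model, large positive Pearson residuals on the diagonal indicate concordance. The placement categories and counts below are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

cells = pd.DataFrame({
    "team":      ["home", "home", "home", "group", "group", "group",
                  "rtc", "rtc", "rtc"],
    "algorithm": ["home", "group", "rtc"] * 3,
    "count":     [120, 30, 10, 25, 90, 35, 8, 40, 110],
})

# Independence log-linear model: log(mu) = team effects + algorithm effects.
indep = smf.glm("count ~ team + algorithm", data=cells,
                family=sm.families.Poisson()).fit()
print(np.asarray(indep.resid_pearson).reshape(3, 3).round(1))
```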

  13. Adsorption of pharmaceuticals onto trimethylsilylated mesoporous SBA-15.

    PubMed

    Bui, Tung Xuan; Pham, Viet Hung; Le, Son Thanh; Choi, Heechul

    2013-06-15

    The adsorption of a complex mixture of 12 selected pharmaceuticals to trimethylsilylated mesoporous SBA-15 (TMS-SBA-15) has been investigated by batch adsorption experiments. The adsorption of pharmaceuticals to TMS-SBA-15 was highly dependent on the solution pH and on pharmaceutical properties (i.e., hydrophobicity (log Kow) and acidity (pKa)). Good log-log linear relationships between the adsorption coefficients (Kd) and pH-dependent octanol-water coefficients (Kow(pH)) were established for the neutral, anionic, and cationic compounds, suggesting hydrophobic interaction as a primary driving force in the adsorption. In addition, the neutral species of each compound accounted for a major contribution to the overall compound adsorption onto TMS-SBA-15. The adsorption kinetics of the pharmaceuticals was evaluated by the nonlinear first-order and pseudo-second-order models. The first-order model gave a better fit for five pharmaceuticals with lower adsorption capacity, whereas the pseudo-second-order model fitted better for seven pharmaceuticals having higher adsorption capacity. Within the same property group, pharmaceuticals having higher adsorption capacity exhibited faster adsorption rates. The rate-limiting steps for adsorption of pharmaceuticals onto TMS-SBA-15 are boundary layer diffusion and intraparticle diffusion, including diffusion in mesopores and micropores. In addition, the adsorption of pharmaceuticals to TMS-SBA-15 was not influenced by the change of initial pharmaceutical concentration (10-100 μg L(-1)) or the presence of natural organic matter. Copyright © 2013 Elsevier B.V. All rights reserved.
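
    The nonlinear first-order and pseudo-second-order kinetic fits mentioned in this record can be reproduced with scipy; the uptake data below are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical uptake data: time (min) and adsorbed amount qt (μg/g).
        t = np.array([5, 10, 20, 40, 60, 120], dtype=float)
        qt = np.array([2.1, 3.4, 4.6, 5.5, 5.9, 6.2])

        def first_order(t, qe, k1):
            # qt = qe * (1 - exp(-k1 * t))
            return qe * (1.0 - np.exp(-k1 * t))

        def pseudo_second_order(t, qe, k2):
            # qt = k2 * qe^2 * t / (1 + k2 * qe * t)
            return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

        p1, _ = curve_fit(first_order, t, qt, p0=[6.0, 0.05])
        p2, _ = curve_fit(pseudo_second_order, t, qt, p0=[6.0, 0.01])
        print("first-order qe, k1:", p1)
        print("pseudo-second-order qe, k2:", p2)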

  14. Neuromorphic log-domain silicon synapse circuits obey Bernoulli dynamics: a unifying tutorial analysis

    PubMed Central

    Papadimitriou, Konstantinos I.; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M.

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology. PMID:25653579

  15. Neuromorphic log-domain silicon synapse circuits obey Bernoulli dynamics: a unifying tutorial analysis.

    PubMed

    Papadimitriou, Konstantinos I; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M

    2014-01-01

    The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology.

  16. Modeling and validating the grabbing forces of hydraulic log grapples used in forest operations

    Treesearch

    Jingxin Wang; Chris B. LeDoux; Lihai Wang

    2003-01-01

    The grabbing forces of log grapples were modeled and analyzed mathematically under operating conditions when grabbing logs from compact log piles and from bunch-like log piles. The grabbing forces are closely related to the structural parameters of the grapple, the weight of the grapple, and the weight of the log grabbed. An operational model grapple was designed and...

  17. Quantitative analysis of adsorptive interactions of ionic and neutral pharmaceuticals and other chemicals with the surface of Escherichia coli cells in aquatic environment.

    PubMed

    Cho, Chul-Woong; Park, Jeong-Soo; Zhao, Yufeng; Yun, Yeoung-Sang

    2017-08-01

    Since Escherichia coli is ubiquitous in nature and has been applied to biological, chemical, and environmental processes, a molecular-level understanding of adsorptive interactions between chemicals and the bacterial surface is of great importance. To characterise the adsorption properties of the surface of E. coli cells in an aquatic environment, the binding affinities (log Kd) of calibration compounds were experimentally measured; based on these values and numerically well-defined molecular interaction forces, i.e. linear free energy relationship (LFER) descriptors, a predictive model was developed. The examined substances comprise cations, anions, and neutral compounds, and the LFER descriptors used are excess molar refraction (E), dipolarity/polarisability (S), H-bonding acidity (A) and basicity (B), McGowan volume (V), and coulombic interactions of cations (J+) and anions (J-). Experimentally, adsorption of anions on the bacterial surface was not observed, while cations exhibited high affinities. Neutral compounds were adsorbed in low quantities, with affinities mostly lower than those of cations. In the LFER analysis, the cationic interaction term showed the best correlation (R2 = 0.691), and sequential additions of S, A, and V helped to increase the prediction accuracy. The LFER model (log Kd = -0.72 - 0.79S + 0.81A + 0.41V + 0.85J+) could predict log Kd with R2 = 0.871 and SE = 0.402 log units. To check the robustness and predictability of the model, it was internally validated by leave-one-out cross-validation (Q2LOO). The resulting Q2LOO value of 0.826 was above the standard of model acceptability (>0.5). Copyright © 2017 Elsevier Ltd. All rights reserved.
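
    The reported LFER equation can be applied directly to new compounds once their descriptors are known; the descriptor values in the call below are hypothetical.

        def lfer_log_kd(S, A, V, J_plus):
            """LFER model reported in the abstract:
            log Kd = -0.72 - 0.79*S + 0.81*A + 0.41*V + 0.85*J+
            (E, B, and J- were dropped during model building)."""
            return -0.72 - 0.79 * S + 0.81 * A + 0.41 * V + 0.85 * J_plus

        # Hypothetical descriptor values for a small organic cation.
        print(lfer_log_kd(S=1.2, A=0.3, V=1.1, J_plus=0.9))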

  18. Prenatal bisphenol A exposure and dysregulation of infant hypothalamic-pituitary-adrenal axis function: findings from the APrON cohort study.

    PubMed

    Giesbrecht, Gerald F; Ejaredar, Maede; Liu, Jiaying; Thomas, Jenna; Letourneau, Nicole; Campbell, Tavis; Martin, Jonathan W; Dewey, Deborah

    2017-05-19

    Animal models show that prenatal bisphenol A (BPA) exposure leads to sexually dimorphic disruption of the neuroendocrine system in offspring, including the hypothalamic-pituitary-adrenal (HPA) neuroendocrine system, but human data are lacking. In humans, prenatal BPA exposure is associated with sex-specific behavioural problems in children, and HPA axis dysregulation may be a biological mechanism. The objective of the current study was to examine sex differences in associations between prenatal maternal urinary BPA concentration and HPA axis function in 3-month-old infants. Mother-infant pairs (n = 132) were part of the Alberta Pregnancy Outcomes and Nutrition study, a longitudinal birth cohort recruited (2010-2012) during pregnancy. Maternal spot urine samples collected during the 2nd trimester were analyzed for total BPA and creatinine. Infant saliva samples collected prior to and after a blood draw were analyzed for cortisol. Linear growth curve models were used to characterize changes in infant cortisol as a function of prenatal BPA exposure. Higher maternal BPA was associated with increases in baseline cortisol among females (β = 0.13 log μg/dL; 95% CI: 0.01, 0.26), but decreases among males (β = -0.22 log μg/dL; 95% CI: -0.39, -0.05). In contrast, higher BPA was associated with increased reactivity in males (β = 0.30 log μg/dL; 95% CI: 0.04, 0.56) but decreased reactivity in females (β = -0.15 log μg/dL; 95% CI: -0.35, 0.05). Models adjusting for creatinine yielded similar results. Prenatal BPA exposure is associated with sex-specific changes in infant HPA axis function. The biological plausibility of these findings is supported by their consistency with evidence in rodent models. Furthermore, these data support the hypotheses that sexually dimorphic changes in children's behaviour following prenatal BPA exposure are mediated by sexually dimorphic changes in HPA axis function.

  19. Experimental study and thermodynamic modeling of the phase relation in the Fe-S-Si system with implications for the distribution of S and Si in a partially solidified core

    NASA Astrophysics Data System (ADS)

    Tao, R.; Fei, Y.

    2017-12-01

    Planetary cooling leads to solidification of any initially molten metallic core. Some terrestrial cores (e.g. Mercury's) formed and differentiated under relatively reduced conditions and are thought to be composed of Fe-S-Si. However, understanding of the phase relations in the Fe-S-Si system at high pressure and temperature is limited. In this study, we conducted high-pressure experiments to investigate the phase relations in the Fe-S-Si system up to 25 GPa. Experimental results show that the liquidus and solidus in this study are slightly lower than those in the Fe-S binary system for the same S concentration in the liquid at the same pressure. Fe3S, which is thought to be the stable sub-solidus S-bearing phase in the Fe-S binary system above 17 GPa, is not observed in the Fe-S-Si system at 21 GPa. Almost all S partitions preferentially into the liquid, while the distribution of Si between solid and liquid depends on the experimental P and T conditions. We obtained the partition coefficient log(KDSi) by fitting the experimental data as a function of P, T, and S concentration in the liquid. At constant pressure, log(KDSi) decreases linearly with 1/T(K). With increasing pressure, the slope of the linear correlation between log(KDSi) and 1/T(K) decreases, indicating that more Si partitions into the solid at higher pressure. In order to interpolate and extrapolate the phase relations over a wide pressure and temperature range, we established a comprehensive thermodynamic model of the Fe-S-Si system. The results will be used to constrain the distribution of S and Si between a solid inner core and a liquid outer core for a range of planet sizes. A Si-rich solid inner core and a S-rich liquid outer core are suggested for an iron-rich core.
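
    The isobaric fit of log(KDSi) against reciprocal temperature can be sketched with a simple least-squares fit; the data points below are hypothetical, and one such fit per pressure would be needed to recover the pressure dependence of the slope.

        import numpy as np

        # Hypothetical runs at one pressure: temperature (K) and log(KDSi).
        T = np.array([2100.0, 2200.0, 2300.0, 2400.0])
        log_kd = np.array([-0.42, -0.36, -0.31, -0.27])

        # Fit log(KDSi) = a + b/T; the study reports that the slope b
        # becomes shallower as pressure increases.
        b, a = np.polyfit(1.0 / T, log_kd, 1)
        print(f"log(KDSi) = {a:.3f} + {b:.1f}/T")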

  20. Spatial and temporal behavioural responses of wild cattle to tropical forest degradation

    PubMed Central

    Goossens, Benoît; Goon Ee Wern, Jocelyn; Kretzschmar, Petra; Bohm, Torsten; Vaughan, Ian P.

    2018-01-01

    Identifying the consequences of tropical forest degradation is essential to mitigate its effects upon forest fauna. Large forest-dwelling mammals are often highly sensitive to environmental perturbation through processes such as fragmentation, simplification of habitat structure, and abiotic changes including increased temperatures where the canopy is cleared. Whilst previous work has focused upon species richness and rarity in logged forest, few studies have looked at spatial and temporal behavioural responses to forest degradation. Using camera traps, we explored the relationships between diel activity, behavioural expression, habitat use and ambient temperature to understand how wild free-ranging Bornean banteng (Bos javanicus lowi) respond to logging and regeneration. Three secondary forests in Sabah, Malaysian Borneo were studied, varying in the time since last logging (6–23 years). A combination of generalised linear mixed models and generalised linear models was constructed using >36,000 trap-nights. Temperature had no significant effect on activity; however, it varied markedly between forests, with the period of intense heat shortening as forest regeneration advanced over the years. Bantengs regulated their activity, with a reduction during the wet season in the most degraded forest (z = -2.6, Std. Error = 0.13, p = 0.01) and reductions during midday hours in forest with limited regeneration; after >20 years of regrowth, however, activity was more consistent throughout the day. Foraging and use of open canopy areas dominated the activity budget when regeneration was limited. As regeneration advanced, this was replaced by greater investment in travelling and using a closed canopy. Forest degradation modifies the ambient temperature, and positively influences flooding and habitat availability during the wet season. Retention of a mosaic of mature forest patches within commercial forests could minimise these effects and also provide refuge, which is key to heat dissipation and the prevention of thermal stress, whilst retention of degraded forest could provide forage. PMID:29649279

  1. Monotonic non-linear transformations as a tool to investigate age-related effects on brain white matter integrity: A Box-Cox investigation.

    PubMed

    Morozova, Maria; Koschutnig, Karl; Klein, Elise; Wood, Guilherme

    2016-01-15

    Non-linear effects of age on white matter integrity are ubiquitous in the brain and indicate that these effects are more pronounced in certain brain regions at specific ages. Box-Cox analysis is a technique to increase the log-likelihood of linear relationships between variables by means of monotonic non-linear transformations. Here we employ Box-Cox transformations to flexibly and parsimoniously determine the degree of non-linearity of age-related effects on white matter integrity by means of model comparisons using a voxel-wise approach. Analysis of white matter integrity in a sample of adults between 20 and 89 years of age (n = 88) revealed that considerable portions of the white matter in the corpus callosum, cerebellum, pallidum, brainstem, superior occipito-frontal fascicle and optic radiation show non-linear effects of age. Global analyses revealed an increase in the average non-linearity from fractional anisotropy to radial diffusivity, axial diffusivity, and mean diffusivity. These results suggest that Box-Cox transformations are a useful and flexible tool to investigate more complex non-linear effects of age on white matter integrity and extend the functionality of the Box-Cox analysis in neuroimaging. Copyright © 2015 Elsevier Inc. All rights reserved.
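
    A single-voxel version of the approach can be sketched by profiling the log-likelihood of a linear model over the Box-Cox parameter λ applied to age, where λ = 1 corresponds to a purely linear age effect; the synthetic data below are hypothetical.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        age = rng.uniform(20, 89, 88)  # n = 88 adults, as in the study
        # Synthetic white matter integrity values with a nonlinear age trend.
        fa = 0.55 - 0.002 * (age - 20) ** 1.5 / 40 + rng.normal(0, 0.02, 88)

        def boxcox(x, lam):
            # Box-Cox transformation of the predictor.
            return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

        # Profile the log-likelihood of FA ~ boxcox(age, lam) over lam.
        lams = np.linspace(-2.0, 2.0, 81)
        llf = [sm.OLS(fa, sm.add_constant(boxcox(age, l))).fit().llf
               for l in lams]
        best = lams[int(np.argmax(llf))]
        print("best lambda:", best, "(1.0 would indicate a linear effect)")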

  2. Lethal photosensitization of periodontal pathogens by a red-filtered Xenon lamp in vitro.

    PubMed

    Matevski, Donco; Weersink, Robert; Tenenbaum, Howard C; Wilson, Brian; Ellen, Richard P; Lépine, Guylaine

    2003-08-01

    The ability of Helium-Neon (He-Ne) laser irradiation of a photosensitizer to induce localized phototoxic effects that kill periodontal pathogens is well documented and is termed photodynamic therapy (PDT). We investigated the potential of a conventional light source (red-filtered Xenon lamp) to activate toluidine blue O (TBO) in vitro and determined in vitro model parameters that may be used in future in vivo trials. Porphyromonas gingivalis 381 was used as the primary test bacterium. Treatment with a 2.2 J/cm2 light dose and 50 μg/ml TBO concentration resulted in a bacterial kill of 2.43 ± 0.39 logs with the He-Ne laser control and 3.34 ± 0.24 logs with the lamp, a near 10-fold increase (p = 0.028). Increases in light intensity produced significantly higher killing (p = 0.012) that plateaued at 25 mW/cm2. There was a linear relationship between light dose and bacterial killing (r2 = 0.916); as light dose was increased bacterial survival decreased. No such relationship was found for the drug concentrations tested. Addition of serum or blood at 50% v/v to the P. gingivalis suspension prior to irradiation diminished killing from approximately 5 logs to 3 logs at 10 J/cm2. When serum was washed off, killing returned to 5 logs for all species tested except Bacteroides forsythus (3.92 ± 0.68 logs kill). The data indicate that PDT utilizing a conventional light source is at least as effective as laser-induced treatment in vitro. Furthermore, PDT achieves significant bactericidal activity in the presence of serum and blood when used with the set parameters of 10 J/cm2, 100 mW/cm2 and 12.5 μg/ml TBO.

  3. Individual and Group-Based Engagement in an Online Physical Activity Monitoring Program in Georgia.

    PubMed

    Smith, Matthew Lee; Durrett, Nicholas K; Bowie, Maria; Berg, Alison; McCullick, Bryan A; LoPilato, Alexander C; Murray, Deborah

    2018-06-07

    Given the rising prevalence of obesity in the United States, innovative methods are needed to increase physical activity (PA) in community settings. Evidence suggests that individuals are more likely to engage in PA if they are given a choice of activities and have support from others (for encouragement, motivation, and accountability). The objective of this study was to describe the use of the online Walk Georgia PA tracking platform according to whether the user was an individual user or group user. Walk Georgia is a free, interactive online tracking platform that enables users to log PA by duration, activity, and perceived difficulty, and then converts these data into points based on metabolic equivalents. Users join individually or in groups and are encouraged to set weekly PA goals. Data were examined for 6,639 users (65.8% were group users) over 28 months. We used independent sample t tests and Mann-Whitney U tests to compare means between individual and group users. Two linear regression models were fitted to identify factors associated with activity logging. Users logged 218,766 activities (15,119,249 minutes of PA spanning 592,714 miles [41,858,446 points]). On average, group users had created accounts more recently than individual users (P < .001); however, group users logged more activities (P < .001). On average, group users logged more minutes of PA (P < .001) and earned more points (P < .001). Being in a group was associated with a larger proportion of weeks in which 150 minutes or more of weekly PA was logged (B = 20.47, P < .001). Use of Walk Georgia was significantly higher among group users than among individual users. To expand use and dissemination of online tracking of PA, programs should target naturally occurring groups (eg, workplaces, schools, faith-based groups).

  4. Assessment of selenium bioavailability from naturally produced high-selenium soy foods in selenium-deficient rats.

    PubMed

    Yan, Lin; Reeves, Philip G; Johnson, LuAnn K

    2010-10-01

    We assessed the bioavailability of selenium (Se) from a protein isolate and tofu (bean curd) prepared from naturally produced high-Se soybeans. The Se concentrations of the soybeans, the protein isolate and tofu were 5.2 ± 0.2, 11.4 ± 0.1 and 7.4 ± 0.1 mg/kg, respectively. Male weanling Sprague-Dawley rats were depleted of Se by feeding them a 30% Torula yeast-based diet (4.1 μg Se/kg) for 56 days, and then they were replenished with Se for an additional 50 days by feeding them the same diet containing 14, 24 or 30 μg Se/kg from the protein isolate or 13, 23 or 31 μg Se/kg from tofu, respectively. l-Selenomethionine (SeMet) was used as a reference. Selenium bioavailability was determined on the basis of the restoration of Se-dependent enzyme activities and tissue Se concentrations in Se-depleted rats, comparing those responses for the protein isolate and tofu to those for SeMet by using a slope-ratio method. Dietary supplementation with the protein isolate or tofu resulted in linear or log-linear, dose-dependent increases in glutathione peroxidase activities in blood and liver and in thioredoxin reductase activity in liver. Furthermore, supplementation with the protein isolate or tofu resulted in linear or log-linear, dose-dependent increases in the Se concentrations of plasma, liver, muscle and kidneys. These results indicated an overall bioavailability of approximately 101% for Se from the protein isolate and 94% from tofu, relative to SeMet. We conclude that Se from naturally produced high-Se soybeans is highly bioavailable in this model and that high-Se soybeans may be a good dietary source of Se. Published by Elsevier GmbH.

  5. Analytical performance of the Hologic Aptima HBV Quant Assay and the COBAS Ampliprep/COBAS TaqMan HBV test v2.0 for the quantification of HBV DNA in plasma samples.

    PubMed

    Schønning, Kristian; Johansen, Kim; Nielsen, Lone Gilmor; Weis, Nina; Westh, Henrik

    2018-07-01

    Quantification of HBV DNA is used for initiating and monitoring antiviral treatment. Analytical test performance consequently impacts treatment decisions. To compare the analytical performance of the Aptima HBV Quant Assay (Aptima) and the COBAS Ampliprep/COBAS TaqMan HBV Test v2.0 (CAPCTMv2) for the quantification of HBV DNA in plasma samples, the performance of the two tests was compared on 129 prospective plasma samples and on 63 archived plasma samples, of which 53 were genotyped. Linearity of the two assays was assessed on dilution series of three clinical samples (genotypes B, C, and D) and showed excellent correlation between the two tests. Bland-Altman analysis of 120 clinical samples, which were quantified by both tests, showed an average quantification bias (Aptima - CAPCTMv2) of -0.19 Log IU/mL (SD: 0.33 Log IU/mL). A single sample quantified more than three standard deviations higher in Aptima than in CAPCTMv2. Only minor differences were observed between genotype A (N = 4; average difference -0.01 Log IU/mL), B (N = 8; -0.13 Log IU/mL), C (N = 8; -0.31 Log IU/mL), D (N = 25; -0.22 Log IU/mL), and E (N = 7; -0.03 Log IU/mL). Deming regression showed that the two tests were excellently correlated (slope of the regression line 1.03; 95% CI: 0.998-1.068). Both tests were precise, with %CV less than 3% for HBV DNA ≥3 Log IU/mL. The Aptima and CAPCTMv2 tests are highly correlated, and both are useful for monitoring patients chronically infected with HBV. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. The effect of terpene enhancer lipophilicity on the percutaneous permeation of hydrocortisone formulated in HPMC gel systems.

    PubMed

    El-Kattan, A F; Asbill, C S; Michniak, B B

    2000-04-05

    The percutaneous permeation of hydrocortisone (HC) was investigated in hairless mouse skin after application of an alcoholic hydrogel using a diffusion cell technique. The formulations contained one of 12 terpenes, selected to span a range of lipophilicity (log P 1.06-5.36). Flux, cumulative receptor concentrations, skin content, and lag time of HC were measured over 24 h and compared with control gels (containing no terpene). Furthermore, HC skin content and the solubility of HC in the alcoholic hydrogel solvent mixture in the presence of terpene were determined and correlated to the enhancing activity of the terpenes. The in vitro permeation experiments with hairless mouse skin revealed that the terpene enhancers varied in their ability to enhance the flux of HC. Nerolidol, which possessed the highest lipophilicity (log P = 5.36 ± 0.38), provided the greatest enhancement of HC flux (35.3-fold over control). Fenchone (log P = 2.13 ± 0.30) exhibited the lowest enhancement of HC flux (10.1-fold over control). In addition, a linear relationship was established between the log P of the terpenes and the cumulative amount of HC in the receptor after 24 h (Q(24)). Nerolidol provided the highest Q(24) (1733 ± 93 μg/cm(2)), whereas verbenone produced the lowest Q(24) (653 ± 105 μg/cm(2)). Thymol provided the lowest HC skin content (1151 ± 293 μg/g), while cineole produced the highest HC skin content (18999 ± 5666 μg/g). No correlation was established between the log P of the enhancers and HC skin content. A correlation, however, existed between the log P of the terpenes and the lag time: as log P increased, a linear decrease in lag time was observed. Cymene yielded the shortest HC lag time, while fenchone produced the longest lag time. Also, the increase in the log P of the terpenes resulted in a proportional increase in HC solubility in the formulation solvent mixture.

  7. Major Depression and Acute Coronary Syndrome-Related Factors

    PubMed Central

    Figueiredo, Jose Henrique Cunha; Silva, Nelson Albuquerque de Souza e; Pereira, Basilio de Bragança; de Oliveira, Glaucia Maria Moraes

    2017-01-01

    Background: Major Depressive Disorder (MDD) is one of the most common mental illnesses in psychiatry, being considered a risk factor for Acute Coronary Syndrome (ACS). Objective: To assess the prevalence of MDD in ACS patients, as well as to analyze associated factors through the interdependence of sociodemographic, lifestyle and clinical variables. Methods: Observational, descriptive, cross-sectional, case-series study conducted on patients hospitalized consecutively at the coronary units of three public hospitals in the city of Rio de Janeiro over a 24-month period. All participants answered a standardized questionnaire requesting sociodemographic, lifestyle and clinical data, as well as a structured diagnostic interview for the DSM-IV regarding ongoing major depressive episodes. A general log-linear model of multivariate analysis was employed to assess association and interdependence with a significance level of 5%. Results: Analysis of 356 patients (229 men), with a mean and median age of 60 years (SD 11.42, range 27-89). We found an MDD point prevalence of 23%, and a significant association between MDD and gender, marital status, sedentary lifestyle, Killip classification, and MDD history. Controlling for gender, we found a statistically significant association between MDD and gender, age ≤ 60 years, sedentary lifestyle and MDD history. The log-linear model identified the variables MDD history, gender, sedentary lifestyle, and age ≤ 60 years as having the greatest association with MDD. Conclusion: Distinct approaches are required to diagnose and treat MDD in young women with ACS, history of MDD, sedentary lifestyle, and who are not in stable relationships. PMID:28443957

  8. Factors Associated with Post-traumatic Stress Symptoms in Students Who Survived 20 Months after the Sewol Ferry Disaster in Korea.

    PubMed

    Lee, So Hee; Kim, Eun Ji; Noh, Jin Won; Chae, Jeong Ho

    2018-03-12

    The Sewol ferry disaster caused national shock and grief in Korea. The present study examined the prevalence and factors associated with post-traumatic stress disorder (PTSD) symptoms among the surviving students 20 months after that disaster. This study was conducted using a cross-sectional design and a sample of 57 students (29 boys and 28 girls) who survived the Sewol ferry disaster. Data were collected using a questionnaire, including instruments that assessed psychological status. A generalized linear model using a log link and Poisson distribution was performed to identify factors associated with PTSD symptoms. The results showed that 26.3% of participants were classified in the clinical group by the Child Report of Post-traumatic Symptoms score. Based on the generalized linear model with Poisson distribution and log link, PTSD symptoms were positively correlated with the number of traumatic events experienced, peer and social support, peri-traumatic dissociation, post-traumatic negative beliefs, and emotional difficulties. On the other hand, PTSD symptoms were negatively correlated with psychological well-being, family cohesion, post-traumatic social support, receiving care at a psychiatry clinic, and female gender. This study uncovered risk and protective factors for PTSD in disaster-exposed adolescents. The implications of these findings are considered in relation to determining assessment and interventional strategies aimed at helping survivors following similar traumatic experiences. © 2018 The Korean Academy of Medical Sciences.
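
    The model described here (a generalized linear model with Poisson distribution and log link) can be specified in statsmodels as below; the data file and predictor names are hypothetical stand-ins for the study variables.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical survivor-level data: symptom score (count-like),
        # trauma exposure, social support, and gender.
        df = pd.read_csv("survivors.csv")

        model = smf.glm(
            "ptsd_score ~ trauma_events + social_support + C(gender)",
            data=df,
            family=sm.families.Poisson(),  # log link is the Poisson default
        ).fit()
        print(model.summary())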

  9. Sheep Feed and Scrapie, France

    PubMed Central

    Philippe, Sandrine; Ducrot, Christian; Roy, Pascal; Remontet, Laurent; Jarrige, Nathalie

    2005-01-01

    Scrapie is a transmissible spongiform encephalopathy (TSE) of small ruminants. Although in the past scrapie has not been considered a zoonosis, the emergence of bovine spongiform encephalopathy, transmissible to humans and experimentally to sheep, indicates that small ruminant TSEs may pose a risk to humans. To identify the risk factors for introducing scrapie into sheep flocks, a case-control study was conducted in France from 1999 to 2000. Ninety-four case and 350 control flocks were matched by location and main breed. Three main hypotheses were tested: direct contact between flocks, indirect environmental contact, and foodborne risk. Statistical analysis was performed by using adjusted generalized linear models with the complementary log-log link function, with flock size as an offset. A notable effect of using proprietary concentrates and milk replacers was observed. The risk was heterogeneous among feed factories. Contacts between flocks were not shown to be a risk factor. PMID:16102318
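
    The adjusted model described here, a binomial GLM with complementary log-log link and flock size entering as an offset, can be sketched as follows; the data file and column names are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical flock-level data: case status (0/1), feed exposures,
        # contact indicators, and flock size used as an offset.
        df = pd.read_csv("flocks.csv")

        model = smf.glm(
            "case ~ concentrates + milk_replacer + flock_contact",
            data=df,
            family=sm.families.Binomial(link=sm.families.links.CLogLog()),
            offset=np.log(df["flock_size"]),
        ).fit()
        print(model.summary())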

  10. Capillary-induced crack healing between surfaces of nanoscale roughness.

    PubMed

    Soylemez, Emrecan; de Boer, Maarten P

    2014-10-07

    Capillary forces are important in nature (granular materials, insect locomotion) and in technology (disk drives, adhesion). Although well studied in equilibrium state, the dynamics of capillary formation merit further investigation. Here, we show that microcantilever crack healing experiments are a viable experimental technique for investigating the influence of capillary nucleation on crack healing between rough surfaces. The average crack healing velocity, v̅, between clean hydrophilic polycrystalline silicon surfaces of nanoscale roughness is measured. A plot of v̅ versus energy release rate, G, reveals log-linear behavior, while the slope |d[log(v̅)]/dG| decreases with increasing relative humidity. A simplified interface model that accounts for the nucleation time of water bridges by an activated process is developed to gain insight into the crack healing trends. This methodology enables us to gain insight into capillary bridge dynamics, with a goal of attaining a predictive capability for this important microelectromechanical systems (MEMS) reliability failure mechanism.

  11. Modeling dolomitized carbonate-ramp reservoirs: A case study of the Seminole San Andres unit. Part 2 -- Seismic modeling, reservoir geostatistics, and reservoir simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, F.P.; Dai, J.; Kerans, C.

    1998-11-01

    In part 1 of this paper, the authors discussed the rock-fabric/petrophysical classes for dolomitized carbonate-ramp rocks, the effects of rock fabric and pore type on petrophysical properties, petrophysical models for analyzing wireline logs, the critical scales for defining geologic framework, and 3-D geologic modeling. Part 2 focuses on geophysical and engineering characterizations, including seismic modeling, reservoir geostatistics, stochastic modeling, and reservoir simulation. Synthetic seismograms of 30 to 200 Hz were generated to study the level of seismic resolution required to capture the high-frequency geologic features in dolomitized carbonate-ramp reservoirs. Outcrop data were collected to investigate effects of sampling interval and scale-up of block size on geostatistical parameters. Semivariogram analysis of outcrop data showed that the sill of log permeability decreases and the correlation length increases with an increase of horizontal block size. Permeability models were generated using conventional linear interpolation, stochastic realizations without stratigraphic constraints, and stochastic realizations with stratigraphic constraints. Simulations of a fine-scale Lawyer Canyon outcrop model were used to study the factors affecting waterflooding performance. Simulation results show that waterflooding performance depends strongly on the geometry and stacking pattern of the rock-fabric units and on the location of production and injection wells.

  12. Treatment carryover impacts on effectiveness of intraocular pressure lowering agents, estimated by a discrete event simulation model.

    PubMed

    Denis, P; Le Pen, C; Umuhire, D; Berdeaux, G

    2008-01-01

    To compare the effectiveness of two treatment sequences, latanoprost-latanoprost timolol fixed combination (L-LT) versus travoprost-travoprost timolol fixed combination (T-TT), in the treatment of open-angle glaucoma (OAG) or ocular hypertension (OHT). A discrete event simulation (DES) model was constructed. Patients with either OAG or OHT were treated first-line with a prostaglandin, either latanoprost or travoprost. In case of treatment failure, patients were switched to the specific prostaglandin-timolol sequence LT or TT. Failure was defined as intraocular pressure higher than or equal to 18 mmHg at two visits. Time to failure was estimated from two randomized clinical trials. Log-rank tests were computed. Linear functions after log-log transformation were used to model time to failure. The time horizon of the model was 60 months. Outcomes included treatment failure and disease progression. Sensitivity analyses were performed. Latanoprost treatment resulted in more treatment failures than travoprost (p<0.01), and LT more than TT (p<0.01). At 60 months, the probability of starting a third treatment line was 39.2% with L-LT versus 29.9% with T-TT. On average, L-LT patients developed 0.55 new visual field defects versus 0.48 for T-TT patients. The probability of no disease progression at 60 months was 61.4% with L-LT and 65.5% with T-TT. Based on randomized clinical trial results and using a DES model, the T-TT sequence was more effective at avoiding starting a third line treatment than the L-LT sequence. T-TT treated patients developed less glaucoma progression.

  13. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
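
    A minimal, non-Bayesian sketch of the zero-inflated Beta likelihood at the core of this model: with probability π an observation is exactly zero, and otherwise it follows a Beta(a, b) density. The full model would add priors and event-level random effects; the data below are hypothetical.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import beta

        # Hypothetical scaled log10(Pc) values in [0, 1); the zeros are the
        # inflated "effectively zero" collision probabilities.
        y = np.array([0.0, 0.0, 0.12, 0.35, 0.41, 0.0, 0.27, 0.55, 0.09, 0.0])

        def neg_log_lik(params):
            # Mixture: P(y = 0) = pi; y > 0 ~ Beta(a, b).
            pi, a, b = params
            if not (0.0 < pi < 1.0 and a > 0.0 and b > 0.0):
                return np.inf
            nonzero = y[y > 0]
            ll = np.log(pi) * np.sum(y == 0)
            ll += np.sum(np.log(1.0 - pi) + beta.logpdf(nonzero, a, b))
            return -ll

        fit = minimize(neg_log_lik, x0=[0.3, 2.0, 5.0], method="Nelder-Mead")
        print("pi, a, b:", fit.x)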

  14. Reinforcement of drinking by running: effect of fixed ratio and reinforcement time

    PubMed Central

    Premack, David; Schaeffer, Robert W.; Hundt, Alan

    1964-01-01

    Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT. PMID:14120150

  15. REINFORCEMENT OF DRINKING BY RUNNING: EFFECT OF FIXED RATIO AND REINFORCEMENT TIME.

    PubMed

    PREMACK, D; SCHAEFFER, R W; HUNDT, A

    1964-01-01

    Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT.

  16. Mathematical modelling of the growth of human fetus anatomical structures.

    PubMed

    Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech

    2017-09-01

    The goal of this study was to present a procedure enabling mathematical analysis of the growth of linear sizes of human anatomical structures, to estimate mathematical model parameters, and to evaluate their adequacy. Section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, Image J computer system measurements, and statistical analysis. We used an anthropologic method based on age determination with the use of crown-rump length (CRL, V-TUB) by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve presenting the growth of an anatomical structure's linear size y in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The size-age interdependence is described by many functions; however, the following are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz's I and II, and von Bertalanffy's function. With the use of the procedures described above, mathematical model parameters were assessed for V-PL (the total length of the body) and CRL body length increases, rectus abdominis total length h and its segments hI, hII, hIII, hIV, as well as biceps femoris length and width of the long head (LHL and LHW) and of the short head (SHL and SHW). The best fits to the measurement results were observed for the exponential and Gompertz models.
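
    Fitting one of the candidate growth functions, for example a Gompertz curve, to size-age data is straightforward with scipy; the measurements below are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical muscle length (mm) vs. gestational age (weeks).
        t = np.array([14, 16, 18, 20, 22, 24, 26, 28], dtype=float)
        y = np.array([8.0, 11.5, 15.2, 19.6, 23.8, 27.5, 30.4, 32.6])

        def gompertz(t, A, b, c):
            # y(t) = A * exp(-b * exp(-c * t))
            return A * np.exp(-b * np.exp(-c * t))

        params, _ = curve_fit(gompertz, t, y, p0=[40.0, 5.0, 0.1])
        print("A, b, c:", params)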

  17. Multinomial Logistic Regression & Bootstrapping for Bayesian Estimation of Vertical Facies Prediction in Heterogeneous Sandstone Reservoirs

    NASA Astrophysics Data System (ADS)

    Al-Mudhafar, W. J.

    2013-12-01

    Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution in order to build an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation has been carried out through multinomial logistic regression (MLR) with respect to well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. Firstly, a robust sequential imputation algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix; the observation is then added to the complete data matrix and the algorithm continues with the next observation with missing values. MLR was chosen to maximize the likelihood and minimize the standard error of the nonlinear relationships between facies and the core and log data. MLR predicts the probabilities of the different possible facies given the independent variables by constructing a linear predictor function with a set of weights linearly combined with the independent variables via a dot product. A beta distribution of facies was adopted as prior knowledge, and the predicted (posterior) probability was estimated from the MLR based on Bayes' theorem, which relates the posterior probability to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, a bootstrap is carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and the procedure can be repeated N times to produce N bootstrap datasets and re-fit the model accordingly, decreasing the squared difference between the estimated and observed categorical variables (facies) and thereby the degree of uncertainty.
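
    The estimation-plus-bootstrap workflow can be sketched with scikit-learn; the features, labels, and sample sizes below are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.utils import resample

        rng = np.random.default_rng(1)

        # Hypothetical training data: 7 log/core predictors, 4 facies classes.
        X = rng.normal(size=(300, 7))
        y = rng.integers(0, 4, size=300)

        # Multinomial logistic regression (the lbfgs solver handles
        # multiclass targets with a multinomial loss).
        clf = LogisticRegression(max_iter=1000).fit(X, y)

        # Bootstrap: refit on resampled training sets to gauge the
        # variability of the prediction accuracy.
        scores = [LogisticRegression(max_iter=1000)
                  .fit(*resample(X, y)).score(X, y) for _ in range(200)]
        print("bootstrap accuracy: %.3f +/- %.3f"
              % (np.mean(scores), np.std(scores)))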

  18. Coexistence and local μ-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2016-12-01

    In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℝ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Comparative evaluation of a new lactation curve model for pasture-based Holstein-Friesian dairy cows.

    PubMed

    Adediran, S A; Ratkowsky, D A; Donaghy, D J; Malau-Aduli, A E O

    2012-09-01

    Fourteen lactation models were fitted to average and individual cow lactation data from pasture-based dairy systems in the Australian states of Victoria and Tasmania. The models included a new "log-quadratic" model, and a major objective was to evaluate and compare the performance of this model with that of the other models. Nine empirical and 5 mechanistic models were first fitted to average test-day milk yield of Holstein-Friesian dairy cows using the nonlinear procedure in SAS. Two additional semiparametric models were fitted using a linear model in ASReml. To investigate the influence of days to first test-day and the number of test-days, 5 of the best-fitting models were then fitted to individual cow lactation data. Model goodness of fit was evaluated using criteria such as the residual mean square, the distribution of residuals, the correlation between actual and predicted values, and the Wald-Wolfowitz runs test. Goodness of fit was similar in all but one of the models in terms of fitting average lactation, but the models differed in their ability to predict individual lactations; the widely used incomplete gamma model displayed this failing most clearly. The new log-quadratic model was robust in fitting average and individual lactations, was less affected by sampled data, and was more parsimonious in having only 3 parameters, each of which lends itself to biological interpretation. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  20. The frequency dependence of the viscous component of the magnetic susceptibility of lunar rock and soil samples

    NASA Technical Reports Server (NTRS)

    Hanneken, J. W.; Vant-Hull, L. L.; Carnes, J. G.

    1976-01-01

    The susceptibility of two lunar samples (a soil and a low metamorphic grade breccia) has been measured in a weak field (0.001 Oe) as a function of frequency from 0.032 to 1.0 Hz. The measurements were made using a superconducting magnetometer. The results show that the susceptibility decreases linearly with the log of frequency. This observation is in agreement with a theoretical model for viscous decay based on the Néel theory of single-domain and superparamagnetic grains. The relation derived agrees with a model in which there is a uniform distribution of relaxation times.

  1. Petrophysical evaluation of subterranean formations

    DOEpatents

    Klein, James D; Schoderbek, David A; Mailloux, Jason M

    2013-05-28

    Methods and systems are provided for evaluating petrophysical properties of subterranean formations and comprehensively evaluating hydrate presence through a combination of computer-implemented log modeling and analysis. Certain embodiments include the steps of running a number of logging tools in a wellbore to obtain a variety of wellbore data and logs, and evaluating and modeling the log data to ascertain various petrophysical properties. Examples of suitable logging techniques that may be used in combination with the present invention include, but are not limited to, sonic logs, electrical resistivity logs, gamma ray logs, neutron porosity logs, density logs, NMR logs, or any combination or subset thereof.

  2. A risk analysis approach for using discriminant functions to manage logging-related landslides on granitic terrain

    Treesearch

    Raymond M. Rice; Norman H. Pillsbury; Kurt W. Schmidt

    1985-01-01

    A linear discriminant function, developed to predict debris avalanches after clearcut logging on a granitic batholith in northwestern California, was tested on data from two batholiths. The equation was inaccurate in predicting slope stability on one of them. A new equation based on slope, crown cover, and distance from a stream (retained from the original...

  3. Effect of Nitrite/Nitrate concentrations on Corrosivity of Washed Precipitate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Congdon, J.W.

    2001-03-28

    Cyclic polarization scans were performed using A-537 carbon steel in simulated washed precipitate solutions of various nitrite and nitrate concentrations. The results of this study indicate that nitrate is an aggressive anion in washed precipitate. Furthermore, a quantitative linear log-log relationship between the minimum effective nitrite concentration and the nitrate concentration was established for washed precipitate with other ions at their average compositions.

  4. Insight into Global Mosquito Biogeography from Country Species Records

    DTIC Science & Technology

    2007-01-01

    A biodiversity gradient was observed, with species richness increasing toward the equator. A linear log-log species (y)-area (x) relationship (SAR) was...and endemism is proposed to guide mosquito biodiversity surveys. For species groups, we show that the number of species of Anopheles subgenus Anopheles...into global patterns of mosquito biodiversity and survey history. KEY WORDS: mosquito, biogeography, country occurrence records, species richness, species

  5. Identifying CpG sites associated with eczema via random forest screening of epigenome-scale DNA methylation.

    PubMed

    Quraishi, B M; Zhang, H; Everson, T M; Ray, M; Lockett, G A; Holloway, J W; Tetali, S R; Arshad, S H; Kaushal, A; Rezwan, F I; Karmaus, W

    2015-01-01

    The prevalence of eczema is increasing in industrialized nations. Limited evidence has shown the association of DNA methylation (DNA-M) with eczema. We explored this association at the epigenome-scale to better understand the role of DNA-M. Data from the first generation (F1) of the Isle of Wight (IoW) birth cohort participants and the second generation (F2) were examined in our study. Epigenome-scale DNA methylation of F1 at age 18 years and F2 in cord blood was measured using the Illumina Infinium HumanMethylation450 Beadchip. A total of 307,357 cytosine-phosphate-guanine sites (CpGs) in the F1 generation were screened via recursive random forest (RF) for their potential association with eczema at age 18. Functional enrichment and pathway analysis of resulting genes were carried out using DAVID gene functional classification tool. Log-linear models were performed in F1 to corroborate the identified CpGs. Findings in F1 were further replicated in F2. The recursive RF yielded 140 CpGs, 88 of which showed statistically significant associations with eczema at age 18, corroborated by log-linear models after controlling for false discovery rate (FDR) of 0.05. These CpGs were enriched among many biological pathways, including pathways related to creating transcriptional variety and pathways mechanistically linked to eczema such as cadherins, cell adhesion, gap junctions, tight junctions, melanogenesis, and apoptosis. In the F2 generation, about half of the 83 CpGs identified in F1 showed the same direction of association with eczema risk as in F1, of which two CpGs were significantly associated with eczema risk, cg04850479 of the PROZ gene (risk ratio (RR) = 15.1 in F1, 95 % confidence interval (CI) 1.71, 79.5; RR = 6.82 in F2, 95 % CI 1.52, 30.62) and cg01427769 of the NEU1 gene (RR = 0.13 in F1, 95 % CI 0.03, 0.46; RR = 0.09 in F2, 95 % CI 0.03, 0.36). Via epigenome-scaled analyses using recursive RF followed by log-linear models, we identified 88 CpGs associated with eczema in F1, of which 41 were replicated in F2. Several identified CpGs are located within genes in biological pathways relating to skin barrier integrity, which is central to the pathogenesis of eczema. Novel genes associated with eczema risk were identified (e.g., the PROZ and NEU1 genes).
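
    A minimal sketch of the random forest screening step (a single importance-ranking pass rather than the full recursive procedure), with hypothetical methylation data:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Hypothetical beta values: subjects x CpG sites, plus case status.
        X = rng.uniform(0.0, 1.0, size=(200, 2000))
        y = rng.integers(0, 2, size=200)

        rf = RandomForestClassifier(n_estimators=500, random_state=0,
                                    n_jobs=-1).fit(X, y)

        # Keep the top-ranked CpGs; a recursive screen would refit on this
        # subset and repeat until the candidate set stabilizes, and the
        # survivors would then be corroborated with log-linear models.
        top = np.argsort(rf.feature_importances_)[::-1][:140]
        print("top CpG column indices:", top[:10])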

  6. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    PubMed

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence with respect to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of infants' risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimation of the PR and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting for distance between village and township and child age in months, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, and the estimated PRs were 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95% CI: 1.051-1.200). The point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression model, but the two approaches showed good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less misconvergence and has advantages in application compared with the conventional log-binomial regression model.
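
    For reference, the conventional (non-Bayesian) log-binomial model, a binomial GLM with log link whose exponentiated coefficients are prevalence ratios, can be written as below; the Bayesian version adds priors to the same likelihood. The data file and column names are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical data: care_seeking (0/1), caregiver recognition (0/1),
        # plus the covariates of model 3 in the abstract.
        df = pd.read_csv("careseeking.csv")

        # Log-binomial GLM; note that this model can fail to converge,
        # which is exactly the problem the Bayesian variant mitigates.
        model = smf.glm(
            "care_seeking ~ recognition + edu_years + distance_km + age_months",
            data=df,
            family=sm.families.Binomial(link=sm.families.links.Log()),
        ).fit()
        print(np.exp(model.params))  # prevalence ratios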

  7. Adsorptive removal of pharmaceuticals from water by commercial and waste-based carbons.

    PubMed

    Calisto, Vânia; Ferreira, Catarina I A; Oliveira, João A B P; Otero, Marta; Esteves, Valdemar I

    2015-04-01

    This work describes the single-solute adsorption of seven pharmaceuticals (carbamazepine, oxazepam, sulfamethoxazole, piroxicam, cetirizine, venlafaxine and paroxetine) from water onto a commercially available activated carbon and a non-activated carbon produced by pyrolysis of primary paper mill sludge. Kinetic and equilibrium adsorption studies were performed using a batch experimental approach. For all pharmaceuticals, both carbons presented fast kinetics (equilibrium times varying from less than 5 min to 120 min), mainly described by a pseudo-second-order model. Equilibrium data were appropriately described by the Langmuir and Freundlich isotherm models, the latter giving slightly higher correlation coefficients. The fitted parameters obtained for both models were quite different for the seven pharmaceuticals under study. In order to evaluate the influence of the pharmaceuticals' water solubility, log Kow, pKa, polar surface area and number of hydrogen bond acceptors on the adsorption parameters, multiple linear regression analysis was performed. The variability was mainly explained by log Kow, followed by water solubility, in the case of the waste-based carbon, and by water solubility in the case of the commercial activated carbon. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Modeling removal of bacteriophages MS2 and PRD1 by dune recharge at Castricum, Netherlands

    NASA Astrophysics Data System (ADS)

    Schijven, Jack F.; Hoogenboezem, Wim; Hassanizadeh, S. Majid; Peters, Jos H.

    1999-04-01

    Removal of model viruses by dune recharge was studied at a field site in the dune area of Castricum, Netherlands. Recharge water was dosed with bacteriophages MS2 and PRD1 for 11 days at a constant concentration in a 10- by 15-m compartment that was isolated in a recharge basin. Breakthrough was monitored for 120 days at six wells with their screens along a flow line. Concentrations of both phages were reduced about 3 log10 within the first 2.4 m and another 5 log10 in a linear fashion within the following 27 m. A model accounting for one-site kinetic attachment as well as first-order inactivation was employed to simulate the bacteriophage breakthrough curves. The major removal process was found to be attachment of the bacteriophages. Detachment was very slow. After passage of the pulse of dosed bacteriophages, there was a long tail whose slope corresponds to the inactivation rate coefficient of 0.07-0.09 day-1 for attached bacteriophages. The end of the rising and the start of the declining limbs of the breakthrough curves could not be simulated completely, probably because of an as yet unknown process.

  9. Predicting the Rate of River Bank Erosion Caused by Large Wood Log

    NASA Astrophysics Data System (ADS)

    Zhang, N.; Rutherfurd, I.; Ghisalberti, M.

    2016-12-01

    When a single tree falls into a river channel, flow is deflected and accelerated between the tree roots and the bank face, increasing shear stress and scouring the bank. The scallop-shaped erosion increases the diversity of the channel morphology, but also causes concern for adjacent landholders. Concern about increased bank erosion is one of the main reasons why large wood is still removed from channels in SE Australia. Further, the hydraulic effect of many logs in the channel can reduce overall bank erosion rates. Although both phenomena have been described before, this research develops a hydraulic model that estimates their magnitude, and tests and calibrates this model with flume and field measurements using logs of various configurations and sizes. Specifically, the model estimates the change in excess shear stress on the bank associated with a log. The model addresses the effects of log angle, distance from the bank, log size and flow condition by solving mass continuity and energy conservation between the cross sections of the approaching flow and the contracted flow. We then evaluate our model against flume experiments performed with semi-realistic log models, representing logs of different sizes and decay stages, by comparing the measured and simulated velocity increase in the gap between the log and the bank. The log angle, distance from the bank, and flow condition were systematically varied for each log model during the experiments. Finally, the calibrated model is compared with field data collected in anabranching channels of the Murray River in SE Australia, where there are abundant instream logs and regulated, consistently high flows for irrigation. Preliminary results suggest that a log can significantly increase the shear stress on the bank, especially when it is positioned perpendicular to the flow. The shear stress increases with the log angle in a rising curve (the log angle is the angle between the log trunk and the flow direction; 0° means the log is parallel to the flow with the canopy pointing downstream). However, the shear stress changes little as the log is moved closer to the bank.

  10. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
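
    The following is a minimal sketch of the mimicry effect described above, under assumed mixture parameters: bout durations drawn from a two-component exponential mixture are fitted with a maximum-likelihood power law and scored with a Kolmogorov-Smirnov distance, alongside a single-exponential fit for comparison.

    ```python
    # A minimal sketch: can a multi-exponential sample pass for a power law?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 5000
    # Two-component exponential "bout durations" (minutes); weights/scales are illustrative.
    x = np.where(rng.random(n) < 0.7, rng.exponential(2.0, n), rng.exponential(20.0, n))
    xmin = x.min()

    # Maximum-likelihood (Hill) estimate of a power-law exponent over [xmin, inf)
    alpha = 1.0 + n / np.sum(np.log(x / xmin))

    # KS distance between the data and the fitted power law...
    d_pl, _ = stats.kstest(x, "pareto", args=(alpha - 1.0, 0.0, xmin))
    # ...and between the data and a single-exponential fit, for comparison
    d_exp, _ = stats.kstest(x, "expon", args=(xmin, x.mean() - xmin))
    print(f"KS distance: power law {d_pl:.3f}, single exponential {d_exp:.3f}")
    ```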

  11. Length of stay of stroke rehabilitation inpatients: prediction through the functional independence measure.

    PubMed

    Franchignoni, F; Tesio, L; Martino, M T; Benevolo, E; Castagna, M

    1998-01-01

    A model for prediction of length of stay (LOS, in days) of stroke rehabilitation inpatients was developed, based on patients' age (years) and function at admission (scored on the Functional Independence Measure, FIM). One hundred and twenty-nine cases, consecutively admitted to three free-standing rehabilitation centres in Italy, were analyzed. A multiple linear regression using forward stepwise selection procedure was adopted. Median admission and discharge scores were: 57 and 75 for the total FIM score, 29 and 48 for the 13-item motor FIM subscore, 29 and 30 for the 5-item cognitive FIM subscore (potential range: 18-126, 13-91, 5-35, respectively). Median LOS was 44 days (interquartile range 30-62). The logLOS predictive model included three FIM items ("toilet transfer", TTr; "social interaction"; "expression") and patient's age (R2 = 0.48). TTr alone explained 31.3% of the variance of logLOS. These results are consistent with previous American studies, showing that FIM scores at admission are strong predictors of patients' LOS, with the transfer items having the greatest predictive power.
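
    A minimal sketch of forward stepwise selection for a log(LOS) model follows; the predictor names are placeholders for the FIM items in the study, the data are synthetic, and a simple R²-based step criterion stands in for the study's selection procedure.

    ```python
    # A minimal sketch of forward stepwise regression on synthetic log(LOS) data.
    import numpy as np

    def forward_stepwise(X, y, names, k_max=4):
        """Greedily add the predictor that most increases R^2 at each step."""
        selected, remaining = [], list(range(X.shape[1]))
        for _ in range(k_max):
            best = None
            for j in remaining:
                cols = selected + [j]
                A = np.column_stack([np.ones(len(y)), X[:, cols]])
                beta, *_ = np.linalg.lstsq(A, y, rcond=None)
                r2 = 1 - np.sum((y - A @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
                if best is None or r2 > best[1]:
                    best = (j, r2)
            selected.append(best[0])
            remaining.remove(best[0])
            print("added", names[best[0]], "cumulative R^2 =", round(best[1], 3))
        return selected

    rng = np.random.default_rng(2)
    n = 129
    X = rng.normal(size=(n, 5))
    log_los = 3.6 + 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.5, n)
    forward_stepwise(X, log_los,
                     ["toilet_transfer", "social_interaction", "expression", "age", "other_FIM"])
    ```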

  12. Functional mixed effects spectral analysis

    PubMed Central

    KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG

    2011-01-01

    In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437

  13. Unit Price Scaling Trends for Chemical Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Wei; Sathre, Roger; William R. Morrow, III

    2015-08-01

    To facilitate early-stage life-cycle techno-economic modeling of emerging technologies, here we identify scaling relations between unit price and sales quantity for a variety of chemical products of three categories - metal salts, organic compounds, and solvents. We collect price quotations for lab-scale and bulk purchases of chemicals from both U.S. and Chinese suppliers. We apply a log-log linear regression model to estimate the price discount effect. Using the median discount factor of each category, one can infer bulk prices of products for which only lab-scale prices are available. We conduct out-of-sample tests showing that most of the price proxies deviate from their actual reference prices by a factor less than ten. We also apply the bootstrap method to determine if a sample median discount factor should be accepted for price approximation. We find that appropriate discount factors for metal salts and for solvents are both -0.56, while that for organic compounds is -0.67 and is less representative due to greater extent of product heterogeneity within this category.
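
    A minimal sketch of the log-log regression and the bulk-price inference described above; all prices and quantities are invented for illustration.

    ```python
    # A minimal sketch of the log-log price-quantity regression; data are made up.
    import numpy as np

    qty = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])   # purchase quantity (kg)
    price = np.array([120.0, 40.0, 11.0, 3.5, 1.0])   # unit price ($/kg)

    # Fit log10(price) = a + b * log10(quantity); b is the discount factor.
    b, a = np.polyfit(np.log10(qty), np.log10(price), 1)
    print("discount factor b =", round(b, 2))         # the study reports ~ -0.56 to -0.67

    # Infer a bulk (1000 kg) price from a lab-scale (0.1 kg) quote:
    lab_qty, lab_price, bulk_qty = 0.1, 120.0, 1000.0
    bulk_price = lab_price * (bulk_qty / lab_qty) ** b
    print("inferred bulk price:", round(bulk_price, 2), "$/kg")
    ```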

  14. Theoretical study of the acid-base properties of the montmorillonite/electrolyte interface: influence of the surface heterogeneity and ionic strength on the potentiometric titration curves.

    PubMed

    Zarzycki, Piotr; Thomas, Fabien

    2006-10-15

    The parallel shape of the potentiometric titration curves for montmorillonite suspension is explained using the surface complexation model and taking into account the surface heterogeneity. The homogeneous models give accurate predictions only if they assume unphysically large values of the equilibrium constants for the exchange process occurring on the basal plane. However, the assumption that the basal plane is energetically heterogeneous allows the experimental data (reported by Avena and De Pauli [M. Avena, C.P. De Pauli, J. Colloid Interface Sci. 202 (1998) 195-204]) to be fitted with a reasonable value of the exchange equilibrium constant, 1.26 (suggested by Fletcher and Sposito [P. Fletcher, G. Sposito, Clay Miner. 24 (1989) 375-391]). Moreover, we observed the typical behavior of the point of zero net proton charge (pznpc) as a function of the logarithm of the electrolyte concentration (log[C]). We showed that the slope of the linear dependence, pznpc=f(log[C]), is proportional to the number of isomorphic substitutions in the crystal phase, which has also been observed in experimental studies.

  15. Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma.

    PubMed

    Gilson, Matthieu; Fukai, Tomoki

    2011-01-01

    Spike-timing-dependent plasticity (STDP) modifies the weight (or strength) of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear. Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks sufficiently strong competition required to obtain a clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP. Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to its specific weight dependence, this new model can produce significantly broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further in the tail of (very) large weights. Strong weights are functionally important to enhance the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network. These properties of log-STDP are compared with those of previous models. Through long-tail weight distributions, log-STDP achieves both stable dynamics and robust competition of synapses, which are crucial for spike-based information processing.

  16. Thermal inactivation of Salmonella spp. in pork burger patties.

    PubMed

    Gurman, P M; Ross, T; Holds, G L; Jarrett, R G; Kiermeier, A

    2016-02-16

    Predictive models, to estimate the reduction in Escherichia coli O157:H7 concentration in beef burgers, have been developed to inform risk management decisions; no analogous model exists for Salmonella spp. in pork burgers. In this study, "Extra Lean" and "Regular" fat pork minces were inoculated with Salmonella spp. (Salmonella 4,[5],12,i:-, Salmonella Senftenberg and Salmonella Typhimurium) and formed into pork burger patties. Patties were cooked on an electric skillet (to imitate home cooking) to one of seven internal temperatures (46, 49, 52, 55, 58, 61, 64 °C) and Salmonella enumerated. A generalised linear logistic regression model was used to develop a predictive model for the Salmonella concentration based on the internal endpoint temperature. It was estimated that in pork mince with a fat content of 6.1%, Salmonella survival decreases by 0.2407 log10 CFU/g for each 1 °C increase in internal endpoint temperature, with a 5-log10 reduction in Salmonella concentration estimated to occur when the geometric centre temperature reaches 63 °C. The fat content influenced the rate of Salmonella inactivation (P=0.043), with Salmonella survival increasing as fat content increased, though this effect became negligible as the temperature approached 62 °C. Fat content increased the time required for patties to achieve a specified internal temperature (P=0.0106 and 0.0309 for the linear and quadratic terms, respectively), indicating that reduced fat pork mince may reduce the risk of salmonellosis from consumption of pork burgers. Salmonella serovar did not significantly affect the model intercepts (P=0.86) or slopes (P=0.10) of the fitted logistic curve. This predictive model can be applied to estimate the reduction in Salmonella in pork burgers after cooking to a specific endpoint temperature and hence to assess food safety risk. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
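
    As a rough illustration of applying the reported estimates (a 0.2407 log10 CFU/g decrease per 1 °C, anchored at a 5-log10 reduction at 63 °C), the sketch below linearizes the fitted relationship; the study's actual model was a logistic curve, so this approximation is only meaningful for interior temperatures near the anchor.

    ```python
    # A minimal sketch of applying the reported slope; a linearized approximation,
    # not the study's fitted logistic model.
    def predicted_log_reduction(temp_c, slope=0.2407, ref_temp=63.0, ref_reduction=5.0):
        """Approximate log10 reduction at a given internal endpoint temperature (°C)."""
        return ref_reduction + slope * (temp_c - ref_temp)

    for t in (55, 58, 61, 63, 64):
        print(f"{t} °C -> ~{predicted_log_reduction(t):.2f} log10 reduction")
    ```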

  17. Coronal Heating and the Increase of Coronal Luminosity with Magnetic Flux

    NASA Technical Reports Server (NTRS)

    Moore, R. L.; Falconer, D. A.; Porter, J. G.; Hathaway, D. H.; Six, N. Frank (Technical Monitor)

    2002-01-01

    We present the observed scaling of coronal luminosity with magnetic flux in a set of quiet regions. Comparison of this with the observed scaling found for active regions suggests an underlying difference between coronal heating in active regions and quiet regions. From SOHO/EIT coronal images and SOHO/MDI magnetograms of four similar large quiet regions, we measure L(sub corona) and Phi(sub total) in random subregions ranging in area from about four supergranules [(70,000 km)(exp 2)] to about 100 supergranules [(0.5 R(sub sun))(exp 2)], where L(sub corona) is the luminosity of the corona in a subregion and Phi(sub total) is the flux content of the magnetic network in the subregion. This sampling of our quiet regions yields a correlation plot of Log L(sub corona) vs Log Phi(sub total) appropriate for comparison with the corresponding plot for active regions. For our quiet regions, the mean values of L(sub corona) and Phi(sub total) both increase linearly with area (simply because each set of subregions of the same area has very nearly the same mean coronal luminosity per unit area and mean magnetic flux per unit area), and in each constant-area set the values of L(sub corona) and Phi(sub total) 'scatter' about their means for that area. This results in the linear least-squares fit to the Log L(sub corona) vs Log Phi(sub total) plot having a slope somewhat less than one. If active regions mimicked our quiet regions in that all large sets of same-area active regions had the same mean coronal luminosity per unit area and same mean magnetic flux per unit area, then the least-squares fit to their Log L(sub corona) vs Log Phi(sub total) plot would also have a slope of less than one. Instead, the slope for active regions is 1.2. Given the observed factor of three scatter about the least-squares linear fit, this slope is consistent with Phi(sub total) on average increasing linearly with area (A) as in quiet regions, but L(sub corona) on average increasing as the volume (A(exp 1.5)) of the active region instead of as the area. This possibility is reasonable if the heating in active regions is a burning down of previously-stored coronal magnetic energy rather than a steady dissipation of energy flux from below as expected in quiet regions.

  18. Limits on Log Cross-Product Ratios for Item Response Models. Research Report. ETS RR-06-10

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Holland, Paul W.; Sinharay, Sandip

    2006-01-01

    Bounds are established for log cross-product ratios (log odds ratios) involving pairs of items for item response models. First, expressions for bounds on log cross-product ratios are provided for unidimensional item response models in general. Then, explicit bounds are obtained for the Rasch model and the two-parameter logistic (2PL) model.…

  19. Hearing aid fitting for visual and hearing impaired patients with Usher syndrome type IIa.

    PubMed

    Hartel, B P; Agterberg, M J H; Snik, A F; Kunst, H P M; van Opstal, A J; Bosman, A J; Pennings, R J E

    2017-08-01

    Usher syndrome is the leading cause of hereditary deaf-blindness. Most patients with Usher syndrome type IIa start using hearing aids from a young age. A serious complaint refers to interference between sound localisation abilities and adaptive sound processing (compression), as present in today's hearing aids. The aim of this study was to investigate the effect of advanced signal processing on binaural hearing, including sound localisation. In this prospective study, patients were fitted with hearing aids offering nonlinear (compression) and linear amplification programs. Data logging was used to objectively evaluate the use of either program. Performance was evaluated with a speech-in-noise test, a sound localisation test and two questionnaires focussing on self-reported benefit. Data logging confirmed that the reported use of hearing aids was high. The linear program was used significantly more often (average use: 77%) than the nonlinear program (average use: 17%). The results for speech intelligibility in noise and sound localisation did not show a significant difference between the two types of amplification. However, the self-reported outcomes showed higher scores on 'ease of communication' and overall benefit, and significantly lower scores on disability for the new hearing aids when compared to their previous hearing aids with compression amplification. Patients with Usher syndrome type IIa prefer linear amplification over nonlinear amplification when fitted with novel hearing aids. Apart from a significantly higher logged use, no difference in speech in noise and sound localisation was observed between linear and nonlinear amplification with the currently used tests. Further research is needed to evaluate the reasons behind the preference for the linear settings. © 2016 The Authors. Clinical Otolaryngology Published by John Wiley & Sons Ltd.

  20. Inactivation of Mycobacterium avium subsp. paratuberculosis during cooking of hamburger patties.

    PubMed

    Hammer, Philipp; Walte, Hans-Georg C; Matzen, Sönke; Hensel, Jann; Kiesner, Christian

    2013-07-01

    The role of Mycobacterium avium subsp. paratuberculosis (MAP) in Crohn's disease in humans has been debated for many years. Milk and milk products have been suggested as possible vectors for transmission since the beginning of this debate, whereas recent publications show that slaughtered cattle and their carcasses, meat, and organs can also serve as reservoirs for MAP transmission. The objective of this study was to generate heat-inactivation data for MAP during the cooking of hamburger patties. Hamburger patties of lean ground beef weighing 70 and 50 g, sterilized by irradiation and spiked with three different MAP strains at levels between 10² and 10⁶ CFU/ml, were cooked for 6, 5, 4, 3, and 2 min. Single-sided cooking with one flip was applied, and the temperatures within the patties were recorded by seven thermocouples. Counting of the surviving bacteria was performed by direct plating onto Herrold's egg yolk medium and a three-vial most-probable-number method by using modified Dubos medium. There was considerable variability in temperature throughout the patties during frying. In addition, the log reduction in MAP numbers showed strong variations. In patties weighing 70 g, considerable bacterial reduction of 4 log or larger could only be achieved after 6 min of cooking. For all other cooking times, the bacterial reduction was less than 2 log. Patties weighing 50 g showed a 5-log or larger reduction after cooking times of 5 and 6 min. To determine the inactivation kinetics, a log-linear regression model was used, showing a constant decrease of MAP numbers over cooking time.

  1. On the Rapid Computation of Various Polylogarithmic Constants

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Borwein, Peter; Plouffe, Simon

    1996-01-01

    We give algorithms for the computation of the d-th digit of certain transcendental numbers in various bases. These algorithms can be easily implemented (multiple precision arithmetic is not needed), require virtually no memory, and feature run times that scale nearly linearly with the order of the digit desired. They make it feasible to compute, for example, the billionth binary digit of log(2) or pi on a modest workstation in a few hours run time. We demonstrate this technique by computing the ten billionth hexadecimal digit of pi, the billionth hexadecimal digits of pi-squared, log(2) and log-squared(2), and the ten billionth decimal digit of log(9/10). These calculations rest on the observation that very special types of identities exist for certain numbers like pi, pi-squared, log(2) and log-squared(2). These are essentially polylogarithmic ladders in an integer base. A number of these identities that we derive in this work appear to be new, for example a critical identity for pi.
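
    The hexadecimal digit extraction for pi rests on the BBP identity pi = sum_k 16^-k (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)); a minimal sketch follows, using modular exponentiation so that digits at position d are reached without computing the earlier ones. Double precision limits this toy version to roughly the first eight digits returned.

    ```python
    # A minimal sketch of BBP-style hex digit extraction for pi.
    def bbp_pi_hex_digits(d, n_digits=8):
        """Hex digits of pi starting just after hex position d (d=0 gives '243f6a88')."""
        def series(j):
            # fractional part of sum_k 16^(d-k) / (8k + j)
            s = 0.0
            for k in range(d + 1):
                s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
            k, t = d + 1, 0.0
            while True:  # tail terms shrink as 16^(d-k); a few suffice
                term = 16.0 ** (d - k) / (8 * k + j)
                if term < 1e-17:
                    break
                t += term
                k += 1
            return (s + t) % 1.0

        x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
        out = ""
        for _ in range(n_digits):
            x *= 16
            digit = int(x)
            out += "0123456789abcdef"[digit]
            x -= digit
        return out

    print(bbp_pi_hex_digits(0))   # pi = 3.243f6a8885... in hexadecimal
    print(bbp_pi_hex_digits(10))  # digits starting at hex position 11
    ```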

  2. Earthquake models using rate and state friction and fast multipoles

    NASA Astrophysics Data System (ADS)

    Tullis, T.

    2003-04-01

    The most realistic current earthquake models employ laboratory-derived non-linear constitutive laws. These are the rate and state friction laws having both a non-linear viscous or direct effect and an evolution effect in which frictional resistance depends on time of stationary contact and has a memory of past slip velocity that fades with slip. The frictional resistance depends on the log of the slip velocity as well as the log of stationary hold time, and the fading memory involves an approximately exponential decay with slip. Due to the nonlinearity of these laws, analytical earthquake models are not attainable and numerical models are needed. The situation is even more difficult if true dynamic models are sought that deal with inertial forces and slip velocities on the order of 1 m/s as are observed during dynamic earthquake slip. Additional difficulties that exist if the dynamic slip phase of earthquakes is modeled arise from two sources. First, many physical processes might operate during dynamic slip, but they are only poorly understood, the relative importance of the processes is unknown, and the processes are even more nonlinear than those described by the current rate and state laws. Constitutive laws describing such behaviors are still being developed. Second, treatment of inertial forces and the influence that dynamic stresses from elastic waves may have on slip on the fault requires keeping track of the history of slip on remote parts of the fault as far into the past as it takes waves to travel from there. This places even more stringent requirements on computer time. Challenges for numerical modeling of complete earthquake cycles are that both time steps and mesh sizes must be small. Time steps must be milliseconds during dynamic slip, and yet models must represent earthquake cycles 100 years or more in length; methods using adaptive step sizes are essential. Element dimensions need to be on the order of meters, both to approximate continuum behavior adequately and to model microseismicity as well as large earthquakes. In order to model significant sized earthquakes this requires millions of elements. Modeling methods like the boundary element method that involve Green's functions normally require computation times that increase with the square of the number N of elements, so using large N becomes impossible. We have adapted the Fast Multipole method to this problem, in which the influences of sufficiently remote elements are grouped together and the elements are indexed such that the computations are more efficient when run on parallel computers. Compute time varies with N log N rather than N squared. Computer programs are available that use this approach (http://www.servogrid.org/slide/GEM/PARK). Whether the multipole approach can be adapted to dynamic modeling is unclear.
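
    For concreteness, here is a minimal sketch of the rate- and state-dependent friction law the abstract describes, in the standard Dieterich-Ruina form with the "aging" state evolution; all parameter values are illustrative, not the paper's.

    ```python
    # A minimal sketch of Dieterich-Ruina rate-and-state friction; values illustrative.
    import numpy as np

    mu0, a, b = 0.6, 0.010, 0.015   # reference friction; direct- and evolution-effect amplitudes
    V0, Dc = 1e-6, 1e-5             # reference velocity (m/s); characteristic slip distance (m)

    def friction(V, theta):
        # log direct effect in slip velocity V plus log dependence on state theta
        return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

    def dtheta_dt(V, theta):
        # "aging" form of state evolution: heals at rest, decays with slip
        return 1.0 - V * theta / Dc

    # At steady state (dtheta/dt = 0 -> theta = Dc/V) friction is log-linear in V,
    # and velocity-weakening when b > a.
    for V in (1e-8, 1e-6, 1e-4):
        print(f"V = {V:.0e} m/s, steady-state mu = {friction(V, Dc / V):.4f}")
    ```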

  3. Substituted benzotriazoles as inhibitors of copper corrosion in borate buffer solutions

    NASA Astrophysics Data System (ADS)

    Agafonkina, M. O.; Andreeva, N. P.; Kuznetsov, Yu. I.; Timashev, S. F.

    2017-08-01

    The adsorption of substituted 1,2,3-benzotriazoles (R-BTAs) onto copper is measured via ellipsometry in a pure borate buffer (pH 7.4) and satisfactorily described by Temkin's isotherm. The adsorption free energy (-ΔG_a^0) values of these azoles are determined. The (-ΔG_a^0) values are found to rise as their hydrophobicity, characterized by the logarithm of the partition coefficient of a substituted BTA in a model octanol-water system (log P), grows. The minimum concentration sufficient for the spontaneous passivation of copper (C_min) and the shift in the potential of local copper depassivation with chlorides (E_pt) after an azole is added to the solution (i.e., ΔE = E_pt^in - E_pt^backgr, characterizing the ability of its adsorption to stabilize passivation) are determined in the same solution containing a corrosion additive (0.01 M NaCl) for each azole under study. Both criteria of the passivating properties of azoles (log C_min and ΔE) are shown to correlate linearly with log P, testifying to the role played by the surface activity of this family of organic inhibitors in protecting copper in an aqueous solution.

  4. Historic Mining and Agriculture as Indicators of Occurrence and Abundance of Widespread Invasive Plant Species

    PubMed Central

    Calinger, Kellen; Calhoon, Elisabeth; Chang, Hsiao-chi; Whitacre, James; Wenzel, John; Comita, Liza; Queenborough, Simon

    2015-01-01

    Anthropogenic disturbances often change ecological communities and provide opportunities for non-native species invasion. Understanding the impacts of disturbances on species invasion is therefore crucial for invasive species management. We used generalized linear mixed effects models to explore the influence of land-use history and distance to roads on the occurrence and abundance of two invasive plant species (Rosa multiflora and Berberis thunbergii) in a 900-ha deciduous forest in the eastern U.S.A., the Powdermill Nature Reserve. Although much of the reserve has been continuously forested since at least 1939, aerial photos revealed a variety of land-uses since then including agriculture, mining, logging, and development. By 2008, both R. multiflora and B. thunbergii were widespread throughout the reserve (occurring in 24% and 13% of 4417 10-m diameter regularly-placed vegetation plots, respectively) with occurrence and abundance of each varying significantly with land-use history. Rosa multiflora was more likely to occur in historically farmed, mined, logged or developed plots than in plots that remained forested, (log odds of 1.8 to 3.0); Berberis thunbergii was more likely to occur in plots with agricultural, mining, or logging history than in plots without disturbance (log odds of 1.4 to 2.1). Mining, logging, and agriculture increased the probability that R. multiflora had >10% cover while only past agriculture was related to cover of B. thunbergii. Proximity to roads was positively correlated with the occurrence of R. multiflora (a 0.26 increase in the log odds for every 1-m closer) but not B. thunbergii, and roads had no impact on the abundance of either species. Our results indicated that a wide variety of disturbances may aid the introduction of invasive species into new habitats, while high-impact disturbances such as agriculture and mining increase the likelihood of high abundance post-introduction. PMID:26046534

  5. Kinetics of hydrogen peroxide decomposition by catalase: hydroxylic solvent effects.

    PubMed

    Raducan, Adina; Cantemir, Anca Ruxandra; Puiu, Mihaela; Oancea, Dumitru

    2012-11-01

    The effect of water-alcohol (methanol, ethanol, propan-1-ol, propan-2-ol, ethane-1,2-diol and propane-1,2,3-triol) binary mixtures on the kinetics of hydrogen peroxide decomposition in the presence of bovine liver catalase is investigated. In all solvents, the activity of catalase is smaller than in water. The results are discussed on the basis of a simple kinetic model. The kinetic constants for product formation through enzyme-substrate complex decomposition and for inactivation of catalase are estimated. The organic solvents are characterized by several physical properties: dielectric constant (D), hydrophobicity (log P), concentration of hydroxyl groups ([OH]), polarizability (α), Kamlet-Taft parameter (β) and Kosower parameter (Z). The relationships between the initial rate, kinetic constants and medium properties are analyzed by linear and multiple linear regression.

  6. [Scale Relativity Theory in living beings' morphogenesis: fractal, determinism and chance].

    PubMed

    Chaline, J

    2012-10-01

    The Scale Relativity Theory has many biological applications, from linear to non-linear and from classical mechanics to quantum mechanics. Self-similar laws have been used as models for the description of a huge number of biological systems. These laws may explain the origin of basal life structures. Log-periodic behaviors of acceleration or deceleration can be applied to branching macroevolution and to the time sequences of major evolutionary leaps. The existence of such a law does not mean that the role of chance in evolution is reduced, but instead that randomness and contingency may occur within a framework which may itself be structured in a partly statistical way. The Scale Relativity Theory can open new perspectives in evolution. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  7. Fallon, Nevada FORGE Lithology Logs and Well 21-31 Drilling Data

    DOE Data Explorer

    Blankenship, Doug; Hinz, Nicholas; Faulds, James

    2018-03-11

    This submission includes lithology logs for all Fallon FORGE area wells; determined from core, cuttings, and thin section. Wells included are 84-31, 21-31, 82-36, FOH-3D, 62-36, 18-5, 88-24, 86-25, FOH-2, 14-36, 17-16, 34-33, 35A-11, 51A-20, 62-15, 72-7, 86-15, Carson_Strat_1_36-32, and several others. Lithology logs last updated 3/13/2018 with confirmation well 21-31 data, and revisited existing wells. Also included is well logging data for Fallon FORGE 21-31. Well logging data includes daily reports, well logs (drill rate, lithology, fractures, mud losses, minerals, temperature, gases, and descriptions), mud reports, drilling parameter plots, daily mud loss summaries, survey reports, progress reports, plan view maps (easting, northing), and wireline logs (caliper [with GR], triple combo [GR, caliper, SP, resistivity, array induction, density, photoelectric factor, and neutron porosity], array induction with linear correlation [GR, SP, Array Induction, caliper, conductivity], and monopole compression dipole shear [GR, SP, Caliper, sonic porosity, delta-T compressional, and delta-T shear])

  8. Assessment of passive drag in swimming by numerical simulation and analytical procedure.

    PubMed

    Barbosa, Tiago M; Ramos, Rui; Silva, António J; Marinho, Daniel A

    2018-03-01

    The aim was to compare the passive drag-gliding underwater by a numerical simulation and an analytical procedure. An Olympic swimmer was scanned by computer tomography and modelled gliding at a 0.75-m depth in the streamlined position. Steady-state computer fluid dynamics (CFD) analyses were performed on Fluent. A set of analytical procedures was selected concurrently. Friction drag (D_f), pressure drag (D_pr), total passive drag force (D_f+pr) and drag coefficient (C_D) were computed between 1.3 and 2.5 m·s^-1 by both techniques. D_f+pr ranged from 45.44 to 144.06 N with CFD and from 46.03 to 167.06 N with the analytical procedure (differences: from 1.28% to 13.77%). C_D ranged between 0.698 and 0.622 by CFD, and between 0.657 and 0.644 by the analytical procedures (differences: 0.40-6.30%). Linear regression models showed a very high association for D_f+pr plotted in absolute values (R^2 = 0.98) and after log-log transformation (R^2 = 0.99). The C_D also obtained a very high adjustment for both absolute (R^2 = 0.97) and log-log plots (R^2 = 0.97). The bias for the D_f+pr was 8.37 N, and 0.076 N after logarithmic transformation. D_f represented between 15.97% and 18.82% of the D_f+pr by the CFD, and between 14.66% and 16.21% by the analytical procedures. Therefore, despite the bias, analytical procedures offer a feasible way of gathering insight on one's hydrodynamic characteristics.

  9. Using measured octanol-air partition coefficients to explain environmental partitioning of organochlorine pesticides.

    PubMed

    Shoeib, Mahiba; Harner, Tom

    2002-05-01

    Octanol-air partition coefficients (Koa) were measured directly for 19 organochlorine (OC) pesticides over the temperature range of 5 to 35 degrees C. Values of log Koa at 25 degrees C ranged over three orders of magnitude, from 7.4 for hexachlorobenzene to 10.1 for 1,1-dichloro-2,2-bis(p-chlorophenyl) ethane. Measured values were compared to values calculated as KowRT/H (where R is the ideal gas constant [8.314 J mol(-1) K(-1)], T is absolute temperature, and H is Henry's law constant) and were, in general, larger. Discrepancies of up to three orders of magnitude were observed, highlighting the need for direct measurements of Koa. Plots of Koa versus inverse absolute temperature exhibited a log-linear correlation. Enthalpies of phase transition between octanol and air (deltaHoa) were determined from the temperature slopes and were in the range of 56 to 105 kJ mol(-1). Activity coefficients in octanol (gamma(o)) were determined from Koa and reported supercooled liquid vapor pressures (pL(o)), and these were in the range of 0.3 to 12, indicating near-ideal solution behavior. Differences in Koa values for structural isomers of hexachlorocyclohexane were also explored. A Koa-based model was described for predicting the partitioning of OC pesticides to aerosols and used to calculate particulate fractions at 25 and -10 degrees C. The model also agreed well with experimental results for several OC pesticides that were equilibrated with urban aerosols in the laboratory. A log-log regression of the particle-gas partition coefficient versus Koa had a slope near unity, indicating that octanol is a good surrogate for the aerosol organic matter.
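
    A minimal sketch of the indirect estimate that the measurements are compared against, Koa = Kow·RT/H; the log Kow and Henry's law constant below are hypothetical values for illustration, not data from the study.

    ```python
    # A minimal sketch of the Koa = Kow * R * T / H calculation; inputs hypothetical.
    import math

    R = 8.314        # ideal gas constant, J mol^-1 K^-1 (= Pa m^3 mol^-1 K^-1)
    T = 298.15       # absolute temperature, K
    log_kow = 5.5    # hypothetical octanol-water partition coefficient (log10)
    H = 1.0          # hypothetical Henry's law constant, Pa m^3 mol^-1

    koa = (10 ** log_kow) * R * T / H    # dimensionless, since RT and H share units
    print("log Koa =", round(math.log10(koa), 2))
    ```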

  10. Ranking contributing areas of salt and selenium in the Lower Gunnison River Basin, Colorado, using multiple linear regression models

    USGS Publications Warehouse

    Linard, Joshua I.

    2013-01-01

    Mitigating the effects of salt and selenium on water quality in the Grand Valley and lower Gunnison River Basin in western Colorado is a major concern for land managers. Previous modeling indicated means to improve the models by including more detailed geospatial data and a more rigorous method for developing the models. After evaluating all possible combinations of geospatial variables, four multiple linear regression models resulted that could estimate irrigation-season salt yield, nonirrigation-season salt yield, irrigation-season selenium yield, and nonirrigation-season selenium yield. The adjusted r-squared and the residual standard error (in units of log-transformed yield) of the models were, respectively, 0.87 and 2.03 for the irrigation-season salt model, 0.90 and 1.25 for the nonirrigation-season salt model, 0.85 and 2.94 for the irrigation-season selenium model, and 0.93 and 1.75 for the nonirrigation-season selenium model. The four models were used to estimate yields and loads from contributing areas corresponding to 12-digit hydrologic unit codes in the lower Gunnison River Basin study area. Each of the 175 contributing areas was ranked according to its estimated mean seasonal yield of salt and selenium.

  11. Shape of growth-rate distribution determines the type of Non-Gibrat’s Property

    NASA Astrophysics Data System (ADS)

    Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki

    2011-11-01

    In this study, the authors examine exhaustive business data on Japanese firms, which cover nearly all companies in the mid- and large-scale ranges in terms of firm size, to reach several key findings on profits/sales distribution and business growth trends. Here, profits denote net profits. First, detailed balance is observed not only in profits data but also in sales data. Furthermore, the growth-rate distribution of sales has wider tails than the linear growth-rate distribution of profits in log-log scale. On the one hand, in the mid-scale range of profits, the probability of positive growth decreases and the probability of negative growth increases symmetrically as the initial value increases. This is called Non-Gibrat’s First Property. On the other hand, in the mid-scale range of sales, the probability of positive growth decreases as the initial value increases, while the probability of negative growth hardly changes. This is called Non-Gibrat’s Second Property. Under detailed balance, Non-Gibrat’s First and Second Properties are analytically derived from the linear and quadratic growth-rate distributions in log-log scale, respectively. In both cases, the log-normal distribution is inferred from Non-Gibrat’s Properties and detailed balance. These analytic results are verified by empirical data. Consequently, this clarifies the notion that the difference in shapes between growth-rate distributions of sales and profits is closely related to the difference between the two Non-Gibrat’s Properties in the mid-scale range.

  12. High-resolution vertical profiles of groundwater electrical conductivity (EC) and chloride from direct-push EC logs

    NASA Astrophysics Data System (ADS)

    Bourke, Sarah A.; Hermann, Kristian J.; Hendry, M. Jim

    2017-11-01

    Elevated groundwater salinity associated with produced water, leaching from landfills or secondary salinity can degrade arable soils and potable water resources. Direct-push electrical conductivity (EC) profiling enables rapid, relatively inexpensive, high-resolution in-situ measurements of subsurface salinity, without requiring core collection or installation of groundwater wells. However, because the direct-push tool measures the bulk EC of both solid and liquid phases (ECa), incorporation of ECa data into regional or historical groundwater data sets requires the prediction of pore water EC (ECw) or chloride (Cl-) concentrations from measured ECa. Statistical linear regression and physically based models for predicting ECw and Cl- from ECa profiles were tested on a brine plume in central Saskatchewan, Canada. A linear relationship between ECa/ECw and porosity was more accurate for predicting ECw and Cl- concentrations than a power-law relationship (Archie's Law). Despite clay contents of up to 96%, the addition of terms to account for electrical conductance in the solid phase did not improve model predictions. In the absence of porosity data, statistical linear regression models adequately predicted ECw and Cl- concentrations from direct-push ECa profiles (ECw = 5.48 ECa + 0.78, R^2 = 0.87; Cl- = 1,978 ECa - 1,398, R^2 = 0.73). These statistical models can be used to predict ECw in the absence of lithologic data and will be particularly useful for initial site assessments. The more accurate linear physically based model can be used to predict ECw and Cl- as porosity data become available and the site-specific ECw-Cl- relationship is determined.
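
    A minimal sketch applying the reported regression equations; units follow the study's conventions, and the example ECa readings below are invented.

    ```python
    # A minimal sketch of converting bulk EC to pore-water EC and chloride using
    # the regressions reported above; example inputs are made up.
    def ecw_from_eca(eca):
        """Pore-water EC from bulk EC (units as in the study)."""
        return 5.48 * eca + 0.78

    def cl_from_eca(eca):
        """Chloride concentration from bulk EC (units as in the study)."""
        return 1978.0 * eca - 1398.0

    for eca in (0.8, 1.5, 3.0):
        print(f"ECa = {eca}: ECw ~ {ecw_from_eca(eca):.2f}, Cl- ~ {cl_from_eca(eca):.0f}")
    ```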

  13. On the null distribution of Bayes factors in linear regression

    USDA-ARS?s Scientific Manuscript database

    We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...

  14. Modeling Count Outcomes from HIV Risk Reduction Interventions: A Comparison of Competing Statistical Models for Count Responses

    PubMed Central

    Xia, Yinglin; Morrison-Beedy, Dianne; Ma, Jingming; Feng, Changyong; Cross, Wendi; Tu, Xin

    2012-01-01

    Modeling count data from sexual behavioral outcomes involves many challenges, especially when the data exhibit a preponderance of zeros and overdispersion. In particular, the popular Poisson log-linear model is not appropriate for modeling such outcomes. Although alternatives exist for addressing both issues, they are not widely and effectively used in sexual health research, especially in HIV prevention intervention and related studies. In this paper, we discuss how to analyze count outcomes with an excess of zeros and overdispersion and introduce appropriate model-fit indices for comparing the performance of competing models, using data from a real study on HIV prevention intervention. This in-depth look at common issues arising from studies involving behavioral outcomes will promote sound statistical analyses and facilitate research in this and other related areas. PMID:22536496

  15. Contextual Multi-armed Bandits under Feature Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Seyoung; Nam, Jun Hyun; Mo, Sangwoo

    We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features. For the case of identical noise on features across actions, we propose an algorithm, coined NLinRel, having O(T⁷/₈(log(dT)+K√d)) regret bound for T rounds, K actions, and d-dimensional feature vectors. Next, for the case of non-identical noise, we observe that popular linear hypotheses, including NLinRel, cannot achieve such sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy algorithm has an O(T²/₃ √(log d)) regret bound with respect to the optimal linear hypothesis. Utilizing our theoretical understanding of the Gaussian case, we also design a practical variant of NLinRel, coined Universal-NLinRel, for arbitrary feature distributions. It first runs NLinRel to find the 'true' coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the statistical feature information. We justify the performance of Universal-NLinRel on both synthetic and real-world datasets.

  16. An integrated 3D log processing optimization system for small sawmills in central Appalachia

    Treesearch

    Wenshu Lin; Jingxin Wang

    2013-01-01

    An integrated 3D log processing optimization system was developed to perform 3D log generation, opening face determination, headrig log sawing simulation, flitch edging and trimming simulation, cant resawing, and lumber grading. A circular cross-section model, together with 3D modeling techniques, was used to reconstruct 3D virtual logs. Internal log defects (knots)...

  17. Functional response of ungulate browsers in disturbed eastern hemlock forests

    USGS Publications Warehouse

    DeStefano, Stephen

    2015-01-01

    Ungulate browsing in predator depleted North American landscapes is believed to be causing widespread tree recruitment failures. However, canopy disturbances and variations in ungulate densities are sources of heterogeneity that can buffer ecosystems against herbivory. Relatively little is known about the functional response (the rate of consumption in relation to food availability) of ungulates in eastern temperate forests, and therefore how “top down” control of vegetation may vary with disturbance type, intensity, and timing. This knowledge gap is relevant in the Northeastern United States today with the recent arrival of hemlock woolly adelgid (HWA; Adelges tsugae) that is killing eastern hemlocks (Tsuga canadensis) and initiating salvage logging as a management response. We used an existing experiment in central New England begun in 2005, which simulated severe adelgid infestation and intensive logging of intact hemlock forest, to examine the functional response of combined moose (Alces americanus) and white-tailed deer (Odocoileus virginianus) foraging in two different time periods after disturbance (3 and 7 years). We predicted that browsing impacts would be linear or accelerating (Type I or Type III response) in year 3 when regenerating stem densities were relatively low and decelerating (Type II response) in year 7 when stem densities increased. We sampled and compared woody regeneration and browsing among logged and simulated insect attack treatments and two intact controls (hemlock and hardwood forest) in 2008 and again in 2012. We then used AIC model selection to compare the three major functional response models (Types I, II, and III) of ungulate browsing in relation to forage density. We also examined relative use of the different stand types by comparing pellet group density and remote camera images. In 2008, total and proportional browse consumption increased with stem density, and peaked in logged plots, revealing a Type I response. In 2012, stem densities were greatest in girdled plots, but proportional browse consumption was highest at intermediate stem densities in logged plots, exhibiting a Type III (rather than a Type II) functional response. Our results revealed shifting top–down control by herbivores at different stages of stand recovery after disturbance and in different understory conditions resulting from logging vs. simulated adelgid attack. If forest managers wish to promote tree regeneration in hemlock stands that is more resistant to ungulate browsers, leaving HWA-infested stands unmanaged may be a better option than preemptively logging them.
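
    A minimal sketch of the model-selection step described above: fit Holling Type I, II, and III functional-response forms to consumption-versus-density data and compare them by AIC. The data points below are invented for illustration.

    ```python
    # A minimal sketch of comparing Holling functional responses by AIC; data made up.
    import numpy as np
    from scipy.optimize import curve_fit

    N = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)  # stem density
    C = np.array([0.1, 0.3, 1.0, 2.8, 6.0, 8.5, 9.5])     # browse consumed

    type1 = lambda N, a: a * N                             # linear
    type2 = lambda N, a, h: a * N / (1 + a * h * N)        # decelerating
    type3 = lambda N, a, h: a * N**2 / (1 + a * h * N**2)  # sigmoid

    def aic(f, p0):
        popt, _ = curve_fit(f, N, C, p0=p0, maxfev=10000)
        rss = np.sum((C - f(N, *popt)) ** 2)
        k = len(popt) + 1                                  # + error variance
        return len(N) * np.log(rss / len(N)) + 2 * k

    print("AIC   Type I:", round(aic(type1, [0.3]), 2))
    print("AIC  Type II:", round(aic(type2, [0.5, 0.1]), 2))
    print("AIC Type III:", round(aic(type3, [0.05, 0.1]), 2))
    ```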

  18. A hybrid machine learning model to estimate nitrate contamination of production zone groundwater in the Central Valley, California

    NASA Astrophysics Data System (ADS)

    Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.

    2016-12-01

    A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field and county scale nitrogen mass balance, historical and current landuse, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age were assigned to over 6,000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM) was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The top five most important predictor variables included oxidation/reduction characteristics, historical field scale nitrogen mass balance, climate, and depth to 60 year old water. Twenty-two variables were selected for the final model and final model errors for log-transformed hold-out data were R squared of 0.45 and root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement in the model and when included decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60 year old water, generally decreased with increasing natural landuse surrounding wells and increasing mean groundwater age, generally increased with increased minimum depth to high water table and with increased base flow index value. Other variables exhibited much more erratic or noisy behavior in the model making them more difficult to interpret but highlighting the usefulness of the non-linear machine learning method. 2D interaction plots show probability of anoxic groundwater conditions largely control estimated nitrate concentrations compared to the other predictors.
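
    A minimal sketch of the general workflow (not the study's actual 213-variable model): gradient boosting on a log-transformed response with scikit-learn, with a hold-out error estimate and an importance ranking; data are synthetic.

    ```python
    # A minimal sketch of a GBM on log-transformed data; all data synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n, p = 2000, 10
    X = rng.normal(size=(n, p))
    # Synthetic "log nitrate": driven mainly by two predictors, as stand-ins for
    # redox probability and field-scale nitrogen balance.
    y = 1.0 - 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    gbm = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
    gbm.fit(X_tr, y_tr)

    rmse = np.sqrt(np.mean((y_te - gbm.predict(X_te)) ** 2))
    print("hold-out RMSE:", round(rmse, 3))
    print("top predictors:", np.argsort(gbm.feature_importances_)[::-1][:3])
    ```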

  19. Studying parents and grandparents to assess genetic contributions to early-onset disease.

    PubMed

    Weinberg, Clarice R

    2003-02-01

    Suppose DNA is available from affected individuals, their parents, and their grandparents. Particularly for early-onset diseases, maternally mediated genetic effects can play a role, because the mother determines the prenatal environment. The proposed maximum-likelihood approach for the detection of apparent transmission distortion treats the triad consisting of the affected individual and his or her two parents as the outcome, conditioning on grandparental mating types. Under a null model in which the allele under study does not confer susceptibility, either through linkage or directly, and when there are no maternally mediated genetic effects, conditional probabilities for specific triads are easily derived. A log-linear model permits a likelihood-ratio test (LRT) and allows the estimation of relative penetrances. The proposed approach is robust against genetic population stratification. Missing-data methods permit the inclusion of incomplete families, even if the missing person is the affected grandchild, as is the case when an induced abortion has followed the detection of a malformation. When screening multiple markers, one can begin by genotyping only the grandparents and the affected grandchildren. LRTs based on conditioning on grandparental mating types (i.e., ignoring the parents) have asymptotic relative efficiencies that are typically >150% (per family), compared with tests based on parents. A test for asymmetry in the number of copies carried by maternal versus paternal grandparents yields an LRT specific to maternal effects. One can then genotype the parents for only the genes that passed the initial screen. Conditioning on both the grandparents' and the affected grandchild's genotypes, a third log-linear model captures the remaining information, in an independent LRT for maternal effects.

  20. Population age and initial density in a patchy environment affect the occurrence of abrupt transitions in a birth-and-death model of Taylor's law

    USGS Publications Warehouse

    Jiang, Jiang; DeAngelis, Donald L.; Zhang, B.; Cohen, J.E.

    2014-01-01

    Taylor's power law describes an empirical relationship between the mean and variance of population densities in field data, in which the variance varies as a power, b, of the mean. Most studies report values of b varying between 1 and 2. However, Cohen (2014a) showed recently that smooth changes in environmental conditions in a model can lead to an abrupt, infinite change in b. To understand what factors can influence the occurrence of an abrupt change in b, we used both mathematical analysis and Monte Carlo samples from a model in which populations of the same species settled on patches, and each population followed independently a stochastic linear birth-and-death process. We investigated how the power relationship responds to a smooth change of population growth rate, under different sampling strategies, initial population density, and population age. We showed analytically that, if the initial populations differ only in density, and samples are taken from all patches after the same time period following a major invasion event, Taylor's law holds with exponent b=1, regardless of the population growth rate. If samples are taken at different times from patches that have the same initial population densities, we calculate an abrupt shift of b, as predicted by Cohen (2014a). The loss of linearity between log variance and log mean is a leading indicator of the abrupt shift. If both initial population densities and population ages vary among patches, estimates of b lie between 1 and 2, as in most empirical studies. But the value of b declines to ~1 as the system approaches a critical point. Our results can inform empirical studies that might be designed to demonstrate an abrupt shift in Taylor's law.
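
    A minimal sketch of the sampling experiment, under assumed rates: patches evolve as independent linear birth-death processes from different initial densities, are sampled at a common time, and Taylor's exponent b is estimated from the log variance-log mean regression (the analysis above predicts b near 1 for this design).

    ```python
    # A minimal sketch of estimating Taylor's exponent b from simulated patches.
    import numpy as np

    rng = np.random.default_rng(4)

    def birth_death(n0, lam, mu, t, dt=0.01):
        """Crude discrete-time approximation of a linear birth-death process."""
        n = n0
        for _ in range(int(t / dt)):
            n += rng.poisson(lam * n * dt) - rng.binomial(n, min(1.0, mu * dt))
            if n <= 0:
                return 0
        return n

    means, variances = [], []
    for n0 in (10, 20, 40, 80, 160):       # initial densities differ across groups
        pops = [birth_death(n0, lam=1.0, mu=0.9, t=2.0) for _ in range(200)]
        means.append(np.mean(pops))
        variances.append(np.var(pops))

    b, _ = np.polyfit(np.log(means), np.log(variances), 1)
    print("estimated Taylor exponent b =", round(b, 2))   # expect ~1 for this design
    ```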

  1. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per cent of the complex amplitude be withheld from a log-amplitude and phase inversion rather than retaining them with large error-bars.
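
    A minimal sketch of the recommended data form: converting a complex EM datum and its error into log10-amplitude and phase, with first-order (linearized) error propagation; the example value is invented.

    ```python
    # A minimal sketch of the log-amplitude/phase transform with error propagation.
    import numpy as np

    def to_log_amp_phase(z, sigma):
        """z: complex datum; sigma: std. error of the real/imaginary parts."""
        amp = np.abs(z)
        log_amp = np.log10(amp)
        phase_deg = np.degrees(np.angle(z))
        sigma_log_amp = sigma / (amp * np.log(10.0))  # d(log10 A)/dA = 1/(A ln 10)
        sigma_phase_deg = np.degrees(sigma / amp)     # small-angle approximation
        return log_amp, sigma_log_amp, phase_deg, sigma_phase_deg

    z = 3e-11 * np.exp(1j * np.radians(42.0))         # illustrative CSEM field value
    print(to_log_amp_phase(z, sigma=0.05 * abs(z)))   # 5 per cent error
    ```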

  2. Comparative Pharmacodynamics of Telavancin and Vancomycin in the Neutropenic Murine Thigh and Lung Infection Models against Staphylococcus aureus

    PubMed Central

    Lepak, Alexander J.; Zhao, Miao

    2017-01-01

    The pharmacodynamics of telavancin and vancomycin were compared using neutropenic murine thigh and lung infection models. Four Staphylococcus aureus strains were included. The telavancin MIC ranged from 0.06 to 0.25 mg/liter, and the vancomycin MIC ranged from 1 to 4 mg/liter. The plasma pharmacokinetics of escalating doses (1.25, 5, 20, and 80 mg/kg of body weight) of telavancin and vancomycin were linear over the dose range. Epithelial lining fluid (ELF) pharmacokinetics for each drug revealed that penetration into the ELF mirrored the percentage of the free fraction (the fraction not protein bound) in plasma for each drug. Telavancin (0.3125 to 80 mg/kg/6 h) and vancomycin (0.3125 to 1,280 mg/kg/6 h) were administered by the subcutaneous route in treatment studies. Dose-dependent bactericidal activity against all four strains was observed in both models. A sigmoid maximum-effect model was used to determine the area under the concentration-time curve (AUC)/MIC exposure associated with net stasis and 1-log10 kill relative to the burden at the start of therapy. The 24-h plasma free drug AUC (fAUC)/MIC values associated with stasis and 1-log kill were remarkably congruent. Net stasis for telavancin was noted at fAUC/MIC values of 83 and 40.4 in the thigh and lung, respectively, and 1-log kill was noted at fAUC/MIC values of 215 and 76.4, respectively. For vancomycin, the fAUC/MIC values for stasis were 77.9 and 45.3, respectively, and those for 1-log kill were 282 and 113, respectively. The 24-h ELF total drug AUC/MIC targets in the lung model were very similar to the 24-h plasma free drug AUC/MIC targets for each drug. Integration of human pharmacokinetic data for telavancin, the results of the MIC distribution studies, and the pharmacodynamic targets identified in this study suggests that the current dosing regimen of telavancin is optimized to obtain drug exposures sufficient to treat S. aureus infections. PMID:28416551
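
    A minimal sketch of the sigmoid maximum-effect (Hill-type) exposure-response analysis used to locate stasis and 1-log kill targets; the exposure-response pairs below are invented, not the study's data.

    ```python
    # A minimal sketch of fitting an Emax/Hill model and solving for PK/PD targets.
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    auc_mic = np.array([5, 20, 50, 100, 200, 400, 800], dtype=float)
    # Change in log10 CFU versus the burden at the start of therapy (made up):
    dlog = np.array([2.0, 1.2, 0.4, -0.3, -1.0, -1.6, -1.8])

    def emax(x, e0, emax_, ec50, h):
        return e0 - emax_ * x**h / (ec50**h + x**h)

    popt, _ = curve_fit(emax, auc_mic, dlog, p0=[2.0, 4.0, 100.0, 1.0], maxfev=20000)

    stasis = brentq(lambda x: emax(x, *popt), 1, 1e4)          # net stasis: change = 0
    one_log = brentq(lambda x: emax(x, *popt) + 1.0, 1, 1e4)   # 1-log10 kill: change = -1
    print(f"fAUC/MIC for stasis ~ {stasis:.0f}, for 1-log kill ~ {one_log:.0f}")
    ```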

  3. "Geo-statistics methods and neural networks in geophysical applications: A case study"

    NASA Astrophysics Data System (ADS)

    Rodriguez Sandoval, R.; Urrutia Fucugauchi, J.; Ramirez Cruz, L. C.

    2008-12-01

    The study focuses on the Ebano-Panuco basin of northeastern Mexico, which is being explored for hydrocarbon reservoirs. These reservoirs are in limestones, and there is interest in determining porosity and permeability in the carbonate sequences. The porosity maps presented in this study are estimated by applying multiattribute and neural-network techniques, which combine geophysical logs and 3-D seismic data by means of statistical relationships. Multiattribute analysis is a process for predicting a volume of any underground petrophysical measurement from well-log and seismic data. The data consist of a series of target logs from wells which tie to a 3-D seismic volume; the target logs are neutron porosity logs. From the 3-D seismic volume a series of sample attributes is calculated. The objective is to derive a relationship between a subset of these attributes and the target log values, with the subset selected by forward stepwise regression. The analysis can be linear or nonlinear. In the linear mode, the method consists of a series of weights derived by least-squares minimization. In the nonlinear mode, a neural network is trained using the selected attributes as inputs; in this case we used a probabilistic neural network (PNN). The method is applied to a real data set from PEMEX. For better reservoir characterization, the porosity distribution was estimated using both techniques. The case showed a continuous improvement in the prediction of porosity from the multiattribute to the neural-network analysis, both in training and in validation, which are important indicators of the reliability of the results. The neural network showed an improvement in resolution over the multiattribute analysis. The final maps provide more realistic results for the porosity distribution.

  4. Performance of statistical models to predict mental health and substance abuse cost.

    PubMed

    Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K

    2006-10-26

    Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
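
    A minimal sketch of fitting two of the five specifications above in Python with statsmodels, on synthetic cost data (the study itself compared all five models and evaluated them with RMSE, MAPE, and predictive ratios):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=(500, 3)))      # illustrative risk adjusters
    mu = np.exp(X @ np.array([5.0, 0.3, -0.2, 0.4]))    # positive mean cost
    y = rng.gamma(shape=2.0, scale=mu / 2.0)            # skewed cost outcome

    # Square-root Normal model: OLS on square-root-transformed cost.
    sqrt_ols = sm.OLS(np.sqrt(y), X).fit()

    # Gamma GLM with log link on untransformed cost.
    gamma_glm = sm.GLM(y, X,
                       family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    print(sqrt_ols.params, gamma_glm.params)
    ```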

  5. A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications

    PubMed Central

    Austin, Peter C.

    2017-01-01

    Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval; by incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data on patients hospitalised with a heart attack, and we provide code in three statistical programming languages (R, SAS and Stata). PMID:29307954
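
    As an illustration of the second family, the sketch below fits a piecewise exponential model as a Poisson GLM with a log-exposure offset on person-interval data; the six rows are invented, and the cluster-specific random effects discussed above are omitted for brevity.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # One row per subject-interval: event indicator, time at risk in the
    # interval, interval indicator, and a subject-level covariate.
    event    = np.array([0., 0., 1., 0., 1., 0.])
    exposure = np.array([30., 30., 12., 30., 20., 30.])
    interval = np.array([0., 1., 1., 0., 0., 1.])    # indicator for interval 2
    x        = np.array([1., 1., 1., 0., 0., 0.])

    X = sm.add_constant(np.column_stack([interval, x]))
    fit = sm.GLM(event, X, family=sm.families.Poisson(),
                 offset=np.log(exposure)).fit()

    print(np.exp(fit.params))   # baseline rate and hazard ratios
    ```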

  6. Treatment of Amblyopia Using Personalized Dosing Strategies: Statistical Modelling and Clinical Implementation.

    PubMed

    Wallace, Michael P; Stewart, Catherine E; Moseley, Merrick J; Stephens, David A; Fielder, Alistair R

    2016-12-01

    To generate a statistical model for personalizing a patient's occlusion therapy regimen. Statistical modelling was undertaken on a combined data set of the Monitored Occlusion Treatment of Amblyopia Study (MOTAS) and the Randomized Occlusion Treatment of Amblyopia Study (ROTAS). This exercise permits the calculation of future patients' total effective dose (TED): the dose predicted to achieve their best attainable visual acuity. Daily patching regimens (hours/day) can be calculated from the TED. Occlusion data for 149 study participants with amblyopia (anisometropic in 50, strabismic in 43, and mixed in 56) were analyzed. Median time to best observed visual acuity was 63 days (25% and 75% quartiles: 28 and 91 days). Median visual acuity in the amblyopic eye at start of occlusion was 0.40 logMAR (quartiles 0.22 and 0.68 logMAR) and at end of occlusion was 0.12 (quartiles 0.025 and 0.32 logMAR). Median lower and upper estimates of TED were 120 hours (quartiles 34 and 242 hours) and 176 hours (quartiles 84 and 316 hours). The data suggest a piecewise linear relationship (P = 0.008) between patching dose-rate (hours/day) and TED, with a single breakpoint estimated at 2.16 (standard error 0.51) hours/day, suggesting doses below 2.16 hours/day are less effective. We introduce the concept of the TED of occlusion. Predictors for TED are visual acuity deficit, amblyopia type, and age at start of occlusion therapy. Dose-rates prescribed within the model range from 2.5 to 12 hours/day and can be revised dynamically throughout treatment in response to recorded patient compliance: a personalized dosing strategy.

  7. Modeling relationships between traditional preadmission measures and clinical skills performance on a medical licensure examination.

    PubMed

    Roberts, William L; Pugliano, Gina; Langenau, Erik; Boulet, John R

    2012-08-01

    Medical schools employ a variety of preadmission measures to select students most likely to succeed in the program. The Medical College Admission Test (MCAT) and the undergraduate college grade point average (uGPA) are two academic measures typically used to select medical school students. The assumption that presently used preadmission measures can predict clinical skills performance on a medical licensure examination was evaluated within a validity argument framework (Kane 1992). A hierarchical generalized linear model tested relationships between the log-odds of failing a high-stakes medical licensure performance examination and matriculants' academic and non-academic preadmission measures, controlling for student- and school-level variables. Data include 3,189 matriculants from 22 osteopathic medical schools tested in 2009-2010. In the unconditional unit-specific model, the expected average log-odds of failing the examination across medical schools was -3.05 (SE = 0.11), corresponding to a failure probability of about 5%. Student-level estimated coefficients for MCAT Verbal Reasoning scores (0.03), Physical Sciences scores (0.05), Biological Sciences scores (0.04), uGPA(science) (0.07), and uGPA(non-science) (0.26) lacked association with the log-odds of failing the COMLEX-USA Level 2-PE, controlling for all other predictors in the model. Evidence from this study shows that present preadmission measures of academic ability are not related to later clinical skills performance. Given that clinical skills performance is an important part of medical practice, selection measures should be developed to identify students who will be successful in communication and able to demonstrate the ability to systematically collect a medical history, perform a physical examination, and synthesize this information to diagnose and manage patient conditions.
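
    The quoted 5% follows directly from the reported log-odds via the inverse-logit transform; a quick check of the arithmetic:

    ```python
    import math

    log_odds = -3.05
    prob = 1.0 / (1.0 + math.exp(-log_odds))
    print(round(prob, 3))   # ~0.045, i.e. roughly the 5% failure rate cited above
    ```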

  8. One Solution of the Forward Problem of DC Resistivity Well Logging by the Method of Volume Integral Equations with Allowance for Induced Polarization

    NASA Astrophysics Data System (ADS)

    Kevorkyants, S. S.

    2018-03-01

    To theoretically study how strongly the polarization of rocks influences the results of direct-current (DC) well logging, a solution is suggested for the direct inner problem of DC electric logging in a polarizable model of a plane-layered medium containing a heterogeneity, illustrated with a three-layer model of the hosting medium. Initially, the solution is presented in the form of a traditional vector volume-integral equation of the second kind (IE2) for the electric current density vector. The vector IE2 is solved by the modified iteration-dissipation method. Through these transformations, the initial IE2 is reduced to an equation with a contraction integral operator for an axisymmetric model of electrical well logging of the three-layer polarizable medium intersected by an infinitely long circular cylinder. The latter simulates the borehole with a zone of penetration, where the sought vector consists of the radial J_r and axial J_z components (relative to the cylinder's axis). The decomposition of the obtained vector IE2 into scalar components and the discretization in the coordinates r and z lead to an inhomogeneous system of linear algebraic equations whose block coefficient matrix consists of 2x2 matrices whose elements are triple integrals of the mixed second-order derivatives of the Green's function with respect to the parameters r, z, r', and z'. With the use of analytical transformations and standard integrals, the integrals over the areas of the partition cells and the azimuthal coordinate are reduced to single integrals (with respect to the variable t = cos ϕ on the interval [-1, 1]) calculated by the Gauss method for numerical integration. For estimating the effective coefficient of polarization of the complex medium, it is suggested to use the Siegel-Komarov formula.
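
    The final quadrature step can be sketched as follows with Gauss-Legendre nodes on [-1, 1]; the integrand here is a stand-in for illustration, not the Green's-function kernel of the paper.

    ```python
    import numpy as np

    def f(t):
        # Illustrative integrand in t = cos(phi); the actual kernels involve
        # mixed second derivatives of the Green's function.
        return 1.0 / np.sqrt(2.0 - t)

    nodes, weights = np.polynomial.legendre.leggauss(16)
    print(np.sum(weights * f(nodes)))
    ```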

  9. Reservoir Models for Gas Hydrate Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Boswell, R.

    2016-12-01

    Scientific and industrial drilling programs are now providing detailed information on gas hydrate systems that will increasingly be the subject of field experiments. The need to carefully plan these programs requires reliable prediction of reservoir response to hydrate dissociation. Currently, a major emphasis in gas hydrate modeling is the integration of thermodynamic/hydrologic phenomena with the geomechanical response of both the reservoir and the bounding strata. However, also critical to the ultimate success of these efforts is the appropriate development of input geologic models, which raises several emerging issues: (1) reservoir heterogeneity; (2) understanding of the initial petrophysical characteristics of the system (reservoirs and seals), the dynamic evolution of those characteristics during active dissociation, and the interdependency of petrophysical parameters; and (3) the nature of reservoir boundaries. Heterogeneity is a ubiquitous aspect of every natural reservoir, and appropriate characterization is vital; however, heterogeneity is not random. Vertical variation can be evaluated with core and well log data, although core data are often compromised by incomplete recovery. Well logs also present interpretation challenges, particularly where reservoirs are thinly bedded, owing to limitations in vertical resolution. This imprecision extends to any petrophysical measurements derived from log data. Extrapolating log data laterally is also complex and should be supported by geologic mapping. Key petrophysical parameters include porosity, permeability in its many aspects, and water saturation. Field data collected to date suggest that the degree of hydrate saturation is strongly dependent upon reservoir quality, and that the ratio of free to bound water in the remaining pore space is likely also controlled by reservoir quality. Further, those parameters will evolve during dissociation, and not necessarily in a simple, linear way. Significant progress has also occurred in recent years in the geologic characterization of reservoir boundaries. Vertical boundaries with overlying clay-rich "seals" are now widely appreciated to have non-zero permeability, and lateral boundaries are potential sources of lateral fluid flow.

  10. Field estimates of polyurethane foam - air partition coefficients for hexachlorobenzene, alpha-hexachlorocyclohexane and bromoanisoles.

    PubMed

    Bidleman, Terry F; Nygren, Olle; Tysklind, Mats

    2016-09-01

    Partition coefficients of gaseous semivolatile organic compounds (SVOCs) between polyurethane foam (PUF) and air (KPA) are needed in the estimation of sampling rates for PUF disk passive air samplers. We determined KPA in field experiments by conducting long-term (24-48 h) air sampling to saturate PUF traps and shorter runs (2-4 h) to measure air concentrations. Sampling events were done at daily mean temperatures ranging from 1.9 to 17.5 °C. Target compounds were hexachlorobenzene (HCB), alpha-hexachlorocyclohexane (α-HCH), 2,4-dibromoanisole (2,4-DiBA) and 2,4,6-tribromoanisole (2,4,6-TriBA). KPA (mL g⁻¹) was calculated from the quantities on the PUF traps at saturation (ng g⁻¹) divided by the air concentrations (ng mL⁻¹). Enthalpies of PUF-to-air transfer (ΔHPA, kJ mol⁻¹) were determined from the slopes of log KPA/mL g⁻¹ versus 1/T(K) for HCB and the bromoanisoles; KPA of α-HCH was measured only at 14.3 to 17.5 °C, so ΔHPA was not determined. Experimental log KPA/mL g⁻¹ at 15 °C were HCB = 7.37; α-HCH = 8.08; 2,4-DiBA = 7.26 and 2,4,6-TriBA = 7.26. Experimental log KPA/mL g⁻¹ were compared with predictions based on an octanol-air partition coefficient (log KOA) model (Shoeib and Harner, 2002a) and a polyparameter linear free energy relationship (pp-LFER) model (Kamprad and Goss, 2007) using different sets of solute parameters. Predicted KPA values varied by factors of 3 to over 30, depending on the compound and the model. Such discrepancies provide incentive for experimental measurements of KPA for other SVOCs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Insulin resistance change and antiretroviral therapy exposure in HIV-infected and uninfected Rwandan women: a longitudinal analysis.

    PubMed

    Mutimura, Eugene; Hoover, Donald R; Shi, Qiuhu; Dusingize, Jean Claude; Sinayobye, Jean D'Amour; Cohen, Mardge; Anastos, Kathryn

    2015-01-01

    We longitudinally assessed predictors of insulin resistance (IR) change among HIV-uninfected and HIV-infected (ART-initiators and ART-non-initiators) Rwandan women. HIV-infected (HIV+) and uninfected (HIV-) women provided demographic and clinical measures: age, body mass index (BMI) in kg/m², fat-mass index (FMI) and fat-free-mass index (FFMI), and fasting serum glucose and insulin. The Homeostasis Model Assessment (HOMA) was calculated to estimate IR change over time in log10-transformed HOMA, measured at study enrollment or prior to ART initiation, in 3 groups: HIV- (n = 194), HIV+ ART-non-initiators (n = 95) and HIV+ ART-initiators (n = 371). ANCOVA linear regression models of change in log10-HOMA were fit, with all models including the first log10-HOMA measure as a predictor. Mean±SD log10-HOMA was -0.18±0.39 at the 1st and -0.21±0.41 at the 2nd measure, with a mean change of 0.03±0.44. In the final model (all women), BMI at the 1st HOMA measure (0.014; 95% CI=0.006-0.021 per kg/m²; p<0.001) and change in BMI from the 1st to 2nd measure (0.024; 95% CI=0.013-0.035 per kg/m²; p<0.001) predicted HOMA change. When restricted to subjects with FMI measures, FMI at the 1st HOMA measure (0.020; 95% CI=0.010-0.030 per kg/m²; p<0.001) and change in FMI from the 1st to 2nd measure (0.032; 95% CI=0.020-0.043 per kg/m²; p<0.0001) predicted change in HOMA. While ART use did not predict change in log10-HOMA, untreated HIV+ women had a significant decline in IR over time. Use or duration of AZT, d4T and EFV was not associated with HOMA change in HIV+ women. Baseline BMI and change in BMI, and in particular fat mass and change in fat mass, predicted insulin resistance change over ~3 years in HIV-infected and uninfected Rwandan women. Exposure to specific ART (d4T, AZT, EFV) did not predict insulin resistance change in ART-treated HIV-infected Rwandan women.
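
    For reference, the HOMA index analyzed above is conventionally computed from fasting glucose and insulin as follows (standard formula, not restated in the abstract; the example values are invented):

    ```python
    import math

    def homa_ir(glucose_mmol_l, insulin_uu_ml):
        """Homeostasis Model Assessment of insulin resistance (standard form)."""
        return glucose_mmol_l * insulin_uu_ml / 22.5

    # Example: fasting glucose 5.0 mmol/l and insulin 8 uU/ml.
    print(math.log10(homa_ir(5.0, 8.0)))   # log10-HOMA, the scale modelled above
    ```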

  12. A Bayesian model averaging method for improving SMT phrase table

    NASA Astrophysics Data System (ADS)

    Duan, Nan

    2013-03-01

    Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features instead of using the features in existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model being used based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. We conclude that our approach can be developed independently and integrated into current SMT pipelines directly. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decodings.

  13. Hair Manganese as an Exposure Biomarker among Welders.

    PubMed

    Reiss, Boris; Simpson, Christopher D; Baker, Marissa G; Stover, Bert; Sheppard, Lianne; Seixas, Noah S

    2016-03-01

    Quantifying exposure and dose to manganese (Mn) containing airborne particles in welding fume presents many challenges. Common biological markers such as Mn in blood or Mn in urine have not proven to be practical biomarkers even in studies where positive associations were observed. However, hair Mn (MnH) as a biomarker has the advantage over blood and urine that it is less influenced by short-term variability of Mn exposure levels because of its slow growth rate. The objective of this study was to determine whether hair can be used as a biomarker for welders exposed to manganese. Hair samples (1 cm) were collected from 47 welding school students and individual air Mn (MnA) exposures were measured for each subject. MnA levels for all days were estimated with a linear mixed model using welding type as a predictor. A 30-day time-weighted average MnA (MnA30d) exposure level was calculated for each hair sample. The association between MnH and MnA30d levels was then assessed. A linear relationship was observed between log-transformed MnA30d and log-transformed MnH. Doubling MnA30d exposure levels yields a 20% (95% confidence interval: 11-29%) increase in MnH. The association was similar for hair washed following two different wash procedures designed to remove external contamination. Hair shows promise as a biomarker for inhaled Mn exposure given the presence of a significant linear association between MnH and MnA30d levels. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
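
    The 20% figure pins down the slope of the log-log regression; a quick back-of-the-envelope check (my arithmetic, not the paper's):

    ```python
    import math

    # If doubling the 30-day average exposure multiplies hair Mn by 1.20, the
    # slope b in log(MnH) = a + b * log(MnA30d) satisfies 2**b = 1.20.
    b = math.log(1.20) / math.log(2.0)
    print(round(b, 3))   # ~0.263
    ```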

  14. Hair Manganese as an Exposure Biomarker among Welders

    PubMed Central

    Reiss, Boris; Simpson, Christopher D.; Baker, Marissa G.; Stover, Bert; Sheppard, Lianne; Seixas, Noah S.

    2016-01-01

    Quantifying exposure and dose to manganese (Mn) containing airborne particles in welding fume presents many challenges. Common biological markers such as Mn in blood or Mn in urine have not proven to be practical biomarkers even in studies where positive associations were observed. However, hair Mn (MnH) as a biomarker has the advantage over blood and urine that it is less influenced by short-term variability of Mn exposure levels because of its slow growth rate. The objective of this study was to determine whether hair can be used as a biomarker for welders exposed to manganese. Hair samples (1 cm) were collected from 47 welding school students and individual air Mn (MnA) exposures were measured for each subject. MnA levels for all days were estimated with a linear mixed model using welding type as a predictor. A 30-day time-weighted average MnA (MnA30d) exposure level was calculated for each hair sample. The association between MnH and MnA30d levels was then assessed. A linear relationship was observed between log-transformed MnA30d and log-transformed MnH. Doubling MnA30d exposure levels yields a 20% (95% confidence interval: 11–29%) increase in MnH. The association was similar for hair washed following two different wash procedures designed to remove external contamination. Hair shows promise as a biomarker for inhaled Mn exposure given the presence of a significant linear association between MnH and MnA30d levels. PMID:26409267

  15. Pulse Height Analyzer Interfacing and Computer Programming in the Environmental Laser Propagation Project

    DTIC Science & Technology

    1976-06-01

    [OCR fragments only: pieces of the report's reference list (e.g., Anton, H., Elementary Linear Algebra, John Wiley & Sons, 1973) and labels from a signal-chain block diagram in which laser and bias signals pass through a log converter and a linear amplifier to a pulse height analyzer, together with a sample trigger oscillator, scintillometers, background demodulator, and laser calibration box.]

  16. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Gary E.; Song, Joo Hyun; Lu, Wei

    2007-06-15

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.

  17. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry.

    PubMed

    Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A

    2007-06-01

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.

  18. The association of serum β-hydroxybutyrate concentration with fetal number and health indicators in late-gestation ewes in commercial meat flocks in Prince Edward Island.

    PubMed

    Ratanapob, Niorn; VanLeeuwen, John; McKenna, Shawn; Wichtel, Maureen; Rodriguez-Lecompte, Juan C; Menzies, Paula; Wichtel, Jeffrey

    2018-06-01

    Late-gestation ewes are susceptible to ketonemia resulting from the high energy requirement for fetal growth during the last few weeks of pregnancy. High lamb mortality is a possible consequence of the effects of ketonemia on both ewes and lambs. Determining risk factors for ketonemia is a fundamental step in identifying ewes at risk, in order to avoid the losses it causes. Serum β-hydroxybutyrate (BHBA) concentrations of 384 late-gestation ewe samples were determined. Physical examinations, including body condition, FAMACHA© and hygiene scoring, were performed. Udders and teeth were also examined. Fecal flotation was performed on ewe fecal samples to detect gastrointestinal helminth eggs. General feeding management practices and season at sampling were recorded. Litter sizes were retrieved from lambing records. Factors associated with log serum BHBA concentration were determined using a linear mixed model, with flock and lambing group as random effects. The mean serum BHBA concentration was 545.8 (±453.3) μmol/l. Ewes with a body condition score (BCS) of 2.5-3.5 had significantly lower log BHBA concentrations than ewes with a BCS of ≤2.0, by 19.7% (p = 0.035). Ewes with a BCS of >3.5 had a trend toward higher log BHBA concentrations compared to ewes with a BCS of 2.5-3.5. Ewes with a FAMACHA© score of 3 had significantly higher log BHBA concentrations than ewes with a FAMACHA© score of 1 or 2, by 12.1% (p = 0.049). Ewes in which gastrointestinal helminth eggs were detected had significantly higher log BHBA concentrations than ewes in which helminth eggs were not detected, by 12.3% (p = 0.040). Increased litter size was associated with higher log BHBA concentration (p ≤ 0.003), with the log BHBA concentrations of ewes carrying twins, triplets, and quadruplets or quintuplets being higher than those of ewes carrying a singleton by 19.2%, 30.4%, and 85.2%, respectively. Season at sampling confounded the association between log BHBA concentration and FAMACHA© score, and was therefore retained in the final model even though it was not statistically significant. Intra-class correlation coefficients at the flock and lambing group levels were 0.14 and 0.32, respectively. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.

  19. Estimating relative risks in multicenter studies with a small number of centers - which methods to use? A simulation study.

    PubMed

    Pedroza, Claudia; Truong, Van Thi Thanh

    2017-11-02

    Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.
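
    A minimal sketch of one recommended approach, a modified Poisson GEE with an exchangeable working correlation, on simulated clustered binary data (statsmodels; note the small-sample SE corrections recommended above are not applied here):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_centers, n_per = 20, 40
    center = np.repeat(np.arange(n_centers), n_per)
    treat = rng.integers(0, 2, size=n_centers * n_per).astype(float)
    p = np.clip(0.15 * np.exp(0.4 * treat
                              + rng.normal(0, 0.3, n_centers)[center]), 0, 1)
    y = rng.binomial(1, p)

    X = sm.add_constant(treat)
    fit = sm.GEE(y, X, groups=center, family=sm.families.Poisson(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()

    print(np.exp(fit.params[1]))   # relative risk estimate with robust SEs
    ```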

  20. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
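
    One replicate of the simulation design described above might look like this sketch (log-normal sampling error with a 40% coefficient of variation around an exponential trend; the full study ran 1,000 replications per scenario and compared seven tests):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_years, trend, cv = 10, 0.03, 0.40
    t = np.arange(n_years)
    expected = 100.0 * (1.0 + trend) ** t                 # exponential trend

    sigma = np.sqrt(np.log(1.0 + cv**2))                  # lognormal sd from CV
    counts = expected * rng.lognormal(-0.5 * sigma**2, sigma, n_years)

    # Simple linear regression on the log scale, one of the tests evaluated.
    res = stats.linregress(t, np.log(counts))
    print(res.slope, res.pvalue)
    ```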

  1. Mum, why do you keep on growing? Impacts of environmental variability on optimal growth and reproduction allocation strategies of annual plants.

    PubMed

    De Lara, Michel

    2006-05-01

    In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild its vegetative body completely (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the type of environmental variability: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, covering both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint at the effect of the different mathematical assumptions.

  2. Human Capital--Economic Growth Nexus in the Former Soviet Bloc

    ERIC Educational Resources Information Center

    Osipian, Ararat L.

    2007-01-01

    This study analyses the role and impact of higher education on per capita economic growth in the Former Soviet Bloc. It attempts to estimate the significance of educational levels for initiating substantial economic growth that now takes place in these two countries. This study estimates a system of linear and log-linear equations that account for…

  3. Dietary exposure of PBDEs resulting from a subsistence diet in three First Nation communities in the James Bay Region of Canada.

    PubMed

    Liberda, Eric N; Wainman, Bruce C; Leblanc, Alain; Dumas, Pierre; Martin, Ian; Tsuji, Leonard J S

    2011-04-01

    Concerns regarding the persistence, bioaccumulation, long-range transport, and adverse health effects of polybrominated diphenyl ethers (PBDEs) have recently come to light. PBDEs may potentially be of concern to indigenous (First Nations) people of Canada who subsist on traditional foods, but there is a paucity of information on this topic. To investigate whether the traditional diet is a major source of PBDEs in sub-Arctic First Nations populations of the Hudson Bay Lowlands (James and Hudson Bay), Ontario, Canada, a variety of tissues from wild game and fish were analyzed for PBDE content (n = 147), and dietary exposure was assessed and compared to the US EPA reference doses (RfDs). In addition, to examine the effect of isolation/industrialization on PBDE body burdens, blood plasma from three First Nations (Cree Nation of Oujé-Bougoumou, Quebec; Fort Albany First Nation, Ontario; and Weenusk First Nation [Peawanuck], Ontario, Canada) was collected (n = 54) and analyzed using a log-linear contingency model. The mean values of PBDEs in wild meats and fish, adjusted for standard consumption values and body weight, did not exceed the US EPA RfD. Log-linear modeling of the human PBDE body burden showed that body burden increases as access to manufactured goods increases. Thus, household dust from material goods containing PBDEs is likely responsible for the human exposure; the traditional First Nations diet appears to be a minor source of PBDEs. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  4. Phenanthrene and 2,2',5,5'-PCB sorption by several soils from methanol-water solutions: the effect of weathering and solute structure.

    PubMed

    Hyun, Seunghun; Kim, Minhee; Baek, Kitae; Lee, Linda S

    2010-01-01

    The sorption of phenanthrene and 2,2',5,5'-polychlorinated biphenyl (PCB52) by five differently weathered soils was measured in water and in low methanol volume fractions (f_c ≤ 0.5) as a function of the apparent solution pH (pH_app). Two weathered oxisols (A2 and DRC), a moderately weathered alfisol (Toronto), and two young soils (K5 and Webster) were used. The linear sorption coefficients K_m, which decrease log-linearly with f_c, were interpreted using a cosolvency sorption model. For phenanthrene sorption at the natural pH, the empirical constant (alpha) ranged between 0.95 and 1.14 and was in the order of oxisols (A2 and DRC)
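
    The log-linear cosolvency sorption model referred to above is commonly written in the following form (standard form from the cosolvency literature, given here as my reading of the abstract, with K_w the sorption coefficient in pure water and σ the cosolvency power of methanol):

    ```latex
    \log K_m \;=\; \log K_w \;-\; \alpha\,\sigma\,f_c
    ```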

  5. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the voltage-dependent sodium channels at the node of Ranvier, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they cannot inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining the low computational cost and ease of implementation of current noise models compared to other conductance-based and Markovian stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.

  6. Sorption behavior of microamounts of zinc on titanium oxide from aqueous solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasany, S.M.; Ghaffar, A.; Chughtai, F.A.

    1991-08-01

    To correlate soil response toward zinc, it is necessary to study its adsorption in detail on soils or on their constituents. The adsorption of microamounts of zinc on titanium oxide, prepared and characterized in this laboratory, has been studied in detail. Zinc adsorption has been found to be dependent on the pH of the aqueous solution, the amount of oxide, and the zinc concentration. Maximum adsorption is from pH 10 buffer. EDTA and cyanide ions inhibit adsorption significantly. The adsorption of other elements under optimal conditions has also been measured on this oxide. Sc(III) and Cs(I) show almost negligible adsorption. Zinc adsorption follows the linear form of the Freundlich adsorption isotherm, log C_ads = log A + (1/n) log C_bulk, with A = 0.48 mol/g and n = 1. Except at a very low bulk concentration (3 × 10⁻⁵ mol/dm³), the Langmuir adsorption isotherm is also linear over the entire zinc concentration range investigated. The limiting adsorbed concentration is estimated to be 0.18 mol/g.
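
    A minimal sketch of recovering the Freundlich parameters by linear regression on the log-log form; the concentrations are synthetic, with only the functional form and the reported A and n taken from the abstract.

    ```python
    import numpy as np

    c_bulk = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4])   # mol/dm3, synthetic
    c_ads = 0.48 * c_bulk**1.0                          # built with A = 0.48, n = 1

    slope, intercept = np.polyfit(np.log10(c_bulk), np.log10(c_ads), 1)
    n = 1.0 / slope
    A = 10.0**intercept
    print(A, n)   # recovers A = 0.48 mol/g and n = 1
    ```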

  7. European Multicenter Study on Analytical Performance of DxN Veris System HCV Assay.

    PubMed

    Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Gismondo, Maria Rita; Hofmann, Jörg; Izopet, Jacques; Kühn, Sebastian; Lombardi, Alessandra; Marcos, Maria Angeles; Sauné, Karine; O'Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel W

    2017-04-01

    The analytical performance of the Veris HCV Assay for use on the new and fully automated Beckman Coulter DxN Veris Molecular Diagnostics System (DxN Veris System) was evaluated at 10 European virology laboratories. Precision, analytical sensitivity, specificity, performance with negative samples, linearity, and performance with hepatitis C virus (HCV) genotypes were evaluated. Precision for all sites showed a standard deviation (SD) of 0.22 log10 IU/ml or lower for each level tested. Analytical sensitivity determined by probit analysis was between 6.2 and 9.0 IU/ml. Specificity on 94 unique patient samples was 100%, and performance with 1,089 negative samples demonstrated 100% not-detected results. Linearity using patient samples was shown from 1.34 to 6.94 log10 IU/ml. The assay demonstrated linearity upon dilution with all HCV genotypes. The Veris HCV Assay demonstrated an analytical performance comparable to that of currently marketed HCV assays when tested across multiple European sites. Copyright © 2017 American Society for Microbiology.

  8. Linear and nonlinear mechanical properties of a series of epoxy resins

    NASA Technical Reports Server (NTRS)

    Curliss, D. B.; Caruthers, J. M.

    1987-01-01

    The linear viscoelastic properties have been measured for a series of bisphenol-A-based epoxy resins cured with the diamine DDS. The linear viscoelastic master curves were constructed via time-temperature superposition of frequency-dependent G-prime and G-double-prime isotherms. The G-double-prime master curves exhibited two sub-Tg transitions. Superposition of isotherms in the glass-to-rubber transition (i.e., alpha) and the beta transition at -60 C was achieved by simple horizontal shifts along the log frequency axis; however, in the region between alpha and beta, superposition could not be effected by simple horizontal shifts along the log frequency axis. The different temperature dependencies of the alpha and beta relaxation mechanisms cause a complex response of G-double-prime in the so-called alpha-prime region. A novel numerical procedure has been developed to extract the complete relaxation spectrum and its temperature dependence from the G-prime and G-double-prime isothermal data in the alpha-prime region.

  9. Application of Fracture Distribution Prediction Model in Xihu Depression of East China Sea

    NASA Astrophysics Data System (ADS)

    Yan, Weifeng; Duan, Feifei; Zhang, Le; Li, Ming

    2018-02-01

    Logging measurements respond differently to changes in formation characteristics, and fractures produce outliers in the data. For this reason, the development of fractures in a formation can be characterized by fine analysis of logging curves. Conventional well logs such as resistivity, sonic transit time, density, neutron porosity, and gamma ray are the most sensitive to formation fractures. The traditional fracture prediction model, which computes a comprehensive fracture index as a simple weighted average of different logging data, is susceptible to subjective factors and can exhibit large deviations; accordingly, a statistical method is introduced. Combining the responses of conventional logging data to the development of formation fractures, a prediction model based on membership functions is established; its essence is to analyse logging data with fuzzy mathematics. The fracture predictions for a well in the NX block of the Xihu depression obtained with the two models are compared with imaging logging, which shows that the membership-function model is more accurate than the traditional model. Furthermore, its predictions are highly consistent with the imaging logs and better reflect the development of fractures. It can provide a reference for engineering practice.
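
    A minimal sketch of the membership-function idea as I read it: each log reading is mapped to a fracture membership in [0, 1] and the memberships are aggregated into a fracture index. The membership shapes, cutoffs, and aggregation are invented for illustration and are not the paper's.

    ```python
    import numpy as np

    def ramp(x, lo, hi, increasing=True):
        """Piecewise-linear membership in [0, 1] with transition zone [lo, hi]."""
        m = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
        return m if increasing else 1.0 - m

    # Hypothetical log readings at one depth (cutoffs invented).
    sonic_dt    = 95.0   # us/ft; fractures tend to raise transit time
    resistivity = 8.0    # ohm-m; fractures tend to lower resistivity
    density     = 2.35   # g/cm3; fractures tend to lower bulk density

    memberships = np.array([
        ramp(sonic_dt, 80.0, 110.0, increasing=True),
        ramp(resistivity, 2.0, 20.0, increasing=False),
        ramp(density, 2.2, 2.6, increasing=False),
    ])
    print(memberships.mean())   # simple aggregate fracture index
    ```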

  10. Scattering linear polarization of late-type active stars

    NASA Astrophysics Data System (ADS)

    Yakobchuk, T. M.; Berdyugina, S. V.

    2018-05-01

    Context. Many active stars are covered in spots, much more so than the Sun, as indicated by spectroscopic and photometric observations. It has been predicted that star spots induce non-zero intrinsic linear polarization by breaking the visible stellar disk symmetry. Although small, this effect might be useful for star spot studies, and it is particularly significant for a future polarimetric atmosphere characterization of exoplanets orbiting active host stars. Aims: Using models for a center-to-limb variation of the intensity and polarization in presence of continuum scattering and adopting a simplified two-temperature photosphere model, we aim to estimate the intrinsic linear polarization for late-type stars of different gravity, effective temperature, and spottedness. Methods: We developed a code that simulates various spot configurations or uses arbitrary surface maps, performs numerical disk integration, and builds Stokes parameter phase curves for a star over a rotation period for a selected wavelength. It allows estimating minimum and maximum polarization values for a given set of stellar parameters and spot coverages. Results: Based on assumptions about photosphere-to-spot temperature contrasts and spot size distributions, we calculate the linear polarization for late-type stars with Teff = 3500 K-6000 K, log g = 1.0-5.0, using the plane-parallel and spherical atmosphere models. Employing random spot surface distribution, we analyze the relation between spot coverage and polarization and determine the influence of different input parameters on results. Furthermore, we consider spot configurations with polar spots and active latitudes and longitudes.

  11. The Spontaneous Ray Log: A New Aid for Constructing Pseudo-Synthetic Seismograms

    NASA Astrophysics Data System (ADS)

    Quadir, Adnan; Lewis, Charles; Rau, Ruey-Juin

    2018-02-01

    Conventional synthetic seismograms for hydrocarbon exploration combine the sonic and density logs, whereas pseudo-synthetic seismograms are constructed with a density log plus a resistivity, neutron, gamma ray, or, rarely, a spontaneous potential log. Herein, we introduce a new technique for constructing a pseudo-synthetic seismogram by combining the gamma ray (GR) and self-potential (SP) logs to produce the spontaneous ray (SR) log. Three wells, each of which consisted of more than 1000 m of carbonates, sandstones, and shales, were investigated; each well was divided into 12 groups based on formation tops, and the Pearson product-moment correlation coefficient (PCC) was calculated for each group from each of the GR, SP, and SR logs. The highest-PCC log curves for each group were then combined to produce a single log whose values were cross-plotted against the reference well's sonic interval transit time (ITT) values to determine a linear transform for producing a pseudo-sonic (PS) log and, ultimately, a pseudo-synthetic seismogram. Nash-Sutcliffe efficiency (NSE) values for the pseudo-sonic logs of the three wells were in the acceptable range of 78-83%. This technique was tested on three wells, one of which was used as a blind test well, with satisfactory results. The PCC value between the composite PS (SR) log with low-density correction and the conventional sonic (CS) log was 86%. Because spontaneous potential and gamma ray logs are commonly available in many of the hydrocarbon basins of the world, this inexpensive and straightforward technique could hold significant promise in areas that need alternative ways to create pseudo-synthetic seismograms for seismic reflection interpretation.
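
    The final step, deriving the linear transform from the composite log to a pseudo-sonic log, can be sketched as follows (synthetic placeholder numbers; the paper cross-plots against the reference well's measured sonic ITT):

    ```python
    import numpy as np

    combined_log = np.array([55., 62., 70., 66., 58., 74.])   # best-PCC composite
    sonic_itt    = np.array([68., 75., 85., 80., 70., 90.])   # us/ft, reference well

    a, b = np.polyfit(combined_log, sonic_itt, 1)             # linear transform
    pseudo_sonic = a * combined_log + b                       # pseudo-sonic (PS) log

    # Pearson correlation between PS and measured sonic, analogous to the
    # PCC values quoted in the abstract.
    print(np.corrcoef(pseudo_sonic, sonic_itt)[0, 1])
    ```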

  12. Characterization of Ascentis RP-Amide column: Lipophilicity measurement and linear solvation energy relationships.

    PubMed

    Benhaim, Deborah; Grushka, Eli

    2010-01-01

    This study investigates lipophilicity determination by chromatographic measurements using the polar-embedded Ascentis RP-Amide stationary phase. As a new generation of amide-functionalized silica stationary phase, the Ascentis RP-Amide column is evaluated as a possible substitute for the n-octanol/water partitioning system for lipophilicity measurements. For this evaluation, extrapolated retention factors, log k'w, of a set of diverse compounds were determined using different methanol contents in the mobile phase. The use of n-octanol-enriched mobile phase enhances the relationship between the slope (S) of the extrapolation lines and the extrapolated log k'w (the intercept of the extrapolation), as well as the correlation between log P values and the extrapolated log k'w (1:1 correlation, r2 = 0.966). In addition, the use of isocratic retention factors, at 40% methanol in the mobile phase, provides a rapid tool for lipophilicity determination. The intermolecular interactions that contribute to the retention process on the Ascentis RP-Amide phase are characterized using the solvation parameter model of Abraham. The LSER system constants for the column are very similar to those of the n-octanol/water extraction system. Tanaka radar plots are used for quick visual comparison of the system constants of the Ascentis RP-Amide column and the n-octanol/water extraction system. The results all indicate that the Ascentis RP-Amide stationary phase can provide reliable lipophilicity data. Copyright 2009 Elsevier B.V. All rights reserved.
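
    The extrapolation to log k'w is a straight-line fit of isocratic log k' against the organic-modifier fraction, with the intercept at zero methanol; a sketch with invented retention data:

    ```python
    import numpy as np

    phi = np.array([0.30, 0.40, 0.50, 0.60])     # methanol volume fraction
    log_k = np.array([1.65, 1.10, 0.55, 0.00])   # isocratic log k', invented

    slope, intercept = np.polyfit(phi, log_k, 1)
    S, log_kw = -slope, intercept
    print(S, log_kw)   # slope S and extrapolated log k'w (intercept at phi = 0)
    ```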

  13. Effect of temperature and humidity on formaldehyde emissions in temporary housing units.

    PubMed

    Parthasarathy, Srinandini; Maddalena, Randy L; Russell, Marion L; Apte, Michael G

    2011-06-01

    The effect of temperature and humidity on formaldehyde emissions from samples collected from temporary housing units (THUs) was studied. The THUs were supplied by the U.S. Federal Emergency Management Agency (FEMA) to families that lost their homes in Louisiana and Mississippi during the Hurricane Katrina and Rita disasters. On the basis of a previous study, four of the composite wood surface materials that dominated contributions to indoor formaldehyde were selected for analyzing the effects of temperature and humidity on the emission factors. Humidity equilibration experiments were carried out on two of the samples to determine how long the samples take to equilibrate with the surrounding environmental conditions. Small-chamber experiments were then conducted to measure emission factors for the four surface materials at various temperature and humidity conditions. The samples were analyzed for formaldehyde via high-performance liquid chromatography. The experiments showed that increases in temperature or humidity produced an increase in emission factors. A linear regression model was built using the natural log of the percent relative humidity (RH) and the inverse of temperature (in K) as independent variables and the natural log of the emission factor as the dependent variable. The coefficients for the inverse of temperature and for log RH were statistically significant for all of the samples at the 95% confidence level. This study should assist in retrospectively estimating the indoor formaldehyde exposure of THU occupants.
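
    The regression form described above can be sketched directly; the data below are synthetic placeholders built to follow the stated model, not the study's measurements.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Model form from the abstract: ln(EF) = b0 + b1 * (1/T) + b2 * ln(RH),
    # with emission factor EF, temperature T in kelvin, relative humidity RH in %.
    T  = np.array([289., 294., 299., 304., 299., 294.])   # K
    RH = np.array([30., 30., 50., 50., 70., 70.])         # %
    EF = np.exp(8.0 - 2400.0 / T + 0.4 * np.log(RH))      # made-up ground truth

    X = sm.add_constant(np.column_stack([1.0 / T, np.log(RH)]))
    fit = sm.OLS(np.log(EF), X).fit()
    print(fit.params)   # recovers the coefficients on 1/T and ln(RH)
    ```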

  14. Comparison between a Weibull proportional hazards model and a linear model for predicting the genetic merit of US Jersey sires for daughter longevity.

    PubMed

    Caraviello, D Z; Weigel, K A; Gianola, D

    2004-05-01

    Predicted transmitting abilities (PTA) of US Jersey sires for daughter longevity were calculated using a Weibull proportional hazards sire model and compared with predictions from a conventional linear animal model. Culling data from 268,008 Jersey cows with first calving from 1981 to 2000 were used. The proportional hazards model included time-dependent effects of herd-year-season contemporary group and parity-by-stage-of-lactation interaction, as well as time-independent effects of sire and age at first calving. Sire variances and parameters of the Weibull distribution were estimated, providing heritability estimates of 4.7% on the log scale and 18.0% on the original scale. The PTA of each sire was expressed as the expected risk of culling relative to daughters of an average sire. Risk ratios (RR) ranged from 0.7 to 1.3, indicating that the risk of culling for daughters of the best sires was 30% lower than for daughters of average sires and nearly 50% lower than for daughters of the poorest sires. Sire PTA from the proportional hazards model were compared with PTA from a linear model similar to that used for routine national genetic evaluation of length of productive life (PL), using cross-validation in independent samples of herds. Models were compared using logistic regression of daughters' stayability to second, third, fourth, or fifth lactation on their sires' PTA values, with alternative approaches for weighting the contribution of each sire. Models were also compared using logistic regression of daughters' stayability to 36, 48, 60, 72, and 84 mo of life. The proportional hazards model generally yielded more accurate predictions according to these criteria, but differences in predictive ability between methods were smaller when using a Kullback-Leibler distance than with other approaches. Results of this study suggest that survival analysis methodology may provide more accurate predictions of genetic merit for longevity than conventional linear models.
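
    A sketch of how a sire effect maps to a risk ratio under a Weibull proportional hazards model; the parameterization and numbers are my illustrative assumptions, not the paper's estimates.

    ```python
    import numpy as np

    # Proportional hazards with a Weibull baseline:
    #   h(t | sire i) = rho * lam * (lam * t)**(rho - 1) * exp(s_i)
    # so the risk ratio relative to an average sire (s = 0) is exp(s_i).
    lam, rho = 0.01, 1.4                 # baseline scale and shape (invented)
    s = np.array([-0.36, 0.0, 0.26])     # sire effects on the log-hazard scale

    print(np.exp(s))   # ~0.70, 1.00, 1.30, spanning the RR range quoted above
    ```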

  15. Alternative Regression Equations for Estimation of Annual Peak-Streamflow Frequency for Undeveloped Watersheds in Texas using PRESS Minimization

    USGS Publications Warehouse

    Asquith, William H.; Thompson, David B.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of the log10 transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds, and the bias increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. Bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. The equations derived from PRESS minimization have smaller PRESS statistics and residual standard errors than the log10-exclusive equations. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
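
    The PRESS statistic being minimized can be computed without refitting the model n times, using the hat-matrix identity for leave-one-out residuals. The sketch below is a generic illustration with synthetic data, not the USGS procedure itself; the one-dimensional search over the drainage-area exponent mimics the kind of power-transformation search the report describes.

    ```python
    import numpy as np

    def press(X, y):
        """PRESS = sum of squared leave-one-out residuals for an OLS fit.
        Deleted residual i equals e_i / (1 - h_ii), where h_ii is the leverage."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ beta
        h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(X.T @ X), X)  # leverages
        return np.sum((e / (1 - h)) ** 2)

    rng = np.random.default_rng(0)
    area = rng.uniform(1, 2000, 60)                      # synthetic drainage areas
    q = 1.5 + 0.8 * area**0.4 + rng.normal(0, 0.5, 60)   # synthetic peak-flow index

    # Choose the power transform of drainage area that minimizes PRESS.
    exponents = np.linspace(0.1, 1.0, 10)
    best = min(exponents, key=lambda p: press(area**p, q))
    print(f"PRESS-minimizing exponent: {best:.1f}")
    ```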

  16. The differential time courses of the vasodilator effects of various 1,4-dihydropyridines in isolated human small arteries are correlated to their lipophilicity.

    PubMed

    van der Lee, R; Pfaffendorf, M; van Zwieten, P A

    2000-11-01

    To investigate a possible relationship between the time courses of action of various calcium antagonists and their lipophilicity, characterized as log P-values. The functional experiments were performed in vitro in human small subcutaneous arteries (internal diameter 591 ± 51 μm, n = 7 for each concentration), obtained from cosmetic surgery (mamma reduction and abdominoplasty). The vessels were investigated in an isometric wire myograph. The vasodilator effect of the calcium antagonists was quantified by means of log IC50-values, and the onset of the vasodilator effect for each concentration studied was expressed as time to Eeq90-values (time to reach 90% of the maximal effect). Log IC50-values were -8.46 ± 0.09, -8.33 ± 0.25 and -8.72 ± 0.16 for nifedipine, felodipine and (S)-lercanidipine, respectively (not significant). On average, nifedipine reached time to Eeq90 in 11 ± 1 min. For felodipine and (S)-lercanidipine the corresponding values were 60 ± 11 min and 99 ± 9 min, respectively. The differences between these values were statistically significant (P < 0.01). In spite of these differences in the in-vitro human vascular model, the three calcium antagonists are equipotent with regard to their vasodilator effects. Linear regression analysis of the correlation between the logarithm of the membrane partition coefficient (log P-values) of the calcium antagonists tested [2.50, 4.46 and 6.88 for nifedipine, felodipine and (S)-lercanidipine, respectively] and their respective values found for time to Eeq90 was highly significant. It appears that a higher log P-value is correlated with a slower onset of action.
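
    Using the values reported in this abstract, the correlation between lipophilicity and onset time can be reproduced in a few lines. With only three compounds this illustrates the reported trend rather than re-establishing its significance.

    ```python
    from scipy.stats import linregress

    # Values from the abstract: log P and mean time to 90% of maximal effect
    # (min) for nifedipine, felodipine, and (S)-lercanidipine.
    log_p = [2.50, 4.46, 6.88]
    t_eq90 = [11, 60, 99]

    fit = linregress(log_p, t_eq90)
    print(f"slope = {fit.slope:.1f} min per log P unit, r = {fit.rvalue:.3f}")
    ```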

  17. The potential of immobilized artificial membrane chromatography to predict human oral absorption.

    PubMed

    Tsopelas, Fotios; Vallianatou, Theodosia; Tsantili-Kakoulidou, Anna

    2016-01-01

    The potential of immobilized artificial membrane (IAM) chromatography to estimate human oral absorption (%HOA) was investigated. For this purpose, retention indices on IAM stationary phases reported previously by our group or measured by other authors under similar conditions were used to model %HOA data compiled from literature sources. Considering the pH gradient in the gastrointestinal tract, the highest logkw(IAM) values, obtained either at pH 7.4 or pH 5.5, were used and are denoted logkw(IAM)(best). Non-linear models were established upon introduction of additional parameters and after exclusion of drugs that are substrates of either efflux or uptake transporters. The best model included Abraham's hydrogen-bond acidity parameter, molecular weight, and the positively and negatively charged molecular fractions. For comparison between IAM chromatography and traditional lipophilicity, corresponding models were derived by replacing IAM retention factors with octanol-water distribution coefficients (logD). An overexpression of electrostatic interactions with phosphate anions was observed in the case of IAM retention, as expressed by the negative contribution of the positively charged fraction F(+). The same parameter is also statistically significant in the logD model, but with a positive sign, indicating the attraction of basic drugs to the negatively charged inner membrane. To validate the obtained models, a blind test set of 22 structurally diverse drugs was used, whose logkw(IAM)(best) values were determined and analyzed in the present study under similar conditions. IAM retention factors were further compared with MDCK cell line permeability data taken from the literature for a set of validation drugs. The overexpression of electrostatic interactions with phosphate anions on the IAM surface was also evident with respect to MDCK permeability. In contrast to the clear separation between drugs with high and poor (or intermediate) absorption provided by MDCK permeability, %HOA plotted versus both IAM and logD data results in a saturation curve with a smoother ascending line. Copyright © 2015 Elsevier B.V. All rights reserved.
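
    The saturation-shaped relationship between %HOA and retention mentioned at the end of the abstract is commonly modeled with a sigmoidal function. The sketch below fits such a curve to entirely hypothetical data; it is a generic illustration, not the authors' model, which also included Abraham's hydrogen-bond acidity, molecular weight, and the charged fractions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(logk, top, k50, slope):
        """Sigmoid: %HOA rises with retention and plateaus near `top`."""
        return top / (1.0 + np.exp(-slope * (logk - k50)))

    # Hypothetical logkw(IAM) values and oral absorption percentages.
    logk = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    hoa = np.array([8, 15, 30, 52, 70, 84, 92, 95, 97])

    params, _ = curve_fit(saturating, logk, hoa, p0=[100, 1.0, 1.5])
    print(dict(zip(["top", "k50", "slope"], params)))
    ```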

  18. Detailed 3D Geophysical Model of the Shallow Subsurface (Zancara River Basin, Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Carbonell, R.; Marzán, I.; Martí, D.; Lobo, A.; Jean, K.; Alvarez-Marrón, J.

    2016-12-01

    Detailed knowledge of the structure and lithologies of the shallow subsurface is required when designing and building singular geological storage facilities; this is the case for the study area in Villar de Cañas (Cuenca, Central Spain), where an extensive multidisciplinary data acquisition program has been carried out, including geology, hydrology, geochemistry, geophysics, and borehole logging. Because of this data infrastructure, the site can be considered a subsurface imaging laboratory for testing and validating indirect underground characterization approaches. The field area is located in a Miocene syncline within the Záncara River Basin (Cuenca, Spain). The sedimentary sequence consists of a transition from shales to massive gypsums, with underlying gravels. The stratigraphic succession features a complex internal structure, diffuse lithological boundaries, and relatively large variability of properties within the same lithology; this makes direct geological interpretation very difficult and requires the integration of all the measured physical properties. The ERT survey, the seismic tomography data, and the logs were used jointly to build a 3-D multi-parameter model of the subsurface over a 500 x 500 m area. The Vp model (a 10 x 20 x 5 m grid) is able to map the high velocities of the massive gypsum; however, it was able neither to map the details of the shale-gypsum transition (low velocity contrast) nor to differentiate the outcropping altered gypsum from the weathered shales. The integration of the electrical resistivity and the log data by means of a supervised statistical tool (Linear Discriminant Analysis, LDA) resulted in a new 3D multiparametric subsurface model that resolves the ambiguities of the models obtained independently by the different techniques. Furthermore, this seismic dataset has been used to test FWI approaches in order to study their capabilities. (Research supports: CGL2014-56548-P, 2009-SGR-1595, CGL2013-47412-C2-1-P).
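
    The supervised classification step can be illustrated with scikit-learn's LinearDiscriminantAnalysis. The feature values and lithology labels below are hypothetical stand-ins for the study's velocity, resistivity, and log data.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical training data from boreholes: P-wave velocity (m/s) and
    # log10 resistivity, with lithology labels assigned from core and logs.
    X = np.array([[2400, 1.2], [2600, 1.1], [4800, 2.3], [5000, 2.5],
                  [3100, 1.6], [3000, 1.5], [4700, 2.2], [2500, 1.25]])
    y = ["shale", "shale", "gypsum", "gypsum",
         "weathered", "weathered", "gypsum", "shale"]

    lda = LinearDiscriminantAnalysis().fit(X, y)
    # Classify cells of the 3-D model grid from their Vp/resistivity values.
    print(lda.predict([[2550, 1.3], [4900, 2.4]]))
    ```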

  19. Predictors of primary health care pharmaceutical expenditure by districts in Uganda and implications for budget setting and allocation.

    PubMed

    Mujasi, Paschal N; Puig-Junoy, Jaume

    2015-08-20

    There is a need for the Uganda Ministry of Health to understand predictors of primary health care pharmaceutical expenditure among districts, in order to guide budget setting and to improve efficiency in allocating the set budget among districts. This was a cross-sectional, retrospective observational study using secondary data. The value of pharmaceuticals procured by primary health care facilities in 87 randomly selected districts for the financial year 2011/2012 was collected. Several specifications of the dependent variable (pharmaceutical expenditure) were used: total pharmaceutical expenditure, per capita district pharmaceutical expenditure, pharmaceutical expenditure per district health facility, and pharmaceutical expenditure per outpatient department visit. Andersen's behavioural model of health services utilisation was used as the conceptual framework to identify independent variables likely to influence health care utilisation and hence pharmaceutical expenditure. Econometric analysis was conducted to estimate the parameters of the various regression models. All models were significant overall (P < 0.01), with explanatory power ranging from 51 to 82%. The log-linear model for total pharmaceutical expenditure explained about 80% of the observed variation in total pharmaceutical expenditure (adjusted R² = 0.797) and contained the following variables: immunisation coverage, total outpatient department attendance, urbanisation, total number of government health facilities, and total number of Health Centre IIs. The model based on per capita pharmaceutical expenditure explained about 50% of the observed variation in per capita pharmaceutical expenditure (adjusted R² = 0.513) and was more balanced, with the following variables: outpatient attendance per capita, percentage of the rural population below the poverty line (2005), male literacy rate, whether a district is characterised by the MOH as difficult to reach, and the human poverty index. The log-linear model based on total pharmaceutical expenditure works acceptably well and can be considered useful for predicting future total pharmaceutical expenditure following observed trends. It can be used as a simple tool for rough estimation of the potential overall national primary health pharmaceutical expenditure to guide budget setting. The model based on pharmaceutical expenditure per capita is a more balanced model containing both need and enabling-factor variables. These variables would be useful in allocating a set budget to districts.
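
    A log-linear expenditure model of this kind can be estimated with ordinary least squares on log-transformed variables. The sketch below uses synthetic district data and hypothetical variable names; the convenience of the log-log form for budget projection is that each coefficient on a logged predictor reads directly as an elasticity.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 87  # number of districts sampled in the study
    df = pd.DataFrame({
        "opd_visits":   rng.integers(20_000, 400_000, n),  # hypothetical
        "n_facilities": rng.integers(5, 60, n),             # hypothetical
        "urban_pct":    rng.uniform(2, 60, n),              # hypothetical
    })
    df["pharm_exp"] = np.exp(3.0 + 0.8 * np.log(df["opd_visits"])
                             + 0.2 * np.log(df["n_facilities"])
                             + rng.normal(0, 0.3, n))

    # Log-linear specification: coefficients on logged predictors are elasticities.
    model = smf.ols("np.log(pharm_exp) ~ np.log(opd_visits) + np.log(n_facilities)"
                    " + urban_pct", data=df).fit()
    print(model.params)
    ```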

  20. Exploring QSARs of the interaction of flavonoids with GABA (A) receptor using MLR, ANN and SVM techniques.

    PubMed

    Deeb, Omar; Shaik, Basheerulla; Agrawal, Vijay K

    2014-10-01

    Quantitative Structure-Activity Relationship (QSAR) models for binding affinity constants (log Ki) of 78 flavonoid ligands towards the benzodiazepine site of the GABA(A) receptor complex were calculated using two machine learning methods: artificial neural network (ANN) and support vector machine (SVM) techniques. The models obtained were compared with those obtained using multiple linear regression (MLR) analysis. The descriptor selection and model building were performed with 10-fold cross-validation using the training data set. The SVM and MLR coefficient of determination values are 0.944 and 0.879, respectively, for the training set and are higher than those of the ANN models. Although the SVM model fits the training set better, the ANN model was superior to both SVM and MLR in predicting the test set. A randomization test was employed to check the suitability of the models.
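
    The model comparison reported here follows a standard pattern: fit MLR and SVM on the descriptor matrix with 10-fold cross-validation and compare R². A minimal sketch with synthetic descriptors (not the 78-flavonoid data set) is below.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    # Synthetic descriptor matrix X (78 ligands x 5 descriptors) and log Ki y.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(78, 5))
    y = X @ np.array([0.9, -0.4, 0.3, 0.0, 0.2]) + rng.normal(0, 0.3, 78)

    for name, est in [("MLR", LinearRegression()),
                      ("SVM", SVR(kernel="rbf", C=10.0))]:
        r2 = cross_val_score(est, X, y, cv=10, scoring="r2").mean()
        print(f"{name}: mean 10-fold CV R^2 = {r2:.3f}")
    ```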

  1. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements had an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
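
    A multinomial log-linear (multinomial logistic) model of this kind can be fit as follows. The features and response bins are hypothetical, standing in for the paper's traffic, road, driver-vision, and mechanical-status predictors.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 200
    # Hypothetical predictors: curve radius (m), speed in the last second (km/h),
    # and a driver-vision lane-curvature measure; not the paper's data.
    X = np.column_stack([rng.uniform(60, 600, n),
                         rng.uniform(20, 80, n),
                         rng.normal(0, 1, n)])
    # Hypothetical response: perception-response time binned into three classes.
    score = 0.004 * X[:, 0] - 0.03 * X[:, 1] + X[:, 2] + rng.normal(0, 1, n)
    y = np.where(score > 0.5, "fast", np.where(score > -0.8, "medium", "slow"))

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict([[150.0, 55.0, 0.4]]))
    ```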

  2. Effect of Stress Corrosion and Cyclic Fatigue on Fluorapatite Glass-Ceramic

    NASA Astrophysics Data System (ADS)

    Joshi, Gaurav V.

    2011-12-01

    Objective: The objective of this study was to test the following hypotheses: 1. Both cyclic degradation and stress corrosion mechanisms result in subcritical crack growth in a fluorapatite glass-ceramic. 2. There is an interactive effect of stress corrosion and cyclic fatigue to cause subcritical crack growth (SCG) in this material. 3. A material that exhibits rising toughness curve (R-curve) behavior also exhibits a cyclic degradation mechanism. Materials and Methods: The material tested was a fluorapatite glass-ceramic (IPS e.max ZirPress, Ivoclar-Vivadent). Rectangular beam specimens with dimensions of 25 mm x 4 mm x 1.2 mm were fabricated using the press-on technique. Two groups of specimens (N=30) with polished (15 μm) or air-abraded surfaces were tested under rapid monotonic loading. Additional polished specimens were subjected to cyclic loading at two frequencies, 2 Hz (N=44) and 10 Hz (N=36), and at different stress amplitudes. All tests were performed using a fully articulating four-point flexure fixture in deionized water at 37°C. The SCG parameters were determined using the statistical approach of Munz and Fett (1999). The fatigue lifetime data were fit to a general log-linear model in ALTA PRO software (ReliaSoft). Fractographic techniques were used to determine the critical flaw sizes to estimate fracture toughness. To determine the presence of R-curve behavior, non-linear regression was used. Results: Increasing the frequency of cycling did not cause a significant decrease in lifetime. The parameters of the general log-linear model showed that only stress corrosion has a significant effect on lifetime. The parameters are presented in the following table.* SCG parameters (n = 19-21) were similar for both frequencies. The regression model showed that the fracture toughness was significantly dependent (p<0.05) on critical flaw size. Conclusions: 1. Cyclic fatigue does not have a significant effect on SCG in the fluorapatite glass-ceramic IPS e.max ZirPress. 2. There was no interactive effect between cyclic degradation and stress corrosion for this material. 3. The material exhibited a low level of R-curve behavior. It did not exhibit cyclic degradation. *Please refer to dissertation for table.
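
    For orientation, the general log-linear model referred to here is the standard life-stress relationship in accelerated life testing, in which a characteristic life $L$ is an exponential, hence log-linear, function of the covariates:

    $$ L(\mathbf{x}) \;=\; \exp\!\Big(\alpha_0 + \sum_{j=1}^{k} \alpha_j x_j\Big), $$

    where the $x_j$ here would be stress amplitude and cycling frequency. This is the standard form (the dissertation's exact parameterization may differ); a frequency coefficient statistically indistinguishable from zero corresponds to the finding that stress corrosion, rather than cyclic loading, governs lifetime.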

  3. Anomalous D-Log E curve with high contrast developer Kodak D8 on ultra fine grain emulsion BB640.

    PubMed

    Ulibarrena, M; Mendez, M; Blaya, S; Fimia, A

    2001-12-03

    D-Log E curves, also known as H-D curves, have been used since the 19th century as a tool for describing the characteristics of silver halide emulsions. This curve has a standard shape, with a linear region, a toe, a shoulder, and a solarization region. In this work we present a distortion of the usual curve due to the action of a high-contrast developer, Kodak D8, on an ultra-fine-grain emulsion, BB640. The solarization effect is replaced by a linear zone where developed densities increase with increasing exposures, until all silver halide present in the emulsion is reduced by developer D8 to metallic silver. Densities higher than 11 have been obtained.

  4. Metabolic syndrome is associated with exposure to organochlorine pesticides in Anniston, AL, United States.

    PubMed

    Rosenbaum, Paula F; Weinstock, Ruth S; Silverstone, Allen E; Sjödin, Andreas; Pavuk, Marian

    2017-11-01

    The Anniston Community Health Survey, a cross-sectional study, was undertaken in 2005-2007 to study environmental exposure to polychlorinated biphenyl (PCB) and organochlorine (OC) pesticides and health outcomes among residents of Anniston, AL, United States. The examination of potential risks between these pollutants and metabolic syndrome, a cluster of cardiovascular risk factors (i.e., hypertension, central obesity, dyslipidemia and dysglycemia) was the focus of this analysis. Participants were 548 adults who completed the survey and a clinic visit, were free of diabetes, and had a serum sample for clinical laboratory parameters as well as PCB and OC pesticide concentrations. Associations between summed concentrations of 35 PCB congeners and 9 individual pesticides and metabolic syndrome were examined using generalized linear modeling and logistic regression; odds ratios (OR) and 95% confidence intervals (CI) are reported. Pollutants were evaluated as quintiles and as log transformations of continuous serum concentrations. Participants were mostly female (68%) with a mean age (SD) of 53.6 (16.2) years. The racial distribution was 56% white and 44% African American; 49% met the criteria for metabolic syndrome. In unadjusted logistic regression, statistically significant and positive associations across the majority of quintiles were noted for seven individually modeled pesticides (p,p'-DDT, p,p'-DDE, HCB, β-HCCH, oxychlor, tNONA, Mirex). Following adjustment for covariables (i.e., age, sex, race, education, marital status, current smoking, alcohol consumption, positive family history of diabetes or cardiovascular disease, liver disease, BMI), significant elevations in risk were noted for p,p'-DDT across multiple quintiles (range of ORs 1.61 to 2.36), for tNONA (range of ORs 1.62-2.80) and for p,p'-DDE [OR (95% CI)] of 2.73 (1.09-6.88) in the highest quintile relative to the first. Significant trends were observed in adjusted logistic models for log10 HCB [OR=6.15 (1.66-22.88)], log10 oxychlor [OR=2.09 (1.07-4.07)] and log10 tNONA [3.19 (1.45-7.00)]. Summed PCB concentrations were significantly and positively associated with metabolic syndrome only in unadjusted models; adjustment resulted in attenuation of the ORs in both the quintile and log-transformed models. In conclusion, several OC pesticides were found to have significant associations with metabolic syndrome in the Anniston study population while no association was observed for PCBs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Integrating surface and borehole geophysics in ground water studies - an example using electromagnetic soundings in south Florida

    USGS Publications Warehouse

    Paillet, Frederick; Hite, Laura; Carlson, Matthew

    1999-01-01

    Time domain surface electromagnetic soundings, borehole induction logs, and other borehole logging techniques are used to construct a realistic model for the shallow subsurface hydraulic properties of unconsolidated sediments in south Florida. Induction logs are used to calibrate surface induction soundings in units of pore water salinity by correlating water sample specific electrical conductivity with the electrical conductivity of the formation over the sampled interval for a two‐layered aquifer model. Geophysical logs are also used to show that a constant conductivity layer model is appropriate for the south Florida study. Several physically independent log measurements are used to quantify the dependence of formation electrical conductivity on such parameters as salinity, permeability, and clay mineral fraction. The combined interpretation of electromagnetic soundings and induction logs was verified by logging three validation boreholes, confirming quantitative estimates of formation conductivity and thickness in the upper model layer, and qualitative estimates of conductivity in the lower model layer.

  7. Aspects of porosity prediction using multivariate linear regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrnes, A.P.; Wilson, M.D.

    1991-03-01

    Highly accurate multiple linear regression models have been developed for sandstones of diverse compositions. Porosity reduction or enhancement processes are controlled by the fundamental variables pressure (P), temperature (T), time (t), and composition (X), where composition includes mineralogy, size, sorting, fluid composition, etc. The multiple linear regression equation, of which all linear porosity prediction models are subsets, takes the generalized form: Porosity = C0 + C1(P) + C2(T) + C3(X) + C4(t) + C5(PT) + C6(PX) + C7(Pt) + C8(TX) + C9(Tt) + C10(Xt) + C11(PTX) + C12(PXt) + C13(PTt) + C14(TXt) + C15(PTXt). The first four primary variables are often interactive, thus requiring terms involving two or more primary variables (the form shown implies interaction and not necessarily multiplication). The final terms used may also involve simple mathematical transforms such as log X, e^T, X^2, or more complex transformations such as the Time-Temperature Index (TTI). The X term in the equation above represents a suite of compositional variables, and therefore a fully expanded equation may include a series of terms incorporating these variables. Numerous published bivariate porosity prediction models involving P (or depth) or Tt (TTI) are effective to a degree, largely because of the high degree of collinearity between P and TTI. However, all such bivariate models ignore the unique contributions of P and Tt, as well as various X terms. These simpler models become poor predictors in regions where collinear relations change, where important variables have been ignored, or where the database does not include a sufficient range or weight distribution for the critical variables.
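
    A porosity model of this interactive form can be fit directly once the interaction columns are built. The sketch below, with synthetic data and only a few of the terms, shows the pattern; the full 16-term model is constructed the same way.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 120
    P = rng.uniform(10, 60, n)      # synthetic pressure proxy
    T = rng.uniform(320, 440, n)    # synthetic temperature (K)
    Xc = rng.uniform(0, 1, n)       # synthetic composition index
    t = rng.uniform(1, 200, n)      # synthetic time (Ma)
    porosity = 30 - 0.2 * P - 0.02 * T + 5 * Xc - 0.01 * np.sqrt(T * t) \
               + rng.normal(0, 1, n)

    # Design matrix with the primary variables and two example interaction
    # terms (P*T, T*t), mirroring the generalized form in the abstract.
    design = sm.add_constant(np.column_stack([P, T, Xc, t, P * T, T * t]))
    fit = sm.OLS(porosity, design).fit()
    print(fit.params.round(4))
    ```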

  8. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses

    PubMed Central

    Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.

    2015-01-01

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
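
    The core idea, down-weighting observations from noisier samples rather than discarding them, can be illustrated outside the RNA-seq context with weighted least squares. This schematic is not the voom/limma implementation; it simply uses weights proportional to the reciprocal of each sample's assumed variance factor.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 40
    x = rng.uniform(0, 1, n)
    var_factor = np.where(np.arange(n) < 5, 9.0, 1.0)  # 5 noisy samples
    y = 2 + 3 * x + rng.normal(0, np.sqrt(var_factor))

    X = sm.add_constant(x)
    ols = sm.OLS(y, X).fit()                           # ignores sample quality
    wls = sm.WLS(y, X, weights=1 / var_factor).fit()   # down-weights noisy samples
    print("OLS slope SE:", ols.bse[1].round(3),
          " WLS slope SE:", wls.bse[1].round(3))
    ```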

  9. On the Methodology of Studying Aging in Humans

    DTIC Science & Technology

    1961-01-01

    prediction of death rates The relation of death rate to age has been extensively studied for over 100 years. As an illustration recent death rates for...log death rates appear to be linear, the simpler Gompertz curve fits closely. While on this subject of the Makeham-Gompertz function, it should be...Makeham-Gompertz curve to 5 year age specific death rates. Each fitting provided estimates of the parameters a, β, and log c for each of the five year
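
    For reference, in the usual actuarial notation, consistent with the parameters a, β, and log c estimated in this excerpt, the Gompertz and Makeham hazard functions are

    $$ \mu_{\text{Gompertz}}(x) = \beta\, c^{x}, \qquad \mu_{\text{Makeham}}(x) = a + \beta\, c^{x}, $$

    so that $\log \mu_{\text{Gompertz}}(x) = \log \beta + x \log c$ is linear in age $x$; where the Makeham constant $a$ is negligible, log death rates are approximately linear in age, which is the observation made above.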

  10. COMBATXXI, JDAFS, and LBC Integration Requirements for EASE

    DTIC Science & Technology

    2015-10-06

    process as linear and as new data is made available, any previous analysis is obsolete and has to start the process over again. Figure 2 proposes a...final line of the manifest file names the scenario file associated with the run. Under the usual practice, the analyst now starts the COMBATXXI...describes which events are to be logged. Finally the scenario is started with the click of a button. The simulation generates logs of a couple of sorts

  11. Analysis of the two-point velocity correlations in turbulent boundary layer flows

    NASA Technical Reports Server (NTRS)

    Oberlack, M.

    1995-01-01

    The general objective of the present work is to explore the use of Rapid Distortion Theory (RDT) in analysis of the two-point statistics of the log-layer. RDT is applicable only to unsteady flows where the non-linear turbulence-turbulence interaction can be neglected in comparison to linear turbulence-mean interactions. Here we propose to use RDT to examine the structure of the large energy-containing scales and their interaction with the mean flow in the log-region. The contents of the work are twofold: First, two-point analysis methods will be used to derive the law-of-the-wall for the special case of zero mean pressure gradient. The basic assumptions needed are one-dimensionality in the mean flow and homogeneity of the fluctuations. It will be shown that a formal solution of the two-point correlation equation can be obtained as a power series in the von Karman constant, known to be on the order of 0.4. In the second part, a detailed analysis of the two-point correlation function in the log-layer will be given. The fundamental set of equations and a functional relation for the two-point correlation function will be derived. An asymptotic expansion procedure will be used in the log-layer to match Kolmogorov's universal range and the one-point correlations to the inviscid outer region valid for large correlation distances.
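
    For reference, the classical one-point form of the law of the wall that the two-point analysis is meant to recover is

    $$ \frac{\bar{u}}{u_\tau} \;=\; \frac{1}{\kappa}\,\ln\!\frac{y\,u_\tau}{\nu} \;+\; C^{+}, $$

    where $u_\tau$ is the friction velocity, $\nu$ the kinematic viscosity, and $\kappa \approx 0.4$ the von Kármán constant in which the abstract's power-series solution is expanded. (This is the standard statement, given here for orientation; the paper derives it from two-point arguments.)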

  12. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
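
    The claim is easy to probe numerically: sum a modest number of positive, skewed random variables and compare how well normal and log-normal densities fit the result. A minimal simulation follows; it illustrates the phenomenon and is not a reproduction of the paper's analysis.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # Sum N positive, skewed variables (chi-squared with 1 degree of freedom).
    N, trials = 20, 20_000
    sums = rng.chisquare(df=1, size=(trials, N)).sum(axis=1)

    # Compare normal vs log-normal fits by total log-likelihood.
    norm_ll = stats.norm.logpdf(sums, *stats.norm.fit(sums)).sum()
    s, loc, scale = stats.lognorm.fit(sums, floc=0)
    lognorm_ll = stats.lognorm.logpdf(sums, s, loc, scale).sum()
    print(f"normal logL = {norm_ll:.0f}, log-normal logL = {lognorm_ll:.0f}")
    ```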

  13. Finite Element Analysis of a Composite Semi-Span Test Article With and Without Discrete Damage

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)

    2000-01-01

    AS&M Inc. performed finite element analysis, with and without discrete damage, of a composite semi-span test article that represents the Boeing 220-passenger transport aircraft composite semi-span test article. A NASTRAN bulk data file and drawings of the test mount fixtures and semi-span components were utilized to generate the baseline finite element model. In this model, the stringer blades are represented by shell elements, and the stringer flanges are combined with the skin. Numerous modeling modifications and discrete source damage scenarios were applied to the test article model throughout the course of the study. This report details the analysis method and results obtained from the composite semi-span study. Analyses were carried out for three load cases: Braked Roll, 1.0G Down-Bending, and 2.5G Up-Bending. These analyses included linear and nonlinear static response, as well as linear and nonlinear buckling response. Results are presented in the form of stress and strain plots, factors of safety for failed elements, buckling loads and modes, deflection prediction tables and plots, and strain gage prediction tables and plots. The collected results are presented within this report for comparison to test results.

  14. Geo-spatial and log-linear analysis of pedestrian and bicyclist crashes involving school-aged children.

    PubMed

    Abdel-Aty, Mohamed; Chundi, Sai Srinivas; Lee, Chris

    2007-01-01

    There is a growing concern with the safety of school-aged children. This study identifies the locations of pedestrian/bicyclist crashes involving school-aged children and examines the conditions when these crashes are more likely to occur. The 5-year records of crashes in Orange County, Florida where school-aged children were involved were used. The spatial distribution of these crashes was investigated using the Geographic Information Systems (GIS) and the likelihoods of crash occurrence under different conditions were estimated using log-linear models. A majority of school-aged children crashes occurred in the areas near schools. Although elementary school children were generally very involved, middle and high school children were more involved in crashes, particularly on high-speed multi-lane roadways. Driver's age, gender, and alcohol use, pedestrian's/bicyclist's age, number of lanes, median type, speed limits, and speed ratio were also found to be correlated with the frequency of crashes. The result confirms that school-aged children are exposed to high crash risk near schools. High crash involvement of middle and high school children reflects that middle and high schools tend to be located near multi-lane high-speed roads. The pedestrian's/bicyclist's demographic factors and geometric characteristics of the roads adjacent to schools associated with school children's crash involvement are of interest to school districts.
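
    Log-linear models for crash contingency tables of this kind are typically fit as Poisson regressions on cell counts. A schematic example with hypothetical factors (school level and road type) follows; the study's models involved more dimensions than this.

    ```python
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical contingency table of crash counts.
    cells = pd.DataFrame({
        "school":  ["elementary", "elementary", "middle_high", "middle_high"],
        "road":    ["two_lane", "multi_lane", "two_lane", "multi_lane"],
        "crashes": [42, 18, 35, 61],
    })

    # Log-linear model: log E[count] = school + road + school:road.
    m = smf.glm("crashes ~ school * road", data=cells,
                family=sm.families.Poisson()).fit()
    print(m.params)
    ```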

  15. Multifactorial analysis of renal transplants reported to the United Network for Organ Sharing Registry: a 1994 update.

    PubMed

    Gjertson, D W

    1994-01-01

    1. From a multivariate log-linear analysis of 57,303 renal transplants between 1988 and 1994, the top 10 factors influencing one-year and 3-year cadaveric graft survival rates were ranked as follows: [table: see text] 2. Center effects accounted for 30% and 28% of all assignable variations in one-year and 3-year outcomes, respectively. Although center variation dominated 32 other variables, most factors were relatively independent of transplant center. 3. Novel to our own multifactorial analyses of the UNOS Kidney Transplant Registry were 6 pretransplant factors (recipient pretransplant dialysis, pregnancy, PRA technique, donor disposition and preservation, and ABO compatibility). Survival rates over the various combinations of these new factors were not significantly different. 4. For the first time in our multivariate analyses, 4 posttransplantation factors (delayed graft function, rejection episodes prior to discharge, induction and maintenance drug therapies) were included in the log-linear model. It is noteworthy that graft survival in both transplant periods was seriously imperiled following delayed graft function or rejection prior to discharge, yet the accounting for these pseudo-outcome variables did not alter the influence of the remaining 31 transplant factors. Finally, maintenance drug therapies strongly influenced short-term outcomes but did not influence long-term results, except for a noteworthy trend toward increased survival rates for FK506 therapy.

  16. Transformation techniques for cross-sectional and longitudinal endocrine data: application to salivary cortisol concentrations.

    PubMed

    Miller, Robert; Plessow, Franziska

    2013-06-01

    Endocrine time series often lack normality and homoscedasticity most likely due to the non-linear dynamics of their natural determinants and the immanent characteristics of the biochemical analysis tools, respectively. As a consequence, data transformation (e.g., log-transformation) is frequently applied to enable general linear model-based analyses. However, to date, data transformation techniques substantially vary across studies and the question of which is the optimum power transformation remains to be addressed. The present report aims to provide a common solution for the analysis of endocrine time series by systematically comparing different power transformations with regard to their impact on data normality and homoscedasticity. For this, a variety of power transformations of the Box-Cox family were applied to salivary cortisol data of 309 healthy participants sampled in temporal proximity to a psychosocial stressor (the Trier Social Stress Test). Whereas our analyses show that un- as well as log-transformed data are inferior in terms of meeting normality and homoscedasticity, they also provide optimum transformations for both, cross-sectional cortisol samples reflecting the distributional concentration equilibrium and longitudinal cortisol time series comprising systematically altered hormone distributions that result from simultaneously elicited pulsatile change and continuous elimination processes. Considering these dynamics of endocrine oscillations, data transformation prior to testing GLMs seems mandatory to minimize biased results. Copyright © 2012 Elsevier Ltd. All rights reserved.
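
    The Box-Cox family the authors search over is available directly in scipy, which also returns the maximum-likelihood exponent. A brief sketch with simulated skewed, strictly positive "cortisol-like" data (not the study's samples):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Simulated skewed, strictly positive concentrations.
    cortisol = rng.lognormal(mean=1.5, sigma=0.6, size=309)

    transformed, lam = stats.boxcox(cortisol)  # ML estimate of the exponent
    print(f"optimal Box-Cox lambda = {lam:.2f}")  # lambda = 0 is the log transform
    print(f"Shapiro-Wilk p after transform = {stats.shapiro(transformed).pvalue:.3f}")
    ```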

  17. Poverty and prevalence of antimicrobial resistance in invasive isolates.

    PubMed

    Alvarez-Uria, Gerardo; Gandra, Sumanth; Laxminarayan, Ramanan

    2016-11-01

    To evaluate the association between the income status of a country and the prevalence of antimicrobial resistance (AMR) in the three most common bacteria causing infections in hospitals and in the community: third-generation cephalosporin (3GC)-resistant Escherichia coli, methicillin-resistant Staphylococcus aureus (MRSA), and 3GC-resistant Klebsiella species. Using 2013-2014 country-specific data from the ResistanceMap repository and the World Bank, the association between the prevalence of AMR in invasive samples and the gross national income (GNI) per capita was investigated through linear regression with robust standard errors. To account for non-linear association with the dependent variable, GNI per capita was log-transformed. The models predicted an 11.3% (95% confidence interval (CI) 6.5-16.2%), 18.2% (95% CI 11-25.5%), and 12.3% (95% CI 5.5-19.1%) decrease in the prevalence of 3GC-resistant E. coli, 3GC-resistant Klebsiella species, and MRSA, respectively, for each unit increase in log GNI per capita. The association was stronger for 3GC-resistant E. coli and Klebsiella species than for MRSA. A significant negative association between GNI per capita and the prevalence of MRSA and of 3GC-resistant E. coli and Klebsiella species was found. These results underscore the urgent need for new policies aimed at reducing AMR in resource-poor settings. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
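
    The specification is a linear regression of resistance prevalence on log-transformed GNI per capita, so the coefficient reads as the percentage-point change in prevalence per unit of log income. A sketch with fabricated country-level numbers (not the ResistanceMap data) follows; log10 is used here, but the base only rescales the coefficient.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 60
    gni = rng.lognormal(mean=9.0, sigma=1.2, size=n)       # fabricated GNI per capita
    mrsa = 55 - 11 * np.log10(gni) + rng.normal(0, 6, n)   # fabricated prevalence (%)

    X = sm.add_constant(np.log10(gni))
    fit = sm.OLS(mrsa, X).fit(cov_type="HC1")  # heteroskedasticity-robust SEs
    print(fit.params, fit.conf_int())
    ```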

  18. Effect of n-octanol in the mobile phase on lipophilicity determination by reversed-phase high-performance liquid chromatography on a modified silica column.

    PubMed

    Benhaim, Deborah; Grushka, Eli

    2008-10-31

    In this study, we show that the addition of n-octanol to the mobile phase improves the chromatographic determination of lipophilicity parameters of xenobiotics (neutral solutes, acidic, neutral and basic drugs) on a Phenomenex Gemini C18 column. The Gemini C18 column is a new generation hybrid silica-based column with an extended pH range capability. The wide pH range (2-12) afforded the examination of basic drugs and acidic drugs in their neutral form. Extrapolated retention factor values, [Formula: see text] , obtained on the above column with the n-octanol-modified mobile phase were very well correlated (1:1 correlation) with literature values of logP (logarithm of the partition coefficient in n-octanol/water) of neutral compounds and neutral drugs (69). In addition, we found good linear correlations between measured [Formula: see text] values and calculated values of the logarithm of the distribution coefficient at pH 7.0 (logD(7.0)) for ionized acidic and basic drugs (r(2)=0.95). The Gemini C18 phase was characterized using the linear solvation energy relationship (LSER) model of Abraham. The LSER system constants for the column were compared to the LSER constants of n-octanol/water extraction system using the Tanaka radar plots. The comparison shows that the two methods are nearly equivalent.

  19. Automated method for measuring the extent of selective logging damage with airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Melendy, L.; Hagen, S. C.; Sullivan, F. B.; Pearson, T. R. H.; Walker, S. M.; Ellis, P.; Kustiyo; Sambodo, Ari Katmoko; Roswintiarti, O.; Hanson, M. A.; Klassen, A. W.; Palace, M. W.; Braswell, B. H.; Delgado, G. M.

    2018-05-01

    Selective logging has an impact on the global carbon cycle, as well as on the forest micro-climate and on longer-term changes in erosion, soil and nutrient cycling, and fire susceptibility. Our ability to quantify these impacts depends on methods and tools that accurately identify the extent and features of logging activity. LiDAR-based measurement of these features offers significant promise. Here, we present a set of algorithms for automated detection and mapping of critical features associated with logging (roads/decks, skid trails, and gaps) using commercial airborne LiDAR data as input. The automated algorithm was applied to commercial LiDAR data collected over two logging concessions in Kalimantan, Indonesia in 2014. The algorithm results were compared to measurements of the logging features collected in the field soon after logging was complete. The algorithm-mapped road/deck and skid trail features match closely with features measured in the field, with agreement levels ranging from 69% to 99% when adjusting for GPS location error. The algorithm performed most poorly with gaps, which, by their nature, are variable due to the unpredictable impact of tree fall, in contrast to the linear and regular features directly created by mechanical means. Overall, the automated algorithm performs well and offers significant promise as a generalizable tool for efficiently and accurately capturing the effects of selective logging, including the potential to distinguish reduced-impact logging from conventional logging.

  20. Estimating tree grades for Southern Appalachian natural forest stands

    Treesearch

    Jeffrey P. Prestemon

    1998-01-01

    Log prices can vary significantly by grade: grade 1 logs are often several times the price per unit of grade 3 logs. Because tree grading rules derive from log grading rules, a model that predicts tree grades based on tree and stand-level variables might be useful for predicting stand values. The model could then assist in the modeling of timber supply and in economic...
