Sample records for quantile regression analyses

  1. Predicting Word Reading Ability: A Quantile Regression Study

    ERIC Educational Resources Information Center

    McIlraith, Autumn L.

    2018-01-01

    Predictors of early word reading are well established. However, it is unclear if these predictors hold for readers across a range of word reading abilities. This study used quantile regression to investigate predictive relationships at different points in the distribution of word reading. Quantile regression analyses used preschool and…

  2. Estimating effects of limiting factors with regression quantiles

    USGS Publications Warehouse

    Cade, B.S.; Terrell, J.W.; Schroeder, R.L.

    1999-01-01

    In a recent Concepts paper in Ecology, Thomson et al. emphasized that assumptions of conventional correlation and regression analyses fundamentally conflict with the ecological concept of limiting factors, and they called for new statistical procedures to address this problem. The analytical issue is that unmeasured factors may be the active limiting constraint and may induce a pattern of unequal variation in the biological response variable through an interaction with the measured factors. Consequently, changes near the maxima, rather than at the center of response distributions, are better estimates of the effects expected when the observed factor is the active limiting constraint. Regression quantiles provide estimates for linear models fit to any part of a response distribution, including near the upper bounds, and require minimal assumptions about the form of the error distribution. Regression quantiles extend the concept of one-sample quantiles to the linear model by solving an optimization problem of minimizing an asymmetric function of absolute errors. Rank-score tests for regression quantiles provide tests of hypotheses and confidence intervals for parameters in linear models with heteroscedastic errors, conditions likely to occur in models of limiting ecological relations. We used selected regression quantiles (e.g., 5th, 10th, ..., 95th) and confidence intervals to test hypotheses that parameters equal zero for estimated changes in average annual acorn biomass due to forest canopy cover of oak (Quercus spp.) and oak species diversity. Regression quantiles also were used to estimate changes in glacier lily (Erythronium grandiflorum) seedling numbers as a function of lily flower numbers, rockiness, and pocket gopher (Thomomys talpoides fossor) activity, data that motivated the query by Thomson et al. for new statistical procedures. Both example applications showed that effects of limiting factors estimated by changes in some upper regression quantile…
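
    The optimization described in this abstract can be illustrated with a minimal sketch (plain Python, hypothetical data). Minimizing the asymmetric absolute-error ("pinball") criterion over a constant recovers the corresponding one-sample quantile; regression quantiles replace the constant with a linear predictor and minimize the same criterion over its coefficients:

```python
def pinball(u, tau):
    """Asymmetric absolute error ('check' loss): weight tau for
    positive residuals, (1 - tau) for negative residuals."""
    return tau * u if u >= 0 else (tau - 1) * u

def sample_quantile(ys, tau):
    # Restricting candidates to observed values is enough here:
    # some order statistic always minimizes the total pinball loss.
    return min(ys, key=lambda q: sum(pinball(y - q, tau) for y in ys))

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(sample_quantile(data, 0.5))   # -> 5 (a sample median)
print(sample_quantile(data, 0.9))   # -> 9 (an upper quantile)
```

    Setting tau near 1 targets the upper bound of the response, which is exactly why the authors use upper regression quantiles to estimate the effects of limiting factors.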

  3. Gender Gaps in Mathematics, Science and Reading Achievements in Muslim Countries: Evidence from Quantile Regression Analyses

    ERIC Educational Resources Information Center

    Shafiq, M. Najeeb

    2011-01-01

    Using quantile regression analyses, this study examines gender gaps in mathematics, science, and reading in Azerbaijan, Indonesia, Jordan, the Kyrgyz Republic, Qatar, Tunisia, and Turkey among 15 year-old students. The analyses show that girls in Azerbaijan achieve as well as boys in mathematics and science and overachieve in reading. In Jordan,…

  4. Quantile regression applied to spectral distance decay

    USGS Publications Warehouse

    Rocchini, D.; Cade, B.S.

    2008-01-01

    Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.

  5. SEMIPARAMETRIC QUANTILE REGRESSION WITH HIGH-DIMENSIONAL COVARIATES

    PubMed Central

    Zhu, Liping; Huang, Mian; Li, Runze

    2012-01-01

    This paper is concerned with quantile regression for a semiparametric regression model, in which both the conditional mean and conditional variance function of the response given the covariates admit a single-index structure. This semiparametric regression model enables us to reduce the dimension of the covariates and simultaneously retains the flexibility of nonparametric regression. Under mild conditions, we show that the simple linear quantile regression offers a consistent estimate of the index parameter vector. This is a surprising and interesting result because the single-index model is possibly misspecified under the linear quantile regression. With a root-n consistent estimate of the index vector, one may employ a local polynomial regression technique to estimate the conditional quantile function. This procedure is computationally efficient, which is very appealing in high-dimensional data analysis. We show that the resulting estimator of the quantile function performs asymptotically as efficiently as if the true value of the index vector were known. The methodologies are demonstrated through comprehensive simulation studies and an application to a real dataset. PMID:24501536

  6. GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA

    PubMed Central

    Zheng, Qi; Peng, Limin; He, Xuming

    2015-01-01

    Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high dimensional covariates primarily focuses on examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal. PMID:26604424

  7. Quantile regression models of animal habitat relationships

    USGS Publications Warehouse

    Cade, Brian S.

    2003-01-01

    Typically, all factors that limit an organism are not measured and included in statistical models used to investigate relationships with their environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates performance of quantile rank-score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop in dispersion, F-ratio like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal was confounded with the effect of another unmeasured variable (spatially and not spatially structured). Depending on whether interactions of the measured habitat and unmeasured variable were negative…

  8. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481

  9. Censored quantile regression with recursive partitioning-based weights

    PubMed Central

    Wey, Andrew; Wang, Lan; Rudser, Kyle

    2014-01-01

    Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800

  10. Spatial quantile regression using INLA with applications to childhood overweight in Malawi.

    PubMed

    Mtambo, Owen P L; Masangwi, Salule J; Kazembe, Lawrence N M

    2015-04-01

    Analyses of childhood overweight have mainly used mean regression. However, using quantile regression is more appropriate as it provides flexibility to analyse the determinants of overweight corresponding to quantiles of interest. The main objective of this study was to fit a Bayesian additive quantile regression model with structured spatial effects for childhood overweight in Malawi using the 2010 Malawi DHS data. Inference was fully Bayesian using the R-INLA package. The significant determinants of childhood overweight ranged from socio-demographic factors such as type of residence to child and maternal factors such as child age and maternal BMI. We observed significant positive structured spatial effects on childhood overweight in some districts of Malawi. We recommend that childhood malnutrition policy makers consider timely interventions based on the risk factors identified in this paper, including spatially targeted interventions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Quantile Regression in the Study of Developmental Sciences

    PubMed Central

    Petscher, Yaacov; Logan, Jessica A. R.

    2014-01-01

    Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome across multiple points of the outcome's distribution. Using data from the High School and Beyond and U.S. Sustained Effects Study databases, quantile regression is demonstrated and contrasted with linear regression when considering models with: (a) one continuous predictor, (b) one dichotomous predictor, (c) a continuous and a dichotomous predictor, and (d) a longitudinal application. Results from each example illustrated the different inferences that may be drawn using linear versus quantile regression. PMID:24329596

  12. Quantile regression analyses of associated factors for body mass index in Korean adolescents.

    PubMed

    Kim, T H; Lee, E K; Han, E

    2015-05-01

    This study examined the influence of home and school environments, and individual health-risk behaviours, on body weight outcomes in Korean adolescents. This was a cross-sectional observational study. Quantile regression models were used to explore heterogeneity in the association of specific factors with body mass index (BMI) over the entire conditional BMI distribution. A nationally representative web-based survey for youths was used. Paternal education of college or higher was associated with lower BMI for girls, whereas maternal education of college or higher was associated with higher BMI for boys; for both, the magnitude of association became larger at the upper quantiles of the conditional BMI distribution. Girls with good family economic status were more likely to have higher BMIs than those with average family economic status, particularly at the upper quantiles of the conditional BMI distribution. Attending a co-ed school was associated with lower BMI for both genders, with a larger association at the upper quantiles. Substantial screen time for TV watching, video games, or internet surfing was associated with a higher BMI, with a larger association at the upper quantiles for both girls and boys. Dental prevention was negatively associated with BMI, whereas suicide consideration was positively associated with the BMIs of both genders, with a larger association at higher quantiles. These findings suggest that interventions aimed at behavioural changes and positive parental roles are needed to effectively address high adolescent BMI. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  13. Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA

    PubMed Central

    Lin, Chen-Yen; Bondell, Howard; Zhang, Hao Helen; Zou, Hui

    2014-01-01

    Quantile regression provides a more thorough view of the effect of covariates on a response. Nonparametric quantile regression has become a viable alternative to avoid restrictive parametric assumptions. The problem of variable selection for quantile regression is challenging, since important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline ANOVA models. The proposed sparse nonparametric quantile regression (SNQR) can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Supplementary materials for this article are available online. PMID:24554792

  14. Quantile regression via vector generalized additive models.

    PubMed

    Yee, Thomas W

    2004-07-30

    One of the most popular methods for quantile regression is the LMS method of Cole and Green. The method naturally falls within a penalized likelihood framework, and consequently allows for considerable flexibility because all three parameters may be modelled by cubic smoothing splines. The model is also very understandable: for a given value of the covariate, the LMS method applies a Box-Cox transformation to the response in order to transform it to standard normality; to obtain the quantiles, an inverse Box-Cox transformation is applied to the quantiles of the standard normal distribution. The purposes of this article are three-fold. Firstly, LMS quantile regression is presented within the framework of the class of vector generalized additive models. This confers a number of advantages such as a unifying theory and estimation process. Secondly, a new LMS method based on the Yeo-Johnson transformation is proposed, which has the advantage that the response is not restricted to be positive. Lastly, this paper describes a software implementation of three LMS quantile regression methods in the S language. This includes the LMS-Yeo-Johnson method, which is estimated efficiently by a new numerical integration scheme. The LMS-Yeo-Johnson method is illustrated by way of a large cross-sectional data set from a New Zealand working population. Copyright 2004 John Wiley & Sons, Ltd.
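
    The LMS idea can be made concrete with a small sketch (plain Python; the parameter values below are made up for illustration). At a fixed covariate value, Cole and Green's method summarizes the conditional distribution by L (the Box-Cox power), M (the median) and S (the coefficient of variation), and the 100p centile is M(1 + L*S*z_p)^(1/L), reducing to M*exp(S*z_p) as L approaches 0:

```python
import math
from statistics import NormalDist

def lms_centile(p, L, M, S):
    """100*p centile from LMS parameters at a single covariate value:
    apply the inverse Box-Cox transform to a standard-normal quantile.
    L: Box-Cox power, M: median, S: coefficient of variation."""
    z = NormalDist().inv_cdf(p)      # standard-normal quantile z_p
    if abs(L) < 1e-12:               # L -> 0 is the log-normal limit
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Every centile curve passes through M at p = 0.5, because z_0.5 = 0:
print(lms_centile(0.5, L=0.7, M=23.4, S=0.12))   # -> 23.4
```

    In the paper's vector generalized additive model framework, the three parameters are themselves smooth functions of the covariate; the sketch evaluates centiles once those functions have been estimated at one point.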

  15. Regularized quantile regression for SNP marker estimation of pig growth curves.

    PubMed

    Barroso, L M A; Nascimento, M; Nascimento, A C C; Silva, F F; Serão, N V L; Cruz, C D; Resende, M D V; Silva, F L; Azevedo, C F; Lopes, P S; Guimarães, S E F

    2017-01-01

    Genomic growth curves are generally defined only in terms of the population mean; an alternative approach that has not yet been exploited in genomic analyses of growth curves is quantile regression (QR). This methodology allows for the estimation of marker effects at different levels of the variable of interest. We aimed to propose and evaluate a regularized quantile regression for SNP marker effect estimation of pig growth curves, as well as to identify the chromosome regions of the most relevant markers and to estimate the genetic individual weight trajectory over time (genomic growth curve) under different quantiles (levels). The regularized quantile regression (RQR) enabled the discovery, at different levels of interest (quantiles), of the most relevant markers, allowing for the identification of QTL regions. We found the same relevant markers simultaneously affecting different growth curve parameters (mature weight and maturity rate): two (ALGA0096701 and ALGA0029483) for RQR(0.2), one (ALGA0096701) for RQR(0.5), and one (ALGA0003761) for RQR(0.8). Three average genomic growth curves were obtained and the behavior was explained by the curve in quantile 0.2, which differed from the others. RQR allowed for the construction of genomic growth curves, which is the key to identifying and selecting the most desirable animals for breeding purposes. Furthermore, the proposed model enabled us to find, at different levels of interest (quantiles), the most relevant markers for each trait (growth curve parameter estimates) and their respective chromosomal positions (identification of new QTL regions for growth curves in pigs). These markers can be exploited in the context of marker-assisted selection when aiming to change the shape of pig growth curves.

  16. Spectral distance decay: Assessing species beta-diversity by quantile regression

    USGS Publications Warehouse

    Rocchini, D.; Nagendra, H.; Ghate, R.; Cade, B.S.

    2009-01-01

    Remotely sensed data represents key information for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance may allow us to quantitatively estimate how beta-diversity in species changes with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological datasets are characterized by a high number of zeroes that can add noise to the regression model. Quantile regression can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this paper, we used ordinary least squares (OLS) and quantile regression to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.05) considering both OLS and quantile regression. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this paper we demonstrated the power of using quantile regression applied to spectral distance decay in order to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2009 American Society for Photogrammetry and Remote Sensing.

  17. A gentle introduction to quantile regression for ecologists

    USGS Publications Warehouse

    Cade, B.S.; Noon, B.R.

    2003-01-01

    Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, all the factors that affect ecological processes are not measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
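
    The central point of this primer, that a weak mean relationship can coexist with strong relationships in other parts of the response distribution, can be illustrated with a brute-force sketch (plain Python, synthetic data; real analyses would use dedicated estimators such as those in the R quantreg package). With spread growing in x around a flat mean, the fitted 0.9-quantile slope is positive while the 0.1-quantile slope is negative:

```python
def fit_quantile_line(xs, ys, tau, grid):
    """Brute-force linear quantile fit: pick (intercept, slope) from a
    grid minimizing the asymmetric absolute-error (pinball) criterion."""
    def loss(b0, b1):
        total = 0.0
        for x, y in zip(xs, ys):
            u = y - (b0 + b1 * x)
            total += tau * u if u >= 0 else (tau - 1) * u
        return total
    return min(((b0, b1) for b0 in grid for b1 in grid),
               key=lambda pair: loss(*pair))

# Heteroscedastic toy data: spread around a flat mean grows with x.
xs = [x for x in range(1, 11) for _ in (0, 1)]
ys = [s * 0.6 * x for x in range(1, 11) for s in (1, -1)]
grid = [i / 10 for i in range(-10, 11)]

b0_hi, b1_hi = fit_quantile_line(xs, ys, 0.9, grid)
b0_lo, b1_lo = fit_quantile_line(xs, ys, 0.1, grid)
print(b1_hi, b1_lo)   # upper slope positive, lower slope negative
```

    A mean regression on these data would report a slope near zero, while the upper and lower quantiles recover the widening envelope that the primer associates with unmeasured limiting factors.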

  18. Quantile regression for the statistical analysis of immunological data with many non-detects.

    PubMed

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an implementation to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
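
    Why quantile methods tolerate many non-detects can be seen with a minimal sketch (plain Python, made-up assay values): a quantile depends only on the ordering of the data, so the value substituted for a non-detect is irrelevant as long as the fraction of non-detects stays below the quantile level being modelled:

```python
from statistics import median

detection_limit = 0.5
# Hypothetical assay results; None marks a non-detect (< 0.5).
raw = [None, None, 0.8, 1.1, 1.9, 2.4, 3.0]

# Any substitute below the detection limit leaves the data ordering,
# and hence the median, unchanged while fewer than half are non-detects.
low  = median([0.0 if v is None else v for v in raw])
high = median([detection_limit if v is None else v for v in raw])
print(low, high)   # -> 1.1 1.1, identical medians
```

    The same argument extends to regression: median (or higher-quantile) regression coefficients are unaffected by how non-detects are coded below the limit, which is the robustness this abstract exploits.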

  19. Linear Regression Quantile Mapping (RQM) - A new approach to bias correction with consistent quantile trends

    NASA Astrophysics Data System (ADS)

    Passow, Christian; Donner, Reik

    2017-04-01

    Quantile mapping (QM) is an established concept that allows one to correct systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations through observational data and outperforms simpler correction methods which relate only to the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach of bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles in the same resolution as the given data and include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959 - 2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sorbie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015; A.J. Cannon: Multivariate Bias Correction of Climate…
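
    The quantile-mapping step that RQM generalizes can be sketched in a few lines of plain Python (a crude nearest-rank version with hypothetical samples; operational bias correction uses interpolation and careful tail handling): map a model value to its rank in the model sample, then read off the observed value of the same rank.

```python
from bisect import bisect_right

def quantile_map(x, model_sample, obs_sample):
    """Empirical (nearest-rank) quantile mapping: send x through the
    model's empirical CDF, then back through the inverse observed CDF."""
    m = sorted(model_sample)
    o = sorted(obs_sample)
    r = bisect_right(m, x)                        # model values <= x
    k = min(len(o) - 1, max(0, r * len(o) // len(m) - 1))
    return o[k]

# Toy check: the model runs 2 units low everywhere, so mapping a model
# value of 50 recovers the matching observed value.
model = list(range(100))
obs = [v + 2 for v in model]
print(quantile_map(50, model, obs))   # -> 52
```

    The limitation the abstract targets is visible here: the mapping is built from fixed samples, so the correction is time-independent; RQM instead lets the quantiles depend on time through linear quantile regression.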

  20. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of the quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515

  1. Estimating risks to aquatic life using quantile regression

    USGS Publications Warehouse

    Schmidt, Travis S.; Clements, William H.; Cade, Brian S.

    2012-01-01

    One of the primary goals of biological assessment is to assess whether contaminants or other stressors limit the ecological potential of running waters. It is important to interpret responses to contaminants relative to other environmental factors, but necessity or convenience limit quantification of all factors that influence ecological potential. In these situations, the concept of limiting factors is useful for data interpretation. We used quantile regression to measure risks to aquatic life exposed to metals by including all regression quantiles (τ = 0.05–0.95, by increments of 0.05), not just the upper limit of density (e.g., 90th quantile). We measured population densities (individuals/0.1 m²) of 2 mayflies (Rhithrogena spp., Drunella spp.) and a caddisfly (Arctopsyche grandis), aqueous metal mixtures (Cd, Cu, Zn), and other limiting factors (basin area, site elevation, discharge, temperature) at 125 streams in Colorado. We used a model selection procedure to test which factor was most limiting to density. Arctopsyche grandis was limited by other factors, whereas metals limited most quantiles of density for the 2 mayflies. Metals reduced mayfly densities most at sites where other factors were not limiting. Where other factors were limiting, low mayfly densities were observed despite metal concentrations. Metals affected mayfly densities most at quantiles above the mean and not just at the upper limit of density. Risk models developed from quantile regression showed that mayfly densities observed at background metal concentrations are improbable when metal mixtures are at US Environmental Protection Agency criterion continuous concentrations. We conclude that metals limit potential density, not realized average density. The most obvious effects on mayfly populations were at upper quantiles and not mean density. Therefore, we suggest that policy developed from mean-based measures of effects may not be as useful as policy based on the concept of…

  2. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide the basic information on planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, it is possible that the assumption of stationarity, which is a prerequisite for traditional frequency analysis, no longer holds, and hence the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows modelling data in the presence of non-stationarity and/or dependence on covariates, with linear and non-linear dependence. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.

  3. Quantile Regression with Censored Data

    ERIC Educational Resources Information Center

    Lin, Guixian

    2009-01-01

    The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…

  4. Heritability Across the Distribution: An Application of Quantile Regression

    PubMed Central

    Petrill, Stephen A.; Hart, Sara A.; Schatschneider, Christopher; Thompson, Lee A.; Deater-Deckard, Kirby; DeThorne, Laura S.; Bartlett, Christopher

    2016-01-01

    We introduce a new method for analyzing twin data called quantile regression. Through the application presented here, quantile regression is able to assess the genetic and environmental etiology of any skill or ability, at multiple points in the distribution of that skill or ability. This method is compared to the Cherny et al. (Behav Genet 22:153–162, 1992) method in an application to four different reading-related outcomes in 304 pairs of first-grade same sex twins enrolled in the Western Reserve Reading Project. Findings across the two methods were similar; both indicated some variation across the distribution of the genetic and shared environmental influences on non-word reading. However, quantile regression provides more details about the location and size of the measured effect. Applications of the technique are discussed. PMID:21877231

  5. Multiple imputation for cure rate quantile regression with censored data.

    PubMed

    Wu, Yuanshan; Yin, Guosheng

    2017-03-01

    The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.

  6. Composite marginal quantile regression analysis for longitudinal adolescent body mass index data.

    PubMed

    Yang, Chi-Chuan; Chen, Yi-Hau; Chang, Hsing-Yi

    2017-09-20

    Childhood and adolescent overweight or obesity, which may be quantified through the body mass index (BMI), is strongly associated with adult obesity and other health problems. Motivated by the Child and Adolescent Behaviors in Long-term Evolution (CABLE) study, we are interested in individual, family, and school factors associated with marginal quantiles of longitudinal adolescent BMI values. We propose a new method for composite marginal quantile regression analysis for longitudinal outcome data, which performs marginal quantile regressions at multiple quantile levels simultaneously. The proposed method extends the quantile regression coefficient modeling method introduced by Frumento and Bottai (Biometrics 2016; 72:74-84) to longitudinal data, suitably accounting for the correlation structure in longitudinal observations. A goodness-of-fit test for the proposed modeling is also developed. Simulation results show that the proposed method can be much more efficient than the analysis without taking correlation into account and the analysis performing separate quantile regressions at different quantile levels. The application to the longitudinal adolescent BMI data from the CABLE study demonstrates the practical utility of our proposal. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Simultaneous multiple non-crossing quantile regression estimation using kernel constraints

    PubMed Central

    Liu, Yufeng; Wu, Yichao

    2011-01-01

    Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
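    The crossing problem that the SNQR kernel constraints prevent can be illustrated, and crudely repaired, in a few lines: quantile curves estimated separately may violate monotonicity in τ at some covariate values, and sorting the predictions at each point restores it. This post-hoc rearrangement (in the spirit of Chernozhukov, Fernández-Val, and Galichon's monotonization idea) is a simpler alternative to the simultaneous constraints in the paper; the numbers below are invented:

```python
# Separately fitted quantile predictions at one covariate value may cross:
taus = [0.25, 0.50, 0.75]
preds = [2.3, 2.1, 2.8]        # hypothetical fits: the median dips below Q1

def rearrange(quantile_preds):
    """Sort predictions in tau order to enforce non-crossing
    (a post-hoc fix, not the paper's kernel-coefficient constraints)."""
    return sorted(quantile_preds)

fixed = rearrange(preds)
print(fixed)                   # [2.1, 2.3, 2.8] -- monotone in tau
```

    Joint estimation as in SNQR goes further: it not only guarantees monotone quantiles but lets neighboring quantile levels share strength during fitting, which post-hoc sorting cannot do.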

  8. Birthweight Related Factors in Northwestern Iran: Using Quantile Regression Method.

    PubMed

    Fallah, Ramazan; Kazemnejad, Anoshirvan; Zayeri, Farid; Shoghli, Alireza

    2015-11-18

    Birthweight is one of the most important predictors of health status in adulthood. Achieving balanced birthweights is one of the priorities of the health system in most industrialized and developed countries, and this indicator is used to assess the growth and health status of infants. The aim of this study was to assess the birthweight of neonates in Zanjan province by using quantile regression. This descriptive analytical study was carried out using pre-registered (March 2010 - March 2012) data on neonates in urban/rural health centers of Zanjan province, obtained with multiple-stage cluster sampling. Data were analyzed using multiple linear regression and the quantile regression method in SAS 9.2 statistical software. Of the 8456 newborns, 4146 (49%) were female. The mean age of the mothers was 27.1±5.4 years. The mean birthweight of the neonates was 3104 ± 431 grams; 573 (6.8%) of the neonates weighed less than 2500 grams. In all quantiles, gestational age of the neonates (p<0.05) and weight and educational level of the mothers (p<0.05) showed a significant linear relationship with the birthweight of the neonates. However, sex and birth rank of the neonates, mothers' age, place of residence (urban/rural) and career were not significant in any quantile (p>0.05). This study revealed that the results of multiple linear regression and quantile regression were not identical. We strongly recommend the use of quantile regression when the response variable is asymmetric or the data contain outliers.

  9. Birthweight Related Factors in Northwestern Iran: Using Quantile Regression Method

    PubMed Central

    Fallah, Ramazan; Kazemnejad, Anoshirvan; Zayeri, Farid; Shoghli, Alireza

    2016-01-01

    Introduction: Birthweight is one of the most important predictors of health status in adulthood. Achieving balanced birthweights is one of the priorities of the health system in most industrialized and developed countries, and this indicator is used to assess the growth and health status of infants. The aim of this study was to assess the birthweight of neonates in Zanjan province by using quantile regression. Methods: This descriptive analytical study was carried out using pre-registered (March 2010 - March 2012) data on neonates in urban/rural health centers of Zanjan province, obtained with multiple-stage cluster sampling. Data were analyzed using multiple linear regression and the quantile regression method in SAS 9.2 statistical software. Results: Of the 8456 newborns, 4146 (49%) were female. The mean age of the mothers was 27.1±5.4 years. The mean birthweight of the neonates was 3104 ± 431 grams; 573 (6.8%) of the neonates weighed less than 2500 grams. In all quantiles, gestational age of the neonates (p<0.05) and weight and educational level of the mothers (p<0.05) showed a significant linear relationship with the birthweight of the neonates. However, sex and birth rank of the neonates, mothers' age, place of residence (urban/rural) and career were not significant in any quantile (p>0.05). Conclusion: This study revealed that the results of multiple linear regression and quantile regression were not identical. We strongly recommend the use of quantile regression when the response variable is asymmetric or the data contain outliers. PMID:26925889

  10. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer-intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, at both the population and the individual-specific level, adjusted for further risk factors, and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
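    Functional gradient descent boosting for a quantile reduces, in the simplest intercept-only case, to repeatedly stepping along the negative gradient of the check loss: the negative gradient is τ for under-predicted points and τ - 1 for over-predicted ones, so the fit drifts until a fraction τ of the data lies below it. A hedged stdlib sketch of that core loop (data, learning rate, and iteration count are illustrative; the paper's model adds smooth, varying-coefficient, and individual-specific base learners):

```python
# Functional gradient descent boosting for an intercept-only quantile model.
# Negative gradient of the check loss at f: tau if y > f, tau - 1 if y < f.
# The fitted constant converges to the empirical tau-quantile.

def boost_quantile(ys, tau, lr=0.5, n_iter=3000):
    f = sum(ys) / len(ys)                    # initialize at the mean
    for _ in range(n_iter):
        neg_grad = [tau if y > f else tau - 1 for y in ys]
        f += lr * (sum(neg_grad) / len(ys))  # base learner: a constant step
    return f

toy_bmi = list(range(1, 101))                # toy data standing in for BMI
print(round(boost_quantile(toy_bmi, 0.9), 1))  # near the 90th percentile
```

    In the structured additive setting, each iteration instead fits every candidate base learner (linear term, smooth curve, individual intercept) to the negative gradient and updates only the best-fitting one, which is what yields component-wise selection and shrinkage.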

  11. Comparing least-squares and quantile regression approaches to analyzing median hospital charges.

    PubMed

    Olsen, Cody S; Clark, Amy E; Thomas, Andrea M; Cook, Lawrence J

    2012-07-01

    Emergency department (ED) and hospital charges obtained from administrative data sets are useful descriptors of injury severity and the burden to EDs and the health care system. However, charges are typically positively skewed due to costly procedures, long hospital stays, and complicated or prolonged treatment for few patients. The median is not affected by extreme observations and is useful in describing and comparing distributions of hospital charges. A least-squares analysis employing a log transformation is one approach for estimating median hospital charges, corresponding confidence intervals (CIs), and differences between groups; however, this method requires certain distributional properties. An alternate method is quantile regression, which allows estimation and inference related to the median without making distributional assumptions. The objective was to compare the log-transformation least-squares method to the quantile regression approach for estimating median hospital charges, differences in median charges between groups, and associated CIs. The authors performed simulations using repeated sampling of observed statewide ED and hospital charges and charges randomly generated from a hypothetical lognormal distribution. The median and 95% CI and the multiplicative difference between the median charges of two groups were estimated using both least-squares and quantile regression methods. Performance of the two methods was evaluated. In contrast to least squares, quantile regression produced estimates that were unbiased and had smaller mean square errors in simulations of observed ED and hospital charges. Both methods performed well in simulations of hypothetical charges that met least-squares method assumptions. When the data did not follow the assumed distribution, least-squares estimates were often biased, and the associated CIs had lower than expected coverage as sample size increased. Quantile regression analyses of hospital charges provide unbiased estimates of median charges and valid CIs without relying on distributional assumptions.
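    The contrast the study draws can be sketched with simulated lognormal "charges": the back-transformed log-mean (the least-squares route) targets the geometric mean, which coincides with the median only under exact lognormality, while the sample median, i.e. the τ = 0.5 regression quantile with intercept only, needs no such assumption. A stdlib sketch with invented parameters, not the statewide data:

```python
import math
import random
import statistics

random.seed(1)

# Simulated skewed "hospital charges": lognormal, median = exp(8) ~= 2981.
charges = [random.lognormvariate(8.0, 1.0) for _ in range(20000)]

sample_median = statistics.median(charges)     # quantile-regression estimate
ls_median = math.exp(statistics.mean(math.log(c) for c in charges))  # log-LS
mean_charge = statistics.fmean(charges)

# Under lognormality both median estimates agree; the mean is inflated by skew.
print(round(sample_median), round(ls_median), round(mean_charge))
```

    When the true distribution departs from lognormal, the back-transformed log-mean drifts away from the median while the sample median does not, which is the bias and coverage failure the simulations above report for least squares.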

  12. Quantile Regression for Recurrent Gap Time Data

    PubMed Central

    Luo, Xianghua; Huang, Chiung-Yu; Wang, Lan

    2014-01-01

    Summary Evaluating covariate effects on gap times between successive recurrent events is of interest in many medical and public health studies. While most existing methods for recurrent gap time analysis focus on modeling the hazard function of gap times, a direct interpretation of the covariate effects on the gap times is not available through these methods. In this article, we consider quantile regression that can provide direct assessment of covariate effects on the quantiles of the gap time distribution. Following the spirit of the weighted risk-set method by Luo and Huang (2011, Statistics in Medicine 30, 301–311), we extend the martingale-based estimating equation method considered by Peng and Huang (2008, Journal of the American Statistical Association 103, 637–649) for univariate survival data to analyze recurrent gap time data. The proposed estimation procedure can be easily implemented in existing software for univariate censored quantile regression. Uniform consistency and weak convergence of the proposed estimators are established. Monte Carlo studies demonstrate the effectiveness of the proposed method. An application to data from the Danish Psychiatric Central Register is presented to illustrate the methods developed in this article. PMID:23489055

  13. Principles of Quantile Regression and an Application

    ERIC Educational Resources Information Center

    Chen, Fang; Chalhoub-Deville, Micheline

    2014-01-01

    Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…

  14. Modeling energy expenditure in children and adolescents using quantile regression

    USDA-ARS?s Scientific Manuscript database

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). Study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obes...

  15. Quality of life in breast cancer patients--a quantile regression analysis.

    PubMed

    Pourhoseingholi, Mohamad Amin; Safaee, Azadeh; Moghimi-Dehkordi, Bijan; Zeighami, Bahram; Faghihzadeh, Soghrat; Tabatabaee, Hamid Reza; Pourhoseingholi, Asma

    2008-01-01

    Quality of life research plays an important role in health care, especially for chronic diseases, in clinical judgment and in allocating medical resources. Statistical tools such as linear regression are widely used to assess predictors of quality of life, but when the response is not normally distributed the results can be misleading. The aim of this study was to determine the predictors of quality of life in breast cancer patients using a quantile regression model and to compare the results with linear regression. A cross-sectional study was conducted on 119 breast cancer patients admitted and treated in the chemotherapy ward of Namazi hospital in Shiraz. We used the QLQ-C30 questionnaire to assess quality of life in these patients. Quantile regression was employed to assess the associated factors, and the results were compared with linear regression. All analyses were carried out using SAS. The mean global health status score for breast cancer patients was 64.92+/-11.42. Linear regression showed that only grade of tumor, occupational status, menopausal status, financial difficulties and dyspnea were statistically significant. In contrast to linear regression, financial difficulties were not significant in the quantile regression analysis, and dyspnea was significant only for the first quartile. Emotional functioning and duration of disease also statistically predicted the QOL score in the third quartile. The results demonstrate that quantile regression leads to better interpretation and richer inference about predictors of quality of life in breast cancer patients.

  16. Consistent model identification of varying coefficient quantile regression with BIC tuning parameter selection

    PubMed Central

    Zheng, Qi; Peng, Limin

    2016-01-01

    Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index. We establish the oracle properties of the resulting estimator of coefficient function. Furthermore, we formally investigate a BIC-type uniform tuning parameter selector and show that it can ensure consistent model selection. Our numerical studies confirm the theoretical findings and illustrate an application of the new variable selection procedure. PMID:28008212

  17. Quantile Regression in the Study of Developmental Sciences

    ERIC Educational Resources Information Center

    Petscher, Yaacov; Logan, Jessica A. R.

    2014-01-01

    Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome, but across multiple points of…

  18. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.

  19. Gender Gaps in Mathematics, Science and Reading Achievements in Muslim Countries: A Quantile Regression Approach

    ERIC Educational Resources Information Center

    Shafiq, M. Najeeb

    2013-01-01

    Using quantile regression analyses, this study examines gender gaps in mathematics, science, and reading in Azerbaijan, Indonesia, Jordan, the Kyrgyz Republic, Qatar, Tunisia, and Turkey among 15-year-old students. The analyses show that girls in Azerbaijan achieve as well as boys in mathematics and science and overachieve in reading. In Jordan,…

  20. Relationship between Urbanization and Cancer Incidence in Iran Using Quantile Regression.

    PubMed

    Momenyan, Somayeh; Sadeghifar, Majid; Sarvi, Fatemeh; Khodadost, Mahmoud; Mosavi-Jarrahi, Alireza; Ghaffari, Mohammad Ebrahim; Sekhavati, Eghbal

    2016-01-01

    Quantile regression is an efficient method for predicting and estimating the relationship between explanatory variables and percentile points of the response distribution, particularly for extreme percentiles of the distribution. To study the relationship between urbanization and cancer morbidity, we here applied quantile regression. This cross-sectional study was conducted for 9 cancers in 345 cities in 2007 in Iran. Data were obtained from the Ministry of Health and Medical Education, and the relationship between urbanization and cancer morbidity was investigated using quantile regression and least squares regression. Fitted models were compared using the AIC criterion. R (3.0.1) software and the quantreg package were used for statistical analysis. With the quantile regression model, all percentiles for breast, colorectal, prostate, lung and pancreas cancers demonstrated increasing incidence rates with urbanization. The maximum increase for breast cancer was in the 90th percentile (β=0.13, p-value<0.001), for colorectal cancer in the 75th percentile (β=0.048, p-value<0.001), for prostate cancer in the 95th percentile (β=0.55, p-value<0.001), for lung cancer in the 95th percentile (β=0.52, p-value=0.006), and for pancreas cancer in the 10th percentile (β=0.011, p-value<0.001). For gastric, esophageal and skin cancers, the incidence rate decreased with increasing urbanization. The maximum decrease for gastric cancer was in the 90th percentile (β=0.003, p-value<0.001), for esophageal cancer in the 95th (β=0.04, p-value=0.4) and for skin cancer also in the 95th (β=0.145, p-value=0.071). The AIC showed that for upper percentiles the fit of quantile regression was better than that of least squares regression. According to the results of this study, the significant impact of urbanization on cancer morbidity requires more effort and planning by policymakers and administrators in order to reduce risk factors such as pollution in urban areas and ensure proper nutrition.

  1. Early Warning Signals of Financial Crises with Multi-Scale Quantile Regressions of Log-Periodic Power Law Singularities.

    PubMed

    Zhang, Qun; Zhang, Qunzhi; Sornette, Didier

    2016-01-01

    We augment the existing literature using the Log-Periodic Power Law Singular (LPPLS) structures in the log-price dynamics to diagnose financial bubbles by providing three main innovations. First, we introduce the quantile regression to the LPPLS detection problem. This allows us to disentangle (at least partially) the genuine LPPLS signal and the a priori unknown complicated residuals. Second, we propose to combine the many quantile regressions with a multi-scale analysis, which aggregates and consolidates the obtained ensembles of scenarios. Third, we define and implement the so-called DS LPPLS Confidence™ and Trust™ indicators that enrich considerably the diagnostic of bubbles. Using a detailed study of the "S&P 500 1987" bubble and presenting analyses of 16 historical bubbles, we show that the quantile regression of LPPLS signals contributes useful early warning signals. The comparison between the constructed signals and the price development in these 16 historical bubbles demonstrates their significant predictive ability around the real critical time when the burst/rally occurs.

  2. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307

  3. Estimating normative limits of Heidelberg Retina Tomograph optic disc rim area with quantile regression.

    PubMed

    Artes, Paul H; Crabb, David P

    2010-01-01

    To investigate why the specificity of the Moorfields Regression Analysis (MRA) of the Heidelberg Retina Tomograph (HRT) varies with disc size, and to derive accurate normative limits for neuroretinal rim area to address this problem. Two datasets from healthy subjects (Manchester, UK, n = 88; Halifax, Nova Scotia, Canada, n = 75) were used to investigate the physiological relationship between the optic disc and neuroretinal rim area. Normative limits for rim area were derived by quantile regression (QR) and compared with those of the MRA (derived by linear regression). Logistic regression analyses were performed to quantify the association between disc size and positive classifications with the MRA, as well as with the QR-derived normative limits. In both datasets, the specificity of the MRA depended on optic disc size. The odds of observing a borderline or outside-normal-limits classification increased by approximately 10% for each 0.1 mm² increase in disc area (P < 0.1). The lower specificity of the MRA with large optic discs could be explained by the failure of linear regression to model the extremes of the rim area distribution (observations far from the mean). In comparison, the normative limits predicted by QR were larger for smaller discs (less specific, more sensitive), and smaller for larger discs, such that false-positive rates became independent of optic disc size. Normative limits derived by quantile regression appear to remove the size-dependence of specificity with the MRA. Because quantile regression does not rely on the restrictive assumptions of standard linear regression, it may be a more appropriate method for establishing normative limits in other clinical applications where the underlying distributions are nonnormal or have nonconstant variance.

  4. Modeling energy expenditure in children and adolescents using quantile regression

    PubMed Central

    Yang, Yunwen; Adolph, Anne L.; Puyau, Maurice R.; Vohra, Firoz A.; Zakeri, Issa F.

    2013-01-01

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). Study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obese children. First, QR models will be developed to predict minute-by-minute awake EE at different quantile levels based on heart rate (HR) and physical activity (PA) accelerometry counts, and child characteristics of age, sex, weight, and height. Second, the QR models will be used to evaluate the covariate effects of weight, PA, and HR across the conditional EE distribution. QR and ordinary least squares (OLS) regressions are estimated in 109 children, aged 5–18 yr. QR modeling of EE outperformed OLS regression for both nonobese and obese populations. Average prediction errors for QR compared with OLS were not only smaller at the median τ = 0.5 (18.6 vs. 21.4%), but also substantially smaller at the tails of the distribution (10.2 vs. 39.2% at τ = 0.1 and 8.7 vs. 19.8% at τ = 0.9). Covariate effects of weight, PA, and HR on EE for the nonobese and obese children differed across quantiles (P < 0.05). The associations (linear and quadratic) between PA and HR with EE were stronger for the obese than nonobese population (P < 0.05). In conclusion, QR provided more accurate predictions of EE compared with conventional OLS regression, especially at the tails of the distribution, and revealed substantially different covariate effects of weight, PA, and HR on EE in nonobese and obese children. PMID:23640591

  5. HIGHLIGHTING DIFFERENCES BETWEEN CONDITIONAL AND UNCONDITIONAL QUANTILE REGRESSION APPROACHES THROUGH AN APPLICATION TO ASSESS MEDICATION ADHERENCE

    PubMed Central

    BORAH, BIJAN J.; BASU, ANIRBAN

    2014-01-01

    The quantile regression (QR) framework provides a pragmatic approach in understanding the differential impacts of covariates along the distribution of an outcome. However, the QR framework that has pervaded the applied economics literature is based on the conditional quantile regression method. It is used to assess the impact of a covariate on a quantile of the outcome conditional on specific values of other covariates. In most cases, conditional quantile regression may generate results that are often not generalizable or interpretable in a policy or population context. In contrast, the unconditional quantile regression method provides more interpretable results as it marginalizes the effect over the distributions of other covariates in the model. In this paper, the differences between these two regression frameworks are highlighted, both conceptually and econometrically. Additionally, using real-world claims data from a large US health insurer, alternative QR frameworks are implemented to assess the differential impacts of covariates along the distribution of medication adherence among elderly patients with Alzheimer’s disease. PMID:23616446

  6. Robust and efficient estimation with weighted composite quantile regression

    NASA Astrophysics Data System (ADS)

    Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng

    2016-09-01

    In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
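    The composite part of CQR means that a single slope is shared across several quantile levels, each level keeping its own intercept. A brute-force illustrative sketch of the unweighted CQR objective for one predictor (the function names, the per-level intercept shortcut, and the grid search are ours; the paper's estimator additionally applies data-driven weights):

```python
def check_loss(u, tau):
    # Koenker-Bassett check function rho_tau(u)
    return u * (tau - (1.0 if u < 0 else 0.0))

def cqr_objective(x, y, beta, taus):
    """Composite quantile regression loss: one shared slope beta, with a
    separate intercept b_k per quantile level (here set to the tau_k-quantile
    of the residuals, its minimizer for a fixed slope)."""
    resid = sorted(yi - beta * xi for xi, yi in zip(x, y))
    total = 0.0
    for tau in taus:
        b = resid[int(tau * (len(resid) - 1))]  # crude per-level intercept
        total += sum(check_loss(r - b, tau) for r in resid)
    return total
```

Minimizing this objective over a slope grid recovers the true slope exactly on noiseless linear data, since the residuals then collapse to zero at every quantile level.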

  7. Early Warning Signals of Financial Crises with Multi-Scale Quantile Regressions of Log-Periodic Power Law Singularities

    PubMed Central

    Zhang, Qun; Zhang, Qunzhi; Sornette, Didier

    2016-01-01

    We augment the existing literature using the Log-Periodic Power Law Singular (LPPLS) structures in the log-price dynamics to diagnose financial bubbles by providing three main innovations. First, we introduce the quantile regression to the LPPLS detection problem. This allows us to disentangle (at least partially) the genuine LPPLS signal and the a priori unknown complicated residuals. Second, we propose to combine the many quantile regressions with a multi-scale analysis, which aggregates and consolidates the obtained ensembles of scenarios. Third, we define and implement the so-called DS LPPLS Confidence™ and Trust™ indicators that considerably enrich the diagnosis of bubbles. Using a detailed study of the “S&P 500 1987” bubble and presenting analyses of 16 historical bubbles, we show that the quantile regression of LPPLS signals contributes useful early warning signals. The comparison between the constructed signals and the price development in these 16 historical bubbles demonstrates their significant predictive ability around the real critical time when the burst/rally occurs. PMID:27806093

  8. Hospital charges associated with motorcycle crash factors: a quantile regression analysis.

    PubMed

    Olsen, Cody S; Thomas, Andrea M; Cook, Lawrence J

    2014-08-01

    Previous studies of motorcycle crash (MC) related hospital charges use trauma registries and hospital records, and do not adjust for the number of motorcyclists not requiring medical attention. This may lead to conservative estimates of helmet use effectiveness. MC records were probabilistically linked with emergency department and hospital records to obtain total hospital charges. Missing data were imputed. Multivariable quantile regression estimated reductions in hospital charges associated with helmet use and other crash factors. Motorcycle helmets were associated with reduced median hospital charges of $256 (42% reduction) and reduced 98th percentile of $32,390 (33% reduction). After adjusting for other factors, helmets were associated with reductions in charges in all upper percentiles studied. Quantile regression models described homogenous and heterogeneous associations between other crash factors and charges. Quantile regression comprehensively describes associations between crash factors and hospital charges. Helmet use among motorcyclists is associated with decreased hospital charges. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. Influences of spatial and temporal variation on fish-habitat relationships defined by regression quantiles

    Treesearch

    Jason B. Dunham; Brian S. Cade; James W. Terrell

    2002-01-01

    We used regression quantiles to model potentially limiting relationships between the standing crop of cutthroat trout Oncorhynchus clarki and measures of stream channel morphology. Regression quantile models indicated that variation in fish density was inversely related to the width:depth ratio of streams but not to stream width or depth alone. The...

  10. Contrasting OLS and Quantile Regression Approaches to Student "Growth" Percentiles

    ERIC Educational Resources Information Center

    Castellano, Katherine Elizabeth; Ho, Andrew Dean

    2013-01-01

    Regression methods can locate student test scores in a conditional distribution, given past scores. This article contrasts and clarifies two approaches to describing these locations in terms of readily interpretable percentile ranks or "conditional status percentile ranks." The first is Betebenner's quantile regression approach that results in…

  11. Highlighting differences between conditional and unconditional quantile regression approaches through an application to assess medication adherence.

    PubMed

    Borah, Bijan J; Basu, Anirban

    2013-09-01

    The quantile regression (QR) framework provides a pragmatic approach to understanding the differential impacts of covariates along the distribution of an outcome. However, the QR framework that has pervaded the applied economics literature is based on the conditional quantile regression method. It is used to assess the impact of a covariate on a quantile of the outcome conditional on specific values of other covariates. In most cases, conditional quantile regression generates results that are not generalizable or interpretable in a policy or population context. In contrast, the unconditional quantile regression method provides more interpretable results as it marginalizes the effect over the distributions of other covariates in the model. In this paper, the differences between these two regression frameworks are highlighted, both conceptually and econometrically. Additionally, using real-world claims data from a large US health insurer, alternative QR frameworks are implemented to assess the differential impacts of covariates along the distribution of medication adherence among elderly patients with Alzheimer's disease. Copyright © 2013 John Wiley & Sons, Ltd.

  12. Goodness of Fit and Misspecification in Quantile Regressions

    ERIC Educational Resources Information Center

    Furno, Marilena

    2011-01-01

    The article considers a test of specification for quantile regressions. The test relies on the increase of the objective function and the worsening of the fit when unnecessary constraints are imposed. It compares the objective functions of restricted and unrestricted models and, in its different formulations, it verifies (a) forecast ability, (b)…

  13. Analysis of the labor productivity of enterprises via quantile regression

    NASA Astrophysics Data System (ADS)

    Türkan, Semra

    2017-07-01

    In this study, we have analyzed the factors that affect the performance of Turkey's Top 500 Industrial Enterprises using quantile regression. The variable for labor productivity of enterprises is the dependent variable, and the variable for assets is the independent variable. The distribution of labor productivity of enterprises is right-skewed. If the dependent distribution is skewed, linear regression cannot capture important aspects of the relationships between the dependent variable and its predictors, because it models only the conditional mean. Hence, quantile regression, which allows modeling any quantile of the dependent distribution, including the median, appears to be useful. It examines whether relationships between dependent and independent variables differ for low, medium, and high percentiles. As a result of analyzing the data, the effect of total assets is relatively constant over the entire distribution, except in the upper tail, where it has a moderately stronger effect.

  14. Quantile regression in the presence of monotone missingness with sensitivity analysis

    PubMed Central

    Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.

    2016-01-01

    In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008

  15. Analysis of the Influence of Quantile Regression Model on Mainland Tourists' Service Satisfaction Performance

    PubMed Central

    Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen

    2014-01-01

    It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least-squares (mean) regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models. PMID:24574916

  16. Analysis of the influence of quantile regression model on mainland tourists' service satisfaction performance.

    PubMed

    Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen

    2014-01-01

    It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least-squares (mean) regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models.

  17. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0) and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvements in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application
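    The permutation idea behind these tests can be illustrated with a much simpler two-sample analogue: permute group labels and recompute a quantile contrast to build an exact reference distribution. This sketch is not the paper's rank-score F procedure (the function names and the crude quantile estimator are ours, and exhaustive enumeration is only feasible for tiny samples):

```python
from itertools import combinations

def quantile(vals, tau):
    # crude order-statistic quantile
    vs = sorted(vals)
    return vs[int(tau * (len(vs) - 1))]

def perm_test_quantile_diff(a, b, tau):
    """Exact two-sample permutation p-value for a difference in tau-quantiles:
    enumerate every relabeling of the pooled data and count relabelings whose
    quantile contrast is at least as extreme as the observed one."""
    pooled = a + b
    n_a = len(a)
    observed = abs(quantile(a, tau) - quantile(b, tau))
    idx = range(len(pooled))
    count = 0
    total = 0
    for group in combinations(idx, n_a):
        ga = [pooled[i] for i in group]
        gb = [pooled[i] for i in idx if i not in group]
        total += 1
        if abs(quantile(ga, tau) - quantile(gb, tau)) >= observed:
            count += 1
    return count / total
```

Identical groups yield p = 1.0 (every relabeling is at least as extreme as an observed difference of zero), while well-separated groups yield a small p-value.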

  18. A quantile regression model for failure-time data with time-dependent covariates

    PubMed Central

    Gorfine, Malka; Goldberg, Yair; Ritov, Ya’acov

    2017-01-01

    Summary Since survival data occur over time, often important covariates that we wish to consider also change over time. Such covariates are referred to as time-dependent covariates. Quantile regression offers flexible modeling of survival data by allowing the effects of covariates to vary with quantiles. This article provides a novel quantile regression model accommodating time-dependent covariates, for analyzing survival data subject to right censoring. Our simple estimation technique assumes the existence of instrumental variables. In addition, we present a doubly-robust estimator in the sense of Robins and Rotnitzky (1992, Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell, N. P., Dietz, K. and Farewell, V. T. (editors), AIDS Epidemiology. Boston: Birkhäuser, pp. 297–331.). The asymptotic properties of the estimators are rigorously studied. Finite-sample properties are demonstrated by a simulation study. The utility of the proposed methodology is demonstrated using the Stanford heart transplant dataset. PMID:27485534

  19. Statistical downscaling modeling with quantile regression using lasso to estimate extreme rainfall

    NASA Astrophysics Data System (ADS)

    Santri, Dewi; Wigena, Aji Hamim; Djuraidah, Anik

    2016-02-01

    Rainfall is one of the climatic elements with high variability, and extreme rainfall in particular has many negative impacts, so methods are needed to minimize the damage that may occur. So far, global circulation models (GCMs) are the best method for forecasting global climate change, including extreme rainfall. Statistical downscaling (SD) is a technique for developing the relationship between GCM output, as global-scale independent variables, and rainfall, as a local-scale response variable. Using GCM output directly against observations is difficult because it is high dimensional and its variables are multicollinear. Common methods for handling this problem are principal components analysis (PCA) and partial least squares regression. A newer method that can be used is the lasso, which has the advantage of simultaneously controlling the variance of the fitted coefficients and performing automatic variable selection. Quantile regression can be used to detect extreme rainfall at both the dry and wet extremes. The objective of this study is to model SD using quantile regression with the lasso to predict extreme rainfall in Indramayu. The results showed that extreme rainfall (extreme wet in January, February, and December) in Indramayu could be predicted properly by the model at the 90th quantile.
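    The lasso component adds an L1 penalty to the quantile (check) loss, shrinking the coefficients of irrelevant GCM predictors to zero. A one-predictor, grid-search caricature of penalized quantile regression (purely illustrative and named by us; real implementations solve a linear program over many predictors simultaneously):

```python
def check_loss(u, tau):
    # Koenker-Bassett check function rho_tau(u)
    return u * (tau - (1.0 if u < 0 else 0.0))

def lasso_qr_slope(x, y, tau, lam, grid):
    """Slope of a no-intercept quantile regression with an L1 penalty,
    found by brute-force search over a candidate slope grid."""
    def objective(beta):
        fit = sum(check_loss(yi - beta * xi, tau) for xi, yi in zip(x, y))
        return fit + lam * abs(beta)   # check loss + lasso penalty
    return min(grid, key=objective)
```

With no penalty the true slope is recovered on clean data; with a large penalty the slope is shrunk all the way to zero, which is the variable-selection behavior the abstract relies on.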

  20. Influences of spatial and temporal variation on fish-habitat relationships defined by regression quantiles

    USGS Publications Warehouse

    Dunham, J.B.; Cade, B.S.; Terrell, J.W.

    2002-01-01

    We used regression quantiles to model potentially limiting relationships between the standing crop of cutthroat trout Oncorhynchus clarki and measures of stream channel morphology. Regression quantile models indicated that variation in fish density was inversely related to the width:depth ratio of streams but not to stream width or depth alone. The spatial and temporal stability of model predictions were examined across years and streams, respectively. Variation in fish density with width:depth ratio (10th-90th regression quantiles) modeled for streams sampled in 1993-1997 predicted the variation observed in 1998-1999, indicating similar habitat relationships across years. Both linear and nonlinear models described the limiting relationships well, the latter performing slightly better. Although estimated relationships were transferable in time, results were strongly dependent on the influence of spatial variation in fish density among streams. Density changes with width:depth ratio in a single stream were responsible for the significant (P < 0.10) negative slopes estimated for the higher quantiles (>80th). This suggests that stream-scale factors other than width:depth ratio play a more direct role in determining population density. Much of the variation in densities of cutthroat trout among streams was attributed to the occurrence of nonnative brook trout Salvelinus fontinalis (a possible competitor) or connectivity to migratory habitats. Regression quantiles can be useful for estimating the effects of limiting factors when ecological responses are highly variable, but our results indicate that spatiotemporal variability in the data should be explicitly considered. In this study, data from individual streams and stream-specific characteristics (e.g., the occurrence of nonnative species and habitat connectivity) strongly affected our interpretation of the relationship between width:depth ratio and fish density.

  1. Quantile regression reveals hidden bias and uncertainty in habitat models

    Treesearch

    Brian S. Cade; Barry R. Noon; Curtis H. Flather

    2005-01-01

    We simulated the effects of missing information on statistical distributions of animal response that covaried with measured predictors of habitat to evaluate the utility and performance of quantile regression for providing more useful intervals of uncertainty in habitat relationships. These procedures were evaluated for conditions in which heterogeneity and hidden bias...

  2. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    PubMed

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
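    The data-generating process described here, a Cobb-Douglas production frontier with symmetric noise and a one-sided technical-inefficiency term, can be sketched as follows (the parameter names are ours; the paper varies the inefficiency distribution across its Monte Carlo experiments):

```python
import math

def cobb_douglas_frontier(x, alpha, a_const):
    # deterministic frontier output for a single input x
    return a_const * (x ** alpha)

def observed_output(x, alpha, a_const, v, u):
    """Simulated observation: frontier * exp(v - u), where v is symmetric
    noise and u >= 0 is technical inefficiency, as in the stochastic-frontier
    data-generating process the paper simulates."""
    return cobb_douglas_frontier(x, alpha, a_const) * math.exp(v - u)
```

In a simulation, u would be drawn from a one-sided distribution (e.g., abs(random.gauss(0, sigma)) for half-normal inefficiency); with u > 0 and v = 0, observed output always falls below the frontier.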

  3. Managing more than the mean: Using quantile regression to identify factors related to large elk groups

    USGS Publications Warehouse

    Brennan, Angela K.; Cross, Paul C.; Creely, Scott

    2015-01-01

    Synthesis and applications. Our analysis of elk group size distributions using quantile regression suggests that private land, irrigation, open habitat, elk density and wolf abundance can affect large elk group sizes. Thus, to manage larger groups by removal or dispersal of individuals, we recommend incentivizing hunting on private land (particularly if irrigated) during the regular and late hunting seasons, promoting tolerance of wolves on private land (if elk aggregate in these areas to avoid wolves) and creating more winter range and varied habitats. Relationships to the variables of interest also differed by quantile, highlighting the importance of using quantile regression to examine response variables more completely to uncover relationships important to conservation and management.

  4. A Quantile Regression Approach to Understanding the Relations Between Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students

    PubMed Central

    Tighe, Elizabeth L.; Schatschneider, Christopher

    2015-01-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in Adult Basic Education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. PMID:25351773

  5. A Quantile Regression Approach to Understanding the Relations Among Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students.

    PubMed

    Tighe, Elizabeth L; Schatschneider, Christopher

    2016-07-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82%-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. © Hammill Institute on Disabilities 2014.

  6. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    ERIC Educational Resources Information Center

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…

  7. Regional trends in short-duration precipitation extremes: a flexible multivariate monotone quantile regression approach

    NASA Astrophysics Data System (ADS)

    Cannon, Alex

    2017-04-01

    Estimating historical trends in short-duration rainfall extremes at regional and local scales is challenging due to low signal-to-noise ratios and the limited availability of homogenized observational data. In addition to being of scientific interest, trends in rainfall extremes are of practical importance, as their presence calls into question the stationarity assumptions that underpin traditional engineering and infrastructure design practice. Even with these fundamental challenges, increasingly complex questions are being asked about time series of extremes. For instance, users may not only want to know whether or not rainfall extremes have changed over time, they may also want information on the modulation of trends by large-scale climate modes or on the nonstationarity of trends (e.g., identifying hiatus periods or periods of accelerating positive trends). Efforts have thus been devoted to the development and application of more robust and powerful statistical estimators for regional and local scale trends. While a standard nonparametric method like the regional Mann-Kendall test, which tests for the presence of monotonic trends (i.e., strictly non-decreasing or non-increasing changes), makes fewer assumptions than parametric methods and pools information from stations within a region, it is not designed to visualize detected trends, include information from covariates, or answer questions about the rate of change in trends. As a remedy, monotone quantile regression (MQR) has been developed as a nonparametric alternative that can be used to estimate a common monotonic trend in extremes at multiple stations. Quantile regression makes efficient use of data by directly estimating conditional quantiles based on information from all rainfall data in a region, i.e., without having to precompute the sample quantiles. The MQR method is also flexible and can be used to visualize and analyze the nonlinearity of the detected trend. However, it is fundamentally a

  8. Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk

    PubMed Central

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
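    WQS scores each chemical into quantile categories (commonly quartiles, scored 0-3) and collapses them into a single weighted body-burden index for regression. A sketch of the index construction with supplied weights (in WQS regression the weights are estimated under nonnegativity and sum-to-one constraints; here they are given, and our quartile cuts are crude):

```python
def quartile_score(column):
    """Map each value to 0-3 by quartile membership within its own variable."""
    srt = sorted(column)
    n = len(srt)
    cuts = [srt[int(q * (n - 1))] for q in (0.25, 0.5, 0.75)]  # crude quartile cuts
    return [sum(1 for c in cuts if v > c) for v in column]

def wqs_index(rows, weights):
    """Weighted quantile sum index: weighted sum of quartile scores per subject.
    rows is a list of subjects, each a list of chemical concentrations;
    weights are assumed non-negative and to sum to one."""
    cols = list(zip(*rows))                       # one column per chemical
    scored = [quartile_score(list(c)) for c in cols]
    scored_rows = list(zip(*scored))              # back to one row per subject
    return [sum(w * s for w, s in zip(weights, row)) for row in scored_rows]
```

Regressing the outcome on this single index (plus covariates) is what sidesteps the collinearity among the raw chemical concentrations.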

  9. Assessment of weighted quantile sum regression for modeling chemical mixtures and cancer risk.

    PubMed

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case-control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome.

  10. A simulation study of nonparametric total deviation index as a measure of agreement based on quantile regression.

    PubMed

    Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael

    2016-01-01

Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to that of TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform, at quantile levels of 0.8 and 0.9, for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases, even for moderate (say, ≥40) sample sizes. The performance of TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of the nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution of the difference is not normal, especially when it has a heavy tail.
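In its simplest intercept-only form, the nonparametric TDI described above is just an empirical quantile of the absolute paired differences. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def tdi(x, y, p=0.9):
    """Nonparametric total deviation index: the p-th quantile of |x - y|.

    With probability roughly p, a pair of readings from the two methods
    differs by less than TDI(p).
    """
    return float(np.quantile(np.abs(np.asarray(x) - np.asarray(y)), p))

rng = np.random.default_rng(0)
method_a = rng.normal(size=1000)
method_b = method_a + rng.normal(scale=0.5, size=1000)  # paired readings
print(round(tdi(method_a, method_b, 0.9), 2))
```

Replacing the single quantile with a quantile regression of |x - y| on covariates gives the covariate-adjusted version studied in the paper.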

  11. Using nonlinear quantile regression to estimate the self-thinning boundary curve

    Treesearch

    Quang V. Cao; Thomas J. Dean

    2015-01-01

    The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
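A self-thinning boundary of this kind can be sketched by fitting an upper quantile on the log-log scale via the pinball (check) loss. The sketch below uses synthetic data generated with a Reineke-type slope of -1.6; the generating model and optimizer choice are illustrative, not the authors' nonlinear specification:

```python
import numpy as np
from scipy.optimize import minimize

def fit_quantile_line(x, y, tau=0.95):
    """Fit y = b0 + b1*x at quantile tau by minimizing the pinball (check) loss."""
    def loss(beta):
        r = y - (beta[0] + beta[1] * x)
        return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))
    b1, b0 = np.polyfit(x, y, 1)              # least-squares starting values
    return minimize(loss, x0=[b0, b1], method="Nelder-Mead").x

# synthetic stand data: log(trees per ha) scattered below a linear boundary
rng = np.random.default_rng(1)
log_d = rng.uniform(2.0, 4.0, 300)                      # log quadratic mean diameter
log_n = 11.5 - 1.6 * log_d - rng.exponential(0.5, 300)  # points below the boundary
b0, b1 = fit_quantile_line(log_d, log_n, tau=0.95)
print(round(b1, 2))  # slope should land near the generating value of -1.6
```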

  12. Applying quantile regression for modeling equivalent property damage only crashes to identify accident blackspots.

    PubMed

    Washington, Simon; Haque, Md Mazharul; Oh, Jutaek; Lee, Dongmin

    2014-05-01

Hot spot identification (HSID) aims to identify potential sites (roadway segments, intersections, crosswalks, interchanges, ramps, etc.) with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to misuse of the available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues like underreporting of minor-injury and property damage only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that are often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and the quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than on the population mean like most methods in practice, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional NB model in dealing…
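The PDO-equivalency step above reduces to a weighted sum of crash counts by severity, after which sites are flagged against an upper quantile. A stylized sketch using an unconditional quantile threshold (the severity weights and the 90th-percentile cutoff are hypothetical, not the paper's values, and the paper conditions the quantile on covariates):

```python
import numpy as np

# Hypothetical severity weights (illustrative, not the paper's values)
def epdo(pdo, injury, fatal, w_injury=12.0, w_fatal=300.0):
    """Equivalent property-damage-only (EPDO) crash score for a site."""
    return pdo + w_injury * injury + w_fatal * fatal

rng = np.random.default_rng(2)
n_sites = 500
pdo = rng.poisson(8, n_sites)           # property-damage-only crashes
injury = rng.poisson(1.5, n_sites)      # injury crashes
fatal = rng.binomial(1, 0.02, n_sites)  # fatal crashes

scores = epdo(pdo, injury, fatal)
# flag sites whose EPDO score exceeds the empirical 90th percentile
hotspots = np.flatnonzero(scores > np.quantile(scores, 0.90))
print(len(hotspots), "candidate hot spots")
```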

  13. Modeling distributional changes in winter precipitation of Canada using Bayesian spatiotemporal quantile regression subjected to different teleconnections

    NASA Astrophysics Data System (ADS)

    Tan, Xuezhi; Gan, Thian Yew; Chen, Shu; Liu, Bingjun

    2018-05-01

Climate change and large-scale climate patterns may result in changes in the probability distributions of climate variables, reflected in changes in the mean, the variability, and the severity of extreme climate events. In this paper, we applied a flexible framework based on the Bayesian spatiotemporal quantile regression (BSTQR) model to identify climate changes at different quantile levels and their teleconnections to large-scale climate patterns such as the El Niño-Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO) and the Pacific-North American pattern (PNA). Using the BSTQR model with time (year) as a covariate, we estimated changes in Canadian winter precipitation and their uncertainties at different quantile levels. Some stations in eastern Canada showed distributional changes in winter precipitation, such as an increase in low quantiles but a decrease in high quantiles. Because quantile functions in the BSTQR model vary with space and time and assimilate spatiotemporal precipitation data, the BSTQR model produced much smoother and less uncertain quantile changes in space than classical regression that ignores spatiotemporal correlations. Using the BSTQR model with five teleconnection indices (i.e., SOI, PDO, PNA, NP and NAO) as covariates, we investigated the effects of large-scale climate patterns on Canadian winter precipitation at different quantile levels. Winter precipitation responses to these five teleconnections differed across quantile levels and were stronger at low and high quantile levels than at medium ones.

  14. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    PubMed

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of the repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we establish a Bayesian joint model that accounts for all of these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.

  15. Factors associated with the income distribution of full-time physicians: a quantile regression approach.

    PubMed

    Shih, Ya-Chen Tina; Konrad, Thomas R

    2007-10-01

    Physician income is generally high, but quite variable; hence, physicians have divergent perspectives regarding health policy initiatives and market reforms that could affect their incomes. We investigated factors underlying the distribution of income within the physician population. Full-time physicians (N=10,777) from the restricted version of the 1996-1997 Community Tracking Study Physician Survey (CTS-PS), 1996 Area Resource File, and 1996 health maintenance organization penetration data. We conducted separate analyses for primary care physicians (PCPs) and specialists. We employed least square and quantile regression models to examine factors associated with physician incomes at the mean and at various points of the income distribution, respectively. We accounted for the complex survey design for the CTS-PS data using appropriate weighted procedures and explored endogeneity using an instrumental variables method. We detected widespread and subtle effects of many variables on physician incomes at different points (10th, 25th, 75th, and 90th percentiles) in the distribution that were undetected when employing regression estimations focusing on only the means or medians. Our findings show that the effects of managed care penetration are demonstrable at the mean of specialist incomes, but are more pronounced at higher levels. Conversely, a gender gap in earnings occurs at all levels of income of both PCPs and specialists, but is more pronounced at lower income levels. The quantile regression technique offers an analytical tool to evaluate policy effects beyond the means. A longitudinal application of this approach may enable health policy makers to identify winners and losers among segments of the physician workforce and assess how market dynamics and health policy initiatives affect the overall physician income distribution over various time intervals.

  16. Using Gamma and Quantile Regressions to Explore the Association between Job Strain and Adiposity in the ELSA-Brasil Study: Does Gender Matter?

    PubMed

    Fonseca, Maria de Jesus Mendes da; Juvanhol, Leidjaira Lopes; Rotenberg, Lúcia; Nobre, Aline Araújo; Griep, Rosane Härter; Alves, Márcia Guimarães de Mello; Cardoso, Letícia de Oliveira; Giatti, Luana; Nunes, Maria Angélica; Aquino, Estela M L; Chor, Dóra

    2017-11-17

    This paper explores the association between job strain and adiposity, using two statistical analysis approaches and considering the role of gender. The research evaluated 11,960 active baseline participants (2008-2010) in the ELSA-Brasil study. Job strain was evaluated through a demand-control questionnaire, while body mass index (BMI) and waist circumference (WC) were evaluated in continuous form. The associations were estimated using gamma regression models with an identity link function. Quantile regression models were also estimated from the final set of co-variables established by gamma regression. The relationship that was found varied by analytical approach and gender. Among the women, no association was observed between job strain and adiposity in the fitted gamma models. In the quantile models, a pattern of increasing effects of high strain was observed at higher BMI and WC distribution quantiles. Among the men, high strain was associated with adiposity in the gamma regression models. However, when quantile regression was used, that association was found not to be homogeneous across outcome distributions. In addition, in the quantile models an association was observed between active jobs and BMI. Our results point to an association between job strain and adiposity, which follows a heterogeneous pattern. Modelling strategies can produce different results and should, accordingly, be used to complement one another.

  17. Economic policy uncertainty, equity premium and dependence between their quantiles: Evidence from quantile-on-quantile approach

    NASA Astrophysics Data System (ADS)

    Raza, Syed Ali; Zaighum, Isma; Shah, Nida

    2018-02-01

This paper examines the relationship between economic policy uncertainty (EPU) and the equity premium in G7 countries, using monthly data from January 1989 to December 2015 and a novel technique, quantile-on-quantile (QQ) regression, proposed by Sim and Zhou (2015). With the QQ approach, we estimate how the quantiles of economic policy uncertainty affect the quantiles of the equity premium, providing a more comprehensive view of the overall dependence structure between the equity premium and economic policy uncertainty than traditional techniques like OLS or standard quantile regression. Overall, our empirical evidence suggests a negative association between the equity premium and EPU in all G7 countries, especially in the extreme low and extreme high tails. However, differences exist among countries and across different quantiles of EPU and the equity premium within each country. This heterogeneity among countries reflects differences in their dependency on economic policy, their exposure to other stock markets, and their linkages with other countries' equity markets.
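One common way to sketch a QQ-type estimate is to re-fit quantile regression locally around each quantile of the regressor, kernel-weighting observations by how close the regressor's empirical rank is to the target quantile θ. The bandwidth, data-generating process, and names below are illustrative, not Sim and Zhou's exact estimator:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def qq_slope(x, y, tau, theta, h=0.1):
    """Slope of the tau-th conditional quantile of y, estimated locally
    around the theta-th quantile of x via Gaussian kernel weights."""
    u = rankdata(x) / len(x)                   # empirical CDF positions of x
    k = np.exp(-0.5 * ((u - theta) / h) ** 2)  # weight observations near theta

    def loss(b):
        r = y - (b[0] + b[1] * x)
        return np.sum(k * np.where(r >= 0, tau * r, (tau - 1) * r))

    return minimize(loss, x0=[np.quantile(y, tau), 0.0],
                    method="Nelder-Mead").x[1]

rng = np.random.default_rng(4)
epu = rng.normal(size=1500)                    # stand-in for policy uncertainty
premium = -0.5 * epu + rng.normal(size=1500)   # built-in negative dependence
print(round(qq_slope(epu, premium, tau=0.1, theta=0.9), 2))
```

Evaluating `qq_slope` over a grid of (tau, theta) pairs yields the dependence surface the abstract describes.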

  18. A quantile regression approach can reveal the effect of fruit and vegetable consumption on plasma homocysteine levels.

    PubMed

    Verly, Eliseu; Steluti, Josiane; Fisberg, Regina Mara; Marchioni, Dirce Maria Lobo

    2014-01-01

A reduction in homocysteine concentration due to the use of supplemental folic acid is well recognized, although evidence of the same effect for natural folate sources, such as fruits and vegetables (FV), is lacking. Traditional statistical analysis approaches do not provide this information. As an alternative, quantile regression allows the effects of covariates to be explored across percentiles of the conditional distribution of the dependent variable. Our aim was to investigate how the association of FV intake with plasma total homocysteine (tHcy) differs across percentiles of the distribution using quantile regression. A cross-sectional population-based survey was conducted among 499 residents of Sao Paulo City, Brazil. The participants provided food intake and fasting blood samples. Fruit and vegetable intake was predicted by adjusting for day-to-day variation using a proper measurement error model. We performed a quantile regression to verify the association between tHcy and the predicted FV intake, and calculated the predicted values of tHcy for each percentile model assuming an increase of 200 g in FV intake. The results showed that tHcy was inversely associated with FV intake when assessed by linear regression, whereas the association differed when using quantile regression: the relationship with FV consumption was inverse and significant for almost all percentiles of tHcy, and the coefficients increased as the percentile of tHcy increased. A simulated increase of 200 g in FV intake could decrease tHcy levels at all percentiles, but the higher percentiles of tHcy benefited more. Using an innovative statistical approach, this study confirms that the effect of FV intake on lowering tHcy levels depends on the level of tHcy itself. From a public health point of view, encouraging people to increase FV intake would particularly benefit those with high levels of tHcy.

  19. Quantile Regression for Analyzing Heterogeneity in Ultra-high Dimension

    PubMed Central

    Wang, Lan; Wu, Yichao

    2012-01-01

    Ultra-high dimensional data often display heterogeneity due to either heteroscedastic variance or other forms of non-location-scale covariate effects. To accommodate heterogeneity, we advocate a more general interpretation of sparsity which assumes that only a small number of covariates influence the conditional distribution of the response variable given all candidate covariates; however, the sets of relevant covariates may differ when we consider different segments of the conditional distribution. In this framework, we investigate the methodology and theory of nonconvex penalized quantile regression in ultra-high dimension. The proposed approach has two distinctive features: (1) it enables us to explore the entire conditional distribution of the response variable given the ultra-high dimensional covariates and provides a more realistic picture of the sparsity pattern; (2) it requires substantially weaker conditions compared with alternative methods in the literature; thus, it greatly alleviates the difficulty of model checking in the ultra-high dimension. In theoretic development, it is challenging to deal with both the nonsmooth loss function and the nonconvex penalty function in ultra-high dimensional parameter space. We introduce a novel sufficient optimality condition which relies on a convex differencing representation of the penalized loss function and the subdifferential calculus. Exploring this optimality condition enables us to establish the oracle property for sparse quantile regression in the ultra-high dimension under relaxed conditions. The proposed method greatly enhances existing tools for ultra-high dimensional data analysis. Monte Carlo simulations demonstrate the usefulness of the proposed procedure. The real data example we analyzed demonstrates that the new approach reveals substantially more information compared with alternative methods. PMID:23082036

  20. Estimating the Extreme Behaviors of Students Performance Using Quantile Regression--Evidences from Taiwan

    ERIC Educational Resources Information Center

    Chen, Sheng-Tung; Kuo, Hsiao-I.; Chen, Chi-Chung

    2012-01-01

    The two-stage least squares approach together with quantile regression analysis is adopted here to estimate the educational production function. Such a methodology is able to capture the extreme behaviors of the two tails of students' performance and the estimation outcomes have important policy implications. Our empirical study is applied to the…

  1. Factors Associated with the Income Distribution of Full-Time Physicians: A Quantile Regression Approach

    PubMed Central

    Shih, Ya-Chen Tina; Konrad, Thomas R

    2007-01-01

    Objective Physician income is generally high, but quite variable; hence, physicians have divergent perspectives regarding health policy initiatives and market reforms that could affect their incomes. We investigated factors underlying the distribution of income within the physician population. Data Sources Full-time physicians (N=10,777) from the restricted version of the 1996–1997 Community Tracking Study Physician Survey (CTS-PS), 1996 Area Resource File, and 1996 health maintenance organization penetration data. Study Design We conducted separate analyses for primary care physicians (PCPs) and specialists. We employed least square and quantile regression models to examine factors associated with physician incomes at the mean and at various points of the income distribution, respectively. We accounted for the complex survey design for the CTS-PS data using appropriate weighted procedures and explored endogeneity using an instrumental variables method. Principal Findings We detected widespread and subtle effects of many variables on physician incomes at different points (10th, 25th, 75th, and 90th percentiles) in the distribution that were undetected when employing regression estimations focusing on only the means or medians. Our findings show that the effects of managed care penetration are demonstrable at the mean of specialist incomes, but are more pronounced at higher levels. Conversely, a gender gap in earnings occurs at all levels of income of both PCPs and specialists, but is more pronounced at lower income levels. Conclusions The quantile regression technique offers an analytical tool to evaluate policy effects beyond the means. A longitudinal application of this approach may enable health policy makers to identify winners and losers among segments of the physician workforce and assess how market dynamics and health policy initiatives affect the overall physician income distribution over various time intervals. PMID:17850525

  2. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the strengths and difficulties questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.

  3. Alternative configurations of Quantile Regression for estimating predictive uncertainty in water level forecasts for the Upper Severn River: a comparison

    NASA Astrophysics Data System (ADS)

    Lopez, Patricia; Verkade, Jan; Weerts, Albrecht; Solomatine, Dimitri

    2014-05-01

Hydrological forecasting is subject to many sources of uncertainty, including those originating in initial state, boundary conditions, model structure and model parameters. Although uncertainty can be reduced, it can never be fully eliminated. Statistical post-processing techniques constitute an often used approach to estimating hydrological predictive uncertainty, in which a model of forecast error is built using a historical record of past forecasts and observations. The present study focuses on the use of the Quantile Regression (QR) technique as a hydrological post-processor. It estimates the predictive distribution of water levels using deterministic water level forecasts as predictors. This work aims to thoroughly verify uncertainty estimates from the implementation of QR that was applied in an operational setting in the UK National Flood Forecasting System, and to inter-compare forecast quality and skill across several differing configurations of QR. These configurations are (i) 'classical' QR, (ii) QR constrained by a requirement that quantiles do not cross, (iii) QR derived on time series that have been transformed into the Normal domain (Normal Quantile Transformation, NQT), and (iv) a piecewise linear derivation of QR models. The QR configurations are applied to fourteen hydrological stations on the Upper Severn River with different catchment characteristics. Results of each QR configuration are conditionally verified for progressively higher flood levels, in terms of commonly used verification metrics and skill scores. These include the Brier score (BS), the continuous ranked probability score (CRPS) and corresponding skill scores, as well as the Relative Operating Characteristic score (ROCS). Reliability diagrams are also presented and analysed. The results indicate that none of the four Quantile Regression configurations clearly outperforms the others.
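The quantile-crossing problem addressed by configuration (ii) can also be repaired after estimation by monotone rearrangement: sorting the predicted quantiles at each forecast point (Chernozhukov, Fernandez-Val and Galichon's rearrangement, shown here as a generic sketch rather than the constraint used operationally):

```python
import numpy as np

def rearrange(quantile_preds):
    """Enforce non-crossing quantiles by sorting each row.

    quantile_preds: shape (n_points, n_taus), columns ordered by tau.
    Row-wise sorting (monotone rearrangement) never worsens, and typically
    improves, the estimated quantile curves.
    """
    return np.sort(np.asarray(quantile_preds, dtype=float), axis=1)

# two forecast points; the first has crossed 0.5/0.9 quantile forecasts
preds = np.array([[1.2, 3.0, 2.5],
                  [0.8, 1.1, 1.9]])
print(rearrange(preds))
```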

  4. Heterogeneous effects of oil shocks on exchange rates: evidence from a quantile regression approach.

    PubMed

    Su, Xianfang; Zhu, Huiming; You, Wanhai; Ren, Yinghua

    2016-01-01

    The determinants of exchange rates have attracted considerable attention among researchers over the past several decades. Most studies, however, ignore the possibility that the impact of oil shocks on exchange rates could vary across the exchange rate returns distribution. We employ a quantile regression approach to address this issue. Our results indicate that the effect of oil shocks on exchange rates is heterogeneous across quantiles. A large US depreciation or appreciation tends to heighten the effects of oil shocks on exchange rate returns. Positive oil demand shocks lead to appreciation pressures in oil-exporting countries and this result is robust across lower and upper return distributions. These results offer rich and useful information for investors and decision-makers.

  5. Trait Mindfulness as a Limiting Factor for Residual Depressive Symptoms: An Explorative Study Using Quantile Regression

    PubMed Central

    Radford, Sholto; Eames, Catrin; Brennan, Kate; Lambert, Gwladys; Crane, Catherine; Williams, J. Mark G.; Duggan, Danielle S.; Barnhofer, Thorsten

    2014-01-01

    Mindfulness has been suggested to be an important protective factor for emotional health. However, this effect might vary with regard to context. This study applied a novel statistical approach, quantile regression, in order to investigate the relation between trait mindfulness and residual depressive symptoms in individuals with a history of recurrent depression, while taking into account symptom severity and number of episodes as contextual factors. Rather than fitting to a single indicator of central tendency, quantile regression allows exploration of relations across the entire range of the response variable. Analysis of self-report data from 274 participants with a history of three or more previous episodes of depression showed that relatively higher levels of mindfulness were associated with relatively lower levels of residual depressive symptoms. This relationship was most pronounced near the upper end of the response distribution and moderated by the number of previous episodes of depression at the higher quantiles. The findings suggest that with lower levels of mindfulness, residual symptoms are less constrained and more likely to be influenced by other factors. Further, the limiting effect of mindfulness on residual symptoms is most salient in those with higher numbers of episodes. PMID:24988072

  6. The effectiveness of drinking and driving policies for different alcohol-related fatalities: a quantile regression analysis.

    PubMed

    Ying, Yung-Hsiang; Wu, Chin-Chih; Chang, Koyin

    2013-09-27

    To understand the impact of drinking and driving laws on drinking and driving fatality rates, this study explored the different effects these laws have on areas with varying severity rates for drinking and driving. Unlike previous studies, this study employed quantile regression analysis. Empirical results showed that policies based on local conditions must be used to effectively reduce drinking and driving fatality rates; that is, different measures should be adopted to target the specific conditions in various regions. For areas with low fatality rates (low quantiles), people's habits and attitudes toward alcohol should be emphasized instead of transportation safety laws because "preemptive regulations" are more effective. For areas with high fatality rates (or high quantiles), "ex-post regulations" are more effective, and impact these areas approximately 0.01% to 0.05% more than they do areas with low fatality rates.

  7. Incremental Treatment Costs Attributable to Overweight and Obesity in Patients with Diabetes: Quantile Regression Approach.

    PubMed

    Lee, Seung-Mi; Choi, In-Sun; Han, Euna; Suh, David; Shin, Eun-Kyung; Je, Seyunghe; Lee, Sung Su; Suh, Dong-Churl

    2018-01-01

    This study aimed to estimate treatment costs attributable to overweight and obesity in patients with diabetes who were less than 65 years of age in the United States. This study used data from the Medical Expenditure Panel Survey from 2001 to 2013. Patients with diabetes were identified by using the International Classification of Diseases, Ninth Revision, Clinical Modification code (250), clinical classification codes (049 and 050), or self-reported physician diagnoses. Total treatment costs attributable to overweight and obesity were calculated as the differences in the adjusted costs compared with individuals with diabetes and normal weight. Adjusted costs were estimated by using generalized linear models or unconditional quantile regression models. The mean annual treatment costs attributable to obesity were $1,852 higher than those attributable to normal weight, while costs attributable to overweight were $133 higher. The unconditional quantile regression results indicated that the impact of obesity on total treatment costs gradually became more significant as treatment costs approached the upper quantile. Among patients with diabetes who were less than 65 years of age, patients with diabetes and obesity have significantly higher treatment costs than patients with diabetes and normal weight. The economic burden of diabetes to society will continue to increase unless more proactive preventive measures are taken to effectively treat patients with overweight or obesity. © 2017 The Obesity Society.
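Unconditional quantile regression of the kind used above is commonly implemented via recentered influence functions (Firpo, Fortin and Lemieux): the outcome is replaced by its RIF at the target quantile, RIF = q + (tau - 1{y <= q}) / f(q), and then regressed by OLS. A sketch with simulated, skewed cost data (the data and covariate are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

def rif_quantile(y, tau):
    """Recentered influence function of the tau-th unconditional quantile:
    RIF = q + (tau - 1{y <= q}) / f(q), with f estimated by a kernel density."""
    y = np.asarray(y, dtype=float)
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]                # density at the quantile
    return q + (tau - (y <= q)) / f_q

rng = np.random.default_rng(3)
cost = rng.lognormal(8.0, 1.0, 2000)           # skewed annual treatment costs
obese = rng.binomial(1, 0.4, 2000)             # illustrative indicator
rif = rif_quantile(cost, 0.9)
X = np.column_stack([np.ones_like(cost), obese])
beta = np.linalg.lstsq(X, rif, rcond=None)[0]  # OLS of the RIF on covariates
print(beta)
```

The fitted slope is interpreted as the covariate's effect on the 90th percentile of the overall cost distribution, not on the conditional quantile.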

  8. The Effectiveness of Drinking and Driving Policies for Different Alcohol-Related Fatalities: A Quantile Regression Analysis

    PubMed Central

    Ying, Yung-Hsiang; Wu, Chin-Chih; Chang, Koyin

    2013-01-01

    To understand the impact of drinking and driving laws on drinking and driving fatality rates, this study explored the different effects these laws have on areas with varying severity rates for drinking and driving. Unlike previous studies, this study employed quantile regression analysis. Empirical results showed that policies based on local conditions must be used to effectively reduce drinking and driving fatality rates; that is, different measures should be adopted to target the specific conditions in various regions. For areas with low fatality rates (low quantiles), people’s habits and attitudes toward alcohol should be emphasized instead of transportation safety laws because “preemptive regulations” are more effective. For areas with high fatality rates (or high quantiles), “ex-post regulations” are more effective, and impact these areas approximately 0.01% to 0.05% more than they do areas with low fatality rates. PMID:24084673

  9. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    USGS Publications Warehouse

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when the percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m in height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of…
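The three-step recipe in this abstract (jitter the bounded counts to a continuous variable, logit-transform within the bounds, fit linear quantile regression, average over jitters, then back-transform) can be sketched as follows. The covariate, bounds handling, and pinball-loss fit are illustrative, not the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import minimize

def fit_qr(x, y, tau):
    """Linear quantile regression via the pinball (check) loss."""
    def loss(b):
        r = y - (b[0] + b[1] * x)
        return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))
    return minimize(loss, x0=[np.quantile(y, tau), 0.0],
                    method="Nelder-Mead").x

def logistic_qr_counts(x, counts, tau, lo=0.0, hi=4.0, reps=20, seed=0):
    """Jitter counts in {0,...,3} into (0, 4), logit-transform, fit, average."""
    rng = np.random.default_rng(seed)
    betas = []
    for _ in range(reps):
        z = counts + rng.uniform(size=len(counts))      # step 1: jitter
        t = np.log((z - lo) / (hi - z))                 # step 2: bounded logit
        betas.append(fit_qr(x, t, tau))                 # step 3: linear QR
    return np.mean(betas, axis=0)                       # average over jitters

def predict_count_quantile(beta, x_new, lo=0.0, hi=4.0):
    """Back-transform to the bounded count scale; quantiles are equivariant
    to monotonic transformations, so this inverts the logit directly."""
    eta = beta[0] + beta[1] * x_new
    return lo + (hi - lo) / (1.0 + np.exp(-eta))

rng = np.random.default_rng(5)
habitat = rng.uniform(0, 1, 400)                        # illustrative covariate
fledglings = np.clip(rng.poisson(1.0 + 2.0 * habitat), 0, 3)
beta = logistic_qr_counts(habitat, fledglings, tau=0.5)
print(predict_count_quantile(beta, 0.9))
```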

  10. Determinants of Academic Attainment in the United States: A Quantile Regression Analysis of Test Scores

    ERIC Educational Resources Information Center

    Haile, Getinet Astatike; Nguyen, Anh Ngoc

    2008-01-01

We investigate the determinants of high school students' academic attainment in mathematics, reading and science in the United States, focusing particularly on possible differential impacts of ethnicity and family background across the distribution of test scores. Using data from the NELS2000 and employing quantile regression, we find two…

  11. Environmental influence on mussel (Mytilus edulis) growth - A quantile regression approach

    NASA Astrophysics Data System (ADS)

    Bergström, Per; Lindegarth, Mats

    2016-03-01

The need for methods for sustainable management and use of coastal ecosystems has increased in the last century. A key aspect of obtaining ecologically and economically sustainable aquaculture in threatened coastal areas is the requirement for geographic information on growth and potential production capacity. Growth varies over time and space and depends on a complex pattern of interactions between the bivalve and a diverse range of environmental factors (e.g. temperature, salinity, food availability). Understanding these processes and modelling the environmental control of bivalve growth has been central in aquaculture. In contrast to most conventional modelling techniques, quantile regression can handle cases where not all factors are measured, and it provides the possibility to estimate effects at different levels of the response distribution, therefore giving a more complete picture of the relationship between environmental factors and biological response. Observations of the relationships between environmental factors and growth of the bivalve Mytilus edulis revealed relationships that varied both among levels of growth rate and within the range of environmental variables along the Swedish west coast. The strongest pattern was found for water oxygen concentration, which had a negative effect on growth at all oxygen levels and growth levels. However, these patterns coincided with differences in growth among periods, and very little of the remaining variability within periods could be explained, indicating that interactive processes masked the importance of the individual variables. By using quantile regression and local regression (LOESS), this study was able to provide valuable information on environmental factors influencing the growth of M. edulis and important insights for the development of ecosystem-based management tools for aquaculture activities, their use in mitigation efforts, and successful management of human use of coastal areas.

  12. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR method is a robust alternative to the widely used local polynomial method, and it has been well studied for stationary time series. In this paper, we relax the stationarity restriction on the model and allow the regressors to be generated by a general Harris recurrent Markov process, which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate of the estimator in the nonstationary case is slower than that in the stationary case. Furthermore, a weighted-type local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894
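The composite objective behind CQR (one shared slope, a separate intercept per quantile level) can be illustrated with a deliberately crude sketch: for a fixed slope, the optimal intercept at level τ is simply the τ-quantile of the residuals, so a one-dimensional grid search over the slope suffices. The function names and the grid are illustrative assumptions, not the paper's estimator.

```python
def check_loss(u, tau):
    """Koenker check (pinball) loss."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def cqr_slope(x, y, taus, grid):
    """Composite quantile regression for y = a_k + b*x: one shared slope b,
    one intercept a_k per quantile level, found by grid search over b."""
    best_loss, best_b = float("inf"), None
    for b in grid:
        resid = sorted(yi - b * xi for xi, yi in zip(x, y))
        total = 0.0
        for tau in taus:
            # For fixed b, the loss-minimizing intercept is the tau-quantile of residuals
            a = resid[int(tau * (len(resid) - 1))]
            total += sum(check_loss(yi - a - b * xi, tau) for xi, yi in zip(x, y))
        if total < best_loss:
            best_loss, best_b = total, b
    return best_b
```

On data generated as y = 2x plus symmetric alternating noise, the recovered shared slope sits at (or next to) 2 on the grid, since the composite loss pools information across the quantile levels.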

  13. Understanding Child Stunting in India: A Comprehensive Analysis of Socio-Economic, Nutritional and Environmental Determinants Using Additive Quantile Regression

    PubMed Central

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A.

    2013-01-01

Background: Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. Objective: We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Design: Using cross-sectional data for children aged 0–24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. Results: At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Conclusions: Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role. PMID:24223839

  14. Understanding child stunting in India: a comprehensive analysis of socio-economic, nutritional and environmental determinants using additive quantile regression.

    PubMed

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A

    2013-01-01

    Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Using cross-sectional data for children aged 0-24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.

  15. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extended the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about regression parameters, but about noise parameters as well.

  16. Quantile regression and clustering analysis of standardized precipitation index in the Tarim River Basin, Xinjiang, China

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Xia, Jun; Zhang, Yongyong; Han, Jian; Wu, Xia

    2017-11-01

Because drought is a very common and widespread natural disaster, it has attracted a great deal of academic interest. Based on 12-month time-scale standardized precipitation indices (SPI12) calculated from precipitation data recorded between 1960 and 2015 at 22 weather stations in the Tarim River Basin (TRB), this study aims to identify trends in SPI and in drought duration, severity, and frequency at various quantiles, and to perform cluster analysis of drought events in the TRB. The results indicated that (1) both precipitation and temperature at most stations in the TRB exhibited significant positive trends during 1960-2015; (2) multiple scales of SPIs changed significantly around 1986; (3) based on quantile regression analysis of temporal drought changes, the positive SPI slopes indicated less severe and less frequent droughts at lower quantiles, but clear variation was detected in drought frequency; and (4) trends in the frequency of severe droughts likely differed significantly from trends in overall drought frequency.

  17. Identifying the Safety Factors over Traffic Signs in State Roads using a Panel Quantile Regression Approach.

    PubMed

    Šarić, Željko; Xu, Xuecai; Duan, Li; Babić, Darko

    2018-06-20

This study intended to investigate the interactions between accident rate and traffic signs on state roads in Croatia, accommodating the heterogeneity attributed to unobserved factors. Data on 130 state roads between 2012 and 2016 were collected from the Traffic Accident Database System maintained by the Ministry of the Interior of the Republic of Croatia. To address the heterogeneity, a panel quantile regression model was proposed, in which the quantile regression model offers a more complete view and a highly comprehensive analysis of the relationship between accident rate and traffic signs, while the panel data model accommodates the heterogeneity attributed to unobserved factors. Results revealed that (1) low visibility increased the rate of accidents involving material damage (MD) and death or injury (DI); (2) the number of mandatory signs and the number of warning signs were more likely to reduce the accident rate; and (3) average speed limit and the number of invalid traffic signs per km were associated with a high accident rate. To our knowledge, this is the first attempt to analyze the interactions between accident consequences and traffic signs by employing a panel quantile regression model. By involving visibility, the present study demonstrates that low visibility causes a relatively higher risk of MD and DI; it is also noteworthy that average speed limit corresponds positively with accident rate, that the number of mandatory signs and the number of warning signs are more likely to reduce the accident rate, and that the number of invalid traffic signs per km is significant for the accident rate, so regular maintenance should be performed to keep the roadway environment safer.

  18. Groundwater depth prediction in a shallow aquifer in north China by a quantile regression model

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Wei, Wan; Zhao, Yong; Qiao, Jiale

    2017-01-01

There is a close relationship between the groundwater level in a shallow aquifer and the surface ecological environment; hence, it is important to accurately simulate and predict the groundwater level in eco-environmental construction projects. The multiple linear regression (MLR) model is one of the most useful methods to predict groundwater level (depth); however, the values predicted by this model only reflect the mean distribution of the observations and cannot effectively fit extreme observations (outliers). The study reported here builds a prediction model of groundwater-depth dynamics in a shallow aquifer using the quantile regression (QR) method on the basis of observed groundwater depth and related factors. The proposed approach was applied to five sites in Tianjin city, north China, and the groundwater depth was calculated at different quantiles, from which the optimal quantile was screened out according to the box plot method and compared to the values predicted by the MLR model. The results showed that the related factors at the five sites did not follow the standard normal distribution and that there were outliers in the precipitation and last-month (initial state) groundwater-depth factors, so the basic assumptions of the MLR model could not be satisfied, thereby causing errors. Nevertheless, these conditions had no effect on the QR model, as it could more effectively describe the distribution of the original data and fit the outliers with higher precision.
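The outlier behaviour contrasted above reduces, in the intercept-only case, to the difference between the mean (least squares) and the median (the 0.5-quantile, least absolute deviations). A tiny illustration with made-up depth values:

```python
def mean(values):
    return sum(values) / len(values)

def median(values):
    """The 0.5-quantile: minimizer of the sum of absolute deviations."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

depths = [2.1, 2.3, 2.2, 2.4, 2.2, 9.8]  # one spurious outlier at 9.8
# mean(depths) is dragged toward 3.5 by the outlier; median(depths) stays at 2.25
```

The same contrast carries over to the regression setting: the check loss of QR generalizes the median's insensitivity to extreme observations, which is why the MLR assumptions matter for one model and not the other.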

  19. More green space is related to less antidepressant prescription rates in the Netherlands: A Bayesian geoadditive quantile regression approach.

    PubMed

    Helbich, Marco; Klein, Nadja; Roberts, Hannah; Hagedoorn, Paulien; Groenewegen, Peter P

    2018-06-20

Exposure to green space seems to be beneficial for self-reported mental health. In this study we used an objective health indicator, namely antidepressant prescription rates. Current studies rely exclusively upon mean regression models assuming linear associations. It is, however, plausible that the presence of green space is non-linearly related to different quantiles of antidepressant prescription rates. These restrictions may contribute to inconsistent findings. Our aims were (a) to assess antidepressant prescription rates in relation to green space, and (b) to analyze how the relationship varies non-linearly across different quantiles of antidepressant prescription rates. We used cross-sectional data for the year 2014 at the municipality level in the Netherlands. Ecological Bayesian geoadditive quantile regressions were fitted for the 15%, 50%, and 85% quantiles to estimate green space-prescription rate correlations, controlling for physical activity levels, socio-demographics, urbanicity, etc. The results suggested that green space was overall inversely and non-linearly associated with antidepressant prescription rates. More importantly, the associations differed across the quantiles, although the variation was modest. Significant non-linearities were apparent: the associations were slightly positive in the lower quantile and strongly negative in the upper one. Our findings imply that an increased availability of green space within a municipality may contribute to a reduction in the number of antidepressant prescriptions dispensed. Green space is thus a central health and community asset, although a minimum level of 28% needs to be established for health gains. The highest effectiveness occurred at a municipality surface percentage higher than 79%. This inverse dose-dependent relation has important implications for setting future community-level health and planning policies. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Bayesian quantitative precipitation forecasts in terms of quantiles

    NASA Astrophysics Data System (ADS)

    Bentzien, Sabrina; Friederichs, Petra

    2014-05-01

Ensemble prediction systems (EPS) for numerical weather prediction on the mesoscale are developed particularly to obtain probabilistic guidance for high-impact weather. An EPS issues not only a deterministic future state of the atmosphere but a sample of possible future states. Ensemble postprocessing then translates such a sample of forecasts into probabilistic measures. This study focuses on probabilistic quantitative precipitation forecasts in terms of quantiles. Quantiles are particularly suitable to describe precipitation at various locations, since no assumption is required on the distribution of precipitation. The focus is on prediction during high-impact events, related to the Volkswagen Stiftung-funded project WEX-MOP (Mesoscale Weather Extremes - Theory, Spatial Modeling and Prediction). Quantile forecasts are derived from the raw ensemble and via quantile regression. Neighborhood methods and time-lagging are effective tools to inexpensively increase the ensemble spread, which results in more reliable forecasts, especially for extreme precipitation events. Since an EPS provides a large number of potentially informative predictors, variable selection is required in order to obtain a stable statistical model. A Bayesian formulation of quantile regression allows for inference about the selection of predictive covariates through the use of appropriate prior distributions. Moreover, the implementation of an additional process layer for the regression parameters accounts for spatial variation of the parameters. Bayesian quantile regression and its spatially adaptive extension are illustrated for the German-focused mesoscale weather prediction ensemble COSMO-DE-EPS, which has run (pre)operationally since December 2010 at the German Meteorological Service (DWD). Objective out-of-sample verification uses the quantile score (QS), a weighted absolute error between quantile forecasts and observations. The QS is a proper scoring function and can be decomposed into
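The quantile score mentioned at the end is the pinball (check) loss averaged over forecast cases. A minimal sketch, with illustrative names:

```python
def quantile_score(observations, forecasts, tau):
    """Mean pinball loss for tau-quantile forecasts: a proper score that
    penalizes over-prediction with weight (1 - tau) and under-prediction with tau."""
    total = 0.0
    for o, q in zip(observations, forecasts):
        total += (o - q) * (tau - (1.0 if o < q else 0.0))
    return total / len(observations)
```

A perfect quantile forecast scores 0; over-forecasting by 1 mm at tau = 0.9 costs only 0.1, while under-forecasting by 1 mm costs 0.9, which is what makes high quantiles sensitive to missed extremes.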

  1. Forecasting peak asthma admissions in London: an application of quantile regression models.

    PubMed

    Soyiri, Ireneous N; Reidpath, Daniel D; Sarran, Christophe

    2013-07-01

Asthma is a chronic condition of great public health concern globally. The associated morbidity, mortality and healthcare utilisation place an enormous burden on healthcare infrastructure and services. This study demonstrates a multistage quantile regression approach to predicting excess demand for health care services in the form of daily asthma admissions in London, using retrospective data from the Hospital Episode Statistics, weather and air quality. Trivariate quantile regression models (QRM) of daily asthma admissions were fitted to a 14-day range of lags of environmental factors, accounting for seasonality in a hold-in sample of the data. Representative lags were pooled to form multivariate predictive models, selected through a systematic backward stepwise reduction approach. Models were cross-validated using a hold-out sample of the data, and their respective root mean square error measures, sensitivity, specificity and predictive values compared. Two of the predictive models were able to detect extreme numbers of daily asthma admissions at sensitivity levels of 76 % and 62 %, as well as specificities of 66 % and 76 %. Their positive predictive values were slightly higher for the hold-out sample (29 % and 28 %) than for the hold-in model development sample (16 % and 18 %). QRMs can be used in multiple stages to select suitable variables to forecast extreme asthma events. The associations between asthma and environmental factors, including temperature, ozone and carbon monoxide, can be exploited in predicting future events using QRMs.

  2. Forecasting peak asthma admissions in London: an application of quantile regression models

    NASA Astrophysics Data System (ADS)

    Soyiri, Ireneous N.; Reidpath, Daniel D.; Sarran, Christophe

    2013-07-01

Asthma is a chronic condition of great public health concern globally. The associated morbidity, mortality and healthcare utilisation place an enormous burden on healthcare infrastructure and services. This study demonstrates a multistage quantile regression approach to predicting excess demand for health care services in the form of daily asthma admissions in London, using retrospective data from the Hospital Episode Statistics, weather and air quality. Trivariate quantile regression models (QRM) of daily asthma admissions were fitted to a 14-day range of lags of environmental factors, accounting for seasonality in a hold-in sample of the data. Representative lags were pooled to form multivariate predictive models, selected through a systematic backward stepwise reduction approach. Models were cross-validated using a hold-out sample of the data, and their respective root mean square error measures, sensitivity, specificity and predictive values compared. Two of the predictive models were able to detect extreme numbers of daily asthma admissions at sensitivity levels of 76 % and 62 %, as well as specificities of 66 % and 76 %. Their positive predictive values were slightly higher for the hold-out sample (29 % and 28 %) than for the hold-in model development sample (16 % and 18 %). QRMs can be used in multiple stages to select suitable variables to forecast extreme asthma events. The associations between asthma and environmental factors, including temperature, ozone and carbon monoxide, can be exploited in predicting future events using QRMs.

  3. Customized Fetal Growth Charts for Parents' Characteristics, Race, and Parity by Quantile Regression Analysis: A Cross-sectional Multicenter Italian Study.

    PubMed

    Ghi, Tullio; Cariello, Luisa; Rizzo, Ludovica; Ferrazzi, Enrico; Periti, Enrico; Prefumo, Federico; Stampalija, Tamara; Viora, Elsa; Verrotti, Carla; Rizzo, Giuseppe

    2016-01-01

    The purpose of this study was to construct fetal biometric charts between 16 and 40 weeks' gestation that were customized for parental characteristics, race, and parity, using quantile regression analysis. In a multicenter cross-sectional study, 8070 sonographic examinations from low-risk pregnancies between 16 and 40 weeks' gestation were analyzed. The fetal measurements obtained were biparietal diameter, head circumference, abdominal circumference, and femur diaphysis length. Quantile regression was used to examine the impact of parental height and weight, parity, and race across biometric percentiles for the fetal measurements considered. Paternal and maternal height were significant covariates for all of the measurements considered (P < .05). Maternal weight significantly influenced head circumference, abdominal circumference, and femur diaphysis length. Parity was significantly associated with biparietal diameter and head circumference. Central African race was associated with head circumference and femur diaphysis length, whereas North African race was only associated with femur diaphysis length. In this study we constructed customized biometric growth charts using quantile regression in a large cohort of low-risk pregnancies. These charts offer the advantage of defining individualized normal ranges of fetal biometric parameters at each specific percentile corrected for parental height and weight, parity, and race. This study supports the importance of including these variables in routine sonographic screening for fetal growth abnormalities.

  4. Using Quantile and Asymmetric Least Squares Regression for Optimal Risk Adjustment.

    PubMed

    Lorenz, Normann

    2017-06-01

In this paper, we analyze optimal risk adjustment for direct risk selection (DRS). Integrating insurers' activities for risk selection into a discrete choice model of individuals' health insurance choice shows that DRS has the structure of a contest. For the contest success function (csf) used in most of the contest literature (the Tullock-csf), optimal transfers for a risk adjustment scheme have to be determined by means of a restricted quantile regression, irrespective of whether insurers are primarily engaged in positive DRS (attracting low risks) or negative DRS (repelling high risks). This is at odds with the common practice of determining transfers by means of a least squares regression. However, this common practice can be rationalized for a new csf, but only if positive and negative DRSs are equally important; if they are not, optimal transfers have to be calculated by means of a restricted asymmetric least squares regression. Using data from German and Swiss health insurers, we find considerable differences between the three types of regressions. Optimal transfers therefore critically depend on which csf represents insurers' incentives for DRS and, if it is not the Tullock-csf, whether insurers are primarily engaged in positive or negative DRS. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Accelerating Approximate Bayesian Computation with Quantile Regression: application to cosmological redshift distributions

    NASA Astrophysics Data System (ADS)

    Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.

    2018-02-01

Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with quantile regression. In this method, we create a model of the quantiles of the distance measure as a function of the input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as basic ABC. It uses, however, only 20% of the number of simulations required by basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
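The pruning idea can be caricatured in a few lines: bin the prior draws, estimate a low quantile of the simulation-observation distance within each bin, and discard bins whose low quantile already exceeds the ABC acceptance threshold. Everything here (the binning, the names, the toy distance) is an illustrative assumption, not the authors' implementation:

```python
def qabc_prune(simulate, prior_draws, accept_eps, tau=0.1, n_bins=10):
    """Discard prior-space bins whose tau-quantile of the ABC distance
    already exceeds the acceptance threshold (qABC-style pruning sketch)."""
    lo, hi = min(prior_draws), max(prior_draws)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for theta in prior_draws:
        k = min(int((theta - lo) / width), n_bins - 1)
        bins.setdefault(k, []).append(simulate(theta))  # distance for this draw
    kept = set()
    for k, dists in bins.items():
        dists.sort()
        if dists[int(tau * (len(dists) - 1))] <= accept_eps:
            kept.add(k)  # a low distance quantile here: worth simulating further
    return kept, lo, width
```

With a toy distance |θ − 0.5| and threshold 0.1, only the bins near 0.5 survive, so subsequent simulations are spent where acceptance is actually possible.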

  6. Bumps in river profiles: uncertainty assessment and smoothing using quantile regression techniques

    NASA Astrophysics Data System (ADS)

    Schwanghart, Wolfgang; Scherler, Dirk

    2017-12-01

    The analysis of longitudinal river profiles is an important tool for studying landscape evolution. However, characterizing river profiles based on digital elevation models (DEMs) suffers from errors and artifacts that particularly prevail along valley bottoms. The aim of this study is to characterize uncertainties that arise from the analysis of river profiles derived from different, near-globally available DEMs. We devised new algorithms - quantile carving and the CRS algorithm - that rely on quantile regression to enable hydrological correction and the uncertainty quantification of river profiles. We find that globally available DEMs commonly overestimate river elevations in steep topography. The distributions of elevation errors become increasingly wider and right skewed if adjacent hillslope gradients are steep. Our analysis indicates that the AW3D DEM has the highest precision and lowest bias for the analysis of river profiles in mountainous topography. The new 12 m resolution TanDEM-X DEM has a very low precision, most likely due to the combined effect of steep valley walls and the presence of water surfaces in valley bottoms. Compared to the conventional approaches of carving and filling, we find that our new approach is able to reduce the elevation bias and errors in longitudinal river profiles.

  7. Quantile regression analysis of body mass and wages.

    PubMed

    Johar, Meliyanni; Katayama, Hajime

    2012-05-01

Using the National Longitudinal Survey of Youth 1979, we explore the relationship between body mass and wages. We use quantile regression to provide a broad description of the relationship across the wage distribution. We also allow the relationship to vary by the degree of social skills involved in different jobs. We find that for female workers body mass and wages are negatively correlated at all points in their wage distribution. The strength of the relationship is larger at higher wage levels. For male workers, the relationship is relatively constant across the wage distribution but heterogeneous across ethnic groups. When controlling for the endogeneity of body mass, we find that additional body mass has a negative causal impact on the wages of white females earning more than the median wage and of white males earning around the median wage. Among these workers, the wage penalties are larger for those employed in jobs that require extensive social skills. These findings may suggest that labor markets reward white workers for good physical shape differently, depending on the level of wages and the type of job a worker has. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Examining Predictive Validity of Oral Reading Fluency Slope in Upper Elementary Grades Using Quantile Regression.

    PubMed

    Cho, Eunsoo; Capin, Philip; Roberts, Greg; Vaughn, Sharon

    2017-07-01

Within multitiered instructional delivery models, progress monitoring is a key mechanism for determining whether a child demonstrates an adequate response to instruction. One measure commonly used to monitor the reading progress of students is oral reading fluency (ORF). This study examined the extent to which ORF slope predicts reading comprehension outcomes for fifth-grade struggling readers (n = 102) participating in an intensive reading intervention. Quantile regression models showed that ORF slope significantly predicted performance on a sentence-level fluency and comprehension assessment, regardless of the students' reading skills, controlling for initial ORF performance. However, ORF slope was differentially predictive of a passage-level comprehension assessment based on students' reading skills when controlling for initial ORF status. Results showed that ORF explained unique variance for struggling readers whose posttest performance was at the upper quantiles at the end of the reading intervention, but slope was not a significant predictor of passage-level comprehension for students whose reading problems were the most difficult to remediate.

  9. Using quantile regression to examine health care expenditures during the Great Recession.

    PubMed

    Chen, Jie; Vargas-Bustamante, Arturo; Mortensen, Karoline; Thomas, Stephen B

    2014-04-01

    To examine the association between the Great Recession of 2007-2009 and health care expenditures along the health care spending distribution, with a focus on racial/ethnic disparities. Secondary data analyses of the Medical Expenditure Panel Survey (2005-2006 and 2008-2009). Quantile multivariate regressions are employed to measure the different associations between the economic recession of 2007-2009 and health care spending. Race/ethnicity and interaction terms between race/ethnicity and a recession indicator are controlled to examine whether minorities encountered disproportionately lower health spending during the economic recession. The Great Recession was significantly associated with reductions in health care expenditures at the 10th-50th percentiles of the distribution, but not at the 75th-90th percentiles. Racial and ethnic disparities were more substantial at the lower end of the health expenditure distribution; however, on average the reduction in expenditures was similar for all race/ethnic groups. The Great Recession was also positively associated with spending on emergency department visits. This study shows that the relationship between the Great Recession and health care spending varied along the health expenditure distribution. More variability was observed in the lower end of the health spending distribution compared to the higher end. © Health Research and Educational Trust.

  10. Using Quantile Regression to Examine Health Care Expenditures during the Great Recession

    PubMed Central

    Chen, Jie; Vargas-Bustamante, Arturo; Mortensen, Karoline; Thomas, Stephen B

    2014-01-01

    Objective To examine the association between the Great Recession of 2007–2009 and health care expenditures along the health care spending distribution, with a focus on racial/ethnic disparities. Data Sources/Study Setting Secondary data analyses of the Medical Expenditure Panel Survey (2005–2006 and 2008–2009). Study Design Quantile multivariate regressions are employed to measure the different associations between the economic recession of 2007–2009 and health care spending. Race/ethnicity and interaction terms between race/ethnicity and a recession indicator are controlled to examine whether minorities encountered disproportionately lower health spending during the economic recession. Principal Findings The Great Recession was significantly associated with reductions in health care expenditures at the 10th–50th percentiles of the distribution, but not at the 75th–90th percentiles. Racial and ethnic disparities were more substantial at the lower end of the health expenditure distribution; however, on average the reduction in expenditures was similar for all race/ethnic groups. The Great Recession was also positively associated with spending on emergency department visits. Conclusion This study shows that the relationship between the Great Recession and health care spending varied along the health expenditure distribution. More variability was observed in the lower end of the health spending distribution compared to the higher end. PMID:24134797
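
    The quantile regressions used in the two records above rest on the check (pinball) loss. As a minimal illustrative sketch (not the study's code; the sample values are arbitrary), minimizing this loss over a constant recovers a sample quantile:

```python
# Hedged sketch: the check (pinball) loss behind quantile regression.
# Minimizing it over a constant recovers the tau-th sample quantile,
# which is why fitted models can describe specific points of a distribution.

def pinball(residual, tau):
    """tau * r for r >= 0, (tau - 1) * r otherwise (always non-negative)."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

def best_constant(y, tau):
    """Pinball-minimizing constant, searched over the observed values."""
    return min(y, key=lambda c: sum(pinball(yi - c, tau) for yi in y))

spending = [2, 4, 5, 7, 9, 11, 13, 20, 40]   # arbitrary right-skewed sample
print(best_constant(spending, 0.5))  # sample median -> 9
print(best_constant(spending, 0.8))  # upper part of the distribution -> 20
```

    Quantile regression replaces the constant with a linear predictor and minimizes the same loss, which is what allows distinct recession effects to be estimated at the 10th through 90th percentiles of spending.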

  11. Modeling soil organic carbon with Quantile Regression: Dissecting predictors' effects on carbon stocks

    NASA Astrophysics Data System (ADS)

    Lombardo, Luigi; Saia, Sergio; Schillaci, Calogero; Mai, P. Martin; Huser, Raphaël

    2018-05-01

    Soil Organic Carbon (SOC) estimation is crucial to manage both natural and anthropic ecosystems and has recently been put under the magnifying glass after the 2016 Paris Agreement due to its relationship with greenhouse gases. Statistical applications have dominated SOC stock mapping at regional scale so far. However, the community has hardly ever attempted to implement Quantile Regression (QR) to spatially predict the SOC distribution. In this contribution, we test QR to estimate SOC stock (0-30 cm depth) in the agricultural areas of a highly variable semi-arid region (Sicily, Italy, around 25,000 km²) by using topographic and remotely sensed predictors. We also compare the results with those from available SOC stock measurements. The QR models produced robust performances and allowed us to recognize dominant effects among the predictors with respect to the considered quantile. This information, currently lacking, suggests that QR can discern predictor influences on SOC stock within specific sub-domains of each predictor. In this work, the predictive map generated at the median shows lower errors than the Joint Research Centre and International Soil Reference and Information Centre benchmarks. The results suggest the use of QR as a comprehensive and effective method for mapping SOC from legacy data in agro-ecosystems. The R code scripted in this study for QR is included.

  12. A Quantile Regression Approach to Understanding the Relations among Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students

    ERIC Educational Resources Information Center

    Tighe, Elizabeth L.; Schatschneider, Christopher

    2016-01-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological…

  13. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546

  14. Estimating geographic variation on allometric growth and body condition of Blue Suckers with quantile regression

    USGS Publications Warehouse

    Cade, B.S.; Terrell, J.W.; Neely, B.C.

    2011-01-01

    Increasing our understanding of how environmental factors affect fish body condition and improving its utility as a metric of aquatic system health require reliable estimates of spatial variation in condition (weight at length). We used three statistical approaches that varied in how they accounted for heterogeneity in allometric growth to estimate differences in body condition of blue suckers Cycleptus elongatus across 19 large-river locations in the central USA. Quantile regression of an expanded allometric growth model provided the most comprehensive estimates, including variation in exponents within and among locations (range = 2.88–4.24). Blue suckers from more-southerly locations had the largest exponents. Mixed-effects mean regression of a similar expanded allometric growth model allowed exponents to vary among locations (range = 3.03–3.60). Mean relative weights compared across selected intervals of total length (TL = 510–594 and 594–692 mm) in a multiplicative model involved the implicit assumption that allometric exponents within and among locations were similar to the exponent (3.46) for the standard weight equation. Proportionate differences in the quantiles of weight at length for adult blue suckers (TL = 510, 594, 644, and 692 mm) compared with their average across locations ranged from 1.08 to 1.30 for southern locations (Texas, Mississippi) and from 0.84 to 1.00 for northern locations (Montana, North Dakota); proportionate differences for mean weight ranged from 1.13 to 1.17 and from 0.87 to 0.95, respectively, and those for mean relative weight ranged from 1.10 to 1.18 and from 0.86 to 0.98, respectively. Weights for fish at longer lengths varied by 600–700 g within a location and by as much as 2,000 g among southern and northern locations. Estimates for the Wabash River, Indiana (0.96–1.07 times the average; greatest increases for lower weights at shorter TLs), and for the Missouri River from Blair, Nebraska, to Sioux City, Iowa (0.90

  15. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575

  16. Growth curves of preschool children in the northeast of Iran: a population based study using quantile regression approach.

    PubMed

    Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad

    2013-01-14

    Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during the important early months of life. The objective of this study is to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology, and to compare the results with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. A total of 70,737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004 for 20 days from those attending community clinics for routine health checks as a part of a national survey. Anthropometric measurements were done by trained health staff using WHO methodology. A nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used to estimate the curves and normal values. The weight-for-age growth curves for boys and girls aged 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to those obtained by the semi-parametric LMS method on the same data. Across all age groups from 0 to 5 years, the median weights of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. The differences between growth patterns of children living in the northeast of Iran and international references necessitate the use of local and regional growth charts. International normal values may not properly identify populations at risk for growth problems among Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for estimating reference curves and normal values.

  17. Growth Curves of Preschool Children in the Northeast of Iran: A Population Based Study Using Quantile Regression Approach

    PubMed Central

    Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad

    2013-01-01

    Introduction: Growth charts are widely used to assess children’s growth status and can provide a trajectory of growth during the important early months of life. The objective of this study is to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology, and to compare the results with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. Methods: A total of 70,737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004 for 20 days from those attending community clinics for routine health checks as a part of a national survey. Anthropometric measurements were done by trained health staff using WHO methodology. A nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used to estimate the curves and normal values. Results: The weight-for-age growth curves for boys and girls aged 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to those obtained by the semi-parametric LMS method on the same data. Across all age groups from 0 to 5 years, the median weights of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. Conclusion: The differences between growth patterns of children living in the northeast of Iran and international references necessitate the use of local and regional growth charts. International normal values may not properly identify populations at risk for growth problems among Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for estimating reference curves and normal values. PMID:23618470
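
    The local constant kernel estimator described in this record can be sketched as a kernel-weighted pinball minimization; the Gaussian kernel, bandwidth, and toy data below are illustrative assumptions, not the study's implementation:

```python
import math

# Hedged sketch of a local-constant kernel conditional quantile:
# observations near x0 receive larger weights, and the weighted pinball
# loss is minimized over candidate response values.

def pinball(r, tau):
    return tau * r if r >= 0 else (tau - 1) * r

def local_quantile(x, y, x0, tau, h):
    """Kernel-weighted tau-quantile of y near x0 (Gaussian kernel, bandwidth h)."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    def loss(c):
        return sum(wi * pinball(yi - c, tau) for wi, yi in zip(w, y))
    return min(y, key=loss)

# Toy "age vs weight" data where the response doubles the covariate,
# so the conditional median at x0 = 4 should be 8.
ages = [0, 1, 2, 3, 4, 5, 6, 7, 8]
weights = [0, 2, 4, 6, 8, 10, 12, 14, 16]
print(local_quantile(ages, weights, x0=4, tau=0.5, h=1.0))  # -> 8
```

    Evaluating this at a grid of ages for several tau values traces smooth percentile curves of the kind used in growth charts.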

  18. [Spatial heterogeneity in body condition of small yellow croaker in Yellow Sea and East China Sea based on mixed-effects model and quantile regression analysis].

    PubMed

    Liu, Zun-Lei; Yuan, Xing-Wei; Yan, Li-Ping; Yang, Lin-Lin; Cheng, Jia-Hua

    2013-09-01

    By using the 2008-2010 investigation data about the body condition of small yellow croaker in the offshore waters of southern Yellow Sea (SYS), open waters of northern East China Sea (NECS), and offshore waters of middle East China Sea (MECS), this paper analyzed the spatial heterogeneity of body length-body mass of juvenile and adult small yellow croakers by the statistical approaches of mean regression model and quantile regression model. The results showed that the residual standard errors from the analysis of covariance (ANCOVA) and the linear mixed-effects model were similar, and those from the simple linear regression were the highest. For the juvenile small yellow croakers, their mean body mass in SYS and NECS estimated by the mixed-effects mean regression model was higher than the overall average mass across the three regions, while the mean body mass in MECS was below the overall average. For the adult small yellow croakers, their mean body mass in NECS was higher than the overall average, while the mean body mass in SYS and MECS was below the overall average. The results from quantile regression indicated the substantial differences in the allometric relationships of juvenile small yellow croakers between SYS, NECS, and MECS, with the estimated mean exponent of the allometric relationship in SYS being 2.85, and the interquartile range being from 2.63 to 2.96, which indicated the heterogeneity of body form. The results from ANCOVA showed that the allometric body length-body mass relationships were significantly different between the 25th and 75th percentile exponent values (F=6.38, df=1737, P<0.01) and the 25th percentile and median exponent values (F=2.35, df=1737, P=0.039). The relationship was marginally different between the median and 75th percentile exponent values (F=2.21, df=1737, P=0.051). The estimated body length-body mass exponent of adult small yellow croakers in SYS was 3.01 (10th and 95th percentiles = 2.77 and 3.1, respectively). The

  19. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Treesearch

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
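
    One way to sketch the idea in this record: logistic quantile regression fits linear QR to a logit-transformed response, so back-transformed quantiles always respect the bounds. The [0, 5] fledgling range and the half-unit boundary offsets below are illustrative assumptions, not the authors' values:

```python
import math

# Hedged sketch of the transform behind logistic quantile regression for a
# response bounded in [0, 5]; the 0.5 boundary offsets are assumptions made
# here so that observed 0s and 5s remain strictly interior.

LO, HI = -0.5, 5.5

def to_logit(y):
    """Map the bounded response onto the real line for linear QR."""
    return math.log((y - LO) / (HI - y))

def from_logit(z):
    """Back-transform a fitted quantile; the result always lies in (LO, HI)."""
    return LO + (HI - LO) / (1.0 + math.exp(-z))

# Round trip and boundedness of the back-transform:
print(round(from_logit(to_logit(3.0)), 6))             # -> 3.0
print(LO < from_logit(-10.0) < from_logit(10.0) < HI)  # -> True
```

    Because quantiles are equivariant under monotone transforms, fitting on the logit scale and back-transforming yields valid quantile estimates on the original bounded count scale.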

  20. The N-shaped environmental Kuznets curve: an empirical evaluation using a panel quantile regression approach.

    PubMed

    Allard, Alexandra; Takman, Johanna; Uddin, Gazi Salah; Ahmed, Ali

    2018-02-01

    We evaluate the N-shaped environmental Kuznets curve (EKC) using panel quantile regression analysis. We investigate the relationship between CO2 emissions and GDP per capita for 74 countries over the period 1994-2012. We include additional explanatory variables, such as renewable energy consumption, technological development, trade, and institutional quality. We find evidence for the N-shaped EKC in all income groups, except for the upper-middle-income countries. Heterogeneous characteristics are, however, observed over the N-shaped EKC. Finally, we find a negative relationship between renewable energy consumption and CO2 emissions, which highlights the importance of promoting greener energy in order to combat global warming.

  1. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
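
    The screening step can be illustrated with the marginal quantile correlation that quantile partial correlation extends. The definition coded below (and the toy data) is a standard formulation offered as a sketch, not the authors' implementation:

```python
# Hedged sketch of the (marginal) quantile correlation underlying
# quantile-based screening; tau and the data are illustrative choices.

def sample_quantile(z, tau):
    """Pinball-loss minimizer over observed values."""
    def loss(c):
        return sum(tau * (zi - c) if zi >= c else (tau - 1) * (zi - c) for zi in z)
    return min(z, key=loss)

def quantile_correlation(y, x, tau):
    """cov(psi_tau(y - Q_tau), x) / sqrt(tau * (1 - tau) * var(x))."""
    n = len(y)
    q = sample_quantile(y, tau)
    psi = [tau - (1 if yi < q else 0) for yi in y]
    mx, mp = sum(x) / n, sum(psi) / n
    cov = sum((p - mp) * (xi - mx) for p, xi in zip(psi, x)) / n
    varx = sum((xi - mx) ** 2 for xi in x) / n
    return cov / ((tau * (1 - tau) * varx) ** 0.5)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1, 2, 3, 4, 5, 6, 7, 8]   # y increases with x -> strong positive value
print(round(quantile_correlation(y, x, 0.5), 3))  # -> 0.818
```

    Screening ranks predictors by the magnitude of this quantity at the tau of interest; the paper's quantile partial correlation additionally adjusts each predictor for the remaining ones before ranking.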

  2. Use of Quantile Regression to Determine the Impact on Total Health Care Costs of Surgical Site Infections Following Common Ambulatory Procedures

    PubMed Central

    Olsen, Margaret A.; Tian, Fang; Wallace, Anna E.; Nickel, Katelin B.; Warren, David K.; Fraser, Victoria J.; Selvam, Nandini; Hamilton, Barton H.

    2017-01-01

    Objective To determine the impact of surgical site infections (SSIs) on healthcare costs following common ambulatory surgical procedures throughout the cost distribution. Background Data on costs of SSIs following ambulatory surgery are sparse, particularly variation beyond just mean costs. Methods We performed a retrospective cohort study of persons undergoing cholecystectomy, breast-conserving surgery (BCS), anterior cruciate ligament reconstruction (ACL), and hernia repair from 12/31/2004–12/31/2010 using commercial insurer claims data. SSIs within 90 days post-procedure were identified; infections during a hospitalization or requiring surgery were considered serious. We used quantile regression, controlling for patient, operative, and postoperative factors to examine the impact of SSIs on 180-day healthcare costs throughout the cost distribution. Results The incidence of serious and non-serious SSIs were 0.8% and 0.2% after 21,062 ACL, 0.5% and 0.3% after 57,750 cholecystectomy, 0.6% and 0.5% after 60,681 hernia, and 0.8% and 0.8% after 42,489 BCS procedures. Serious SSIs were associated with significantly higher costs than non-serious SSIs for all 4 procedures throughout the cost distribution. The attributable cost of serious SSIs increased for both cholecystectomy and hernia repair as the quantile of total costs increased ($38,410 for cholecystectomy with serious SSI vs. no SSI at the 70th percentile of costs, up to $89,371 at the 90th percentile). Conclusions SSIs, particularly serious infections resulting in hospitalization or surgical treatment, were associated with significantly increased healthcare costs after 4 common surgical procedures. Quantile regression illustrated the differential effect of serious SSIs on healthcare costs at the upper end of the cost distribution. PMID:28059961

  3. Use of Quantile Regression to Determine the Impact on Total Health Care Costs of Surgical Site Infections Following Common Ambulatory Procedures.

    PubMed

    Olsen, Margaret A; Tian, Fang; Wallace, Anna E; Nickel, Katelin B; Warren, David K; Fraser, Victoria J; Selvam, Nandini; Hamilton, Barton H

    2017-02-01

    To determine the impact of surgical site infections (SSIs) on health care costs following common ambulatory surgical procedures throughout the cost distribution. Data on costs of SSIs following ambulatory surgery are sparse, particularly variation beyond just mean costs. We performed a retrospective cohort study of persons undergoing cholecystectomy, breast-conserving surgery, anterior cruciate ligament reconstruction, and hernia repair from December 31, 2004 to December 31, 2010 using commercial insurer claims data. SSIs within 90 days post-procedure were identified; infections during a hospitalization or requiring surgery were considered serious. We used quantile regression, controlling for patient, operative, and postoperative factors to examine the impact of SSIs on 180-day health care costs throughout the cost distribution. The incidence of serious and nonserious SSIs was 0.8% and 0.2%, respectively, after 21,062 anterior cruciate ligament reconstruction, 0.5% and 0.3% after 57,750 cholecystectomy, 0.6% and 0.5% after 60,681 hernia, and 0.8% and 0.8% after 42,489 breast-conserving surgery procedures. Serious SSIs were associated with significantly higher costs than nonserious SSIs for all 4 procedures throughout the cost distribution. The attributable cost of serious SSIs increased for both cholecystectomy and hernia repair as the quantile of total costs increased ($38,410 for cholecystectomy with serious SSI vs no SSI at the 70th percentile of costs, up to $89,371 at the 90th percentile). SSIs, particularly serious infections resulting in hospitalization or surgical treatment, were associated with significantly increased health care costs after 4 common surgical procedures. Quantile regression illustrated the differential effect of serious SSIs on health care costs at the upper end of the cost distribution.

  4. Gender differences in French GPs' activity: the contribution of quantile regressions.

    PubMed

    Dumontet, Magali; Franc, Carine

    2015-05-01

    In any fee-for-service system, doctors may be encouraged to increase the number of services (private activity) they provide to receive a higher income. Studying private activity determinants helps to predict doctors' provision of care. In the context of strong feminization and heterogeneity in general practitioners' (GP) behavior, we first aim to measure the effects of the determinants of private activity. Second, we study the evolution of these effects along the private activity distribution. Third, we examine the differences between male and female GPs. From an exhaustive database of French GPs working in private practice in 2008, we performed an ordinary least squares (OLS) regression and quantile regressions (QR) on the GPs' private activity. Among other determinants, we examined the trade-offs within the GPs' household considering his/her marital status, spousal income, and children. While the OLS results showed that female GPs had less private activity than male GPs (-13%), the QR results emphasized a private activity gender gap that increased significantly in the upper tail of the distribution. We also find gender differences in the private activity determinants, including family structure, practice characteristics, and case-mix variables. For instance, having a youngest child under 12 years old had a positive effect on the level of private activity for male GPs and a negative effect for female GPs. The results allow us to understand to what extent the supply of care differs between male and female GPs. In the context of strong feminization, this is essential to consider for organizing and forecasting the GPs' supply of care.

  5. Flood quantile estimation at ungauged sites by Bayesian networks

    NASA Astrophysics Data System (ADS)

    Mediero, L.; Santillán, D.; Garrote, L.

    2012-04-01

    Estimating flood quantiles at a site for which no observed measurements are available is essential for water resources planning and management. Ungauged sites have no observations of flood magnitudes, but some site and basin characteristics are known. The most common technique used is multiple regression analysis, which relates physical and climatic basin characteristics to flood quantiles. Regression equations are fitted from flood frequency data and basin characteristics at gauged sites. Regression equations are a rigid technique that assumes linear relationships between variables and cannot take measurement errors into account. In addition, the prediction intervals are estimated in a very simplistic way from the variance of the residuals in the estimated model. Bayesian networks are a probabilistic computational structure taken from the field of Artificial Intelligence, which has been widely and successfully applied to many scientific fields like medicine and informatics, but application to the field of hydrology is recent. Bayesian networks infer the joint probability distribution of several related variables from observations through nodes, which represent random variables, and links, which represent causal dependencies between them. A Bayesian network is more flexible than regression equations, as it captures non-linear relationships between variables. In addition, the probabilistic nature of Bayesian networks allows the different sources of estimation uncertainty to be taken into account, as they give a probability distribution as a result. A homogeneous region in the Tagus Basin was selected as a case study. A regression equation was fitted taking the basin area, the annual maximum 24-hour rainfall for a given recurrence interval, and the mean height as explanatory variables. Flood quantiles at ungauged sites were estimated by Bayesian networks. Bayesian networks need to be learnt from a sufficiently large data set. As observational data are reduced, a

  6. Structured Additive Quantile Regression for Assessing the Determinants of Childhood Anemia in Rwanda.

    PubMed

    Habyarimana, Faustin; Zewotir, Temesgen; Ramroop, Shaun

    2017-06-17

    Childhood anemia is among the most significant health problems faced by public health departments in developing countries. This study aims at assessing the determinants and possible spatial effects associated with childhood anemia in Rwanda. The 2014/2015 Rwanda Demographic and Health Survey (RDHS) data was used. The analysis was done using the structured spatial additive quantile regression model. The findings of this study revealed that the child's age; the duration of breastfeeding; the gender of the child; the nutritional status of the child (whether underweight and/or wasting); whether the child had a fever or a cough in the two weeks prior to the survey; whether the child received vitamin A supplementation in the six weeks before the survey; the household wealth index; the literacy of the mother; the mother's anemia status; and the mother's age at birth are all significant factors associated with childhood anemia in Rwanda. Furthermore, significant structured spatial location effects on childhood anemia were found.

  7. Modeling the human development index and the percentage of poor people using quantile smoothing splines

    NASA Astrophysics Data System (ADS)

    Mulyani, Sri; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Mean regression is a statistical method that explains the relationship between a response variable and a predictor variable based on the central tendency (mean) of the response variable. Parameter estimation in mean regression (with Ordinary Least Squares, OLS) is problematic if we apply it to data that are asymmetric, fat-tailed, or contain outliers. Hence, an alternative method is needed for such data, for example the quantile regression method. Quantile regression is robust to outliers. This model can explain the relationship between the response variable and the predictor variable not only at the central tendency of the data (the median) but also at various quantiles, in order to obtain complete information about that relationship. In this study, a quantile regression is developed with a nonparametric approach, namely smoothing splines. A nonparametric approach is used when a prespecified model is difficult to determine and the relation between the two variables follows an unknown function. We apply the proposed method to poverty data. Here, we estimate the Percentage of Poor People as the response variable using the Human Development Index (HDI) as the predictor variable.

  8. Estimation of peak discharge quantiles for selected annual exceedance probabilities in northeastern Illinois

    USGS Publications Warehouse

    Over, Thomas M.; Saito, Riki J.; Veilleux, Andrea G.; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey L.

    2016-06-28

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions.The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, regional skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. 
The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at ungaged

  9. Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach

    NASA Astrophysics Data System (ADS)

    Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic

    2015-04-01

    Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are commonly used for calibrating hydrological models, managing water quality, and classifying catchments, among other purposes. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions estimate each quantile separately from catchment attributes (climatic or physical features). The advantage of this category of methods is that it is informative about the processes and is non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemeš, 1986) to a set of 600 French catchments. Half of the catchments are treated as gauged and used to calibrate the regression and compute its residuals. The QS approach is a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters, and finally exploits the spatial correlation of the residuals. The innovation is the use of parameter continuity across quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24

  10. A quantile count model of water depth constraints on Cape Sable seaside sparrows

    USGS Publications Warehouse

    Cade, B.S.; Dong, Q.

    2008-01-01

1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous-year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that the negative effect of increasing water depth in breeding habitat on sparrow numbers depended on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. The greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous-year counts were 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous-year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous-year counts of 1, conditions where the negative effect of water depth was readily apparent and fitted better by the quantile count model.
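The jittering device that underlies quantile count models (Machado & Santos Silva 2005) can be sketched with made-up data; the sample size, coefficients, and decline rate below are invented, and the quantile fit is written out as an exact linear program rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Hypothetical data in the spirit of the study: bird counts that decline with
# water depth (cm); all numbers are illustrative only.
n = 400
depth = rng.uniform(0, 30, n)
y = rng.poisson(np.exp(1.0 - 0.05 * depth))

# Jittering: adding U(0,1) noise gives the discrete counts continuous
# conditional quantiles that a linear quantile regression can target.
z = y + rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), depth])

def quantreg(X, z, tau):
    """Exact linear quantile regression as a linear program:
    minimize tau*sum(u) + (1-tau)*sum(v)  s.t.  X@b + u - v = z,  u, v >= 0."""
    n, p = X.shape
    c = np.r_[np.zeros(p), np.full(n, tau), np.full(n, 1.0 - tau)]
    A = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A, b_eq=z, bounds=bounds, method="highs")
    return res.x[:p]

b0, b1 = quantreg(X, z, 0.90)   # upper 10th percentile of (jittered) counts
# b1 < 0: deeper water lowers the upper quantile of counts.
```

The fitted quantile of the continuous variable z can then be mapped back to a count quantile (e.g. by taking the ceiling of the z-quantile and subtracting one), which is the back-transform step of the jittering approach.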

  11. An application of quantile random forests for predictive mapping of forest attributes

    Treesearch

    E.A. Freeman; G.G. Moisen

    2015-01-01

    Increasingly, random forest models are used in predictive mapping of forest attributes. Traditional random forests output the mean prediction from the random trees. Quantile regression forests (QRF) is an extension of random forests developed by Nicolai Meinshausen that provides non-parametric estimates of the median predicted value as well as prediction quantiles. It...
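The core QRF idea, keeping the whole distribution of training responses in each leaf rather than only the leaf mean, can be sketched with a deliberately minimal "forest" of bootstrap stumps. Meinshausen's QRF uses full regression trees; everything below (data, forest size, split rule) is a toy illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy heteroscedastic data: the 0.9 quantile behaves differently from the mean.
n = 1000
x = rng.uniform(0, 1, n)
y = 2 * x + rng.normal(0, 0.1 + 0.5 * x, n)

# A minimal "forest": B bootstrap samples, each with one random split (stump).
B = 200
forest = []
for _ in range(B):
    idx = rng.integers(0, n, n)        # bootstrap resample
    t = rng.uniform(0.1, 0.9)          # random split threshold
    forest.append((t, x[idx], y[idx]))

def predict_quantile(x0, q):
    """Pool the training responses that fall in x0's leaf in every tree,
    then read off a quantile of the pooled distribution (the QRF idea)."""
    pooled = [yb[(xb <= t) == (x0 <= t)] for t, xb, yb in forest]
    return np.quantile(np.concatenate(pooled), q)

q50 = predict_quantile(0.8, 0.5)   # conditional median estimate
q90 = predict_quantile(0.8, 0.9)   # upper prediction quantile
```

Replacing the stumps with deep trees recovers the usual QRF; the point of the sketch is that the same fitted forest yields a median map and an upper-quantile map at no extra training cost.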

  12. Socio-demographic, clinical characteristics and utilization of mental health care services associated with SF-6D utility scores in patients with mental disorders: contributions of the quantile regression.

    PubMed

    Prigent, Amélie; Kamendje-Tchokobou, Blaise; Chevreul, Karine

    2017-11-01

    Health-related quality of life (HRQoL) is a widely used concept in the assessment of health care. Some generic HRQoL instruments, based on specific algorithms, can generate utility scores which reflect the preferences of the general population for the different health states described by the instrument. This study aimed to investigate the relationships between utility scores and potentially associated factors in patients with mental disorders followed in inpatient and/or outpatient care settings using two statistical methods. Patients were recruited in four psychiatric sectors in France. Patient responses to the SF-36 generic HRQoL instrument were used to calculate SF-6D utility scores. The relationships between utility scores and patient socio-demographic, clinical characteristics, and mental health care utilization, considered as potentially associated factors, were studied using OLS and quantile regressions. One hundred and seventy six patients were included. Women, severely ill patients and those hospitalized full-time tended to report lower utility scores, whereas psychotic disorders (as opposed to mood disorders) and part-time care were associated with higher scores. The quantile regression highlighted that the size of the associations between the utility scores and some patient characteristics varied along with the utility score distribution, and provided more accurate estimated values than OLS regression. The quantile regression may constitute a relevant complement for the analysis of factors associated with utility scores. For policy decision-making, the association of full-time hospitalization with lower utility scores while part-time care was associated with higher scores supports the further development of alternatives to full-time hospitalizations.

  13. Hybrid ARIMAX quantile regression method for forecasting short term electricity consumption in east java

    NASA Astrophysics Data System (ADS)

    Prastuti, M.; Suhartono; Salehah, NA

    2018-04-01

The need for energy supply, especially electricity, in Indonesia has been increasing in recent years. Furthermore, high electricity usage by people at different times of day leads to heteroscedasticity. Estimating the electricity supply needed to meet the community's demand is very important, but heteroscedasticity often makes electricity forecasting difficult. An accurate forecast of electricity consumption is one of the key challenges for energy providers seeking to improve resource and service planning and to take control actions that balance electricity supply and demand. In this paper, a hybrid ARIMAX Quantile Regression (ARIMAX-QR) approach is proposed to predict short-term electricity consumption in East Java. The method is also compared to time series regression using RMSE, MAPE, and MdAPE criteria. The data used in this research were half-hourly electricity consumption during the period September 2015 to April 2016. The results show that the proposed approach is a competitive alternative for forecasting short-term electricity consumption in East Java. ARIMAX-QR using lag values and dummy variables as predictors yields more accurate predictions on both in-sample and out-of-sample data. Moreover, both time series regression and ARIMAX-QR with additional lag values as predictors capture the patterns in the data accurately, producing better predictions than models without the additional lag variables.

  14. Using instant messaging to enhance the interpersonal relationships of Taiwanese adolescents: evidence from quantile regression analysis.

    PubMed

    Lee, Yueh-Chiang; Sun, Ya Chung

    2009-01-01

Even though internet use by adolescents has grown exponentially, little is known about the correlation between their interaction via Instant Messaging (IM) and the evolution of their interpersonal relationships in real life. In the present study, 369 junior high school students in Taiwan responded to questions regarding their IM usage and dispositional measures of real-life interpersonal relationships. Descriptive statistics, factor analysis, and quantile regression methods were used to analyze the data. Results indicate that (1) IM helps define adolescents' self-identity (forming and maintaining individual friendships) and social identity (belonging to a peer group), and (2) IM use influences the development of interpersonal relationships, as adolescents appear to use IM to improve their interpersonal relationships in real life.

  15. Physical Activity and Pediatric Obesity: A Quantile Regression Analysis

    PubMed Central

    Mitchell, Jonathan A.; Dowda, Marsha; Pate, Russell R.; Kordas, Katarzyna; Froberg, Karsten; Sardinha, Luís B.; Kolle, Elin; Page, Angela

    2016-01-01

    Purpose We aimed to determine if moderate-to-vigorous physical activity (MVPA) and sedentary behavior (SB) were independently associated with body mass index (BMI) and waist circumference (WC) in children and adolescents. Methods Data from the International Children’s Accelerometry Database (ICAD) were used to address our objectives (N=11,115; 6-18y; 51% female). We calculated age and gender specific body mass index (BMI) and waist circumference (WC) Z-scores and used accelerometry to estimate MVPA and total SB. Self-reported television viewing was used as a measure of leisure time SB. Quantile regression was used to analyze the data. Results MVPA and total SB were associated with lower and higher BMI and WC Z-scores, respectively. These associations were strongest at the higher percentiles of the Z-score distributions. After including MVPA and total SB in the same model the MVPA associations remained, but the SB associations were no longer present. For example, each additional hour per day of MVPA was not associated with BMI Z-score at the 10th percentile (b=-0.02, P=0.170), but was associated with lower BMI Z-score at the 50th (b=-0.19, P<0.001) and 90th percentiles (b=-0.41, P<0.001). More television viewing was associated with higher BMI and WC and the associations were strongest at the higher percentiles of the Z-score distributions, with adjustment for MVPA and total SB. Conclusions Our observation of stronger associations at the higher percentiles indicate that increasing MVPA and decreasing television viewing at the population-level could shift the upper tails of the BMI and WC frequency distributions to lower values, thereby lowering the number of children and adolescents classified as obese. PMID:27755284

  16. Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory.

    DTIC Science & Technology

    1980-11-01

…Gnanadesikan (1968). Quantile functions are advocated by Parzen (1979) as providing an approach to probability-based data analysis. Quantile functions are… Reference: Gnanadesikan, R. (1968). Probability Plotting Methods for the Analysis of Data, Biometrika, 55, 1-17.

  17. Detecting Long-term Trend of Water Quality Indices of Dong-gang River, Taiwan Using Quantile Regression

    NASA Astrophysics Data System (ADS)

    Yang, D.; Shiau, J.

    2013-12-01

Surface water quality is an essential issue for human water supply and for sustaining the healthy ecosystems of rivers. However, river water quality is easily influenced by anthropogenic activities such as urban development and wastewater disposal. Long-term monitoring can assess whether river water quality is deteriorating. Taiwan is a densely populated area that depends heavily on surface water for domestic, industrial, and agricultural uses. The Dong-gang River is one of the major water resources in southern Taiwan for agricultural requirements. Water-quality data from four monitoring stations on the Dong-gang River for the period 2000-2012 are selected for trend analysis. The parameters used to characterize river water quality include biochemical oxygen demand (BOD), dissolved oxygen (DO), suspended solids (SS), and ammonia nitrogen (NH3-N). These four water-quality parameters are integrated into an index called the river pollution index (RPI) to indicate the pollution level of rivers. Although the widely used non-parametric Mann-Kendall test and linear regression are computationally efficient for identifying trends in water-quality indices, such approaches are sensitive to outliers and estimate only the conditional mean. Quantile regression, capable of identifying changes over time in any percentile value, is employed in this study to detect long-term trends in water-quality indices for the Dong-gang River. The results for the four stations over 2000-2012 indicate that at the Long-dong bridge station NH3-N and BOD5 showed downward trends while DO and SS rose, and the RPI trended downward; the Chau-Jhou station showed similar trends.
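The robustness argument for quantile trends can be illustrated on a synthetic monthly series; the trend, noise level, and spike values below are invented, and the quantile fit minimizes the check (pinball) loss directly:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Synthetic 13-year monthly water-quality index: a declining trend plus
# occasional extreme pollution spikes (all numbers illustrative).
t = np.arange(156) / 12.0
y = 6.0 - 0.15 * t + rng.normal(0, 0.5, 156)
y[rng.integers(0, 156, 8)] += 5.0          # outlier spikes

X = np.column_stack([np.ones_like(t), t])

def pinball(beta, tau):
    """Check loss that quantile regression minimizes."""
    r = y - X @ beta
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

# Median (tau = 0.5) trend: robust to the spikes, unlike the OLS slope.
beta0 = np.array([np.median(y), 0.0])
med = minimize(pinball, beta0, args=(0.5,), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000}).x
ols = np.linalg.lstsq(X, y, rcond=None)[0]
# med[1] stays near the true slope of -0.15 per year despite the spikes;
# refitting at tau = 0.1 or 0.9 would instead track trends in the tails.
```

Fitting a grid of taus (e.g. 0.1, 0.5, 0.9) gives separate trend estimates for low-pollution, typical, and high-pollution conditions, which is what distinguishes this analysis from Mann-Kendall or OLS trend tests.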

  18. Obesity inequality in Malaysia: decomposing differences by gender and ethnicity using quantile regression.

    PubMed

    Dunn, Richard A; Tan, Andrew K G; Nayga, Rodolfo M

    2012-01-01

Obesity prevalence is unequally distributed across gender and ethnic groups in Malaysia. In this paper, we examine the role of socioeconomic inequality in explaining these disparities. The body mass index (BMI) distributions of Malays and Chinese, the two largest ethnic groups in Malaysia, are estimated through the use of quantile regression. The differences in the BMI distributions are then decomposed into two parts: attributable to differences in socioeconomic endowments and attributable to differences in responses to endowments. For both males and females, the BMI distribution of Malays is shifted toward the right of the distribution of Chinese, i.e., Malays exhibit higher obesity rates. In the lower 75% of the distribution, differences in socioeconomic endowments explain none of this difference. At the 90th percentile, differences in socioeconomic endowments account for no more than 30% of the difference in BMI between ethnic groups. Our results demonstrate that the higher levels of income and education that accrue with economic development will likely not eliminate obesity inequality. This leads us to conclude that reduction of obesity inequality, as well as the overall level of obesity, requires increased efforts to alter the lifestyle behaviors of Malaysians.

  19. Hospital ownership and drug utilization under a global budget: a quantile regression analysis.

    PubMed

    Zhang, Jing Hua; Chou, Shin-Yi; Deily, Mary E; Lien, Hsien-Ming

    2014-03-01

    A global budgeting system helps control the growth of healthcare spending by setting expenditure ceilings. However, the hospital global budget implemented in Taiwan in 2002 included a special provision: drug expenditures are reimbursed at face value, while other expenditures are subject to discounting. That gives hospitals, particularly those that are for-profit, an incentive to increase drug expenditures in treating patients. We calculated monthly drug expenditures by hospital departments from January 1997 to June 2006, using a sample of 348 193 patient claims to Taiwan National Health Insurance. To allow for variation among responses by departments with differing reliance on drugs and among hospitals of different ownerships, we used quantile regression to identify the effect of the hospital global budget on drug expenditures. Although drug expenditure increased in all hospital departments after the enactment of the hospital global budget, departments in for-profit hospitals that rely more heavily on drug treatments increased drug spending more, relative to public hospitals. Our findings suggest that a global budgeting system with special reimbursement provisions for certain treatment categories may alter treatment decisions and may undermine cost-containment goals, particularly among for-profit hospitals.

  20. Quantile regression and Bayesian cluster detection to identify radon prone areas.

    PubMed

    Sarra, Annalina; Fontanella, Lara; Valentini, Pasquale; Palermi, Sergio

    2016-11-01

Although the dominant source of radon in indoor environments is the geology of the territory, many studies have demonstrated that indoor radon concentrations also depend on dwelling-specific characteristics. Following a stepwise analysis, in this study we propose a combined approach to delineate radon prone areas. We first investigate the impact of various building covariates on indoor radon concentrations. To achieve a more complete picture of this association, we exploit the flexible formulation of a Bayesian spatial quantile regression, which is also equipped with parameters that control the spatial dependence across data. The quantitative knowledge of the influence of each significant building-specific factor on the measured radon levels is employed to predict the radon concentrations that would have been found if the sampled buildings had possessed standard characteristics. Those normalised radon measures should reflect the geogenic radon potential of the underlying ground, which is a quantity directly related to the geological environment. The second stage of the analysis is aimed at identifying radon prone areas, and to this end, we adopt a Bayesian model for spatial cluster detection using as reference unit the building with standard characteristics. The case study is based on a data set of more than 2000 indoor radon measures, available for the Abruzzo region (Central Italy) and collected by the Agency of Environmental Protection of Abruzzo during several indoor radon monitoring surveys. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies.

    PubMed

    Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong

    2016-12-01

Coronary artery disease has become one of the most dangerous diseases to human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods have difficulty handling the complex vascular texture that arises from the projective nature of conventional coronary angiography. Because of the large amount of data and the complexity of vascular shapes, manual annotation has become increasingly unrealistic, and a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method based on reliable boundaries via multi-domains remapping and robust discrepancy correction via distance balance and quantile regression for automatic coronary artery segmentation of angiography images. The proposed method not only segments overlapping vascular structures robustly, but also achieves good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performances si, fnvf, fvpf and tpvf were 95.135%, 3.733%, 6.113%, and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Quantile equivalence to evaluate compliance with habitat management objectives

    USGS Publications Warehouse

    Cade, Brian S.; Johnson, Pamela R.

    2011-01-01

Equivalence estimated with linear quantile regression was used to evaluate compliance with habitat management objectives at Arapaho National Wildlife Refuge based on monitoring data collected in upland (5,781 ha; n = 511 transects) and riparian and meadow (2,856 ha; n = 389 transects) habitats from 2005 to 2008. Quantiles were used because the management objectives specified proportions of the habitat area that needed to comply with vegetation criteria. The linear model was used to obtain estimates that were averaged across 4 y. The equivalence testing framework allowed us to interpret confidence intervals for estimated proportions with respect to intervals of vegetative criteria (equivalence regions) in either a liberal, benefit-of-doubt or conservative, fail-safe approach associated with minimizing alternative risks. Simple Boolean conditional arguments were used to combine the quantile equivalence results for individual vegetation components into a joint statement for the multivariable management objectives. For example, management objective 2A required at least 809 ha of upland habitat with a shrub composition ≥0.70 sagebrush (Artemisia spp.), 20–30% canopy cover of sagebrush ≥25 cm in height, ≥20% canopy cover of grasses, and ≥10% canopy cover of forbs on average over 4 y. Shrub composition and canopy cover of grass each were readily met on >3,000 ha under either conservative or liberal interpretations of sampling variability. However, there were only 809–1,214 ha (conservative to liberal) with ≥10% forb canopy cover and 405–1,098 ha with 20–30% canopy cover of sagebrush ≥25 cm in height. Only 91–180 ha of uplands simultaneously met criteria for all four components, primarily because canopy cover of sagebrush and forbs was inversely related when considered at the spatial scale (30 m) of a sample transect. We demonstrate how the quantile equivalence analyses also can help refine the numerical specification of habitat objectives and explore
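The benefit-of-doubt versus fail-safe reading of a quantile's confidence interval can be sketched as follows; the transect values, the vegetation criterion, and the bootstrap setup are invented for illustration and are not the refuge's actual data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical transect data: percent forb canopy cover on 389 transects.
cover = rng.gamma(4.0, 3.5, 389)

# Illustrative criterion: at least half the area must have >= 10% forb cover,
# i.e. the 50th percentile of cover must be >= 10.
tau, threshold = 0.50, 10.0
est = np.quantile(cover, tau)

# Bootstrap CI for the quantile. "Benefit-of-doubt" (liberal) passes if the
# upper bound clears the criterion; "fail-safe" (conservative) requires the
# lower bound to clear it, mirroring the two risk interpretations above.
boot = np.quantile(rng.choice(cover, (2000, cover.size)), tau, axis=1)
lo, hi = np.quantile(boot, [0.025, 0.975])
liberal_pass = hi >= threshold
conservative_pass = lo >= threshold
```

Repeating this check per vegetation component and combining the results with Boolean conditions gives the kind of joint compliance statement described in the abstract.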

  3. Estimating earnings losses due to mental illness: a quantile regression approach.

    PubMed

    Marcotte, Dave E; Wilcox-Gök, Virginia

    2003-09-01

    The ability of workers to remain productive and sustain earnings when afflicted with mental illness depends importantly on access to appropriate treatment and on flexibility and support from employers. In the United States there is substantial variation in access to health care and sick leave and other employment flexibilities across the earnings distribution. Consequently, a worker's ability to work and how much his/her earnings are impeded likely depend upon his/her position in the earnings distribution. Because of this, focusing on average earnings losses may provide insufficient information on the impact of mental illness in the labor market. In this paper, we examine the effects of mental illness on earnings by recognizing that effects could vary across the distribution of earnings. Using data from the National Comorbidity Survey, we employ a quantile regression estimator to identify the effects at key points in the earnings distribution. We find that earnings effects vary importantly across the distribution. While average effects are often not large, mental illness more commonly imposes earnings losses at the lower tail of the distribution, especially for women. In only one case do we find an illness to have negative effects across the distribution. Mental illness can have larger negative impacts on economic outcomes than previously estimated, even if those effects are not uniform. Consequently, researchers and policy makers alike should not be placated by findings that mean earnings effects are relatively small. Such estimates miss important features of how and where mental illness is associated with real economic losses for the ill.

  4. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    USGS Publications Warehouse

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.

  5. Nonuniform sampling by quantiles

    NASA Astrophysics Data System (ADS)

    Craft, D. Levi; Sonstrom, Reilly E.; Rovnyak, Virginia G.; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, however higher dimensional schedules are similar within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the general public license.
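The equal-probability-region construction can be sketched in one dimension; the grid size, decay constant, and sample budget below are arbitrary choices, not values from the paper:

```python
import numpy as np

# Quantile-directed sampling for a 1-D NUS schedule: divide a weighting
# function on the Nyquist grid into k regions of equal probability and
# take the grid point at each region's midpoint.
N, k = 128, 32                       # grid size, number of samples to keep
grid = np.arange(N)
w = np.exp(-grid / 64.0)             # exponentially decaying weight (T2-like)
w /= w.sum()                         # normalize to a probability distribution

cdf = np.cumsum(w)
levels = (np.arange(k) + 0.5) / k    # midpoints of k equal-probability regions
schedule = np.searchsorted(cdf, levels)
# Dense sampling early in the evolution time, sparse late, with no large
# gaps for a given level of subsampling -- the deterministic 1-D schedule.
```

The nD refinements described above (a jitter parameter within each region, forced corner samples, optional triangular backfill) would be layered on top of this basic construction.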

  6. Patient characteristics associated with differences in radiation exposure from pediatric abdomen-pelvis CT scans: a quantile regression analysis.

    PubMed

    Cooper, Jennifer N; Lodwick, Daniel L; Adler, Brent; Lee, Choonsik; Minneci, Peter C; Deans, Katherine J

    2017-06-01

    Computed tomography (CT) is a widely used diagnostic tool in pediatric medicine. However, due to concerns regarding radiation exposure, it is essential to identify patient characteristics associated with higher radiation burden from CT imaging, in order to more effectively target efforts towards dose reduction. Our objective was to identify the effects of various demographic and clinical patient characteristics on radiation exposure from single abdomen/pelvis CT scans in children. CT scans performed at our institution between January 2013 and August 2015 in patients under 16 years of age were processed using a software tool that estimates patient-specific organ and effective doses and merges these estimates with data from the electronic health record and billing record. Quantile regression models at the 50th, 75th, and 90th percentiles were used to estimate the effects of patients' demographic and clinical characteristics on effective dose. 2390 abdomen/pelvis CT scans (median effective dose 1.52mSv) were included. Of all characteristics examined, only older age, female gender, higher BMI, and whether the scan was a multiphase exam or an exam that required repeating for movement were significant predictors of higher effective dose at each quantile examined (all p<0.05). The effects of obesity and multiphase or repeat scanning on effective dose were magnified in higher dose scans. Older age, female gender, obesity, and multiphase or repeat scanning are all associated with increased effective dose from abdomen/pelvis CT. Targeted efforts to reduce dose from abdominal CT in these groups should be undertaken. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Nonuniform sampling by quantiles.

    PubMed

    Craft, D Levi; Sonstrom, Reilly E; Rovnyak, Virginia G; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, however higher dimensional schedules are similar within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the general public license. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. The heterogeneous effects of urbanization and income inequality on CO2 emissions in BRICS economies: evidence from panel quantile regression.

    PubMed

    Zhu, Huiming; Xia, Hang; Guo, Yawei; Peng, Cheng

    2018-04-12

    This paper empirically examines the effects of urbanization and income inequality on CO 2 emissions in the BRICS economies (i.e., Brazil, Russia, India, China, and South Africa) during the periods 1994-2013. The method we used is the panel quantile regression, which takes into account the unobserved individual heterogeneity and distributional heterogeneity. Our empirical results indicate that urbanization has a significant and negative impact on carbon emissions, except in the 80 th , 90 th , and 95 th quantiles. We also quantitatively investigate the direct and indirect effect of urbanization on carbon emissions, and the results show that we may underestimate urbanization's effect on carbon emissions if we ignore its indirect effect. In addition, in middle- and high-emission countries, income inequality has a significant and positive impact on carbon emissions. The results of our study indicate that in the BRICS economies, there is an inverted U-shaped environmental Kuznets curve (EKC) between the GDP per capita and carbon emissions. The conclusions of this study have important policy implications for policymakers. Policymakers should try to narrow the income gap between the rich and the poor to improve environmental quality; the BRICS economies can speed up urbanization to reduce carbon emissions, but they must improve energy efficiency and use clean energy to the greatest extent in the process.

  9. Early Home Activities and Oral Language Skills in Middle Childhood: A Quantile Analysis

    ERIC Educational Resources Information Center

    Law, James; Rush, Robert; King, Tom; Westrupp, Elizabeth; Reilly, Sheena

    2018-01-01

    Oral language development is a key outcome of elementary school, and it is important to identify factors that predict it most effectively. Commonly researchers use ordinary least squares regression with conclusions restricted to average performance conditional on relevant covariates. Quantile regression offers a more sophisticated alternative.…

  10. Using the Quantile Mapping to improve a weather generator

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Themessl, M.; Gobiet, A.

    2012-04-01

    We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo simulation, auto-regression, and empirical orthogonal functions (EOFs). One of the important steps in the WG is QM, through which all variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore operate on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo simulation and auto-regression are used to generate the realization; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be reasonably produced. Evaluation experiments with WGQM have been carried out over Austria, and the evaluation results will be presented.
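The QM step of mapping an arbitrarily distributed variable to a normal one can be sketched with a rank-based empirical-CDF transform; the function name and the gamma-distributed example data are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def to_normal(x):
    """Map a sample to standard-normal scores via its empirical CDF
    (rank-based quantile mapping; a minimal sketch of the QM step)."""
    n = len(x)
    # plotting-position ranks in (0, 1) avoid infinities at u = 0 and u = 1
    u = (stats.rankdata(x) - 0.5) / n
    return stats.norm.ppf(u)

rng = np.random.default_rng(1)
skewed = rng.gamma(2.0, 2.0, size=10000)   # e.g. precipitation-like data
z = to_normal(skewed)
print(round(z.mean(), 2), round(z.std(), 2))  # approximately 0 and 1
```

The inverse mapping (normal scores back to the original distribution's quantiles) is what restores physically meaningful values after the stochastic generation step.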

  11. Quantile rank maps: a new tool for understanding individual brain development.

    PubMed

    Chen, Huaihou; Kelly, Clare; Castellanos, F Xavier; He, Ye; Zuo, Xi-Nian; Reiss, Philip T

    2015-05-01

    We propose a novel method for neurodevelopmental brain mapping that displays how an individual's values for a quantity of interest compare with age-specific norms. By estimating smoothly age-varying distributions at a set of brain regions of interest, we derive age-dependent region-wise quantile ranks for a given individual, which can be presented in the form of a brain map. Such quantile rank maps could potentially be used for clinical screening. Bootstrap-based confidence intervals are proposed for the quantile rank estimates. We also propose a recalibrated Kolmogorov-Smirnov test for detecting group differences in the age-varying distribution. This test is shown to be more robust to model misspecification than a linear regression-based test. The proposed methods are applied to brain imaging data from the Nathan Kline Institute Rockland Sample and from the Autism Brain Imaging Data Exchange (ABIDE) sample. Copyright © 2015 Elsevier Inc. All rights reserved.
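The core quantile-rank computation reduces to evaluating an individual's value against the (here age-matched) normative distribution at one region of interest; the normal norms and the value 120 below are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical age-matched norms for one brain region (illustrative data)
norms = rng.normal(loc=100.0, scale=15.0, size=500)
individual_value = 120.0

# The individual's quantile rank relative to the normative distribution;
# repeating this per region yields the values displayed in a rank map
rank = stats.percentileofscore(norms, individual_value) / 100.0
print(round(rank, 2))
```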

  12. [Socioeconomic factors conditioning obesity in adults. Evidence based on quantile regression and panel data].

    PubMed

    Temporelli, Karina L; Viego, Valentina N

    2016-08-01

    Objective To measure the effect of socioeconomic variables on the prevalence of obesity. Factors such as income level, urbanization, incorporation of women into the labor market and access to unhealthy foods are considered in this paper. Method Econometric estimates of the proportion of obese men and women by country were calculated using models based on panel data and quantile regressions, with data from 192 countries for the period 2002-2005. Levels of per capita income, urbanization, the income/Big Mac price ratio and labor indicators for the female population were considered as explanatory variables. Results Factors that influence obesity in adults differ between men and women; accessibility of fast food is related to male obesity, while the mode of employment causes higher rates in women. The underlying socioeconomic factors for obesity also differ depending on the magnitude of this problem in each country; in countries with low prevalence, a greater level of income favors the transition to obesogenic habits, while a higher income level mitigates the problem in countries with high rates of obesity. Discussion Identifying the socioeconomic causes of the significant increase in the prevalence of obesity is essential for the implementation of effective prevention strategies, since this condition not only affects the quality of life of those who suffer from it but also puts pressure on health systems due to the treatment costs of associated diseases.

  13. Factors Associated with Adherence to Adjuvant Endocrine Therapy Among Privately Insured and Newly Diagnosed Breast Cancer Patients: A Quantile Regression Analysis.

    PubMed

    Farias, Albert J; Hansen, Ryan N; Zeliadt, Steven B; Ornelas, India J; Li, Christopher I; Thompson, Beti

    2016-08-01

    Adherence to adjuvant endocrine therapy (AET) for estrogen receptor-positive breast cancer remains suboptimal, which suggests that women are not getting the full benefit of the treatment to reduce breast cancer recurrence and mortality. The majority of studies on adherence to AET focus on identifying factors among those women at the highest levels of adherence and provide little insight on factors that influence medication use across the distribution of adherence. To understand how factors influence adherence among women across low and high levels of adherence. A retrospective evaluation was conducted using the Truven Health MarketScan Commercial Claims and Encounters Database from 2007-2011. Privately insured women aged 18-64 years who were recently diagnosed and treated for breast cancer and who initiated AET within 12 months of primary treatment were assessed. Adherence was measured as the proportion of days covered (PDC) over a 12-month period. Simultaneous multivariable quantile regression was used to assess the association between treatment and demographic factors, use of mail order pharmacies, medication switching, and out-of-pocket costs and adherence. The effect of each variable was examined at the 40th, 60th, 80th, and 95th quantiles. Among the 6,863 women in the cohort, mail order pharmacies had the greatest influence on adherence at the 40th quantile, associated with a 29.6% (95% CI = 22.2-37.0) higher PDC compared with retail pharmacies. Out-of-pocket cost for a 30-day supply of AET greater than $20 was associated with an 8.6% (95% CI = 2.8-14.4) lower PDC versus $0-$9.99. The main factors that influenced adherence at the 95th quantile were mail order pharmacies, associated with a 4.4% higher PDC (95% CI = 3.8-5.0) versus retail pharmacies, and switching AET medication 2 or more times, associated with a 5.6% lower PDC versus not switching (95% CI = 2.3-9.0). Factors associated with adherence differed across quantiles. Addressing the use of mail order
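The adherence measure used above, proportion of days covered (PDC), can be sketched as follows; the fill dates and day supplies are hypothetical, and real claims-based PDC additionally shifts overlapping fills forward:

```python
import numpy as np

def proportion_of_days_covered(fill_dates, days_supply, period_days=365):
    """PDC over a fixed period: the share of days on which the patient
    had medication on hand. Minimal sketch (overlapping fills simply
    overwrite the same covered days rather than being shifted forward)."""
    covered = np.zeros(period_days, dtype=bool)
    for start, supply in zip(fill_dates, days_supply):
        covered[start:start + supply] = True
    return covered.mean()

# Hypothetical fill history: day offsets within the 12-month window
pdc = proportion_of_days_covered([0, 35, 70, 110], [30, 30, 30, 90], 365)
print(round(pdc, 3))  # → 0.493
```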

  14. Simulating Quantile Models with Applications to Economics and Management

    NASA Astrophysics Data System (ADS)

    Machado, José A. F.

    2010-05-01

    The massive increase in the speed of computers over the past forty years has changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they can feasibly tackle. The new methods that make intensive use of computer power go by the names "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions Using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.

  15. Spatially Modeling the Effects of Meteorological Drivers of PM2.5 in the Eastern United States via a Local Linear Penalized Quantile Regression Estimator.

    PubMed

    Russell, Brook T; Wang, Dewei; McMahan, Christopher S

    2017-08-01

    Fine particulate matter (PM2.5) poses a significant risk to human health, with long-term exposure being linked to conditions such as asthma, chronic bronchitis, lung cancer and atherosclerosis. In order to improve current pollution control strategies and to better shape public policy, a more comprehensive understanding of this air pollutant is necessary. To this end, this work attempts to quantify the relationship between certain meteorological drivers and levels of PM2.5. It is expected that the set of important meteorological drivers will vary both spatially and within the conditional distribution of PM2.5 levels. To account for these characteristics, a new local linear penalized quantile regression methodology is developed. The proposed estimator uniquely selects the set of important drivers at every spatial location and for each quantile of the conditional distribution of PM2.5 levels. The performance of the proposed methodology is illustrated through simulation, and it is then used to determine the association between several meteorological drivers and PM2.5 over the Eastern United States (US). This analysis suggests that the primary drivers throughout much of the Eastern US tend to differ based on season and geographic location, with similarities existing between "typical" and "high" PM2.5 levels.

  16. Association of Perceived Stress with Stressful Life Events, Lifestyle and Sociodemographic Factors: A Large-Scale Community-Based Study Using Logistic Quantile Regression

    PubMed Central

    Feizi, Awat; Aliyari, Roqayeh; Roohafza, Hamidreza

    2012-01-01

    Objective. The present paper aimed at investigating the association between perceived stress and major life event stressors in the Iranian general population. Methods. In a cross-sectional, large-scale, community-based study, 4583 people aged 19 and older, living in Isfahan, Iran, were investigated. Logistic quantile regression was used to model perceived stress, measured by the GHQ questionnaire, as the bounded outcome (dependent) variable and as a function of the most important stressful life events as predictor variables, controlling for major lifestyle and sociodemographic factors. This model provides empirical evidence of heterogeneity in the predictors' effects depending on an individual's location in the distribution of perceived stress. Results. The results showed that among four stressful life events, family conflicts and social problems were most strongly correlated with the level of perceived stress. Higher levels of education were negatively associated with perceived stress, and its coefficients monotonically decrease beyond the 30th percentile. Also, higher levels of physical activity were associated with perception of low levels of stress. The pattern of the gender coefficient over the majority of quantiles implied that females are more affected by stressors. Also, high perceived stress was associated with low or middle levels of income. Conclusions. The results of the current research suggest that in a developing society with a high prevalence of stress, interventions targeted toward promoting financial and social equality, social skills training, and healthy lifestyles may have potential benefits for large parts of the population, most notably women and less educated people. PMID:23091560

  17. Removing Batch Effects from Longitudinal Gene Expression - Quantile Normalization Plus ComBat as Best Approach for Microarray Transcriptome Data

    PubMed Central

    Müller, Christian; Schillert, Arne; Röthemeier, Caroline; Trégouët, David-Alexandre; Proust, Carole; Binder, Harald; Pfeiffer, Norbert; Beutel, Manfred; Lackner, Karl J.; Schnabel, Renate B.; Tiret, Laurence; Wild, Philipp S.; Blankenberg, Stefan

    2016-01-01

    Technical variation plays an important role in microarray-based gene expression studies, and batch effects explain a large proportion of this noise. It is therefore mandatory to eliminate technical variation while maintaining biological variability. Several strategies have been proposed for the removal of batch effects, although they have not been evaluated in large-scale longitudinal gene expression data. In this study, we aimed at identifying a suitable method for batch effect removal in a large study of microarray-based longitudinal gene expression. Monocytic gene expression was measured in 1092 participants of the Gutenberg Health Study at baseline and 5-year follow up. Replicates of selected samples were measured at both time points to identify technical variability. Deming regression, Passing-Bablok regression, linear mixed models, non-linear models as well as ReplicateRUV and ComBat were applied to eliminate batch effects between replicates. In a second step, quantile normalization prior to batch effect correction was performed for each method. Technical variation between batches was evaluated by principal component analysis. Associations between body mass index and transcriptomes were calculated before and after batch removal. Results from association analyses were compared to evaluate maintenance of biological variability. Quantile normalization, separately performed in each batch, combined with ComBat successfully reduced batch effects and maintained biological variability. ReplicateRUV performed perfectly in the replicate data subset of the study, but failed when applied to all samples. All other methods did not substantially reduce batch effects in the replicate data subset. Quantile normalization plus ComBat appears to be a valuable approach for batch correction in longitudinal gene expression data. PMID:27272489
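For reference, the quantile normalization step (applied before ComBat) forces every sample to share one empirical distribution; a minimal sketch without tie handling, on a toy genes x samples matrix:

```python
import numpy as np

def quantile_normalize(mat):
    """Quantile normalization of a genes x samples matrix: every sample
    (column) is forced onto the same reference distribution, namely the
    across-sample mean of each rank's values. Minimal sketch (ties are
    broken by index order rather than averaged)."""
    order = np.argsort(mat, axis=0)                  # per-column sort order
    ranks = np.argsort(order, axis=0)                # rank of each entry
    mean_sorted = np.sort(mat, axis=0).mean(axis=1)  # reference distribution
    return mean_sorted[ranks]

mat = np.array([[5.0, 4.0, 3.0],
                [2.0, 1.0, 4.0],
                [3.0, 4.0, 6.0],
                [4.0, 2.0, 8.0]])
print(quantile_normalize(mat))
```

After the transform, every column has identical sorted values, so downstream batch correction sees no per-sample distributional differences.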

  18. How important are determinants of obesity measured at the individual level for explaining geographic variation in body mass index distributions? Observational evidence from Canada using Quantile Regression and Blinder-Oaxaca Decomposition.

    PubMed

    Dutton, Daniel J; McLaren, Lindsay

    2016-04-01

    Obesity prevalence varies between geographic regions in Canada. The reasons for this variation are unclear but most likely implicate both individual-level and population-level factors. The objective of this study was to examine whether equalising correlates of body mass index (BMI) across these geographic regions could be reasonably expected to reduce differences in BMI distributions between regions. Using data from three cycles of the Canadian Community Health Survey (CCHS) 2001, 2003 and 2007 for males and females, we modelled between-region BMI cross-sectionally using quantile regression and Blinder-Oaxaca decomposition of the quantile regression results. We show that while individual-level variables (ie, age, income, education, physical activity level, fruit and vegetable consumption, smoking status, drinking status, family doctor status, rural status, employment in the past 12 months and marital status) may be important correlates of BMI within geographic regions, those variables are not capable of explaining variation in BMI between regions. Equalisation of common correlates of BMI between regions cannot be reasonably expected to reduce differences in the BMI distributions between regions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  19. Quantile regression of microgeographic variation in population characteristics of an invasive vertebrate predator

    USGS Publications Warehouse

    Siers, Shane R.; Savidge, Julie A.; Reed, Robert

    2017-01-01

    Localized ecological conditions have the potential to induce variation in population characteristics such as size distributions and body conditions. The ability to generalize the influence of ecological characteristics on such population traits may be particularly meaningful when those traits influence prospects for successful management interventions. To characterize variability in invasive Brown Treesnake population attributes within and among habitat types, we conducted systematic and seasonally-balanced surveys, collecting 100 snakes from each of 18 sites: three replicates within each of six major habitat types comprising 95% of Guam’s geographic expanse. Our study constitutes one of the most comprehensive and controlled samplings of any published snake study. Quantile regression on snake size and body condition indicated significant ecological heterogeneity, with a general trend of relative consistency of size classes and body conditions within and among scrub and Leucaena forest habitat types and more heterogeneity among ravine forest, savanna, and urban residential sites. Larger and more robust snakes were found within some savanna and urban habitat replicates, likely due to relative availability of larger prey. Compared to more homogeneous samples in the wet season, variability in size distributions and body conditions was greater during the dry season. Although there is evidence of habitat influencing Brown Treesnake populations at localized scales (e.g., the higher prevalence of larger snakes—particularly males—in savanna and urban sites), the level of variability among sites within habitat types indicates little ability to make meaningful predictions about these traits at unsampled locations. Seasonal variability within sites and habitats indicates that localized population characterization should include sampling in both wet and dry seasons. Extreme values at single replicates occasionally influenced overall habitat patterns, while pooling

  20. Quantile regression of microgeographic variation in population characteristics of an invasive vertebrate predator

    PubMed Central

    Siers, Shane R.; Savidge, Julie A.; Reed, Robert N.

    2017-01-01

    Localized ecological conditions have the potential to induce variation in population characteristics such as size distributions and body conditions. The ability to generalize the influence of ecological characteristics on such population traits may be particularly meaningful when those traits influence prospects for successful management interventions. To characterize variability in invasive Brown Treesnake population attributes within and among habitat types, we conducted systematic and seasonally-balanced surveys, collecting 100 snakes from each of 18 sites: three replicates within each of six major habitat types comprising 95% of Guam’s geographic expanse. Our study constitutes one of the most comprehensive and controlled samplings of any published snake study. Quantile regression on snake size and body condition indicated significant ecological heterogeneity, with a general trend of relative consistency of size classes and body conditions within and among scrub and Leucaena forest habitat types and more heterogeneity among ravine forest, savanna, and urban residential sites. Larger and more robust snakes were found within some savanna and urban habitat replicates, likely due to relative availability of larger prey. Compared to more homogeneous samples in the wet season, variability in size distributions and body conditions was greater during the dry season. Although there is evidence of habitat influencing Brown Treesnake populations at localized scales (e.g., the higher prevalence of larger snakes—particularly males—in savanna and urban sites), the level of variability among sites within habitat types indicates little ability to make meaningful predictions about these traits at unsampled locations. Seasonal variability within sites and habitats indicates that localized population characterization should include sampling in both wet and dry seasons. Extreme values at single replicates occasionally influenced overall habitat patterns, while pooling

  1. Quantile regression of microgeographic variation in population characteristics of an invasive vertebrate predator.

    PubMed

    Siers, Shane R; Savidge, Julie A; Reed, Robert N

    2017-01-01

    Localized ecological conditions have the potential to induce variation in population characteristics such as size distributions and body conditions. The ability to generalize the influence of ecological characteristics on such population traits may be particularly meaningful when those traits influence prospects for successful management interventions. To characterize variability in invasive Brown Treesnake population attributes within and among habitat types, we conducted systematic and seasonally-balanced surveys, collecting 100 snakes from each of 18 sites: three replicates within each of six major habitat types comprising 95% of Guam's geographic expanse. Our study constitutes one of the most comprehensive and controlled samplings of any published snake study. Quantile regression on snake size and body condition indicated significant ecological heterogeneity, with a general trend of relative consistency of size classes and body conditions within and among scrub and Leucaena forest habitat types and more heterogeneity among ravine forest, savanna, and urban residential sites. Larger and more robust snakes were found within some savanna and urban habitat replicates, likely due to relative availability of larger prey. Compared to more homogeneous samples in the wet season, variability in size distributions and body conditions was greater during the dry season. Although there is evidence of habitat influencing Brown Treesnake populations at localized scales (e.g., the higher prevalence of larger snakes-particularly males-in savanna and urban sites), the level of variability among sites within habitat types indicates little ability to make meaningful predictions about these traits at unsampled locations. Seasonal variability within sites and habitats indicates that localized population characterization should include sampling in both wet and dry seasons. Extreme values at single replicates occasionally influenced overall habitat patterns, while pooling replicates

  2. Estimation of effects of factors related to preschooler body mass index using quantile regression model.

    PubMed

    Kim, Hee Soon; Park, Yun Hee; Park, Hyun Bong; Kim, Su Hee

    2014-12-01

    The purpose of this study was to identify Korean preschoolers' obesity-related factors through an ecological approach and to estimate the differing effects of ecological variables on body mass index and its quantiles. The study design was cross-sectional. Through convenience sampling, 241 cases were collected from three kindergartens and seven nurseries in the Seoul metropolitan area and Kyunggi Province in April 2013, using self-administered questionnaires completed by preschoolers' mothers and homeroom teachers. Results of ordinary least squares regression analysis show that mother's sedentary behavior (p < .001), sedentary behavior parenting (p = .039), healthy eating parenting (p = .027), and physical activity-related social capital (p = .029) were significant factors for preschoolers' body mass index. In the 5% body mass index distribution group, gender (p = .031), preference for physical activity (p = .015), mother's sedentary behavior parenting (p = .032), healthy eating parenting (p = .005), and teacher's sedentary behavior (p = .037) showed significant influences. In the 25% group, the effects of gender and preference for physical activity were no longer significant. In the 75% and 95% groups, only mother's sedentary behavior showed a statistically significant influence (p < .001 and p = .012, respectively). Efforts to lower the obesity rate of preschoolers should focus on their environment, especially on the sedentary behavior of mothers, as mothers are the main nurturers of this age group. Copyright © 2014. Published by Elsevier B.V.

  3. Fitness adjusted racial disparities in central adiposity among women in the USA using quantile regression.

    PubMed

    McDonald, S; Ortaglia, A; Supino, C; Kacka, M; Clenin, M; Bottai, M

    2017-06-01

    This study comprehensively explores racial/ethnic disparities in waist circumference (WC) after adjusting for cardiorespiratory fitness (CRF), among both adult and adolescent women, across WC percentiles. Analysis was conducted using data from the 1999 to 2004 National Health and Nutrition Examination Survey. Female participants (n = 3,977) aged 12-49 years with complete data on CRF, height, weight and WC were included. Quantile regression models, stratified by age groups (12-15, 16-19 and 20-49 years), were used to assess the association between WC and race/ethnicity adjusting for CRF, height and age across WC percentiles (10th, 25th, 50th, 75th, 90th and 95th). For non-Hispanic (NH) Black, in both the 16-19 and 20-49 years age groups, estimated WC was significantly greater than for NH White across percentiles above the median with estimates ranging from 5.2 to 11.5 cm. For Mexican Americans, in all age groups, estimated WC tended to be significantly greater than for NH White particularly for middle percentiles (50th and 75th) with point estimates ranging from 1.9 to 8.4 cm. Significant disparities in WC between NH Black and Mexican women, as compared to NH White, remain even after adjustment for CRF. The magnitude of the disparities associated with race/ethnicity differs across WC percentiles and age groups.

  4. Quantile based Tsallis entropy in residual lifetime

    NASA Astrophysics Data System (ADS)

    Khammar, A. H.; Jahanshahi, S. M. A.

    2018-02-01

    Tsallis entropy is a generalization of type α of the Shannon entropy; it is a nonadditive entropy, unlike the Shannon entropy. Shannon entropy may be negative for some distributions, but Tsallis entropy can always be made nonnegative by choosing an appropriate value of α. In this paper, we derive the quantile form of this nonadditive entropy in the residual lifetime, namely the residual quantile Tsallis entropy (RQTE), and obtain bounds for it in terms of Renyi's residual quantile entropy. We also obtain a relationship between RQTE and the proportional hazards model in the quantile setup. Based on the new measure, we propose a stochastic order and aging classes and study their properties. Finally, we prove characterization theorems for some well-known lifetime distributions. It is shown that RQTE uniquely determines the parent distribution, unlike the residual Tsallis entropy.
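For orientation, the standard Tsallis entropy admits a quantile form consistent with the setup above: writing $Q(u)$ for the quantile function and $q(u)=Q'(u)$ for the quantile density, the change of variables $x = Q(u)$ gives $f(Q(u)) = 1/q(u)$, so

```latex
S_\alpha(f) \;=\; \frac{1}{\alpha-1}\left(1 - \int f(x)^{\alpha}\,dx\right),
\qquad
\int f(x)^{\alpha}\,dx \;=\; \int_0^1 q(u)^{1-\alpha}\,du .
```

Truncating at lifetime $Q(u)$ and renormalizing the residual density by the survival probability $1-u$ then yields a residual quantile Tsallis entropy of the form

```latex
S_\alpha(X;u) \;=\; \frac{1}{\alpha-1}
\left(1 - \frac{1}{(1-u)^{\alpha}} \int_u^1 q(p)^{1-\alpha}\,dp\right),
```

though the paper's exact RQTE normalization may differ.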

  5. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.
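EMA iterates expected moments to incorporate censored and historical observations; the plain method-of-moments quantile step it shares with Bulletin 17B can be sketched on synthetic peaks. The flows below and the use of scipy's pearson3 parameterized by the log-space mean, standard deviation and skew are illustrative assumptions, not a full frequency analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical annual peak flows; real analyses would use gauged records
flows = rng.lognormal(mean=8.0, sigma=0.6, size=60)

# Bulletin 17B-style method of moments: fit log-Pearson Type 3 via the
# mean, standard deviation and skew of the log-transformed peaks
logs = np.log10(flows)
m, s = logs.mean(), logs.std(ddof=1)
g = stats.skew(logs, bias=False)

# 100-year flood quantile = 99th percentile of the fitted distribution
q100 = 10 ** stats.pearson3.ppf(0.99, g, loc=m, scale=s)
print(round(q100))
```

The EMA refinement replaces these sample moments with expected moments when part of the record is only known to exceed a perception threshold.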

  6. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper mainly forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting stock closing prices.

  7. Percentile-Based ETCCDI Temperature Extremes Indices for CMIP5 Model Output: New Results through Semiparametric Quantile Regression Approach

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, C.

    2017-12-01

    Climate extremes often manifest as rare events in terms of surface air temperature and precipitation with an annual reoccurrence period. In order to represent the manifold characteristics of climate extremes for monitoring and analysis, the Expert Team on Climate Change Detection and Indices (ETCCDI) worked out a set of 27 core indices based on daily temperature and precipitation data, describing extreme weather and climate events on an annual basis. The CLIMDEX project (http://www.climdex.org) has produced public-domain datasets of such indices for data from a variety of sources, including output from global climate models (GCMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Among the 27 ETCCDI indices, there are six percentile-based temperature extremes indices that fall into two groups: exceedance rates (ER) (TN10p, TN90p, TX10p and TX90p) and durations (CSDI and WSDI). Percentiles must be estimated prior to the calculation of the indices and may be biased to a greater or lesser degree by the adopted algorithm. Such biases will in turn be propagated to the final results of the indices. CLIMDEX used an empirical quantile estimator combined with a bootstrap resampling procedure to reduce the inhomogeneity in the annual series of the ER indices. However, some problems remain in the CLIMDEX datasets, namely overestimated climate variability due to unaccounted autocorrelation in the daily temperature data, seasonally varying biases, and inconsistency between the algorithms applied to the ER indices and to the duration indices. We now present new results for the six indices through a semiparametric quantile regression approach for the CMIP5 model output. By using the base-period data as a whole and taking seasonality and autocorrelation into account, this approach successfully addresses the aforementioned issues and produces consistent results. The new datasets cover the historical and three projected (RCP2.6, RCP4.5 and RCP

  8. The use of quantile regression to forecast higher than expected respiratory deaths in a daily time series: a study of New York City data 1987-2000.

    PubMed

    Soyiri, Ireneous N; Reidpath, Daniel D

    2013-01-01

    Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data taken from 70,830 deaths occurring in New York were used. Temporal, weather and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal, temporal model plus lags of weather and air quality (Model 3), and a seasonal, temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out-of-sample data. Performance was measured as the proportionate reduction in the weighted sum of absolute deviations of a conditional over the unconditional model, i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model of between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provided a new approach to predict higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept.
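The R1 measure above is the proportionate reduction in the quantile ("pinball") loss of a conditional model over the unconditional 90th-percentile constant; a self-contained sketch with a synthetic seasonal signal (all data and the per-season conditional model are illustrative):

```python
import numpy as np

def pinball_loss(y, pred, q):
    """Weighted sum of absolute deviations minimized by quantile regression:
    over-predictions weigh (1-q), under-predictions weigh q."""
    err = y - pred
    return np.sum(np.maximum(q * err, (q - 1) * err))

rng = np.random.default_rng(6)
season = rng.integers(0, 4, 2000)                # hypothetical season index
y = rng.poisson(15 + 5 * season).astype(float)   # deaths rise with season

q = 0.9
uncond = pinball_loss(y, np.quantile(y, q), q)   # Model-1-style constant

# Conditional stand-in: a separate 90th percentile per season
pred = np.empty_like(y)
for s in range(4):
    pred[season == s] = np.quantile(y[season == s], q)
cond = pinball_loss(y, pred, q)

r1 = 1 - cond / uncond   # proportionate reduction in loss (the paper's R1)
print(round(r1, 3))
```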

  9. The Use of Quantile Regression to Forecast Higher Than Expected Respiratory Deaths in a Daily Time Series: A Study of New York City Data 1987-2000

    PubMed Central

    Soyiri, Ireneous N.; Reidpath, Daniel D.

    2013-01-01

Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data taken from 70,830 deaths occurring in New York were used. Temporal, weather and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal/temporal model plus lags of weather and air quality (Model 3), and a seasonal/temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out-of-sample data. Performance was measured as the proportionate reduction in the weighted sum of absolute deviations of a conditional model relative to the unconditional model, i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model of between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provided a new approach to predicting higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept. PMID:24147122
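The R1 measure reported above is Koenker and Machado's quantile analogue of R²: one minus the ratio of the weighted sums of absolute deviations (pinball loss) of the conditional and unconditional models. A minimal sketch, with illustrative function names not taken from the study:

```python
def pinball_loss(y, yhat, tau):
    """Asymmetric absolute loss; minimized by the tau-th conditional quantile."""
    return sum(tau * (yi - fi) if yi >= fi else (1 - tau) * (fi - yi)
               for yi, fi in zip(y, yhat))

def r1(y, yhat_conditional, yhat_unconditional, tau):
    """Proportionate reduction in pinball loss of a conditional model
    over an unconditional (constant-quantile) model."""
    v1 = pinball_loss(y, yhat_conditional, tau)
    v0 = pinball_loss(y, yhat_unconditional, tau)
    return 1.0 - v1 / v0
```

For example, a conditional 90th-percentile model whose errors are one fifth of the unconditional model's weighted deviations scores r1 = 0.8.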

  10. Smooth quantile normalization.

    PubMed

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
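For context, classic quantile normalization, the method qsmooth generalizes, forces every sample onto a shared reference distribution: the mean of the k-th smallest values across samples. A pure-Python sketch of that baseline (not the qsmooth implementation itself):

```python
def quantile_normalize(samples):
    """Classic quantile normalization.

    samples: list of equal-length lists, one per sample. Each value is
    replaced by the cross-sample mean of the order statistic at its rank,
    so all samples end up with an identical distribution.
    """
    n = len(samples[0])
    # reference distribution: mean of the k-th smallest value across samples
    ref = [sum(sorted(s)[k] for s in samples) / len(samples) for k in range(n)]
    out = []
    for s in samples:
        order = sorted(range(n), key=lambda i: s[i])  # ranks of this sample
        new = [0.0] * n
        for k, i in enumerate(order):
            new[i] = ref[k]
        out.append(new)
    return out
```

qsmooth relaxes exactly this step, blending a within-group reference with the global one so that genuine between-group distributional differences survive.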

  11. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. The use of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values, in particular, makes it even more complicated for decision makers to sift out the relevant information. In this study multiple streamflow forecast information will be aggregated based on several different predictive distributions and quantile forecasts. For this combination the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) will be applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, will be compared and the differences between the raw and optimally combined forecasts will be highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
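The simplest of the combination schemes involved, plain linear pooling (the baseline that the Beta-transformed linear pool modifies), averages the component predictive CDFs with non-negative weights summing to one. An illustrative sketch, not the study's implementation:

```python
def linear_pool(cdfs, weights):
    """Combine predictive CDFs evaluated on a common grid of streamflow values.

    cdfs[m][j] is model m's predictive CDF at grid point j; weights are
    non-negative and sum to one. The BLP method additionally applies a
    Beta transform to the pooled CDF, which is omitted here.
    """
    return [sum(w * f for w, f in zip(weights, col)) for col in zip(*cdfs)]
```

Because each component CDF is non-decreasing from 0 to 1 and the weights are convex, the pooled function is again a valid CDF.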

  12. Quantile-Specific Penetrance of Genes Affecting Lipoproteins, Adiposity and Height

    PubMed Central

    Williams, Paul T.

    2012-01-01

Quantile-dependent penetrance is proposed to occur when the phenotypic expression of a SNP depends upon the population percentile of the phenotype. To illustrate the phenomenon, quantiles of height, body mass index (BMI), and plasma lipids and lipoproteins were compared to genetic risk scores (GRS) derived from single nucleotide polymorphisms (SNPs) having established genome-wide significance: 180 SNPs for height, 32 for BMI, 37 for low-density lipoprotein (LDL)-cholesterol, 47 for high-density lipoprotein (HDL)-cholesterol, 52 for total cholesterol, and 31 for triglycerides in 1930 subjects. Both phenotypes and GRSs were adjusted for sex, age, study, and smoking status. Quantile regression showed that the slope of the genotype-phenotype relationships increased with the percentile of the BMI (P = 0.002), LDL-cholesterol (P = 3×10−8), HDL-cholesterol (P = 5×10−6), total cholesterol (P = 2.5×10−6), and triglyceride (P = 7.5×10−6) distributions, but not height (P = 0.09). Compared to a GRS's phenotypic effect at the 10th population percentile, its effect at the 90th percentile was 4.2-fold greater for BMI, 4.9-fold greater for LDL-cholesterol, 1.9-fold greater for HDL-cholesterol, 3.1-fold greater for total cholesterol, and 3.3-fold greater for triglycerides. Moreover, the effect of the rs1558902 (FTO) risk allele was 6.7-fold greater at the 90th than the 10th percentile of the BMI distribution, and that of the rs3764261 (CETP) risk allele was 2.4-fold greater at the 90th than the 10th percentile of the HDL-cholesterol distribution. Conceptually, it may be useful to distinguish environmental effects on the phenotype that in turn alter a gene's phenotypic expression (quantile-dependent penetrance) from environmental effects affecting the gene's phenotypic expression directly (gene-environment interaction). PMID:22235250
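Comparing a genetic effect at the 10th versus the 90th percentile presupposes an empirical quantile function. A minimal linearly interpolated version (the same "linear" convention used by common statistical libraries; illustrative, not the study's code):

```python
def empirical_quantile(xs, q):
    """q-th sample quantile (0 <= q <= 1), interpolating linearly
    between adjacent order statistics."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    frac = pos - lo
    if frac == 0:
        return s[lo]
    return s[lo] * (1 - frac) + s[lo + 1] * frac
```

With this helper, a phenotype's 10th and 90th percentiles can be located and a GRS slope estimated near each, which is the comparison the fold-change figures above summarize.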

  13. CADDIS Volume 4. Data Analysis: Basic Analyses

    EPA Pesticide Factsheets

    Use of statistical tests to determine if an observation is outside the normal range of expected values. Details of CART, regression analysis, use of quantile regression analysis, CART in causal analysis, simplifying or pruning resulting trees.

  14. Quantile uncertainty and value-at-risk model risk.

    PubMed

    Alexander, Carol; Sarabia, José María

    2012-08-01

    This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks. © 2012 Society for Risk Analysis.
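In its simplest "historical simulation" form, the Value-at-Risk quantile discussed above is just an upper empirical quantile of the historical loss distribution; the paper's contribution is adjusting such estimates for model risk, which this sketch does not attempt. One common convention (order-statistic choices differ across implementations):

```python
def historical_var(pnl, alpha=0.01):
    """alpha-level Value-at-Risk by historical simulation: the loss that is
    exceeded with probability roughly alpha, taken as an upper order
    statistic of historical losses (losses = negated P&L)."""
    losses = sorted((-x for x in pnl), reverse=True)
    k = max(int(alpha * len(losses)), 0)
    return losses[min(k, len(losses) - 1)]
```

A model-risk adjustment in the spirit of the article would then shift this raw quantile toward a benchmark distribution maintained by the regulator.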

  15. Environmental determinants of different blood lead levels in children: a quantile analysis from a nationwide survey.

    PubMed

    Etchevers, Anne; Le Tertre, Alain; Lucas, Jean-Paul; Bretin, Philippe; Oulhote, Youssef; Le Bot, Barbara; Glorennec, Philippe

    2015-01-01

Blood lead levels (BLLs) have substantially decreased in recent decades in children in France. However, further reducing exposure is a public health goal because there is no clear toxicological threshold. The identification of the environmental determinants of BLLs as well as risk factors associated with high BLLs is important to update prevention strategies. We aimed to estimate the contribution of environmental sources of lead to different BLLs in children in France. We enrolled 484 children aged from 6 months to 6 years in a nationwide cross-sectional survey in 2008-2009. We measured lead concentrations in blood and environmental samples (water, soils, household settled dusts, paints, cosmetics and traditional cookware). We fitted two models: a multivariate generalized additive model on the geometric mean (GM), and a quantile regression model on the 10th, 25th, 50th, 75th and 90th quantiles of BLLs. The GM of BLLs was 13.8 μg/L (=1.38 μg/dL) (95% confidence interval (CI): 12.7-14.9) and the 90th quantile was 25.7 μg/L (CI: 24.2-29.5). Household and common area dust, tap water, interior paint, ceramic cookware, traditional cosmetics, playground soil and dust, and environmental tobacco smoke were associated with the GM of BLLs. Household dust and tap water made the largest contributions to both the GM and the 90th quantile of BLLs. The concentration of lead in dust was positively correlated with all quantiles of BLLs even at low concentrations. Lead concentrations in tap water above 5 μg/L were also positively correlated with the GM, 75th and 90th quantiles of BLLs in children drinking tap water. Preventative actions must target household settled dust and tap water to reduce the BLLs of children in France. The use of traditional cosmetics should be avoided, whereas ceramic cookware should be limited to decorative purposes. Copyright © 2014 Elsevier Ltd. All rights reserved.
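The geometric mean used as the central summary of blood lead levels is the exponential of the mean log concentration, the standard summary for right-skewed exposure data:

```python
import math

def geometric_mean(xs):
    """Geometric mean of positive values: exp of the average log.
    Appropriate for right-skewed, positive-valued exposure measurements."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))
```

For instance, the geometric mean of 1 and 100 is 10, whereas the arithmetic mean is 50.5; skewed high values pull the arithmetic mean up far more.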

  16. Superquantile Regression: Theory, Algorithms, and Applications

    DTIC Science & Technology

    2014-12-01

[No abstract available. The record text consists of extraction fragments from the thesis: a scatterplot-matrix caption for the stack loss data (Example C) and coefficient tables comparing least-squares and quantile (α = 0.25) regression fits.]

  17. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    PubMed

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
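As an illustration of the kind of computation such a power analysis involves, the power of a two-sided test of H0: ρ = 0 can be approximated with Fisher's z transformation. This is a textbook normal approximation, not G*Power's own routine:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlation_test_power(rho, n):
    """Approximate power of a two-sided 5%-level test of rho = 0,
    using the Fisher z approximation atanh(r) ~ N(atanh(rho), 1/(n-3))."""
    z = 0.5 * math.log((1 + rho) / (1 - rho)) * math.sqrt(n - 3)
    z_crit = 1.959963984540054  # two-sided 5% critical value
    return normal_cdf(z - z_crit) + normal_cdf(-z - z_crit)
```

At ρ = 0, the formula returns the significance level (0.05) as expected; power then climbs toward 1 as ρ or n grows.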

  18. Removing technical variability in RNA-seq data using conditional quantile normalization.

    PubMed

    Hansen, Kasper D; Irizarry, Rafael A; Wu, Zhijin

    2012-04-01

    The ability to measure gene expression on a genome-wide scale is one of the most promising accomplishments in molecular biology. Microarrays, the technology that first permitted this, were riddled with problems due to unwanted sources of variability. Many of these problems are now mitigated, after a decade's worth of statistical methodology development. The recently developed RNA sequencing (RNA-seq) technology has generated much excitement in part due to claims of reduced variability in comparison to microarrays. However, we show that RNA-seq data demonstrate unwanted and obscuring variability similar to what was first observed in microarrays. In particular, we find guanine-cytosine content (GC-content) has a strong sample-specific effect on gene expression measurements that, if left uncorrected, leads to false positives in downstream results. We also report on commonly observed data distortions that demonstrate the need for data normalization. Here, we describe a statistical methodology that improves precision by 42% without loss of accuracy. Our resulting conditional quantile normalization algorithm combines robust generalized regression to remove systematic bias introduced by deterministic features such as GC-content and quantile normalization to correct for global distortions.

  19. Assessing the impact of local meteorological variables on surface ozone in Hong Kong during 2000-2015 using quantile and multiple line regression models

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Fan, Shaojia; Guo, Hai; Gao, Bo; Sun, Jiaren; Chen, Laiguo

    2016-11-01

The quantile regression (QR) method has been increasingly introduced to atmospheric environmental studies to explore the non-linear relationship between local meteorological conditions and ozone mixing ratios. In this study, we applied QR for the first time, together with multiple linear regression (MLR), to analyze the dominant meteorological parameters influencing the mean, 10th percentile, 90th percentile and 99th percentile of maximum daily 8-h average (MDA8) ozone concentrations in 2000-2015 in Hong Kong. Dominance analysis (DA) was used to assess the relative importance of meteorological variables in the regression models. Results showed that the MLR models worked better at suburban and rural sites than at urban sites, and worked better in winter than in summer. QR models performed better in summer for the 99th and 90th percentiles and better in autumn and winter for the 10th percentile; they also performed better in suburban and rural areas for the 10th percentile. The top three dominant variables associated with MDA8 ozone concentrations, which changed with season and region, were drawn from six meteorological parameters: boundary layer height, humidity, wind direction, surface solar radiation, total cloud cover and sea level pressure. Temperature rarely emerged as a significant variable in any season, which could partly explain the peak of monthly average ozone concentrations in October in Hong Kong. We also found that the effect of solar radiation was enhanced during extreme ozone pollution episodes (i.e., the 99th percentile). Finally, meteorological effects on MDA8 ozone showed no significant change before and after the 2010 Asian Games.
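The MDA8 metric analyzed above is the maximum over all 8-hour running means within a day's hourly ozone series; a direct sketch:

```python
def mda8(hourly_ozone):
    """Maximum daily 8-h average ozone: the largest of all 8-hour running
    means in a day's hourly series (requires at least 8 values)."""
    means = [sum(hourly_ozone[i:i + 8]) / 8.0
             for i in range(len(hourly_ozone) - 7)]
    return max(means)
```

On a day where hourly ozone rises steadily, the maximum window is the final one; on a flat day, MDA8 equals the constant level.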

  20. Asymptotics of nonparametric L-1 regression models with dependent data

    PubMed Central

    ZHAO, ZHIBIAO; WEI, YING; LIN, DENNIS K.J.

    2013-01-01

    We investigate asymptotic properties of least-absolute-deviation or median quantile estimates of the location and scale functions in nonparametric regression models with dependent data from multiple subjects. Under a general dependence structure that allows for longitudinal data and some spatially correlated data, we establish uniform Bahadur representations for the proposed median quantile estimates. The obtained Bahadur representations provide deep insights into the asymptotic behavior of the estimates. Our main theoretical development is based on studying the modulus of continuity of kernel weighted empirical process through a coupling argument. Progesterone data is used for an illustration. PMID:24955016
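The key fact behind least-absolute-deviation ("median") regression is that the sample median minimizes the sum of absolute deviations, just as the mean minimizes squared deviations. A quick numerical check of that fact (illustrative only; the paper concerns the nonparametric, dependent-data case):

```python
def sum_abs_dev(xs, c):
    """L1 loss of the constant fit c."""
    return sum(abs(x - c) for x in xs)

def sample_median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

data = [1.0, 2.0, 3.0, 10.0, 50.0]
m = sample_median(data)
# the median's L1 loss is no larger than any candidate's on a fine grid
assert all(sum_abs_dev(data, m) <= sum_abs_dev(data, c)
           for c in [x / 10 for x in range(0, 600)])
```

Note how the outlier 50.0 barely moves the median (3.0), while it would drag the mean to 13.2; this robustness is what motivates L1 location and scale estimation.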

  1. Performance and robustness of probabilistic river forecasts computed with quantile regression based on multiple independent variables in the North Central USA

    NASA Astrophysics Data System (ADS)

    Hoss, F.; Fischbeck, P. S.

    2014-10-01

This study further develops the method of quantile regression (QR) to predict exceedance probabilities of flood stages by post-processing forecasts. Using data from the 82 river gages for which the National Weather Service's North Central River Forecast Center issues daily forecasts, this is the first application of QR to river gages in the United States. Archived forecasts for lead times of up to six days from 2001-2013 were analyzed. Earlier implementations of QR used the forecast itself as the only independent variable (Weerts et al., 2011; López López et al., 2014). This study adds the rise rate of the river stage over the last 24 and 48 h and the forecast errors 24 and 48 h earlier to the QR model. Including those four variables significantly improved the forecasts, as measured by the Brier Skill Score (BSS). Mainly, the resolution increases, as the original QR implementation already delivered high reliability. Combining the forecast with the other four variables results in much less favorable BSSs. Lastly, the forecast performance does not depend on the size of the training dataset, but on the year, river gage, lead time and event threshold being forecast. We find that each event threshold requires a separate model configuration, or at least calibration.
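The Brier Skill Score used to evaluate the post-processed exceedance forecasts compares a forecast's Brier score against a reference forecast (1 = perfect, 0 = no better than the reference, negative = worse):

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes, ref_probs):
    """1 - BS/BS_ref: positive values mean improvement over the reference."""
    return 1.0 - brier_score(probs, outcomes) / brier_score(ref_probs, outcomes)
```

In a setting like the study's, `probs` would be the QR-derived exceedance probabilities and `ref_probs` a climatological or unconditional baseline (names here are illustrative).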

  2. Parameter Heterogeneity In Breast Cancer Cost Regressions – Evidence From Five European Countries

    PubMed Central

    Banks, Helen; Campbell, Harry; Douglas, Anne; Fletcher, Eilidh; McCallum, Alison; Moger, Tron Anders; Peltola, Mikko; Sveréus, Sofia; Wild, Sarah; Williams, Linda J.; Forbes, John

    2015-01-01

Abstract We investigate parameter heterogeneity in breast cancer 1‐year cumulative hospital costs across five European countries as part of the EuroHOPE project. The paper aims to explore whether conditional mean effects provide a suitable representation of the national variation in hospital costs. A cohort of patients with a primary diagnosis of invasive breast cancer (ICD‐9 code 174 and ICD‐10 C50 codes) is derived using routinely collected individual breast cancer data from Finland, the metropolitan area of Turin (Italy), Norway, Scotland and Sweden. Conditional mean effects are estimated by ordinary least squares for each country, and quantile regressions are used to explore heterogeneity across the conditional quantile distribution. Point estimates based on conditional mean effects provide a good approximation of treatment response for some key demographic and diagnostic specific variables (e.g. age and ICD‐10 diagnosis) across the conditional quantile distribution. For many policy variables of interest, however, there is considerable evidence of parameter heterogeneity that is concealed if decisions are based solely on conditional mean results. The use of quantile regression methods reinforces the need to look beyond an average effect, given the greater recognition that breast cancer is a complex disease reflecting patient heterogeneity. © 2015 The Authors. Health Economics Published by John Wiley & Sons Ltd. PMID:26633866

  3. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
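The generalized Kaplan-Meier estimator used as the benchmark above builds on the classical (unconditional) product-limit estimator, which can be sketched directly; the conditional version additionally kernel-weights observations by the covariate, which is omitted here:

```python
def kaplan_meier(times, events):
    """Classical Kaplan-Meier product-limit estimator.

    times: observation times; events: 1 if the event occurred, 0 if
    right-censored. Returns (time, survival probability) pairs at each
    distinct observed event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        n_here = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            deaths += data[i][1]
            n_here += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk       # product-limit step
            out.append((t, surv))
        n_at_risk -= n_here                        # censored and dead leave risk set
    return out
```

Conditional quantiles are then read off by inverting the estimated survival (or distribution) curve at the desired level.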

  4. Intersection of All Top Quantile

    EPA Pesticide Factsheets

This layer combines the Top quantiles of the CES, CEVA, and EJSM layers so that viewers can see the overlap of "hot spots" for each method. This layer was created by James Sadd of Occidental College of Los Angeles.

  5. Association between Physical Activity and Teacher-Reported Academic Performance among Fifth-Graders in Shanghai: A Quantile Regression

    PubMed Central

    Zhang, Yunting; Zhang, Donglan; Jiang, Yanrui; Sun, Wanqi; Wang, Yan; Chen, Wenjuan; Li, Shenghui; Shi, Lu; Shen, Xiaoming; Zhang, Jun; Jiang, Fan

    2015-01-01

Introduction A growing body of literature reveals the causal pathways between physical activity and brain function, indicating that increasing physical activity among children could improve rather than undermine their scholastic performance. However, past studies of physical activity and scholastic performance among students often relied on parent-reported grade information, and did not explore whether the association varied among different levels of scholastic performance. Our study among fifth-grade students in Shanghai sought to determine the association between regular physical activity and teacher-reported academic performance scores (APS), with special attention to the differential associational patterns across different strata of scholastic performance. Method A total of 2,225 students were chosen through stratified random sampling, and a complete sample of 1,470 observations was used for analysis. We used quantile regression analysis to explore whether the association between physical activity and teacher-reported APS differs across the distribution of APS. Results Minimal-intensity physical activity such as walking was positively associated with academic performance scores (β = 0.13, SE = 0.04). The magnitude of the association tends to be larger at the lower end of the APS distribution (β = 0.24, SE = 0.08) than at the higher end (β = 0.00, SE = 0.07). Conclusion Based upon teacher-reported student academic performance, there is no evidence that spending time on frequent physical activity would undermine students' APS. Those students who are below average in their academic performance could be worse off academically if they give up minimal-intensity physical activity. Therefore, cutting physical activity time in schools could hurt scholastic performance among those students who were already at higher risk of dropping out due to inadequate APS. PMID:25774525

  6. Development of Growth Charts of Pakistani Children Aged 4-15 Years Using Quantile Regression: A Cross-sectional Study

    PubMed Central

    Khan, Nazeer; Siddiqui, Junaid S; Baig-Ansari, Naila

    2018-01-01

Background Growth charts are essential tools used by pediatricians as well as public health researchers in assessing and monitoring the well-being of pediatric populations. Development of these growth charts, especially for children above five years of age, is challenging and requires current anthropometric data and advanced statistical analysis. These growth charts are generally presented as a series of smooth centile curves. A number of modeling approaches are available for generating growth charts, and applying these to national datasets is important for generating country-specific reference growth charts. Objective To demonstrate that quantile regression (QR) is a viable statistical approach for constructing growth reference charts, and to assess the applicability of the World Health Organization (WHO) 2007 growth standards to a large Pakistani population of school-going children. Methodology This is a secondary data analysis using anthropometric data of 9,515 students from a Pakistani survey conducted between 2007 and 2014 in four cities of Pakistan. Growth reference charts were created using QR as well as the LMS (Box-Cox transformation (L), the median (M), and the generalized coefficient of variation (S)) method and then compared with WHO 2007 growth standards. Results Centile values estimated by the LMS method and the QR procedure had few differences. The centile values obtained from the QR procedure for BMI-for-age, weight-for-age, and height-for-age of Pakistani children were lower than the standard WHO 2007 centiles. Conclusion QR should be considered as an alternative method for developing growth charts, for its simplicity and lack of any need to transform data. WHO 2007 standards are not suitable for Pakistani children. PMID:29632748

  7. Development of Growth Charts of Pakistani Children Aged 4-15 Years Using Quantile Regression: A Cross-sectional Study.

    PubMed

    Iftikhar, Sundus; Khan, Nazeer; Siddiqui, Junaid S; Baig-Ansari, Naila

    2018-02-02

Background Growth charts are essential tools used by pediatricians as well as public health researchers in assessing and monitoring the well-being of pediatric populations. Development of these growth charts, especially for children above five years of age, is challenging and requires current anthropometric data and advanced statistical analysis. These growth charts are generally presented as a series of smooth centile curves. A number of modeling approaches are available for generating growth charts, and applying these to national datasets is important for generating country-specific reference growth charts. Objective To demonstrate that quantile regression (QR) is a viable statistical approach for constructing growth reference charts, and to assess the applicability of the World Health Organization (WHO) 2007 growth standards to a large Pakistani population of school-going children. Methodology This is a secondary data analysis using anthropometric data of 9,515 students from a Pakistani survey conducted between 2007 and 2014 in four cities of Pakistan. Growth reference charts were created using QR as well as the LMS (Box-Cox transformation (L), the median (M), and the generalized coefficient of variation (S)) method and then compared with WHO 2007 growth standards. Results Centile values estimated by the LMS method and the QR procedure had few differences. The centile values obtained from the QR procedure for BMI-for-age, weight-for-age, and height-for-age of Pakistani children were lower than the standard WHO 2007 centiles. Conclusion QR should be considered as an alternative method for developing growth charts, for its simplicity and lack of any need to transform data. WHO 2007 standards are not suitable for Pakistani children.

  8. Non-inferiority tests for anti-infective drugs using control group quantiles.

    PubMed

    Fay, Michael P; Follmann, Dean A

    2016-12-01

In testing for non-inferiority of anti-infective drugs, the primary endpoint is often the difference in the proportion of failures between the test and control group at a landmark time. The landmark time is chosen to approximately correspond to the qth historic quantile of the control group, and the non-inferiority margin is selected to be reasonable for the target level q. For designing these studies, a troubling issue is that the landmark time must be pre-specified, but there is no guarantee that the proportion of control failures at the landmark time will be close to the target level q. If the landmark time is far from the target control quantile, then the pre-specified non-inferiority margin may no longer be reasonable. Exact variable margin tests have been developed by Röhmel and Kieser to address this problem, but these tests can have poor power if the observed control failure rate at the landmark time is far from its historic value. We develop a new variable margin non-inferiority test where we continue sampling until a pre-specified proportion of failures, q, has occurred in the control group, where q is the target quantile level. The test does not require any assumptions on the failure time distributions, and hence, no knowledge of the true [Formula: see text] control quantile for the study is needed. Our new test is exact and has power comparable to (or greater than) its competitors when the true control quantile from the study equals (or differs moderately from) its historic value. Our nivm R package performs the test and gives confidence intervals on the difference in failure rates at the true target control quantile. The tests can be applied to time to cure or other numeric variables as well.
A substantial proportion of new anti-infective drugs being developed use non-inferiority tests in their development, and typically, a pre-specified landmark time and its associated difference margin are set at the design stage to match a specific target control quantile.

  9. Intersection of Screening Methods High Quantile

    EPA Pesticide Factsheets

This layer combines the high quantiles of the CES, CEVA, and EJSM layers so that viewers can see the overlap of "hot spots" for each method. This layer was created by James Sadd of Occidental College of Los Angeles.

  10. Using quantile regression to examine the effects of inequality across the mortality distribution in the U.S. counties

    PubMed Central

    Yang, Tse-Chuan; Chen, Vivian Yi-Ju; Shoff, Carla; Matthews, Stephen A.

    2012-01-01

    The U.S. has experienced a resurgence of income inequality in the past decades. The evidence regarding the mortality implications of this phenomenon has been mixed. This study employs a rarely used method in mortality research, quantile regression (QR), to provide insight into the ongoing debate of whether income inequality is a determinant of mortality and to investigate the varying relationship between inequality and mortality throughout the mortality distribution. Analyzing a U.S. dataset where the five-year (1998–2002) average mortality rates were combined with other county-level covariates, we found that the association between inequality and mortality was not constant throughout the mortality distribution and the impact of inequality on mortality steadily increased until the 80th percentile. When accounting for all potential confounders, inequality was significantly and positively related to mortality; however, this inequality–mortality relationship did not hold across the mortality distribution. A series of Wald tests confirmed this varying inequality–mortality relationship, especially between the lower and upper tails. The large variation in the estimated coefficients of the Gini index suggested that inequality had the greatest influence on those counties with a mortality rate of roughly 9.95 deaths per 1000 population (80th percentile) compared to any other counties. Furthermore, our results suggest that the traditional analytic methods that focus on mean or median value of the dependent variable can be, at most, applied to a narrow 20 percent of observations. This study demonstrates the value of QR. Our findings provide some insight as to why the existing evidence for the inequality–mortality relationship is mixed and suggest that analytical issues may play a role in clarifying whether inequality is a robust determinant of population health. PMID:22497847
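The Gini index used above as the county-level inequality measure has a standard closed form from sorted values (a rank-weighted sum scaled by the mean); a minimal sketch:

```python
def gini(values):
    """Gini coefficient of a non-negative sample.
    0 = perfect equality; values near 1 = extreme concentration."""
    s = sorted(values)
    n = len(s)
    cum = sum((i + 1) * x for i, x in enumerate(s))  # rank-weighted sum
    return 2.0 * cum / (n * sum(s)) - (n + 1.0) / n
```

A county where everyone has equal income scores 0; one where a single person holds all income scores close to 1 (exactly (n-1)/n for a finite sample).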

  11. Data quantile-quantile plots: quantifying the time evolution of space climatology

    NASA Astrophysics Data System (ADS)

    Tindale, Elizabeth; Chapman, Sandra

    2017-04-01

The solar wind is inherently variable across a wide range of spatio-temporal scales; embedded in the flow are the signatures of distinct non-linear physical processes, from evolving turbulence to the dynamical solar corona. In-situ satellite observations of the solar wind magnetic field and velocity are available at minute and sub-minute time resolution and now extend over several solar cycles. Each solar cycle is unique, and the space climatology challenge is to quantify how solar wind variability changes within, and across, each distinct solar cycle, and how this in turn drives space weather at Earth. We will demonstrate a novel statistical method, that of data-data quantile-quantile (DQQ) plots, which quantifies how the underlying statistical distribution of a given observable changes in time. Importantly, this method does not require any assumptions concerning the underlying functional form of the distribution and can identify multi-component behaviour that is changing in time. It can be used to determine when a sub-range of a given observable is undergoing a change in statistical distribution, or when only the moments of the distribution are changing while the functional form of the underlying distribution remains the same. The method is quite general; for this application we use data from the WIND satellite to compare the solar wind across the minima and maxima of solar cycles 23 and 24 [1], and to examine how these changes are manifest in parameters that quantify coupling to the Earth's magnetosphere. [1] Tindale, E., and S.C. Chapman (2016), Geophys. Res. Lett., 43(11), doi: 10.1002/2016GL068920.
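A data-data quantile-quantile comparison needs no distributional assumptions: one simply matches empirical quantiles of two samples. The toy sketch below uses synthetic "epochs" (not WIND data) to show that when only location and scale change, the DQQ points fall on a straight line.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two stand-in "epochs" of an observable: same functional form (lognormal),
# but the second is rescaled. Names and distributions are illustrative only.
epoch_a = rng.lognormal(mean=1.0, sigma=0.5, size=2000)
epoch_b = 1.5 * rng.lognormal(mean=1.0, sigma=0.5, size=2000)

# A data-data Q-Q "plot" is just matched empirical quantiles of the samples.
probs = np.linspace(0.01, 0.99, 99)
qa = np.quantile(epoch_a, probs)
qb = np.quantile(epoch_b, probs)

# If only location/scale change, the DQQ points lie on a straight line;
# fitting one recovers the scale factor, and residual scatter would flag
# a change in functional form.
slope, intercept = np.polyfit(qa, qb, 1)
print(slope, intercept)
```

A curved or kinked DQQ pattern, by contrast, would indicate that the underlying distribution itself (or one of its components) has changed.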

  12. Topological and canonical kriging for design flood prediction in ungauged catchments: an improvement over a traditional regional regression approach?

    USGS Publications Warehouse

    Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.

    2013-01-01

    In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. Literature reports successful applications of two techniques, canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks taking both the catchment area and nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimate flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.

  13. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
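The distinction matters computationally: a linear combination of independent normal random variables is itself normal, whereas a finite mixture of normal distributions is not, and its quantile function has no closed form. Since the mixture CDF is monotone, quantiles can be obtained by root-finding; a minimal sketch with illustrative mixture parameters:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Illustrative two-component normal mixture (weights sum to one).
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 1.0])
sds = np.array([1.0, 0.5])

def mix_cdf(x):
    # CDF of the mixture: weighted sum of component normal CDFs.
    return float(np.sum(weights * norm.cdf(x, loc=means, scale=sds)))

def mix_quantile(p):
    # The mixture CDF is strictly increasing, so bracket and solve F(x) = p.
    return brentq(lambda x: mix_cdf(x) - p, -20.0, 20.0)

median = mix_quantile(0.5)
print(median)
```

The same root-finding approach works for any number of components, and for mixtures of other continuous distributions with computable CDFs.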

  14. Multi-element stochastic spectral projection for high quantile estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Jordan, E-mail: jordan.ko@mac.com; Garnier, Josselin

    2013-06-15

We investigate quantile estimation by a multi-element generalized Polynomial Chaos (gPC) metamodel, where the exact numerical model is approximated by complementary metamodels in overlapping domains that mimic the model’s exact response. The gPC metamodel is constructed by the non-intrusive stochastic spectral projection approach, and function evaluation on the gPC metamodel can be considered essentially free. Thus, a large number of Monte Carlo samples from the metamodel can be used to estimate the α-quantile, for moderate values of α. As the gPC metamodel is an expansion about the means of the inputs, its accuracy may worsen away from these mean values, where the extreme events may occur. By increasing the approximation accuracy of the metamodel we may eventually improve the accuracy of quantile estimation, but this is very expensive. A multi-element approach is therefore proposed, combining a global metamodel in the standard normal space with supplementary local metamodels constructed in bounded domains about the design points corresponding to the extreme events. To improve the accuracy and to minimize the sampling cost, sparse-tensor and anisotropic-tensor quadratures are tested in addition to the full-tensor Gauss quadrature in the construction of local metamodels; different bounds of the gPC expansion are also examined. The global and local metamodels are combined in the multi-element gPC (MEgPC) approach, and it is shown that MEgPC can be more accurate than Monte Carlo or importance sampling methods for high quantile estimation for input dimensions roughly below N = 8, a limit that is very much case- and α-dependent.

  15. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
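The underlying idea, fitting a smooth monotone interpolant to the empirical quantile function and then sampling by the inverse-CDF method, can be sketched as follows. PCHIP interpolation stands in here for the paper's B-spline and rational-spline fits, and the gamma "data" are simulated:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(2)
# Skewed "data" sample whose quantile function we want to approximate.
sample = rng.gamma(shape=2.0, scale=1.0, size=5000)

# Fit a smooth, shape-preserving (monotone) interpolant to the empirical
# quantile function over a grid of probabilities.
probs = np.linspace(0.001, 0.999, 200)
qhat = PchipInterpolator(probs, np.quantile(sample, probs))

# Inverse-CDF sampling: push uniforms through the fitted quantile function.
u = rng.uniform(0.001, 0.999, size=5000)
new_sample = qhat(u)
print(new_sample.mean())
```

Because evaluating the interpolant is cheap, generating large simulation samples this way avoids repeated calls to an expensive analytic inverse CDF.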

  16. Robust small area estimation of poverty indicators using M-quantile approach (Case study: Sub-district level in Bogor district)

    NASA Astrophysics Data System (ADS)

    Girinoto, Sadik, Kusman; Indahwati

    2017-03-01

The National Socio-Economic Survey samples are designed to produce estimates of parameters for planned domains (provinces and districts). The estimation of unplanned domains (sub-districts and villages) is limited in its ability to obtain reliable direct estimates. One possible solution to overcome this problem is to employ small area estimation techniques. The popular choice for small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easily allow for outlier-robust estimation. As an alternative, the M-quantile regression approach to small area estimation is based on modeling specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It provides outlier-robust estimation through an influence function of M-estimator type and needs no strong distributional assumptions. In this paper, the aim is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of the incidence and intensity of poverty within Bogor district. We also compare the results with direct estimates. The results showed that the framework may be preferable when the direct estimate shows no incidence of poverty at all in a small area.

  17. Explaining Variation in Instructional Time: An Application of Quantile Regression

    ERIC Educational Resources Information Center

    Corey, Douglas Lyman; Phelps, Geoffrey; Ball, Deborah Loewenberg; Demonte, Jenny; Harrison, Delena

    2012-01-01

    This research is conducted in the context of a large-scale study of three nationally disseminated comprehensive school reform projects (CSRs) and examines how school- and classroom-level factors contribute to variation in instructional time in English language arts and mathematics. When using mean-based OLS regression techniques such as…

  18. Bayesian estimation of extreme flood quantiles using a rainfall-runoff model and a stochastic daily rainfall generator

    NASA Astrophysics Data System (ADS)

    Costa, Veber; Fernandes, Wilson

    2017-11-01

Extreme flood estimation has been a key research topic in hydrological sciences. Reliable estimates of such events are necessary as structures for flood conveyance are continuously evolving in size and complexity and, as a result, their failure-associated hazards become more and more pronounced. Due to this fact, several estimation techniques intended to improve flood frequency analysis and reduce uncertainty in extreme quantile estimation have been addressed in the literature in the last decades. In this paper, we develop a Bayesian framework for the indirect estimation of extreme flood quantiles from rainfall-runoff models. In the proposed approach, an ensemble of long daily rainfall series is simulated with a stochastic generator, which models extreme rainfall amounts with an upper-bounded distribution function, namely, the 4-parameter lognormal model. The rationale behind the generation model is that physical limits for rainfall amounts, and consequently for floods, exist, and that by imposing an appropriate upper bound on the probabilistic model, more plausible estimates can be obtained for those rainfall quantiles with very low exceedance probabilities. Daily rainfall time series are converted into streamflows by routing each realization of the synthetic ensemble through a conceptual hydrologic model, the Rio Grande rainfall-runoff model. Calibration of parameters is performed through a nonlinear regression model, by means of the specification of a statistical model for the residuals that is able to accommodate autocorrelation, heteroscedasticity and nonnormality. By combining the outlined steps in a Bayesian structure of analysis, one is able to properly summarize the resulting uncertainty and estimate more accurate credible intervals for a set of flood quantiles of interest. The method for indirect estimation of extreme floods was applied to the American River catchment, at the Folsom dam, in the state of California, USA. Results show that most floods

  19. Matching a Distribution by Matching Quantiles Estimation

    PubMed Central

    Sgouropoulos, Nikolaos; Yao, Qiwei; Yastremiz, Claudia

    2015-01-01

Motivated by the problem of selecting representative portfolios for backtesting counterparty credit risks, we propose a matching quantiles estimation (MQE) method for matching a target distribution by that of a linear combination of a set of random variables. An iterative procedure based on ordinary least-squares estimation (OLS) is proposed to compute MQE. MQE can be easily modified by adding a LASSO penalty term if a sparse representation is desired, or by restricting the matching to a certain range of quantiles in order to match only part of the target distribution. The convergence of the algorithm and the asymptotic properties of the estimation, both with and without LASSO, are established. A measure and an associated statistical test are proposed to assess the goodness-of-match. The finite sample properties are illustrated by simulation. An application to selecting a counterparty representative portfolio with a real dataset is reported. The proposed MQE also finds applications in portfolio tracking, which demonstrates the usefulness of combining MQE with LASSO. PMID:26692592
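The iterative OLS procedure can be sketched on toy data: at each step the sorted target sample is paired with the current fitted values in rank order, then the weights are refit by least squares. Convergence checks, the LASSO variant, and the formal goodness-of-match test are omitted; all names and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy MQE sketch: choose weights w so that the distribution of X @ w
# matches a target sample y_target in quantiles.
n, p = 1000, 3
X = rng.normal(size=(n, p))
w_true = np.array([1.0, -0.5, 2.0])
y_target = np.sort(X @ w_true + 0.1 * rng.normal(size=n))  # sorted target

w = np.zeros(p)
for _ in range(50):
    fit = X @ w
    # Pair the i-th smallest fitted value with the i-th smallest target
    # value (comonotone matching), then solve an ordinary least-squares step.
    y_matched = np.empty(n)
    y_matched[np.argsort(fit)] = y_target
    w_new = np.linalg.lstsq(X, y_matched, rcond=None)[0]
    if np.allclose(w_new, w, atol=1e-8):
        break
    w = w_new

# Goodness-of-match proxy: mean absolute gap between matched quantiles.
gap = np.mean(np.abs(np.sort(X @ w) - y_target))
print(w, gap)
```

Note that quantile matching only pins down the distribution of `X @ w`, not a unique `w`; for Gaussian covariates many weight vectors achieve an equally good match.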

  20. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there has been little study of the confidence intervals, which indicate the prediction accuracy, for the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are few differences in the estimated quantiles between ML and PWM, while MOM shows distinct differences.

  1. Comparison of different hydrological similarity measures to estimate flow quantiles

    NASA Astrophysics Data System (ADS)

    Rianna, M.; Ridolfi, E.; Napolitano, F.

    2017-07-01

This paper aims to evaluate the influence of hydrological similarity measures on the definition of homogeneous regions. To this end, several attribute sets have been analyzed in the context of the Region of Influence (ROI) procedure. Several combinations of geomorphological, climatological, and geographical characteristics are also used to cluster potentially homogeneous regions. To verify the goodness of the resulting pooled sites, homogeneity tests are carried out. Through a Monte Carlo simulation and a jack-knife procedure, flow quantiles are estimated for the regions effectively resulting as homogeneous. The analyses are performed in both the so-called gauged and ungauged scenarios to analyze the effect of hydrological similarity measures on flow quantile estimation.

  2. Restoration of Monotonicity Respecting in Dynamic Regression

    PubMed Central

    Huang, Yijian

    2017-01-01

Dynamic regression models, including the quantile regression model and Aalen’s additive hazards model, are widely adopted to investigate evolving covariate effects. Yet lack of monotonicity respecting with standard estimation procedures remains an outstanding issue. Advances have recently been made, but none provides a complete resolution. In this article, we propose a novel adaptive interpolation method to restore monotonicity respecting, by successively identifying and then interpolating nearest monotonicity-respecting points of an original estimator. Under mild regularity conditions, the resulting regression coefficient estimator is shown to be asymptotically equivalent to the original. Our numerical studies have demonstrated that the proposed estimator is much smoother and may have better finite-sample efficiency than the original, as well as, where available (only in special cases), other competing monotonicity-respecting estimators. Illustration with a clinical study is provided. PMID:29430068

  3. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution, and then show asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency, more precisely the n^(1/2-d)-consistency, of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimensional regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the

  4. Regional flow duration curves: Geostatistical techniques versus multivariate regression

    USGS Publications Warehouse

    Pugliese, Alessio; Farmer, William H.; Castellarin, Attilio; Archfield, Stacey A.; Vogel, Richard M.

    2016-01-01

A period-of-record flow duration curve (FDC) represents the relationship between the magnitude and frequency of daily streamflows. Prediction of FDCs is of great importance for locations characterized by sparse or missing streamflow observations. We present a detailed comparison of two methods which are capable of predicting an FDC at ungauged basins: (1) an adaptation of the geostatistical method, Top-kriging, employing a linear weighted average of dimensionless empirical FDCs, standardised with a reference streamflow value; and (2) regional multiple linear regression of streamflow quantiles, perhaps the most common method for the prediction of FDCs at ungauged sites. In particular, Top-kriging relies on a metric for expressing the similarity between catchments computed as the negative deviation of the FDC from a reference streamflow value, which we termed total negative deviation (TND). Comparisons of these two methods are made in 182 largely unregulated river catchments in the southeastern U.S. using a three-fold cross-validation algorithm. Our results reveal that the two methods perform similarly throughout flow-regimes, with average Nash-Sutcliffe efficiencies of 0.566 and 0.662 (0.883 and 0.829 on log-transformed quantiles) for the geostatistical and the linear regression models, respectively. Differences in the reproduction of FDCs occurred mostly for low flows with exceedance probability (i.e. duration) above 0.98.

  5. Random regression analyses using B-splines to model growth of Australian Angus cattle

    PubMed Central

    Meyer, Karin

    2005-01-01

Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between the level of detail of the model, the number of parameters to be estimated, the plausibility of results, and fit, measured as residual mean square error. PMID:16093011

  6. Stochastic variability in stress, sleep duration, and sleep quality across the distribution of body mass index: insights from quantile regression.

    PubMed

    Yang, Tse-Chuan; Matthews, Stephen A; Chen, Vivian Y-J

    2014-04-01

Obesity has become a problem in the USA, and identifying modifiable factors at the individual level may help to address this public health concern. A burgeoning literature has suggested that sleep and stress may be associated with obesity; however, little is known about whether these two factors moderate each other, and even less is known about whether their impacts on obesity differ by gender. This study investigates whether sleep and stress are associated with body mass index (BMI), explores whether the combination of stress and sleep is also related to BMI, and demonstrates how these associations vary across the distribution of BMI values. We analyze data from 3,318 men and 6,689 women in the Philadelphia area using quantile regression (QR) to evaluate the relationships between sleep, stress, and obesity by gender. Our substantive findings include: (1) high and/or extreme stress was related to roughly a 1.2-unit increase in BMI after accounting for other covariates; (2) the pathways linking sleep and BMI differed by gender, with BMI for men increasing by 0.77-1 units with reduced sleep duration and BMI for women declining by 0.12 units with a one-unit increase in sleep quality; (3) stress- and sleep-related variables were confounded, but there was little evidence of moderation between the two; (4) the QR results demonstrate that the association between high and/or extreme stress and BMI varied stochastically across the distribution of BMI values, with an upward trend, suggesting that stress played a more important role among adults with higher BMI (i.e., BMI > 26 for both genders); and (5) the QR plots of sleep-related variables show similar patterns, with stronger effects on BMI at the upper end of the BMI distribution. Our findings suggest that sleep and stress are two seemingly independent predictors of BMI and that their relationships with BMI are not constant across the BMI distribution.

  7. Asymmetric impact of rainfall on India's food grain production: evidence from quantile autoregressive distributed lag model

    NASA Astrophysics Data System (ADS)

    Pal, Debdatta; Mitra, Subrata Kumar

    2018-01-01

This study used a quantile autoregressive distributed lag (QARDL) model to capture the asymmetric impact of rainfall on food production in India. It was found that the coefficient corresponding to rainfall in the QARDL increased until the 75th quantile and started decreasing thereafter, though it remained in positive territory. Another interesting finding is that at the 90th quantile and above, the coefficient of rainfall, though it remained positive, was not statistically significant; therefore, the benefit of high rainfall for crop production is not conclusive. However, the impact of other determinants, such as fertilizer and pesticide consumption, is quite uniform over the whole range of the distribution of food grain production.

  8. Numerical analysis of the accuracy of bivariate quantile distributions utilizing copulas compared to the GUM supplement 2 for oil pressure balance uncertainties

    NASA Astrophysics Data System (ADS)

    Ramnath, Vishal

    2017-11-01

In the field of pressure metrology the effective area is Ae = A0 (1 + λP), where A0 is the zero-pressure area and λ is the distortion coefficient, and the conventional practice is to construct univariate probability density functions (PDFs) for A0 and λ. As a result, analytical generalized non-Gaussian bivariate joint PDFs have not featured prominently in pressure metrology. Recently, extended lambda distribution based quantile functions have been successfully utilized for summarizing univariate arbitrary PDF distributions of gas pressure balances. Motivated by this development, we investigate the feasibility and utility of extending and applying quantile functions to systems which naturally exhibit bivariate PDFs. Our approach is to utilize the GUM Supplement 1 methodology to generate Monte Carlo based multivariate uncertainty data for an oil based pressure balance laboratory standard that is used to generate known high pressures, which are in turn cross-floated against another pressure balance transfer standard in order to deduce the transfer standard's respective area. We then numerically analyse the uncertainty data by formulating and constructing an approximate bivariate quantile distribution that directly couples A0 and λ, in order to compare and contrast its accuracy with an exact GUM Supplement 2 based uncertainty quantification analysis.

  9. What We Have Learned from the Recent Meta-analyses on Diagnostic Methods for Atherosclerotic Plaque Regression.

    PubMed

    Biondi-Zoccai, Giuseppe; Mastrangeli, Simona; Romagnoli, Enrico; Peruzzi, Mariangela; Frati, Giacomo; Roever, Leonardo; Giordano, Arturo

    2018-01-17

    Atherosclerosis has major morbidity and mortality implications globally. While it has often been considered an irreversible degenerative process, recent evidence provides compelling proof that atherosclerosis can be reversed. Plaque regression is however difficult to appraise and quantify, with competing diagnostic methods available. Given the potential of evidence synthesis to provide clinical guidance, we aimed to review recent meta-analyses on diagnostic methods for atherosclerotic plaque regression. We identified 8 meta-analyses published between 2015 and 2017, including 79 studies and 14,442 patients, followed for a median of 12 months. They reported on atherosclerotic plaque regression appraised with carotid duplex ultrasound, coronary computed tomography, carotid magnetic resonance, coronary intravascular ultrasound, and coronary optical coherence tomography. Overall, all meta-analyses showed significant atherosclerotic plaque regression with lipid-lowering therapy, with the most notable effects on echogenicity, lipid-rich necrotic core volume, wall/plaque volume, dense calcium volume, and fibrous cap thickness. Significant interactions were found with concomitant changes in low density lipoprotein cholesterol, high density lipoprotein cholesterol, and C-reactive protein levels, and with ethnicity. Atherosclerotic plaque regression and conversion to a stable phenotype is possible with intensive medical therapy and can be demonstrated in patients using a variety of non-invasive and invasive imaging modalities.

  10. L-statistics for Repeated Measurements Data With Application to Trimmed Means, Quantiles and Tolerance Intervals.

    PubMed

    Assaad, Houssein I; Choudhary, Pankaj K

    2013-01-01

The L-statistics form an important class of estimators in nonparametric statistics. Its members include trimmed means and sample quantiles and functions thereof. This article is devoted to theory and applications of L-statistics for repeated measurements data, wherein the measurements on the same subject are dependent and the measurements from different subjects are independent. This article has three main goals: (a) Show that the L-statistics are asymptotically normal for repeated measurements data. (b) Present three statistical applications of this result, namely, location estimation using trimmed means, quantile estimation and construction of tolerance intervals. (c) Obtain a Bahadur representation for sample quantiles. These results are generalizations of similar results for independently and identically distributed data. The practical usefulness of these results is illustrated by analyzing a real data set involving measurement of systolic blood pressure. The properties of the proposed point and interval estimators are examined via simulation.
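A trimmed mean, the simplest L-statistic, can be computed directly with SciPy. The "blood pressure" values below are simulated with artificial outliers, not the article's data, and simply illustrate the robustness that motivates trimming.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Simulated systolic blood pressure readings with a few gross outliers.
bp = rng.normal(120, 10, size=200)
bp[:3] = [260, 255, 250]          # contamination

# L-statistics: a trimmed mean discards the most extreme 10% in each tail
# before averaging; sample quantiles are L-statistics as well.
trimmed = stats.trim_mean(bp, proportiontocut=0.1)
plain = bp.mean()
q25, q75 = np.quantile(bp, [0.25, 0.75])
print(plain, trimmed, q25, q75)
```

The outliers pull the plain mean upward while the trimmed mean stays near the bulk of the data, which is why trimmed means are attractive for location estimation.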

  11. A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates

    NASA Astrophysics Data System (ADS)

    Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh

    2016-10-01

We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood prone basin in Southeast Brazil. The results show a significant reduction of uncertainty estimates of flood quantile estimates over the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the
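Once GEV parameters are estimated (by any method), a T-year flood quantile (return level) is simply the (1 − 1/T) quantile of the fitted distribution. A minimal sketch with hypothetical parameter values, not estimates for the Brazilian basin in the study:

```python
from scipy.stats import genextreme

# Illustrative GEV parameters. Note SciPy's shape convention: c = -xi,
# so c < 0 gives the heavy-tailed (Frechet-type) case.
c = -0.1
loc, scale = 1000.0, 300.0    # hypothetical location/scale, e.g. in m^3/s

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
levels = {T: float(genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale))
          for T in (10, 50, 100)}
print(levels)
```

In a Bayesian setting one would evaluate this quantile over posterior draws of (c, loc, scale) to obtain a credible interval for each return level rather than a single number.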

  12. Applications of MIDAS regression in analysing trends in water quality

    NASA Astrophysics Data System (ADS)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods in analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas the rain data is sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of the rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
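    The core MIDAS device, collapsing a daily predictor into one regressor per low-frequency observation through a parsimonious lag-weight function, can be sketched as follows (exponential Almon weights; the parameter values and series are hypothetical, not from the Shoalhaven data):

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Normalized exponential Almon lag weights used in MIDAS regressions."""
    i = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * i + theta2 * i**2)
    return w / w.sum()

rng = np.random.default_rng(0)
daily_rain = rng.gamma(0.8, 4.0, size=14 * 100)   # 100 fortnights of daily rainfall
w = exp_almon_weights(14, 0.05, -0.02)            # hypothetical weight parameters

# One weighted high-frequency regressor per fortnightly water-quality sample;
# reversing w gives the most recent day the first weight.
X = daily_rain.reshape(100, 14) @ w[::-1]
```

In a full MIDAS fit the theta parameters are estimated jointly with the regression coefficients, so the daily-to-fortnightly aggregation is learned from the data rather than fixed in advance.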

  13. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

    The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the corrected extremes for these three bias correction methods through split-sample and cross-validation approaches. The split-sample approach mimics the application to future climate scenarios, and a cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with lower biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
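    The empirical quantile-mapping baseline that this record extends can be sketched in a few lines (a generic QMα-style correction on synthetic gamma-distributed "precipitation"; the instability it shows beyond the calibration range is what the proposed QMβ variants address):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut, n_q=99):
    """Empirical quantile mapping: pass each model value through the
    correction function defined by matching historical model quantiles
    to observed quantiles."""
    probs = np.linspace(0.01, 0.99, n_q)
    mq = np.quantile(model_hist, probs)
    oq = np.quantile(obs_hist, probs)
    # np.interp clamps values beyond the calibration range to the outermost
    # quantile pair, one source of trouble for new extremes.
    return np.interp(model_fut, mq, oq)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, 5000)
mod = rng.gamma(2.0, 4.0, 5000)        # model underestimates intensity
corrected = quantile_map(mod, obs, mod)
```

After correction the model series closely matches the observed distribution inside the calibration range, while the treatment of values outside that range is entirely an artifact of the extrapolation rule.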

  14. Improving Global Forecast System of extreme precipitation events with regional statistical model: Application of quantile-based probabilistic forecasts

    NASA Astrophysics Data System (ADS)

    Shastri, Hiteshri; Ghosh, Subimal; Karmakar, Subhankar

    2017-02-01

    Forecasting of extreme precipitation events at a regional scale is of high importance due to their severe impacts on society. The impacts are stronger in urban regions due to high flood potential as well as high population density leading to high vulnerability. Although significant scientific improvements have taken place in global weather forecasting models, they are still not adequate at a regional scale (e.g., for an urban region), with high false alarms and low detection. There has been a need to improve the weather forecast skill at a local scale with probabilistic outcomes. Here we develop a methodology with quantile regression, where the reliably simulated variables from the Global Forecast System are used as predictors and different quantiles of rainfall are generated corresponding to that set of predictors. We apply this method to a flood-prone coastal city of India, Mumbai, which has experienced severe floods in recent years. We find significant improvements in the forecast with high detection and skill scores. We apply the methodology to 10 ensemble members of the Global Ensemble Forecast System and find a reduction in ensemble uncertainty of precipitation across realizations with respect to that of the original precipitation forecasts. We validate our model for the monsoon seasons of 2006 and 2007, which are independent of the training/calibration data set used in the study. We find promising results and emphasize the need to implement such data-driven methods for better probabilistic forecasts at an urban scale, primarily for early flood warning.
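    Linear quantile regression itself can be fit by minimizing the pinball (check) loss, which is a linear program in the Koenker-Bassett formulation. A self-contained sketch on synthetic heteroscedastic data (one generic predictor, not the GFS predictor set used in the paper):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Fit a linear tau-th quantile regression by minimizing the pinball
    loss, posed as an LP: variables are beta (split into positive and
    negative parts) and nonnegative residual slacks u, v with
    X @ beta + u - v = y."""
    n, p = X.shape
    c = np.concatenate([np.zeros(2 * p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * p + 2 * n), method="highs")
    return res.x[:p] - res.x[p:2 * p]

rng = np.random.default_rng(2)
n = 300
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1 + 0.3 * x)   # noise grows with x
X = np.column_stack([np.ones(n), x])
b10 = quantile_regression(X, y, 0.10)
b90 = quantile_regression(X, y, 0.90)
```

Because the noise scale grows with x, the fitted 90th-quantile slope exceeds the 10th-quantile slope, which is exactly the kind of quantile-dependent relationship a mean-regression forecast cannot represent.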

  15. Superquantile/CVaR Risk Measures: Second-Order Theory

    DTIC Science & Technology

    2015-07-31

    Second-order superquantile risk minimization is developed, as well as superquantile regression, a proposed second-order version of quantile regression. Superquantiles are deeply tied to generalized regression; the joint formula (3) is central to quantile regression, a well-known alternative. Keywords: superquantiles, conditional value-at-risk, second-order superquantiles.

  16. Use of Multiple Regression and Use-Availability Analyses in Determining Habitat Selection by Gray Squirrels (Sciurus Carolinensis)

    Treesearch

    John W. Edwards; Susan C. Loeb; David C. Guynn

    1994-01-01

    Multiple regression and use-availability analyses are two methods for examining habitat selection. Use-availability analysis is commonly used to evaluate macrohabitat selection whereas multiple regression analysis can be used to determine microhabitat selection. We compared these techniques using behavioral observations (n = 5534) and telemetry locations (n = 2089) of...

  17. The use of segmented regression in analysing interrupted time series studies: an example in pre-hospital ambulance care.

    PubMed

    Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M

    2014-06-19

    An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the changes in intercept and/or slope from pre- to post-intervention are estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
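    The segmented regression model described here is an ordinary linear regression with an indicator for the post-intervention period and an interaction for the post-intervention slope. A minimal sketch on simulated monthly data (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(48)                       # 48 monthly time points
post = (t >= 24).astype(float)          # intervention at month 24
t_post = post * (t - 24)                # time since intervention

# Simulated quality score: baseline slope 0.2, then a +5 level change
# and a +0.4 slope change at the intervention, plus noise.
y = 50 + 0.2 * t + 5.0 * post + 0.4 * t_post + rng.normal(0, 1, t.size)

# Segmented regression: y = b0 + b1*t + b2*post + b3*(t - 24)*post
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
```

The coefficients on `post` and `t_post` are the estimated change in intercept and change in slope; in practice autocorrelated errors would also need to be addressed, which this sketch omits.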

  18. A nonparametric method for assessment of interactions in a median regression model for analyzing right censored data.

    PubMed

    Lee, MinJae; Rahbar, Mohammad H; Talebi, Hooshang

    2018-01-01

    We propose a nonparametric test for interactions when we are concerned with investigation of the simultaneous effects of two or more factors in a median regression model with right censored survival data. Our approach is developed to detect interaction in special situations, when the covariates have a finite number of levels with a limited number of observations in each level, and it allows varying levels of variance and censorship at different levels of the covariates. Through simulation studies, we compare the power of detecting an interaction between the study group variable and a covariate using our proposed procedure with that of the Cox Proportional Hazard (PH) model and censored quantile regression model. We also assess the impact of censoring rate and type on the standard error of the estimators of parameters. Finally, we illustrate application of our proposed method to real life data from Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) study to test an interaction effect between type of injury and study sites using median time for a trauma patient to receive three units of red blood cells. The results from simulation studies indicate that our procedure performs better than both Cox PH model and censored quantile regression model based on statistical power for detecting the interaction, especially when the number of observations is small. It is also relatively less sensitive to censoring rates or even the presence of conditionally independent censoring that is conditional on the levels of covariates.

  19. Tests of Sunspot Number Sequences: 3. Effects of Regression Procedures on the Calibration of Historic Sunspot Data

    NASA Astrophysics Data System (ADS)

    Lockwood, M.; Owens, M. J.; Barnard, L.; Usoskin, I. G.

    2016-11-01

    We use sunspot-group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups [RB] above a variable cut-off threshold of observed total whole spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of RB are then re-scaled to the full observed RGO group number [RA] using a variety of regression techniques. It is found that a very high correlation between RA and RB (r_{AB} > 0.98) does not prevent large errors in the intercalibration (for example sunspot-maximum values can be over 30 % too large even for such levels of r_{AB}). In generating the backbone sunspot number [R_{BB}], Svalgaard and Schatten ( Solar Phys., 2016) force regression fits to pass through the scatter-plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot-cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of Quantile-Quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least-squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method used is shown to be different when matching peak and average sunspot-group numbers). However, other fits are only reliable if non-linear regression is used. From these results it is entirely possible that the inflation of solar-cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is entirely caused by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
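    The central pitfall in this record, that forcing a regression through the origin inflates the high end of the calibrated series when the true relation has a nonzero intercept, is easy to reproduce on toy data (synthetic "observers", not the RGO series):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic observer pair: R_B is a down-scaled version of R_A with an
# offset, so the true calibration line does NOT pass through the origin.
R_A = rng.uniform(2, 12, 80)
R_B = 0.6 * R_A - 0.8 + rng.normal(0, 0.3, R_A.size)

# Calibration forced through the origin (R_A ~ k * R_B)...
k = (R_A @ R_B) / (R_B @ R_B)
# ...versus ordinary least squares with a free intercept.
b, a = np.polyfit(R_B, R_A, 1)          # slope, intercept

peak = R_B.max()
forced, free = k * peak, a + b * peak    # rescaled values at the peak
```

The origin-forced fit pivots the line and overstates the rescaled values at the top of the range, mirroring the exaggerated sunspot-cycle amplitudes discussed in the abstract.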

  20. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492

  1. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies

    PubMed Central

    Vatcheva, Kristina P.; Lee, MinJae; McCormick, Joseph B.; Rahbar, Mohammad H.

    2016-01-01

    The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013 illustrated the need for greater attention to identifying and minimizing the effect of multicollinearity in analysis of data from epidemiologic studies. We used simulated datasets and real life data from the Cameron County Hispanic Cohort to demonstrate the adverse effects of multicollinearity in regression analysis and encourage researchers to consider diagnostics for multicollinearity as one of the steps in regression analysis. PMID:27274911

  2. Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies.

    PubMed

    Vatcheva, Kristina P; Lee, MinJae; McCormick, Joseph B; Rahbar, Mohammad H

    2016-04-01

    The adverse impact of ignoring multicollinearity on findings and data interpretation in regression analysis is very well documented in the statistical literature. The failure to identify and report multicollinearity could result in misleading interpretations of the results. A review of epidemiological literature in PubMed from January 2004 to December 2013 illustrated the need for greater attention to identifying and minimizing the effect of multicollinearity in analysis of data from epidemiologic studies. We used simulated datasets and real life data from the Cameron County Hispanic Cohort to demonstrate the adverse effects of multicollinearity in regression analysis and encourage researchers to consider diagnostics for multicollinearity as one of the steps in regression analysis.
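    The standard diagnostic these records recommend is the variance inflation factor (VIF), obtained by regressing each predictor on the remaining predictors. A self-contained sketch on simulated data (the common rule of thumb flags VIF above about 10):

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of X (no intercept
    column): VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on the other columns plus an intercept."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(5)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.1, size=500)   # nearly collinear with x1
x3 = rng.normal(size=500)
vifs = vif(np.column_stack([x1, x2, x3]))
```

Here the two nearly collinear predictors produce very large VIFs while the independent predictor stays near 1, the pattern a routine multicollinearity check is meant to surface.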

  3. Extreme climatic events drive mammal irruptions: regression analysis of 100-year trends in desert rainfall and temperature

    PubMed Central

    Greenville, Aaron C; Wardle, Glenda M; Dickman, Chris R

    2012-01-01

    Extreme climatic events, such as flooding rains, extended decadal droughts and heat waves have been identified increasingly as important regulators of natural populations. Climate models predict that global warming will drive changes in rainfall and increase the frequency and severity of extreme events. Consequently, to anticipate how organisms will respond we need to document how changes in extremes of temperature and rainfall compare to trends in the mean values of these variables and over what spatial scales the patterns are consistent. Using the longest historical weather records available for central Australia – 100 years – and quantile regression methods, we investigate if extreme climate events have changed at similar rates to median events, if annual rainfall has increased in variability, and if the frequency of large rainfall events has increased over this period. Specifically, we compared local (individual weather stations) and regional (Simpson Desert) spatial scales, and quantified trends in median (50th quantile) and extreme weather values (5th, 10th, 90th, and 95th quantiles). We found that median and extreme annual minimum and maximum temperatures have increased at both spatial scales over the past century. Rainfall changes have been inconsistent across the Simpson Desert; individual weather stations showed increases in annual rainfall, increased frequency of large rainfall events or more prolonged droughts, depending on the location. In contrast to our prediction, we found no evidence that intra-annual rainfall had become more variable over time. Using long-term live-trapping records (22 years) of desert small mammals as a case study, we demonstrate that irruptive events are driven by extreme rainfalls (>95th quantile) and that increases in the magnitude and frequency of extreme rainfall events are likely to drive changes in the populations of these species through direct and indirect changes in predation pressure and wildfires. PMID:23170202

  4. Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data

    PubMed Central

    Abram, Samantha V.; Helwig, Nathaniel E.; Moodie, Craig A.; DeYoung, Colin G.; MacDonald, Angus W.; Waller, Niels G.

    2016-01-01

    Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks. PMID:27516732

  5. Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data.

    PubMed

    Abram, Samantha V; Helwig, Nathaniel E; Moodie, Craig A; DeYoung, Colin G; MacDonald, Angus W; Waller, Niels G

    2016-01-01

    Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks.

  6. Superquantile/CVaR Risk Measures: Second-Order Theory

    DTIC Science & Technology

    2014-07-17

    A second-order version of quantile regression is proposed. Keywords: superquantiles, conditional value-at-risk, second-order superquantiles, mixed superquantiles, quantile regression. The theory of second-order superquantiles is in the domain of generalized regression; a methodology parallel to that of quantile regression is laid out in [16].

  7. Estimation of peak discharge quantiles for selected annual exceedance probabilities in Northeastern Illinois.

    DOT National Transportation Integrated Search

    2016-06-01

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, : 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years,...

  8. A method to preserve trends in quantile mapping bias correction of climate modeled temperature

    NASA Astrophysics Data System (ADS)

    Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2017-09-01

    Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized and a residual component relative to the modeled reference period climatology, in order to adjust the biases only for the former and preserve the signal of the latter. The results show that this method allows for the preservation of the originally modeled long-term signal in the mean, the standard deviation and higher and lower percentiles of temperature. To illustrate the improvements, the methodology is tested on daily time series obtained from five Euro CORDEX regional climate models (RCMs).
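    The idea can be sketched roughly as follows: remove the model's own long-term change before quantile mapping and add it back afterwards, so the correction adjusts the distribution but not the climate-change signal. This is a deliberate simplification of the published method (which works with a normalized/residual decomposition against the reference climatology), using invented numbers:

```python
import numpy as np

def trend_preserving_qm(mod_ref, obs_ref, mod_fut):
    """Sketch of trend preservation in quantile mapping: subtract the
    modeled mean change, bias correct the departures by empirical
    quantile mapping against the reference period, then restore the
    modeled change so the long-term signal is preserved."""
    probs = np.linspace(0.01, 0.99, 99)
    signal = mod_fut.mean() - mod_ref.mean()    # modeled climate-change signal
    detrended = mod_fut - signal
    corrected = np.interp(detrended,
                          np.quantile(mod_ref, probs),
                          np.quantile(obs_ref, probs))
    return corrected + signal

rng = np.random.default_rng(6)
obs_ref = rng.normal(15.0, 3.0, 3000)    # observed reference temperatures (degC)
mod_ref = rng.normal(13.5, 2.5, 3000)    # cold, under-dispersive model
mod_fut = rng.normal(16.5, 2.5, 3000)    # +3 degC modeled warming
corrected = trend_preserving_qm(mod_ref, obs_ref, mod_fut)
```

The corrected future series matches the observed reference distribution shifted by the model's own warming signal, rather than letting the quantile mapping distort that signal.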

  9. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
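    A negative binomial (NB2) regression with observation weights can be fit by weighted maximum likelihood. The following is a simplified sketch, not the exact model in the paper: the dispersion parameter `alpha` is held fixed rather than estimated, and the weights and data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def weighted_nb_regression(X, y, w, alpha=1.0):
    """Weighted NB2 regression fit by maximizing the weighted
    log-likelihood; w are observation weights (e.g. related to session
    length) and alpha is a fixed dispersion parameter."""
    def nll(beta):
        mu = np.exp(X @ beta)
        r = 1.0 / alpha
        ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
              + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
        return -(w * ll).sum()
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

rng = np.random.default_rng(7)
n = 1000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)                              # true log-linear mean
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))    # NB2 counts, alpha = 0.5
X = np.column_stack([np.ones(n), x])
beta = weighted_nb_regression(X, y, w=np.ones(n), alpha=0.5)
```

Unlike ordinary linear regression, this model respects the count nature and overdispersion of coded-behavior outcomes; the weights are where session-length adjustments would enter.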

  10. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    PubMed Central

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126

  11. Using Dual Regression to Investigate Network Shape and Amplitude in Functional Connectivity Analyses

    PubMed Central

    Nickerson, Lisa D.; Smith, Stephen M.; Öngür, Döst; Beckmann, Christian F.

    2017-01-01

    Independent Component Analysis (ICA) is one of the most popular techniques for the analysis of resting state FMRI data because it has several advantageous properties when compared with other techniques. Most notably, in contrast to a conventional seed-based correlation analysis, it is model-free and multivariate, thus switching the focus from evaluating the functional connectivity of single brain regions identified a priori to evaluating brain connectivity in terms of all brain resting state networks (RSNs) that simultaneously engage in oscillatory activity. Furthermore, typical seed-based analysis characterizes RSNs in terms of spatially distributed patterns of correlation (typically by means of simple Pearson's coefficients) and thereby confounds together amplitude information of oscillatory activity and noise. ICA and other regression techniques, on the other hand, retain magnitude information and therefore can be sensitive to both changes in the spatially distributed nature of correlations (differences in the spatial pattern or “shape”) as well as the amplitude of the network activity. Furthermore, motion can mimic amplitude effects so it is crucial to use a technique that retains such information to ensure that connectivity differences are accurately localized. In this work, we investigate the dual regression approach that is frequently applied with group ICA to assess group differences in resting state functional connectivity of brain networks. We show how ignoring amplitude effects and how excessive motion corrupts connectivity maps and results in spurious connectivity differences. We also show how to implement the dual regression to retain amplitude information and how to use dual regression outputs to identify potential motion effects. Two key findings are that using a technique that retains magnitude information, e.g., dual regression, and using strict motion criteria are crucial for controlling both network amplitude and motion-related amplitude
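    The two regression stages of dual regression are straightforward to express with least squares; the point the paper stresses is that skipping variance normalization of the stage-1 time courses retains amplitude information. A shape-only sketch on random arrays (no real FMRI data):

```python
import numpy as np

rng = np.random.default_rng(10)
n_time, n_vox, n_comp = 120, 500, 4
group_maps = rng.normal(size=(n_comp, n_vox))     # group ICA spatial maps
subject_data = rng.normal(size=(n_time, n_vox))   # one subject's data, voxels flattened

# Stage 1: regress the group maps into the subject's data, giving one
# time course per component.
tc, *_ = np.linalg.lstsq(group_maps.T, subject_data.T, rcond=None)
tc = tc.T                                         # shape (n_time, n_comp)

# Stage 2: regress the time courses back against the data, giving
# subject-specific spatial maps. NOT variance-normalizing tc here keeps
# network amplitude in the maps, the choice discussed in the paper.
maps, *_ = np.linalg.lstsq(tc, subject_data, rcond=None)
```

Group differences are then tested voxelwise on the stage-2 maps; normalizing `tc` first would instead confine the analysis to differences in network shape.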

  12. Use of principal-component, correlation, and stepwise multiple-regression analyses to investigate selected physical and hydraulic properties of carbonate-rock aquifers

    USGS Publications Warehouse

    Brown, C. Erwin

    1993-01-01

    Correlation analysis in conjunction with principal-component and multiple-regression analyses was applied to laboratory chemical and petrographic data to assess the usefulness of these techniques in evaluating selected physical and hydraulic properties of carbonate-rock aquifers in central Pennsylvania. Correlation and principal-component analyses were used to establish relations and associations among variables, to determine dimensions of property variation of samples, and to filter the variables containing similar information. Principal-component and correlation analyses showed that porosity is related to other measured variables and that permeability is most related to porosity and grain size. Four principal components are found to be significant in explaining the variance of data. Stepwise multiple-regression analysis was used to see how well the measured variables could predict porosity and (or) permeability for this suite of rocks. The variation in permeability and porosity is not totally predicted by the other variables, but the regression is significant at the 5% significance level.
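    A principal-component step of the kind described here can be sketched with an SVD of the standardized data matrix; the variables and their relationships below are invented stand-ins for the petrographic measurements:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
# Hypothetical rock properties: porosity drives both grain size and permeability.
porosity = rng.normal(10, 2, n)
grain = 0.5 * porosity + rng.normal(0, 1, n)
perm = 2.0 * porosity + 1.0 * grain + rng.normal(0, 2, n)
X = np.column_stack([porosity, grain, perm])

# Principal components of the standardized data via SVD; the squared
# singular values give the variance explained by each component.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / (s**2).sum()
```

Because all three variables share a common driver, one component dominates the variance, which is the kind of redundancy that principal-component screening is used to detect before a stepwise regression.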

  13. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R² = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
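    The quantile-matching core of IDQT, assign each pixel the depth whose quantile matches the pixel value's quantile, can be sketched as follows (the image band and depth sample are synthetic, and the sign convention assuming pixel value decreases with depth is hypothetical):

```python
import numpy as np

def idqt(pixel_values, depth_sample, n_q=200):
    """Image-to-Depth Quantile Transformation (sketch): an aspatial
    calibration that maps the empirical CDF of pixel values onto the
    empirical CDF of field-surveyed depths."""
    probs = np.linspace(0, 1, n_q)
    pix_q = np.quantile(pixel_values, probs)
    dep_q = np.quantile(depth_sample, probs)
    return np.interp(pixel_values, pix_q, dep_q)

rng = np.random.default_rng(8)
depths = rng.gamma(2.0, 0.4, 500)     # field-surveyed depths (m), all positive
# Hypothetical image band: darker (lower) values over deeper water, plus noise.
pixels = 100 - 20 * rng.gamma(2.0, 0.4, 10000) + rng.normal(0, 1, 10000)
est = idqt(-pixels, depths)            # negate so the variable increases with depth
```

Because every estimate is drawn from the surveyed depth distribution, the method cannot produce negative depths along channel margins, one of the advantages noted in the abstract.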

  14. Random regression analyses using B-splines functions to model growth from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Alencar, M M; Albuquerque, L G

    2010-12-01

    The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and to a multitrait model. Results from the different models were compared using the REML form of the Akaike Information Criterion and Schwarz's Bayesian Information Criterion. In addition, the variance components and genetic parameters estimated for each random regression model were used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
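
A clamped quadratic B-spline basis of the kind used as covariates in such random regression models can be built with scipy. The knot placement and age range below are illustrative, not the paper's.

```python
import numpy as np
from scipy.interpolate import BSpline

# Quadratic (k=2) basis with two interior knots, i.e. three segments,
# analogous to the "four knots or three segments" fit described above.
k = 2
interior = [20.0, 40.0]                              # illustrative knots (months)
t = np.r_[[0.0] * (k + 1), interior, [60.0] * (k + 1)]  # clamped knot vector
n_basis = len(t) - k - 1                             # number of basis functions

ages = np.linspace(0.0, 60.0, 121)
# Evaluate each basis function by using unit coefficient vectors
B = np.column_stack([
    BSpline(t, np.eye(n_basis)[i], k)(ages) for i in range(n_basis)
])
print(B.shape)   # one column per basis function; rows sum to 1 (partition of unity)
```

The matrix `B` would serve as the design matrix for the random regression coefficients of one effect; each random effect gets its own basis of the chosen order.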

  15. Differential and Long-Term Language Impact on Math

    ERIC Educational Resources Information Center

    Chen, Fang; Chalhoub-Deville, Micheline

    2016-01-01

    Literature provides consistent evidence that there is a strong relationship between language proficiency and math achievement. However, research results show conflicts supporting either an increasing or a decreasing longitudinal relationship between the two. This study explored longitudinal data and adopted quantile regression analyses to…

  16. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
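
The paper defines the QFD metric precisely; as a rough illustration only, a deviation of each ensemble member's flow quantiles from a reference series can be computed as below. The exact form of the published metric may differ, and the flow data are synthetic.

```python
import numpy as np

def quantile_flow_deviation(ensemble, reference, qs=np.linspace(0.05, 0.95, 19)):
    """Average absolute deviation of each ensemble member's flow quantiles
    from a reference series (an illustrative stand-in for the QFD metric)."""
    ref_q = np.quantile(reference, qs)               # (n_quantiles,)
    ens_q = np.quantile(ensemble, qs, axis=-1)       # (n_quantiles, n_members)
    return np.mean(np.abs(ens_q - ref_q[:, None]), axis=0)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, 3650)                      # stand-in daily flows
members = obs * rng.normal(1.0, 0.1, (8, 3650))      # perturbed-input ensemble
dev = quantile_flow_deviation(members, obs)
print(dev.shape)   # one deviation value per ensemble member
```

Comparing such per-member deviations across ensembles generated by varying inputs versus varying model structures is the attribution idea the abstract describes.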

  17. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin
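
Plain univariate quantile mapping, the baseline that MBCn generalizes, can be sketched in numpy. The data below are synthetic, with a known bias imposed on the "model".

```python
import numpy as np

def quantile_map(model_proj, model_hist, obs_hist):
    """Univariate quantile mapping: replace each projected value by the
    observed value at that value's quantile in the historical model run."""
    probs = np.searchsorted(np.sort(model_hist), model_proj, side="right")
    probs = np.clip(probs / model_hist.size, 0.0, 1.0)
    return np.quantile(obs_hist, probs)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, 1000)        # observed historical values
hist = obs * 1.3 + 1.0                 # biased model, historical period
proj = (obs + 2.0) * 1.3 + 1.0         # biased model, future period
corrected = quantile_map(proj, hist, obs)
print(abs(np.median(corrected) - np.median(obs + 2.0)) < 0.5)
```

This corrects each variable on its own, which is exactly the limitation (neglected inter-variable dependence) that motivates the multivariate MBCn algorithm.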

  18. Use of Flood Seasonality in Pooling-Group Formation and Quantile Estimation: An Application in Great Britain

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Bell, Victoria; Stewart, Elizabeth

    2018-02-01

    Regional flood frequency analysis is one of the most commonly applied methods for estimating extreme flood events at ungauged sites or locations with short measurement records. It is based on: (i) the definition of a homogeneous group (pooling-group) of catchments, and (ii) the use of the pooling-group data to estimate flood quantiles. Although many methods to define a pooling-group (pooling schemes, PS) are based on catchment physiographic similarity measures, in the last decade methods based on flood seasonality similarity have also been proposed. In this paper, two seasonality-based PS are proposed and tested both in terms of the homogeneity of the pooling-groups they generate and in terms of the accuracy in estimating extreme flood events. The method has been applied to 420 catchments in Great Britain (considered as both gauged and ungauged) and compared against the current Flood Estimation Handbook (FEH) PS. Results for gauged sites show that, compared to the current PS, the seasonality-based PS performs better both in terms of homogeneity of the pooling-group and in terms of the accuracy of flood quantile estimates. For ungauged locations, a national-scale hydrological model has been used for the first time to quantify flood seasonality. Results show that in 75% of the tested locations the seasonality-based PS provides an improvement in the accuracy of the flood quantile estimates. The remaining 25% were located in highly urbanized, groundwater-dependent catchments. The promising results support the use of large-scale hydrological models to complement traditional methods for estimating design floods.

  19. Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments

    USGS Publications Warehouse

    Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.

    2004-01-01

    The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] performs as well as maximum likelihood estimators at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.

  20. Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments

    NASA Astrophysics Data System (ADS)

    Griffis, V. W.; Stedinger, J. R.; Cohn, T. A.

    2004-07-01

    The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] performs as well as maximum likelihood estimators at estimating log-Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
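
A bare-bones LP3 quantile estimate from log-space moments, with a simple weighted blend of at-site and regional skew, can be written with scipy's `pearson3` distribution. This is a sketch of the basic Bulletin 17B idea only, not the EMA or low-outlier procedures compared in the study, and the 50/50 skew weighting is illustrative.

```python
import numpy as np
from scipy import stats

def lp3_quantile(flows, return_period, regional_skew=None, weight=0.5):
    """Log-Pearson type 3 flood quantile from at-site log10 moments,
    optionally blending at-site and regional skew (illustrative weights)."""
    logs = np.log10(flows)
    mean, std = logs.mean(), logs.std(ddof=1)
    skew = stats.skew(logs, bias=False)
    if regional_skew is not None:
        skew = weight * skew + (1.0 - weight) * regional_skew
    p = 1.0 - 1.0 / return_period
    return 10.0 ** stats.pearson3.ppf(p, skew, loc=mean, scale=std)

rng = np.random.default_rng(3)
flows = 10.0 ** rng.normal(2.0, 0.25, 60)   # synthetic annual peak flows
q100 = lp3_quantile(flows, 100.0, regional_skew=0.0)
print(q100 > np.median(flows))   # the 100-year flood exceeds the median peak
```

Bulletin 17B actually weights the skews by their mean square errors rather than a fixed constant, which is one of the refinements the abstract's comparison is about.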

  1. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
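
The RR and FAR quantities described above reduce to exceedance probabilities under the two scenarios. A synthetic sketch (the threshold and distributions are made up, and no bias correction is applied here):

```python
import numpy as np

def risk_ratio(p_actual, p_counterfactual):
    """Risk ratio RR = p1/p0 and fraction of attributable risk
    FAR = 1 - p0/p1; RR is infinite when the counterfactual
    probability is zero, the case needing the one-sided interval."""
    rr = np.inf if p_counterfactual == 0 else p_actual / p_counterfactual
    far = 1.0 - 1.0 / rr
    return rr, far

rng = np.random.default_rng(4)
threshold = 30.0                          # hypothetical heat threshold
actual = rng.normal(25.0, 3.0, 10000)     # world with anthropogenic forcing
counter = rng.normal(23.5, 3.0, 10000)    # counterfactual world
p1 = np.mean(actual > threshold)
p0 = np.mean(counter > threshold)
rr, far = risk_ratio(p1, p0)
print(rr > 1.0, 0.0 < far < 1.0)
```

The quantile-based rescaling in the paper adjusts the model samples before these probabilities are computed, so that model biases in the tails do not distort RR.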

  2. Classification of Satellite Derived Chlorophyll a Space-Time Series by Means of Quantile Regression: An Application to the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Girardi, P.; Pastres, R.; Gaetan, C.; Mangin, A.; Taji, M. A.

    2015-12-01

    In this paper, we present the results of a classification of Adriatic waters, based on spatial time series of remotely sensed Chlorophyll type-a. The study was carried out using a clustering procedure combining quantile smoothing and an agglomerative clustering algorithms. The smoothing function includes a seasonal term, thus allowing one to classify areas according to “similar” seasonal evolution, as well as according to “similar” trends. This methodology, which is here applied for the first time to Ocean Colour data, is more robust with respect to other classical methods, as it does not require any assumption on the probability distribution of the data. This approach was applied to the classification of an eleven year long time series, from January 2002 to December 2012, of monthly values of Chlorophyll type-a concentrations covering the whole Adriatic Sea. The data set was made available by ACRI (http://hermes.acri.fr) in the framework of the Glob-Colour Project (http://www.globcolour.info). Data were obtained by calibrating Ocean Colour data provided by different satellite missions, such as MERIS, SeaWiFS and MODIS. The results clearly show the presence of North-South and West-East gradient in the level of Chlorophyll, which is consistent with literature findings. This analysis could provide a sound basis for the identification of “water bodies” and of Chlorophyll type-a thresholds which define their Good Ecological Status, in terms of trophic level, as required by the implementation of the Marine Strategy Framework Directive. The forthcoming availability of Sentinel-3 OLCI data, in continuity of the previous missions, and with perspective of more than a 15-year monitoring system, offers a real opportunity of expansion of our study as a strong support to the implementation of both the EU Marine Strategy Framework Directive and the UNEP-MAP Ecosystem Approach in the Mediterranean.

  3. On the distortion of elevation dependent warming signals by quantile mapping

    NASA Astrophysics Data System (ADS)

    Jury, Martin W.; Mendlik, Thomas; Maraun, Douglas

    2017-04-01

    Elevation dependent warming (EDW), the amplification of warming under climate change with elevation, is likely to accelerate changes in e.g. cryospheric and hydrological systems. Responsible for EDW is a mixture of processes including the snow albedo feedback, cloud formation and the location of aerosols. The degree to which these processes are incorporated varies across state-of-the-art climate models. In a recent study we were preparing bias-corrected model output of CMIP5 GCMs and CORDEX RCMs over the Himalayan region for the glacier modelling community. In a first attempt we used quantile mapping (QM) to generate this data. A prior model evaluation showed that more than two thirds of the 49 included climate models were able to reproduce positive trend differences between areas of higher and lower elevations in winter, clearly visible in all five of the observational datasets used. Regrettably, we noticed that height-dependent trend signals provided by the models were distorted, most of the time in the direction of less EDW, sometimes even reversing EDW signals present in the models before the bias correction. As a consequence, we refrained from using quantile mapping for our task, as EDW is an important factor influencing the climate at high altitudes for the nearer and more distant future, and used a climate-change-signal-preserving bias correction approach instead. Here we present our findings on the distortion of the EDW temperature change by QM and discuss the influence of QM on different statistical properties as well as their modifications.
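
A trend-preserving alternative of the kind the authors switched to can be illustrated by additive quantile delta mapping, which corrects each projected value at its own quantile while carrying the model's change signal through unchanged. This is a generic sketch on synthetic data, not necessarily the authors' exact method.

```python
import numpy as np

def qdm_additive(model_proj, model_hist, obs_hist):
    """Additive quantile delta mapping: remove the historical bias at each
    value's own quantile while preserving the modelled change (delta)."""
    probs = (np.argsort(np.argsort(model_proj)) + 0.5) / model_proj.size
    delta = model_proj - np.quantile(model_hist, probs)   # modelled change
    return np.quantile(obs_hist, probs) + delta

rng = np.random.default_rng(5)
obs = rng.normal(0.0, 1.0, 500)     # observed historical temperature
hist = obs + 2.0                    # model with a +2 K bias
proj = hist + 1.5                   # model projects +1.5 K warming
corrected = qdm_additive(proj, hist, obs)
# The modelled warming signal survives the correction
print(abs((corrected.mean() - obs.mean()) - 1.5) < 0.1)
```

Plain quantile mapping, by contrast, pulls projected values back through the historical transfer function, which is exactly how elevation-dependent trend signals can be damped or reversed.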

  4. Fragile--Handle with Care: Regression Analyses That Include Categorical Data.

    ERIC Educational Resources Information Center

    Brown, Diane Peacock

    In education and the social sciences, problems of interest to researchers and users of research often involve variables that do not meet the assumptions of regression in the area of an equal interval scale relative to a zero point. Various coding schemes exist that allow the use of regression while still answering the researcher's questions of…

  5. Regionalisation of a distributed method for flood quantiles estimation: Revaluation of local calibration hypothesis to enhance the spatial structure of the optimised parameter

    NASA Astrophysics Data System (ADS)

    Odry, Jean; Arnaud, Patrick

    2016-04-01

    The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km2 mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by a high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only the flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using an areal reduction factor technique that is uniform over the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is independently calibrated for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter can be regionalised with acceptable performance. The revaluation of some of the method's hypotheses is a necessary step before regionalisation. In particular, the inclusion or modification of the spatial variability of imposed parameters (such as production and transfer reservoir size, base flow addition and the quantile aggregation function) should lead to more realistic values of the only calibrated parameter. The objective of the work presented…

  6. Methodological uncertainties in multi-regression analyses of middle-atmospheric data series.

    PubMed

    Kerzenmacher, Tobias E; Keckhut, Philippe; Hauchecorne, Alain; Chanin, Marie-Lise

    2006-07-01

    Multi-regression analyses have often been used recently to detect trends, in particular in ozone or temperature data sets in the stratosphere. The confidence in detecting trends depends on a number of factors which generate uncertainties. Part of these uncertainties comes from random variability, which is what is usually considered; it can be statistically estimated from residual deviations between the data and the fitting model. However, interference between different sources of variability affecting the data set, such as the Quasi-Biennial Oscillation (QBO), volcanic aerosols, solar flux variability and the trend, can also be a critical source of error. This type of error has hitherto not been well quantified. In this work an artificial data series has been generated to carry out such estimates. The sources of error considered here are: the length of the data series, the dependence on the choice of parameters used in the fitting model, and the time evolution of the trend in the data series. Curves provided here will permit future studies to test the magnitude of the methodological bias expected for a given case, as shown in several real examples. It is found that, if the data series is shorter than a decade, the uncertainties are very large, whatever factors are chosen to identify the source of the variability. The errors can be limited when dealing with natural variability, if a sufficient number of periods (for periodic forcings) are covered by the analysed dataset. When analysing the trend, however, the response to volcanic eruptions induces a bias, whatever the length of the data series. The signal-to-noise ratio is a key factor: doubling the noise increases the period for which data are required in order to obtain an error smaller than 10% from 1 to 3-4 decades. Moreover, if non-linear trends are superimposed on the data, and if the length of the series is longer than five years, a non-linear function has to be used to estimate trends. When…
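
A multi-regression fit of the kind described, a linear trend plus periodic forcing proxies estimated jointly by least squares, can be sketched on a synthetic monthly series. All coefficients and periods below are illustrative, not a real dataset.

```python
import numpy as np

rng = np.random.default_rng(6)
n_months = 360                             # a 30-year record
t = np.arange(n_months) / 12.0             # time in years
true_trend = -0.05                         # units per year
y = (true_trend * t
     + 0.8 * np.sin(2 * np.pi * t)         # annual cycle
     + 0.3 * np.sin(2 * np.pi * t / 11.0)  # solar-cycle-like forcing proxy
     + rng.normal(0.0, 0.2, n_months))     # noise

# Design matrix: intercept, trend, and sine/cosine pairs for each forcing
X = np.column_stack([
    np.ones(n_months), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
    np.sin(2 * np.pi * t / 11.0), np.cos(2 * np.pi * t / 11.0),
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(abs(coef[1] - true_trend) < 0.02)    # trend recovered within noise
```

Shortening the series or increasing the noise in this sketch reproduces the qualitative findings above: the trend coefficient becomes confounded with the slowly varying forcing terms.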

  7. The Economic Returns to Field of Study and Competencies among Higher Education Graduates in Ireland

    ERIC Educational Resources Information Center

    Kelly, Elish; O'Connell, Philip J.; Smyth, Emer

    2010-01-01

    This paper looks at the economic returns to different fields of study in Ireland in 2004 and also the value placed on various job-related competencies, accumulated on completion of higher education, in the Irish labour market. In examining these issues, the paper also analyses, through quantile regression, how the returns vary across the earnings…

  8. The Application of Censored Regression Models in Low Streamflow Analyses

    NASA Astrophysics Data System (ADS)

    Kroll, C.; Luz, J.

    2003-12-01

    Estimation of low streamflow statistics at gauged and ungauged river sites is often a daunting task. This process is further confounded by the presence of intermittent streamflows within a region, where streamflow is sometimes reported as zero. Streamflows recorded as zero may truly be zero, or may be less than the measurement detection limit. Such data are often referred to as censored data. Numerous methods have been developed to characterize intermittent streamflow series. Logit regression has been proposed to develop regional models of the probability that annual lowflow series (such as 7-day lowflows) are zero. In addition, Tobit regression, a method of regression that allows for censored dependent variables, has been proposed for regional lowflow regression models in regions where the lowflow statistic of interest is estimated as zero at some sites. While these methods have been proposed, their use in practice has been limited. Here a delete-one jackknife simulation is presented to examine the performance of Logit and Tobit models of 7-day annual minimum flows in 6 USGS water resource regions in the United States. For the Logit model, an assessment is made of whether sites are correctly classified as having at least 10% of 7-day annual lowflows equal to zero. In such a situation, the 7-day, 10-year lowflow (Q710), a commonly employed low streamflow statistic, would be reported as zero. For the Tobit model, a comparison is made between results from the Tobit model and from performing either ordinary least squares (OLS) or principal component regression (PCR) after the zero sites are dropped from the analysis. Initial results for the Logit model indicate that this method has a high probability of correctly classifying sites into groups with Q710s of zero and non-zero. Initial results also indicate that the Tobit model produces better results than PCR and OLS when more than 5% of the sites in the region have Q710 values calculated as zero.
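
A minimal Tobit (left-censored) regression fit by maximum likelihood, the model class proposed above for regions where some lowflow values are recorded as zero, might look like this on synthetic data. The data-generating values are made up for illustration.

```python
import numpy as np
from scipy import optimize, stats

def tobit_fit(X, y, c=0.0):
    """Maximum-likelihood Tobit regression with left-censoring at c:
    censored observations contribute the normal CDF, observed ones the PDF."""
    cens = y <= c
    def nll(params):
        *beta, log_s = params
        mu = X @ np.asarray(beta)
        s = np.exp(log_s)
        ll = np.where(cens,
                      stats.norm.logcdf((c - mu) / s),           # censored
                      stats.norm.logpdf((y - mu) / s) - log_s)   # observed
        return -ll.sum()
    x0 = np.r_[np.zeros(X.shape[1]), 0.0]
    res = optimize.minimize(nll, x0, method="Nelder-Mead",
                            options={"maxiter": 5000})
    return res.x[:-1], np.exp(res.x[-1])

rng = np.random.default_rng(7)
n = 300
X = np.column_stack([np.ones(n), rng.normal(0.0, 1.0, n)])
latent = X @ np.array([0.5, 1.0]) + rng.normal(0.0, 0.5, n)
y = np.maximum(latent, 0.0)          # low flows recorded as zero
beta, sigma = tobit_fit(X, y)
print(abs(beta[1] - 1.0) < 0.2)      # slope recovered despite censoring
```

Dropping the zero-valued sites and running OLS on the remainder, the alternative compared in the study, would bias this slope toward zero in the same setup.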

  9. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    PubMed

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage to both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011, using opencast coal mine occupational accident records for the statistical analyses. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used for logistic regression analyses, which predicted the probability of accidents resulting in greater or less than 3 lost workdays for non-fatal injuries. Social facilities-area of surface installations, workshops and opencast mining areas are the areas with the highest probability of accidents with greater than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested on such reported accidents that occurred in 2012 for the ELI in Soma and correctly estimated the probability of exposure to accidents with lost workdays in 70% of cases.

  10. Random regression analyses using B-spline functions to model growth of Nellore cattle.

    PubMed

    Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G

    2012-02-01

    The objective of this study was to estimate (co)variance components using random regression on B-spline functions to model weight records obtained from birth to adulthood. A total of 82 064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effects) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as a random covariate. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions…

  11. Logistic regression applied to natural hazards: rare event logistic regression with replications

    NASA Astrophysics Data System (ADS)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce concepts from Monte Carlo simulation into rare event logistic regression. This technique, termed rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods, and overcomes some of the limitations of previous developments through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.
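
The replication idea, refitting the rare-event model on resampled data and keeping only factors that are selected consistently, can be sketched as follows. The data are synthetic, and the selection rule (a coefficient-magnitude threshold) is a simple stand-in for the paper's procedure.

```python
import numpy as np
from scipy import optimize, special

def logit_fit(X, y):
    """Plain maximum-likelihood logistic regression via BFGS."""
    def nll(beta):
        z = X @ beta
        # negative log-likelihood, written with logaddexp for stability
        return -(y * z - np.logaddexp(0.0, z)).sum()
    return optimize.minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

rng = np.random.default_rng(8)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
p = special.expit(X @ np.array([-3.0, 1.5, 0.0]))  # rare events, one real driver
y = rng.binomial(1, p)

# Replications: refit on bootstrap resamples and count how often each
# candidate factor keeps a sizeable coefficient.
hits = np.zeros(3)
for _ in range(200):
    idx = rng.integers(0, n, n)
    hits += np.abs(logit_fit(X[idx], y[idx])) > 0.5
print(hits[1] > hits[2])   # the real driver is selected far more often
```

A single fit on one unlucky sample could rank the spurious factor highly; the selection frequencies across replications are what make the conclusion robust.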

  12. Comparing the index-flood and multiple-regression methods using L-moments

    NASA Astrophysics Data System (ADS)

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward’s clustering and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was performed using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward’s clustering approach. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust of the five candidate distributions and the best fit for all three sub-regions of the study area. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve-fitting (plotting position) method. In general, the index-flood method gives more reliable estimates for flood magnitudes of different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin.
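
Sample L-moments, the building blocks of both the homogeneity test and the distribution fitting above, can be computed from unbiased probability-weighted moments in a few lines. The flood data here are synthetic.

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments (l1, l2, l3) and L-skewness t3,
    via unbiased probability-weighted moments b0, b1, b2."""
    x = np.sort(x)
    n = x.size
    i = np.arange(n)                                  # 0-based ranks
    b0 = x.mean()
    b1 = (i / (n - 1) * x).mean()
    b2 = (i * (i - 1) / ((n - 1) * (n - 2)) * x).mean()
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3, l3 / l2

rng = np.random.default_rng(9)
peaks = rng.gumbel(100.0, 25.0, 2000)   # synthetic annual flood peaks
l1, l2, l3, t3 = sample_l_moments(peaks)
# For a Gumbel distribution the theoretical L-skewness is about 0.17
print(abs(t3 - 0.17) < 0.05)
```

Plotting each site's (t3, t4) pair against the theoretical curves of candidate distributions is the basis of the Z-statistic goodness-of-fit choice described above.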

  13. The nonlinear relations of the approximate number system and mathematical language to early mathematics development.

    PubMed

    Purpura, David J; Logan, Jessica A R

    2015-12-01

    Both mathematical language and the approximate number system (ANS) have been identified as strong predictors of early mathematics performance. Yet these relations may differ depending on a child's developmental level. The purpose of this study was to evaluate the relations between these domains across different levels of ability. Participants included 114 children who were assessed in the fall and spring of preschool on a battery of academic and cognitive tasks. Children were 3.12 to 5.26 years old (M = 4.18, SD = .58) and 53.6% were girls. Both mixed-effect and quantile regressions were conducted. The mixed-effect regressions indicated that mathematical language, but neither the ANS nor other cognitive domains, predicted mathematics performance. However, the quantile regression analyses revealed a more nuanced relation among domains. Specifically, mathematical language and the ANS predicted mathematical performance at different points on the ability continuum. These dual nonlinear relations indicate that different mechanisms may enhance mathematical acquisition depending on children's developmental abilities. (c) 2015 APA, all rights reserved.
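    The contrast this record draws between mean-based and quantile regression can be sketched in a few lines: under heteroscedastic data, minimizing the pinball (check) loss at different quantiles recovers different slopes, whereas a mean regression returns a single compromise. The data-generating process and the direct scipy minimization below are illustrative assumptions, not the study's models:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0, 1, n)
# heteroscedastic data: spread grows with x, so quantile slopes differ
y = 1.0 + 2.0 * x + (0.2 + x) * rng.normal(size=n)

def pinball(beta, tau):
    """Mean check loss for a linear model y ~ beta0 + beta1*x at quantile tau."""
    resid = y - (beta[0] + beta[1] * x)
    return np.mean(np.maximum(tau * resid, (tau - 1) * resid))

fit = {tau: minimize(lambda b: pinball(b, tau), x0=[0.0, 0.0],
                     method="Nelder-Mead").x for tau in (0.1, 0.5, 0.9)}
# upper-quantile slope exceeds lower-quantile slope under heteroscedasticity
print({tau: round(b[1], 2) for tau, b in fit.items()})
```

Dedicated solvers (e.g. linear programming, as in standard quantile regression software) are preferable in practice; direct minimization is used here only to keep the sketch self-contained.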

  14. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    PubMed

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with a second-order LP for fixed effects, a second-order LP for additive genetic effects and a third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environmental to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environmental to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.
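    The Legendre-polynomial covariables underlying such random regression models can be sketched generically: ages are standardized to [-1, 1] and each column of the design matrix is one Legendre basis polynomial. The construction below (second-order LP over weeks 2 to 6, matching the order used for the fixed and additive genetic parts of MTRR23) is a generic illustration, not the authors' code:

```python
import numpy as np
from numpy.polynomial import legendre

weeks = np.arange(2, 7)  # weeks 2..6 of egg production
# standardize age to [-1, 1], the natural domain of Legendre polynomials
t = 2 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1

def legendre_design(t, order):
    # columns are P_0(t) .. P_order(t), evaluated via numpy's Legendre basis
    return np.column_stack([legendre.Legendre.basis(k)(t) for k in range(order + 1)])

Z = legendre_design(t, order=2)   # second-order LP, one row per week
print(Z.round(3))
```

In a full random regression analysis, Z would multiply the week-specific fixed, additive genetic and permanent environmental regression coefficients in a mixed model.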

  15. Gender difference in the association between food away-from-home consumption and body weight outcomes among Chinese adults.

    PubMed

    Du, Wen-Wen; Zhang, Bing; Wang, Hui-Jun; Wang, Zhi-Hong; Su, Chang; Zhang, Ji-Guo; Zhang, Ji; Jia, Xiao-Fang; Jiang, Hong-Ru

    2016-11-01

    The present study aimed to explore the associations between food away-from-home (FAFH) consumption and body weight outcomes among Chinese adults. FAFH was defined as food prepared at restaurants, and the percentage of energy from FAFH was calculated. Measured BMI and waist circumference (WC) were used as body weight outcomes. Quantile regression models for BMI and WC were fitted separately by gender. Information on demographic, socio-economic, diet and health parameters at individual, household and community levels was collected in twelve provinces of China. A cross-sectional sample of 7738 non-pregnant individuals aged 18-60 years from the China Health and Nutrition Survey 2011 was analysed. For males, quantile regression models showed that the percentage of energy from FAFH was associated with an increase in BMI of 0·01, 0·01, 0·01, 0·02, 0·02 and 0·03 kg/m2 at the 5th, 25th, 50th, 75th, 90th and 95th quantile, and an increase in WC of 0·04, 0·06, 0·06, 0·04, 0·06, 0·05 and 0·07 cm at the 5th, 10th, 25th, 50th, 75th, 90th and 95th quantile. For females, the percentage of energy from FAFH was associated with a 0·01, 0·01, 0·01 and 0·02 kg/m2 increase in BMI at the 10th, 25th, 90th and 95th quantile, and with a 0·05, 0·04, 0·03 and 0·03 cm increase in WC at the 5th, 10th, 25th and 75th quantile. Our findings suggest that FAFH consumption is more strongly associated with BMI and WC among males than among females in China. Public health initiatives are needed to encourage Chinese adults to make healthy food choices when eating out.

  16. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care.

    PubMed

    Kowalski, Amanda

    2016-01-02

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.

  17. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A majorization-minimization (MM) algorithm was developed for optimization. A Monte Carlo simulation study was conducted to assess the performance of the RNN. Comparisons with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrate the advantage of the newly proposed procedure.
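    The majorization-minimization idea mentioned in this record can be sketched for a linear (rather than neural-network) quantile model: one common MM scheme majorizes the non-smooth check loss by a quadratic, so each step reduces to a weighted least-squares solve. The scheme, the smoothing constant `eps` and the synthetic heavy-tailed data below are illustrative assumptions, not the authors' RNN implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=n)  # heavy-tailed noise

def mm_quantile_fit(X, y, tau=0.5, iters=200, eps=1e-4):
    """MM for the check loss: majorize |r| by a quadratic around the current
    residuals, so each step is a weighted least-squares update."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = 1.0 / (eps + np.abs(r))          # majorization weights
        A = X.T @ (w[:, None] * X)
        # the (2*tau - 1) term tilts the fit toward the requested quantile
        b = X.T @ (w * y) + (2 * tau - 1) * X.sum(axis=0)
        beta = np.linalg.solve(A, b)
    return beta

beta = mm_quantile_fit(X, y, tau=0.5)        # tau=0.5 gives a robust LAD fit
print(beta.round(2))
```

With heavy-tailed noise, this median fit stays close to the true coefficients where ordinary least squares can be pulled far off by outliers.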

  18. The 2011 heat wave in Greater Houston: Effects of land use on temperature.

    PubMed

    Zhou, Weihe; Ji, Shuang; Chen, Tsun-Hsuan; Hou, Yi; Zhang, Kai

    2014-11-01

    Effects of land use on temperatures during severe heat waves have rarely been studied. This paper examines land use-temperature associations during the 2011 heat wave in Greater Houston. We obtained high-resolution satellite-derived land use data from the US National Land Cover Database, and temperature observations at 138 weather stations from Weather Underground, Inc. (WU) during August 2011, the hottest month in Houston since 1889. Land use regression and quantile regression methods were applied to the monthly averages of daily maximum/mean/minimum temperatures and 114 land use-related predictors. Although selected variables vary with temperature metric, distance to the coastline consistently appears among all models. Other variables are generally related to high developed intensity, open water or wetlands. In addition, our quantile regression analysis shows that distance to the coastline and high developed intensity areas have larger impacts on daily average temperatures at higher quantiles, and open water area has greater impacts on daily minimum temperatures at lower quantiles. By utilizing both land use regression and quantile regression on a recent heat wave in one of the largest US metropolitan areas, this paper provides a new perspective on the impacts of land use on temperatures. Our models can provide estimates of heat exposures for epidemiological studies, and our findings can be combined with demographic variables, air conditioning and relevant disease information to identify 'hot spots' of population vulnerability for public health interventions to reduce heat-related health effects during heat waves. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. The Income-Health Relationship 'Beyond the Mean': New Evidence from Biomarkers.

    PubMed

    Carrieri, Vincenzo; Jones, Andrew M

    2017-07-01

    The relationship between income and health is one of the most explored topics in health economics, but less is known about this relationship at different points of the health distribution. Analysis based solely on the mean may miss important information in other parts of the distribution. This is especially relevant when clinical concern is focused on the tail of the distribution, and when evaluating the income gradient at different points of the distribution and decomposing income-related inequalities in health are of interest. We use the unconditional quantile regression approach to analyse the income gradient across the entire distribution of objectively measured blood-based biomarkers. We apply an Oaxaca-Blinder decomposition at various quantiles of the biomarker distributions to analyse gender differentials in biomarkers and to measure the contribution of income (and other covariates) to these differentials. Using data from the Health Survey for England, we find a non-linear relationship between income and health and a strong income gradient at the highest quantiles of the biomarker distributions. We find heterogeneity in the association of health with income across genders, which accounts for a substantial percentage of the gender differentials in observed health. Copyright © 2016 John Wiley & Sons, Ltd.
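    Unconditional quantile regression is commonly implemented as a recentered-influence-function (RIF) regression: replace the outcome with its RIF at a chosen quantile, then run OLS on covariates. The sketch below uses synthetic "income" and "biomarker" variables (hypothetical stand-ins, not the Health Survey for England data) and a kernel density estimate for the density term:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n = 3000
income = rng.normal(size=n)
# hypothetical biomarker with a positive dependence on income
health = 0.5 * income + rng.gumbel(size=n)

def rif_regression(y, X, tau):
    """Unconditional quantile (RIF) regression: replace y by its recentered
    influence function at the tau-th quantile, then run OLS."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]            # density of y at the quantile
    rif = q + (tau - (y <= q)) / f_q
    Xc = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xc, rif, rcond=None)[0]

for tau in (0.1, 0.5, 0.9):
    print(tau, rif_regression(health, income, tau).round(2))
```

The fitted coefficients at each quantile are the unconditional quantile partial effects; comparing them across groups is the basis for the Oaxaca-Blinder decomposition at quantiles mentioned above.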

  20. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations, because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels but also in the intraday variance of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strongly nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  1. Downscaling of daily precipitation using a hybrid model of Artificial Neural Network, Wavelet, and Quantile Mapping in Gharehsoo River Basin, Iran

    NASA Astrophysics Data System (ADS)

    Taie Semiromi, M.; Koch, M.

    2017-12-01

    Although linear/regression statistical downscaling methods are very straightforward and widely used, and can be applied to a single predictor-predictand pair or to spatial fields of predictors-predictands, their greatest constraint is the requirement of a normal distribution of the predictor and predictand values, which means they cannot be used to predict the distribution of daily rainfall, which is typically non-normal. To address this limitation, the current study introduces a newly developed hybrid technique combining Artificial Neural Networks (ANNs), wavelets and Quantile Mapping (QM) for downscaling of daily precipitation for 10 rain-gauge stations located in the Gharehsoo River Basin, Iran. For this purpose, the study makes use of the Second Generation Canadian Earth System Model (CanESM2) developed by the Canadian Centre for Climate Modeling and Analysis (CCCma). Climate projections are available for three representative concentration pathways (RCPs), namely RCP 2.6, RCP 4.5 and RCP 8.5, up to 2100. In this regard, 26 National Centers for Environmental Prediction (NCEP) reanalysis large-scale variables with potential physical relationships to precipitation were selected as candidate predictors. Afterwards, predictor screening was conducted using correlation, partial correlation and explained variance between predictors and the predictand (precipitation). Depending on the rain-gauge station, two or three predictors were selected, whose decomposed detail (D) and approximation (A) components obtained from discrete wavelet analysis were fed as inputs to the neural networks. After downscaling of daily precipitation, bias correction was conducted using quantile mapping. Of the complete time series available, i.e. 1978-2005, two thirds, namely 1978-1996, was used for calibration of QM and the remainder, i.e. 1997-2005, was used for validation. Results showed that the proposed
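    The quantile mapping step described in this record can be sketched with empirical quantiles: each model value is sent to the observed value with the same model-based non-exceedance probability. The gamma-distributed "observed" and "model" series below are synthetic stand-ins for the station and CanESM2 data:

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=3.0, size=4000)     # observed precipitation
model = rng.gamma(shape=2.0, scale=4.0, size=4000)   # biased model output (too wet)

def quantile_map(values, model_ref, obs_ref, n_q=99):
    """Empirical quantile mapping: map each model value to the observed value
    with the same (model-based) non-exceedance probability."""
    probs = np.linspace(0.01, 0.99, n_q)
    mq = np.quantile(model_ref, probs)
    oq = np.quantile(obs_ref, probs)
    return np.interp(values, mq, oq)

corrected = quantile_map(model, model, obs)
print(round(model.mean(), 2), round(corrected.mean(), 2), round(obs.mean(), 2))
```

In climate applications the mapping is usually calibrated on one period (here 1978-1996) and then applied to an independent validation or projection period; parametric transfer functions are also common.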

  2. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care

    PubMed Central

    Kowalski, Amanda

    2015-01-01

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member’s injury to induce variation in an individual’s own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from −0.76 to −1.49, which are an order of magnitude larger than previous estimates. PMID:26977117

  3. miRNA Temporal Analyzer (mirnaTA): a bioinformatics tool for identifying differentially expressed microRNAs in temporal studies using normal quantile transformation.

    PubMed

    Cer, Regina Z; Herrera-Galeano, J Enrique; Anderson, Joseph J; Bishop-Lilly, Kimberly A; Mokashi, Vishwesh P

    2014-01-01

    Understanding the biological roles of microRNAs (miRNAs) is an active area of research that has produced a surge of publications in PubMed, particularly in cancer research. Along with this increasing interest, many open-source bioinformatics tools to identify existing and/or discover novel miRNAs in next-generation sequencing (NGS) reads have become available. While miRNA identification and discovery tools have improved significantly, the development of miRNA differential expression analysis tools, especially for temporal studies, remains substantially challenging. Further, the installation of currently available software is non-trivial, and the steps of testing with example datasets, trying one's own dataset, and interpreting the results require notable expertise and time. Consequently, there is a strong need for a tool that allows scientists to normalize raw data, perform statistical analyses, and obtain intuitive results without having to invest significant effort. We have developed miRNA Temporal Analyzer (mirnaTA), a bioinformatics package to identify differentially expressed miRNAs in temporal studies. mirnaTA is written in Perl and R (Version 2.13.0 or later) and can be run across multiple platforms, such as Linux, Mac and Windows. In the current version, mirnaTA requires users to provide a simple, tab-delimited matrix file containing miRNA names and count data from a minimum of two to a maximum of 20 time points and three replicates. To recalibrate data and remove technical variability, raw data are normalized using Normal Quantile Transformation (NQT), and a linear regression model is used to locate any miRNAs that are differentially expressed in a linear pattern. Subsequently, remaining miRNAs that do not fit a linear model are further analyzed with two different non-linear methods: 1) the cumulative distribution function (CDF) or 2) analysis of variances (ANOVA). After both linear and non-linear analyses are completed, statistically significant miRNAs (P < 0
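    A normal quantile transformation of the kind this record describes can be sketched as a rank-based mapping to normal scores. The Blom-style plotting position and the synthetic count data below are illustrative assumptions, not mirnaTA's exact implementation:

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(4)
counts = rng.negative_binomial(5, 0.3, size=200)  # skewed, miRNA-like count data

def normal_quantile_transform(x):
    """Map values to normal scores via their ranks (Blom plotting position)."""
    ranks = rankdata(x)               # average ranks handle ties in count data
    return norm.ppf((ranks - 0.375) / (len(x) + 0.25))

z = normal_quantile_transform(counts)
print(round(z.mean(), 2), round(z.std(), 2))
```

After the transform, the data are approximately standard normal, which justifies downstream linear regression on the normalized values.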

  4. Bayesian regression analyses of radiation modality effects on pericardial and pleural effusion and survival in esophageal cancer.

    PubMed

    He, Liru; Chapple, Andrew; Liao, Zhongxing; Komaki, Ritsuko; Thall, Peter F; Lin, Steven H

    2016-10-01

    To evaluate radiation modality effects on pericardial effusion (PCE), pleural effusion (PE) and survival in esophageal cancer (EC) patients. We analyzed data from 470 EC patients treated with definitive concurrent chemoradiotherapy (CRT). Bayesian semi-competing risks (SCR) regression models were fit to assess effects of radiation modality and prognostic covariates on the risks of PCE and PE, and death either with or without these preceding events. Bayesian piecewise exponential regression models were fit for overall survival, the time to PCE or death, and the time to PE or death. All models included propensity score as a covariate to correct for potential selection bias. Median times to onset of PCE and PE after RT were 7.1 and 6.1months for IMRT, and 6.5 and 5.4months for 3DCRT, respectively. Compared to 3DCRT, the IMRT group had significantly lower risks of PE, PCE, and death. The respective probabilities of a patient being alive without either PCE or PE at 3-years and 5-years were 0.29 and 0.21 for IMRT compared to 0.13 and 0.08 for 3DCRT. In the SCR regression analyses, IMRT was associated with significantly lower risks of PCE (HR=0.26) and PE (HR=0.49), and greater overall survival (probability of beneficial effect (pbe)>0.99), after controlling for known clinical prognostic factors. IMRT reduces the incidence and postpones the onset of PCE and PE, and increases survival probability, compared to 3DCRT. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Bayesian Unimodal Density Regression for Causal Inference

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2011-01-01

    Karabatsos and Walker (2011) introduced a new Bayesian nonparametric (BNP) regression model. Through analyses of real and simulated data, they showed that the BNP regression model outperforms other parametric and nonparametric regression models of common use, in terms of predictive accuracy of the outcome (dependent) variable. The other,…

  6. Improved Dietary Guidelines for Vitamin D: Application of Individual Participant Data (IPD)-Level Meta-Regression Analyses

    PubMed Central

    Cashman, Kevin D.; Ritz, Christian; Kiely, Mairead

    2017-01-01

    Dietary Reference Values (DRVs) for vitamin D have a key role in the prevention of vitamin D deficiency. However, despite adopting similar risk assessment protocols, estimates from authoritative agencies over the last 6 years have been diverse. This may have arisen from diverse approaches to data analysis. Modelling strategies for pooling of individual subject data from cognate vitamin D randomized controlled trials (RCTs) are likely to provide the most appropriate DRV estimates. Thus, the objective of the present work was to undertake the first-ever individual participant data (IPD)-level meta-regression, which is increasingly recognized as best practice, from seven winter-based RCTs (with 882 participants ranging in age from 4 to 90 years) of the vitamin D intake–serum 25-hydroxyvitamin D (25(OH)D) dose-response. Our IPD-derived estimates of vitamin D intakes required to maintain 97.5% of 25(OH)D concentrations >25, 30, and 50 nmol/L across the population are 10, 13, and 26 µg/day, respectively. In contrast, standard meta-regression analyses with aggregate data (as used by several agencies in recent years) from the same RCTs estimated that a vitamin D intake requirement of 14 µg/day would maintain 97.5% of 25(OH)D >50 nmol/L. These first IPD-derived estimates offer improved dietary recommendations for vitamin D because the underpinning modeling captures the between-person variability in response of serum 25(OH)D to vitamin D intake. PMID:28481259

  7. Meta-regression analyses, meta-analyses, and trial sequential analyses of the effects of supplementation with beta-carotene, vitamin A, and vitamin E singly or in different combinations on all-cause mortality: do we have evidence for lack of harm?

    PubMed

    Bjelakovic, Goran; Nikolova, Dimitrinka; Gluud, Christian

    2013-01-01

    Evidence shows that antioxidant supplements may increase mortality. Our aims were to assess whether different doses of beta-carotene, vitamin A, and vitamin E affect mortality in primary and secondary prevention randomized clinical trials with low risk of bias. The present study is based on our 2012 Cochrane systematic review analyzing beneficial and harmful effects of antioxidant supplements in adults. Using random-effects meta-analyses, meta-regression analyses, and trial sequential analyses, we examined the association between beta-carotene, vitamin A, and vitamin E and mortality according to their daily doses and doses below and above the recommended daily allowances (RDA). We included 53 randomized trials with low risk of bias (241,883 participants, aged 18 to 103 years, 44.6% women) assessing beta-carotene, vitamin A, and vitamin E. Meta-regression analysis showed that the dose of vitamin A was significantly positively associated with all-cause mortality. Beta-carotene in a dose above 9.6 mg significantly increased mortality (relative risk (RR) 1.06, 95% confidence interval (CI) 1.02 to 1.09, I(2) = 13%). Vitamin A in a dose above the RDA (> 800 µg) did not significantly influence mortality (RR 1.08, 95% CI 0.98 to 1.19, I(2) = 53%). Vitamin E in a dose above the RDA (> 15 mg) significantly increased mortality (RR 1.03, 95% CI 1.00 to 1.05, I(2) = 0%). Doses below the RDAs did not affect mortality, but data were sparse. Beta-carotene and vitamin E in doses higher than the RDA seem to significantly increase mortality, whereas we lack information on vitamin A. Dose of vitamin A was significantly associated with increased mortality in meta-regression. We lack information on doses below the RDA. Not all compounds essential for health can be synthesized in the body. Therefore, these compounds must be obtained through the diet or in other ways [1]. Oxidative stress has been suggested to cause a variety of diseases [2]. Therefore, it is speculated that

  8. Probabilistic forecasting for extreme NO2 pollution episodes.

    PubMed

    Aznarte, José L

    2017-10-01

    In this study, we investigate the suitability of quantile regression for predicting extreme concentrations of NO2. Contrary to usual point-forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows the full probability distribution to be predicted, which in turn allows models to be built that are specifically fit for the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measures, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we show that the predictions are accurate, reliable and sharp. Besides, we study the relative importance of the independent variables involved, and show how the variables important for the median quantile differ from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceedance of thresholds, which is a simple and comprehensible way to present probabilistic forecasts that maximizes their usefulness. Copyright © 2017 Elsevier Ltd. All rights reserved.
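    Computing the probability of exceeding a threshold from a set of quantile forecasts, as this record proposes, amounts to interpolating the CDF implied by the quantiles. The quantile levels and NO2 values below are invented for illustration:

```python
import numpy as np

# hypothetical NO2 quantile forecasts for one hour (µg/m3)
taus = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
q_pred = np.array([18.0, 31.0, 42.0, 58.0, 90.0])

def prob_exceedance(threshold, taus, q_pred):
    """Interpolate the forecast CDF implied by the quantiles, then
    return P(concentration > threshold)."""
    cdf = np.interp(threshold, q_pred, taus, left=0.0, right=1.0)
    return 1.0 - cdf

print(prob_exceedance(58.0, taus, q_pred))  # → 0.25
```

Below the lowest forecast quantile the exceedance probability is clamped to 1, and above the highest to 0; finer quantile grids give smoother implied CDFs.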

  9. Visual field progression in glaucoma: estimating the overall significance of deterioration with permutation analyses of pointwise linear regression (PoPLR).

    PubMed

    O'Leary, Neil; Chauhan, Balwantray C; Artes, Paul H

    2012-10-01

    To establish a method for estimating the overall statistical significance of visual field deterioration from an individual patient's data, and to compare its performance to pointwise linear regression. The Truncated Product Method was used to calculate a statistic S that combines evidence of deterioration from individual test locations in the visual field. The overall statistical significance (P value) of visual field deterioration was inferred by comparing S with its permutation distribution, derived from repeated reordering of the visual field series. Permutation of pointwise linear regression (PoPLR) and pointwise linear regression were evaluated in data from patients with glaucoma (944 eyes, median mean deviation -2.9 dB, interquartile range: -6.3, -1.2 dB) followed for more than 4 years (median 10 examinations over 8 years). False-positive rates were estimated from randomly reordered series of this dataset, and hit rates (proportion of eyes with significant deterioration) were estimated from the original series. The false-positive rates of PoPLR were indistinguishable from the corresponding nominal significance levels and were independent of baseline visual field damage and length of follow-up. At P < 0.05, the hit rates of PoPLR were 12, 29, and 42%, at the fifth, eighth, and final examinations, respectively, and at matching specificities they were consistently higher than those of pointwise linear regression. In contrast to population-based progression analyses, PoPLR provides a continuous estimate of statistical significance for visual field deterioration individualized to a particular patient's data. This allows close control over specificity, essential for monitoring patients in clinical practice and in clinical trials.
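    The PoPLR procedure can be sketched as follows: combine one-sided pointwise-regression p-values with the Truncated Product Method, then compare the observed statistic with its distribution under random reorderings of the examinations. The synthetic visual-field series, truncation point and permutation count below are illustrative, not the paper's exact implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_visits, n_locs = 10, 52
# synthetic visual-field series: most locations stable, five deteriorating over time
series = rng.normal(0.0, 1.0, (n_visits, n_locs))
series[:, :5] -= 0.5 * np.arange(n_visits)[:, None]

def onesided_pvals(series):
    """One-sided p-value for deterioration (negative slope) at each location."""
    t = np.arange(series.shape[0], dtype=float)
    ps = []
    for j in range(series.shape[1]):
        res = stats.linregress(t, series[:, j])
        ps.append(res.pvalue / 2 if res.slope < 0 else 1 - res.pvalue / 2)
    return np.array(ps)

def truncated_product_S(p, trunc=0.05):
    # Truncated Product Method: combine only p-values below the truncation point
    p = p[p < trunc]
    return -np.log(p).sum() if p.size else 0.0

obs_S = truncated_product_S(onesided_pvals(series))
perm_S = [truncated_product_S(onesided_pvals(series[rng.permutation(n_visits)]))
          for _ in range(200)]
# permutation p-value for overall deterioration, observed ordering included
p_overall = (1 + sum(s >= obs_S for s in perm_S)) / (len(perm_S) + 1)
print(p_overall)
```

Because the null distribution comes from reorderings of the patient's own series, specificity is controlled per patient without reference to normative population data.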

  10. A Simulation Investigation of Principal Component Regression.

    ERIC Educational Resources Information Center

    Allen, David E.

    Regression analysis is one of the more common analytic tools used by researchers. However, multicollinearity between the predictor variables can cause problems in using the results of regression analyses. Problems associated with multicollinearity include entanglement of relative influences of variables due to reduced precision of estimation,…

  11. Predictors of High Profit and High Deficit Outliers under SwissDRG of a Tertiary Care Center

    PubMed Central

    Mehra, Tarun; Müller, Christian Thomas Benedikt; Volbracht, Jörk; Seifert, Burkhardt; Moos, Rudolf

    2015-01-01

    Principles Case weights of Diagnosis Related Groups (DRGs) are determined by the average cost of cases from a previous billing period. However, a significant number of cases are largely over- or underfunded. We therefore decided to analyze earning outliers of our hospital to search for predictors enabling a better grouping under SwissDRG. Methods 28,893 inpatient cases without additional private insurance discharged from our hospital in 2012 were included in our analysis. Outliers were defined by the interquartile range method. Predictors for deficit and profit outliers were determined with logistic regressions. Predictors were shortlisted with the LASSO regularized logistic regression method and compared to the results of Random forest analysis. Ten of these parameters were selected for quantile regression analysis to quantify their impact on earnings. Results Psychiatric diagnosis and admission as an emergency case were significant predictors of a higher deficit, with negative regression coefficients for all analyzed quantiles (p<0.001). Admission from an external health care provider was a significant predictor of a higher deficit in all but the 90% quantile (p<0.001 for Q10, Q20, Q50, Q80 and p = 0.0017 for Q90). Burns predicted higher earnings for cases which were favorably remunerated (p<0.001 for the 90% quantile). Osteoporosis predicted a higher deficit in the most underfunded cases, but did not predict differences in earnings for balanced or profitable cases (Q10 and Q20: p<0.00, Q50: p = 0.10, Q80: p = 0.88 and Q90: p = 0.52). ICU stay, mechanical and patient clinical complexity level score (PCCL) predicted higher losses at the 10% quantile but also higher profits at the 90% quantile (p<0.001). Conclusion We suggest considering psychiatric diagnosis, admission as an emergency case and admission from an external health care provider as DRG split criteria, as they predict large, consistent and significant losses. PMID:26517545

  12. Predictors of High Profit and High Deficit Outliers under SwissDRG of a Tertiary Care Center.

    PubMed

    Mehra, Tarun; Müller, Christian Thomas Benedikt; Volbracht, Jörk; Seifert, Burkhardt; Moos, Rudolf

    2015-01-01

    Case weights of Diagnosis Related Groups (DRGs) are determined by the average cost of cases from a previous billing period. However, a significant number of cases are largely over- or underfunded. We therefore decided to analyze earning outliers of our hospital to search for predictors enabling a better grouping under SwissDRG. 28,893 inpatient cases without additional private insurance discharged from our hospital in 2012 were included in our analysis. Outliers were defined by the interquartile range method. Predictors for deficit and profit outliers were determined with logistic regressions. Predictors were shortlisted with the LASSO regularized logistic regression method and compared to the results of Random forest analysis. Ten of these parameters were selected for quantile regression analysis to quantify their impact on earnings. Psychiatric diagnosis and admission as an emergency case were significant predictors of a higher deficit, with negative regression coefficients for all analyzed quantiles (p<0.001). Admission from an external health care provider was a significant predictor of a higher deficit in all but the 90% quantile (p<0.001 for Q10, Q20, Q50, Q80 and p = 0.0017 for Q90). Burns predicted higher earnings for cases which were favorably remunerated (p<0.001 for the 90% quantile). Osteoporosis predicted a higher deficit in the most underfunded cases, but did not predict differences in earnings for balanced or profitable cases (Q10 and Q20: p<0.00, Q50: p = 0.10, Q80: p = 0.88 and Q90: p = 0.52). ICU stay, mechanical and patient clinical complexity level score (PCCL) predicted higher losses at the 10% quantile but also higher profits at the 90% quantile (p<0.001). We suggest considering psychiatric diagnosis, admission as an emergency case and admission from an external health care provider as DRG split criteria, as they predict large, consistent and significant losses.

  13. Classification and regression tree analysis vs. multivariable linear and logistic regression methods as statistical tools for studying haemophilia.

    PubMed

    Henrard, S; Speybroeck, N; Hermans, C

    2015-11-01

    Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII, for haemophilia A, or IX, for haemophilia B. As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regressions, for binary outcomes, and multiple linear regressions for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to didactically explain how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method, developed by Breiman in 1984, is non-parametric and non-linear, relying on the repeated partitioning of a sample into subgroups according to a certain criterion. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few examples of studies using this methodology specifically in haemophilia have been published to date. Two previously published examples using CART analysis in this field are didactically explained in detail. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, thus facilitating medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.
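
The "repeated partitioning" at the heart of CART can be illustrated by the single split step of a regression tree: scan every candidate threshold on a predictor and keep the one that minimizes the summed squared error of the two resulting leaves. A minimal sketch with an invented toy outcome (not the haemophilia examples from the paper):

```python
import numpy as np

def best_split(x, y):
    """One step of a regression tree: choose the threshold on x that
    minimizes the summed squared error of the two resulting leaves.
    CART applies this step recursively within each subgroup."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best_thr = np.inf, None
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_thr = sse, (xs[i - 1] + xs[i]) / 2
    return best_thr

# Toy example: the outcome level shifts at x = 5
x = np.arange(10, dtype=float)
y = np.where(x < 5, 1.0, 9.0)
threshold = best_split(x, y)   # -> 4.5
```

A classification tree works the same way, with an impurity measure such as the Gini index replacing the squared error.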

  14. Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis.

    PubMed

    Nieves, Jeri W; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J Americo M; Sorenson, Eric J; D'Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi

    2016-12-01

    There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale-Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5-68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of "good" micronutrients and "good" food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. Those responsible for nutritional

  15. Prenatal lead exposure and fetal growth: Smaller infants have heightened susceptibility.

    PubMed

    Rodosthenous, Rodosthenis S; Burris, Heather H; Svensson, Katherine; Amarasiriwardena, Chitra J; Cantoral, Alejandra; Schnaas, Lourdes; Mercado-García, Adriana; Coull, Brent A; Wright, Robert O; Téllez-Rojo, Martha M; Baccarelli, Andrea A

    2017-02-01

    As population lead levels decrease, the toxic effects of lead may be distributed to more sensitive populations, such as infants with poor fetal growth. To determine the association of prenatal lead exposure and fetal growth; and to evaluate whether infants with poor fetal growth are more susceptible to lead toxicity than those with normal fetal growth. We examined the association of second trimester maternal blood lead levels (BLL) with birthweight-for-gestational age (BWGA) z-score in 944 mother-infant participants of the PROGRESS cohort. We determined the association between maternal BLL and BWGA z-score by using both linear and quantile regression. We estimated odds ratios for small-for-gestational age (SGA) infants between maternal BLL quartiles using logistic regression. Maternal age, body mass index, socioeconomic status, parity, household smoking exposure, hemoglobin levels, and infant sex were included as confounders. While linear regression showed a negative association between maternal BLL and BWGA z-score (β=-0.06 z-score units per log 2 BLL increase; 95% CI: -0.13, 0.003; P=0.06), quantile regression revealed larger magnitudes of this association in the <30th percentiles of BWGA z-score (β range [-0.08, -0.13] z-score units per log 2 BLL increase; all P values<0.05). Mothers in the highest BLL quartile had an odds ratio of 1.62 (95% CI: 0.99-2.65) for having a SGA infant compared to the lowest BLL quartile. While both linear and quantile regression showed a negative association between prenatal lead exposure and birthweight, quantile regression revealed that smaller infants may represent a more susceptible subpopulation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Logistic regression for circular data

    NASA Astrophysics Data System (ADS)

    Al-Daffaie, Kadhem; Khan, Shahjahan

    2017-05-01

    This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
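
The linear-circular approach described above enters a circular predictor through its sine and cosine, after which ordinary maximum-likelihood logistic regression applies, and Newton-Raphson solves the score equations. A sketch on simulated angles (the Toowoomba weather data are not reproduced; the coefficients are invented):

```python
import numpy as np

def fit_logistic_newton(X, y, iters=25):
    """Maximum-likelihood logistic regression fitted by Newton-Raphson
    (iteratively reweighted least squares)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                        # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])   # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(1)
n = 2000
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # circular predictor, e.g. wind direction
# Linear-circular embedding: the angle enters through cos and sin
X = np.column_stack([np.ones(n), np.cos(theta), np.sin(theta)])
true_beta = np.array([-0.5, 1.2, 0.8])     # invented for simulation
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = fit_logistic_newton(X, y)
```

The cos/sin pair makes the fitted success probability a periodic function of the angle, so predictions at 0 and 2π agree by construction.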

  17. Regression Analysis: Legal Applications in Institutional Research

    ERIC Educational Resources Information Center

    Frizell, Julie A.; Shippen, Benjamin S., Jr.; Luna, Andrew L.

    2008-01-01

    This article reviews multiple regression analysis, describes how its results should be interpreted, and instructs institutional researchers on how to conduct such analyses using an example focused on faculty pay equity between men and women. The use of multiple regression analysis will be presented as a method with which to compare salaries of…

  18. Heterogeneity in drug abuse among juvenile offenders: is mixture regression more informative than standard regression?

    PubMed

    Montgomery, Katherine L; Vaughn, Michael G; Thompson, Sanna J; Howard, Matthew O

    2013-11-01

    Research on juvenile offenders has largely treated this population as a homogeneous group. However, recent findings suggest that this at-risk population may be considerably more heterogeneous than previously believed. This study compared mixture regression analyses with standard regression techniques in an effort to explain how known factors such as distress, trauma, and personality are associated with drug abuse among juvenile offenders. Researchers recruited 728 juvenile offenders from Missouri juvenile correctional facilities for participation in this study. Researchers investigated past-year substance use in relation to the following variables: demographic characteristics (gender, ethnicity, age, familial use of public assistance), antisocial behavior, and mental illness symptoms (psychopathic traits, psychiatric distress, and prior trauma). Results indicated that standard and mixed regression approaches identified significant variables related to past-year substance use among this population; however, the mixture regression methods provided greater specificity in results. Mixture regression analytic methods may help policy makers and practitioners better understand and intervene with the substance-related subgroups of juvenile offenders.

  19. Pressure Points in Reading Comprehension: A Quantile Multiple Regression Analysis

    ERIC Educational Resources Information Center

    Logan, Jessica

    2017-01-01

    The goal of this study was to examine how selected pressure points or areas of vulnerability are related to individual differences in reading comprehension and whether the importance of these pressure points varies as a function of the level of children's reading comprehension. A sample of 245 third-grade children were given an assessment battery…

  20. Teacher Salaries and Teacher Aptitude: An Analysis Using Quantile Regressions

    ERIC Educational Resources Information Center

    Gilpin, Gregory A.

    2012-01-01

    This study investigates the relationship between salaries and scholastic aptitude for full-time public high school humanities and mathematics/sciences teachers. For identification, we rely on variation in salaries between adjacent school districts within the same state. The results indicate that teacher aptitude is positively correlated with…

  1. Survival Data and Regression Models

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first one concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariables. The second one is a semiparametric model, where the covariables enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology; although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.

  2. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE PAGES

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.

  4. Shrinkage regression-based methods for microarray missing value imputation.

    PubMed

    Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng

    2013-01-01

    Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
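
The pipeline described (select the genes most correlated with the target, fit least squares, shrink the coefficients, predict the missing entry) can be sketched as below. This is a generic illustration with an invented fixed shrinkage factor, not the authors' exact estimator:

```python
import numpy as np

def impute_shrinkage(data, gene, col, k=3, shrink=0.9):
    """Impute data[gene, col]: pick the k genes most Pearson-correlated
    with the target over the observed columns, fit least squares, then
    shrink the fitted slopes before predicting the missing value.
    The fixed `shrink` factor is a stand-in for the paper's adjustment."""
    obs = [j for j in range(data.shape[1]) if j != col]
    target = data[gene, obs]
    others = np.delete(data, gene, axis=0)
    cors = np.abs([np.corrcoef(target, row[obs])[0, 1] for row in others])
    top = np.argsort(cors)[-k:]                        # most similar genes
    A = np.column_stack([np.ones(len(obs)), others[top][:, obs].T])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    coef[1:] *= shrink                                 # shrink the slopes
    return coef[0] + coef[1:] @ others[top][:, col]

rng = np.random.default_rng(7)
expr = rng.normal(size=(10, 30))                       # 10 genes x 30 arrays, invented
expr[0] = 2.0 * expr[1] + 1.0 + rng.normal(0, 0.01, 30)  # gene 0 tracks gene 1
truth = expr[0, 0]
estimate = impute_shrinkage(expr, gene=0, col=0)
```

Because gene 0 is nearly a linear function of gene 1, the correlation step selects gene 1 and the shrunken fit recovers the hidden entry closely.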

  5. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
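
The metamodeling recipe (simulate many PSA cohorts, regress the outcome on standardized inputs, read the intercept as the base-case estimate and the slopes as sensitivity measures) can be sketched with a two-parameter toy model standing in for the cancer cure model; all names and values here are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                   # simulated PSA cohorts
# Hypothetical model inputs: cure probability and annual cost per cohort
p_cure = rng.normal(0.30, 0.05, n)
cost = rng.normal(20_000, 2_500, n)
# Stand-in for the decision model's output at each sampled input set
net_benefit = 50_000 * p_cure - cost + rng.normal(0.0, 500.0, n)

# Standardize inputs so each slope is the outcome change per 1 SD of input
Z = np.column_stack([(v - v.mean()) / v.std() for v in (p_cure, cost)])
A = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(A, net_benefit, rcond=None)
intercept, b_cure, b_cost = coef             # intercept ~ base-case outcome
```

With centered inputs the intercept equals the mean simulated outcome, and comparing |b_cure| with |b_cost| plays the role of a tornado-style sensitivity ranking while using every PSA draw.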

  6. The gender gap reloaded: are school characteristics linked to labor market performance?

    PubMed

    Konstantopoulos, Spyros; Constant, Amelie

    2008-06-01

    This study examines the wage gender gap of young adults in the 1970s, 1980s, and 2000 in the US. Using quantile regression we estimate the gender gap across the entire wage distribution. We also study the importance of high school characteristics in predicting future labor market performance. We conduct analyses for three major racial/ethnic groups in the US: Whites, Blacks, and Hispanics, employing data from two rich longitudinal studies: NLS and NELS. Our results indicate that while some school characteristics are positive and significant predictors of future wages for Whites, they are less so for the two minority groups. We find significant wage gender disparities favoring men across all three surveys in the 1970s, 1980s, and 2000. The wage gender gap is more pronounced in higher paid jobs (90th quantile) for all groups, indicating the presence of a persistent and alarming "glass ceiling."

  7. Height-income association in developing countries: Evidence from 14 countries.

    PubMed

    Patel, Pankaj C; Devaraj, Srikant

    2017-12-28

    The purpose of this study was to assess whether the height-income association is positive in developing countries, and whether income differences between shorter and taller individuals in developing countries are explained by differences in endowment (ie, taller individuals have a higher income than shorter individuals because of characteristics such as better social skills) or due to discrimination (ie, shorter individuals have a lower income despite having comparable characteristics). Instrumental variable regression, Oaxaca-Blinder decomposition, quantile regression, and quantile decomposition analyses were applied to a sample of 45 108 respondents from 14 developing countries represented in the Research on Early Life and Aging Trends and Effects (RELATE) study. For a one-centimeter increase in country- and sex-adjusted median height, real income adjusted for purchasing power parity increased by 1.37%. The income differential between shorter and taller individuals was explained by discrimination and not by differences in endowments; however, the effect of discrimination decreased at higher values of country- and sex-adjusted height. Taller individuals in developing countries may realize higher income despite having characteristics similar to those of shorter individuals. © 2017 Wiley Periodicals, Inc.

  8. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    PubMed

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We
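
Both quantities discussed here, the MR-Egger fit and the I²GX dilution diagnostic, are short computations on summary statistics. A simplified sketch on invented per-variant effect estimates (variants assumed already oriented to positive exposure effects; this is not the paper's simulation design):

```python
import numpy as np

def mr_egger(beta_gx, beta_gy, se_gy):
    """Simplified MR-Egger: weighted least-squares regression of the
    variant-outcome effects on the variant-exposure effects, with an
    intercept capturing directional pleiotropy."""
    w = 1.0 / se_gy**2
    X = np.column_stack([np.ones_like(beta_gx), beta_gx])
    WX = X * w[:, None]
    return np.linalg.solve(X.T @ WX, WX.T @ beta_gy)   # [intercept, slope]

def i_squared_gx(beta_gx, se_gx):
    """I^2_GX: quantifies violation of the NO-Measurement-Error (NOME)
    assumption; values near 1 imply little dilution of the MR-Egger slope."""
    w = 1.0 / se_gx**2
    mu = np.sum(w * beta_gx) / np.sum(w)
    q = np.sum(w * (beta_gx - mu) ** 2)
    return max(0.0, (q - (len(beta_gx) - 1)) / q)

rng = np.random.default_rng(3)
k = 50
beta_gx = rng.uniform(0.1, 0.3, k)           # strong, precisely estimated instruments
se_gx = np.full(k, 0.002)
se_gy = np.full(k, 0.01)
beta_gy = 0.02 + 0.5 * beta_gx + rng.normal(0.0, se_gy)  # pleiotropy 0.02, causal 0.5
intercept_est, slope_est = mr_egger(beta_gx, beta_gy, se_gy)
i2_gx = i_squared_gx(beta_gx, se_gx)
```

With precise exposure estimates, I²GX is close to 1 and the slope recovers the simulated causal effect; shrinking se_gx toward the spread of beta_gx would drive I²GX down and dilute the slope, which is the effect the abstract describes.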

  9. The more total cognitive load is reduced by cues, the better retention and transfer of multimedia learning: A meta-analysis and two meta-regression analyses.

    PubMed

    Xie, Heping; Wang, Fuxing; Hao, Yanbin; Chen, Jiaxue; An, Jing; Wang, Yuxin; Liu, Huashan

    2017-01-01

    Cueing facilitates retention and transfer of multimedia learning. From the perspective of cognitive load theory (CLT), cueing has a positive effect on learning outcomes because of the reduction in total cognitive load and avoidance of cognitive overload. However, this has not been systematically evaluated. Moreover, what remains ambiguous is the direct relationship between the cue-related cognitive load and learning outcomes. A meta-analysis and two subsequent meta-regression analyses were conducted to explore these issues. Subjective total cognitive load (SCL) and scores on a retention test and transfer test were selected as dependent variables. Through a systematic literature search, 32 eligible articles encompassing 3,597 participants were included in the SCL-related meta-analysis. Among them, 25 articles containing 2,910 participants were included in the retention-related meta-analysis and the following retention-related meta-regression, while there were 29 articles containing 3,204 participants included in the transfer-related meta-analysis and the transfer-related meta-regression. The meta-analysis revealed a statistically significant cueing effect on subjective ratings of cognitive load (d = -0.11, 95% CI = [-0.19, -0.02], p < 0.05), retention performance (d = 0.27, 95% CI = [0.08, 0.46], p < 0.01), and transfer performance (d = 0.34, 95% CI = [0.12, 0.56], p < 0.01). The subsequent meta-regression analyses showed that d_SCL for cueing significantly predicted d_retention for cueing (β = -0.70, 95% CI = [-1.02, -0.38], p < 0.001), as well as d_transfer for cueing (β = -0.60, 95% CI = [-0.92, -0.28], p < 0.001). Thus, in line with CLT, adding cues in multimedia materials can indeed reduce SCL and promote learning outcomes, and the more SCL is reduced by cues, the better retention and transfer of multimedia learning.

  11. Prenatal Lead Exposure and Fetal Growth: Smaller Infants Have Heightened Susceptibility

    PubMed Central

    Rodosthenous, Rodosthenis S.; Burris, Heather H.; Svensson, Katherine; Amarasiriwardena, Chitra J.; Cantoral, Alejandra; Schnaas, Lourdes; Mercado-García, Adriana; Coull, Brent A.; Wright, Robert O.; Téllez-Rojo, Martha M.; Baccarelli, Andrea A.

    2016-01-01

    Background As population lead levels decrease, the toxic effects of lead may be distributed to more sensitive populations, such as infants with poor fetal growth. Objectives To determine the association of prenatal lead exposure and fetal growth; and to evaluate whether infants with poor fetal growth are more susceptible to lead toxicity than those with normal fetal growth. Methods We examined the association of second trimester maternal blood lead levels (BLL) with birthweight-for-gestational age (BWGA) z-score in 944 mother-infant participants of the PROGRESS cohort. We determined the association between maternal BLL and BWGA z-score by using both linear and quantile regression. We estimated odds ratios for small-for-gestational age (SGA) infants between maternal BLL quartiles using logistic regression. Maternal age, body mass index, socioeconomic status, parity, household smoking exposure, hemoglobin levels, and infant sex were included as confounders. Results While linear regression showed a negative association between maternal BLL and BWGA z-score (β=−0.06 z-score units per log2 BLL increase; 95% CI: −0.13, 0.003; P=0.06), quantile regression revealed larger magnitudes of this association in the <30th percentiles of BWGA z-score (β range [−0.08, −0.13] z-score units per log2 BLL increase; all P values <0.05). Mothers in the highest BLL quartile had an odds ratio of 1.62 (95% CI: 0.99–2.65) for having a SGA infant compared to the lowest BLL quartile. Conclusions While both linear and quantile regression showed a negative association between prenatal lead exposure and birthweight, quantile regression revealed that smaller infants may represent a more susceptible subpopulation. PMID:27923585

  12. Bisphenol-A exposures and behavioural aberrations: median and linear spline and meta-regression analyses of 12 toxicity studies in rodents.

    PubMed

    Peluso, Marco E M; Munnia, Armelle; Ceppi, Marcello

    2014-11-05

    Exposures to bisphenol-A, a weak estrogenic chemical largely used for the production of plastic containers, can affect rodent behaviour. Thus, we examined the relationships between bisphenol-A and anxiety-like behaviour, spatial skills, and aggressiveness in 12 toxicity studies of rodent offspring from females orally exposed to bisphenol-A, while pregnant and/or lactating, by median and linear spline analyses. Subsequently, meta-regression analysis was applied to quantify the behavioural changes. U-shaped, inverted U-shaped and J-shaped dose-response curves were found to describe the relationships between bisphenol-A and the behavioural outcomes. The occurrence of anxiogenic-like effects and spatial skill changes displayed U-shaped and inverted U-shaped curves, respectively, providing examples of effects that are observed at low doses. Conversely, a J-shaped dose-response relationship was observed for aggressiveness. When the proportion of rodents expressing certain traits or the time they took to manifest a behaviour was analysed, the meta-regression indicated that a borderline significant increment of anxiogenic-like effects was present at low doses regardless of sex (β = -0.8%, 95% C.I. -1.7/0.1, P = 0.076, at ≤120 μg bisphenol-A). In contrast, only bisphenol-A-exposed males exhibited a significant inhibition of spatial skills (β = 0.7%, 95% C.I. 0.2/1.2, P = 0.004, at ≤100 μg/day). A significant increment of aggressiveness was observed in both sexes (β = 67.9, C.I. 3.4/172.5, P = 0.038, at >4.0 μg). Bisphenol-A treatments also significantly abrogated spatial learning and ability in males (P<0.001 vs. females). Overall, our study showed that developmental exposures to low doses of bisphenol-A, e.g. ≤120 μg/day, were associated with behavioural aberrations in offspring. Copyright © 2014. Published by Elsevier Ireland Ltd.

  13. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
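
The two agreement statistics discussed, Bland-Altman bias with limits of agreement, and Deming regression for constant and proportional error, are short computations. A minimal sketch on invented paired assay measurements (not the NGS validation data):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement: mean difference (constant bias) between two
    assays plus the 95% limits of agreement."""
    d = a - b
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def deming(x, y, lam=1.0):
    """Deming regression: errors-in-variables line, used when both assays
    are measured with error; lam is the ratio of their error variances."""
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    slope = (syy - lam * sxx
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy**2)) / (2 * sxy)
    return my - slope * mx, slope   # intercept (constant error), slope (proportional error)

# Invented paired measurements with a constant and a proportional difference
x = np.linspace(10.0, 100.0, 30)    # reference assay
y = 2.0 + 1.5 * x                   # assay under validation
bias, limits = bland_altman(y, x)
intercept, slope = deming(x, y)
```

Note how the two views complement each other, as the abstract argues: Bland-Altman summarizes the average disagreement, while the Deming intercept and slope separate the constant error (2.0) from the proportional error (1.5), information an R² value alone would hide.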

  14. Association Between Awareness of Hypertension and Health-Related Quality of Life in a Cross-Sectional Population-Based Study in Rural Area of Northwest China.

    PubMed

    Mi, Baibing; Dang, Shaonong; Li, Qiang; Zhao, Yaling; Yang, Ruihai; Wang, Duolao; Yan, Hong

    2015-07-01

    Hypertensive patients have more complex health care needs and are more likely to have poorer health-related quality of life than normotensive people. Awareness of hypertension could itself be related to reduced health-related quality of life. We propose the use of quantile regression to explore in more detail the relationship between awareness of hypertension and health-related quality of life. In a cross-sectional, population-based study, 2737 participants (1035 hypertensive patients and 1702 normotensive participants) completed the Short-Form Health Survey. A quantile regression model was employed to investigate the association of physical component summary scores and mental component summary scores with awareness of hypertension and to evaluate the associated factors. Patients who were aware of hypertension (N = 554) had lower scores than patients who were unaware of it (N = 481): median (IQR) physical component summary scores were 48.20 (13.88) versus 53.27 (10.79), P < 0.01, and mental component summary scores were 50.68 (15.09) versus 51.70 (10.65), P = 0.03. After adjusting for covariates, the quantile regression results suggest that awareness of hypertension was associated with most physical component summary score quantiles (P < 0.05 except at the 10th and 20th quantiles), with β-estimates ranging from -2.14 (95% CI: -3.80 to -0.48) to -1.45 (95% CI: -2.42 to -0.47), and showed a similar significant trend for some of the poorer mental component summary score quantiles, with β-estimates ranging from -3.47 (95% CI: -6.65 to -0.39) to -2.18 (95% CI: -4.30 to -0.06). Awareness of hypertension had a greater effect on those with intermediate physical component summary status: the β-estimate was -2.04 (95% CI: -3.51 to -0.57, P < 0.05) at the 40th quantile and attenuated to -1.45 (95% CI: -2.42 to -0.47, P < 0.01) at the 90th quantile. Awareness of hypertension was negatively related to health-related quality of life in hypertensive patients in rural western China
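    Quantile regression of the kind used in this record minimizes an asymmetric absolute-error ("pinball") loss rather than squared error, which is what lets the slope differ across quantiles. A pure-numpy sketch on simulated heteroscedastic data (all data invented; a subgradient-descent fit under the stated assumptions, not the study's estimator) shows the 90th-quantile slope exceeding the 10th-quantile slope:

```python
import numpy as np

def fit_quantile_line(x, y, q, n_iter=30000, lr=2e-3):
    """Fit y ~ a + b*x at quantile q by subgradient descent on the
    asymmetric absolute-error (pinball) loss defining regression quantiles."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        r = y - (a + b * x)
        g = np.where(r >= 0, -q, 1.0 - q)   # subgradient w.r.t. the prediction
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b

# Simulated heteroscedastic data: spread grows with x, so upper- and
# lower-quantile slopes diverge (unlike the single OLS slope)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 500) * (1 + 0.5 * x)

a10, b10 = fit_quantile_line(x, y, 0.10)
a90, b90 = fit_quantile_line(x, y, 0.90)
```

    In practice a library estimator (e.g. an interior-point or linear-programming solver) would be used instead of this didactic descent loop.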

  15. Effects of export concentration on CO2 emissions in developed countries: an empirical analysis.

    PubMed

    Apergis, Nicholas; Can, Muhlis; Gozgor, Giray; Lau, Chi Keung Marco

    2018-03-08

    This paper provides evidence on the short- and long-run effects of export product concentration on the level of CO2 emissions in 19 developed (high-income) economies, spanning the period 1962-2010. To this end, the paper makes use of nonlinear panel unit root and cointegration tests with multiple endogenous structural breaks. It also considers mean group estimations, the autoregressive distributed lag model, and panel quantile regression estimations. The findings illustrate that the environmental Kuznets curve (EKC) hypothesis is valid in the panel dataset of 19 developed economies. In addition, they document that a higher level of export product concentration leads to lower CO2 emissions. The results from the panel quantile regressions also indicate that the effect of export product concentration upon per capita CO2 emissions is relatively high at the higher quantiles.

  16. Self-Concept Predicts Academic Achievement Across Levels of the Achievement Distribution: Domain Specificity for Math and Reading.

    PubMed

    Susperreguy, Maria Ines; Davis-Kean, Pamela E; Duckworth, Kathryn; Chen, Meichu

    2017-09-18

    This study examines whether self-concept of ability in math and reading predicts later math and reading attainment across different levels of achievement. Data from three large-scale longitudinal data sets, the Avon Longitudinal Study of Parents and Children, National Institute of Child Health and Human Development-Study of Early Child Care and Youth Development, and Panel Study of Income Dynamics-Child Development Supplement, were used to answer this question by employing quantile regression analyses. After controlling for demographic variables, child characteristics, and early ability, the findings indicate that self-concept of ability in math and reading predicts later achievement in each respective domain across all quantile levels of achievement. These results were replicated across the three data sets representing different populations and provide robust evidence for the role of self-concept of ability in understanding achievement from early childhood to adolescence across the spectrum of performance (low to high). © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  17. Identifying Factors That Predict Promotion Time to E-4 and Re-Enlistment Eligibility for U.S. Marine Corps Field Radio Operators

    DTIC Science & Technology

    2014-12-01

    Primary Military Occupational Specialty PRO Proficiency Q-Q Quantile - Quantile RSS Residual Sum of Squares SI Shop Information T&R Training and...construct multivariate linear regression models to estimate Marines’ Computed Tier Score and time to achieve E-4 based on their individual personal...Science (GS) score, ASVAB Mathematics Knowledge (MK) score, ASVAB Paragraph Comprehension (PC) score, weight, and whether a Marine receives a weight

  18. Classification and regression tree (CART) analyses of genomic signatures reveal sets of tetramers that discriminate temperature optima of archaea and bacteria

    PubMed Central

    Dyer, Betsey D.; Kahn, Michael J.; LeBlanc, Mark D.

    2008-01-01

    Classification and regression tree (CART) analysis was applied to genome-wide tetranucleotide frequencies (genomic signatures) of 195 archaea and bacteria. Although genomic signatures have typically been used to classify evolutionary divergence, in this study, convergent evolution was the focus. Temperature optima for most of the organisms examined could be distinguished by CART analyses of tetranucleotide frequencies. This suggests that pervasive (nonlinear) qualities of genomes may reflect certain environmental conditions (such as temperature) in which those genomes evolved. The predominant use of GAGA and AGGA as the discriminating tetramers in CART models suggests that purine-loading and codon biases of thermophiles may explain some of the results. PMID:19054742
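    CART builds trees by exhaustively searching for the split that most reduces node impurity (commonly the Gini index). A minimal numpy sketch of a single CART-style split on one tetramer frequency (the GAGA frequencies and temperature labels are invented for illustration, not taken from the 195 genomes in the study):

```python
import numpy as np

def gini(labels):
    """Gini impurity of an integer label array."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels) / len(labels)
    return 1.0 - np.sum(p ** 2)

def best_split(freqs, labels):
    """CART-style exhaustive search for the threshold on one feature
    that minimizes the weighted Gini impurity of the two child nodes."""
    best_t, best_score = None, np.inf
    for t in np.unique(freqs):
        left, right = labels[freqs <= t], labels[freqs > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Hypothetical GAGA frequencies for thermophiles (1) vs mesophiles (0)
gaga = np.array([4.1, 3.8, 4.5, 1.2, 1.6, 0.9, 1.1, 4.9])
temp = np.array([1,   1,   1,   0,   0,   0,   0,   1])
threshold, impurity = best_split(gaga, temp)
```

    A full CART model simply applies this search recursively over all features (here, all 256 tetramer frequencies) in each child node.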

  19. Age-, sex-, and education-specific norms for an extended CERAD Neuropsychological Assessment Battery-Results from the population-based LIFE-Adult-Study.

    PubMed

    Luck, Tobias; Pabst, Alexander; Rodriguez, Francisca S; Schroeter, Matthias L; Witte, Veronica; Hinz, Andreas; Mehnert, Anja; Engel, Christoph; Loeffler, Markus; Thiery, Joachim; Villringer, Arno; Riedel-Heller, Steffi G

    2018-05-01

    To provide new age-, sex-, and education-specific reference values for an extended version of the well-established Consortium to Establish a Registry for Alzheimer's Disease Neuropsychological Assessment Battery (CERAD-NAB) that additionally includes the Trail Making Test and the Verbal Fluency Test-S-Words. Norms were calculated based on the cognitive performances of n = 1,888 dementia-free participants (60-79 years) from the population-based German LIFE-Adult-Study. Multiple regressions were used to examine the association of the CERAD-NAB scores with age, sex, and education. In order to calculate the norms, quantile and censored quantile regression analyses were performed, estimating marginal means of the test scores at the 2.28th, 6.68th, 10th, 15.87th, 25th, 50th, 75th, and 90th percentiles for age-, sex-, and education-specific subgroups. Multiple regression analyses revealed that younger age was significantly associated with better cognitive performance in 15 CERAD-NAB measures and higher education with better cognitive performance in all 17 measures. Women performed significantly better than men in 12 measures, and men performed better than women in four measures. The determined norms indicate ceiling effects for the cognitive performances in the Boston Naming, Word List Recognition, Constructional Praxis Copying, and Constructional Praxis Recall tests. The new norms for the extended CERAD-NAB will be useful for evaluating dementia-free German-speaking adults in a broad variety of relevant cognitive domains. The extended CERAD-NAB follows more closely the criteria for the new DSM-5 Mild and Major Neurocognitive Disorder. Additionally, it could be further developed to include a test for social cognition. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    ERIC Educational Resources Information Center

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

  1. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
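    Deming regression, used in the assay-comparison part of this record, allows measurement error in both variables; with an error-variance ratio delta the slope has a closed form in the sample variances and covariance. A small numpy sketch (the data are invented for illustration):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression slope and intercept, allowing measurement error in
    both variables; delta is the ratio of the y- to x-error variances
    (delta = 1 gives orthogonal regression)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = x.var(ddof=1), y.var(ddof=1)
    sxy = np.cov(x, y)[0, 1]                 # sample covariance (ddof=1)
    slope = ((syy - delta * sxx)
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()

# Two hypothetical assays measuring the same quantity; here the second
# assay has a proportional error (slope 2) and a constant error (offset 1)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_obs = 2.0 * x + 1.0
slope, intercept = deming(x, y_obs)
```

    Unlike ordinary least squares, which attributes all error to y, Deming regression is symmetric in the two assays when delta = 1, which is why it is preferred for method-comparison studies.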

  2. Do Our Means of Inquiry Match our Intentions?

    PubMed Central

    Petscher, Yaacov

    2016-01-01

    A key stage of the scientific method is the analysis of data, yet despite the variety of methods available to researchers, analyses are most frequently distilled to a model that focuses on the average relation between variables. Although research questions are frequently conceived with broad inquiry in mind, most regression methods are limited in comprehensively evaluating how observed behaviors are related to each other. Quantile regression is a largely unknown yet well-suited analytic technique similar to traditional regression analysis, but it allows for a more systematic approach to understanding complex associations among observed phenomena in the psychological sciences. Data from the National Education Longitudinal Study of 1988/2000 are used to illustrate how quantile regression overcomes the limitations of average associations in linear regression by showing that psychological well-being and sex each differentially relate to reading achievement depending on one’s level of reading achievement. PMID:27486410

  3. Food away from home and body mass outcomes: taking heterogeneity into account enhances quality of results.

    PubMed

    Kim, Tae Hyun; Lee, Eui-Kyung; Han, Euna

    2014-09-01

    The aim of this study was to explore the heterogeneous association of consumption of food away from home (FAFH) with individual body mass outcomes including body mass index and waist circumference over the entire conditional distribution of each outcome. Information on 16,403 adults obtained from nationally representative data on nutrition and behavior in Korea was used. A quantile regression model captured the variability of the association of FAFH with body mass outcomes across the entire conditional distribution of each outcome measure. Heavy FAFH consumption was defined as obtaining ≥1400 kcal from FAFH on a single day. Heavy FAFH consumption, specifically at full-service restaurants, was significantly associated with higher body mass index (+0.46 kg/m2 at the 50th quantile, 0.55 at the 75th, 0.66 at the 90th, and 0.44 at the 95th) and waist circumference (+0.96 cm at the 25th quantile, 1.06 cm at the 50th, 1.35 cm at the 75th, and 0.96 cm at the 90th quantiles) with overall larger associations at higher quantiles. Findings of the study indicate that conventional regression methods may mask important heterogeneity in the association between heavy FAFH consumption and body mass outcomes. Further public health efforts are needed to improve the nutritional quality of affordable FAFH choices and nutrition education and to establish a healthy food consumption environment. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis

    PubMed Central

    Nieves, Jeri W.; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J. Americo M.; Sorenson, Eric J.; D’Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi

    2017-01-01

    IMPORTANCE There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). OBJECTIVE To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. DESIGN, SETTING, AND PARTICIPANTS A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. EXPOSURES Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). MAIN OUTCOMES AND MEASURES Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale–Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). RESULTS Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5–68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of “good” micronutrients and “good” food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. CONCLUSIONS AND RELEVANCE Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated

  5. Forecasting conditional climate-change using a hybrid approach

    USGS Publications Warehouse

    Esfahani, Akbar Akbari; Friedel, Michael J.

    2014-01-01

    A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale, provided that self-similarity exists.

  6. Regression Effects in Angoff Ratings: Examples from Credentialing Exams

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2018-01-01

    This article discusses regression effects that are commonly observed in Angoff ratings where panelists tend to think that hard items are easier than they are and easy items are more difficult than they are in comparison to estimated item difficulties. Analyses of data from two credentialing exams illustrate these regression effects and the…

  7. Standardized Regression Coefficients as Indices of Effect Sizes in Meta-Analysis

    ERIC Educational Resources Information Center

    Kim, Rae Seon

    2011-01-01

    When conducting a meta-analysis, it is common to find many collected studies that report regression analyses, because multiple regression analysis is widely used in many fields. Meta-analysis uses effect sizes drawn from individual studies as a means of synthesizing a collection of results. However, indices of effect size from regression analyses…

  8. QQ-SNV: single nucleotide variant detection at low frequency by comparing the quality quantiles.

    PubMed

    Van der Borght, Koen; Thys, Kim; Wetzels, Yves; Clement, Lieven; Verbist, Bie; Reumers, Joke; van Vlijmen, Herman; Aerssens, Jeroen

    2015-11-10

    Next-generation sequencing enables studying heterogeneous populations of viral infections. When the sequencing is done at high coverage depth ("deep sequencing"), low frequency variants can be detected. Here we present QQ-SNV (http://sourceforge.net/projects/qqsnv), a logistic regression classifier model developed for the Illumina sequencing platforms that uses the quantiles of the quality scores, to distinguish true single nucleotide variants from sequencing errors based on the estimated SNV probability. To train the model, we created a dataset of an in silico mixture of five HIV-1 plasmids. Testing of our method in comparison to the existing methods LoFreq, ShoRAH, and V-Phaser 2 was performed on two HIV and four HCV plasmid mixture datasets and one influenza H1N1 clinical dataset. For default application of QQ-SNV, variants were called using a SNV probability cutoff of 0.5 (QQ-SNV(D)). To improve the sensitivity we used a SNV probability cutoff of 0.0001 (QQ-SNV(HS)). To also increase specificity, SNVs called were overruled when their frequency was below the 80th percentile calculated on the distribution of error frequencies (QQ-SNV(HS-P80)). When comparing QQ-SNV versus the other methods on the plasmid mixture test sets, QQ-SNV(D) performed similarly to the existing approaches. QQ-SNV(HS) was more sensitive on all test sets but with more false positives. QQ-SNV(HS-P80) was found to be the most accurate method over all test sets by balancing sensitivity and specificity. When applied to a paired-end HCV sequencing study, with lowest spiked-in true frequency of 0.5%, QQ-SNV(HS-P80) revealed a sensitivity of 100% (vs. 40-60% for the existing methods) and a specificity of 100% (vs. 98.0-99.7% for the existing methods). In addition, QQ-SNV required the least overall computation time to process the test sets. Finally, when testing on a clinical sample, four putative true variants with frequency below 0.5% were consistently detected by QQ-SNV(HS-P80) from different
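    The core idea behind a classifier like QQ-SNV (logistic regression on quality-score features, with the SNV-probability cutoff traded between sensitivity and specificity) can be sketched in plain numpy. The features, data, and cutoffs below are illustrative assumptions, not the published model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, n_iter=5000):
    """Plain gradient-descent logistic regression; returns bias + weights."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return sigmoid(Xb @ w)

# Invented features: two base-quality quantiles (e.g. 25th, 50th) per candidate SNV
X = np.array([[20, 25], [22, 28], [35, 38], [36, 40], [18, 22], [34, 37]], float)
y = np.array([0, 0, 1, 1, 0, 1], float)  # 1 = true variant, 0 = sequencing error
Z = (X - X.mean(0)) / X.std(0)           # standardize features

w = train_logistic(Z, y)
p = predict_proba(Z, w)
calls_default   = p >= 0.5      # default cutoff, analogous to QQ-SNV(D)
calls_sensitive = p >= 0.0001   # high-sensitivity cutoff, analogous to QQ-SNV(HS)
```

    Lowering the probability cutoff can only add calls, which is why the high-sensitivity variant gains sensitivity at the cost of false positives unless a post-filter (like the P80 error-frequency rule) restores specificity.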

  9. Solvency supervision based on a total balance sheet approach

    NASA Astrophysics Data System (ADS)

    Pitselis, Georgios

    2009-11-01

    In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here: the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered as an internal industry equation (norm), which explains a systematic relation between a dependent variable (such as own funds) and independent variables (e.g. financial characteristics, such as assets, provisions, etc.). The above two methods are implemented with two data sets.

  10. Differentiating regressed melanoma from regressed lichenoid keratosis.

    PubMed

    Chan, Aegean H; Shulman, Kenneth J; Lee, Bonnie A

    2017-04-01

    Distinguishing regressed lichen planus-like keratosis (LPLK) from regressed melanoma can be difficult on histopathologic examination, potentially resulting in mismanagement of patients. We aimed to identify histopathologic features by which regressed melanoma can be differentiated from regressed LPLK. Twenty actively inflamed LPLK, 12 LPLK with regression and 15 melanomas with regression were compared and evaluated by hematoxylin and eosin staining as well as Melan-A, microphthalmia transcription factor (MiTF) and cytokeratin (AE1/AE3) immunostaining. (1) A total of 40% of regressed melanomas showed complete or near complete loss of melanocytes within the epidermis with Melan-A and MiTF immunostaining, while 8% of regressed LPLK exhibited this finding. (2) Necrotic keratinocytes were seen in the epidermis in 33% regressed melanomas as opposed to all of the regressed LPLK. (3) A dense infiltrate of melanophages in the papillary dermis was seen in 40% of regressed melanomas, a feature not seen in regressed LPLK. In summary, our findings suggest that a complete or near complete loss of melanocytes within the epidermis strongly favors a regressed melanoma over a regressed LPLK. In addition, necrotic epidermal keratinocytes and the presence of a dense band-like distribution of dermal melanophages can be helpful in differentiating these lesions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Regression and multivariate models for predicting particulate matter concentration level.

    PubMed

    Nazif, Amina; Mohammed, Nurul Izma; Malakahmad, Amirhossein; Abualqumboz, Motasem S

    2018-01-01

    The devastating health effects of particulate matter (PM10) exposure on susceptible populations have made it necessary to evaluate PM10 pollution. Meteorological parameters and seasonal variation increase PM10 concentration levels, especially in areas with multiple anthropogenic activities. Hence, stepwise regression (SR), multiple linear regression (MLR) and principal component regression (PCR) analyses were used to analyse daily average PM10 concentration levels. The analyses were carried out using daily average PM10 concentration, temperature, humidity, wind speed and wind direction data from 2006 to 2010. The data came from an industrial air quality monitoring station in Malaysia. The SR analysis established that meteorological parameters had less influence on PM10 concentration levels, with coefficient of determination (R2) values from 23% to 29% for the seasoned and unseasoned analyses. The prediction analysis showed that PCR models had better R2 results than the MLR methods: based on both seasoned and unseasoned data, MLR models had R2 values from 0.50 to 0.60, while PCR models had R2 values from 0.66 to 0.89. In addition, validation analysis using 2016 data also showed that the PCR model outperformed the MLR model, with the PCR model for the seasoned analysis having the best result. These analyses will aid in achieving sustainable air quality management strategies.

  12. Covariate Imbalance and Adjustment for Logistic Regression Analysis of Clinical Trial Data

    PubMed Central

    Ciolino, Jody D.; Martin, Reneé H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.

    2014-01-01

    In logistic regression analysis for binary clinical trial data, adjusted treatment effect estimates are often not equivalent to unadjusted estimates in the presence of influential covariates. This paper uses simulation to quantify the benefit of covariate adjustment in logistic regression. However, International Conference on Harmonization guidelines suggest that covariate adjustment be pre-specified; unplanned adjusted analyses should be considered secondary. Results suggest that if adjustment is not possible or unplanned in a logistic setting, balance in continuous covariates can alleviate some (but never all) of the shortcomings of unadjusted analyses. The case of log binomial regression is also explored. PMID:24138438

  13. Variable selection and model choice in geoadditive regression models.

    PubMed

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  14. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADDITIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    EPA Science Inventory

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  15. Association between the Infant and Child Feeding Index (ICFI) and nutritional status of 6- to 35-month-old children in rural western China.

    PubMed

    Qu, Pengfei; Mi, Baibing; Wang, Duolao; Zhang, Ruo; Yang, Jiaomei; Liu, Danmeng; Dang, Shaonong; Yan, Hong

    2017-01-01

    The objective of this study was to determine the relationship between the quality of feeding practices and children's nutritional status in rural western China. A sample of 12,146 pairs of 6- to 35-month-old children and their mothers were recruited using stratified multistage cluster random sampling in rural western China. Quantile regression was used to analyze the relationship between the Infant and Child Feeding Index (ICFI) and children's nutritional status. In rural western China, 24.37% of all infants and young children suffer from malnutrition; 19.57%, 8.74% and 4.63% of infants and children are classified as stunting, underweight and wasting, respectively. After adjusting for covariates, the quantile regression results suggested that qualified ICFI (ICFI > 13.8) was associated with all length and HAZ quantiles (P<0.05) and had a greater effect at the poorer length and HAZ quantiles: the β-estimates (length) ranged from 0.76 cm (95% CI: 0.53 to 0.99 cm) to 0.34 cm (95% CI: 0.09 to 0.59 cm) and the β-estimates (HAZ) from 0.17 (95% CI: 0.10 to 0.24) to 0.11 (95% CI: 0.04 to 0.19). Qualified ICFI was also associated with most weight quantiles (P<0.05 except the 80th and 90th quantiles) and with the poor and intermediate WAZ quantiles (P<0.05 for the 10th, 20th, 30th and 40th quantiles). Additionally, qualified ICFI had a greater effect on the poor weight and WAZ quantiles, in which the β-estimates (weight) ranged from 0.20 kg (95% CI: 0.14 to 0.26 kg) to 0.06 kg (95% CI: 0.00 to 0.12 kg) and the β-estimates (WAZ) from 0.14 (95% CI: 0.08 to 0.21) to 0.05 (95% CI: 0.01 to 0.10). Feeding practices were associated with the physical development of infants and young children, and proper feeding practices had a greater effect on poor physical development in infants and young children. For mothers in rural western China, proper guidelines and messaging on complementary feeding practices are necessary.

  16. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models.

    PubMed

    Seeker, Luise A; Ilska, Joanna J; Psifidi, Androniki; Wilbourn, Rachael V; Underwood, Sarah L; Fairlie, Jennifer; Holland, Rebecca; Froy, Hannah; Bagnall, Ainsley; Whitelaw, Bruce; Coffey, Mike; Nussey, Daniel H; Banos, Georgios

    2018-01-01

    Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to 1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, 2) estimate genetic parameters of RLTL over time and 3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05-0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed in their RLTL profiles significantly, those differences were not correlated with productive lifespan (p = 0.954).
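    The fixed-effect part of the random regression model in this record, a quadratic Legendre polynomial over age, can be illustrated with numpy's Legendre utilities (the ages and telomere values are invented, and the real model additionally includes random animal effects):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hypothetical sampling ages (months), scaled to [-1, 1], the Legendre domain
age = np.array([1, 6, 12, 24, 48, 96, 144], float)
t = 2 * (age - age.min()) / (age.max() - age.min()) - 1

# Hypothetical relative leukocyte telomere lengths following a quadratic trend
rltl = 1.05 - 0.08 * t + 0.04 * t ** 2

coefs = L.legfit(t, rltl, 2)   # coefficients of P0, P1, P2 (quadratic fit)
fitted = L.legval(t, coefs)    # evaluate the fitted curve at the sample ages
```

    In the full random regression model, each animal gets its own set of Legendre coefficients drawn from a multivariate distribution, which is what allows heritability and genetic correlations to vary along the age trajectory.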

  17. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models

    PubMed Central

    Ilska, Joanna J.; Psifidi, Androniki; Wilbourn, Rachael V.; Underwood, Sarah L.; Fairlie, Jennifer; Holland, Rebecca; Froy, Hannah; Bagnall, Ainsley; Whitelaw, Bruce; Coffey, Mike; Nussey, Daniel H.; Banos, Georgios

    2018-01-01

    Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to 1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, 2) estimate genetic parameters of RLTL over time and 3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05–0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed in their RLTL profiles significantly, those differences were not correlated with productive lifespan (p = 0.954). PMID:29438415

  18. An empirical study using permutation-based resampling in meta-regression

    PubMed Central

    2012-01-01

    Background In meta-regression, as the number of trials in the analyses decreases, the risk of false positives or false negatives increases. This is partly due to the assumption of normality that may not hold in small samples. Creation of a distribution from the observed trials using permutation methods to calculate P values may allow for less spurious findings. Permutation has not been empirically tested in meta-regression. The objective of this study was to perform an empirical investigation to explore the differences in results for meta-analyses on a small number of trials using standard large-sample approaches versus permutation-based methods for meta-regression. Methods We isolated a sample of randomized controlled clinical trials (RCTs) for interventions that have a small number of trials (herbal medicine trials). Trials were then grouped by herbal species and condition and assessed for methodological quality using the Jadad scale, and data were extracted for each outcome. Finally, we performed meta-analyses on the primary outcome of each group of trials and meta-regression for methodological quality subgroups within each meta-analysis. We used large-sample methods and permutation methods in our meta-regression modeling. We then compared final models and final P values between methods. Results We collected 110 trials across 5 intervention/outcome pairings and 5 to 10 trials per covariate. When applying large-sample methods and permutation-based methods in our backwards stepwise regression, the covariates in the final models were identical in all cases. The P values for the covariates in the final model were larger in 78% (7/9) of the cases for permutation and identical for 22% (2/9) of the cases. Conclusions We present empirical evidence that permutation-based resampling may not change final models when using backwards stepwise regression, but may increase P values in meta-regression of multiple covariates with a relatively small number of trials. PMID:22587815
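The permutation idea described in this record can be sketched with a toy example: the covariate (here a hypothetical Jadad quality score) is repeatedly shuffled against the trial effect sizes, the weighted-least-squares slope is refitted each time, and the observed slope is compared against the resulting reference distribution. This is a minimal illustration with made-up data and inverse-variance weights, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def wls_slope(x, y, w):
    """Weighted least-squares slope of y on x with weights w."""
    xm = np.average(x, weights=w)
    ym = np.average(y, weights=w)
    return np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)

def permutation_p(x, y, w, n_perm=2000):
    """Two-sided permutation p-value for the slope: shuffle the covariate
    across trials while keeping the outcome/weight pairing intact."""
    obs = wls_slope(x, y, w)
    perm = np.array([wls_slope(rng.permutation(x), y, w) for _ in range(n_perm)])
    # +1 correction so the p-value is never exactly zero
    return (np.sum(np.abs(perm) >= np.abs(obs)) + 1) / (n_perm + 1)

# Hypothetical meta-regression data: 8 trial effect sizes,
# inverse-variance weights, and Jadad scores as the covariate.
effect = np.array([0.20, 0.55, 0.10, 0.80, 0.30, 0.60, 0.35, 0.75])
weight = np.array([10.0, 6.0, 12.0, 4.0, 9.0, 5.0, 8.0, 7.0])
jadad = np.array([1.0, 3.0, 1.0, 5.0, 2.0, 4.0, 2.0, 5.0])
p = permutation_p(jadad, effect, weight)
```

Because the covariate, not the residuals, is permuted, the reference distribution preserves the observed outcomes and weights; with few trials this tends to give larger (more conservative) p-values than the normal approximation, consistent with the study's findings.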

  19. The association of fatigue, pain, depression and anxiety with work and activity impairment in immune mediated inflammatory diseases.

    PubMed

    Enns, Murray W; Bernstein, Charles N; Kroeker, Kristine; Graff, Lesley; Walker, John R; Lix, Lisa M; Hitchon, Carol A; El-Gabalawy, Renée; Fisk, John D; Marrie, Ruth Ann

    2018-01-01

    Impairment in work function is a frequent outcome in patients with chronic conditions such as immune-mediated inflammatory diseases (IMID), depression and anxiety disorders. The personal and economic costs of work impairment in these disorders are immense. Symptoms of pain, fatigue, depression and anxiety are potentially remediable forms of distress that may contribute to work impairment in chronic health conditions such as IMID. The present study evaluated the association between pain [Medical Outcomes Study Pain Effects Scale], fatigue [Daily Fatigue Impact Scale], depression and anxiety [Hospital Anxiety and Depression Scale] and work impairment [Work Productivity and Activity Impairment Scale] in four patient populations: multiple sclerosis (n = 255), inflammatory bowel disease (n = 248), rheumatoid arthritis (n = 154) and a depression and anxiety group (n = 307), using quantile regression, controlling for the effects of sociodemographic factors, physical disability, and cognitive deficits. Each of pain, depression symptoms, anxiety symptoms, and fatigue individually showed significant associations with work absenteeism, presenteeism, and general activity impairment (quantile regression standardized estimates ranging from 0.3 to 1.0). When the distress variables were entered concurrently into the regression models, fatigue was a significant predictor of work and activity impairment in all models (quantile regression standardized estimates ranging from 0.2 to 0.5). These findings have important clinical implications for understanding the determinants of work impairment and for improving work-related outcomes in chronic disease.

  20. Genetic analyses of protein yield in dairy cows applying random regression models with time-dependent and temperature x humidity-dependent covariates.

    PubMed

    Brügemann, K; Gernand, E; von Borstel, U U; König, S

    2011-08-01

    Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (days in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. However, heritabilities and additive genetic variances from the random regression
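The Legendre-polynomial covariates used in random regression models like this one are built by standardizing the time variable (days in milk, or THI) to [-1, 1] and evaluating the polynomial basis at each record. A minimal sketch follows; the standardization range and example values are illustrative, and note that animal-breeding software often uses *normalized* Legendre polynomials, which differ from this plain basis by a scale factor per order.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(t, t_min, t_max, order=3):
    """Return the Legendre basis P_0..P_order evaluated at t standardized
    to [-1, 1]: one row per record, one column per polynomial order,
    for use as random-regression covariates."""
    x = 2.0 * (np.asarray(t, dtype=float) - t_min) / (t_max - t_min) - 1.0
    return np.stack([legendre.Legendre.basis(k)(x) for k in range(order + 1)],
                    axis=1)

dim = np.array([5, 150, 305])         # illustrative days-in-milk records
Z = legendre_covariates(dim, 5, 305)  # shape: (records, order + 1)
```

The first column (P_0) is constant, so the intercept of each animal's curve is absorbed by the basis; higher orders let the additive genetic and permanent environmental effects bend the trajectory over DIM or THI.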

  1. Quantile-based permutation thresholds for quantitative trait loci hotspots.

    PubMed

    Neto, Elias Chaibub; Keller, Mark P; Broman, Andrew F; Attie, Alan D; Jansen, Ritsert C; Broman, Karl W; Yandell, Brian S

    2012-08-01

    Quantitative trait loci (QTL) hotspots (genomic locations affecting many traits) are a common feature in genetical genomics studies and are biologically interesting since they may harbor critical regulators. Therefore, statistical procedures to assess the significance of hotspots are of key importance. One approach, randomly allocating observed QTL across the genomic locations separately by trait, implicitly assumes all traits are uncorrelated. Recently, an empirical test for QTL hotspots was proposed on the basis of the number of traits that exceed a predetermined LOD value, such as the standard permutation LOD threshold. The permutation null distribution of the maximum number of traits across all genomic locations preserves the correlation structure among the phenotypes, avoiding the detection of spurious hotspots due to nongenetic correlation induced by uncontrolled environmental factors and unmeasured variables. However, by considering only the number of traits above a threshold, without accounting for the magnitude of the LOD scores, relevant information is lost. In particular, biologically interesting hotspots composed of a moderate to small number of traits with strong LOD scores may be neglected as nonsignificant. In this article we propose a quantile-based permutation approach that simultaneously accounts for the number and the LOD scores of traits within the hotspots. By considering a sliding scale of mapping thresholds, our method can assess the statistical significance of both small and large hotspots. Although the proposed approach can be applied to any type of heritable high-volume "omic" data set, we restrict our attention to expression (e)QTL analysis. We assess and compare the performances of these three methods in simulations and we illustrate how our approach can effectively assess the significance of moderate and small hotspots with strong LOD scores in a yeast expression data set.

  2. Measures of clustering and heterogeneity in multilevel Poisson regression analyses of rates/count data

    PubMed Central

    Austin, Peter C.; Stryhn, Henrik; Leckie, George; Merlo, Juan

    2017-01-01

    Multilevel data occur frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models. These models incorporate cluster-specific random effects that allow one to partition the total variation in the outcome into between-cluster variation and between-individual variation. The magnitude of the effect of clustering provides a measure of the general contextual effect. When outcomes are binary or time-to-event in nature, the general contextual effect can be quantified by measures of heterogeneity like the median odds ratio or the median hazard ratio, respectively, which can be calculated from a multilevel regression model. Outcomes that are integer counts denoting the number of times that an event occurred are common in epidemiological and medical research. The median (incidence) rate ratio in multilevel Poisson regression, which corresponds to the median odds ratio or median hazard ratio for binary or time-to-event outcomes, respectively, is relatively unknown and rarely used. The median rate ratio is the median relative change in the rate of the occurrence of the event when comparing identical subjects from 2 randomly selected different clusters that are ordered by rate. We also describe how the variance partition coefficient, which denotes the proportion of the variation in the outcome that is attributable to between-cluster differences, can be computed with count outcomes. We illustrate the application and interpretation of these measures in a case study analyzing the rate of hospital readmission in patients discharged from hospital with a diagnosis of heart failure. PMID:29114926
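The median rate ratio described in this record has a closed form given the between-cluster variance of the random intercepts on the log-rate scale, analogous to the median odds ratio for multilevel logistic models. A minimal sketch; the variance value is illustrative.

```python
import math

def median_rate_ratio(sigma2):
    """Median rate ratio for a multilevel Poisson model:
    MRR = exp(sqrt(2 * sigma2) * Phi^{-1}(0.75)),
    where sigma2 is the between-cluster variance of the random intercepts
    on the log-rate scale and Phi^{-1}(0.75) ~= 0.6745 is the 75th
    percentile of the standard normal. The same formula gives the
    median odds ratio for multilevel logistic models."""
    return math.exp(math.sqrt(2.0 * sigma2) * 0.6744897501960817)

# No clustering (sigma2 = 0) gives MRR = 1; MRR grows with the variance.
mrr = median_rate_ratio(0.25)
```

MRR = 1 means identical subjects in different clusters have identical expected rates; an MRR of, say, 1.6 means the median rate ratio between two randomly chosen clusters (ordered by rate) is 1.6-fold.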

  3. Analyses of single nucleotide polymorphisms in selected nutrient-sensitive genes in weight-regain prevention: the DIOGENES study.

    PubMed

    Larsen, Lesli H; Angquist, Lars; Vimaleswaran, Karani S; Hager, Jörg; Viguerie, Nathalie; Loos, Ruth J F; Handjieva-Darlenska, Teodora; Jebb, Susan A; Kunesova, Marie; Larsen, Thomas M; Martinez, J Alfredo; Papadaki, Angeliki; Pfeiffer, Andreas F H; van Baak, Marleen A; Sørensen, Thorkild Ia; Holst, Claus; Langin, Dominique; Astrup, Arne; Saris, Wim H M

    2012-05-01

    Differences in the interindividual response to dietary intervention could be modified by genetic variation in nutrient-sensitive genes. This study examined single nucleotide polymorphisms (SNPs) in presumed nutrient-sensitive candidate genes for obesity and obesity-related diseases for main and dietary interaction effects on weight, waist circumference, and fat mass regain over 6 mo. In total, 742 participants who had lost ≥ 8% of their initial body weight were randomly assigned to follow 1 of 5 different ad libitum diets with different glycemic indexes and contents of dietary protein. The SNP main and SNP-diet interaction effects were analyzed by using linear regression models, corrected for multiple testing by using Bonferroni correction and evaluated by using quantile-quantile (Q-Q) plots. After correction for multiple testing, none of the SNPs were significantly associated with weight, waist circumference, or fat mass regain. Q-Q plots showed that ALOX5AP rs4769873 showed a higher observed than predicted P value for the association with less waist circumference regain over 6 mo (-3.1 cm/allele; 95% CI: -4.6, -1.6; P/Bonferroni-corrected P = 0.000039/0.076), independently of diet. Additional associations were identified by using Q-Q plots for SNPs in ALOX5AP, TNF, and KCNJ11 for main effects; in LPL and TUB for glycemic index interaction effects on waist circumference regain; in GHRL, CCK, MLXIPL, and LEPR on weight; in PPARC1A, PCK2, ALOX5AP, PYY, and ADRB3 on waist circumference; and in PPARD, FABP1, PLAUR, and LPIN1 on fat mass regain for dietary protein interaction. The observed effects of SNP-diet interactions on weight, waist, and fat mass regain suggest that genetic variation in nutrient-sensitive genes can modify the response to diet. This trial was registered at clinicaltrials.gov as NCT00390637.

  4. Genetic analyses of stillbirth in relation to litter size using random regression models.

    PubMed

    Chen, C Y; Misztal, I; Tsuruta, S; Herring, W O; Holl, J; Culbertson, M

    2010-12-01

    Estimates of genetic parameters for number of stillborns (NSB) in relation to litter size (LS) were obtained with random regression models (RRM). Data were collected from 4 purebred Duroc nucleus farms between 2004 and 2008. Two data sets with 6,575 litters for the first parity (P1) and 6,259 litters for the second to fifth parity (P2-5) with a total of 8,217 and 5,066 animals in the pedigree were analyzed separately. Number of stillborns was studied as a trait on sow level. Fixed effects were contemporary groups (farm-year-season) and fixed cubic regression coefficients on LS with Legendre polynomials. Models for P2-5 included the fixed effect of parity. Random effects were additive genetic effects for both data sets with permanent environmental effects included for P2-5. Random effects modeled with Legendre polynomials (RRM-L), linear splines (RRM-S), and degree 0 B-splines (RRM-BS) with regressions on LS were used. For P1, the order of polynomial, the number of knots, and the number of intervals used for respective models were quadratic, 3, and 3, respectively. For P2-5, the same parameters were linear, 2, and 2, respectively. Heterogeneous residual variances were considered in the models. For P1, estimates of heritability were 12 to 15%, 5 to 6%, and 6 to 7% in LS 5, 9, and 13, respectively. For P2-5, estimates were 15 to 17%, 4 to 5%, and 4 to 6% in LS 6, 9, and 12, respectively. For P1, average estimates of genetic correlations between LS 5 to 9, 5 to 13, and 9 to 13 were 0.53, -0.29, and 0.65, respectively. For P2-5, same estimates averaged for RRM-L and RRM-S were 0.75, -0.21, and 0.50, respectively. For RRM-BS with 2 intervals, the correlation was 0.66 between LS 5 to 7 and 8 to 13. Parameters obtained by 3 RRM revealed the nonlinear relationship between additive genetic effect of NSB and the environmental deviation of LS. The negative correlations between the 2 extreme LS might possibly indicate different genetic bases on incidence of stillbirth.

  5. Incremental impact of body mass status with modifiable unhealthy lifestyle behaviors on pharmaceutical expenditure.

    PubMed

    Kim, Tae Hyun; Lee, Eui-Kyung; Han, Euna

    Overweight/obesity is a growing health risk in Korea. The impact of overweight/obesity on pharmaceutical expenditure can be larger if individuals have multiple risk factors and multiple comorbidities. The current study estimated the combined effects of overweight/obesity and other unhealthy behaviors on pharmaceutical expenditure. An instrumental variable quantile regression model was estimated using Korea Health Panel Study data. The current study extracted data from 3 waves (2009, 2010, and 2011). The final sample included 7148 person-year observations for adults aged 20 years or older. Overweight/obese individuals had higher pharmaceutical expenditure than their non-obese counterparts only at the upper quantiles of the conditional distribution of pharmaceutical expenditure (by 119% at the 90th quantile and 115% at the 95th). The current study found a stronger association at the upper quantiles among men (152%, 144%, and 150% at the 75th, 90th, and 95th quantiles, respectively) than among women (152%, 150%, and 148% at the 75th, 90th, and 95th quantiles, respectively). The association at the upper quantiles was stronger when combined with moderate to heavy drinking and no regular physical check-up, particularly among males. The current study confirms that the association of overweight/obesity combined with modifiable unhealthy behaviors with pharmaceutical expenditure is larger than that of overweight/obesity alone. Assessing the effect of overweight/obesity with lifestyle risk factors can help target groups for public health intervention programs. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. A general framework for the use of logistic regression models in meta-analysis.

    PubMed

    Simmonds, Mark C; Higgins, Julian Pt

    2016-12-01

    Where individual participant data are available for every randomised trial in a meta-analysis of dichotomous event outcomes, "one-stage" random-effects logistic regression models have been proposed as a way to analyse these data. Such models can also be used even when individual participant data are not available and we have only summary contingency table data. One benefit of this one-stage regression model over conventional meta-analysis methods is that it maximises the correct binomial likelihood for the data and so does not require the common assumption that effect estimates are normally distributed. A second benefit of using this model is that it may be applied, with only minor modification, in a range of meta-analytic scenarios, including meta-regression, network meta-analyses and meta-analyses of diagnostic test accuracy. This single model can potentially replace the variety of often complex methods used in these areas. This paper considers, with a range of meta-analysis examples, how random-effects logistic regression models may be used in a number of different types of meta-analyses. This one-stage approach is compared with widely used meta-analysis methods including Bayesian network meta-analysis and the bivariate and hierarchical summary receiver operating characteristic (ROC) models for meta-analyses of diagnostic test accuracy. © The Author(s) 2014.

  7. Analysis and trends of precipitation lapse rate and extreme indices over north Sikkim eastern Himalayas under CMIP5ESM-2M RCPs experiments

    NASA Astrophysics Data System (ADS)

    Singh, Vishal; Goyal, Manish Kumar

    2016-01-01

    This paper highlights the spatial and temporal variability in precipitation lapse rate (PLR) and precipitation extreme indices (PEIs) through mesoscale characterization of the Teesta river catchment in the north Sikkim eastern Himalayas. The PLR is an important variable for snowmelt runoff models and, in mountainous regions, can vary from low- to high-elevation areas. In this study, the PLR was computed by accounting for elevation differences, which range from around 1500 m to 7000 m. Precipitation variability and extremes were analysed using several statistical methods: quantile regression, spatial mean, spatial standard deviation, the Mann-Kendall test and Sen's estimation. Daily precipitation data comprised measured/observed gridded points for the historical period (1980-2005) and projections for the 21st century (2006-2100) simulated by the CMIP5 ESM-2M model (Coupled Model Intercomparison Project Phase 5 Earth System Model 2) under three radiative forcing scenarios (Representative Concentration Pathways, RCPs). The outcomes suggest that the PLR varies significantly from low- to high-elevation areas. The PEI-based analysis showed that extreme high-intensity events increase significantly, especially after the 2040s, and that the number of wet days increases under all RCPs. The quantile regression plots showed significant increments in the upper and lower quantiles of the various extreme indices. The Mann-Kendall test and Sen's estimation clearly indicated significant changes in the frequency and intensity of the precipitation indices across all sub-basins and RCP scenarios on an intra-decadal time scale, with RCP8.5 showing the most extreme projected outcomes.
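The Mann-Kendall test and Sen's slope used in this record are simple to sketch. This minimal version uses the normal approximation without the tie correction, on an illustrative annual series (not the study's data).

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and two-sided p-value from the
    normal approximation with continuity correction (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / sqrt(var_s)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # 2 * (1 - Phi(|z|))
    return int(s), p

def sens_slope(x):
    """Sen's slope estimator: median of all pairwise slopes."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return float(np.median([(x[j] - x[i]) / (j - i)
                            for i in range(n) for j in range(i + 1, n)]))

annual = [3.1, 3.4, 3.2, 3.8, 3.7, 4.0, 4.3, 4.1, 4.6, 4.8]  # illustrative
s, p = mann_kendall(annual)
```

S counts concordant minus discordant pairs, so it is robust to outliers and monotone transformations; Sen's slope gives the trend magnitude in the units of the series per time step.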

  8. Delivery of Essential Medicines to Primary Care Institutions and its Association with Procurement Volume and Price: A Case Study in Hubei Province, China.

    PubMed

    Tang, Yuqing; Liu, Chaojie; Zhang, Xinping

    2017-02-01

    The low availability of essential medicines is a worldwide issue of concern. In 2009, China introduced a National Essential Medicines List (NEML), with NEML medicines being purchased in bulk at contracted prices established by tenders conducted at the provincial level. The availability of essential medicines in the public sector largely relies on commercial supply chains. The objectives of this paper were to analyze the delivery performance of essential medicines under NEML provincial procurement arrangements, and to determine whether the procurement volume and price of medicines are associated with the delivery performance of suppliers. We reviewed 9390 recorded orders of 1099 essential medicines in Hubei province from August 2011 to April 2012. The reliability of medicine delivery in-full and on-time (DIFOT) was considered the performance indicator, and we used Spearman correlation analyses to explore whether there were any associations between DIFOT and procurement price and volume. Quantile regressions were performed to determine such associations. The DIFOT had positive correlations with procurement price and volume. The Spearman rank correlation coefficients between price and DIFOT were 0.114, 0.34 and 0.25 for medicines with low one-third, middle one-third and high one-third procurement volumes, respectively. The quantile regression analysis revealed a positive association between price and DIFOT across all quantiles of DIFOT, and although significant positive associations between volume and DIFOT were only found at the 25th percentile of DIFOT, volume showed significant interactions with price for both the 25th and 50th percentiles of DIFOT. Higher procurement price is associated with better delivery performance of essential medicines; however, it is important to link procurement price with procurement volume. Increasing procurement volume may alleviate the negative effect of low price on delivery performance. Variation in volumes of repeated orders imposes

  9. Risk factors for autistic regression: results of an ambispective cohort study.

    PubMed

    Zhang, Ying; Xu, Qiong; Liu, Jing; Li, She-chang; Xu, Xiu

    2012-08-01

    A subgroup of children diagnosed with autism experience developmental regression featured by a loss of previously acquired abilities. The pathogeny of autistic regression is unknown, although many risk factors likely exist. To better characterize autistic regression and investigate the association between autistic regression and potential influencing factors in Chinese autistic children, we conducted an ambispective study with a cohort of 170 autistic subjects. Analyses by multiple logistic regression showed significant correlations between autistic regression and febrile seizures (OR = 3.53, 95% CI = 1.17-10.65, P = .025), as well as with a family history of neuropsychiatric disorders (OR = 3.62, 95% CI = 1.35-9.71, P = .011). This study suggests that febrile seizures and family history of neuropsychiatric disorders are correlated with autistic regression.

  10. Retro-regression--another important multivariate regression improvement.

    PubMed

    Randić, M

    2001-01-01

    We review the serious problem associated with instabilities of the coefficients of regression equations, referred to as the MRA (multivariate regression analysis) "nightmare of the first kind". This is manifested when in a stepwise regression a descriptor is included or excluded from a regression. The consequence is an unpredictable change of the coefficients of the descriptors that remain in the regression equation. We follow with consideration of an even more serious problem, referred to as the MRA "nightmare of the second kind", arising when optimal descriptors are selected from a large pool of descriptors. This process typically causes at different steps of the stepwise regression a replacement of several previously used descriptors by new ones. We describe a procedure that resolves these difficulties. The approach is illustrated on boiling points of nonanes which are considered (1) by using an ordered connectivity basis; (2) by using an ordering resulting from application of greedy algorithm; and (3) by using an ordering derived from an exhaustive search for optimal descriptors. A novel variant of multiple regression analysis, called retro-regression (RR), is outlined showing how it resolves the ambiguities associated with both "nightmares" of the first and the second kind of MRA.

  11. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study focuses on measures of predictive power for the generalized linear model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for the Poisson regression model: a measure of predictive power defined by the relationship between the dependent variable (Y) and its expected value given the independent variables [E(Y|X)], where the dependent variable follows a Poisson distribution. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among them. The results show that the proposed regression correlation coefficient outperforms the traditional one in terms of bias and root mean square error (RMSE).
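The traditional regression correlation coefficient discussed in this record is the Pearson correlation between Y and the fitted values E(Y|X) from the Poisson model. A minimal sketch with simulated data follows; the GLM is fitted by iteratively reweighted least squares, and the authors' *modified* coefficient is not reproduced here.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least
    squares (Fisher scoring); X should include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)          # fitted means (= Poisson variances)
        z = X @ beta + (y - mu) / mu   # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

def regression_correlation(X, y):
    """Traditional regression correlation coefficient: corr(Y, E[Y|X])."""
    mu = np.exp(X @ poisson_irls(X, y))
    return float(np.corrcoef(y, mu)[0, 1])

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])
y = rng.poisson(np.exp(0.5 + 1.0 * X[:, 1]))  # true coefficients (0.5, 1.0)
r = regression_correlation(X, y)
```

Because Poisson noise is added on top of the systematic part, r is well below 1 even when the model is correctly specified, which is one reason raw corr(Y, E[Y|X]) can be a restrictive summary.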

  12. An Empirical Study of Eight Nonparametric Tests in Hierarchical Regression.

    ERIC Educational Resources Information Center

    Harwell, Michael; Serlin, Ronald C.

    When normality does not hold, nonparametric tests represent an important data-analytic alternative to parametric tests. However, the use of nonparametric tests in educational research has been limited by the absence of easily performed tests for complex experimental designs and analyses, such as factorial designs and multiple regression analyses,…

  13. The repeatability of mean defect with size III and size V standard automated perimetry.

    PubMed

    Wall, Michael; Doyle, Carrie K; Zamba, K D; Artes, Paul; Johnson, Chris A

    2013-02-15

    The mean defect (MD) of the visual field is a global statistical index used to monitor overall visual field change over time. Our goal was to investigate the relationship of MD and its variability for two clinically used strategies (Swedish Interactive Threshold Algorithm [SITA] standard size III and full threshold size V) in glaucoma patients and controls. We tested one eye, at random, for 46 glaucoma patients and 28 ocularly healthy subjects with Humphrey program 24-2 SITA standard for size III and full threshold for size V each five times over a 5-week period. The standard deviation of MD was regressed against the MD for the five repeated tests, and quantile regression was used to show the relationship of variability and MD. A Wilcoxon test was used to compare the standard deviations of the two testing methods following quantile regression. Both types of regression analysis showed increasing variability with increasing visual field damage. Quantile regression showed modestly smaller MD confidence limits. There was a 15% decrease in SD with size V in glaucoma patients (P = 0.10) and a 12% decrease in ocularly healthy subjects (P = 0.08). The repeatability of size V MD appears to be slightly better than size III SITA testing. When using MD to determine visual field progression, a change of 1.5 to 4 decibels (dB) is needed to be outside the normal 95% confidence limits, depending on the size of the stimulus and the amount of visual field damage.
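Quantile regression of the kind used in this record (regressing the SD of MD on MD at chosen quantiles) can be fitted as a linear program minimizing the asymmetric absolute-error (pinball) loss. A minimal sketch with simulated data follows, not the authors' analysis; the data-generating values are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Fit linear quantile regression at quantile tau as a linear program:
    minimize sum(tau * u_plus + (1 - tau) * u_minus)
    subject to X @ b + u_plus - u_minus = y,
    with b split into nonnegative parts (linprog's default bounds are
    variable >= 0)."""
    n, k = X.shape
    c = np.concatenate([np.zeros(2 * k),
                        tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
    return res.x[:k] - res.x[k:2 * k]

rng = np.random.default_rng(0)
n = 200
md = rng.uniform(-30.0, 0.0, n)                 # simulated mean defect (dB)
sd = 0.5 - 0.05 * md + rng.normal(0.0, 0.3, n)  # variability grows with damage
X = np.column_stack([np.ones(n), md])
b50 = quantile_regression(X, sd, 0.5)           # median regression fit
```

Fitting several values of tau (e.g. 0.05 and 0.95) traces out confidence-limit-like bands of variability against damage, which is how quantile regression yields the narrower MD limits reported above.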

  14. Intermediate and advanced topics in multilevel logistic regression analysis

    PubMed Central

    Merlo, Juan

    2017-01-01

    Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R2 measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28543517

  15. Independent contrasts and PGLS regression estimators are equivalent.

    PubMed

    Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

    2012-05-01

We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLS) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression, and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
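The claimed equivalence can be checked numerically on a toy three-taxon tree ((A:1,B:1):1,C:2); the trait values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical trait values for taxa A, B, C on the tree ((A:1,B:1):1,C:2).
x = np.array([1.0, 2.0, 4.0])   # explanatory trait
y = np.array([2.0, 3.0, 7.0])   # response trait

# --- GLS slope with Brownian-motion covariance (shared path lengths) ---
C = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
X = np.column_stack([np.ones(3), x])            # intercept + slope design
Ci = np.linalg.inv(C)
beta_gls = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
slope_gls = beta_gls[1]

# --- Felsenstein's independent contrasts, regression through the origin ---
def contrasts(z):
    c1 = (z[0] - z[1]) / np.sqrt(2.0)           # A vs B, variance 1 + 1
    z_ab = (z[0] + z[1]) / 2.0                  # ancestral state estimate
    # branch to AB lengthened by 1*1/(1+1) = 0.5, so contrast variance 1.5 + 2
    c2 = (z_ab - z[2]) / np.sqrt(3.5)
    return np.array([c1, c2])

ux, uy = contrasts(x), contrasts(y)
slope_pic = (ux @ uy) / (ux @ ux)               # no-intercept OLS on contrasts

assert abs(slope_pic - slope_gls) < 1e-12       # identical, as the paper proves
```

Both estimators return 1.625 here; the identity holds for any trait values and tree, which is the paper's point.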

  16. Effect of psychosocial stressors on patients with Crohn's disease: threatening life experiences and family relations.

    PubMed

    Slonim-Nevo, Vered; Sarid, Orly; Friger, Michael; Schwartz, Doron; Chernin, Elena; Shahar, Ilana; Sergienko, Ruslan; Vardi, Hillel; Rosenthal, Alexander; Mushkalo, Alexander; Dizengof, Vitaly; Ben-Yakov, Gil; Abu-Freha, Naim; Munteanu, Daniella; Gaspar, Nava; Eidelman, Leslie; Segal, Arik; Fich, Alexander; Greenberg, Dan; Odes, Shmuel

    2016-09-01

    Threatening life experiences and adverse family relations are major psychosocial stressors affecting mental and physical health in chronic illnesses, but their influence in Crohn's disease (CD) is unclear. We assessed whether these stressors would predict the psychological and medical condition of CD patients. Consecutive adult CD patients completed a series of instruments including demography, Patient Harvey-Bradshaw Index (P-HBI), Short Inflammatory Bowel Disease Questionnaire (SIBDQ), short-form survey instrument (SF-36), brief symptom inventory (BSI), family assessment device (FAD), and list of threatening life experiences (LTE). Associations of FAD and LTE with P-HBI, SIBDQ, SF-36, and BSI were examined by multiple linear and quantile regression analyses. The cohort included 391 patients, mean age 38.38±13.95 years, 59.6% women, with intermediate economic status. The median scores were as follows: P-HBI 4 (2-8), FAD 1.67 (1.3-2.1), LTE 1 (0-3), SF-36 physical health 43.75 (33.7-51.0), SF-36 mental health 42.99 (34.1-51.9), and BSI-Global Severity Index 0.81 (0.4-1.4). The SIBDQ was 47.27±13.9. LTE was associated with increased P-HBI in all quantiles and FAD in the 50% quantile. FAD and LTE were associated with reduced SIBDQ (P<0.001). Higher LTE was associated with lower SF-36 physical and mental health (P<0.001); FAD was associated with reduced mental health (P<0.001). FAD and LTE were associated positively with GSI in all quantiles; age was associated negatively. CD patients with more threatening life experiences and adverse family relations were less healthy both physically and mentally. Physicians offering patients sociopsychological therapy should relate to threatening life experiences and family relations.

  17. No causal impact of serum vascular endothelial growth factor level on temporal changes in body mass index in Japanese male workers: a five-year longitudinal study.

    PubMed

    Imatoh, Takuya; Kamimura, Seiichiro; Miyazaki, Motonobu

    2017-03-01

It has been reported that adipocytes secrete vascular endothelial growth factor. Therefore, we conducted a 5-year longitudinal epidemiological study to further elucidate the association between vascular endothelial growth factor levels and temporal changes in body mass index. Our study subjects were Japanese male workers who had regular health check-ups. Vascular endothelial growth factor levels were measured at baseline. To examine the association between vascular endothelial growth factor levels and overweight, we calculated the odds ratio using a multivariate logistic regression model. Moreover, linear mixed effect models were used to assess the association between vascular endothelial growth factor level and temporal changes in body mass index during the 5-year follow-up period. Vascular endothelial growth factor levels were marginally higher in subjects with a body mass index greater than 25 kg/m² than in those with a body mass index less than 25 kg/m² (505.4 vs. 465.5 pg/mL, P = 0.1) and were weakly correlated with leptin levels (β: 0.05, P = 0.07). In multivariate logistic regression, subjects in the highest vascular endothelial growth factor quantile had a significantly increased risk for overweight compared with those in the lowest quantile (odds ratio 1.65, 95% confidence interval: 1.10-2.50), and the P for trend was significant (P = 0.003). However, the linear mixed effect model revealed that vascular endothelial growth factor levels were not associated with changes in body mass index over the 5-year period (quantile 2, β: 0.06, P = 0.46; quantile 3, β: -0.06, P = 0.45; quantile 4, β: -0.10, P = 0.22; quantile 1 as reference). Our results suggest that high vascular endothelial growth factor levels were significantly associated with overweight in Japanese males, but did not necessarily cause obesity.

  18. BAYESIAN LARGE-SCALE MULTIPLE REGRESSION WITH SUMMARY STATISTICS FROM GENOME-WIDE ASSOCIATION STUDIES1

    PubMed Central

    Zhu, Xiang; Stephens, Matthew

    2017-01-01

Bayesian methods for large-scale multiple regression provide attractive approaches to the analysis of genome-wide association studies (GWAS). For example, they can estimate heritability of complex traits, allowing for both polygenic and sparse models; and by incorporating external genomic data into the priors, they can increase power and yield new biological insights. However, these methods require access to individual genotypes and phenotypes, which are often not easily available. Here we provide a framework for performing these analyses without individual-level data. Specifically, we introduce a “Regression with Summary Statistics” (RSS) likelihood, which relates the multiple regression coefficients to univariate regression results that are often easily available. The RSS likelihood requires estimates of correlations among covariates (SNPs), which also can be obtained from public databases. We perform Bayesian multiple regression analysis by combining the RSS likelihood with previously proposed prior distributions, sampling posteriors by Markov chain Monte Carlo. In a wide range of simulations RSS performs similarly to analyses using the individual data, both for estimating heritability and detecting associations. We apply RSS to a GWAS of human height that contains 253,288 individuals typed at 1.06 million SNPs, for which analyses of individual-level data are practically impossible. Estimates of heritability (52%) are consistent with, but more precise than, previous results using subsets of these data. We also identify many previously unreported loci that show evidence for association with height in our analyses. Software is available at https://github.com/stephenslab/rss. PMID:29399241
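The core identity that RSS exploits — with standardized covariates, the expected univariate ("summary statistic") estimates are the correlation-weighted multiple-regression coefficients — can be sketched on toy, non-genetic data. This is only the underlying relation, not the RSS likelihood itself, and all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 5

# Correlated, standardized toy "genotype" columns.
A = rng.normal(size=(p, p))
X = rng.normal(size=(n, p)) @ A
X = (X - X.mean(0)) / X.std(0)

beta = np.array([0.5, 0.0, -0.3, 0.0, 0.2])    # true multiple-regression effects
y = X @ beta + rng.normal(scale=0.01, size=n)  # nearly noiseless, for clarity

# Univariate estimates, one covariate at a time (each column has variance 1,
# so X_j'X_j = n and the univariate slope is X_j'y / n).
beta_uni = X.T @ y / n

# With standardized columns, E[beta_uni] = R @ beta, R the correlation matrix.
R = X.T @ X / n
assert np.allclose(beta_uni, R @ beta, atol=0.01)
```

RSS inverts this relation probabilistically: given `beta_uni` and an estimate of `R` from a reference panel, it places a likelihood on the multiple-regression `beta`.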

  19. Evaluation of three statistical prediction models for forensic age prediction based on DNA methylation.

    PubMed

    Smeers, Inge; Decorte, Ronny; Van de Voorde, Wim; Bekaert, Bram

    2018-05-01

    DNA methylation is a promising biomarker for forensic age prediction. A challenge that has emerged in recent studies is the fact that prediction errors become larger with increasing age due to interindividual differences in epigenetic ageing rates. This phenomenon of non-constant variance or heteroscedasticity violates an assumption of the often used method of ordinary least squares (OLS) regression. The aim of this study was to evaluate alternative statistical methods that do take heteroscedasticity into account in order to provide more accurate, age-dependent prediction intervals. A weighted least squares (WLS) regression is proposed as well as a quantile regression model. Their performances were compared against an OLS regression model based on the same dataset. Both models provided age-dependent prediction intervals which account for the increasing variance with age, but WLS regression performed better in terms of success rate in the current dataset. However, quantile regression might be a preferred method when dealing with a variance that is not only non-constant, but also not normally distributed. Ultimately the choice of which model to use should depend on the observed characteristics of the data. Copyright © 2018 Elsevier B.V. All rights reserved.
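The WLS idea behind age-dependent prediction intervals can be sketched with simulated data whose error spread grows with age (this is a generic illustration, not the authors' dataset or fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 80, n)
# Hypothetical "methylation score": error spread proportional to age
y = 0.5 + 0.02 * age + rng.normal(scale=0.002 * age, size=n)

X = np.column_stack([np.ones(n), age])

# OLS: assumes constant variance, so its prediction interval has one width
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# WLS: weight each case by 1/variance, here assumed proportional to age^2
w = 1.0 / age**2
b_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Both slopes are unbiased, but the WLS error model yields age-dependent
# prediction intervals (half-width growing linearly in age) instead of a
# single fixed width for all ages.
assert abs(b_ols[1] - 0.02) < 0.005
assert abs(b_wls[1] - 0.02) < 0.005
```

Quantile regression, the other alternative the abstract evaluates, drops the distributional assumption entirely by fitting the interval endpoints (e.g. the 2.5th and 97.5th conditional quantiles) directly.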

  20. Intermediate and advanced topics in multilevel logistic regression analysis.

    PubMed

    Austin, Peter C; Merlo, Juan

    2017-09-10

Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R² measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
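For a random-intercept logistic model, the variance partition coefficient and the median odds ratio described above have simple closed forms; a stdlib-only sketch (the cluster-level variance value is hypothetical):

```python
import math
from statistics import NormalDist

def vpc(sigma2):
    """Variance partition coefficient: share of latent-scale variance at the
    cluster level, with pi^2/3 the standard logistic residual variance."""
    return sigma2 / (sigma2 + math.pi**2 / 3)

def median_odds_ratio(sigma2):
    """Median odds ratio: the median OR between two otherwise identical
    subjects drawn from two randomly chosen clusters, ordered so OR >= 1."""
    return math.exp(math.sqrt(2 * sigma2) * NormalDist().inv_cdf(0.75))

sigma2 = 0.25  # hypothetical between-hospital variance on the logit scale
print(round(vpc(sigma2), 3))               # prints 0.071
print(round(median_odds_ratio(sigma2), 2)) # prints 1.61
```

A VPC of about 7% with an MOR of about 1.6 would say that the "general contextual effect" of hospital is comparable to a moderately strong patient-level odds ratio.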

  1. Differential Language Influence on Math Achievement

    ERIC Educational Resources Information Center

    Chen, Fang

    2010-01-01

    New models are commonly designed to solve certain limitations of other ones. Quantile regression is introduced in this paper because it can provide information that a regular mean regression misses. This research aims to demonstrate its utility in the educational research and measurement field for questions that may not be detected otherwise.…

  2. How Many Subjects Does It Take to Do a Regression Analysis?

    ERIC Educational Resources Information Center

    Green, Samuel B.

    1991-01-01

    An evaluation of the rules-of-thumb used to determine the minimum number of subjects required to conduct multiple regression analyses suggests that researchers who use a rule of thumb rather than power analyses trade simplicity of use for accuracy and specificity of response. Insufficient power is likely to result. (SLD)

  3. Regression Analyses on the Butterfly Ballot Effect: A Statistical Perspective of the US 2000 Election

    ERIC Educational Resources Information Center

    Wu, Dane W.

    2002-01-01

    The year 2000 US presidential election between Al Gore and George Bush has been the most intriguing and controversial one in American history. The state of Florida was the trigger for the controversy, mainly, due to the use of the misleading "butterfly ballot". Using prediction (or confidence) intervals for least squares regression lines…

  4. Measuring disparities across the distribution of mental health care expenditures.

    PubMed

    Le Cook, Benjamin; Manning, Willard; Alegria, Margarita

    2013-03-01

Previous mental health care disparities studies predominantly compare mean mental health care use across racial/ethnic groups, leaving policymakers with little information on disparities among those with a higher level of expenditures. To identify racial/ethnic disparities among individuals at varying quantiles of mental health care expenditures. To assess whether disparities in the upper quantiles of expenditure differ by insurance status, income and education. Data were analyzed from a nationally representative sample of white, black and Latino adults 18 years and older (n=83,878). Our dependent variable was total mental health care expenditure. We measured disparities in any mental health care expenditures, disparities in mental health care expenditure at the 95th, 97.5th, and 99th expenditure quantiles of the full population using quantile regression, and at the 50th, 75th, and 95th quantiles for positive users. In the full population, we tested interaction coefficients between race/ethnicity and income, insurance, and education levels to determine whether racial/ethnic disparities in the upper quantiles differed by income, insurance and education. Significant Black-white and Latino-white disparities were identified in any mental health care expenditures. In the full population, moving up the quantiles of mental health care expenditures, Black-White and Latino-White disparities were reduced but remained statistically significant. No statistically significant disparities were found in analyses of positive users only. The magnitude of black-white disparities was smaller among those enrolled in public insurance programs compared to the privately insured and uninsured in the 97.5th and 99th quantiles. Disparities persist in the upper quantiles among those in higher income categories and after excluding psychiatric inpatient and emergency department (ED) visits.
Disparities exist in any mental health care and among those that use the most mental health care
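The machinery behind such upper-quantile comparisons is the check ("pinball") loss: the constant minimizing it is exactly the τ-th quantile, which is why fitting it with covariates models, say, the 95th percentile of spending rather than the mean. A self-contained sketch on simulated, expenditure-like data (all numbers hypothetical):

```python
import numpy as np

def pinball(y, c, tau):
    """Check loss that quantile regression minimizes, for a constant fit c."""
    r = y - c
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(2)
y = rng.lognormal(mean=4.0, sigma=1.5, size=2000)  # right-skewed spending

tau = 0.95
# Brute-force minimization over the observed values: the minimizer is an
# order statistic at (approximately) the 95th percentile of the sample.
c_star = min(y, key=lambda c: pinball(y, c, tau))
frac_below = np.mean(y < c_star)
assert frac_below <= tau <= np.mean(y <= c_star)
```

The same loss with a linear predictor `X @ beta` in place of `c` gives the quantile regressions used in the study; the skewness of the simulated data mirrors why mean-based comparisons miss what happens among high spenders.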

  5. Measuring Disparities across the Distribution of Mental Health Care Expenditures

    PubMed Central

    Cook, Benjamin Lê; Manning, Willard; Alegría, Margarita

    2013-01-01

    Background Previous mental health care disparities studies predominantly compare mean mental health care use across racial/ethnic groups, leaving policymakers with little information on disparities among those with a higher level of expenditures. Aims of the Study To identify racial/ethnic disparities among individuals at varying quantiles of mental health care expenditures. To assess whether disparities in the upper quantiles of expenditure differ by insurance status, income and education. Methods Data were analyzed from a nationally representative sample of white, black and Latino adults 18 years and older (n=83,878). Our dependent variable was total mental health care expenditure. We measured disparities in any mental health care expenditures, disparities in mental health care expenditure at the 95th, 97.5th, and 99th expenditure quantiles of the full population using quantile regression, and at the 50th, 75th, and 95th quantiles for positive users. In the full population, we tested interaction coefficients between race/ethnicity and income, insurance, and education levels to determine whether racial/ethnic disparities in the upper quantiles differed by income, insurance and education. Results Significant Black-white and Latino-white disparities were identified in any mental health care expenditures. In the full population, moving up the quantiles of mental health care expenditures, Black-White and Latino-White disparities were reduced but remained statistically significant. No statistically significant disparities were found in analyses of positive users only. The magnitude of black-white disparities was smaller among those enrolled in public insurance programs compared to the privately insured and uninsured in the 97.5th and 99th quantiles. Disparities persist in the upper quantiles among those in higher income categories and after excluding psychiatric inpatient and emergency department (ED) visits. 
Discussion Disparities exist in any mental health care and

  6. Predictors of success of external cephalic version and cephalic presentation at birth among 1253 women with non-cephalic presentation using logistic regression and classification tree analyses.

    PubMed

    Hutton, Eileen K; Simioni, Julia C; Thabane, Lehana

    2017-08-01

    Among women with a fetus with a non-cephalic presentation, external cephalic version (ECV) has been shown to reduce the rate of breech presentation at birth and cesarean birth. Compared with ECV at term, beginning ECV prior to 37 weeks' gestation decreases the number of infants in a non-cephalic presentation at birth. The purpose of this secondary analysis was to investigate factors associated with a successful ECV procedure and to present this in a clinically useful format. Data were collected as part of the Early ECV Pilot and Early ECV2 Trials, which randomized 1776 women with a fetus in breech presentation to either early ECV (34-36 weeks' gestation) or delayed ECV (at or after 37 weeks). The outcome of interest was successful ECV, defined as the fetus being in a cephalic presentation immediately following the procedure, as well as at the time of birth. The importance of several factors in predicting successful ECV was investigated using two statistical methods: logistic regression and classification and regression tree (CART) analyses. Among nulliparas, non-engagement of the presenting part and an easily palpable fetal head were independently associated with success. Among multiparas, non-engagement of the presenting part, gestation less than 37 weeks and an easily palpable fetal head were found to be independent predictors of success. These findings were consistent with results of the CART analyses. Regardless of parity, descent of the presenting part was the most discriminating factor in predicting successful ECV and cephalic presentation at birth. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  7. Distributional changes in rainfall and river flow in Sarawak, Malaysia

    NASA Astrophysics Data System (ADS)

    Sa'adi, Zulfaqar; Shahid, Shamsuddin; Ismail, Tarmizi; Chung, Eun-Sung; Wang, Xiao-Jun

    2017-11-01

Climate change may not change the rainfall mean, but it can change the variability and extremes. Therefore, it is necessary to explore possible distributional changes of rainfall characteristics over time. The objective of the present study is to assess the distributional changes in annual and northeast monsoon rainfall (November-January) and river flow in Sarawak, where small changes in rainfall or river flow variability/distribution may have severe implications for ecology and agriculture. A quantile regression-based approach was used to assess the changes in scale and location of the empirical probability density function over the period 1980-2014 at 31 observational stations. The results indicate that diverse variation patterns exist at all stations for annual rainfall, but mainly increasing quantile trends at the lower and higher quantiles for the months of January and December. A significant increase in annual rainfall is found mostly in the north and central-coastal region, and in monsoon-month rainfall in the interior and north of Sarawak. Trends in river flow data show that changes in rainfall distribution have affected the higher quantiles of river flow in monsoon months at some of the basins, and therefore more flooding. The study reveals that quantile trends can provide more information about rainfall change, which may be useful for climate change mitigation and adaptation planning.
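The quantile-trend idea — fitting separate trend slopes at different quantiles of the same series — can be sketched with a one-predictor check-loss fit. The data below are simulated to have a widening upper tail (all parameters hypothetical, not the Sarawak records):

```python
import numpy as np

def quantile_slope(t, y, tau, slopes):
    """Fit y ~ a + b*t at quantile tau by profiling: for each candidate
    slope b, the optimal intercept is the tau-quantile of y - b*t."""
    def loss(b):
        r = y - b * t
        r = r - np.quantile(r, tau)            # optimal intercept removed
        return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))
    return min(slopes, key=loss)

rng = np.random.default_rng(3)
n = 3000
t = rng.uniform(0, 35, n)                      # years since 1980, say
# Hypothetical rainfall: median drifts slowly, upper tail widens over time
y = 100.0 + 0.2 * t + rng.exponential(scale=5.0 + 0.3 * t)

grid = np.arange(-0.5, 1.5, 0.01)
b50 = quantile_slope(t, y, 0.50, grid)
b90 = quantile_slope(t, y, 0.90, grid)
assert b90 > b50 + 0.1                         # upper quantile rises faster
```

A mean trend on these data would blur the two slopes together; the quantile slopes separate a modest median drift from a much steeper trend in the wet extremes, which is the kind of distributional change the study targets.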

  8. Incense Burning during Pregnancy and Birth Weight and Head Circumference among Term Births: The Taiwan Birth Cohort Study.

    PubMed

    Chen, Le-Yu; Ho, Christine

    2016-09-01

Incense burning for rituals or religious purposes is an important tradition in many countries. However, incense smoke contains particulate matter and gas products such as carbon monoxide, sulfur, and nitrogen dioxide, which are potentially harmful to health. We analyzed the relationship between prenatal incense burning and birth weight and head circumference at birth using the Taiwan Birth Cohort Study. We also analyzed whether the associations varied by sex and along the distribution of birth outcomes. We performed ordinary least squares (OLS) and quantile regression analyses on a sample of 15,773 term births (> 37 gestational weeks; 8,216 boys and 7,557 girls) in Taiwan in 2005. The associations were estimated separately for boys and girls as well as for the population as a whole. We controlled extensively for factors that may be correlated with incense burning and birth weight and head circumference, such as parental religion, demographics, and health characteristics, as well as pregnancy-related variables. Findings from fully adjusted OLS regressions indicated that exposure to incense was associated with lower birth weight in boys (-18 g; 95% CI: -36, -0.94) but not girls (1 g; 95% CI: -17, 19; interaction p-value = 0.31). Associations with head circumference were negative for boys (-0.95 mm; 95% CI: -1.8, -0.16) and girls (-0.71 mm; 95% CI: -1.5, 0.11; interaction p-value = 0.73). Quantile regression results suggested that the negative associations were larger among the lower quantiles of birth outcomes. OLS regressions showed that prenatal incense burning was associated with lower birth weight for boys and smaller head circumference for boys and girls. The associations were more pronounced among the lower quantiles of birth outcomes. Further research is necessary to confirm whether incense burning has differential effects by sex. Chen LY, Ho C. 2016. 
Incense burning during pregnancy and birth weight and head circumference among term births: The Taiwan Birth

  9. Stolon regression

    PubMed Central

    Cherry Vogt, Kimberly S

    2008-01-01

    Many colonial organisms encrust surfaces with feeding and reproductive polyps connected by vascular stolons. Such colonies often show a dichotomy between runner-like forms, with widely spaced polyps and long stolon connections, and sheet-like forms, with closely spaced polyps and short stolon connections. Generative processes, such as rates of polyp initiation relative to rates of stolon elongation, are typically thought to underlie this dichotomy. Regressive processes, such as tissue regression and cell death, may also be relevant. In this context, we have recently characterized the process of stolon regression in a colonial cnidarian, Podocoryna carnea. Stolon regression occurs naturally in these colonies. To characterize this process in detail, high levels of stolon regression were induced in experimental colonies by treatment with reactive oxygen and reactive nitrogen species (ROS and RNS). Either treatment results in stolon regression and is accompanied by high levels of endogenous ROS and RNS as well as morphological indications of cell death in the regressing stolon. The initiating step in regression appears to be a perturbation of normal colony-wide gastrovascular flow. This suggests more general connections between stolon regression and a wide variety of environmental effects. Here we summarize our results and further discuss such connections. PMID:19704785

  10. Traffic Predictive Control: Case Study and Evaluation

    DOT National Transportation Integrated Search

    2017-06-26

    This project developed a quantile regression method for predicting future traffic flow at a signalized intersection by combining both historical and real-time data. The algorithm exploits nonlinear correlations in historical measurements and efficien...

  11. A tutorial on the piecewise regression approach applied to bedload transport data

    Treesearch

    Sandra E. Ryan; Laurie S. Porth

    2007-01-01

    This tutorial demonstrates the application of piecewise regression to bedload data to define a shift in phase of transport so that the reader may perform similar analyses on available data. The use of piecewise regression analysis implicitly recognizes different functions fit to bedload data over varying ranges of flow. The transition from primarily low rates of sand...
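A common way to implement such a piecewise fit is a continuous two-segment ("broken-stick") basis with a grid search over the breakpoint; a sketch on simulated, bedload-like data (not the tutorial's dataset, and units are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
q = rng.uniform(0, 10, n)                      # discharge, hypothetical units
# Hypothetical transport data: slow phase-1 rise, steep phase-2 rise
bk_true = 5.0
y = 0.2 * q + 1.8 * np.clip(q - bk_true, 0, None) + rng.normal(0, 0.3, n)

def sse_at(bk):
    """Continuous two-segment fit via the basis [1, q, (q - bk)_+]."""
    X = np.column_stack([np.ones(n), q, np.clip(q - bk, 0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

candidates = np.arange(1.0, 9.0, 0.05)
bk_hat = min(candidates, key=sse_at)
assert abs(bk_hat - bk_true) < 0.5             # breakpoint recovered
```

The estimated breakpoint marks the shift in transport phase, and the two fitted slopes (`beta[1]` and `beta[1] + beta[2]` at the best breakpoint) describe the low-flow and high-flow regimes.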

  12. Regression: A Bibliography.

    ERIC Educational Resources Information Center

    Pedrini, D. T.; Pedrini, Bonnie C.

    Regression, another mechanism studied by Sigmund Freud, has had much research, e.g., hypnotic regression, frustration regression, schizophrenic regression, and infra-human-animal regression (often directly related to fixation). Many investigators worked with hypnotic age regression, which has a long history, going back to Russian reflexologists.…

  13. Modeling Longitudinal Data Containing Non-Normal Within Subject Errors

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan; Glenn, Nancy L.

    2013-01-01

The mission of the National Aeronautics and Space Administration’s (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and result from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes and possibly treatments are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) by Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can directly use for human spaceflight and other longitudinal data applications, then presents statistical evidence that verifies which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including applications in the sciences, technology, engineering and mathematics fields.
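Of the four methods listed, robust regression is the simplest to sketch: a Huber-type iteratively reweighted least squares fit on simulated data with gross outliers. This is a generic stand-in for the robust-regression option, not NASA's implementation, and all data are made up:

```python
import numpy as np

def huber_irls(x, y, c=1.345, iters=50):
    """Huber robust line fit by iteratively reweighted least squares:
    large residuals (relative to a MAD-based scale) get weight < 1."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    beta = np.zeros(2)
    for _ in range(iters):
        WX = w[:, None] * X
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        w = np.minimum(1.0, c / (np.abs(r) / s + 1e-12))
    return beta

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 100)
y[-10:] += 30.0                                 # gross outliers at high x

b_ols = np.polyfit(x, y, 1)[0]                  # slope, pulled upward
b_rob = huber_irls(x, y)[1]
assert abs(b_rob - 2.0) < abs(b_ols - 2.0)      # robust fit resists outliers
```

Quantile regression achieves a similar resistance through the check loss, while lqmm additionally carries subject-level random effects for the repeated measurements.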

  14. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
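The method of least squares mentioned above has a simple closed form in the single-predictor case; a sketch with made-up dose-response numbers, checked against NumPy's polynomial fitter:

```python
import numpy as np

def least_squares_line(x, y):
    """Closed-form simple linear regression: slope = Sxy / Sxx,
    intercept chosen so the line passes through (x-bar, y-bar)."""
    xm, ym = x.mean(), y.mean()
    slope = np.sum((x - xm) * (y - ym)) / np.sum((x - xm) ** 2)
    intercept = ym - slope * xm
    return intercept, slope

# Tiny clinical-style illustration: dose (x) vs. response (y), invented data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
a, b = least_squares_line(x, y)
assert np.allclose([a, b], np.polyfit(x, y, 1)[::-1])
```

The formula makes the article's leverage point concrete: an observation far from the mean of `x` contributes a large `(x - xm)` factor to `Sxy`, so it pulls the slope disproportionately.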

  15. Seasonal effects of wind conditions on migration patterns of soaring American white pelican.

    PubMed

    Gutierrez Illan, Javier; Wang, Guiming; Cunningham, Fred L; King, D Tommy

    2017-01-01

    Energy and time expenditures are determinants of bird migration strategies. Soaring birds have developed migration strategies to minimize these costs, optimizing the use of all the available resources to facilitate their displacement. We analysed the effects of different wind factors (tailwind, turbulence, vertical updrafts) on the migratory flying strategies adopted by 24 satellite-tracked American white pelicans (Pelecanus erythrorhynchos) throughout spring and autumn in North America. We hypothesize that different wind conditions encountered along migration routes between spring and autumn induce pelicans to adopt different flying strategies and use of these wind resources. Using quantile regression and fine-scale atmospheric data, we found that the pelicans optimized the use of available wind resources, flying faster and more direct routes in spring than in autumn. They actively selected tailwinds in both spring and autumn displacements but relied on available updrafts predominantly in their spring migration, when they needed to arrive at the breeding regions. These effects varied depending on the flying speed of the pelicans. We found significant directional correlations between the pelican migration flights and wind direction. In light of our results, we suggest plasticity of migratory flight strategies by pelicans is likely to enhance their ability to cope with the effects of ongoing climate change and the alteration of wind regimes. Here, we also demonstrate the usefulness and applicability of quantile regression techniques to investigate complex ecological processes such as variable effects of atmospheric conditions on soaring migration.

  16. Seasonal effects of wind conditions on migration patterns of soaring American white pelican

    PubMed Central

    Wang, Guiming; Cunningham, Fred L.; King, D. Tommy

    2017-01-01

    Energy and time expenditures are determinants of bird migration strategies. Soaring birds have developed migration strategies to minimize these costs, optimizing the use of all the available resources to facilitate their displacement. We analysed the effects of different wind factors (tailwind, turbulence, vertical updrafts) on the migratory flying strategies adopted by 24 satellite-tracked American white pelicans (Pelecanus erythrorhynchos) throughout spring and autumn in North America. We hypothesize that different wind conditions encountered along migration routes between spring and autumn induce pelicans to adopt different flying strategies and use of these wind resources. Using quantile regression and fine-scale atmospheric data, we found that the pelicans optimized the use of available wind resources, flying faster and more direct routes in spring than in autumn. They actively selected tailwinds in both spring and autumn displacements but relied on available updrafts predominantly in their spring migration, when they needed to arrive at the breeding regions. These effects varied depending on the flying speed of the pelicans. We found significant directional correlations between the pelican migration flights and wind direction. In light of our results, we suggest plasticity of migratory flight strategies by pelicans is likely to enhance their ability to cope with the effects of ongoing climate change and the alteration of wind regimes. Here, we also demonstrate the usefulness and applicability of quantile regression techniques to investigate complex ecological processes such as variable effects of atmospheric conditions on soaring migration. PMID:29065188

  17. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verdoolaege, Geert

In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices.

  18. A python module to normalize microarray data by the quantile adjustment method.

    PubMed

    Baber, Ibrahima; Tamby, Jean Philippe; Manoukis, Nicholas C; Sangaré, Djibril; Doumbia, Seydou; Traoré, Sekou F; Maiga, Mohamed S; Dembélé, Doulaye

    2011-06-01

    Microarray technology is widely used for gene expression research targeting the development of new drug treatments. In the case of a two-color microarray, the process starts with labeling DNA samples with fluorescent markers (cyanine 635 or Cy5 and cyanine 532 or Cy3), then mixing and hybridizing them on a chemically treated glass printed with probes, or fragments of genes. The level of hybridization between a strand of labeled DNA and a probe present on the array is measured by scanning the fluorescence of spots in order to quantify the expression based on the quality and number of pixels for each spot. The intensity data generated from these scans are subject to errors due to differences in fluorescence efficiency between Cy5 and Cy3, as well as variation in human handling and quality of the sample. Consequently, data have to be normalized to correct for variations which are not related to the biological phenomena under investigation. Among many existing normalization procedures, we have implemented the quantile adjustment method using the python computer language, and produced a module which can be run via an HTML dynamic form. This module is composed of different functions for data files reading, intensity and ratio computations and visualization. The current version of the HTML form allows the user to visualize the data before and after normalization. It also gives the option to subtract background noise before normalizing the data. The output results of this module are in agreement with the results of other normalization tools. Published by Elsevier B.V.
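The quantile adjustment (quantile normalization) step described above can be sketched in a few lines of numpy. This is an illustrative re-implementation, not the module's actual code; the function name and array layout (rows = genes, columns = samples) are assumptions.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize the columns of X (rows = genes, columns = samples).

    Each column is replaced by the average sorted profile, so every sample
    ends up with the same empirical intensity distribution. Ties are broken
    arbitrarily in this simplified sketch.
    """
    X = np.asarray(X, dtype=float)
    order = np.argsort(X, axis=0)                     # per-column sort order
    ranks = np.argsort(order, axis=0)                 # rank of each value in its column
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)  # average value at each rank
    return mean_quantiles[ranks]

# Toy two-sample intensity matrix
X = np.array([[5.0, 4.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 2.0]])
Xn = quantile_normalize(X)
# After normalization, both columns hold the same set of values
```

In practice, as the record notes, background noise would be subtracted before this step.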

  19. Tools to Support Interpreting Multiple Regression in the Face of Multicollinearity

    PubMed Central

    Kraha, Amanda; Turner, Heather; Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K.

    2012-01-01

    While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses. PMID:22457655

  20. Tools to support interpreting multiple regression in the face of multicollinearity.

    PubMed

    Kraha, Amanda; Turner, Heather; Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K

    2012-01-01

    While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model, but to each other as well. Some of the techniques to interpret MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all possible subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article will review a set of techniques to interpret MR effects, identify the elements of the data on which the methods focus, and identify statistical software to support such analyses.
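Two of the indices listed in this record, beta weights and structure coefficients, can be computed directly with numpy. The simulated collinear predictors below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two deliberately collinear predictors and an outcome
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def z(v):
    """Standardize to mean 0, SD 1 so OLS slopes are beta weights."""
    return (v - v.mean()) / v.std()

X = np.column_stack([z(x1), z(x2)])
yz = z(y)

beta, *_ = np.linalg.lstsq(X, yz, rcond=None)   # beta weights
yhat = X @ beta
# Structure coefficients: correlation of each predictor with the predicted scores
structure = np.array([np.corrcoef(X[:, j], yhat)[0, 1] for j in range(X.shape[1])])
```

Under collinearity the second predictor's beta weight shrinks while its structure coefficient stays high, which is exactly the divergence between indices the article warns against interpreting from a single technique.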

  1. The impact of global signal regression on resting state correlations: Are anti-correlated networks introduced?

    PubMed Central

    Murphy, Kevin; Birn, Rasmus M.; Handwerker, Daniel A.; Jones, Tyler B.; Bandettini, Peter A.

    2009-01-01

Low-frequency fluctuations in fMRI signal have been used to map several consistent resting state networks in the brain. Using the posterior cingulate cortex as a seed region, functional connectivity analyses have found not only positive correlations in the default mode network but negative correlations in another resting state network related to attentional processes. The interpretation is that the human brain is intrinsically organized into dynamic, anti-correlated functional networks. Global variations of the BOLD signal are often considered nuisance effects and are commonly removed using a general linear model (GLM) technique. This global signal regression method has been shown to introduce negative activation measures in standard fMRI analyses. The topic of this paper is whether such a correction technique could be the cause of anti-correlated resting state networks in functional connectivity analyses. Here we show that, after global signal regression, correlation values to a seed voxel must sum to a negative value. Simulations also show that small phase differences between regions can lead to spurious negative correlation values. A combined breath-holding and visual task demonstrates that the relative phase of global and local signals can affect connectivity measures and that, experimentally, global signal regression leads to bell-shaped correlation value distributions, centred on zero. Finally, analyses of negatively correlated networks in resting state data show that global signal regression is most likely the cause of anti-correlations. These results call into question the interpretation of negatively correlated regions in the brain when using global signal regression as an initial processing step. PMID:18976716

  2. The impact of global signal regression on resting state correlations: are anti-correlated networks introduced?

    PubMed

    Murphy, Kevin; Birn, Rasmus M; Handwerker, Daniel A; Jones, Tyler B; Bandettini, Peter A

    2009-02-01

Low-frequency fluctuations in fMRI signal have been used to map several consistent resting state networks in the brain. Using the posterior cingulate cortex as a seed region, functional connectivity analyses have found not only positive correlations in the default mode network but negative correlations in another resting state network related to attentional processes. The interpretation is that the human brain is intrinsically organized into dynamic, anti-correlated functional networks. Global variations of the BOLD signal are often considered nuisance effects and are commonly removed using a general linear model (GLM) technique. This global signal regression method has been shown to introduce negative activation measures in standard fMRI analyses. The topic of this paper is whether such a correction technique could be the cause of anti-correlated resting state networks in functional connectivity analyses. Here we show that, after global signal regression, correlation values to a seed voxel must sum to a negative value. Simulations also show that small phase differences between regions can lead to spurious negative correlation values. A combined breath-holding and visual task demonstrates that the relative phase of global and local signals can affect connectivity measures and that, experimentally, global signal regression leads to bell-shaped correlation value distributions, centred on zero. Finally, analyses of negatively correlated networks in resting state data show that global signal regression is most likely the cause of anti-correlations. These results call into question the interpretation of negatively correlated regions in the brain when using global signal regression as an initial processing step.
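The paper's central claim, that after global signal regression the covariances with a seed must sum to a negative value over the remaining voxels, can be checked numerically. The toy data and variable names below are illustrative, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
T, V = 200, 50                       # timepoints, "voxels"
X = rng.normal(size=(T, V))          # toy resting-state time series

g = X.mean(axis=1)                   # global signal = spatial mean
Xc = X - X.mean(axis=0)              # demean each voxel over time
gc = g - g.mean()

beta = (Xc.T @ gc) / (gc @ gc)       # per-voxel slope on the global signal
R = Xc - np.outer(gc, beta)          # residuals after global signal regression

# Residuals sum to zero across voxels at every timepoint, so covariances
# with any seed voxel sum to zero over all voxels -- forcing net-negative
# covariance with the non-seed voxels.
seed = R[:, 0]
cov_with_seed = R.T @ seed / (T - 1)
```

Because the sum over all voxels includes the seed's own (positive) variance, the total over the other voxels is necessarily negative, just as the abstract states.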

  3. Wildfire Selectivity for Land Cover Type: Does Size Matter?

    PubMed Central

    Barros, Ana M. G.; Pereira, José M. C.

    2014-01-01

Previous research has shown that fires burn certain land cover types disproportionally to their abundance. We used quantile regression to study land cover proneness to fire as a function of fire size, under the hypothesis that they are inversely related, for all land cover types. Using five years of fire perimeters, we estimated conditional quantile functions for lower (avoidance) and upper (preference) quantiles of fire selectivity for five land cover types - annual crops, evergreen oak woodlands, eucalypt forests, pine forests and shrublands. The slope of significant regression quantiles describes the rate of change in fire selectivity (avoidance or preference) as a function of fire size. We used Monte-Carlo methods to randomly permute fires in order to obtain a distribution of fire selectivity due to chance. This distribution was used to test the null hypotheses that 1) mean fire selectivity does not differ from that obtained by randomly relocating observed fire perimeters; 2) that land cover proneness to fire does not vary with fire size. Our results show that land cover proneness to fire is higher for shrublands and pine forests than for annual crops and evergreen oak woodlands. As fire size increases, selectivity decreases for all land cover types tested. Moreover, the rate of change in selectivity with fire size is higher for preference than for avoidance. Comparison between observed and randomized data led us to reject both null hypotheses tested (α = 0.05) and to conclude it is very unlikely the observed values of fire selectivity and change in selectivity with fire size are due to chance. PMID:24454747
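Conditional quantile functions like those fitted here are estimated by minimizing the asymmetric absolute-error ("check") loss, which reduces to a linear program. A minimal numpy/scipy sketch on simulated heteroscedastic data (all names and data illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """Fit the tau-th conditional quantile of y given X via linear programming."""
    n, p = X.shape
    # Variables: coefficients b (free) and residual parts u, v >= 0 with X b + u - v = y
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(2)
n = 300
x = rng.uniform(0, 10, n)
# Heteroscedastic response: the spread grows with x, so upper quantiles steepen
y = 1.0 + 0.5 * x + rng.normal(scale=0.2 + 0.3 * x, size=n)
X = np.column_stack([np.ones(n), x])

b50 = quantile_regression(X, y, 0.50)   # median slope
b90 = quantile_regression(X, y, 0.90)   # upper-quantile slope
```

With heteroscedastic errors the 90th-quantile slope exceeds the median slope, mirroring how the avoidance and preference quantiles in this record carry different rates of change.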

  4. Differential effects of dietary diversity and maternal characteristics on linear growth of children aged 6-59 months in sub-Saharan Africa: a multi-country analysis.

    PubMed

    Amugsi, Dickson A; Dimbuene, Zacharie T; Kimani-Murage, Elizabeth W; Mberu, Blessing; Ezeh, Alex C

    2017-04-01

To investigate the differential effects of dietary diversity (DD) and maternal characteristics on child linear growth at different points of the conditional distribution of height-for-age Z-score (HAZ) in sub-Saharan Africa. Secondary analysis of data from nationally representative cross-sectional samples of singleton children aged 0-59 months, born to mothers aged 15-49 years. The outcome variable was child HAZ. Quantile regression was used to perform the multivariate analysis. The most recent Demographic and Health Surveys from Ghana, Nigeria, Kenya, Mozambique and Democratic Republic of Congo (DRC). The present analysis was restricted to children aged 6-59 months (n = 31 604). DD was associated positively with HAZ in the first four quantiles (5th, 10th, 25th and 50th) and the highest quantile (90th) in Nigeria. The largest effect occurred at the very bottom (5th quantile) and the very top (90th quantile) of the conditional HAZ distribution. In DRC, DD was significantly and positively associated with HAZ in the two lower quantiles (5th, 10th). The largest effects of maternal education occurred at the lower end of the conditional HAZ distribution in Ghana, Nigeria and DRC. Maternal BMI and height also had positive effects on HAZ at different points of the conditional distribution of HAZ. Our analysis shows that the association between DD and maternal factors and HAZ differs along the conditional HAZ distribution. Intervention measures need to take into account the heterogeneous effect of the determinants of child nutritional status along the different percentiles of the HAZ distribution.

  5. No Evidence of Reciprocal Associations between Daily Sleep and Physical Activity.

    PubMed

    Mitchell, Jonathan A; Godbole, Suneeta; Moran, Kevin; Murray, Kate; James, Peter; Laden, Francine; Hipp, J Aaron; Kerr, Jacqueline; Glanz, Karen

    2016-10-01

This study aimed to determine whether physical activity patterns are associated with sleep later at night and if nighttime sleep is associated with physical activity patterns the next day among adult women. Women (N = 353) living throughout the United States wore a wrist and a hip accelerometer for 7 d. Total sleep time (TST, hours per night) and sleep efficiency (SE, %) were estimated from the wrist accelerometer, and moderate to vigorous physical activity (MVPA, >1040 counts per minute, h/d) and sedentary behavior (SB, <100 counts per minute, h/d) were estimated from the hip accelerometer. Mixed-effects models adjusted for age, race, body mass index, education, employment, marital status, health status, and hip accelerometer wear time were used to analyze the data. Follow-up analyses using quantile regression were used to investigate associations among women with below average TST and MVPA and above average SB. The average age of our sample was 55.5 yr (SD = 10.2 yr). The majority of participants were White (79%) and married (72%), and half were employed full time (49%). The participants spent on average 8.9 and 1.1 h/d in SB and MVPA, respectively, and 6.8 h per night asleep. No associations were observed between MVPA and SB with nighttime TST or SE. There were no associations between nighttime TST and SE with MVPA or SB the next day. The findings were the same in the quantile regression analyses. In free-living adult women, accelerometry-estimated nighttime sleep and physical activity patterns were not associated with one another. On the basis of our observational study involving a sample of adult women, higher physical activity will not necessarily improve sleep at night on a day-to-day basis (and vice versa).

  6. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
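The conditional exceedance probability described here is exactly what a logistic regression models. A self-contained Newton-Raphson (IRLS) fit on simulated rain data is sketched below; the threshold, covariate, and coefficient values are all illustrative, not from the paper.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)                    # IRLS weights
        H = X.T @ (W[:, None] * X)           # observed information matrix
        b += np.linalg.solve(H, X.T @ (y - p))
    return b

rng = np.random.default_rng(3)
n = 2000
area_rain = rng.gamma(shape=2.0, scale=2.0, size=n)   # covariate: area-averaged rain rate
p_true = 1.0 / (1.0 + np.exp(-(-3.0 + 0.8 * area_rain)))
exceeds = (rng.random(n) < p_true).astype(float)      # pixel rain rate above threshold?

X = np.column_stack([np.ones(n), area_rain])
b = fit_logistic(X, exceeds)   # recovers roughly (-3.0, 0.8)
```

The fitted curve then gives, per pixel, the probability that rain rate exceeds the fixed threshold given the covariates, from which the fractional rainy area follows.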

  7. Evaluating differential effects using regression interactions and regression mixture models

    PubMed Central

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results to those obtained using an interaction term in linear regression. The research questions which each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903

  8. An Investigation of Factors Influencing Nurses' Clinical Decision-Making Skills.

    PubMed

    Wu, Min; Yang, Jinqiu; Liu, Lingying; Ye, Benlan

    2016-08-01

    This study aims to investigate the influencing factors on nurses' clinical decision-making (CDM) skills. A cross-sectional nonexperimental research design was conducted in the medical, surgical, and emergency departments of two university hospitals, between May and June 2014. We used a quantile regression method to identify the influencing factors across different quantiles of the CDM skills distribution and compared the results with the corresponding ordinary least squares (OLS) estimates. Our findings revealed that nurses were best at the skills of managing oneself. Educational level, experience, and the total structural empowerment had significant positive impacts on nurses' CDM skills, while the nurse-patient relationship, patient care and interaction, formal empowerment, and information empowerment were negatively correlated with nurses' CDM skills. These variables explained no more than 30% of the variance in nurses' CDM skills and mainly explained the lower quantiles of nurses' CDM skills distribution. © The Author(s) 2016.

  9. Heterogeneity in Smokers' Responses to Tobacco Control Policies.

    PubMed

    Nesson, Erik

    2017-02-01

This paper uses unconditional quantile regression to estimate whether smokers' responses to tobacco control policies change across the distribution of smoking levels. I measure smoking behavior with the number of cigarettes smoked per day and also with serum cotinine levels, a continuous biomarker of nicotine exposure, using individual-level repeated cross-section data from the National Health and Nutrition Examination Surveys. I find that the cigarette taxes lead to reductions in both the number of cigarettes smoked per day and in smokers' cotinine levels. These reductions are most pronounced in the middle quantiles of both distributions in terms of marginal effects, but most pronounced in the lower quantiles in terms of tax elasticities. I do not find that higher cigarette taxes lead to statistically significant changes in the amount of nicotine smokers ingest from each cigarette. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Prediction by regression and intrarange data scatter in surface-process studies

    USGS Publications Warehouse

    Toy, T.J.; Osterkamp, W.R.; Renard, K.G.

    1993-01-01

Modeling is a major component of contemporary earth science, and regression analysis occupies a central position in the parameterization, calibration, and validation of geomorphic and hydrologic models. Although this methodology can be used in many ways, we are primarily concerned with the prediction of values for one variable from another variable. Examination of the literature reveals considerable inconsistency in the presentation of the results of regression analysis and the occurrence of patterns in the scatter of data points about the regression line. Both circumstances confound utilization and evaluation of the models. Statisticians are well aware of various problems associated with the use of regression analysis and offer improved practices; often, however, their guidelines are not followed. After a review of the aforementioned circumstances and until standard criteria for model evaluation become established, we recommend, as a minimum, inclusion of scatter diagrams, the standard error of the estimate, and sample size in reporting the results of regression analyses for most surface-process studies. © 1993 Springer-Verlag.
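Of the minimum reporting items recommended here, the standard error of the estimate is the easiest to overlook; it is simply the residual standard deviation adjusted for the number of fitted parameters. A numpy sketch on simulated data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 60
x = rng.uniform(0, 100, n)
y = 3.0 + 0.2 * x + rng.normal(scale=2.0, size=n)   # true residual SD = 2.0

# OLS fit
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Standard error of the estimate, with p = 2 fitted parameters
see = np.sqrt(resid @ resid / (n - 2))
# Report see alongside the scatter diagram and the sample size n
```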

  11. Gene regulatory network inference from multifactorial perturbation data using both regression and correlation analyses.

    PubMed

    Xiong, Jie; Zhou, Tong

    2012-01-01

An important problem in systems biology is to reconstruct gene regulatory networks (GRNs) from experimental data and other a priori information. The DREAM project offers some types of experimental data, such as knockout data, knockdown data, time series data, etc. Among them, multifactorial perturbation data are easier and less expensive to obtain than other types of experimental data and are thus more common in practice. In this article, a new algorithm is presented for the inference of GRNs using the DREAM4 multifactorial perturbation data. The GRN inference problem among n genes is decomposed into n different regression problems. In each of the regression problems, the expression level of a target gene is predicted solely from the expression level of a potential regulation gene. For different potential regulation genes, different weights for a specific target gene are constructed by using the sum of squared residuals and the Pearson correlation coefficient. These weights are then normalized to reflect differences in the effort of regulating distinct genes. By appropriately choosing the parameters of the power law, we construct a 0-1 integer programming problem. By solving this problem, direct regulation genes for an arbitrary gene can be estimated, and the normalized weight of a gene is modified on the basis of the estimation results about the existence of direct regulations to it. These normalized and modified weights are used to rank the likelihood of the existence of a corresponding direct regulation. Computation results with the DREAM4 In Silico Size 100 Multifactorial subchallenge show that the suggested algorithm can even outperform the best-performing team. Using the real data provided by the DREAM5 Network Inference Challenge, its estimation performance ranks third. Furthermore, the high precision of the obtained most reliable predictions shows the suggested algorithm may be helpful in guiding

  12. Establishment of Biological Reference Intervals and Reference Curve for Urea by Exploratory Parametric and Non-Parametric Quantile Regression Models.

    PubMed

    Sarkar, Rajarshi

    2013-07-01

The validity of the entire renal function tests as a diagnostic tool depends substantially on the Biological Reference Interval (BRI) of urea. Establishment of BRI of urea is difficult partly because exclusion criteria for selection of reference data are quite rigid and partly due to the compartmentalization considerations regarding age and sex of the reference individuals. Moreover, construction of Biological Reference Curve (BRC) of urea is imperative to highlight the partitioning requirements. This a priori study examines the data collected by measuring serum urea of 3202 age- and sex-matched individuals, aged between 1 and 80 years, by a kinetic UV Urease/GLDH method on a Roche Cobas 6000 auto-analyzer. Mann-Whitney U test of the reference data confirmed the partitioning requirement by both age and sex. Further statistical analysis revealed the incompatibility of the data for a proposed parametric model. Hence the data was non-parametrically analysed. BRI was found to be identical for both sexes till the 2nd decade, and the BRI for males increased progressively from the 6th decade onwards. Four non-parametric models were postulated for construction of BRC: Gaussian kernel, double kernel, local mean and local constant, of which the last one generated the best-fitting curves. Clinical decision making should become easier and diagnostic implications of renal function tests should become more meaningful if this BRI is followed and the BRC is used as a desktop tool in conjunction with similar data for serum creatinine.
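A nonparametric reference interval of the kind established here is, at its simplest, the central 95% of the reference data within each partition. A numpy sketch with simulated urea-like values (partitions and distribution parameters are illustrative, not the study's data):

```python
import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    """Nonparametric biological reference interval: central 95% of the data."""
    return np.percentile(values, [low, high])

rng = np.random.default_rng(4)
# Skewed, lognormal-like toy values for two partitions; the upward shift
# mimics the higher urea reported for older males
younger = rng.lognormal(mean=3.0, sigma=0.25, size=800)
older_male = rng.lognormal(mean=3.2, sigma=0.25, size=800)

ri_young = reference_interval(younger)
ri_old = reference_interval(older_male)
```

A partitioning test such as the Mann-Whitney U used in the study decides whether such separate intervals are warranted; smoothing the percentiles across age then yields the reference curve.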

  13. Income elasticity of health expenditures in Iran.

    PubMed

    Zare, Hossein; Trujillo, Antonio J; Leidman, Eva; Buttorff, Christine

    2013-09-01

    Because of its policy implications, the income elasticity of health care expenditures is a subject of much debate. Governments may have an interest in subsidizing the care of those with low income. Using more than two decades of data from the Iran Household Expenditure and Income Survey, this article investigates the relationship between income and health care expenditure in urban and rural areas in Iran, a resource rich, upper-middle-income country. We implemented spline and quantile regression techniques to obtain a more robust description of the relationship of interest. This study finds non-uniform effects of income on health expenditures. Although the results show that health care is a necessity for all income brackets, spline regression estimates indicate that the income elasticity is lowest for the poorest Iranians in urban and rural areas. This suggests that they will show low flexibility in medical expenses as income fluctuates. Further, a quantile regression model assessing the effect of income at different level of medical expenditure suggests that households with lower medical expenses are less elastic.

  14. Nonparametric methods for drought severity estimation at ungauged sites

    NASA Astrophysics Data System (ADS)

    Sadri, S.; Burn, D. H.

    2012-12-01

    The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
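The jackknife evaluation used to compare the three estimators can be sketched generically: refit the model with each site left out and score the held-out prediction. Here it scores ordinary least squares on toy catchment descriptors (all data and names illustrative):

```python
import numpy as np

def jackknife_rmse(X, y, fit, predict):
    """Leave-one-out (jackknife) RMSE for any regression procedure."""
    n = len(y)
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])              # refit without site i
        errs[i] = predict(model, X[i:i + 1])[0] - y[i]
    return np.sqrt(np.mean(errs ** 2))

rng = np.random.default_rng(5)
n = 40
# Toy catchment descriptors -> drought-severity quantile
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.1, size=n)

ols_fit = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]
ols_predict = lambda beta, A: A @ beta
rmse = jackknife_rmse(X, y, ols_fit, ols_predict)
```

Swapping `fit`/`predict` for a radial basis function network or LS-SVR gives the same site-wise comparison the paper performs.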

  15. Incense Burning during Pregnancy and Birth Weight and Head Circumference among Term Births: The Taiwan Birth Cohort Study

    PubMed Central

    Chen, Le-Yu; Ho, Christine

    2016-01-01

Background: Incense burning for rituals or religious purposes is an important tradition in many countries. However, incense smoke contains particulate matter and gas products such as carbon monoxide, sulfur, and nitrogen dioxide, which are potentially harmful to health. Objectives: We analyzed the relationship between prenatal incense burning and birth weight and head circumference at birth using the Taiwan Birth Cohort Study. We also analyzed whether the associations varied by sex and along the distribution of birth outcomes. Methods: We performed ordinary least squares (OLS) and quantile regression analysis on a sample of 15,773 term births (> 37 gestational weeks; 8,216 boys and 7,557 girls) in Taiwan in 2005. The associations were estimated separately for boys and girls as well as for the population as a whole. We controlled extensively for factors that may be correlated with incense burning and birth weight and head circumference, such as parental religion, demographics, and health characteristics, as well as pregnancy-related variables. Results: Findings from fully adjusted OLS regressions indicated that exposure to incense was associated with lower birth weight in boys (–18 g; 95% CI: –36, –0.94) but not girls (1 g; 95% CI: –17, 19; interaction p-value = 0.31). Associations with head circumference were negative for boys (–0.95 mm; 95% CI: –1.8, –0.16) and girls (–0.71 mm; 95% CI: –1.5, 0.11; interaction p-value = 0.73). Quantile regression results suggested that the negative associations were larger among the lower quantiles of birth outcomes. Conclusions: OLS regressions showed that prenatal incense burning was associated with lower birth weight for boys and smaller head circumference for boys and girls. The associations were more pronounced among the lower quantiles of birth outcomes. Further research is necessary to confirm whether incense burning has differential effects by sex. Citation: Chen LY, Ho C. 2016. Incense burning during

  16. Socioeconomic and ethnic inequalities in exposure to air and noise pollution in London.

    PubMed

    Tonne, Cathryn; Milà, Carles; Fecht, Daniela; Alvarez, Mar; Gulliver, John; Smith, James; Beevers, Sean; Ross Anderson, H; Kelly, Frank

    2018-06-01

Transport-related air and noise pollution, exposures linked to adverse health outcomes, varies within cities potentially resulting in exposure inequalities. Relatively little is known regarding inequalities in personal exposure to air pollution or transport-related noise. Our objectives were to quantify socioeconomic and ethnic inequalities in London in 1) air pollution exposure at residence compared to personal exposure; and 2) transport-related noise at residence from different sources. We used individual-level data from the London Travel Demand Survey (n = 45,079) between 2006 and 2010. We modeled residential (CMAQ-urban) and personal (London Hybrid Exposure Model) particulate matter <2.5 μm and nitrogen dioxide (NO2), road-traffic noise at residence (TRANEX) and identified those within 50 dB noise contours of railways and Heathrow airport. We analyzed relationships between household income, area-level income deprivation and ethnicity with air and noise pollution using quantile and logistic regression. We observed inverse patterns in inequalities in air pollution when estimated at residence versus personal exposure with respect to household income (categorical, 8 groups). Compared to the lowest income group (<£10,000), the highest group (>£75,000) had lower residential NO2 (-1.3 (95% CI -2.1, -0.6) μg/m3 in the 95th exposure quantile) but higher personal NO2 exposure (1.9 (95% CI 1.6, 2.3) μg/m3 in the 95th quantile), which was driven largely by transport mode and duration. Inequalities in residential exposure to NO2 with respect to area-level deprivation were larger at lower exposure quantiles (e.g. estimate for NO2 5.1 (95% CI 4.6, 5.5) at quantile 0.15 versus 1.9 (95% CI 1.1, 2.6) at quantile 0.95), reflecting low-deprivation, high residential NO2 areas in the city centre. Air pollution exposure at residence consistently overestimated personal exposure; this overestimation varied with age, household income, and area-level income

  17. Analyses of polycyclic aromatic hydrocarbon (PAH) and chiral-PAH analogues-methyl-β-cyclodextrin guest-host inclusion complexes by fluorescence spectrophotometry and multivariate regression analysis.

    PubMed

    Greene, LaVana; Elzey, Brianda; Franklin, Mariah; Fakayode, Sayo O

    2017-03-05

    The negative health impact of polycyclic aromatic hydrocarbons (PAHs) and differences in pharmacological activity of enantiomers of chiral molecules in humans highlight the need for analysis of PAHs and their chiral analogue molecules in humans. Herein, the first use of cyclodextrin guest-host inclusion complexation, fluorescence spectrophotometry, and a chemometric approach to PAH (anthracene) and chiral-PAH analogue derivative (1-(9-anthryl)-2,2,2-trifluoroethanol (TFE)) analyses is reported. The binding constants (Kb), stoichiometry (n), and thermodynamic properties (Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS)) of anthracene and enantiomers of TFE-methyl-β-cyclodextrin (Me-β-CD) guest-host complexes were also determined. Chemometric partial-least-square (PLS) regression analysis of emission spectra data of Me-β-CD-guest-host inclusion complexes was used for the determination of anthracene and TFE enantiomer concentrations in Me-β-CD-guest-host inclusion complex samples. The values of the calculated Kb and negative ΔG suggest the thermodynamic favorability of anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexation reactions. However, anthracene-Me-β-CD and enantiomeric TFE-Me-β-CD inclusion complexations showed notable differences in binding affinity behaviors and thermodynamic properties. The PLS regression analysis resulted in squared correlation coefficients of 0.997530 or better and a low LOD of 3.81×10⁻⁷ M for anthracene and 3.48×10⁻⁸ M for TFE enantiomers at physiological conditions. Most importantly, PLS regression accurately determined the anthracene and TFE enantiomer concentrations with an average low error of 2.31% for anthracene, 4.44% for R-TFE and 3.60% for S-TFE. The results of the study are highly significant because of the method's high sensitivity and accuracy for analysis of PAH and chiral PAH analogue derivatives without the need of an expensive chiral column, enantiomeric resolution, or use of a polarized

  18. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set.
As expected, predictive accuracy of all MAPs for
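
    The simplest of these procedures, single-factor regression against the regional prediction P, can be sketched as follows. This is an illustrative sketch only: the data values and function names are hypothetical, not taken from the study.

    ```python
    # MAP-1F-P sketch: regress locally observed storm loads on the regional-model
    # prediction P, then use the fitted line to adjust predictions at new sites.

    def fit_map_1f_p(p_regional, observed):
        """Ordinary least squares of observed load on regional prediction P."""
        n = len(p_regional)
        mean_p = sum(p_regional) / n
        mean_y = sum(observed) / n
        sxy = sum((p - mean_p) * (y - mean_y) for p, y in zip(p_regional, observed))
        sxx = sum((p - mean_p) ** 2 for p in p_regional)
        slope = sxy / sxx
        intercept = mean_y - slope * mean_p
        return intercept, slope

    def adjusted_prediction(coeffs, p_new):
        intercept, slope = coeffs
        return intercept + slope * p_new

    # Toy calibration data: the regional model overpredicts local loads by ~50%.
    p = [10.0, 20.0, 30.0, 40.0]
    obs = [5.2, 10.1, 14.8, 20.3]
    coeffs = fit_map_1f_p(p, obs)
    print(adjusted_prediction(coeffs, 25.0))  # approximately 12.6
    ```

    The other MAPs extend this idea by adding local explanatory variables or by weighting the regional and local predictions.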

  19. Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

    2013-09-01

    The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
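
    A minimal sketch of the quantile-to-quantile transform named above, assuming equal-length forecast and observed climatologies (illustrative data, not the ECMWF reforecasts):

    ```python
    import bisect

    def q2q_transform(value, forecast_clim, observed_clim):
        """Map a raw forecast value to the observed-climatology value at the
        same empirical quantile, removing unconditional (mean) bias."""
        fc = sorted(forecast_clim)
        ob = sorted(observed_clim)
        # rank of the forecast value within its own climatology
        idx = bisect.bisect_left(fc, value)
        # read off the same rank in the observed climatology (clamped)
        return ob[min(idx, len(ob) - 1)]

    # Toy climatologies: forecasts run 2 units too wet relative to observations.
    fc = [3.0, 4.0, 5.0, 6.0, 7.0]
    ob = [1.0, 2.0, 3.0, 4.0, 5.0]
    print(q2q_transform(5.0, fc, ob))  # 3.0
    ```

    Because the mapping depends only on ranks, it corrects the unconditional bias discussed in the abstract but cannot, by itself, improve conditional skill.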

  20. Implementation and Evaluation of the Streamflow Statistics (StreamStats) Web Application for Computing Basin Characteristics and Flood Peaks in Illinois

    USGS Publications Warehouse

    Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.

    2010-01-01

    difference is not large (less than one-tenth of the average prediction error for the regression-equation estimates). The sensitivity of the flood-quantile estimates to differences in the computed basin characteristics are determined and presented in tables. A test of usage consistency was conducted by having at least 7 new users compute flood quantile estimates at 27 locations. The average maximum deviation of the estimate from the mode value at each site was 1.31 percent after four mislocated sites were removed. A comparison of manual 100-year flood-quantile computations with ILSS at 34 sites indicated no statistically significant difference. ILSS appears to be an accurate, reliable, and effective tool for flood-quantile estimates.

  1. Association between regression and self injury among children with autism.

    PubMed

    Lance, Eboni I; York, Janet M; Lee, Li-Ching; Zimmerman, Andrew W

    2014-02-01

    Self injurious behaviors (SIBs) are challenging clinical problems in individuals with autism spectrum disorders (ASDs). This study is one of the first and largest to utilize inpatient data to examine the associations between autism, developmental regression, and SIBs. Medical records of 125 neurobehavioral hospitalized patients with diagnoses of ASDs and SIBs between 4 and 17 years of age were reviewed. Data were collected from medical records on the type and frequency of SIBs and a history of language, social, or behavioral regression during development. The children with a history of any type of developmental regression (social, behavioral, or language) were more likely to have a diagnosis of autistic disorder than other ASD diagnoses. There were no significant differences in the occurrence of self injurious or other problem behaviors (such as aggression or disruption) between children with and without regression. Regression may influence the diagnostic considerations in ASDs but does not seem to influence the clinical phenotype with regard to behavioral issues. Additional data analyses explored the frequencies and subtypes of SIBs and other medical diagnoses in ASDs, with intellectual disability and disruptive behavior disorder found most commonly. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
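
    The continuity and smoothness side conditions described above are what a truncated power basis encodes implicitly. A hypothetical sketch for a quadratic spline with one knot (not the authors' SAS/S-plus implementation):

    ```python
    def truncated_power_basis(t, knots, order=2):
        """Design-matrix row for a degree-`order` regression spline:
        global polynomial terms plus one truncated power term per knot.
        The truncated terms vanish below each knot, which enforces
        continuity of the spline and its first (order - 1) derivatives."""
        row = [t ** d for d in range(order + 1)]
        row += [max(t - k, 0.0) ** order for k in knots]
        return row

    # Below the knot at t = 2 the truncated term is zero; above it, active.
    print(truncated_power_basis(1.0, [2.0]))  # [1.0, 1.0, 1.0, 0.0]
    print(truncated_power_basis(3.0, [2.0]))  # [1.0, 3.0, 9.0, 1.0]
    ```

    Varying the polynomial order between segments, as the paper proposes, amounts to mixing bases of different degrees subject to the same join conditions.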

  3. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield

    PubMed Central

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-01-01

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723

  4. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    PubMed

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. The statistical adjustment does not make it possible to correct the pixels of SPP data for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach, without taking into account the daily gauge data of the pixel to be corrected, but the daily gauge data from surrounding pixels. In this case, a spatial analysis must be involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that analysis scale variation reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
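
    The two validation statistics named in the abstract, relative bias and relative RMSE, can be sketched as plain definitions (the paper may use slightly different normalizations; the data here are illustrative):

    ```python
    import math

    def rbias(estimates, observations):
        """Relative bias: total error as a fraction of total observed rainfall."""
        return sum(e - o for e, o in zip(estimates, observations)) / sum(observations)

    def rrmse(estimates, observations):
        """Relative RMSE: root-mean-square error divided by the observed mean."""
        n = len(observations)
        rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(estimates, observations)) / n)
        return rmse / (sum(observations) / n)

    obs = [10.0, 20.0, 30.0, 40.0]   # gauge measurements (toy values)
    est = [12.0, 18.0, 33.0, 41.0]   # corrected SPP estimates (toy values)
    print(round(rbias(est, obs), 3))  # 0.04
    print(round(rrmse(est, obs), 3))  # 0.085
    ```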

  5. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound. In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction and decomposition based approaches [10, 12] as a complementary reduction technique. 
An

  6. Education and inequalities in risk scores for coronary heart disease and body mass index: evidence for a population strategy.

    PubMed

    Liu, Sze Yan; Kawachi, Ichiro; Glymour, M Maria

    2012-09-01

    Concerns have been raised that education may have greater benefits for persons at high risk of coronary heart disease (CHD) than for those at low risk. We estimated the association of education (less than high school, high school, or college graduates) with 10-year CHD risk and body mass index (BMI), using linear and quantile regression models, in the following two nationally representative datasets: the 2006 wave of the Health and Retirement Survey and the 2003-2008 National Health and Nutrition Examination Survey (NHANES). Higher educational attainment was associated with lower 10-year CHD risk for all groups. However, the magnitude of this association varied considerably across quantiles for some subgroups. For example, among women in NHANES, a high school degree was associated with 4% (95% confidence interval = -9% to 1%) and 17% (-24% to -8%) lower CHD risk in the 10th and 90th percentiles, respectively. For BMI, a college degree was associated with uniform decreases across the distribution for women, but with varying increases for men. Compared with those who had not completed high school, male college graduates in the NHANES sample had a BMI that was 6% greater (2% to 11%) at the 10th percentile of the BMI distribution and 7% lower (-10% to -3%) at the 90th percentile (ie, overweight/obese). Estimates from the Health and Retirement Survey sample and the marginal quantile regression models showed similar patterns. Conventional regression methods may mask important variations in the associations between education and CHD risk.
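
    The variation-across-quantiles idea in this study rests on the check (pinball) loss that quantile regression minimizes. A toy intercept-only sketch with illustrative numbers (for a data set, the optimal constant under this loss is attained at a data point, so a search over the observations suffices):

    ```python
    def pinball(tau, residual):
        """Check loss: asymmetric absolute error minimized by the tau-quantile."""
        return tau * residual if residual >= 0 else (tau - 1) * residual

    def fit_constant(tau, ys):
        """Best constant fit under the check loss, searched over the data."""
        return min(ys, key=lambda c: sum(pinball(tau, y - c) for y in ys))

    ys = [1, 2, 3, 4, 100]
    print(fit_constant(0.5, ys))  # 3: the median, robust to the outlier
    print(fit_constant(0.9, ys))  # 100: the upper tail follows the outlier
    ```

    Replacing the constant with a linear predictor of education gives exactly the quantile regressions used in the abstract: different tau values expose associations at different points of the CHD-risk or BMI distribution that a single conditional-mean regression would average away.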

  7. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
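
    The "well-developed mathematical framework that yields unique solutions" is the normal equations. A compact sketch with toy data (not from the article), solved by Gaussian elimination:

    ```python
    def solve(A, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    def fit_mlr(X, y):
        """Multiple linear regression: solve the normal equations X'X b = X'y
        for an intercept plus one coefficient per predictor column."""
        Z = [[1.0] + list(row) for row in X]
        n, p = len(Z), len(Z[0])
        XtX = [[sum(Z[i][a] * Z[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
        Xty = [sum(Z[i][a] * y[i] for i in range(n)) for a in range(p)]
        return solve(XtX, Xty)

    # Toy data generated exactly as y = 1 + 2*x1 + 3*x2.
    X = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
    y = [1, 3, 4, 6, 8]
    print([round(c, 6) for c in fit_mlr(X, y)])  # [1.0, 2.0, 3.0]
    ```

    Multicollinearity, discussed in the article, shows up here as a (near-)singular X'X matrix, which is why the coefficients depend so strongly on the choice of predictor variables.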

  8. Normalization Approaches for Removing Systematic Biases Associated with Mass Spectrometry and Label-Free Proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callister, Stephen J.; Barry, Richard C.; Adkins, Joshua N.

    2006-02-01

    Central tendency, linear regression, locally weighted regression, and quantile techniques were investigated for normalization of peptide abundance measurements obtained from high-throughput liquid chromatography-Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR MS). Arbitrary abundances of peptides were obtained from three sample sets, including a standard protein sample, two Deinococcus radiodurans samples taken from different growth phases, and two mouse striatum samples from control and methamphetamine-stressed mice (strain C57BL/6). The selected normalization techniques were evaluated in both the absence and presence of biological variability by estimating extraneous variability prior to and following normalization. Prior to normalization, replicate runs from each sample set were observed to be statistically different, while following normalization replicate runs were no longer statistically different. Although all techniques reduced systematic bias, assigned ranks among the techniques revealed significant trends. For most LC-FTICR MS analyses, linear regression normalization ranked either first or second among the four techniques, suggesting that this technique was more generally suitable for reducing systematic biases.
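
    Of the four techniques, quantile normalization has the most compact sketch: each run's values are replaced by the across-run mean at the same rank, forcing all runs onto a common distribution. This is an illustrative implementation (ties handled naively, not the authors' code):

    ```python
    def quantile_normalize(runs):
        """Quantile normalization of equal-length abundance runs: value at
        rank r in every run is replaced by the mean of the rank-r values."""
        n = len(runs[0])
        sorted_runs = [sorted(r) for r in runs]
        # reference distribution: mean across runs at each rank
        reference = [sum(s[i] for s in sorted_runs) / len(runs) for i in range(n)]
        normalized = []
        for r in runs:
            order = sorted(range(n), key=lambda i: r[i])
            new = [0.0] * n
            for rank, i in enumerate(order):
                new[i] = reference[rank]
            normalized.append(new)
        return normalized

    # Two toy replicate runs with a multiplicative bias between them.
    print(quantile_normalize([[1, 2, 3], [2, 4, 6]]))
    # [[1.5, 3.0, 4.5], [1.5, 3.0, 4.5]]
    ```

    Linear regression normalization, the technique that ranked best in the study, instead fits one run's abundances against a reference run and removes the fitted trend.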

  9. The impact of the 2008 financial crisis on food security and food expenditures in Mexico: a disproportionate effect on the vulnerable

    PubMed Central

    Vilar-Compte, Mireya; Sandoval-Olascoaga, Sebastian; Bernal-Stuart, Ana; Shimoga, Sandhya; Vargas-Bustamante, Arturo

    2015-01-01

    Objective The present paper investigated the impact of the 2008 financial crisis on food security in Mexico and how it disproportionally affected vulnerable households. Design A generalized ordered logistic regression was estimated to assess the impact of the crisis on households’ food security status. An ordinary least squares and a quantile regression were estimated to evaluate the effect of the financial crisis on a continuous proxy measure of food security defined as the share of a household’s current income devoted to food expenditures. Setting Both analyses were performed using pooled cross-sectional data from the Mexican National Household Income and Expenditure Survey 2008 and 2010. Subjects The analytical sample included 29 468 households in 2008 and 27 654 in 2010. Results The generalized ordered logistic model showed that the financial crisis significantly (P < 0·05) decreased the probability of being food secure, mildly or moderately food insecure, compared with being severely food insecure (OR = 0·74). A similar but smaller effect was found when comparing severely and moderately food-insecure households with mildly food-insecure and food-secure households (OR = 0·81). The ordinary least squares model showed that the crisis significantly (P < 0·05) increased the share of total income spent on food (β coefficient of 0·02). The quantile regression confirmed the findings suggested by the generalized ordered logistic model, showing that the effects of the crisis were more profound among poorer households. Conclusions The results suggest that households that were more vulnerable before the financial crisis saw a worsened effect in terms of food insecurity with the crisis. Findings were consistent with both measures of food security – one based on self-reported experience and the other based on food spending. PMID:25428800

  10. The impact of the 2008 financial crisis on food security and food expenditures in Mexico: a disproportionate effect on the vulnerable.

    PubMed

    Vilar-Compte, Mireya; Sandoval-Olascoaga, Sebastian; Bernal-Stuart, Ana; Shimoga, Sandhya; Vargas-Bustamante, Arturo

    2015-11-01

    The present paper investigated the impact of the 2008 financial crisis on food security in Mexico and how it disproportionally affected vulnerable households. A generalized ordered logistic regression was estimated to assess the impact of the crisis on households' food security status. An ordinary least squares and a quantile regression were estimated to evaluate the effect of the financial crisis on a continuous proxy measure of food security defined as the share of a household's current income devoted to food expenditures. Both analyses were performed using pooled cross-sectional data from the Mexican National Household Income and Expenditure Survey 2008 and 2010. The analytical sample included 29,468 households in 2008 and 27,654 in 2010. The generalized ordered logistic model showed that the financial crisis significantly (P<0·05) decreased the probability of being food secure, mildly or moderately food insecure, compared with being severely food insecure (OR=0·74). A similar but smaller effect was found when comparing severely and moderately food-insecure households with mildly food-insecure and food-secure households (OR=0·81). The ordinary least squares model showed that the crisis significantly (P<0·05) increased the share of total income spent on food (β coefficient of 0·02). The quantile regression confirmed the findings suggested by the generalized ordered logistic model, showing that the effects of the crisis were more profound among poorer households. The results suggest that households that were more vulnerable before the financial crisis saw a worsened effect in terms of food insecurity with the crisis. Findings were consistent with both measures of food security: one based on self-reported experience and the other based on food spending.

  11. A Comparison between the Use of Beta Weights and Structure Coefficients in Interpreting Regression Results

    ERIC Educational Resources Information Center

    Tong, Fuhui

    2006-01-01

    Background: An extensive body of research has favored the use of regression over other parametric analyses that are based on OVA. In the case of noteworthy regression results, researchers tend to explore the magnitude of beta weights for the respective predictors. Purpose: The purpose of this paper is to examine both beta weights and structure…

  12. Microarray image analysis: background estimation using quantile and morphological filters.

    PubMed

    Bengtsson, Anders; Bengtsson, Henrik

    2006-02-28

    In a microarray experiment the difference in expression between genes on the same slide is up to 10³-fold or more. At low expression, even a small error in the estimate will have great influence on the final test and reference ratios. In addition to the true spot intensity, the scanned signal consists of different kinds of noise referred to as background. In order to assess the true spot intensity, background must be subtracted. The standard approach to estimate background intensities is to assume they are equal to the intensity levels between spots. In the literature, morphological opening is suggested to be one of the best methods for estimating background this way. This paper examines fundamental properties of rank and quantile filters, which include morphological filters at the extremes, with focus on their ability to estimate between-spot intensity levels. The bias and variance of these filter estimates are driven by the number of background pixels used and their distributions. A new rank-filter algorithm is implemented and compared to methods available in Spot by CSIRO and GenePix Pro by Axon Instruments. Spot's morphological opening has a mean bias between -47 and -248 compared to a bias between 2 and -2 for the rank filter, and the variability of the morphological opening estimate is 3 times higher than for the rank filter. The mean bias of Spot's second method, morph.close.open, is between -5 and -16 and the variability is approximately the same as for morphological opening. The variability of GenePix Pro's region-based estimate is more than ten times higher than the variability of the rank-filter estimate and with slightly more bias. The large variability is because the size of the background window changes with spot size. To overcome this, a non-adaptive region-based method is implemented. Its bias and variability are comparable to that of the rank filter. The performance of more advanced rank filters is equal to the best region-based methods. 
However, in
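
    A one-dimensional sketch of the rank/quantile filter idea (the paper works on 2-D images; edge handling is simplified here and the data are illustrative):

    ```python
    def quantile_filter(signal, width, q):
        """Moving-window quantile filter. q = 0 is a running minimum
        (morphological erosion), q = 1 a running maximum (dilation);
        intermediate q trades bias against robustness to bright pixels."""
        half = width // 2
        out = []
        for i in range(len(signal)):
            window = sorted(signal[max(0, i - half): i + half + 1])
            idx = min(int(q * len(window)), len(window) - 1)
            out.append(window[idx])
        return out

    # A low quantile tracks the between-spot background under a bright spot.
    print(quantile_filter([1, 1, 1, 9, 1, 1, 1], width=3, q=0.0))  # all 1s
    ```

    Morphological opening corresponds to an erosion followed by a dilation of this kind, which is why its bias is so strongly negative compared with a mid-rank filter.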

  13. Alternative Statistical Frameworks for Student Growth Percentile Estimation

    ERIC Educational Resources Information Center

    Lockwood, J. R.; Castellano, Katherine E.

    2015-01-01

    This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…

  14. Effects of Individual Development Accounts (IDAs) on Household Wealth and Saving Taste

    ERIC Educational Resources Information Center

    Huang, Jin

    2010-01-01

    This study examines effects of individual development accounts (IDAs) on household wealth of low-income participants. Methods: This study uses longitudinal survey data from the American Dream Demonstration (ADD) involving experimental design (treatment group = 537, control group = 566). Results: Results from quantile regression analysis indicate…

  15. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  16. Regression Discontinuity for Causal Effect Estimation in Epidemiology.

    PubMed

    Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till

    Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold, exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of exposure itself utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules, such as treatments for low birth weight, hypertension or diabetes.
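
    In its simplest local form, the RD-ITT estimate described above reduces to a difference of mean outcomes within a bandwidth of the threshold. A toy sketch (real analyses use local regression, bandwidth selection, and sensitivity checks):

    ```python
    def rd_itt(assignment, outcome, threshold, bandwidth):
        """Difference in mean outcome just above vs. just below the threshold."""
        above = [y for x, y in zip(assignment, outcome)
                 if threshold <= x < threshold + bandwidth]
        below = [y for x, y in zip(assignment, outcome)
                 if threshold - bandwidth <= x < threshold]
        return sum(above) / len(above) - sum(below) / len(below)

    # Toy data: exposure assigned at x >= 4 shifts the outcome by 10.
    x = [2.0, 3.0, 4.0, 5.0]
    y = [1.0, 1.0, 11.0, 11.0]
    print(rd_itt(x, y, threshold=4.0, bandwidth=2.0))  # 10.0
    ```

    Scaling this ITT contrast by the jump in exposure probability at the threshold gives the instrumental-variable estimate of the effect of exposure itself.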

  17. Ecologic regression analysis and the study of the influence of air quality on mortality.

    PubMed Central

    Selvin, S; Merrill, D; Wong, L; Sacks, S T

    1984-01-01

    This presentation focuses entirely on the use and evaluation of regression analysis applied to ecologic data as a method to study the effects of ambient air pollution on mortality rates. Using extensive national data on mortality, air quality and socio-economic status, regression analyses are used to study the influence of air quality on mortality. The analytic methods and data are selected so that direct comparisons can be made with other ecologic regression studies of mortality and air quality. Analyses are performed using two types of geographic areas, age-specific mortality for both males and females, and three pollutants (total suspended particulates, sulfur dioxide and nitrogen dioxide). The overall results indicate no persuasive evidence of a link between air quality and general mortality levels. Additionally, a lack of consistency between the present results and previously published work is noted. Overall, it is concluded that linear regression analysis applied to nationally collected ecologic data cannot be used to usefully infer a causal relationship between air quality and mortality, a conclusion in direct contradiction to other major published studies. PMID:6734568

  18. Regression Analysis of Optical Coherence Tomography Disc Variables for Glaucoma Diagnosis.

    PubMed

    Richter, Grace M; Zhang, Xinbo; Tan, Ou; Francis, Brian A; Chopra, Vikas; Greenfield, David S; Varma, Rohit; Schuman, Joel S; Huang, David

    2016-08-01

    To report diagnostic accuracy of optical coherence tomography (OCT) disc variables using both time-domain (TD) and Fourier-domain (FD) OCT, and to improve the use of OCT disc variable measurements for glaucoma diagnosis through regression analyses that adjust for optic disc size and axial length-based magnification error. Observational, cross-sectional. In total, 180 normal eyes of 112 participants and 180 eyes of 138 participants with perimetric glaucoma from the Advanced Imaging for Glaucoma Study. Diagnostic variables evaluated from TD-OCT and FD-OCT were: disc area, rim area, rim volume, optic nerve head volume, vertical cup-to-disc ratio (CDR), and horizontal CDR. These were compared with overall retinal nerve fiber layer thickness and ganglion cell complex. Regression analyses were performed that corrected for optic disc size and axial length. Areas under the receiver operating characteristic curve (AUROC) were used to assess diagnostic accuracy before and after the adjustments. An index based on multiple logistic regression that combined optic disc variables with axial length was also explored with the aim of improving diagnostic accuracy of disc variables. Comparison of diagnostic accuracy of disc variables, as measured by AUROC. The unadjusted disc variables with the highest diagnostic accuracies were: rim volume for TD-OCT (AUROC=0.864) and vertical CDR (AUROC=0.874) for FD-OCT. Magnification correction significantly worsened diagnostic accuracy for rim variables, and while optic disc size adjustments partially restored diagnostic accuracy, the adjusted AUROCs were still lower. Axial length adjustments to disc variables in the form of multiple logistic regression indices led to a slight but insignificant improvement in diagnostic accuracy. Our various regression approaches were not able to significantly improve disc-based OCT glaucoma diagnosis. However, disc rim area and vertical CDR had very high diagnostic accuracy, and these disc variables can serve to complement

  19. Measuring racial/ethnic disparities across the distribution of health care expenditures.

    PubMed

    Cook, Benjamin Lê; Manning, Willard G

    2009-10-01

    To assess whether black-white and Hispanic-white disparities increase or abate in the upper quantiles of total health care expenditure, conditional on covariates. Nationally representative adult population of non-Hispanic whites, African Americans, and Hispanics from the 2001-2005 Medical Expenditure Panel Surveys. We examine unadjusted racial/ethnic differences across the distribution of expenditures. We apply quantile regression to measure disparities at the median, 75th, 90th, and 95th quantiles, testing for differences over the distribution of health care expenditures and across income and education categories. We test the sensitivity of the results to comparisons based only on health status and estimate a two-part model to ensure that results are not driven by an extremely skewed distribution of expenditures with a large zero mass. Black-white and Hispanic-white disparities diminish in the upper quantiles of expenditure, but expenditures for blacks and Hispanics remain significantly lower than for whites throughout the distribution. For most education and income categories, disparities exist at the median and decline, but remain significant even with increased education and income. Blacks and Hispanics receive significantly disparate care at high expenditure levels, suggesting prioritization of improved access to quality care among minorities with critical health issues.
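    Quantile regression of the kind used above replaces squared-error loss with the asymmetric pinball (check) loss, so that different parts of the conditional distribution can be fit separately. A numpy-only sketch on simulated heteroscedastic data (the subgradient optimizer and all numbers are illustrative; production work would use a dedicated solver):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_quantile(X, y, tau, iters=30000, lr=0.05):
    """Linear tau-th quantile regression via subgradient descent on the
    pinball (check) loss: mean(max(tau*r, (tau - 1)*r)), r = y - X @ beta."""
    n = len(y)
    beta = np.zeros(X.shape[1])
    for t in range(1, iters + 1):
        r = y - X @ beta
        g = -X.T @ np.where(r > 0, tau, tau - 1.0) / n   # subgradient
        beta -= (lr / np.sqrt(t)) * g                    # decaying step
    return beta

# Heteroscedastic data: the spread grows with x, so upper conditional
# quantiles have steeper slopes than the conditional median.
n = 2000
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + (0.5 + 0.6 * x) * rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), x])

b50 = fit_quantile(X, y, 0.50)   # conditional median fit
b90 = fit_quantile(X, y, 0.90)   # 90th-percentile fit: larger slope
```

    Fitting several quantiles of the same outcome, as in the disparities study above, reveals effects that an ordinary mean regression would average away.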

  20. Application of artificial neural network to fMRI regression analysis.

    PubMed

    Misaki, Masaya; Miyauchi, Satoru

    2006-01-15

    We used an artificial neural network (ANN) to detect correlations between event sequences and fMRI (functional magnetic resonance imaging) signals. The layered feed-forward neural network, given a series of events as inputs and the fMRI signal as a supervised signal, performed a non-linear regression analysis. This type of ANN is capable of approximating any continuous function, and thus this analysis method can detect any fMRI signals that correlated with corresponding events. Because of the flexible nature of ANNs, fitting to autocorrelation noise is a problem in fMRI analyses. We avoided this problem by using cross-validation and an early stopping procedure. The results showed that the ANN could detect various responses with different time courses. The simulation analysis also indicated an additional advantage of ANN over non-parametric methods in detecting parametrically modulated responses, i.e., it can detect various types of parametric modulations without a priori assumptions. The ANN regression analysis is therefore beneficial for exploratory fMRI analyses in detecting continuous changes in responses modulated by changes in input values.

  1. Structural vascular disease in Africans: Performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: The SABPA study.

    PubMed

    Botha, J; de Ridder, J H; Potgieter, J C; Steyn, H S; Malan, L

    2013-10-01

    A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fasting bloods (glucose, high density lipoprotein (HDL) and triglycerides) were obtained in a well-controlled setting. The RPWC male model (LR ROC AUC: 0.71, NN ROC AUC: 0.71) was practically equal to the JSC model (LR ROC AUC: 0.71, NN ROC AUC: 0.69) to predict structural vascular disease. Similarly, the female RPWC model (LR ROC AUC: 0.84, NN ROC AUC: 0.82) and JSC model (LR ROC AUC: 0.82, NN ROC AUC: 0.81) equally predicted CIMT as surrogate marker for structural vascular disease. Odds ratios supported validity where prediction of CIMT revealed clinical significance, well over 1, for both the JSC and RPWC models in African males and females (OR 3.75-13.98). In conclusion, the proposed RPWC model was substantially validated utilizing linear and non-linear analyses. We therefore propose ethnic-specific WC cut points (African males, ≥90 cm; females, ≥98 cm) to predict a surrogate marker for structural vascular disease. © J. A. Barth Verlag in Georg Thieme Verlag KG Stuttgart · New York.

  2. Yield gaps and yield relationships in US soybean production systems

    USDA-ARS?s Scientific Manuscript database

    The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield to meet the food demands of future populations. Quantile regression analysis was applied to county soybean [Glycine max (L.) Merrill] yields (1971 – 2011) from Kentuc...

  3. Examining the Reliability of Student Growth Percentiles Using Multidimensional IRT

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2015-01-01

    Student growth percentiles (SGPs, Betebenner, 2009) are used to locate a student's current score in a conditional distribution based on the student's past scores. Currently, following Betebenner (2009), quantile regression (QR) is most often used operationally to estimate the SGPs. Alternatively, multidimensional item response theory (MIRT) may…

  4. Separation in Logistic Regression: Causes, Consequences, and Control.

    PubMed

    Mansournia, Mohammad Ali; Geroldinger, Angelika; Greenland, Sander; Heinze, Georg

    2018-04-01

    Separation is encountered in regression models with a discrete outcome (such as logistic regression) where the covariates perfectly predict the outcome. It is most frequent under the same conditions that lead to small-sample and sparse-data bias, such as presence of a rare outcome, rare exposures, highly correlated covariates, or covariates with strong effects. In theory, separation will produce infinite estimates for some coefficients. In practice, however, separation may be unnoticed or mishandled because of software limits in recognizing and handling the problem and in notifying the user. We discuss causes of separation in logistic regression and describe how common software packages deal with it. We then describe methods that remove separation, focusing on the same penalized-likelihood techniques used to address more general sparse-data problems. These methods improve accuracy, avoid software problems, and allow interpretation as Bayesian analyses with weakly informative priors. We discuss likelihood penalties, including some that can be implemented easily with any software package, and their relative advantages and disadvantages. We provide an illustration of ideas and methods using data from a case-control study of contraceptive practices and urinary tract infection.
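    The mechanics of separation, and how a likelihood penalty removes it, can be sketched on toy data. The ridge (L2) penalty below is one simple stand-in for the penalized-likelihood methods discussed; the data and penalty strength are invented for illustration:

```python
import numpy as np

# Perfectly separated data: every x < 0 has y = 0 and every x > 0 has
# y = 1, so the unpenalized logistic MLE for the slope is infinite.
x = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

def fit_logistic(X, y, l2=0.0, iters=20000, lr=0.1):
    """Gradient ascent on the (optionally L2-penalized) log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * (X.T @ (y - p) - l2 * beta)
    return beta

b_mle = fit_logistic(X, y)          # slope keeps growing with iterations
b_pen = fit_logistic(X, y, l2=1.0)  # penalized fit converges to a finite value
```

    This mirrors the point in the abstract that separation may go unnoticed: the unpenalized fit simply stops wherever the software's iteration limit happens to be, reporting a huge but arbitrary coefficient.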

  5. Temporal Synchronization Analysis for Improving Regression Modeling of Fecal Indicator Bacteria Levels

    EPA Science Inventory

    Multiple linear regression models are often used to predict levels of fecal indicator bacteria (FIB) in recreational swimming waters based on independent variables (IVs) such as meteorologic, hydrodynamic, and water-quality measures. The IVs used for these analyses are traditiona...

  6. Socioeconomic determinants of childhood overweight and obesity in China: the long arm of institutional power

    PubMed Central

    Fu, Qiang; George, Linda K.

    2018-01-01

    Abstract Previous studies have widely reported that the association between socioeconomic status (SES) and childhood overweight and obesity in China is significant and positive, which lends little support to the fundamental-cause perspective. Using multiple waves (1997, 2000, 2004 and 2006) of the China Health and Nutrition Survey (CHNS) (N = 2,556, 2,063, 1,431 and 1,242, respectively) and continuous BMI cut-points obtained from a polynomial method, (mixed-effect) logistic regression analyses show that parental state-sector employment, an important yet overlooked indicator of political power during the market transformation, has changed from a risk factor for childhood overweight/obesity in 1997 to a protective factor in 2006. Results from quantile regression analyses support the same conclusions and demonstrate that the protective effect of parental state-sector employment at high percentiles of BMI is robust under different estimation strategies. By bridging the fundamental-cause perspective and theories of market transformation, this research not only documents the effect of political power on childhood overweight/obesity but also calls for the use of multifaceted, culturally relevant stratification measures in testing the fundamental-cause perspective across time and space. PMID:26178452

  7. Resource predictability and specialization in avian malaria parasites.

    PubMed

    Svensson-Coelho, Maria; Loiselle, Bette A; Blake, John G; Ricklefs, Robert E

    2016-09-01

    We tested the hypothesis that avian haemosporidian (malaria) parasites specialize on hosts that can be characterized as predictable resources at a site in Amazonian Ecuador. We incorporated host phylogenetic relationship and relative abundance in assessing parasite specialization, and we examined associations between parasite specialization and three host characteristics - abundance, mass and longevity - using quantile regression, phylogenetic logistic regression and t-tests. Hosts of specialist malaria parasite lineages were on average more abundant than hosts of generalist parasite lineages, but the relationship between host abundance and parasite specialization was not consistent across analyses. We also found support for a positive association between parasite specialization and host longevity, but this also was not consistent across analyses. Nonetheless, our findings suggest that the predictability of a host resource may play a role in the evolution of specialization. However, we also discuss two alternative explanations to the resource predictability hypothesis for specialization: (i) that interspecific interactions among the parasites themselves might constrain some parasites to a specialist strategy, and (ii) that frequent encounters with multiple host species, mediated by blood-sucking insects, might promote generalization within this system. © 2016 John Wiley & Sons Ltd.

  8. Functional mixture regression.

    PubMed

    Yao, Fang; Fu, Yuejiao; Lee, Thomas C M

    2011-04-01

    In functional linear models (FLMs), the relationship between the scalar response and the functional predictor process is often assumed to be identical for all subjects. Motivated by both practical and methodological considerations, we relax this assumption and propose a new class of functional regression models that allow the regression structure to vary for different groups of subjects. By projecting the predictor process onto its eigenspace, the new functional regression model is simplified to a framework that is similar to classical mixture regression models. This leads to the proposed approach, named functional mixture regression (FMR). The estimation of FMR can be readily carried out using existing software implemented for functional principal component analysis and mixture regression. The practical necessity and performance of FMR are illustrated through applications to a longevity analysis of female medflies and a human growth study. Theoretical investigations concerning the consistent estimation and prediction properties of FMR along with simulation experiments illustrating its empirical properties are presented in the supplementary material available at Biostatistics online. Corresponding results demonstrate that the proposed approach could potentially achieve substantial gains over traditional FLMs.

  9. Difficulties with Regression Analysis of Age-Adjusted Rates.

    DTIC Science & Technology

    1982-09-01

    variables used in those analyses, such as death rates in various states, have been age adjusted, whereas the predictor variables have not been age adjusted... The use of crude state death rates as the outcome variable with crude covariates and age as predictors can avoid the problem, at least under some... should be regressed on age-adjusted exposure Z+B+. Although age-specific death rates, Yas+’, may be available, it is often difficult to obtain age

  10. Birth by caesarean section and school performance in Swedish adolescents- a population-based study.

    PubMed

    Curran, Eileen A; Kenny, Louise C; Dalman, Christina; Kearney, Patricia M; Cryan, John F; Dinan, Timothy G; Khashan, Ali S

    2017-04-17

    Our objective was to assess the impact of obstetric mode of delivery, and in particular birth by Caesarean section (CS), on school performance in adolescents using a large, population-based cohort. We extracted data from the Swedish Medical Birth Register and National School Register. We included all live singleton births in Sweden from 1982-1995 (n = 1,489,925). School grades were reported on a scale from 0 to 320; scores less than 160 (i.e., below the "pass" level) were considered "poor school performance." Mode of delivery was categorised as: unassisted vaginal delivery (VD), assisted VD, elective CS and emergency CS. We measured the association between mode of delivery and "poor school performance" using logistic regression. We then used quantile regression to assess the association between mode of delivery and school performance across the distribution of scores. We adjusted for maternal age, parity, small and large for gestational age, gestational age, maternal country of birth, maternal depression, non-affective disorder or bipolar disorder, parental income at time of birth, and parental social welfare at time of birth. We also conducted sensitivity analyses to investigate the association further. With logistic regression analysis, the adjusted odds ratio (aOR) of assisted VD and poor school performance, compared to unassisted VD, was 1.06 (95% CI: 1.03-1.08). For elective CS it was 1.06 (95% CI: 1.03-1.09) and for emergency CS it was 1.12 (95% CI: 1.09-1.15). With quantile regression, assisted VD showed little difference in scores, when compared to unassisted VD, at any point across the distribution. Elective CS was associated with a 1-3 point decrease in scores, and emergency CS was associated with a 2-5 point decrease in scores. A slight association was found between birth by CS and school performance. However, the effect was quite small and, given the complex nature of the relationship, should be interpreted with caution.

  11. CO-occurring exposure to perchlorate, nitrate and thiocyanate alters thyroid function in healthy pregnant women

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horton, Megan K., E-mail: megan.horton@mssm.edu; Blount, Benjamin C.; Valentin-Blasini, Liza

    Background: Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have been shown individually to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies have examined the association between these individual exposures and thyroid function. Few studies have examined the effect of this chemical mixture on thyroid function during pregnancy. Objectives: We examined the cross-sectional association between urinary perchlorate, thiocyanate and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. Methods: We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples; perchlorate, thiocyanate, nitrate and iodide in urine samples collected from 284 pregnant women at 12 (±2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS, adjusting for gestational age, urinary iodide and creatinine. Results: Individual analyte concentrations in urine were significantly correlated (Spearman's r 0.4–0.5, p<0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. The WQS revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Conclusions: Co-exposure to perchlorate, nitrate and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. - Highlights: • Perchlorate, nitrate, thiocyanate and iodide measured in maternal urine. • Thyroid function (TSH and Free T4) measured in maternal
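    The WQS idea above, an index formed as a weighted sum of quantile-scored exposures with weights estimated on the unit simplex, can be sketched as follows. The data are simulated and the simple projected-gradient fit is a simplification: real WQS implementations typically estimate the weights over bootstrap samples with held-out validation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three simulated exposures, each scored into quartiles 0..3 as in WQS.
n, k = 1000, 3
raw = rng.normal(size=(n, k))
Q = np.column_stack(
    [np.searchsorted(np.quantile(raw[:, j], [0.25, 0.5, 0.75]), raw[:, j])
     for j in range(k)]
).astype(float)

# True model: the first exposure dominates the mixture effect.
w_true = np.array([0.7, 0.2, 0.1])
y = 1.0 + 2.0 * (Q @ w_true) + rng.normal(0, 0.5, n)

# Fit intercept a, overall mixture effect b, and simplex-constrained
# weights w by projected gradient descent on squared error.
a, b = 0.0, 0.0
w = np.full(k, 1.0 / k)
lr = 0.05
for _ in range(5000):
    e = a + b * (Q @ w) - y                 # residuals
    a -= lr * 2.0 * e.mean()
    b -= lr * 2.0 * (e * (Q @ w)).mean()
    w -= lr * 2.0 * b * (Q.T @ e) / n
    w = np.clip(w, 0.0, None)
    w /= w.sum()                            # project back onto the simplex
```

    The estimated weights indicate each exposure's contribution to the index, which is how the study above identified perchlorate as the dominant component.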

  12. Use of probabilistic weights to enhance linear regression myoelectric control

    NASA Astrophysics Data System (ADS)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts’ law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
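    A one-DOF, one-feature caricature of the probability weighting described above (the Gaussian class means, the stand-in regression, and all numbers are invented; the study used multivariate EMG features with fitted covariances):

```python
import numpy as np

# Gaussian class models for one DOF: no movement, positive, negative.
# In the study these were fit to EMG feature distributions; here the
# means and variance are chosen by hand for illustration.
MUS = {"rest": 0.0, "pos": 3.0, "neg": -3.0}
VAR = 0.5

def class_probs(f):
    """Posterior class probabilities under equal priors."""
    lik = {c: np.exp(-(f - mu) ** 2 / (2.0 * VAR)) for c, mu in MUS.items()}
    z = sum(lik.values())
    return {c: v / z for c, v in lik.items()}

def raw_output(f):
    """Stand-in linear regression mapping the feature to a velocity."""
    return 0.5 * f

def weighted_output(f):
    """Scale the regression output by the probability that the user
    intends to move this DOF (one minus the 'no movement' probability)."""
    return (1.0 - class_probs(f)["rest"]) * raw_output(f)
```

    At a small spurious feature value (user at rest), `raw_output(0.4)` still commands motion, while `weighted_output(0.4)` is suppressed to nearly zero; at a clear movement value such as `f = 3.0` the weight is near 1 and the two outputs agree, which is how the weighting prevents extraneous movement without sacrificing simultaneous control.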

  13. Quantile Mapping Bias correction for daily precipitation over Vietnam in a regional climate model

    NASA Astrophysics Data System (ADS)

    Trinh, L. T.; Matsumoto, J.; Ngo-Duc, T.

    2017-12-01

    In the past decades, Regional Climate Models (RCMs) have been developed significantly, allowing climate simulations to be conducted at higher resolutions. However, RCMs often contain biases when compared with observations. Therefore, statistical correction methods are commonly employed to reduce the model biases. In this study, outputs of the Regional Climate Model (RegCM) version 4.3 driven by the CNRM-CM5 global products were evaluated with and without the Quantile Mapping (QM) bias correction method. The model domain covered the area from 90°E to 145°E and from 15°S to 40°N with a horizontal resolution of 25 km. The QM bias correction was implemented by using the Vietnam Gridded precipitation dataset (VnGP) and the outputs of the RegCM historical run in the period 1986-1995, and then validated for the period 1996-2005. Based on statistical measures of spatial correlation and intensity distribution, the QM method showed a significant improvement in rainfall compared with the uncorrected simulations. The improvements in both time and space were recognized in all seasons and all climatic sub-regions of Vietnam. Moreover, not only the rainfall amount but also some extreme indices such as R10mm, R20mm, R50mm, CDD, CWD, R95pTOT and R99pTOT were much better after the correction. The results suggest that the QM correction method should be put into practice for projections of future precipitation over Vietnam.
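    Empirical quantile mapping of the kind applied here replaces each model value with the observed value at the same cumulative probability in the training period. A minimal sketch with simulated rainfall (the distributions and sample sizes are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Training period: the "model" rains too often and too lightly
# compared with the "observations" (both simulated here).
obs_train = rng.gamma(shape=0.6, scale=12.0, size=3000)  # observed daily rain
mod_train = rng.gamma(shape=0.9, scale=6.0, size=3000)   # biased model rain

def quantile_map(x, mod_ref, obs_ref):
    """Empirical quantile mapping: replace each model value with the
    observed value at the same non-exceedance probability."""
    p = np.searchsorted(np.sort(mod_ref), x, side="right") / len(mod_ref)
    return np.quantile(obs_ref, np.clip(p, 0.0, 1.0))

# Validation period drawn from the same biased model, then corrected.
mod_valid = rng.gamma(shape=0.9, scale=6.0, size=3000)
corrected = quantile_map(mod_valid, mod_train, obs_train)
```

    Because the mapping matches the whole distribution rather than just the mean, the corrected series inherits the observed intensity spectrum, including the heavy upper tail that drives extreme indices such as R95pTOT.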

  14. Calibration of limited-area ensemble precipitation forecasts for hydrological predictions

    NASA Astrophysics Data System (ADS)

    Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana

    2015-04-01

    The main objective of this study is to investigate the impact of calibration for limited-area ensemble precipitation forecasts, to be used for driving discharge predictions up to 5 days in advance. A reforecast dataset, which spans 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS) was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method seemed preferable because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade in cases where a limited training dataset is available. A sensitivity test on the length of the training dataset over which to perform the analog search has been performed. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not so strong for the available training dataset. A comparison between the calibration based on the deterministic reforecast and the calibration based on the full operational ensemble used as training dataset has been considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given that their computational cost is remarkable. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog

  15. Yield and yield gaps in central U.S. corn production systems

    USDA-ARS?s Scientific Manuscript database

    The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield. Quantile regression analysis was applied to county maize (Zea mays L.) yields (1972 – 2011) from Kentucky, Iowa and Nebraska (irrigated) (total of 115 counties) to e...

  16. The Effect of Family Background, University Quality and Educational Mismatch on Wage: An Analysis Using a Young Cohort of Italian Graduates

    ERIC Educational Resources Information Center

    Ordine, Patrizia; Rose, Giuseppe

    2015-01-01

    This paper analyzes the impact of university quality, family background and mismatch on the wages of young Italian graduates. An empirical analysis is undertaken using a representative sample of graduates merged with a dataset containing information on the characteristics of universities. By utilizing quantile regression techniques, some evidence…

  17. Genome-wide regression and prediction with the BGLR statistical package.

    PubMed

    Pérez, Paulino; de los Campos, Gustavo

    2014-10-01

    Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis. Copyright © 2014 by the Genetics Society of America.

  18. Modelling the behaviour of unemployment rates in the US over time and across space

    NASA Astrophysics Data System (ADS)

    Holmes, Mark J.; Otero, Jesús; Panagiotidis, Theodore

    2013-11-01

    This paper provides evidence that unemployment rates across US states are stationary and therefore behave according to the natural rate hypothesis. We provide new insights by considering the effect of key variables on the speed of adjustment associated with unemployment shocks. A high-dimensional VAR analysis of the half-lives associated with shocks to unemployment rates in pairs of states suggests that the distance between states and the vacancy rate exert a positive and a negative influence, respectively. We find that higher homeownership rates do not lead to longer half-lives. When the symmetry assumption is relaxed through quantile regression, support for the Oswald hypothesis, through a positive relationship between homeownership rates and half-lives, is found at the higher quantiles.

  19. Logistic Regression in the Identification of Hazards in Construction

    NASA Astrophysics Data System (ADS)

    Drozd, Wojciech

    2017-10-01

    The construction site and its elements create circumstances that are conducive to the formation of risks to safety during the execution of works. Analysis indicates the critical importance of these factors in the set of characteristics that describe the causes of accidents in the construction industry. This article attempts to analyse the characteristics related to the construction site in order to indicate their importance in defining the circumstances of accidents at work. The study includes sites inspected in 2014-2016 by the employees of the District Labour Inspectorate in Krakow (Poland). The analysed set of detailed (disaggregated) data includes both quantitative and qualitative characteristics. The substantive task focused on classification modelling to identify hazards in construction and to determine which of the analysed characteristics are important in an accident. Methodologically, the data were analysed with statistical classifiers in the form of logistic regression.

  20. A comparison of Cox and logistic regression for use in genome-wide association studies of cohort and case-cohort design.

    PubMed

    Staley, James R; Jones, Edmund; Kaptoge, Stephen; Butterworth, Adam S; Sweeting, Michael J; Wood, Angela M; Howson, Joanna M M

    2017-06-01

    Logistic regression is often used instead of Cox regression to analyse genome-wide association studies (GWAS) of single-nucleotide polymorphisms (SNPs) and disease outcomes with cohort and case-cohort designs, as it is less computationally expensive. Although Cox and logistic regression models have been compared previously in cohort studies, this work does not completely cover the GWAS setting nor extend to the case-cohort study design. Here, we evaluated Cox and logistic regression applied to cohort and case-cohort genetic association studies using simulated data and genetic data from the EPIC-CVD study. In the cohort setting, there was a modest improvement in power to detect SNP-disease associations using Cox regression compared with logistic regression, which increased as the disease incidence increased. In contrast, logistic regression had more power than (Prentice-weighted) Cox regression in the case-cohort setting. Logistic regression yielded inflated effect estimates (assuming the hazard ratio is the underlying measure of association) for both study designs, especially for SNPs with greater effect on disease. Given that logistic regression is substantially more computationally efficient than Cox regression in both settings, we propose a two-step approach to GWAS in cohort and case-cohort studies: first, analyse all SNPs with logistic regression to identify associated variants below a pre-defined P-value threshold; second, fit Cox regression (appropriately weighted in case-cohort studies) to those identified SNPs to ensure accurate estimation of the association with disease.

  1. A comparison of time dependent Cox regression, pooled logistic regression and cross sectional pooling with simulations and an application to the Framingham Heart Study.

    PubMed

    Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne

    2016-11-03

    Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow up. The Cox regression model has been used widely in the analyses of time to diagnosis or death from disease. The associations between the survival outcome and time dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time to event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time adjusted CSP provided identical results to the TDCM while the PLR showed larger parameter estimates compared to the time adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time adjusted PLR had a positive bias in the time dependent Age effect with reduced bias when the event rate is low. The PLR methods showed a negative bias in the Sex effect, a subject level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. 
We present results from the…

  2. Parametric regression model for survival data: Weibull regression model as an example

    PubMed Central

    2016-01-01

    The Weibull regression model is one of the most popular parametric regression models: it provides an estimate of the baseline hazard function as well as coefficients for covariates. Because of technical difficulties, the Weibull regression model is seldom used in the medical literature compared with the semi-parametric proportional hazards model. To familiarize clinical investigators with the Weibull regression model, this article introduces some basic knowledge of the model and then illustrates how to fit it with the R software. The SurvRegCensCov package is useful for converting estimated coefficients to clinically relevant statistics such as the hazard ratio (HR) and event time ratio (ETR). Model adequacy can be assessed by inspecting Kaplan-Meier curves stratified by categorical variables. The eha package provides an alternative way to fit the Weibull regression model, and its check.dist() function helps to assess the goodness of fit of the model. Variable selection is based on the importance of a covariate, which can be tested using the anova() function. Alternatively, backward elimination starting from a full model is an efficient way to develop a model. Visualizing the Weibull regression model after model development is also worthwhile, as it provides another way to report the findings. PMID:28149846
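For readers without R at hand, the coefficient conversion that the abstract attributes to the SurvRegCensCov package can be sketched in a few lines of Python. This sketch assumes the survreg-style accelerated failure time (log-time) parameterization, in which shape k = 1/scale, HR = exp(-beta/scale) and ETR = exp(beta); the coefficient and scale values below are hypothetical, not from the article:

```python
import math

def weibull_aft_to_hr_etr(coef, scale):
    """Convert a Weibull AFT coefficient (survreg-style, log-time scale)
    to a hazard ratio (HR) and an event time ratio (ETR).

    coef  : AFT regression coefficient for one covariate
    scale : scale parameter sigma (Weibull shape k = 1/sigma)
    """
    shape = 1.0 / scale
    hazard_ratio = math.exp(-coef * shape)   # HR = exp(-beta / sigma)
    event_time_ratio = math.exp(coef)        # ETR = exp(beta)
    return hazard_ratio, event_time_ratio

# Hypothetical fit: coefficient 0.5 on log-time, scale 0.8
hr, etr = weibull_aft_to_hr_etr(0.5, 0.8)
# A positive AFT coefficient lengthens event times (ETR > 1)
# and therefore lowers the hazard (HR < 1).
```

The sign flip and scaling are why the same covariate can look "protective" on the hazard scale while prolonging event times on the AFT scale.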

  3. Computational tools for exact conditional logistic regression.

    PubMed

    Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P

    Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can be computationally infeasible when the sample size or the number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. For these moderately large data sets, we find that Monte Carlo sampling is the better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage: it produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.

  4. Evaluating Differential Effects Using Regression Interactions and Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung

    2015-01-01

    Research increasingly emphasizes understanding differential effects. This article focuses on understanding regression mixture models, which are relatively new statistical methods for assessing differential effects, by comparing their results with those obtained using an interaction term in linear regression. The research questions each model answers, their…

  5. Utility-Based Instruments for People with Dementia: A Systematic Review and Meta-Regression Analysis.

    PubMed

    Li, Li; Nguyen, Kim-Huong; Comans, Tracy; Scuffham, Paul

    2018-04-01

    Several utility-based instruments have been applied in cost-utility analyses to assess health state values for people with dementia. Nevertheless, concerns and uncertainty regarding their performance in this population have been raised. The aims were to assess the performance of available utility-based instruments for people with dementia by comparing their psychometric properties, and to explore factors that cause variations in the reported health state values generated by those instruments through meta-regression analyses. A literature search was conducted and psychometric properties were synthesized to demonstrate the overall performance of each instrument. When available, health state values and variables such as the type of instrument and the level of cognitive impairment were extracted from each article. A meta-regression analysis was undertaken with the available covariates included in the models. A total of 64 studies providing preference-based values were identified and included. The EuroQol five-dimension questionnaire demonstrated the best combination of feasibility, reliability, and validity. The meta-regression analyses suggested that significant differences exist between instruments, types of respondents, and modes of administration, and that the resulting variations in estimated utility values influence incremental quality-adjusted life-year calculations. This review finds that the EuroQol five-dimension questionnaire is the most valid utility-based instrument for people with dementia, but that it should be replaced by others under certain circumstances. Although no pooled utility estimates are reported in this article, the meta-regression analyses show that variations in the utility estimates produced by different instruments have an impact on cost-utility analysis, potentially altering the decision-making process in some circumstances. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. 78 FR 16808 - Connect America Fund; High-Cost Universal Service Support

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-19

    ... to use one regression to generate a single cap on total loop costs for each study area. A single cap.... * * * A preferable, and simpler, approach would be to develop one conditional quantile model for aggregate.... Total universal service support for such carriers was approaching $2 billion annually--more than 40...

  7. Heterogenous Effects of Sports Participation on Education and Labor Market Outcomes

    ERIC Educational Resources Information Center

    Gorry, Devon

    2016-01-01

    This paper analyzes the distribution of education and labor market benefits from sports participation. Results show that effects are similar across gender, but differ on other dimensions. In particular, participants in team sports show greater gains than those in individual sports. Quantile regressions show that educational gains are larger for…

  8. Genetic parameters for growth characteristics of free-range chickens under univariate random regression models.

    PubMed

    Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B

    2016-09-01

    Repeated measures from the same individual have been analyzed by using repeatability and finite dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned to each of six subclasses of age at measurement. Random regression curves were modeled by using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike Information Criterion, Bayesian Information Criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for the validation of the random regression models. The Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h² = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at any of these ages can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in the 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that…

  9. Differences in nutrient and energy contents of commonly consumed dishes prepared in restaurants v. at home in Hunan Province, China.

    PubMed

    Jia, Xiaofang; Liu, Jiawu; Chen, Bo; Jin, Donghui; Fu, Zhongxi; Liu, Huilin; Du, Shufa; Popkin, Barry M; Mendez, Michelle A

    2018-05-01

    Eating away from home is associated with poor diet quality, in part due to less healthy food choices and larger portions. However, few studies account for the potential additional contribution of differences in food composition between restaurant- and home-prepared dishes. The present study aimed to investigate differences in nutrients of dishes prepared in restaurants v. at home. Eight commonly consumed dishes were collected in twenty of each of the following types of locations: small and large restaurants, and urban and rural households. In addition, two fast-food items were collected from ten KFC, McDonald's and food stalls. Five samples per dish were randomly pooled from every location. Nutrients were analysed and energy was calculated in composite samples. Differences in nutrients of dishes by preparation location were determined. Hunan Province, China. Na, K, protein, total fat, fatty acids, carbohydrate and energy in dishes. On average, both the absolute and relative fat contents, SFA and Na:K ratio were higher in dishes prepared in restaurants than households (P < 0·05). Protein was 15 % higher in animal food-based dishes prepared in households than restaurants (P<0·05). Quantile regression models found that, at the 90th quantile, restaurant preparation was consistently negatively associated with protein and positively associated with the percentage of energy from fat in all dishes. Moreover, restaurant preparation also positively influenced the SFA content in dishes, except at the highest quantiles. These findings suggest that compared with home preparation, dishes prepared in restaurants in China may differ in concentrations of total fat, SFA, protein and Na:K ratio, which may further contribute, beyond food choices, to less healthy nutrient intakes linked to eating away from home.

  10. Health care expenditures among working-age adults with physical disabilities: variations by disability spans.

    PubMed

    Pumkam, Chaiporn; Probst, Janice C; Bennett, Kevin J; Hardin, James; Xirasagar, Sudha

    2013-10-01

    Data on health care costs for working-age adults with physical disabilities are sparse and the dynamic nature of disability is not captured. To assess the effect of 3 types of disability status (persistent disability, temporary disability, and no disability) on health care expenditures, out-of-pocket (OOP) spending, and financial burden. Data from Medical Expenditure Panel Survey panel 12 (2007-2008) were used. Respondents were classified into 3 groups. Medians of average annual expenditures, OOP expenditures, and financial ratios were weighted. The package R was used for quantile regression analyses. Fifteen percent of the working-age population reported persistent disabilities and 7% had temporary disabilities. The persistent disability group had the greatest unadjusted annual medians for total expenditures ($4234), OOP expenses ($591), and financial burden ratios (1.59), followed by the temporary disability group ($1612, $388, 0.71 respectively). The persistent disability group paid approximately 15% of total health care expenditures out-of-pocket, while the temporary disability group and the no disability group each paid 22% out-of-pocket. After adjusting for other factors, quantile regression shows that the persistent disability group had significantly higher total expenditures, OOP expenses, and financial burden ratios (coefficients 1664, 156, 0.58 respectively) relative to the no disability group at the 50th percentile. Results for the temporary disability group show a similar trend except for OOP expenses. People who have disabling conditions for a longer period have better financial protection against OOP health care expenses but face greater financial burdens because of their higher out-of-pocket expenditures and their socioeconomic disadvantages. Copyright © 2013 Elsevier Inc. All rights reserved.
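The quantile regression analyses used here (and in several other records in this listing) minimize the Koenker-Bassett asymmetric absolute-error ("check" or "pinball") loss rather than squared error. A minimal stdlib-Python sketch for the intercept-only case, with made-up expenditure values (not the MEPS data), shows that the minimizer of the check loss is simply the sample quantile:

```python
def check_loss(residuals, tau):
    """Koenker-Bassett check (pinball) loss at quantile level tau."""
    return sum(tau * r if r >= 0 else (tau - 1) * r for r in residuals)

def fit_quantile(y, tau):
    """Intercept-only quantile regression: the loss is piecewise linear
    and convex in the intercept, so its minimum occurs at one of the
    observed values, which a brute-force search can find."""
    return min(y, key=lambda c: check_loss([v - c for v in y], tau))

# Hypothetical annual expenditures ($): heavily right-skewed, as is typical
spending = [120, 150, 180, 400, 950, 1600, 4200]
median = fit_quantile(spending, 0.5)   # 50th percentile, as in the study
p90    = fit_quantile(spending, 0.9)   # upper tail of the distribution
```

Fitting several values of tau (as the study does at the 50th percentile, and as other records do at the 90th) traces out how covariate effects differ across the outcome distribution, which a single mean regression cannot show.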

  11. Network effects across the earnings distribution: payoffs to visible and invisible job finding assistance.

    PubMed

    McDonald, Steve

    2015-01-01

    This study makes three critical contributions to the "Do Contacts Matter?" debate. First, the widely reported null relationship between informal job searching and wages is shown to be mostly the artifact of a coding error and sample selection restrictions. Second, previous analyses examined only active informal job searching without fully considering the benefits derived from unsolicited network assistance (the "invisible hand of social capital") - thereby underestimating the network effect. Third, wage returns to networks are examined across the earnings distribution. Longitudinal data from the NLSY reveal significant wage returns for network-based job finding over formal job searching, especially for individuals who were informally recruited into their jobs (non-searchers). Fixed effects quantile regression analyses show that contacts generate wage premiums among middle and high wage jobs, but not low wage jobs. These findings challenge conventional wisdom on contact effects and advance understanding of how social networks affect wage attainment and inequality. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Drop-Weight Impact Test on U-Shape Concrete Specimens with Statistical and Regression Analyses

    PubMed Central

    Zhu, Xue-Chao; Zhu, Han; Li, Hao-Ran

    2015-01-01

    According to the principle and method of the drop-weight impact test, the impact resistance of concrete was measured using self-designed U-shape specimens and a newly designed drop-weight impact test apparatus. A series of drop-weight impact tests were carried out with four different masses of drop hammer (0.875, 0.8, 0.675 and 0.5 kg). The test results show that the impact resistance results fail to follow a normal distribution. As expected, the U-shaped specimens predetermine the location of the cracks very well, and they also make it easy to record crack propagation during the test. The maximum coefficient of variation in this study is 31.2%, lower than the values obtained from the American Concrete Institute (ACI) impact tests in the literature. Regression analysis shows a good linear relationship between the first-crack and ultimate-failure impact resistance. It can be suggested that a minimum number of specimens is required to reliably measure the properties of the material, based on the observed levels of variation. PMID:28793540

  13. Practical Guidance for Conducting Mediation Analysis With Multiple Mediators Using Inverse Odds Ratio Weighting

    PubMed Central

    Nguyen, Quynh C.; Osypuk, Theresa L.; Schmidt, Nicole M.; Glymour, M. Maria; Tchetgen Tchetgen, Eric J.

    2015-01-01

    Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship between the exposure (treatment) and multiple mediators, conditional on covariates, by regressing exposure on mediators and covariates. The inverse of the covariate-adjusted exposure-mediator odds ratio association is used to weight the primary analytical regression of the outcome on treatment. The treatment coefficient in such a weighted regression estimates the natural direct effect of treatment on the outcome, and indirect effects are identified by subtracting direct effects from total effects. Weighting renders treatment and mediators independent, thereby deactivating indirect pathways of the mediators. This new mediation technique accommodates multiple discrete or continuous mediators. IORW is easily implemented and is appropriate for any standard regression model, including quantile regression and survival analysis. An empirical example is given using data from the Moving to Opportunity (1994–2002) experiment, testing whether neighborhood context mediated the effects of a housing voucher program on obesity. Relevant Stata code (StataCorp LP, College Station, Texas) is provided. PMID:25693776

  14. Height premium for job performance.

    PubMed

    Kim, Tae Hyun; Han, Euna

    2017-08-01

    This study assessed the relationship of height with wages, using the 1998 and 2012 Korean Labor and Income Panel Study data. The key independent variable was height measured in centimeters, which was included as a series of dummy indicators of height per 5cm span (<155cm, 155-160cm, 160-165cm, and ≥165cm for women; <165cm, 165-170cm, 170-175cm, 175-180cm, and ≥180cm for men). We controlled for household- and individual-level random effects. We used a random-effect quantile regression model for monthly wages to assess the heterogeneity in the height-wage relationship, across the conditional distribution of monthly wages. We found a non-linear relationship of height with monthly wages. For men, the magnitude of the height wage premium was overall larger at the upper quantile of the conditional distribution of log monthly wages than at the median to low quantile, particularly in professional and semi-professional occupations. The height-wage premium was also larger at the 90th quantile for self-employed women and salaried men. Our findings add a global dimension to the existing evidence on height-wage premium, demonstrating non-linearity in the association between height and wages and heterogeneous changes in the dispersion and direction of the association between height and wages, by wage level. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Regional estimation of extreme suspended sediment concentrations using watershed characteristics

    NASA Astrophysics Data System (ADS)

    Tramblay, Yves; Ouarda, Taha B. M. J.; St-Hilaire, André; Poulin, Jimmy

    2010-01-01

    The number of stations monitoring daily suspended sediment concentration (SSC) has been decreasing since the 1980s in North America, while suspended sediment is considered a key variable for water quality. The objective of this study is to test the feasibility of regionalising extreme SSC, i.e. estimating extreme SSC values for ungauged basins. Annual maximum SSC for 72 rivers in Canada and the USA were modelled with probability distributions in order to estimate quantiles corresponding to different return periods. Regionalisation techniques, originally developed for flood prediction in ungauged basins, were tested using the climatic, topographic, land cover and soil attributes of the watersheds. Two approaches were compared, using either physiographic characteristics or the seasonality of extreme SSC to delineate the regions. Multiple regression models to estimate SSC quantiles as a function of watershed characteristics were built in each region and compared to a global model including all sites. Regional estimates of SSC quantiles were compared with the local values. Results show that regional estimation of extreme SSC is more efficient than a global regression model including all sites. Groups/regions of stations were identified, using either the watershed characteristics or the seasonality of occurrence of extreme SSC values, providing a method to better describe extreme SSC events. The most important variables for predicting extreme SSC are the percentage of clay in the soils, precipitation intensity and forest cover.
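The abstract's first step, fitting a probability distribution to annual maxima and reading off quantiles for given return periods, can be sketched in a few lines. The abstract does not name the distribution used, so the Gumbel (EV1) distribution, a common choice for annual maxima, is assumed here, and the location/scale values are hypothetical:

```python
import math

def gumbel_quantile(mu, beta, return_period):
    """Quantile of the Gumbel (EV1) distribution for return period T,
    i.e. the value exceeded on average once every T years:
    x_T = mu - beta * ln(-ln(1 - 1/T))."""
    p = 1.0 - 1.0 / return_period          # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Hypothetical location/scale fitted to annual-maximum SSC (mg/L)
mu, beta = 850.0, 240.0
q10  = gumbel_quantile(mu, beta, 10)    # 10-year event
q100 = gumbel_quantile(mu, beta, 100)   # 100-year event
```

The regionalisation step then regresses such quantiles on watershed characteristics (clay percentage, precipitation intensity, forest cover) so they can be predicted at ungauged sites.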

  16. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    ERIC Educational Resources Information Center

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though the ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

  17. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    PubMed

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

    Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet the requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated the assumptions of linear regression) reduced R² from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
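The binomial model the authors describe can be sketched directly: under the null of random selection, the number of medicinal species in a family of n species is Binomial(n, p), where p is the flora-wide medicinal proportion. A stdlib-Python illustration with hypothetical numbers (not the Shuar data):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function P(X = k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def upper_tail_pvalue(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least
    k medicinal species in a family of n species under random selection."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

# Hypothetical flora: 20% of all species are medicinal overall;
# one family contributes 15 medicinal species out of 30.
p_flora = 0.20
pval = upper_tail_pvalue(15, 30, p_flora)
# A small p-value flags the family as over-represented.
```

Running the test family by family, as the authors do for 115 angiosperm families, identifies the handful of families driving flora-wide patterns without the distributional assumptions of the regression/residual approach.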

  18. Covariate Measurement Error Correction for Student Growth Percentiles Using the SIMEX Method

    ERIC Educational Resources Information Center

    Shang, Yi; VanIwaarden, Adam; Betebenner, Damian W.

    2015-01-01

    In this study, we examined the impact of covariate measurement error (ME) on the estimation of quantile regression and student growth percentiles (SGPs), and find that SGPs tend to be overestimated among students with higher prior achievement and underestimated among those with lower prior achievement, a problem we describe as ME endogeneity in…

  19. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    PubMed

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUC(inf)) of dalbavancin is a key parameter, and the AUC(inf)/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUC(inf) relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUC(inf) were made from published Cmax data by applying the regression equations. The quotient of observed/predicted values gave the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUC(inf) exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with an RMSE < 10.3%. The external data evaluation showed that the models predicted AUC(inf) with an RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single-time-point strategy using Cmax (i.e. end of the 30-min infusion) is amenable as a prospective tool for predicting the AUC(inf) of dalbavancin in patients.
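Of the models the abstract compares, the power model amounts to ordinary least squares on the log-log scale: AUC = A * Cmax^b becomes log(AUC) = log(A) + b*log(Cmax). A minimal stdlib-Python sketch; the Cmax/AUC pairs below are made up for illustration, not the study's 21 subject pairs:

```python
import math

def ols(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical (Cmax in mg/L-like units, AUCinf) pairs
cmax = [180, 220, 260, 310, 350]
auc  = [9500, 12200, 14800, 18300, 21000]

# Power model: fit on log-log scale, predict on the original scale
a, b = ols([math.log(c) for c in cmax], [math.log(v) for v in auc])

def predict_auc(c):
    return math.exp(a + b * math.log(c))
```

The linear and log-linear variants in the abstract differ only in which of the two axes is log-transformed before the same least-squares fit.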

  20. NCCS Regression Test Harness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tharrington, Arnold N.

    2015-09-09

    The NCCS Regression Test Harness is a software package that provides a framework to perform regression and acceptance testing on NCCS High Performance Computers. The package is written in Python and has only the dependency of a Subversion repository to store the regression tests.

  1. Factors Affecting Regression-Discontinuity.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.

    The regression-discontinuity approach to evaluating educational programs is reviewed, and regression-discontinuity post-program mean differences under various conditions are discussed. The regression-discontinuity design is used to determine whether post-program differences exist between an experimental program and a control group. The difference…

  2. Understanding poisson regression.

    PubMed

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
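A first diagnostic for the overdispersion the abstract mentions is the variance-to-mean ratio, which equals 1 under a Poisson model. A stdlib-Python sketch with hypothetical count data (not the ENSPIRE data):

```python
def dispersion_ratio(counts):
    """Variance-to-mean ratio of count data. Under a Poisson model the
    ratio is about 1; values well above 1 suggest overdispersion and
    point toward a quasi-Poisson or negative binomial regression."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

# Hypothetical event counts per patient (a few heavy users inflate variance)
falls = [0, 1, 0, 2, 1, 0, 7, 0, 1, 9, 0, 3]
ratio = dispersion_ratio(falls)   # well above 1 here
```

In a full regression setting the analogous check is the Pearson chi-square statistic divided by its residual degrees of freedom, but the raw ratio above is often enough to flag that plain Poisson standard errors will be too small.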

  3. Time series regression model for infectious disease and weather.

    PubMed

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Regression Model for Light Weight and Crashworthiness Enhancement Design of Automotive Parts in Frontal Car Crash

    NASA Astrophysics Data System (ADS)

    Bae, Gihyun; Huh, Hoon; Park, Sungho

    This paper deals with a regression model for light weight and crashworthiness enhancement design of automotive parts in a frontal car crash. The ULSAB-AVC model is employed for the crash analysis, and effective parts are selected based on the amount of energy absorbed during the crash behavior. Finite element analyses are carried out for the designated design cases in order to investigate the crashworthiness and weight according to the material and thickness of the main energy-absorbing parts. Based on the simulation results, a regression analysis is performed to construct a regression model for light weight and crashworthiness enhancement design of automotive parts. An example of weight reduction of the main energy-absorbing parts demonstrates the validity of the constructed regression model.

  5. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    PubMed

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for a turnaround time (TAT) indicator, to estimate its values, and to use a numerical clustering technique to find possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. Multiple linear regression and clustering techniques were used to build the TAT predictor and to improve corrective maintenance task efficiency in a clinical engineering department (CED). The variables contributing to the model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
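
    The fitted model reduces to a linear combination of the four predictors. A minimal sketch using the coefficients reported above; the intercept is not reported, so it is assumed to be zero here purely for illustration, and the input values are hypothetical:

```python
# Coefficients reported in the abstract; the intercept is not given, so it is
# assumed to be zero for this sketch.
COEFFS = {"ce_rt": 0.415, "stock_rt": 0.734, "priority": 0.21, "service": 0.06}

def predict_tat(ce_rt, stock_rt, priority, service):
    """Linear TAT prediction from the four reported predictors (sketch)."""
    return (COEFFS["ce_rt"] * ce_rt + COEFFS["stock_rt"] * stock_rt
            + COEFFS["priority"] * priority + COEFFS["service"] * service)

# Hypothetical work order: 2 h CED response, 5 h stock response,
# priority level 3, 10 h service time.
tat = predict_tat(2, 5, 3, 10)
```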

  6. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    PubMed

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
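
    For contrast with the single-model framework, the classical Baron-Kenny-style product-of-coefficients computation the abstract refers to can be sketched as follows. The data are synthetic, all variable names are illustrative, and the outcome is generated without noise so the fitted paths are exact.

```python
def two_var_ols(x, m, y):
    """OLS of y on (x, m) via centered normal equations (Cramer's rule)."""
    n = len(y)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    xc = [v - mx for v in x]
    mc = [v - mm for v in m]
    yc = [v - my for v in y]
    sxx = sum(v * v for v in xc)
    smm = sum(v * v for v in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    det = sxx * smm - sxm * sxm
    return (sxy * smm - smy * sxm) / det, (smy * sxx - sxy * sxm) / det

def simple_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Synthetic exposure x, mediator m, outcome y; a little jitter in m keeps
# x and m from being perfectly collinear.
x = [1, 2, 3, 4, 5, 6, 7, 8]
m = [2 * v + j for v, j in zip(x, [0.5, -0.5] * 4)]
y = [1 * a + 3 * b for a, b in zip(x, m)]       # direct effect 1, b-path 3

a_path = simple_slope(x, m)                     # x -> m
direct, b_path = two_var_ols(x, m, y)           # y ~ x + m
indirect = a_path * b_path                      # product of coefficients
```

    This requires fitting two separate regressions; the framework described in the abstract obtains the same causal mediation effects and their variance from a single model fit.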

  7. Effective behaviour change techniques for physical activity and healthy eating in overweight and obese adults; systematic review and meta-regression analyses.

    PubMed

    Samdal, Gro Beate; Eide, Geir Egil; Barth, Tom; Williams, Geoffrey; Meland, Eivind

    2017-03-28

    This systematic review aims to explain the heterogeneity in results of interventions to promote physical activity and healthy eating for overweight and obese adults, by exploring the differential effects of behaviour change techniques (BCTs) and other intervention characteristics. The inclusion criteria specified RCTs with ≥ 12 weeks' duration, from January 2007 to October 2014, for adults (mean age ≥ 40 years, mean BMI ≥ 30). Primary outcomes were measures of healthy diet or physical activity. Two reviewers rated study quality, coded the BCTs, and collected outcome results at short (≤6 months) and long term (≥12 months). Meta-analyses and meta-regressions were used to estimate effect sizes (ES), heterogeneity indices (I²) and regression coefficients. We included 48 studies containing a total of 82 outcome reports. The 32 long-term reports had an overall ES = 0.24 with 95% confidence interval (CI): 0.15 to 0.33 and I² = 59.4%. The 50 short-term reports had an ES = 0.37 with 95% CI: 0.26 to 0.48, and I² = 71.3%. The number of BCTs unique to the intervention group, and the BCTs goal setting and self-monitoring of behaviour, predicted the effect at short and long term. The total number of BCTs in both intervention arms and using the BCTs goal setting of outcome, feedback on outcome of behaviour, implementing graded tasks, and adding objects to the environment, e.g. using a step counter, significantly predicted the effect at long term. Setting a goal for change and the presence of reporting bias independently explained 58.8% of inter-study variation at short term. Autonomy-supportive and person-centred methods as in Motivational Interviewing, the BCTs goal setting of behaviour, and receiving feedback on the outcome of behaviour, explained all of the between-study variation in effects at long term. There are similarities, but also differences in effective BCTs promoting change in healthy eating and physical activity and

  8. Multicollinearity in spatial genetics: separating the wheat from the chaff using commonality analyses.

    PubMed

    Prunier, J G; Colyn, M; Legendre, X; Nimon, K F; Flamand, M C

    2015-01-01

    Direct gradient analyses in spatial genetics provide unique opportunities to describe the inherent complexity of genetic variation in wildlife species and are the object of many methodological developments. However, multicollinearity among explanatory variables is a systemic issue in multivariate regression analyses and is likely to cause serious difficulties in properly interpreting results of direct gradient analyses, with the risk of erroneous conclusions, misdirected research and inefficient or counterproductive conservation measures. Using simulated data sets along with linear and logistic regressions on distance matrices, we illustrate how commonality analysis (CA), a detailed variance-partitioning procedure that was recently introduced in the field of ecology, can be used to deal with nonindependence among spatial predictors. By decomposing model fit indices into unique and common (or shared) variance components, CA allows identifying the location and magnitude of multicollinearity, revealing spurious correlations and thus thoroughly improving the interpretation of multivariate regressions. Despite a few inherent limitations, especially in the case of resistance model optimization, this review highlights the great potential of CA to account for complex multicollinearity patterns in spatial genetics and identifies future applications and lines of research. We strongly urge spatial geneticists to systematically investigate commonalities when performing direct gradient analyses. © 2014 John Wiley & Sons Ltd.
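
    For the two-predictor case, the variance partitioning behind commonality analysis can be sketched directly from pairwise correlations. This is a minimal illustration, not the full multi-predictor CA procedure; the data below are constructed so the predictors are orthogonal, making the common component zero.

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

def commonality_two_predictors(x1, x2, y):
    """Unique and common R^2 components for a two-predictor linear model."""
    r1, r2, r12 = pearson(x1, y), pearson(x2, y), pearson(x1, x2)
    r2_full = (r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2)
    unique1 = r2_full - r2 ** 2      # variance explained only by x1
    unique2 = r2_full - r1 ** 2      # variance explained only by x2
    common = r2_full - unique1 - unique2   # shared (can be negative)
    return r2_full, unique1, unique2, common

# Hypothetical orthogonal predictors: each explains half of y, nothing shared.
x1 = [1, -1, 1, -1]
x2 = [1, 1, -1, -1]
y = [a + b for a, b in zip(x1, x2)]
r2_full, u1, u2, common = commonality_two_predictors(x1, x2, y)
```

    With collinear predictors the common component grows, which is exactly the pattern CA uses to locate multicollinearity and spurious correlations.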

  9. Student Growth Percentiles Based on MIRT: Implications of Calibrated Projection. CRESST Report 842

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li; Choi, Kilchan

    2014-01-01

    This research concerns a new proposal for calculating student growth percentiles (SGP; Betebenner, 2009). In Betebenner (2009), quantile regression (QR) is used to estimate the SGPs. However, measurement error in the score estimates, which always exists in practice, leads to bias in the QR-based estimates (Shang, 2012). One way to address this…

  10. Gender Gap in Mathematics and Physics in Chinese Middle Schools: A Case Study of a Beijing District

    ERIC Educational Resources Information Center

    Li, Manli; Zhang, Yu; Wang, Yihan

    2017-01-01

    This study examines the gender gaps in mathematics and physics in Chinese middle schools. The data are from the Education Bureau management database, which includes all middle school students who took the high school entrance exam in a district of Beijing from 2006 to 2013. Ordinary least squares and quantile regression models are applied. This…

  11. Gender Gap in the National College Entrance Exam Performance in China: A Case Study of a Typical Chinese Municipality

    ERIC Educational Resources Information Center

    Zhang, Yu; Tsang, Mun

    2015-01-01

    This is one of the first studies to investigate the gender achievement gap in the National College Entrance Exam in a typical municipality in China, the crucial examination for the transition from high school to higher education in that country. Using ordinary least squares and quantile regression models, the study consistently finds that…

  12. Adult Literacy, Heterogeneity and Returns to Schooling in Chile

    ERIC Educational Resources Information Center

    Patrinos, Harry Anthony; Sakellariou, Chris

    2015-01-01

    We examine the importance of adult functional literacy skills for individuals using a quantile regression methodology. The inclusion of the direct measure of basic skills reduces the return to schooling by 27%, equivalent to two additional years of schooling, while a one standard deviation increase in the score increases earnings by 20%. For those…

  13. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.

  14. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  15. Methods for estimating selected low-flow statistics and development of annual flow-duration statistics for Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Kula, Stephanie P.

    2013-01-01

    This report presents the results of a study to develop methods for estimating selected low-flow statistics and for determining annual flow-duration statistics for Ohio streams. Regression techniques were used to develop equations for estimating 10-year recurrence-interval (10-percent annual-nonexceedance probability) low-flow yields, in cubic feet per second per square mile, with averaging periods of 1, 7, 30, and 90 days, and for estimating the yield corresponding to the long-term 80-percent duration flow. These equations, which estimate low-flow yields as a function of a streamflow-variability index, are based on previously published low-flow statistics for 79 long-term continuous-record streamgages with at least 10 years of data collected through water year 1997. When applied to the calibration dataset, average absolute percent errors for the regression equations ranged from 15.8 to 42.0 percent. The regression results have been incorporated into the U.S. Geological Survey (USGS) StreamStats application for Ohio (http://water.usgs.gov/osw/streamstats/ohio.html) in the form of a yield grid to facilitate estimation of the corresponding streamflow statistics in cubic feet per second. Logistic-regression equations also were developed and incorporated into the USGS StreamStats application for Ohio for selected low-flow statistics to help identify occurrences of zero-valued statistics. Quantiles of daily and 7-day mean streamflows were determined for annual and annual-seasonal (September–November) periods for each complete climatic year of streamflow-gaging station record for 110 selected streamflow-gaging stations with 20 or more years of record. The quantiles determined for each climatic year were the 99-, 98-, 95-, 90-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 10-, 5-, 2-, and 1-percent exceedance streamflows. Selected exceedance percentiles of the annual-exceedance percentiles were subsequently computed and tabulated to help facilitate consideration of the
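
    A p-percent exceedance streamflow is simply the (100 − p)th percentile of the flow record. A minimal sketch with hypothetical daily flows; linear interpolation between order statistics is one of several plausible conventions, not necessarily the one used in the report.

```python
def exceedance_flow(flows, p):
    """Flow exceeded p percent of the time (empirical, linear interpolation)."""
    s = sorted(flows)
    # The p-percent exceedance flow is the (100 - p)th percentile.
    rank = (100 - p) / 100 * (len(s) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(s) - 1)
    frac = rank - lo
    return s[lo] + frac * (s[hi] - s[lo])

# Hypothetical daily mean flows (cubic feet per second) for one climatic year.
flows = [12.0, 3.0, 7.0, 30.0, 18.0, 5.0, 9.0, 22.0, 14.0, 4.0, 6.0]

q80 = exceedance_flow(flows, 80)   # low-flow statistic: exceeded 80% of days
q50 = exceedance_flow(flows, 50)   # median flow
```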

  16. Selecting risk factors: a comparison of discriminant analysis, logistic regression and Cox's regression model using data from the Tromsø Heart Study.

    PubMed

    Brenn, T; Arnesen, E

    1985-01-01

    For comparative evaluation, discriminant analysis, logistic regression and Cox's model were used to select risk factors for total and coronary deaths among 6595 men aged 20-49 followed for 9 years. Groups with mortality between 5 and 93 per 1000 were considered. Discriminant analysis selected variable sets only marginally different from those of the logistic and Cox methods, which always selected the same sets. A time-saving option, offered for both the logistic and Cox selection, showed no advantage compared with discriminant analysis. When analysing more than 3800 subjects, the logistic and Cox methods consumed, respectively, 80 and 10 times more computer time than discriminant analysis. When the same set of variables was included in non-stepwise analyses, all methods estimated coefficients that in most cases were almost identical. In conclusion, discriminant analysis is advocated for preliminary or stepwise analysis; otherwise, Cox's method should be used.

  17. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    NASA Astrophysics Data System (ADS)

    Mehran, A.; AghaKouchak, A.; Phillips, T. J.

    2014-02-01

    The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies and biases for both entire distributions and their upper tails. The results of the volumetric hit index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas but that their replication of observed precipitation over arid regions and certain subcontinental regions (e.g., northern Eurasia, eastern Russia, and central Australia) is problematic. Overall, the VHI of the multimodel ensemble mean and median is also superior to that of the individual CMIP5 models. However, at high quantiles of the reference data (75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and Central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g., western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, intermodel variations in bias over Australia and Amazonia are considerable. The quantile bias analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. It is found that a simple mean-field bias removal improves the overall B and VHI values but does not yield a significant improvement at high quantiles of precipitation.

  18. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
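
    The estimation steps the chapter describes reduce to a few sums. A minimal sketch with made-up, exactly linear data (so r = 1 and the fit is exact); variable names are illustrative only.

```python
import math

def corr_and_fit(x, y):
    """Pearson correlation plus least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    slope = sxy / sxx
    intercept = my - slope * mx
    return r, slope, intercept

# Hypothetical optical-density readings versus incubation time.
time_h = [1, 2, 3, 4, 5]
od = [0.9, 1.1, 1.3, 1.5, 1.7]   # exactly linear: od = 0.7 + 0.2 * time
r, slope, intercept = corr_and_fit(time_h, od)
```

    The squared correlation r² equals the regression R², which is the connection to ANOVA the chapter highlights: it is the fraction of the outcome's variance explained by the fitted line.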

  19. Trends of VOC exposures among a nationally representative sample: Analysis of the NHANES 1988 through 2004 data sets

    PubMed Central

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2015-01-01

    Exposures to volatile organic compounds (VOCs) are ubiquitous due to emissions from personal, commercial and industrial products, but quantitative and representative information regarding long term exposure trends is lacking. This study characterizes trends from 1988 to 2004 for the 15 VOCs measured in blood in five cohorts of the National Health and Nutrition Examination Survey (NHANES), a large and representative sample of U.S. adults. Trends were evaluated at various percentiles using linear quantile regression (QR) models, which were adjusted for solvent-related occupations and cotinine levels. Most VOCs showed decreasing trends at all quantiles, e.g., median exposures declined by 2.5 (m, p-xylene) to 6.4 (tetrachloroethene) percent per year over the 15 year period. Trends varied by VOC and quantile, and were grouped into three patterns: similar decreases at all quantiles (including benzene, toluene); most rapid decreases at upper quantiles (ethylbenzene, m, p-xylene, o-xylene, styrene, chloroform, tetrachloroethene); and fastest declines at central quantiles (1,4-dichlorobenzene). These patterns reflect changes in exposure sources, e.g., upper-percentile exposures may result mostly from occupational exposure, while lower percentile exposures arise from general environmental sources. Both VOC emissions aggregated at the national level and VOC concentrations measured in ambient air also have declined substantially over the study period and are supportive of the exposure trends, although the NHANES data suggest the importance of indoor sources and personal activities on VOC exposures. While piecewise QR models suggest that exposures to several VOCs decreased little, if at all, during the 1990s, followed by more rapid decreases from 1999 to 2004, questions are raised concerning the reliability of VOC data in several of the NHANES cohorts and its applicability as an exposure indicator, as demonstrated by the modest correlation between VOC levels in blood and personal
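
    Quantile regression rests on the check (pinball) loss: the constant that minimizes total check loss over a sample is a sample τ-quantile, and a full QR model replaces the constant with a linear function of covariates (here, survey year). A minimal illustration of the loss with hypothetical data:

```python
def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss for residual u at quantile tau."""
    return tau * u if u >= 0 else (tau - 1) * u

def best_constant(data, tau):
    """The constant minimizing total check loss is a sample tau-quantile."""
    candidates = sorted(data)
    return min(candidates,
               key=lambda c: sum(check_loss(y - c, tau) for y in data))

data = list(range(1, 11))          # hypothetical observations 1..10
q90 = best_constant(data, 0.9)     # close to the 90th percentile
q50 = best_constant(data, 0.5)     # a sample median
```

    Fitting separate lines for, say, τ = 0.5 and τ = 0.9 is what lets the study report different trends at median and upper-percentile exposures.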

  20. Trends of VOC exposures among a nationally representative sample: Analysis of the NHANES 1988 through 2004 data sets

    NASA Astrophysics Data System (ADS)

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2011-09-01

    Exposures to volatile organic compounds (VOCs) are ubiquitous due to emissions from personal, commercial and industrial products, but quantitative and representative information regarding long term exposure trends is lacking. This study characterizes trends from 1988 to 2004 for the 15 VOCs measured in blood in five cohorts of the National Health and Nutrition Examination Survey (NHANES), a large and representative sample of U.S. adults. Trends were evaluated at various percentiles using linear quantile regression (QR) models, which were adjusted for solvent-related occupations and cotinine levels. Most VOCs showed decreasing trends at all quantiles, e.g., median exposures declined by 2.5 (m,p-xylene) to 6.4 (tetrachloroethene) percent per year over the 15 year period. Trends varied by VOC and quantile, and were grouped into three patterns: similar decreases at all quantiles (including benzene, toluene); most rapid decreases at upper quantiles (ethylbenzene, m,p-xylene, o-xylene, styrene, chloroform, tetrachloroethene); and fastest declines at central quantiles (1,4-dichlorobenzene). These patterns reflect changes in exposure sources, e.g., upper-percentile exposures may result mostly from occupational exposure, while lower percentile exposures arise from general environmental sources. Both VOC emissions aggregated at the national level and VOC concentrations measured in ambient air also have declined substantially over the study period and are supportive of the exposure trends, although the NHANES data suggest the importance of indoor sources and personal activities on VOC exposures. While piecewise QR models suggest that exposures to several VOCs decreased little, if at all, during the 1990s, followed by more rapid decreases from 1999 to 2004, questions are raised concerning the reliability of VOC data in several of the NHANES cohorts and its applicability as an exposure indicator, as demonstrated by the modest correlation between VOC levels in blood and personal air

  1. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis

    PubMed Central

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Background: Unwanted pregnancy, i.e. pregnancy not intended by at least one of the parents, has undesirable consequences for the family and society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. Methods: In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by stratified and cluster sampling; relevant variables were measured, and logistic regression, discriminant analysis, and probit regression models were fitted using SPSS software version 21 to predict unwanted pregnancy. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. Results: The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Conclusion: Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended. PMID:26793655

  2. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    PubMed

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy, i.e. pregnancy not intended by at least one of the parents, has undesirable consequences for the family and society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by stratified and cluster sampling; relevant variables were measured, and logistic regression, discriminant analysis, and probit regression models were fitted using SPSS software version 21 to predict unwanted pregnancy. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.
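
    The area under the ROC curve used to compare the models equals the probability that a randomly chosen case outscores a randomly chosen control (the Mann-Whitney formulation). A minimal sketch with hypothetical predicted probabilities:

```python
def roc_auc(labels, scores):
    """AUC as the probability a random positive outscores a random negative
    (Mann-Whitney formulation; ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities of unwanted pregnancy (1 = unwanted).
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
auc = roc_auc(labels, scores)
```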

  3. Socioeconomic determinants of childhood overweight and obesity in China: the long arm of institutional power.

    PubMed

    Fu, Qiang; George, Linda K

    2015-07-01

    Previous studies have widely reported that the association between socioeconomic status (SES) and childhood overweight and obesity in China is significant and positive, which lends little support to the fundamental-cause perspective. Using multiple waves (1997, 2000, 2004 and 2006) of the China Health and Nutrition Survey (CHNS) (N = 2,556, 2,063, 1,431 and 1,242, respectively) and continuous BMI cut-points obtained from a polynomial method, (mixed-effect) logistic regression analyses show that parental state-sector employment, an important yet overlooked indicator of political power during the market transformation, has changed from a risk factor for childhood overweight/obesity in 1997 to a protective factor in 2006. Results from quantile regression analyses support the same conclusions and demonstrate that the protective effect of parental state-sector employment at high percentiles of BMI is robust under different estimation strategies. By bridging the fundamental-cause perspective and theories of market transformation, this research not only documents the effect of political power on childhood overweight/obesity but also calls for the use of multifaceted, culturally relevant stratification measures in testing the fundamental-cause perspective across time and space. © 2015 Foundation for the Sociology of Health & Illness.

  4. Bootstrap investigation of the stability of a Cox regression model.

    PubMed

    Altman, D G; Andersen, P K

    1989-07-01

    We describe a bootstrap investigation of the stability of a Cox proportional hazards regression model resulting from the analysis of a clinical trial of azathioprine versus placebo in patients with primary biliary cirrhosis. We have considered stability to refer both to the choice of variables included in the model and, more importantly, to the predictive ability of the model. In stepwise Cox regression analyses of 100 bootstrap samples using 17 candidate variables, the most frequently selected variables were those selected in the original analysis, and no other important variable was identified. Thus there was no reason to doubt the model obtained in the original analysis. For each patient in the trial, bootstrap confidence intervals were constructed for the estimated probability of surviving two years. It is shown graphically that these intervals are markedly wider than those obtained from the original model.
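
    The percentile-bootstrap confidence intervals described above can be sketched generically. The survival probabilities below are hypothetical, and the statistic here is a simple mean rather than a Cox-model prediction; the same resampling scaffold applies to either.

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for any statistic (sketch)."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data])
                  for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical two-year survival probabilities estimated for 12 patients.
probs = [0.81, 0.62, 0.75, 0.90, 0.55, 0.68, 0.77, 0.83, 0.71, 0.60, 0.88, 0.66]
mean = sum(probs) / len(probs)
lo, hi = bootstrap_ci(probs, lambda xs: sum(xs) / len(xs))
```

    For model stability, as in the paper, each bootstrap replicate would instead refit the whole stepwise Cox regression and record which variables were selected.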

  5. Error Covariance Penalized Regression: A novel multivariate model combining penalized regression with multivariate error structure.

    PubMed

    Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C

    2018-06-29

    A new multivariate regression model, named Error Covariance Penalized Regression (ECPR), is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid- and near-infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Estimating tree crown widths for the primary Acadian species in Maine

    Treesearch

    Matthew B. Russell; Aaron R. Weiskittel

    2012-01-01

    In this analysis, data for seven conifer and eight hardwood species were gathered from across the state of Maine for estimating tree crown widths. Maximum and largest crown width equations were developed using tree diameter at breast height as the primary predicting variable. Quantile regression techniques were used to estimate the maximum crown width and a constrained...

  7. Establishing Normative Reference Values for Handgrip among Hungarian Youth

    ERIC Educational Resources Information Center

    Saint-Maurice, Pedro F.; Laurson, Kelly R.; Karsai, István; Kaj, Mónika; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to examine age- and sex-related variation in handgrip strength and to determine reference values for the Hungarian population. Method: A sample of 1,086 Hungarian youth (aged 11-18 years; 654 boys and 432 girls) completed a handgrip strength assessment using a handheld dynamometer. Quantile regression was…

  8. Declining annual streamflow distributions in the Pacific Northwest United States, 1948-2006

    Treesearch

    C. H. Luce; Z. A. Holden

    2009-01-01

    Much of the discussion on climate change and water in the western United States centers on decreased snowpack and earlier spring runoff. Although increasing variability in annual flows has been noted, the nature of those changes is largely unexplored. We tested for trends in the distribution of annual runoff using quantile regression at 43 gages in the Pacific...

  9. Orthogonal Regression: A Teaching Perspective

    ERIC Educational Resources Information Center

    Carr, James R.

    2012-01-01

    A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…

  10. Across the continuum of satisfaction with work-family balance: Work hours, flexibility-fit, and work-family culture.

    PubMed

    McNamara, Tay K; Pitt-Catsouphes, Marcie; Matz-Costa, Christina; Brown, Melissa; Valcour, Monique

    2013-03-01

    This study investigated the association between hours worked per week and satisfaction with work-family balance, using data from a 2007-2008 survey of employees nested within organizations. We tested hypotheses informed by the resource drain and resources-and-demands perspectives using quantile regression. We found that the negative association between hours worked per week and satisfaction with work-family balance was significantly stronger at the 25th percentile than at the 75th percentile of satisfaction with work-family balance. Further, there was some evidence that perceived flexibility-fit (i.e., the fit between worker needs and flexible work options available) and supportive work-family culture attenuated the relationship between hours worked and satisfaction with work-family balance. The results suggest that analyses focusing on the average relationship between long work hours (such as those using ordinary least squares regression) and satisfaction with work-family balance may underestimate the importance of long work hours for workers with lower satisfaction levels. Copyright © 2012 Elsevier Inc. All rights reserved.
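The quantile-regression machinery used in studies like this one rests on the asymmetric absolute-error ("pinball") loss. As an illustrative sketch (not code from the study), the one-sample special case shows why: minimizing that loss over a constant recovers an empirical quantile of the data. Function names are ours:

```python
def pinball_loss(tau, y, q):
    """Asymmetric absolute ('check') loss minimized by quantile regression:
    residuals above q are weighted tau, residuals below it (1 - tau)."""
    return sum(tau * (yi - q) if yi >= q else (1.0 - tau) * (q - yi) for yi in y)

def fitted_quantile(tau, y):
    """One-sample case: the constant minimizing the pinball loss is an
    empirical tau-quantile (searching over the data points suffices)."""
    return min(y, key=lambda q: pinball_loss(tau, y, q))
```

In a full quantile regression, the constant q is replaced by a linear predictor and the same loss is minimized over its coefficients, typically by linear programming; fitting at tau = 0.25 and tau = 0.75 gives the two conditional percentiles contrasted in the abstract.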

  11. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.

    PubMed

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson's statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 Chinese regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
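The contrast the abstract draws can be sketched in a few lines (illustrative code, not the paper's actual statistics): the Durbin-Watson statistic depends on the ordering of the residual sequence, whereas a Moran-type index replaces ordering with a spatial weight matrix:

```python
def durbin_watson(e):
    """Classic DW statistic on an ordered residual sequence; its value
    changes if the observations are reshuffled."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    return num / sum(et * et for et in e)

def residual_moran(e, W):
    """Moran-type autocorrelation of residuals: I = (n / S0) * z'Wz / z'z,
    where W is a spatial weight matrix, S0 its total weight, and z the
    centered residuals; invariant to relabeling applied to both e and W."""
    n = len(e)
    mean = sum(e) / n
    z = [ei - mean for ei in e]
    s0 = sum(sum(row) for row in W)
    num = sum(W[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    return n * num / (s0 * sum(zi * zi for zi in z))
```

With perfectly alternating residuals the DW statistic approaches 4 (negative serial correlation), while spatially clustered residuals push the Moran-type index toward +1.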

  12. Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate

    NASA Astrophysics Data System (ADS)

    Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno

    2017-03-01

    This paper presents statistical analyses of rock engineering properties and the measured penetration rate of a tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of the TBM. Finally, a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict TBM performance. The fuzzy logic model achieves the highest R-squared value (R2 = 0.714), compared with 0.667 for the runner-up, the multiple-variable nonlinear regression model.

  14. Association between response rates and survival outcomes in patients with newly diagnosed multiple myeloma. A systematic review and meta-regression analysis.

    PubMed

    Mainou, Maria; Madenidou, Anastasia-Vasiliki; Liakos, Aris; Paschos, Paschalis; Karagiannis, Thomas; Bekiari, Eleni; Vlachaki, Efthymia; Wang, Zhen; Murad, Mohammad Hassan; Kumar, Shaji; Tsapas, Apostolos

    2017-06-01

    We performed a systematic review and meta-regression analysis of randomized control trials to investigate the association between response to initial treatment and survival outcomes in patients with newly diagnosed multiple myeloma (MM). Response outcomes included complete response (CR) and the combined outcome of CR or very good partial response (VGPR), while survival outcomes were overall survival (OS) and progression-free survival (PFS). We used random-effect meta-regression models and conducted sensitivity analyses based on definition of CR and study quality. Seventy-two trials were included in the systematic review, 63 of which contributed data in meta-regression analyses. There was no association between OS and CR in patients without autologous stem cell transplant (ASCT) (regression coefficient: .02, 95% confidence interval [CI] -0.06, 0.10), in patients undergoing ASCT (-.11, 95% CI -0.44, 0.22) and in trials comparing ASCT with non-ASCT patients (.04, 95% CI -0.29, 0.38). Similarly, OS did not correlate with the combined metric of CR or VGPR, and no association was evident between response outcomes and PFS. Sensitivity analyses yielded similar results. This meta-regression analysis suggests that there is no association between conventional response outcomes and survival in patients with newly diagnosed MM. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Logic regression and its extensions.

    PubMed

    Schwender, Holger; Ruczinski, Ingo

    2010-01-01

    Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
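The core search that logic regression performs, over Boolean combinations of binary predictors, can be illustrated with a toy exhaustive version (the real method uses an adaptive simulated-annealing search over full logic trees; names and scope here are ours, for illustration only):

```python
from itertools import combinations

def best_logic_rule(X, y):
    """Score every two-variable Boolean rule (AND / OR) of the binary
    predictors against a binary outcome and return the tuple
    (errors, i, j, op) for the rule with fewest misclassifications.
    A brute-force stand-in for logic regression's adaptive search."""
    ops = {"and": lambda a, b: a & b, "or": lambda a, b: a | b}
    best = None
    for i, j in combinations(range(len(X[0])), 2):
        for name, op in ops.items():
            errs = sum(op(row[i], row[j]) != yi for row, yi in zip(X, y))
            if best is None or errs < best[0]:
                best = (errs, i, j, name)
    return best
```

Embedding such Boolean expressions in a generalized linear model, as the chapter describes, is what lets the method handle binary, numeric, and time-to-event outcomes alike.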

  16. Analysis of Palm Oil Production, Export, and Government Consumption to Gross Domestic Product of Five Districts in West Kalimantan by Panel Regression

    NASA Astrophysics Data System (ADS)

    Sulistianingsih, E.; Kiftiah, M.; Rosadi, D.; Wahyuni, H.

    2017-04-01

    Gross Domestic Product (GDP) is an indicator of economic growth in a region. GDP forms panel data, consisting of cross-section and time-series components, and panel regression is a tool for analysing such data. There are three models in panel regression, namely the Common Effect Model (CEM), the Fixed Effect Model (FEM) and the Random Effect Model (REM); the model is chosen based on the results of the Chow test, Hausman test and Lagrange multiplier test. This research uses panel regression to analyse the effect of palm oil production, export, and government consumption on the GDP of five districts in West Kalimantan, namely Sanggau, Sintang, Sambas, Ketapang and Bengkayang. Based on the results of the analyses, it is concluded that REM, whose adjusted determination coefficient is 0.823, is the best model in this case. Moreover, according to the results, only export and government consumption influence the GDP of the districts.
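The fixed-effect idea behind the FEM candidate can be sketched for a single regressor (an illustrative "within" estimator in plain Python, not the study's code; the CEM/FEM/REM selection via the Chow, Hausman and Lagrange multiplier tests is not reproduced here):

```python
from collections import defaultdict

def within_slope(groups, x, y):
    """Fixed-effect (within) estimator for one regressor: demean x and y
    inside each cross-sectional unit to sweep out unit-specific
    intercepts, then run pooled OLS through the origin on the result."""
    members = defaultdict(list)
    for k, g in enumerate(groups):
        members[g].append(k)
    xd, yd = [], []
    for idx in members.values():
        mx = sum(x[k] for k in idx) / len(idx)
        my = sum(y[k] for k in idx) / len(idx)
        xd.extend(x[k] - mx for k in idx)
        yd.extend(y[k] - my for k in idx)
    return sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)
```

Because the demeaning removes each unit's intercept, the estimator recovers a common slope even when units (here, districts) differ sharply in level.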

  17. Flood Change Assessment and Attribution in Austrian alpine Basins

    NASA Astrophysics Data System (ADS)

    Claps, Pierluigi; Allamano, Paola; Como, Anastasia; Viglione, Alberto

    2016-04-01

    The present paper investigates the sensitivity of flood peaks to global warming in Austrian alpine basins. A group of 97 Austrian watersheds, with areas ranging from 14 to 6000 km2 and average elevations ranging from 1000 to 2900 m a.s.l., has been considered. Annual maximum floods are available for the basins from 1890 to 2007 with two densities of observation: until 1950, an average of 42 contemporary flood-peak records is available; from 1951 to 2007 the density of observation increases to an average of 85 contemporary peaks. This is important with reference to the statistical tool used for the empirical assessment of change over time, namely linear quantile regression. Application of this tool to the data set reveals trends in extreme events, confirmed by statistical testing, for the 0.75 and 0.95 empirical quantiles. All applications use specific discharge (discharge/area) values. Similarly to a previous approach, multiple quantile regressions have also been applied, confirming the presence of trends even when the possible interference of specific discharge with morphoclimatic parameters (i.e., mean elevation and catchment area) is accounted for. Application of the geomorphoclimatic model of Allamano et al. (2009) makes it possible to assess to what extent the empirically observed increases in air temperature and annual rainfall can justify the attribution of the change detected by the empirical statistical tools. A comparison with data from Swiss alpine basins treated in a previous paper is finally undertaken.

  18. Reporting and methodological quality of meta-analyses in urological literature.

    PubMed

    Xia, Leilei; Xu, Jing; Guzzo, Thomas J

    2017-01-01

    To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality, we systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed with the PRISMA checklist (27 items) and the AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analysis) had less than 50% adherence. AMSTAR item 1 ("a priori" design), item 5 (list of studies) and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and an "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and an "a priori" design were associated with superior methodological quality. Reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol.

  19. Distinct patterns of desynchronized limb regression in malagasy scincine lizards (squamata, scincidae).

    PubMed

    Miralles, Aurélien; Hipsley, Christy A; Erens, Jesse; Gehara, Marcelo; Rakotoarison, Andolalao; Glaw, Frank; Müller, Johannes; Vences, Miguel

    2015-01-01

    Scincine lizards in Madagascar form an endemic clade of about 60 species exhibiting a variety of ecomorphological adaptations. Several subclades have adapted to burrowing and convergently regressed their limbs and eyes, resulting in a variety of partial and completely limbless morphologies among extant taxa. However, patterns of limb regression in these taxa have not been studied in detail. Here we fill this gap in knowledge by providing a phylogenetic analysis of DNA sequences of three mitochondrial and four nuclear gene fragments in an extended sampling of Malagasy skinks, and microtomographic analyses of osteology of various burrowing taxa adapted to sand substrate. Based on our data we propose to (i) consider Sirenoscincus Sakata & Hikida, 2003, as junior synonym of Voeltzkowia Boettger, 1893; (ii) resurrect the genus name Grandidierina Mocquard, 1894, for four species previously included in Voeltzkowia; and (iii) consider Androngo Brygoo, 1982, as junior synonym of Pygomeles Grandidier, 1867. By supporting the clade consisting of the limbless Voeltzkowia mira and the forelimb-only taxa V. mobydick and V. yamagishii, our data indicate that full regression of limbs and eyes occurred in parallel twice in the genus Voeltzkowia (as hitherto defined) that we consider as a sand-swimming ecomorph: in the Voeltzkowia clade sensu stricto the regression first affected the hindlimbs and subsequently the forelimbs, whereas the Grandidierina clade first regressed the forelimbs and subsequently the hindlimbs following the pattern prevalent in squamates. Timetree reconstructions for the Malagasy Scincidae contain a substantial amount of uncertainty due to the absence of suitable primary fossil calibrations. However, our preliminary reconstructions suggest rapid limb regression in Malagasy scincids with an estimated maximal duration of 6 MYr for a complete regression in Paracontias, and 4 and 8 MYr respectively for complete regression of forelimbs in Grandidierina and

  1. Effect of threatening life experiences and adverse family relations in ulcerative colitis: analysis using structural equation modeling and comparison with Crohn's disease.

    PubMed

    Slonim-Nevo, Vered; Sarid, Orly; Friger, Michael; Schwartz, Doron; Sergienko, Ruslan; Pereg, Avihu; Vardi, Hillel; Singer, Terri; Chernin, Elena; Greenberg, Dan; Odes, Shmuel

    2017-05-01

    We published that threatening life experiences and adverse family relations impact Crohn's disease (CD) adversely. In this study, we examine the influence of these stressors in ulcerative colitis (UC). Patients completed demography, economic status (ES), the Patient-Simple Clinical Colitis Activity Index (P-SCCAI), the Short Inflammatory Bowel Disease Questionnaire (SIBDQ), the Short-Form Health Survey (SF-36), the Brief Symptom Inventory (BSI), the Family Assessment Device (FAD), and the List of Threatening Life Experiences (LTE). Analysis included multiple linear and quantile regressions and structural equation modeling, with comparison to CD. UC patients (N=148, age 47.55±16.04 years, 50.6% women) had scores [median (interquartile range)] as follows: SCAAI, 2 (0.3-4.8); FAD, 1.8 (1.3-2.2); LTE, 1.0 (0-2.0); SF-36 Physical Health, 49.4 (36.8-55.1); SF-36 Mental Health, 45 (33.6-54.5); Brief Symptom Inventory-Global Severity Index (GSI), 0.5 (0.2-1.0). SIBDQ was 49.76±14.91. There were significant positive associations for LTE and SCAAI (25, 50, 75% quantiles), FAD and SF-36 Mental Health, FAD and LTE with GSI (50, 75, 90% quantiles), and ES with SF-36 and SIBDQ. The negative associations were as follows: LTE with SF-36 Physical/Mental Health, SIBDQ with FAD and LTE, ES with GSI (all quantiles), and P-SCCAI (75, 90% quantiles). In structural equation modeling analysis, LTE impacted ES negatively and ES impacted GSI negatively; LTE impacted GSI positively and GSI impacted P-SCCAI positively. In a split model, ES had a greater effect on GSI in UC than in CD, whereas other path magnitudes were similar. Threatening life experiences, adverse family relations, and poor ES make UC patients less healthy both physically and mentally. The impact of ES is worse in UC than in CD.

  2. On the use of log-transformation vs. nonlinear regression for analyzing biological power laws.

    PubMed

    Xiao, Xiao; White, Ethan P; Hooten, Mevin B; Durham, Susan L

    2011-10-01

    Power-law relationships are among the most well-studied functional relationships in biology. Recently the common practice of fitting power laws using linear regression (LR) on log-transformed data has been criticized, calling into question the conclusions of hundreds of studies. It has been suggested that nonlinear regression (NLR) is preferable, but no rigorous comparison of these two methods has been conducted. Using Monte Carlo simulations, we demonstrate that the error distribution determines which method performs better, with NLR better characterizing data with additive, homoscedastic, normal error and LR better characterizing data with multiplicative, heteroscedastic, lognormal error. Analysis of 471 biological power laws shows that both forms of error occur in nature. While previous analyses based on log-transformation appear to be generally valid, future analyses should choose methods based on a combination of biological plausibility and analysis of the error distribution. We provide detailed guidelines and associated computer code for doing so, including a model averaging approach for cases where the error structure is uncertain.
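The LR approach the abstract discusses, fitting y = a·x^b by least squares on log-transformed data, is easy to sketch; the simulation below (illustrative, not the authors' code or guidelines) uses the multiplicative, heteroscedastic, lognormal error regime in which LR is the better-performing method:

```python
import math
import random

def loglog_fit(x, y):
    """Fit the power law y = a * x**b by OLS on (log x, log y);
    appropriate when the error is multiplicative and lognormal."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Simulated data: y = 3 * x**0.75 with multiplicative lognormal noise.
random.seed(1)
xs = [float(i) for i in range(1, 201)]
ys = [3.0 * xi ** 0.75 * math.exp(random.gauss(0.0, 0.1)) for xi in xs]
a_hat, b_hat = loglog_fit(xs, ys)
```

Under additive, homoscedastic, normal error the same data-generating check would instead favor nonlinear regression on the original scale, which is the crux of the authors' recommendation to inspect the error structure first.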

  3. Diffusion patterns of new anti-diabetic drugs into hospitals in Taiwan: the case of Thiazolidinediones for diabetes

    PubMed Central

    2011-01-01

    Background Diffusion of new drugs in the health care market affects patients' access to new treatment options and health care expenditures. We examined how a new drug class for diabetes mellitus, thiazolidinediones (TZDs), diffused in the health care market in Taiwan. Methods Assuming that monthly hospital prescriptions of TZDs could serve as a micro-market to perform drug penetration studies, we retrieved monthly TZD prescription data for 580 hospitals in Taiwan from Taiwan's National Health Insurance Research Database for the period between March 1, 2001 and December 31, 2005. Three diffusion parameters, time to adoption, speed of penetration (monthly growth on prescriptions), and peak penetration (maximum monthly prescription) were evaluated. Cox proportional hazards model and quantile regressions were estimated for analyses on the diffusion parameters. Results Prior hospital-level pharmaceutical prescription concentration significantly deterred the adoption of the new drug class (HR: 0.02, 95%CI = 0.01 to 0.04). Adoption of TZDs was slower in district hospitals (HR = 0.43, 95%CI = 0.24 to 0.75) than medical centers and faster in non-profit hospitals than public hospitals (HR = 1.79, 95%CI = 1.23 to 2.61). Quantile regression showed that penetration speed was associated with a hospital's prior anti-diabetic prescriptions (25%Q: 18.29; 50%Q: 25.57; 75%Q: 30.97). Higher peaks were found in hospitals that had adopted TZD early (25%Q: -40.33; 50%Q: -38.65; 75%Q: -32.29) and in hospitals in which the drugs penetrated more quickly (25%Q: 16.53; 50%Q: 24.91; 75%Q: 31.50). Conclusions Medical centers began to prescribe TZDs earlier, and they prescribed more TZDs at a faster pace. The TZD diffusion patterns varied among hospitals depending accreditation level, ownership type, and prescription volume of Anti-diabetic drugs. PMID:21281475

  4. The effect of smoking habit changes on body weight: Evidence from the UK.

    PubMed

    Pieroni, Luca; Salmasi, Luca

    2016-03-01

    This paper evaluates the causal relationship between smoking and body weight through two waves (2004-2006) of the British Household Panel Survey. We model the effect of changes in smoking habits, such as quitting or reducing, and account for the heterogeneous responses of individuals located at different points of the body mass distribution by quantile regression. We test our results by means of a large set of control groups and investigate their robustness by using the changes-in-changes estimator and accounting for different thresholds to define smoking reductions. Our results reveal the positive effect of quitting smoking on weight changes, which is also found to increase in the highest quantiles, whereas the decision to reduce smoking does not affect body weight. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Quantitative characterization of the regressive ecological succession by fractal analysis of plant spatial patterns

    USGS Publications Warehouse

    Alados, C.L.; Pueyo, Y.; Giner, M.L.; Navarro, T.; Escos, J.; Barroso, F.; Cabezudo, B.; Emlen, J.M.

    2003-01-01

    We studied the effect of grazing on the degree of regression of successional vegetation dynamics in a semi-arid Mediterranean matorral. We quantified the spatial distribution patterns of the vegetation by fractal analyses, using the fractal information dimension and spatial autocorrelation measured by detrended fluctuation analyses (DFA). It is the first time that fractal analysis of plant spatial patterns has been used to characterize the regressive ecological succession. Plant spatial patterns were compared over a long-term grazing gradient (low, medium and heavy grazing pressure) and on ungrazed sites for two different plant communities: a middle dense matorral of Chamaerops and Periploca at Sabinar-Romeral and a middle dense matorral of Chamaerops, Rhamnus and Ulex at Requena-Montano. The two communities also differed in microclimatic characteristics (sea-oriented at the Sabinar-Romeral site and inland-oriented at the Requena-Montano site). The information fractal dimension increased as we moved from a middle dense matorral to discontinuous and scattered matorral and, finally, to the late regressive succession, the Stipa steppe stage. At this stage a drastic change in the fractal dimension revealed a change in the vegetation structure, accurately indicating end successional vegetation stages. Long-term correlation analysis (DFA) revealed that an increase in grazing pressure leads to unpredictability (randomness) in species distributions, a reduction in diversity, and an increase in cover of the regressive successional species, e.g. Stipa tenacissima L. These comparisons provide a quantitative characterization of the successional dynamic of plant spatial patterns in response to a grazing perturbation gradient. © 2002 Elsevier Science B.V. All rights reserved.

  6. Regression in autistic spectrum disorders.

    PubMed

    Stefanatos, Gerry A

    2008-12-01

    A significant proportion of children diagnosed with Autistic Spectrum Disorder experience a developmental regression characterized by a loss of previously-acquired skills. This may involve a loss of speech or social responsitivity, but often entails both. This paper critically reviews the phenomena of regression in autistic spectrum disorders, highlighting the characteristics of regression, age of onset, temporal course, and long-term outcome. Important considerations for diagnosis are discussed and multiple etiological factors currently hypothesized to underlie the phenomenon are reviewed. It is argued that regressive autistic spectrum disorders can be conceptualized on a spectrum with other regressive disorders that may share common pathophysiological features. The implications of this viewpoint are discussed.

  7. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    PubMed

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.

  8. Robust inference under the beta regression model with application to health care studies.

    PubMed

    Ghosh, Abhik

    2017-01-01

    Data on rates, percentages, or proportions arise frequently in many different applied disciplines like medical biology, health care, psychology, and several others. In this paper, we develop a robust inference procedure for the beta regression model, which is used to describe such response variables taking values in (0, 1) through some related explanatory variables. In relation to the beta regression model, the issue of robustness has been largely ignored in the literature so far. The existing maximum likelihood-based inference has a serious lack of robustness against outliers in data and generates drastically different (erroneous) inferences in the presence of data contamination. Here, we develop the robust minimum density power divergence estimator and a class of robust Wald-type tests for the beta regression model along with several applications. We derive their asymptotic properties and describe their robustness theoretically through influence function analyses. Finite sample performances of the proposed estimators and tests are examined through suitable simulation studies and real data applications in the context of health care and psychology. Although we primarily focus on beta regression models with a fixed dispersion parameter, some indications are also provided for extension to variable dispersion beta regression models with an application.

  9. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
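Class (1) above, an unweighted regression line with bootstrap resampling, can be sketched in a few lines (an illustrative percentile bootstrap in plain Python, not the paper's procedures; function names are ours):

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def bootstrap_slope_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the slope: resample
    (x, y) pairs with replacement and refit the line each time."""
    rng = random.Random(seed)
    n = len(x)
    slopes = sorted(
        ols_slope([x[k] for k in idx], [y[k] for k in idx])
        for idx in ([rng.randrange(n) for _ in range(n)] for _ in range(n_boot))
    )
    lo = slopes[int(alpha / 2 * n_boot)]
    hi = slopes[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Pair resampling like this makes no assumption about the error distribution, which is why the bootstrap is attractive for the heteroscedastic scatter common in cosmic distance scale relations.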

  10. Retargeted Least Squares Regression Algorithm.

    PubMed

    Zhang, Xu-Yao; Wang, Lingfeng; Xiang, Shiming; Liu, Cheng-Lin

    2015-09-01

    This brief presents a framework of retargeted least squares regression (ReLSR) for multicategory classification. The core idea is to learn the regression targets directly from data rather than using the traditional zero-one matrix as regression targets. The learned target matrix can guarantee a large margin constraint for the requirement of correct classification for each data point. Compared with the traditional least squares regression (LSR) and a recently proposed discriminative LSR model, ReLSR is much more accurate in measuring the classification error of the regression model. Furthermore, ReLSR is a single and compact model, hence there is no need to train two-class (binary) machines that are independent of each other. The convex optimization problem of ReLSR is solved elegantly and efficiently with an alternating procedure including regression and retargeting as substeps. The experimental evaluation over a range of databases demonstrates the validity of our method.

  11. A comparison between the use of Cox regression and the use of partial least squares-Cox regression to predict the survival of kidney-transplant patients

    NASA Astrophysics Data System (ADS)

    Solimun

    2017-05-01

    The aim of this research is to model survival data from kidney-transplant patients using partial least squares (PLS)-Cox regression, which can be applied whether or not the no-multicollinearity assumption is met. The secondary data were obtained from research entitled "Factors affecting the survival of kidney-transplant patients". The research subjects comprised 250 patients. The predictor variables consisted of: age (X1); sex (X2, two categories); prior hemodialysis duration (X3); diabetes (X4, two categories); prior transplantation number (X5); number of blood transfusions (X6); discrepancy score (X7); and use of antilymphocyte globulin (ALG) (X8, two categories), while the response variable was patient survival time (in months). Partial least squares regression is a model that connects the predictor variables X and the response variable y, and it primarily aims to determine the relationship between them. Results of the above analyses suggest that the survival of kidney-transplant recipients ranged from 0 to 55 months, with 62% of the patients surviving through treatment lasting 55 months. The PLS-Cox regression analysis revealed that patients' age and the use of ALG significantly affected the survival time of patients. In the PLS-Cox regression model, patients' age (X1) affected the failure probability by a factor of 1.201, indicating that the probability of dying is higher for elderly patients with a kidney transplant than for younger patients.

  12. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Bayesian isotonic density regression

    PubMed Central

    Wang, Lianming; Dunson, David B.

    2011-01-01

    Density regression models allow the conditional distribution of the response given predictors to change flexibly over the predictor space. Such models are much more flexible than nonparametric mean regression models with nonparametric residual distributions, and are well supported in many applications. A rich variety of Bayesian methods have been proposed for density regression, but it is not clear whether such priors have full support so that any true data-generating model can be accurately approximated. This article develops a new class of density regression models that incorporate stochastic-ordering constraints, which are natural when a response tends to increase or decrease monotonically with a predictor. Theory is developed showing large support. Methods are developed for hypothesis testing, with posterior computation relying on a simple Gibbs sampler. Frequentist properties are illustrated in a simulation study, and an epidemiology application is considered. PMID:22822259

  14. Fungible weights in logistic regression.

    PubMed

    Jones, Jeff A; Waller, Niels G

    2016-06-01

    In this article we develop methods for assessing parameter sensitivity in logistic regression models. To set the stage for this work, we first review Waller's (2008) equations for computing fungible weights in linear regression. Next, we describe 2 methods for computing fungible weights in logistic regression. To demonstrate the utility of these methods, we compute fungible logistic regression weights using data from the Centers for Disease Control and Prevention's (2010) Youth Risk Behavior Surveillance Survey, and we illustrate how these alternate weights can be used to evaluate parameter sensitivity. To make our work accessible to the research community, we provide R code (R Core Team, 2015) that will generate both kinds of fungible logistic regression weights. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
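
The linear-regression case that Waller's (2008) equations address can be illustrated numerically. The construction below builds one fungible weight vector whose predictions correlate r with the OLS predictions, so its multiple correlation with y is r times the OLS value; the centering convention, variable names, and data are our assumptions, not the article's code:

```python
import numpy as np

def fungible_weights(X, y, r=0.98, seed=1):
    """One fungible alternative to the OLS weights b: a vector a with
    corr(Xa, Xb) = r, hence R_a = r * R_b.  Assumes X and y are
    column-centered; the construction works in the metric G = X'X/n."""
    n = X.shape[0]
    G = X.T @ X / n
    b = np.linalg.solve(G, X.T @ y / n)            # OLS weights
    rng = np.random.default_rng(seed)
    z = rng.normal(size=b.shape)
    u = z - (z @ G @ b) / (b @ G @ b) * b          # G-orthogonal to b
    u *= np.sqrt((b @ G @ b) / (u @ G @ u))        # match b's G-length
    return b, r * b + np.sqrt(1.0 - r ** 2) * u

# Invented data with a known relationship, column-centered
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
X -= X.mean(axis=0)
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.5, size=40)
y -= y.mean()
b, a = fungible_weights(X, y, r=0.98)
```

Comparing a with b shows how much the individual weights can move while the model's predictive accuracy barely changes, which is the sense in which the weights are "fungible".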

  15. Application of logistic regression for landslide susceptibility zoning of Cekmece Area, Istanbul, Turkey

    NASA Astrophysics Data System (ADS)

    Duman, T. Y.; Can, T.; Gokceoglu, C.; Nefeslioglu, H. A.; Sonmez, H.

    2006-11-01

    As a result of industrialization, throughout the world, cities have been growing rapidly for the last century. One typical example of these growing cities is Istanbul, the population of which is over 10 million. Due to rapid urbanization, new areas suitable for settlement and engineering structures are necessary. The Cekmece area located west of the Istanbul metropolitan area is studied, because the landslide activity is extensive in this area. The purpose of this study is to develop a model that can be used to characterize landslide susceptibility in map form using logistic regression analysis of an extensive landslide database. A database of landslide activity was constructed using both aerial-photography and field studies. About 19.2% of the selected study area is covered by deep-seated landslides. The landslides that occur in the area are primarily located in sandstones with interbedded permeable and impermeable layers such as claystone, siltstone and mudstone. About 31.95% of the total landslide area is located at this unit. To apply logistic regression analyses, a data matrix including 37 variables was constructed. The variables used in the forward stepwise analyses are different measures of slope, aspect, elevation, stream power index (SPI), plan curvature, profile curvature, geology, geomorphology and relative permeability of lithological units. A total of 25 variables were identified as exerting strong influence on landslide occurrence, and were included in the logistic regression equation. Wald statistics values indicate that lithology, SPI and slope are more important than the other parameters in the equation. Beta coefficients of the 25 variables included in the logistic regression equation provide a model for landslide susceptibility in the Cekmece area. This model is used to generate a landslide susceptibility map that correctly classified 83.8% of the landslide-prone areas.
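
A logistic susceptibility model of this kind is fit by maximizing the binomial likelihood. A minimal gradient-descent sketch, with two hypothetical standardized predictors (slope and SPI) and invented 0/1 landslide labels, looks like this:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(rows, labels, lr=0.1, epochs=2000):
    """Logistic regression by batch gradient descent (pure Python).
    rows: list of feature vectors; labels: 0/1 landslide occurrence."""
    n, d = len(rows), len(rows[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * d
        gb = 0.0
        for x, t in zip(rows, labels):
            p = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
            err = p - t
            for j in range(d):
                gw[j] += err * x[j]
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Invented standardized predictors per map cell: [slope, SPI]; label 1 = landslide
rows = [[-1.0, -0.8], [-0.9, -1.1], [-0.7, -0.5], [-0.2, -0.9],
        [0.9, 0.7], [1.1, 0.6], [0.8, 1.2], [0.4, 1.0]]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(rows, labels)
```

The fitted probabilities, evaluated on every cell of a raster, are what gets classed into a susceptibility map; stepwise selection and Wald statistics are extra machinery on top of this core fit.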

  16. Reporting and methodological quality of meta-analyses in urological literature

    PubMed Central

    Xu, Jing

    2017-01-01

    Purpose To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. Materials and Methods We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed by the PRISMA checklist (27 items) and AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. Results A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analysis) had less than 50% adherence. AMSTAR item 1 ("a priori" design), item 5 (list of studies), and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and "a priori" design were associated with superior methodological quality. Conclusions Reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol. PMID:28439452
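
The scoring described here (a PRISMA or AMSTAR score as the count of adequately reported items, plus per-item adherence) can be sketched directly. The records below are invented, arranged so that AMSTAR items 1, 5, and 10 fall below 50% adherence as in the abstract:

```python
# Invented records: one row per meta-analysis, one column per AMSTAR item
# (True = adequately reported).  Arranged so items 1, 5, and 10 are weak.
records = [
    [False, True, True, True, False, True, True, True, True, False, True],
    [True,  True, True, True, True,  True, True, True, True, False, True],
    [False, True, True, True, False, True, True, True, True, False, True],
]

# AMSTAR score of each meta-analysis: count of adequately reported items
scores = [sum(r) for r in records]

# Per-item adherence: fraction of meta-analyses adequately reporting that item
adherence = [sum(col) / len(records) for col in zip(*records)]
low_adherence = [i + 1 for i, a in enumerate(adherence) if a < 0.5]
```

The logistic regression step then treats a high score (e.g. above the sample median) as the binary outcome and study characteristics such as funding support as predictors.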

  17. Spontaneous regression of neuroblastoma.

    PubMed

    Brodeur, Garrett M

    2018-05-01

    Neuroblastomas are characterized by heterogeneous clinical behavior, from spontaneous regression or differentiation into a benign ganglioneuroma, to relentless progression despite aggressive, multimodality therapy. Indeed, neuroblastoma is unique among human cancers in terms of its propensity to undergo spontaneous regression. The strongest evidence for this comes from the mass screening studies conducted in Japan, North America and Europe, and it is most evident in infants with stage 4S disease. This propensity is associated with a pattern of genomic change characterized by whole-chromosome gains rather than segmental chromosome changes, but the mechanism(s) underlying spontaneous regression are currently a matter of speculation. There is evidence to support several possible mechanisms of spontaneous regression in neuroblastomas: (1) neurotrophin deprivation, (2) loss of telomerase activity, (3) humoral or cellular immunity and (4) alterations in epigenetic regulation, and possibly other mechanisms. It is likely that a better understanding of the mechanisms of spontaneous regression will help to identify targeted therapeutic approaches for these tumors. The most easily targeted mechanism is the delayed activation of developmentally programmed cell death regulated by the tropomyosin receptor kinase A (TrkA) pathway. Pan-Trk inhibitors are currently in clinical trials, so Trk inhibition might be used as the first line of therapy in infants with biologically favorable tumors that require treatment. Alternative approaches consist of breaking immune tolerance to tumor antigens, but approaches to telomere shortening or epigenetic regulation are not easily druggable. The different mechanisms of spontaneous neuroblastoma regression are reviewed here, along with possible therapeutic approaches.

  18. An introduction to using Bayesian linear regression with clinical data.

    PubMed

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
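
As a minimal illustration of the posterior computations such a tutorial covers, consider the conjugate case of a no-intercept regression with known noise variance; the data, prior settings, and function name below are our assumptions, not the article's materials:

```python
import math

def slope_posterior(xs, ys, sigma2=1.0, prior_mean=0.0, prior_var=100.0):
    """Conjugate posterior for the slope of y = beta*x + noise with known
    noise variance sigma2 and prior beta ~ Normal(prior_mean, prior_var).
    Returns the posterior mean and variance of beta."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    post_var = 1.0 / (1.0 / prior_var + sxx / sigma2)
    post_mean = post_var * (prior_mean / prior_var + sxy / sigma2)
    return post_mean, post_var

# Invented, centered data (e.g. standardized anxiety vs. ERN amplitude)
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [-1.1, -0.4, 0.1, 0.6, 0.9]
mean, var = slope_posterior(xs, ys)
lo = mean - 1.96 * math.sqrt(var)   # bounds of a 95% credible interval
hi = mean + 1.96 * math.sqrt(var)
```

With a diffuse prior the posterior mean is close to the frequentist estimate; the interesting differences the article discusses arise with informative priors and more complicated models, where sampling (e.g. MCMC) replaces this closed form.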

  19. Principal component regression analysis with SPSS.

    PubMed

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity, and carrying it out in SPSS makes the analysis simpler, faster, and accurate.
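
The same computation SPSS performs can be sketched outside SPSS. A minimal NumPy version of principal component regression (our own variable names and invented collinear data, shown only to make the steps concrete) looks like this:

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: standardize X, project onto the top
    principal components, regress y on the component scores, then map the
    coefficients back to the original (standardized) predictors."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd
    # principal components from the SVD of the standardized design matrix
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt[:n_components].T              # loadings of the retained components
    scores = Z @ V
    gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    beta = V @ gamma                     # back-transform to standardized predictors
    return beta, mu, sd, y.mean()

def pcr_predict(X, beta, mu, sd, y_mean):
    return ((X - mu) / sd) @ beta + y_mean

# Invented data with two nearly collinear predictors
rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = x1 + 0.01 * rng.normal(size=50)     # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 3.0 * x1 + rng.normal(scale=0.1, size=50)
beta, mu, sd, ym = pcr_fit(X, y, n_components=1)
```

Dropping the small trailing components is what removes the multicollinearity disturbance: the retained component is stable even though the two raw predictors are nearly redundant.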

  20. Length bias correction in gene ontology enrichment analysis using logistic regression.

    PubMed

    Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H

    2012-01-01

    When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
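
The covariate-adjustment idea can be shown with a toy example: when membership in a Gene Ontology category (x1) is confounded with transcript length (x2), including length in the logistic model isolates the direct membership effect. The data and function below are invented; in this example significance is driven entirely by length and membership is balanced across lengths, so the fitted membership coefficient stays near zero once length is conditioned on:

```python
import math

def fit_logistic2(x1, x2, y, lr=0.5, epochs=3000):
    """Two-covariate logistic regression by batch gradient descent:
    P(gene called significant) ~ b0 + b1*membership + b2*log_length."""
    b0 = b1 = b2 = 0.0
    n = len(y)
    for _ in range(epochs):
        g0 = g1 = g2 = 0.0
        for a, xl, t in zip(x1, x2, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * a + b2 * xl)))
            e = p - t
            g0 += e
            g1 += e * a
            g2 += e * xl
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
        b2 -= lr * g2 / n
    return b0, b1, b2

# Invented toy data: x1 = GO-category membership, x2 = centered log length,
# y = 1 if the differential-expression test was significant.
x1 = [1, 1, 1, 1, 0, 0, 0, 0]
x2 = [1.0, 0.5, -0.5, -1.0, 1.0, 0.5, -0.5, -1.0]
y = [1, 1, 0, 0, 1, 1, 0, 0]
b0, b1, b2 = fit_logistic2(x1, x2, y)
```

A naive enrichment test that ignored x2 would still see more significant genes among long transcripts; the regression makes explicit that the category itself contributes nothing here beyond its length profile.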