Sample records for conditional quantile function

  1. SEMIPARAMETRIC QUANTILE REGRESSION WITH HIGH-DIMENSIONAL COVARIATES

    PubMed Central

    Zhu, Liping; Huang, Mian; Li, Runze

    2012-01-01

    This paper is concerned with quantile regression for a semiparametric regression model, in which both the conditional mean and conditional variance function of the response given the covariates admit a single-index structure. This semiparametric regression model enables us to reduce the dimension of the covariates while retaining the flexibility of nonparametric regression. Under mild conditions, we show that the simple linear quantile regression offers a consistent estimate of the index parameter vector. This is a surprising and interesting result because the single-index model is possibly misspecified under the linear quantile regression. With a root-n consistent estimate of the index vector, one may employ a local polynomial regression technique to estimate the conditional quantile function. This procedure is computationally efficient, which is very appealing in high-dimensional data analysis. We show that the resulting estimator of the quantile function performs asymptotically as efficiently as if the true value of the index vector were known. The methodologies are demonstrated through comprehensive simulation studies and an application to a real dataset. PMID:24501536
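All quantile regression methods in these records rest on the "check" (pinball) loss. As a minimal sketch, not the authors' single-index procedure: minimising the check loss over a constant recovers the sample quantile, and replacing the constant by a linear index x'beta gives linear quantile regression.

```python
def check_loss(residual, tau):
    """Asymmetric absolute loss: tau*r for r >= 0, (tau-1)*r otherwise."""
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def constant_quantile_fit(y, tau):
    """Brute-force minimiser of the check loss over candidate constants.
    For the check loss a minimiser is always attained at a data point."""
    return min(y, key=lambda c: sum(check_loss(v - c, tau) for v in y))

y = [1.0, 2.0, 3.0, 4.0, 100.0]          # outlier on purpose
print(constant_quantile_fit(y, 0.5))     # -> 3.0, the median, unmoved by the outlier
print(constant_quantile_fit(y, 0.9))     # -> 100.0, the upper quantile
```

Unlike squared loss, the tau = 0.5 fit ignores how far the outlier is, which is exactly the robustness quantile regression trades on.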

  2. Simultaneous multiple non-crossing quantile regression estimation using kernel constraints

    PubMed Central

    Liu, Yufeng; Wu, Yichao

    2011-01-01

    Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
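SNQR enforces non-crossing through constraints on the kernel coefficients during fitting. As a generic illustration of the crossing problem itself (not the authors' method), the simplest post-hoc remedy is monotone rearrangement: sort the separately fitted quantile predictions at each covariate value.

```python
def enforce_noncrossing(quantile_preds):
    """Given quantile predictions at one covariate value, ordered by tau,
    rearrange them so q(tau1) <= q(tau2) whenever tau1 < tau2."""
    return sorted(quantile_preds)

# Separately fitted quantile curves can cross at some x:
preds_at_x = [2.3, 1.9, 2.8]                # taus 0.25, 0.50, 0.75: crossed!
print(enforce_noncrossing(preds_at_x))      # -> [1.9, 2.3, 2.8]
```

Joint estimation as in SNQR avoids the crossing in the first place, rather than repairing it afterwards, and lets the quantiles borrow strength from each other.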

  3. Consistent model identification of varying coefficient quantile regression with BIC tuning parameter selection

    PubMed Central

    Zheng, Qi; Peng, Limin

    2016-01-01

    Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index. We establish the oracle properties of the resulting estimator of coefficient function. Furthermore, we formally investigate a BIC-type uniform tuning parameter selector and show that it can ensure consistent model selection. Our numerical studies confirm the theoretical findings and illustrate an application of the new variable selection procedure. PMID:28008212

  4. Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory.

    DTIC Science & Technology

    1980-11-01

    ...Gnanadesikan (1968). Quantile functions are advocated by Parzen (1979) as providing an approach to probability-based data analysis. Quantile functions are... Gnanadesikan, R. (1968). Probability Plotting Methods for the Analysis of Data, Biometrika, 55, 1-17.

  5. Censored quantile regression with recursive partitioning-based weights

    PubMed Central

    Wey, Andrew; Wang, Lan; Rudser, Kyle

    2014-01-01

    Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800
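Both weighting schemes discussed here refine the basic inverse-probability-of-censoring idea. As a baseline sketch only, using a global Kaplan-Meier estimate of the censoring distribution and ignoring covariates (so neither the locally weighted nor the recursive-partitioning scheme), with made-up survival times:

```python
def km_curve(times, observed):
    """Kaplan-Meier estimate of the survival function of `times`,
    where observed[i] is True if times[i] is an actual event."""
    pts = sorted(set(t for t, d in zip(times, observed) if d))
    surv, s = {}, 1.0
    for t in pts:
        at_risk = sum(1 for v in times if v >= t)
        deaths = sum(1 for v, d in zip(times, observed) if d and v == t)
        s *= 1.0 - deaths / at_risk
        surv[t] = s
    return pts, surv

def eval_surv(pts, surv, t):
    """Step-function evaluation of S(t-), using strictly earlier jump times."""
    s = 1.0
    for p in pts:
        if p < t:
            s = surv[p]
        else:
            break
    return s

def ipcw(times, event):
    """Weight each observed failure by 1/G(t-), where G is the censoring
    survival function (Kaplan-Meier with the event indicator flipped)."""
    pts, G = km_curve(times, [not d for d in event])
    return [(1.0 / eval_surv(pts, G, t)) if d else 0.0
            for t, d in zip(times, event)]

times = [2.0, 3.0, 4.0, 5.0, 7.0]           # hypothetical follow-up times
event = [True, False, True, True, False]    # False = censored
print(ipcw(times, event))                   # censored subjects get weight 0
```

These weights then multiply the check loss in a weighted quantile regression; the paper's contribution is replacing the global G with a covariate-dependent estimate built from survival trees.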

  6. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  7. A quantile count model of water depth constraints on Cape Sable seaside sparrows

    USGS Publications Warehouse

    Cade, B.S.; Dong, Q.

    2008-01-01

    1. A quantile regression model for counts of breeding Cape Sable seaside sparrows Ammodramus maritimus mirabilis (L.) as a function of water depth and previous year abundance was developed based on extensive surveys, 1992-2005, in the Florida Everglades. The quantile count model extends linear quantile regression methods to discrete response variables, providing a flexible alternative to discrete parametric distributional models, e.g. Poisson, negative binomial and their zero-inflated counterparts. 2. Estimates from our multiplicative model demonstrated that negative effects of increasing water depth in breeding habitat on sparrow numbers were dependent on recent occupation history. Upper 10th percentiles of counts (one to three sparrows) decreased with increasing water depth from 0 to 30 cm when sites were not occupied in previous years. However, upper 40th percentiles of counts (one to six sparrows) decreased with increasing water depth for sites occupied in previous years. 3. Greatest decreases (-50% to -83%) in upper quantiles of sparrow counts occurred as water depths increased from 0 to 15 cm when previous year counts were 1, but a small proportion of sites (5-10%) held at least one sparrow even as water depths increased to 20 or 30 cm. 4. A zero-inflated Poisson regression model provided estimates of conditional means that also decreased with increasing water depth, but rates of change were lower and decreased with increasing previous year counts compared to the quantile count model. Quantiles computed for the zero-inflated Poisson model enhanced interpretation of this model but had greater lack-of-fit for water depths > 0 cm and previous year counts ≥ 1, conditions where the negative effect of water depth was readily apparent and better fitted by the quantile count model.
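Quantile count models typically rely on jittering (Machado and Santos Silva, 2005): add Uniform(0,1) noise to each count so that conditional quantiles of the now-continuous variable can be modelled, then map estimates back to the count scale. A minimal sketch of that transform on hypothetical counts, not the fitted Everglades sparrow model:

```python
import math, random

def jitter(counts, rng):
    """Counts plus Uniform(0,1) noise: a continuous variable."""
    return [y + rng.random() for y in counts]

def quantile(values, tau):
    """Empirical tau-quantile (lower order statistic)."""
    s = sorted(values)
    return s[min(int(tau * len(s)), len(s) - 1)]

def count_quantile(counts, tau, rng):
    """Back-transform a jittered quantile to the count scale."""
    return max(0, math.ceil(quantile(jitter(counts, rng), tau)) - 1)

rng = random.Random(42)
shallow = [0, 1, 1, 2, 3, 3, 4, 6]   # hypothetical sparrow counts at shallow sites
print(count_quantile(shallow, 0.9, rng))   # -> 6
print(count_quantile(shallow, 0.5, rng))   # -> 3
```

In the full model the jittered quantile is a regression function of water depth and previous-year abundance rather than an unconditional sample quantile.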

  8. Quantile Regression for Analyzing Heterogeneity in Ultra-high Dimension

    PubMed Central

    Wang, Lan; Wu, Yichao

    2012-01-01

    Ultra-high dimensional data often display heterogeneity due to either heteroscedastic variance or other forms of non-location-scale covariate effects. To accommodate heterogeneity, we advocate a more general interpretation of sparsity which assumes that only a small number of covariates influence the conditional distribution of the response variable given all candidate covariates; however, the sets of relevant covariates may differ when we consider different segments of the conditional distribution. In this framework, we investigate the methodology and theory of nonconvex penalized quantile regression in ultra-high dimension. The proposed approach has two distinctive features: (1) it enables us to explore the entire conditional distribution of the response variable given the ultra-high dimensional covariates and provides a more realistic picture of the sparsity pattern; (2) it requires substantially weaker conditions compared with alternative methods in the literature; thus, it greatly alleviates the difficulty of model checking in the ultra-high dimension. In theoretic development, it is challenging to deal with both the nonsmooth loss function and the nonconvex penalty function in ultra-high dimensional parameter space. We introduce a novel sufficient optimality condition which relies on a convex differencing representation of the penalized loss function and the subdifferential calculus. Exploring this optimality condition enables us to establish the oracle property for sparse quantile regression in the ultra-high dimension under relaxed conditions. The proposed method greatly enhances existing tools for ultra-high dimensional data analysis. Monte Carlo simulations demonstrate the usefulness of the proposed procedure. The real data example we analyzed demonstrates that the new approach reveals substantially more information compared with alternative methods. PMID:23082036
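The abstract does not fix a specific nonconvex penalty, so taking SCAD (Fan and Li, 2001) as the illustrative assumption: unlike the LASSO's lambda*|t|, SCAD flattens out, leaving large coefficients essentially unpenalised and hence nearly unbiased.

```python
def scad(t, lam, a=3.7):
    """SCAD penalty at |coefficient| = t (t >= 0), with Fan & Li's a = 3.7."""
    if t <= lam:
        return lam * t                                   # LASSO-like near zero
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2                       # constant for large t

# Small coefficients are shrunk like LASSO; large ones incur no extra penalty:
print(scad(0.5, lam=1.0))                                # -> 0.5
print(scad(10.0, lam=1.0) == scad(100.0, lam=1.0))       # -> True
```

The convex-differencing argument mentioned in the abstract exploits the fact that this penalty is a difference of two convex functions.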

  9. HIGHLIGHTING DIFFERENCES BETWEEN CONDITIONAL AND UNCONDITIONAL QUANTILE REGRESSION APPROACHES THROUGH AN APPLICATION TO ASSESS MEDICATION ADHERENCE

    PubMed Central

    BORAH, BIJAN J.; BASU, ANIRBAN

    2014-01-01

    The quantile regression (QR) framework provides a pragmatic approach in understanding the differential impacts of covariates along the distribution of an outcome. However, the QR framework that has pervaded the applied economics literature is based on the conditional quantile regression method. It is used to assess the impact of a covariate on a quantile of the outcome conditional on specific values of other covariates. In most cases, conditional quantile regression may generate results that are often not generalizable or interpretable in a policy or population context. In contrast, the unconditional quantile regression method provides more interpretable results as it marginalizes the effect over the distributions of other covariates in the model. In this paper, the differences between these two regression frameworks are highlighted, both conceptually and econometrically. Additionally, using real-world claims data from a large US health insurer, alternative QR frameworks are implemented to assess the differential impacts of covariates along the distribution of medication adherence among elderly patients with Alzheimer’s disease. PMID:23616446
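The unconditional QR method referenced here is usually implemented via recentered influence functions (Firpo, Fortin and Lemieux, 2009): compute each observation's RIF for the quantile of interest, then regress the RIF values on covariates by OLS. A hedged sketch of just the RIF step, with a crude density estimate standing in for a proper kernel estimator:

```python
import statistics

def rif_for_quantile(y, tau, bandwidth=1.0):
    """RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau)."""
    s = sorted(y)
    q = s[min(int(tau * len(s)), len(s) - 1)]   # empirical tau-quantile
    # crude density at q: share of points within +/- bandwidth of q
    f_q = sum(1 for v in y if abs(v - q) <= bandwidth) / (2 * bandwidth * len(y))
    return [q + (tau - (1.0 if v <= q else 0.0)) / f_q for v in y]

y = [1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 8.0]   # hypothetical outcome values
rifs = rif_for_quantile(y, 0.5)
# RIF values average back (approximately) to the quantile itself:
print(round(statistics.mean(rifs), 2))
```

Regressing these RIF values on covariates gives effects on the marginal (unconditional) quantile, which is what makes the results population-interpretable in the sense the abstract describes.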

  10. Estimating effects of limiting factors with regression quantiles

    USGS Publications Warehouse

    Cade, B.S.; Terrell, J.W.; Schroeder, R.L.

    1999-01-01

    In a recent Concepts paper in Ecology, Thomson et al. emphasized that assumptions of conventional correlation and regression analyses fundamentally conflict with the ecological concept of limiting factors, and they called for new statistical procedures to address this problem. The analytical issue is that unmeasured factors may be the active limiting constraint and may induce a pattern of unequal variation in the biological response variable through an interaction with the measured factors. Consequently, changes near the maxima, rather than at the center of response distributions, are better estimates of the effects expected when the observed factor is the active limiting constraint. Regression quantiles provide estimates for linear models fit to any part of a response distribution, including near the upper bounds, and require minimal assumptions about the form of the error distribution. Regression quantiles extend the concept of one-sample quantiles to the linear model by solving an optimization problem of minimizing an asymmetric function of absolute errors. Rank-score tests for regression quantiles provide tests of hypotheses and confidence intervals for parameters in linear models with heteroscedastic errors, conditions likely to occur in models of limiting ecological relations. We used selected regression quantiles (e.g., 5th, 10th, ..., 95th) and confidence intervals to test hypotheses that parameters equal zero for estimated changes in average annual acorn biomass due to forest canopy cover of oak (Quercus spp.) and oak species diversity. Regression quantiles also were used to estimate changes in glacier lily (Erythronium grandiflorum) seedling numbers as a function of lily flower numbers, rockiness, and pocket gopher (Thomomys talpoides fossor) activity, data that motivated the query by Thomson et al. for new statistical procedures. 
Both example applications showed that effects of limiting factors estimated by changes in some upper regression quantile (e.g., 90-95th) were greater than if effects were estimated by changes in the means from standard linear model procedures. Estimating a range of regression quantiles (e.g., 5-95th) provides a comprehensive description of biological response patterns for exploratory and inferential analyses in observational studies of limiting factors, especially when sampling large spatial and temporal scales.
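The contrast described, upper-quantile effects exceeding mean effects under a limiting factor, is easy to reproduce on simulated data (hypothetical, not the acorn or lily data): when the measured factor only caps the response, the response varies between zero and that ceiling, so the 90th percentile tracks the ceiling while the mean changes half as fast.

```python
import random

def pct(values, tau):
    """Empirical tau-quantile (lower order statistic)."""
    s = sorted(values)
    return s[min(int(tau * len(s)), len(s) - 1)]

rng = random.Random(1)
low_x  = [rng.random() * 10 for _ in range(2000)]   # responses, ceiling = 10
high_x = [rng.random() * 20 for _ in range(2000)]   # responses, ceiling = 20

mean_change = sum(high_x) / len(high_x) - sum(low_x) / len(low_x)
p90_change  = pct(high_x, 0.9) - pct(low_x, 0.9)
print(mean_change < p90_change)   # -> True: the upper-quantile effect is larger
```

A mean regression would understate the effect of the limiting factor here by roughly half, which is the point Thomson et al. raised and regression quantiles address.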

  11. Highlighting differences between conditional and unconditional quantile regression approaches through an application to assess medication adherence.

    PubMed

    Borah, Bijan J; Basu, Anirban

    2013-09-01

    The quantile regression (QR) framework provides a pragmatic approach in understanding the differential impacts of covariates along the distribution of an outcome. However, the QR framework that has pervaded the applied economics literature is based on the conditional quantile regression method. It is used to assess the impact of a covariate on a quantile of the outcome conditional on specific values of other covariates. In most cases, conditional quantile regression may generate results that are often not generalizable or interpretable in a policy or population context. In contrast, the unconditional quantile regression method provides more interpretable results as it marginalizes the effect over the distributions of other covariates in the model. In this paper, the differences between these two regression frameworks are highlighted, both conceptually and econometrically. Additionally, using real-world claims data from a large US health insurer, alternative QR frameworks are implemented to assess the differential impacts of covariates along the distribution of medication adherence among elderly patients with Alzheimer's disease. Copyright © 2013 John Wiley & Sons, Ltd.

  12. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator, computed using the concave-convex procedure, is developed for parameter estimation, and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307

  13. Alternative Statistical Frameworks for Student Growth Percentile Estimation

    ERIC Educational Resources Information Center

    Lockwood, J. R.; Castellano, Katherine E.

    2015-01-01

    This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…

  14. Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA

    PubMed Central

    Lin, Chen-Yen; Bondell, Howard; Zhang, Hao Helen; Zou, Hui

    2014-01-01

    Quantile regression provides a more thorough view of the effect of covariates on a response. Nonparametric quantile regression has become a viable alternative to avoid restrictive parametric assumptions. The problem of variable selection for quantile regression is challenging, since important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline ANOVA models. The proposed sparse nonparametric quantile regression (SNQR) can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Supplementary materials for this article are available online. PMID:24554792

  15. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
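A minimal version of the same idea, with piecewise-linear interpolation standing in for the cubic splines so the sketch stays short: represent the quantile function on a probability grid, then generate new samples by evaluating it at Uniform(0,1) draws (inverse-CDF sampling).

```python
import bisect, random

def make_quantile_function(sample):
    """Piecewise-linear quantile function through the order statistics."""
    s = sorted(sample)
    probs = [(i + 0.5) / len(s) for i in range(len(s))]   # plotting positions
    def qf(u):
        if u <= probs[0]:  return s[0]
        if u >= probs[-1]: return s[-1]
        j = bisect.bisect_right(probs, u)
        w = (u - probs[j - 1]) / (probs[j] - probs[j - 1])
        return s[j - 1] + w * (s[j] - s[j - 1])           # linear in between
    return qf

qf = make_quantile_function([3.1, 0.2, 5.7, 1.4, 2.2, 4.8, 2.9, 3.5])
rng = random.Random(0)
resampled = [qf(rng.random()) for _ in range(5)]   # new draws mimicking the data
print(qf(0.5), resampled)
```

The spline versions in the paper smooth the same construction; the monotonicity that makes qf a valid quantile function is what their formulations must preserve.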

  16. Wildfire Selectivity for Land Cover Type: Does Size Matter?

    PubMed Central

    Barros, Ana M. G.; Pereira, José M. C.

    2014-01-01

    Previous research has shown that fires burn certain land cover types disproportionally to their abundance. We used quantile regression to study land cover proneness to fire as a function of fire size, under the hypothesis that they are inversely related, for all land cover types. Using five years of fire perimeters, we estimated conditional quantile functions for lower (avoidance) and upper (preference) quantiles of fire selectivity for five land cover types - annual crops, evergreen oak woodlands, eucalypt forests, pine forests and shrublands. The slope of significant regression quantiles describes the rate of change in fire selectivity (avoidance or preference) as a function of fire size. We used Monte-Carlo methods to randomly permutate fires in order to obtain a distribution of fire selectivity due to chance. This distribution was used to test the null hypotheses that 1) mean fire selectivity does not differ from that obtained by randomly relocating observed fire perimeters; 2) that land cover proneness to fire does not vary with fire size. Our results show that land cover proneness to fire is higher for shrublands and pine forests than for annual crops and evergreen oak woodlands. As fire size increases, selectivity decreases for all land cover types tested. Moreover, the rate of change in selectivity with fire size is higher for preference than for avoidance. Comparison between observed and randomized data led us to reject both null hypotheses tested (α = 0.05) and to conclude it is very unlikely the observed values of fire selectivity and change in selectivity with fire size are due to chance. PMID:24454747

  17. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to wrong estimates in subsequent applications, such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one certain non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities, results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic. In particular, kriging with external drift cannot be employed to incorporate the secondary information. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower equal and greater equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. 
The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
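The key observation above, that one set of interpolation weights can be reused for every non-exceedance probability, can be sketched directly. The gauge quantile values and weights below are hypothetical; the paper's random mixing replaces these fixed weights with constrained random fields.

```python
def interpolate_quantiles(gauge_quantiles, weights):
    """Weighted average of neighbouring gauges' quantile functions,
    applied probability-by-probability with the SAME weights."""
    n = len(gauge_quantiles[0])
    return [sum(w * g[k] for w, g in zip(weights, gauge_quantiles))
            for k in range(n)]

# quantile values (mm/h) at non-exceedance probabilities 0.5, 0.9, 0.99:
gauge_a = [0.1, 1.2, 5.0]
gauge_b = [0.3, 2.0, 9.0]
est = interpolate_quantiles([gauge_a, gauge_b], [0.75, 0.25])
print(est)   # monotone in probability because all weights are positive
```

This also shows why positivity of the weights matters: a negative weight could break the monotonicity of the interpolated distribution function, which is the problem the abstract raises for kriging.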

  18. Quantile regression analyses of associated factors for body mass index in Korean adolescents.

    PubMed

    Kim, T H; Lee, E K; Han, E

    2015-05-01

    This study examined the influence of home and school environments, and individual health-risk behaviours on body weight outcomes in Korean adolescents. This was a cross-sectional observational study. Quantile regression models were used to explore heterogeneity in the association of specific factors with body mass index (BMI) over the entire conditional BMI distribution. Data came from a nationally representative web-based survey for youths. Paternal education level of college or more education was associated with lower BMI for girls, whereas college or more education of mothers was associated with higher BMI for boys; for both, the magnitude of association became larger at the upper quantiles of the conditional BMI distribution. Girls with good family economic status were more likely to have higher BMIs than those with average family economic status, particularly at the upper quantile of the conditional BMI distribution. Attending a co-ed school was associated with lower BMI for both genders with a larger association at the upper quantiles. Substantial screen time for TV watching, video games, or internet surfing was associated with a higher BMI with a larger association at the upper quantiles for both girls and boys. Dental prevention was negatively associated with BMI, whereas suicide consideration was positively associated with BMIs of both genders with a larger association at a higher quantile. These findings suggest that interventions aimed at behavioural changes and positive parental roles are needed to effectively address high adolescent BMI. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  19. A Short Research Note on Calculating Exact Distribution Functions and Random Sampling for the 3D NFW Profile

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.; Howlett, Cullan

    2018-06-01

    In this short note we publish the analytic quantile function for the Navarro, Frenk & White (NFW) profile. All known published and coded methods for sampling from the 3D NFW PDF use either accept-reject, or numeric interpolation (sometimes via a lookup table) for projecting random Uniform samples through the quantile distribution function to produce samples of the radius. This is a common requirement in N-body initial condition (IC), halo occupation distribution (HOD), and semi-analytic modelling (SAM) work for correctly assigning particles or galaxies to positions given an assumed concentration for the NFW profile. Using this analytic description allows for much faster and cleaner code to solve a common numeric problem in modern astronomy. We release R and Python versions of simple code that achieves this sampling, which we note is trivial to reproduce in any modern programming language.
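The closed-form quantile function itself is in the paper; what follows is only the numeric-inversion baseline it replaces, sketched for the standard NFW enclosed-mass function mu(x) = ln(1+x) - x/(1+x), with the CDF of radius inside r_max = c taken as mu(x)/mu(c) and inverted by bisection.

```python
import math

def mu(x):
    """Dimensionless NFW enclosed mass at radius x = r / r_s."""
    return math.log(1.0 + x) - x / (1.0 + x)

def nfw_radius_quantile(u, c):
    """Numerically invert P(x) = mu(x)/mu(c) = u for x in [0, c] by bisection."""
    lo, hi = 0.0, c
    target = u * mu(c)
    for _ in range(80):                  # far beyond double precision
        mid = 0.5 * (lo + hi)
        if mu(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Half-mass radius (in units of r_s) for concentration c = 5:
x_med = nfw_radius_quantile(0.5, c=5.0)
print(round(x_med, 4))
```

Feeding Uniform(0,1) draws through this function yields NFW-distributed radii; the paper's analytic quantile does the same without the per-sample root-finding loop.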

  20. Nonuniform sampling by quantiles.

    PubMed

    Craft, D Levi; Sonstrom, Reilly E; Rovnyak, Virginia G; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, however higher dimensional schedules are similar within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the general public license. Copyright © 2018 Elsevier Inc. All rights reserved.
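The core scheduling step described above, dividing the weighting function into regions of equal probability and acquiring one grid point per region, can be sketched as follows (a minimal 1D version without the jittering, corner-forcing, or backfill options; it is deterministic, as the abstract notes for weighted 1D schedules):

```python
def quantile_schedule(weights, n_samples):
    """Pick n_samples grid indices at the midpoint quantiles of the
    normalised weighting function over the Nyquist grid."""
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    schedule = []
    for k in range(n_samples):
        target = (k + 0.5) / n_samples          # midpoint of the k-th region
        idx = next(i for i, c in enumerate(cdf) if c >= target)
        schedule.append(idx)
    return sorted(set(schedule))

# An exponentially decaying weight (typical for decaying NMR signals)
# concentrates samples at early grid points and spreads out later ones:
weights = [2.0 ** (-i / 8.0) for i in range(32)]
print(quantile_schedule(weights, 12))
```

Because each region carries equal probability, gaps in the schedule are as small as the weighting function allows, which is the adherence property the abstract emphasises.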

  21. Differential effects of dietary diversity and maternal characteristics on linear growth of children aged 6-59 months in sub-Saharan Africa: a multi-country analysis.

    PubMed

    Amugsi, Dickson A; Dimbuene, Zacharie T; Kimani-Murage, Elizabeth W; Mberu, Blessing; Ezeh, Alex C

    2017-04-01

    To investigate the differential effects of dietary diversity (DD) and maternal characteristics on child linear growth at different points of the conditional distribution of height-for-age Z-score (HAZ) in sub-Saharan Africa. Secondary analysis of data from nationally representative cross-sectional samples of singleton children aged 0-59 months, born to mothers aged 15-49 years. The outcome variable was child HAZ. Quantile regression was used to perform the multivariate analysis. The most recent Demographic and Health Surveys from Ghana, Nigeria, Kenya, Mozambique and Democratic Republic of Congo (DRC). The present analysis was restricted to children aged 6-59 months (n = 31 604). DD was associated positively with HAZ in the first four quantiles (5th, 10th, 25th and 50th) and the highest quantile (90th) in Nigeria. The largest effect occurred at the very bottom (5th quantile) and the very top (90th quantile) of the conditional HAZ distribution. In DRC, DD was significantly and positively associated with HAZ in the two lower quantiles (5th, 10th). The largest effects of maternal education occurred at the lower end of the conditional HAZ distribution in Ghana, Nigeria and DRC. Maternal BMI and height also had positive effects on HAZ at different points of the conditional distribution of HAZ. Our analysis shows that the association between DD and maternal factors and HAZ differs along the conditional HAZ distribution. Intervention measures need to take into account the heterogeneous effect of the determinants of child nutritional status along the different percentiles of the HAZ distribution.

  3. Contrasting OLS and Quantile Regression Approaches to Student "Growth" Percentiles

    ERIC Educational Resources Information Center

    Castellano, Katherine Elizabeth; Ho, Andrew Dean

    2013-01-01

    Regression methods can locate student test scores in a conditional distribution, given past scores. This article contrasts and clarifies two approaches to describing these locations in terms of readily interpretable percentile ranks or "conditional status percentile ranks." The first is Betebenner's quantile regression approach that results in…

  4. Superquantile/CVaR Risk Measures: Second-Order Theory

    DTIC Science & Technology

    2014-07-17

    ...order version of quantile regression. Keywords: superquantiles, conditional value-at-risk, second-order superquantiles, mixed superquantiles, quantile regression. ...second-order superquantiles is in the domain of generalized regression. We laid out in [16] a parallel methodology to that of quantile regression...

  5. Modeling distributional changes in winter precipitation of Canada using Bayesian spatiotemporal quantile regression subjected to different teleconnections

    NASA Astrophysics Data System (ADS)

    Tan, Xuezhi; Gan, Thian Yew; Chen, Shu; Liu, Bingjun

    2018-05-01

    Climate change and large-scale climate patterns may result in changes in the probability distributions of climate variables that are associated with changes in the mean, the variability, and the severity of extreme climate events. In this paper, we applied a flexible framework based on the Bayesian spatiotemporal quantile regression (BSTQR) model to identify climate changes at different quantile levels and their teleconnections to large-scale climate patterns such as the El Niño-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO) and Pacific-North American pattern (PNA). Using the BSTQR model with time (year) as a covariate, we estimated changes in Canadian winter precipitation and their uncertainties at different quantile levels. Some stations in eastern Canada showed distributional changes in winter precipitation, such as an increase in low quantiles but a decrease in high quantiles. Because quantile functions in the BSTQR model vary with space and time and assimilate spatiotemporal precipitation data, the BSTQR model produced much smoother and less uncertain quantile changes than classic regression that ignores spatiotemporal correlations. Using the BSTQR model with five teleconnection indices (i.e., SOI, PDO, PNA, NP and NAO) as covariates, we investigated the effects of large-scale climate patterns on Canadian winter precipitation at different quantile levels. Winter precipitation responses to these five teleconnections differed across quantile levels, and the effects of the five teleconnections on Canadian winter precipitation were stronger at low and high quantile levels than at medium ones.

  6. Parameter Heterogeneity In Breast Cancer Cost Regressions – Evidence From Five European Countries

    PubMed Central

    Banks, Helen; Campbell, Harry; Douglas, Anne; Fletcher, Eilidh; McCallum, Alison; Moger, Tron Anders; Peltola, Mikko; Sveréus, Sofia; Wild, Sarah; Williams, Linda J.; Forbes, John

    2015-01-01

    Abstract We investigate parameter heterogeneity in breast cancer 1-year cumulative hospital costs across five European countries as part of the EuroHOPE project. The paper aims to explore whether conditional mean effects provide a suitable representation of the national variation in hospital costs. A cohort of patients with a primary diagnosis of invasive breast cancer (ICD-9 code 174 and ICD-10 C50 codes) is derived using routinely collected individual breast cancer data from Finland, the metropolitan area of Turin (Italy), Norway, Scotland and Sweden. Conditional mean effects are estimated by ordinary least squares for each country, and quantile regressions are used to explore heterogeneity across the conditional quantile distribution. Point estimates based on conditional mean effects provide a good approximation of treatment response for some key demographic and diagnosis-specific variables (e.g. age and ICD-10 diagnosis) across the conditional quantile distribution. For many policy variables of interest, however, there is considerable evidence of parameter heterogeneity that is concealed if decisions are based solely on conditional mean results. The use of quantile regression methods reinforces the need to look beyond an average effect, given the greater recognition that breast cancer is a complex disease reflecting patient heterogeneity. © 2015 The Authors. Health Economics Published by John Wiley & Sons Ltd. PMID:26633866

  7. The effectiveness of drinking and driving policies for different alcohol-related fatalities: a quantile regression analysis.

    PubMed

    Ying, Yung-Hsiang; Wu, Chin-Chih; Chang, Koyin

    2013-09-27

    To understand the impact of drinking and driving laws on drinking and driving fatality rates, this study explored the different effects these laws have on areas with varying severity rates for drinking and driving. Unlike previous studies, this study employed quantile regression analysis. Empirical results showed that policies based on local conditions must be used to effectively reduce drinking and driving fatality rates; that is, different measures should be adopted to target the specific conditions in various regions. For areas with low fatality rates (low quantiles), people's habits and attitudes toward alcohol should be emphasized instead of transportation safety laws because "preemptive regulations" are more effective. For areas with high fatality rates (or high quantiles), "ex-post regulations" are more effective, and impact these areas approximately 0.01% to 0.05% more than they do areas with low fatality rates.

  9. Goodness of Fit and Misspecification in Quantile Regressions

    ERIC Educational Resources Information Center

    Furno, Marilena

    2011-01-01

    The article considers a test of specification for quantile regressions. The test relies on the increase of the objective function and the worsening of the fit when unnecessary constraints are imposed. It compares the objective functions of restricted and unrestricted models and, in its different formulations, it verifies (a) forecast ability, (b)…

  10. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors have become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements of their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computationally intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, in particular shrinking individual-specific effects toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, at both the population and the individual-specific level, adjusted for further risk factors, and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
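The boosting mechanics for a quantile loss can be illustrated with a stripped-down, single-covariate sketch: plain gradient boosting with decision stumps, not the structured additive model of the paper. The data, number of rounds, and learning rate are arbitrary illustrative choices.

```python
import random

def pinball(y, f, tau):
    # Check (pinball) loss, averaged over the sample
    return sum(max(tau * (a - b), (tau - 1) * (a - b)) for a, b in zip(y, f)) / len(y)

def fit_stump(x, g):
    # Least-squares regression stump on a single covariate
    best = None
    for t in sorted(set(x))[:-1]:
        left = [gi for xi, gi in zip(x, g) if xi <= t]
        right = [gi for xi, gi in zip(x, g) if xi > t]
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        sse = sum((gi - (lv if xi <= t else rv)) ** 2 for xi, gi in zip(x, g))
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    _, t, lv, rv = best
    return lambda xi: lv if xi <= t else rv

def boost_quantile(x, y, tau, rounds=100, lr=0.1):
    f0 = sorted(y)[int(tau * (len(y) - 1))]  # start from the unconditional quantile
    fit, stumps = [f0] * len(y), []
    for _ in range(rounds):
        # Negative gradient of the pinball loss: tau above the fit, tau - 1 below
        g = [tau if yi > fi else tau - 1.0 for yi, fi in zip(y, fit)]
        s = fit_stump(x, g)
        stumps.append(s)
        fit = [fi + lr * s(xi) for fi, xi in zip(fit, x)]
    return lambda xi: f0 + lr * sum(s(xi) for s in stumps), f0

random.seed(1)
x = [i / 100 for i in range(100)]
y = [2 * xi + random.gauss(0, 0.2) for xi in x]  # increasing signal plus noise
model, f0 = boost_quantile(x, y, tau=0.9)
print(pinball(y, [f0] * len(y), 0.9), pinball(y, [model(xi) for xi in x], 0.9))
```

Component-wise boosting as in the paper additionally cycles over many candidate base learners (linear terms, smooth terms, individual effects) and updates only the best-fitting one per round, which is what produces the built-in selection and shrinkage of effects.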

  11. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481

  12. Robust small area estimation of poverty indicators using M-quantile approach (Case study: Sub-district level in Bogor district)

    NASA Astrophysics Data System (ADS)

    Girinoto, Sadik, Kusman; Indahwati

    2017-03-01

    The National Socio-Economic Survey samples are designed to produce estimates of parameters for planned domains (provinces and districts). Estimation for unplanned domains (sub-districts and villages) is limited in its ability to yield reliable direct estimates. One possible solution to this problem is to employ small area estimation techniques. The popular choice for small area estimation is based on linear mixed models; however, such models require strong distributional assumptions and do not easily allow for outlier-robust estimation. An alternative is the M-quantile regression approach to small area estimation, based on modeling area-specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It achieves outlier-robust estimation through an M-estimator-type influence function and requires no strong distributional assumptions. In this paper, the aim is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor District. We also compare the results with direct estimates. The results suggest the framework may be preferable when the direct estimate shows no incidence of poverty at all in a small area.

  13. Bayesian quantitative precipitation forecasts in terms of quantiles

    NASA Astrophysics Data System (ADS)

    Bentzien, Sabrina; Friederichs, Petra

    2014-05-01

    Ensemble prediction systems (EPS) for numerical weather prediction on the mesoscale are developed in particular to obtain probabilistic guidance for high-impact weather. An EPS issues not only a deterministic future state of the atmosphere but a sample of possible future states. Ensemble postprocessing then translates such a sample of forecasts into probabilistic measures. This study focuses on probabilistic quantitative precipitation forecasts in terms of quantiles. Quantiles are particularly suitable for describing precipitation at various locations, since no assumption is required about the distribution of precipitation. The focus is on prediction during high-impact events, related to the Volkswagen Stiftung-funded project WEX-MOP (Mesoscale Weather Extremes - Theory, Spatial Modeling and Prediction). Quantile forecasts are derived from the raw ensemble and via quantile regression. Neighborhood methods and time-lagging are effective tools to inexpensively increase the ensemble spread, which results in more reliable forecasts, especially for extreme precipitation events. Since an EPS provides a large number of potentially informative predictors, variable selection is required in order to obtain a stable statistical model. A Bayesian formulation of quantile regression allows for inference about the selection of predictive covariates through the use of appropriate prior distributions. Moreover, an additional process layer for the regression parameters accounts for spatial variation of the parameters. Bayesian quantile regression and its spatially adaptive extension are illustrated for the German-focused mesoscale weather prediction ensemble COSMO-DE-EPS, which has run (pre)operationally since December 2010 at the German Meteorological Service (DWD). Objective out-of-sample verification uses the quantile score (QS), a weighted absolute error between quantile forecasts and observations. The QS is a proper scoring function and can be decomposed into reliability, resolution and uncertainty parts. A quantile reliability plot gives detailed insight into the predictive performance of the quantile forecasts.
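The quantile score described above is a one-line computation; a minimal sketch (variable names are illustrative):

```python
def quantile_score(obs, forecasts, tau):
    """Mean check-function error between tau-quantile forecasts and observations.
    Negatively oriented (smaller is better); in expectation it is minimized by
    the true tau-quantile, which makes it a proper score for quantiles."""
    return sum((tau - (y < q)) * (y - q) for y, q in zip(obs, forecasts)) / len(obs)

# The sample median (tau = 0.5) scores better than a biased constant forecast.
print(quantile_score([1, 2, 3, 4, 5], [3] * 5, 0.5))   # 0.6
print(quantile_score([1, 2, 3, 4, 5], [1] * 5, 0.5))   # 1.0
```

For tau near 1 the score penalizes under-forecasting of large observations more heavily, which is why it is a natural verification target for extreme precipitation quantiles.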

  14. Quantile based Tsallis entropy in residual lifetime

    NASA Astrophysics Data System (ADS)

    Khammar, A. H.; Jahanshahi, S. M. A.

    2018-02-01

    Tsallis entropy is a generalization of order α of the Shannon entropy, and is nonadditive, unlike the Shannon entropy. Shannon entropy may be negative for some distributions, but Tsallis entropy can always be made nonnegative by choosing an appropriate value of α. In this paper, we derive the quantile form of this nonadditive entropy function in the residual lifetime, namely the residual quantile Tsallis entropy (RQTE), and obtain bounds for it in terms of Rényi's residual quantile entropy. We also obtain a relationship between the RQTE and the proportional hazards model in the quantile setup. Based on the new measure, we propose a stochastic order and aging classes, and study their properties. Finally, we prove characterization theorems for some well-known lifetime distributions. It is shown that the RQTE uniquely determines the parent distribution, unlike the residual Tsallis entropy.
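For orientation, the quantile form referred to here can be written down directly (a standard derivation sketched under the usual notation, which may differ from the paper's). With quantile function Q(u) and quantile density q(u) = Q'(u), the substitution x = Q(u) gives f(Q(u)) = 1/q(u):

```latex
% Tsallis entropy of order \alpha in distribution form
H_\alpha = \frac{1}{\alpha - 1}\left(1 - \int f(x)^{\alpha}\,dx\right),
\qquad \alpha > 0,\ \alpha \neq 1
% Substituting x = Q(u), dx = q(u)\,du, f(Q(u)) = 1/q(u):
H_\alpha = \frac{1}{\alpha - 1}\left(1 - \int_0^1 q(u)^{1-\alpha}\,du\right)
```

The residual version (RQTE) restricts the integral to the quantile levels (u, 1) beyond the current age, with the survival probability 1 - u entering through the normalization of the residual density.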

  15. Streamflow distribution maps for the Cannon River drainage basin, southeast Minnesota, and the St. Louis River drainage basin, northeast Minnesota

    USGS Publications Warehouse

    Smith, Erik A.; Sanocki, Chris A.; Lorenz, David L.; Jacobsen, Katrin E.

    2017-12-27

    Streamflow distribution maps for the Cannon River and St. Louis River drainage basins were developed by the U.S. Geological Survey, in cooperation with the Legislative-Citizen Commission on Minnesota Resources, to illustrate relative and cumulative streamflow distributions. The Cannon River was selected to provide baseline data to assess the effects of potential surficial sand mining, and the St. Louis River was selected to determine the effects of ongoing Mesabi Iron Range mining. Each drainage basin (Cannon, St. Louis) was subdivided into nested drainage basins: the Cannon River was subdivided into 152 nested drainage basins, and the St. Louis River was subdivided into 353 nested drainage basins. For each smaller drainage basin, the estimated volumes of groundwater discharge (as base flow) and surface runoff flowing into all surface-water features were displayed under the following conditions: (1) extreme low-flow conditions, comparable to an exceedance-probability quantile of 0.95; (2) low-flow conditions, comparable to an exceedance-probability quantile of 0.90; (3) a median condition, comparable to an exceedance-probability quantile of 0.50; and (4) a high-flow condition, comparable to an exceedance-probability quantile of 0.02. Streamflow distribution maps were developed using flow-duration curve exceedance-probability quantiles in conjunction with Soil-Water-Balance model outputs; both the flow-duration curve and Soil-Water-Balance models were built upon previously published U.S. Geological Survey reports. The selected streamflow distribution maps provide a proactive water management tool for State cooperators by illustrating flow rates during a range of hydraulic conditions.
Furthermore, after the nested drainage basins are highlighted in terms of surface-water flows, the streamflows can be evaluated in the context of meeting specific ecological flows under different flow regimes and potentially assist with decisions regarding groundwater and surface-water appropriations. Presented streamflow distribution maps are foundational work intended to support the development of additional streamflow distribution maps that include statistical constraints on the selected flow conditions.

  16. The association of fatigue, pain, depression and anxiety with work and activity impairment in immune mediated inflammatory diseases.

    PubMed

    Enns, Murray W; Bernstein, Charles N; Kroeker, Kristine; Graff, Lesley; Walker, John R; Lix, Lisa M; Hitchon, Carol A; El-Gabalawy, Renée; Fisk, John D; Marrie, Ruth Ann

    2018-01-01

    Impairment in work function is a frequent outcome in patients with chronic conditions such as immune-mediated inflammatory diseases (IMID), depression and anxiety disorders. The personal and economic costs of work impairment in these disorders are immense. Symptoms of pain, fatigue, depression and anxiety are potentially remediable forms of distress that may contribute to work impairment in chronic health conditions such as IMID. The present study evaluated the association between pain [Medical Outcomes Study Pain Effects Scale], fatigue [Daily Fatigue Impact Scale], depression and anxiety [Hospital Anxiety and Depression Scale] and work impairment [Work Productivity and Activity Impairment Scale] in four patient populations: multiple sclerosis (n = 255), inflammatory bowel disease (n = 248), rheumatoid arthritis (n = 154) and a depression and anxiety group (n = 307), using quantile regression, controlling for the effects of sociodemographic factors, physical disability, and cognitive deficits. Each of pain, depression symptoms, anxiety symptoms, and fatigue individually showed significant associations with work absenteeism, presenteeism, and general activity impairment (quantile regression standardized estimates ranging from 0.3 to 1.0). When the distress variables were entered concurrently into the regression models, fatigue was a significant predictor of work and activity impairment in all models (quantile regression standardized estimates ranging from 0.2 to 0.5). These findings have important clinical implications for understanding the determinants of work impairment and for improving work-related outcomes in chronic disease.

  17. Multiple imputation for cure rate quantile regression with censored data.

    PubMed

    Wu, Yuanshan; Yin, Guosheng

    2017-03-01

    The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.

  18. Quantile regression reveals hidden bias and uncertainty in habitat models

    Treesearch

    Brian S. Cade; Barry R. Noon; Curtis H. Flather

    2005-01-01

    We simulated the effects of missing information on statistical distributions of animal response that covaried with measured predictors of habitat to evaluate the utility and performance of quantile regression for providing more useful intervals of uncertainty in habitat relationships. These procedures were evaulated for conditions in which heterogeneity and hidden bias...

  19. Early Home Activities and Oral Language Skills in Middle Childhood: A Quantile Analysis

    ERIC Educational Resources Information Center

    Law, James; Rush, Robert; King, Tom; Westrupp, Elizabeth; Reilly, Sheena

    2018-01-01

    Oral language development is a key outcome of elementary school, and it is important to identify factors that predict it most effectively. Commonly researchers use ordinary least squares regression with conclusions restricted to average performance conditional on relevant covariates. Quantile regression offers a more sophisticated alternative.…

  20. Regional trends in short-duration precipitation extremes: a flexible multivariate monotone quantile regression approach

    NASA Astrophysics Data System (ADS)

    Cannon, Alex

    2017-04-01

    Estimating historical trends in short-duration rainfall extremes at regional and local scales is challenging due to low signal-to-noise ratios and the limited availability of homogenized observational data. In addition to being of scientific interest, trends in rainfall extremes are of practical importance, as their presence calls into question the stationarity assumptions that underpin traditional engineering and infrastructure design practice. Even with these fundamental challenges, increasingly complex questions are being asked about time series of extremes. For instance, users may not only want to know whether or not rainfall extremes have changed over time, they may also want information on the modulation of trends by large-scale climate modes or on the nonstationarity of trends (e.g., identifying hiatus periods or periods of accelerating positive trends). Efforts have thus been devoted to the development and application of more robust and powerful statistical estimators for regional and local scale trends. While a standard nonparametric method like the regional Mann-Kendall test, which tests for the presence of monotonic trends (i.e., strictly non-decreasing or non-increasing changes), makes fewer assumptions than parametric methods and pools information from stations within a region, it is not designed to visualize detected trends, include information from covariates, or answer questions about the rate of change in trends. As a remedy, monotone quantile regression (MQR) has been developed as a nonparametric alternative that can be used to estimate a common monotonic trend in extremes at multiple stations. Quantile regression makes efficient use of data by directly estimating conditional quantiles based on information from all rainfall data in a region, i.e., without having to precompute the sample quantiles. The MQR method is also flexible and can be used to visualize and analyze the nonlinearity of the detected trend. 
However, it is fundamentally a univariate technique, and cannot incorporate information from additional covariates, for example ENSO state or physiographic controls on extreme rainfall within a region. Here, the univariate MQR model is extended to allow the use of multiple covariates. Multivariate monotone quantile regression (MMQR) is based on a single hidden-layer feedforward network with the quantile regression error function and partial monotonicity constraints. The MMQR model is demonstrated via Monte Carlo simulations and the estimation and visualization of regional trends in moderate rainfall extremes based on homogenized sub-daily precipitation data at stations in Canada.

  1. A gentle introduction to quantile regression for ecologists

    USGS Publications Warehouse

    Cade, B.S.; Noon, B.R.

    2003-01-01

    Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Typically, not all of the factors that affect ecological processes are measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X), yet stronger, useful predictive relationships may exist with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
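The situation described above, a weak mean relationship alongside strong relationships in other parts of the distribution, is easy to reproduce numerically. The following is a brute-force sketch, not how regression quantiles are actually solved (real implementations use linear programming); the data and grid ranges are arbitrary illustrative choices.

```python
import random

def check_loss(y, yhat, tau):
    # Asymmetric absolute (check) loss that defines the tau-th regression quantile
    u = y - yhat
    return u * (tau - (u < 0))

def linear_quantile(x, y, tau):
    """Crude grid search for the (intercept, slope) minimizing total check loss."""
    grid_b0 = [i / 25 for i in range(-25, 26)]   # intercepts -1.0 .. 1.0
    grid_b1 = [i / 25 for i in range(-25, 51)]   # slopes -1.0 .. 2.0
    best = min(((sum(check_loss(yi, b0 + b1 * xi, tau) for xi, yi in zip(x, y)), b0, b1)
                for b0 in grid_b0 for b1 in grid_b1), key=lambda t: t[0])
    return best[1], best[2]

# Heteroscedastic data: y = x * e with e uniform on (0, 1), so the conditional
# tau-quantile is tau * x and upper quantiles have much steeper slopes.
random.seed(0)
x = [random.random() for _ in range(200)]
y = [xi * random.random() for xi in x]
b0_med, b1_med = linear_quantile(x, y, 0.5)
b0_hi, b1_hi = linear_quantile(x, y, 0.9)
print(b1_med, b1_hi)  # the 0.9 slope tracks the upper envelope of the scatter
```

Here a mean or median fit understates the response of the upper edge of the distribution, which is the heterogeneity quantile regression is designed to reveal.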

  2. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683

  3. Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach

    NASA Astrophysics Data System (ADS)

    Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic

    2015-04-01

    Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are commonly used for calibrating hydrological models, managing water quality and classifying catchments, among other applications. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must instead be obtained in another manner, for example through reconstructions. Regression-based reconstructions estimate quantiles one at a time from catchment attributes (climatic or physical features). The advantage of this category of methods is that it is informative about the processes and is non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemes, 1986) to a set of 600 French catchments. Half of the catchments are treated as gauged and used to calibrate the regression and compute its residuals. The QS approach consists of a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters and finally exploits the spatial correlation of the residuals. The innovation is the use of parameter continuity across quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24
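A minimal sketch of both halves of the problem described above: computing an empirical FDC from a gauged record, and repairing a non-monotone set of reconstructed quantiles by rearrangement. The rearrangement shown is a generic fix, much simpler than the QS regionalization itself, and the data are illustrative.

```python
def flow_duration_curve(flows):
    """Empirical FDC as (exceedance probability, flow) pairs, using the
    Weibull plotting position i / (n + 1)."""
    ranked = sorted(flows, reverse=True)
    n = len(ranked)
    return [((i + 1) / (n + 1), q) for i, q in enumerate(ranked)]

def rearrange(quantiles):
    """Regression-based reconstructions can yield quantiles that fail to
    increase with probability level; sorting restores a valid ordering."""
    return sorted(quantiles)

fdc = flow_duration_curve([3.0, 1.0, 4.0, 1.0, 5.0])
print(fdc[0])                              # the largest flow, exceeded least often
print(rearrange([0.2, 0.5, 0.4, 0.9]))     # [0.2, 0.4, 0.5, 0.9]
```

QS instead builds monotonicity in at the estimation stage, by tying the regression parameters together across quantile levels, so the reconstruction never needs such a post hoc repair.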

  4. Forecasting conditional climate-change using a hybrid approach

    USGS Publications Warehouse

    Esfahani, Akbar Akbari; Friedel, Michael J.

    2014-01-01

    A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale provided self-similarity exists.

  5. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    ERIC Educational Resources Information Center

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…

  6. Incremental impact of body mass status with modifiable unhealthy lifestyle behaviors on pharmaceutical expenditure.

    PubMed

    Kim, Tae Hyun; Lee, Eui-Kyung; Han, Euna

    Overweight/obesity is a growing health risk in Korea. The impact of overweight/obesity on pharmaceutical expenditure can be larger if individuals have multiple risk factors and multiple comorbidities. The current study estimated the combined effects of overweight/obesity and other unhealthy behaviors on pharmaceutical expenditure. An instrumental variable quantile regression model was estimated using Korea Health Panel Study data. The current study extracted data from 3 waves (2009, 2010, and 2011). The final sample included 7148 person-year observations for adults aged 20 years or older. Overweight/obese individuals had higher pharmaceutical expenditure than their non-obese counterparts only at the upper quantiles of the conditional distribution of pharmaceutical expenditure (by 119% at the 90th quantile and 115% at the 95th). The current study found a stronger association at the upper quantiles among men (152%, 144%, and 150% at the 75th, 90th, and 95th quantiles, respectively) than among women (152%, 150%, and 148% at the same quantiles). The association at the upper quantiles was stronger when overweight/obesity was combined with moderate to heavy drinking and no regular physical check-up, particularly among men. The current study confirms that the association of overweight/obesity combined with modifiable unhealthy behaviors with pharmaceutical expenditure is larger than that of overweight/obesity alone. Assessing the effect of overweight/obesity together with lifestyle risk factors can help target groups for public health intervention programs. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Effects of environmental variables on invasive amphibian activity: Using model selection on quantiles for counts

    USGS Publications Warehouse

    Muller, Benjamin J.; Cade, Brian S.; Schwarzkopf, Lin

    2018-01-01

    Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.

  8. An impact of environmental changes on flows in the reach scale under a range of climatic conditions

    NASA Astrophysics Data System (ADS)

    Karamuz, Emilia; Romanowicz, Renata J.

    2016-04-01

    The present paper combines the detection and identification of causes of changes in flow regime at cross-sections along the Middle River Vistula reach using different methods. Two main experimental setups (designs) have been applied to study the changes: a moving three-year window, and a low- and high-flow event-based approach. In the first experiment, a Stochastic Transfer Function (STF) model and a quantile-based statistical analysis of flow patterns were compared. These two methods are based on the analysis of changes in the STF model parameters and in standardised differences of flow quantile values. In the second experiment, in addition to the STF-based model, a 1-D distributed model, MIKE11, was applied. The first step of the procedure used in the study is to define the river reaches that have recorded information on land use and water management changes. The second is to perform the moving-window analysis of standardised differences of flow quantiles and the moving-window optimisation of the STF model for flow routing. The third step consists of an optimisation of the STF and MIKE11 models for high- and low-flow events. The final step is to analyse the results and relate the standardised quantile changes and model parameter changes to historical land use changes and water management practices. Results indicate that both models give a consistent assessment of changes in the channel for medium and high flows. Acknowledgements: This research was supported by the Institute of Geophysics, Polish Academy of Sciences, through Young Scientist Grant no. 3b/IGF PAN/2015.

  9. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained under H0), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with >2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had none because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. 
Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.

  10. Numerical analysis of the accuracy of bivariate quantile distributions utilizing copulas compared to the GUM supplement 2 for oil pressure balance uncertainties

    NASA Astrophysics Data System (ADS)

    Ramnath, Vishal

    2017-11-01

    In the field of pressure metrology the effective area is Ae = A0 (1 + λP), where A0 is the zero-pressure area and λ is the distortion coefficient, and the conventional practice is to construct univariate probability density functions (PDFs) for A0 and λ. As a result, analytical generalized non-Gaussian bivariate joint PDFs have not featured prominently in pressure metrology. Recently, quantile functions based on the extended lambda distribution have been successfully utilized for summarizing arbitrary univariate PDFs of gas pressure balances. Motivated by this development, we investigate the feasibility and utility of extending and applying quantile functions to systems which naturally exhibit bivariate PDFs. Our approach is to utilize the GUM Supplement 1 methodology to generate Monte Carlo based multivariate uncertainty data for an oil-based pressure balance laboratory standard that is used to generate known high pressures, which are in turn cross-floated against another pressure balance transfer standard in order to deduce the transfer standard's area. We then numerically analyse the uncertainty data by formulating and constructing an approximate bivariate quantile distribution that directly couples A0 and λ, in order to compare its accuracy to an exact GUM Supplement 2 based uncertainty quantification analysis.
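
    The propagation the abstract describes can be illustrated with a toy GUM-Supplement-1-style Monte Carlo for the correlated pair (A0, λ); every numerical value below is a purely illustrative assumption, not the paper's data:

```python
import numpy as np

# Hypothetical correlated Monte Carlo draws for (A0, lambda), of the kind
# a GUM Supplement 1 propagation would produce. Means, uncertainties, and
# the -0.5 correlation are invented for illustration.
rng = np.random.default_rng(3)
mean = np.array([1.96e-5, 4.0e-12])              # A0 (m^2), lambda (1/Pa)
u_a0, u_lam, rho = 2e-9, 5e-13, -0.5
cov = np.array([[u_a0**2,          rho * u_a0 * u_lam],
                [rho * u_a0 * u_lam, u_lam**2        ]])
draws = rng.multivariate_normal(mean, cov, size=100_000)

# Propagate each (A0, lambda) draw through Ae = A0 * (1 + lambda * P)
# at a working pressure P, then read off a 95 % coverage interval.
P = 5.0e7  # 50 MPa
Ae = draws[:, 0] * (1.0 + draws[:, 1] * P)
lo, hi = np.quantile(Ae, [0.025, 0.975])
assert lo < Ae.mean() < hi
```

    The bivariate quantile construction in the paper replaces this marginal read-out with a distribution that couples A0 and λ directly.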

  11. Food away from home and body mass outcomes: taking heterogeneity into account enhances quality of results.

    PubMed

    Kim, Tae Hyun; Lee, Eui-Kyung; Han, Euna

    2014-09-01

    The aim of this study was to explore the heterogeneous association of consumption of food away from home (FAFH) with individual body mass outcomes, including body mass index and waist circumference, over the entire conditional distribution of each outcome. Information on 16,403 adults obtained from nationally representative data on nutrition and behavior in Korea was used. A quantile regression model captured the variability of the association of FAFH with body mass outcomes across the entire conditional distribution of each outcome measure. Heavy FAFH consumption was defined as obtaining ≥1400 kcal from FAFH on a single day. Heavy FAFH consumption, specifically at full-service restaurants, was significantly associated with higher body mass index (+0.46 kg/m2 at the 50th quantile, 0.55 at the 75th, 0.66 at the 90th, and 0.44 at the 95th) and waist circumference (+0.96 cm at the 25th quantile, 1.06 cm at the 50th, 1.35 cm at the 75th, and 0.96 cm at the 90th quantile), with overall larger associations at higher quantiles. Findings of the study indicate that conventional regression methods may mask important heterogeneity in the association between heavy FAFH consumption and body mass outcomes. Further public health efforts are needed to improve the nutritional quality of affordable FAFH choices, strengthen nutrition education, and establish a healthy food consumption environment. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Quantile regression models of animal habitat relationships

    USGS Publications Warehouse

    Cade, Brian S.

    2003-01-01

    Typically, all factors that limit an organism are not measured and included in statistical models used to investigate relationships with their environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates performance of quantile rankscore tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop in dispersion, F-ratio like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal was confounded with the effect of another unmeasured variable (spatially and not spatially structured). 
Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
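
    The linear quantile regression estimates discussed in these chapters minimize the Koenker-Bassett check (pinball) loss. A minimal numerical sketch on synthetic heterogeneous data (using a generic optimizer for clarity, rather than the linear-programming algorithms normally used):

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Koenker-Bassett check loss: tau*u for u >= 0, (tau-1)*u otherwise."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def fit_linear_quantile(x, y, tau):
    """Fit y ~ b0 + b1*x at quantile tau by minimizing total check loss."""
    def objective(beta):
        return check_loss(y - (beta[0] + beta[1] * x), tau).sum()
    return minimize(objective, x0=np.zeros(2), method="Nelder-Mead").x

# Heterogeneous toy data: the spread of y grows with x, mimicking the
# unmeasured-limiting-factor models described above.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5 * x + x * rng.uniform(0, 1, 500)
b_lo = fit_linear_quantile(x, y, 0.10)
b_hi = fit_linear_quantile(x, y, 0.90)
assert b_hi[1] > b_lo[1]  # upper-quantile slope is steeper under heterogeneity
```

    With this data-generating process the true slope at quantile τ is 0.5 + τ, so the fitted upper- and lower-quantile slopes diverge, which is exactly the heterogeneity that a mean-rate model averages away.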

  13. Estimating the Extreme Behaviors of Students Performance Using Quantile Regression--Evidences from Taiwan

    ERIC Educational Resources Information Center

    Chen, Sheng-Tung; Kuo, Hsiao-I.; Chen, Chi-Chung

    2012-01-01

    The two-stage least squares approach together with quantile regression analysis is adopted here to estimate the educational production function. Such a methodology is able to capture the extreme behaviors of the two tails of students' performance and the estimation outcomes have important policy implications. Our empirical study is applied to the…

  14. The weighted function method: A handy tool for flood frequency analysis or just a curiosity?

    NASA Astrophysics Data System (ADS)

    Bogdanowicz, Ewa; Kochanek, Krzysztof; Strupczewski, Witold G.

    2018-04-01

    The idea of the Weighted Function (WF) method for estimation of the Pearson type 3 (Pe3) distribution, introduced by Ma in 1984, has been revised and successfully applied to the shifted inverse Gaussian (IGa3) distribution. The conditions of WF applicability to a shifted distribution have also been formulated. The accuracy of WF flood quantiles for both the Pe3 and IGa3 distributions was assessed by Monte Carlo simulations under the true and false distribution assumption, versus the maximum likelihood (MLM), moment (MOM) and L-moments (LMM) methods. Three datasets of annual peak flows from Polish catchments serve as case studies to compare the performance of WF, MOM, MLM and LMM on real flood data. For the hundred-year flood, the WF method revealed explicit superiority only over the MLM, surpassing the MOM and especially the LMM, with respect to relative bias and relative root mean square error under both the true and false distributional assumption. Generally, the WF method performs well for hydrological sample sizes and constitutes a good alternative for the estimation of upper flood quantiles.

  15. L-statistics for Repeated Measurements Data With Application to Trimmed Means, Quantiles and Tolerance Intervals.

    PubMed

    Assaad, Houssein I; Choudhary, Pankaj K

    2013-01-01

    The L-statistics form an important class of estimators in nonparametric statistics. Its members include trimmed means and sample quantiles and functions thereof. This article is devoted to theory and applications of L-statistics for repeated measurements data, wherein the measurements on the same subject are dependent and the measurements from different subjects are independent. This article has three main goals: (a) Show that the L-statistics are asymptotically normal for repeated measurements data. (b) Present three statistical applications of this result, namely, location estimation using trimmed means, quantile estimation and construction of tolerance intervals. (c) Obtain a Bahadur representation for sample quantiles. These results are generalizations of similar results for independently and identically distributed data. The practical usefulness of these results is illustrated by analyzing a real data set involving measurement of systolic blood pressure. The properties of the proposed point and interval estimators are examined via simulation.
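
    Two of the L-statistics the abstract names can be sketched for the single-sample case as follows (illustrative data; the repeated-measures asymptotics are the paper's contribution and are not reproduced here):

```python
import numpy as np

def trimmed_mean(x, prop):
    """Symmetrically trimmed mean: drop the lowest and highest `prop`
    fraction of observations, then average the rest. As a weighted
    average of order statistics, this is an L-statistic."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.floor(prop * x.size))
    return x[k:x.size - k].mean()

x = np.array([2.0, 4.0, 100.0, 5.0, 3.0, 6.0])  # one gross outlier
tm = trimmed_mean(x, 0.2)          # trims the 2.0 and the 100.0
med = np.quantile(x, 0.5)          # sample median, also an L-statistic
assert tm == 4.5 and med == 4.5    # both ignore the outlier
assert np.mean(x) > 10             # the untrimmed mean does not
```
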

  16. Using Gamma and Quantile Regressions to Explore the Association between Job Strain and Adiposity in the ELSA-Brasil Study: Does Gender Matter?

    PubMed

    Fonseca, Maria de Jesus Mendes da; Juvanhol, Leidjaira Lopes; Rotenberg, Lúcia; Nobre, Aline Araújo; Griep, Rosane Härter; Alves, Márcia Guimarães de Mello; Cardoso, Letícia de Oliveira; Giatti, Luana; Nunes, Maria Angélica; Aquino, Estela M L; Chor, Dóra

    2017-11-17

    This paper explores the association between job strain and adiposity, using two statistical analysis approaches and considering the role of gender. The research evaluated 11,960 active baseline participants (2008-2010) in the ELSA-Brasil study. Job strain was evaluated through a demand-control questionnaire, while body mass index (BMI) and waist circumference (WC) were evaluated in continuous form. The associations were estimated using gamma regression models with an identity link function. Quantile regression models were also estimated from the final set of co-variables established by gamma regression. The relationship that was found varied by analytical approach and gender. Among the women, no association was observed between job strain and adiposity in the fitted gamma models. In the quantile models, a pattern of increasing effects of high strain was observed at higher BMI and WC distribution quantiles. Among the men, high strain was associated with adiposity in the gamma regression models. However, when quantile regression was used, that association was found not to be homogeneous across outcome distributions. In addition, in the quantile models an association was observed between active jobs and BMI. Our results point to an association between job strain and adiposity, which follows a heterogeneous pattern. Modelling strategies can produce different results and should, accordingly, be used to complement one another.

  17. Modeling the human development index and the percentage of poor people using quantile smoothing splines

    NASA Astrophysics Data System (ADS)

    Mulyani, Sri; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Mean regression is a statistical method to explain the relationship between the response variable and the predictor variable based on the central tendency (mean) of the response variable. Parameter estimation in mean regression (with Ordinary Least Squares, OLS) becomes problematic when applied to data that are asymmetric, fat-tailed, or contain outliers. Hence, an alternative method is needed for such data, for example quantile regression. Quantile regression is robust to outliers. This model can explain the relationship between the response variable and the predictor variable not only at the central tendency of the data (the median) but also at various quantiles, in order to obtain complete information about that relationship. In this study, quantile regression is developed with a nonparametric approach, namely smoothing splines. A nonparametric approach is used when the model is difficult to prespecify, i.e., the relation between the two variables follows an unknown function. We apply the proposed method to poverty data, estimating the Percentage of Poor People as the response variable with the Human Development Index (HDI) as the predictor variable.

  18. Predicting birth weight with conditionally linear transformation models.

    PubMed

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.

  19. Alternative configurations of Quantile Regression for estimating predictive uncertainty in water level forecasts for the Upper Severn River: a comparison

    NASA Astrophysics Data System (ADS)

    Lopez, Patricia; Verkade, Jan; Weerts, Albrecht; Solomatine, Dimitri

    2014-05-01

    Hydrological forecasting is subject to many sources of uncertainty, including those originating in initial state, boundary conditions, model structure and model parameters. Although uncertainty can be reduced, it can never be fully eliminated. Statistical post-processing techniques constitute an often used approach to estimate the hydrological predictive uncertainty, where a model of forecast error is built using a historical record of past forecasts and observations. The present study focuses on the use of the Quantile Regression (QR) technique as a hydrological post-processor. It estimates the predictive distribution of water levels using deterministic water level forecasts as predictors. This work aims to thoroughly verify uncertainty estimates using the implementation of QR that was applied in an operational setting in the UK National Flood Forecasting System, and to inter-compare forecast quality and skill across differing configurations of QR. These configurations are (i) 'classical' QR, (ii) QR constrained by a requirement that quantiles do not cross, (iii) QR derived on time series that have been transformed into the Normal domain (Normal Quantile Transformation - NQT), and (iv) a piecewise linear derivation of QR models. The QR configurations are applied to fourteen hydrological stations on the Upper Severn River with differing catchment characteristics. Results of each QR configuration are conditionally verified for progressively higher flood levels, in terms of commonly used verification metrics and skill scores. These include the Brier score (BS), the continuous ranked probability score (CRPS) and corresponding skill scores, as well as the Relative Operating Characteristic score (ROCS). Reliability diagrams are also presented and analysed. The results indicate that none of the four Quantile Regression configurations clearly outperforms the others.
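
    Configuration (ii) requires that the estimated quantiles not cross. One simple post-hoc way to enforce this (monotone rearrangement, shown here as an illustrative assumption rather than the operational implementation) is to sort the predicted quantiles at each forecast point:

```python
import numpy as np

def rearrange(quantile_preds):
    """Monotone rearrangement: sort the predicted quantiles at each
    forecast point so that the estimated quantile curves cannot cross
    (Chernozhukov, Fernandez-Val & Galichon, 2010)."""
    return np.sort(np.asarray(quantile_preds, dtype=float), axis=0)

# Rows = tau levels (0.1, 0.5, 0.9), columns = forecast cases.
preds = np.array([[2.0, 3.1, 4.0],
                  [1.8, 3.0, 5.0],   # 0.5-curve dips below the 0.1-curve
                  [2.5, 3.5, 6.0]])
fixed = rearrange(preds)
assert np.all(np.diff(fixed, axis=0) >= 0)  # non-crossing guaranteed
```
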

  20. Smooth quantile normalization.

    PubMed

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
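
    As a baseline for comparison, the classic global quantile normalization that qsmooth generalizes can be sketched in a few lines (illustrative data; ties are broken by rank order here, whereas full implementations average tied ranks):

```python
import numpy as np

def quantile_normalize(X):
    """Classic (global) quantile normalization: force every column
    (sample) of X to share a common distribution, namely the row-wise
    mean of the per-column sorted values."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # rank within column
    reference = np.sort(X, axis=0).mean(axis=1)        # shared distribution
    return reference[ranks]

# Rows = features, columns = samples (toy expression matrix).
X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
Xn = quantile_normalize(X)
# After normalization, all samples have exactly the same distribution.
for j in range(1, Xn.shape[1]):
    assert np.allclose(np.sort(Xn[:, 0]), np.sort(Xn[:, j]))
```

    qsmooth relaxes exactly this step: instead of a single global reference distribution, the reference is smoothed between the global one and group-specific ones.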

  1. Height premium for job performance.

    PubMed

    Kim, Tae Hyun; Han, Euna

    2017-08-01

    This study assessed the relationship of height with wages, using the 1998 and 2012 Korean Labor and Income Panel Study data. The key independent variable was height measured in centimeters, which was included as a series of dummy indicators of height per 5cm span (<155cm, 155-160cm, 160-165cm, and ≥165cm for women; <165cm, 165-170cm, 170-175cm, 175-180cm, and ≥180cm for men). We controlled for household- and individual-level random effects. We used a random-effect quantile regression model for monthly wages to assess the heterogeneity in the height-wage relationship, across the conditional distribution of monthly wages. We found a non-linear relationship of height with monthly wages. For men, the magnitude of the height wage premium was overall larger at the upper quantile of the conditional distribution of log monthly wages than at the median to low quantile, particularly in professional and semi-professional occupations. The height-wage premium was also larger at the 90th quantile for self-employed women and salaried men. Our findings add a global dimension to the existing evidence on height-wage premium, demonstrating non-linearity in the association between height and wages and heterogeneous changes in the dispersion and direction of the association between height and wages, by wage level. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care.

    PubMed

    Kowalski, Amanda

    2016-01-02

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.

  3. Estimation of peak discharge quantiles for selected annual exceedance probabilities in northeastern Illinois

    USGS Publications Warehouse

    Over, Thomas M.; Saito, Riki J.; Veilleux, Andrea G.; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey L.

    2016-06-28

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, regional skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. 
The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at ungaged sites and to improve flood-quantile estimates at and near a gaged site; (2) the urbanization-adjusted annual maximum peak discharges and peak discharge quantile estimates at streamgages from 181 watersheds including the 117 study watersheds and 64 additional watersheds in the study region that were originally considered for use in the study but later deemed to be redundant. The urbanization-adjustment equations, spatial regression equations, and peak discharge quantile estimates developed in this study will be made available in the web application StreamStats, which provides automated regression-equation solutions for user-selected stream locations. Figures and tables comparing the observed and urbanization-adjusted annual maximum peak discharge records by streamgage are provided at https://doi.org/10.3133/sir20165050 for download.
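
    The AEP/recurrence-interval correspondence used throughout the report, together with a naive empirical quantile estimate from an annual-maximum series (synthetic data; the report itself fits distributions with the Expected Moments Algorithm, not this shortcut), can be sketched as:

```python
import numpy as np

# Recurrence interval is the reciprocal of annual exceedance probability.
aeps = np.array([0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, 0.002])
recurrence = 1.0 / aeps
assert np.allclose(recurrence, [2, 5, 10, 25, 50, 100, 200, 500])

# Hypothetical annual-maximum peak discharges (m^3/s). The peak discharge
# at AEP q is the (1 - q) non-exceedance quantile of the annual maxima.
rng = np.random.default_rng(2)
annual_peaks = rng.gumbel(loc=300.0, scale=80.0, size=60)
q10 = np.quantile(annual_peaks, 1 - 0.10)   # rough "10-year flood"
q2 = np.quantile(annual_peaks, 1 - 0.50)    # median annual flood
assert q10 > q2
```
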

  4. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of quantile estimates for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, whereas ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there is little difference between the quantiles estimated by ML and PWM, whereas those from MOM differ distinctly.
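    A Monte Carlo sketch of the kind of experiment described above can be written in a few lines: estimate a T-year quantile of a generalized logistic sample by maximum likelihood and summarize its accuracy with RBIAS and RRMSE. Note the assumptions: scipy's `genlogistic` (a Type I generalized logistic) stands in for the hydrological GL family, whose parameterization differs, and the shape value and sample size are illustrative, not from the paper.

```python
# Monte Carlo accuracy check for an ML quantile estimator of a
# generalized logistic sample (scipy's Type I parameterization).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
shape, n, reps = 2.0, 50, 200
T = 100                                  # return period
p = 1.0 - 1.0 / T                        # non-exceedance probability
true_q = stats.genlogistic.ppf(p, shape)

est = np.empty(reps)
for i in range(reps):
    sample = stats.genlogistic.rvs(shape, size=n, random_state=rng)
    c_hat, loc_hat, scale_hat = stats.genlogistic.fit(sample)  # ML fit
    est[i] = stats.genlogistic.ppf(p, c_hat, loc=loc_hat, scale=scale_hat)

rbias = np.mean(est - true_q) / true_q                 # relative bias
rrmse = np.sqrt(np.mean((est - true_q) ** 2)) / true_q # relative RMSE
```

    Repeating this over a grid of sample sizes and return periods reproduces the qualitative pattern the abstract reports: RRMSE grows with T and shrinks with n.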

  5. Distributional changes in rainfall and river flow in Sarawak, Malaysia

    NASA Astrophysics Data System (ADS)

    Sa'adi, Zulfaqar; Shahid, Shamsuddin; Ismail, Tarmizi; Chung, Eun-Sung; Wang, Xiao-Jun

    2017-11-01

    Climate change may leave the mean rainfall unchanged while altering its variability and extremes. It is therefore necessary to explore possible distributional changes in rainfall characteristics over time. The objective of the present study is to assess distributional changes in annual and northeast monsoon (November-January) rainfall and river flow in Sarawak, where small changes in rainfall or river flow variability/distribution may have severe implications for ecology and agriculture. A quantile regression-based approach was used to assess changes in the scale and location of the empirical probability density function over the period 1980-2014 at 31 observational stations. The results indicate that diverse variation patterns exist at all stations for annual rainfall, but mainly increasing trends in the lower and higher quantiles for the months of January and December. The significant increase in annual rainfall is found mostly in the north and central-coastal regions, and in monsoon-month rainfall in the interior and north of Sarawak. Trends in river flow data show that changes in rainfall distribution have affected the higher quantiles of river flow in monsoon months at some of the basins, implying more frequent flooding. The study reveals that quantile trends can provide more information about rainfall change, which may be useful for climate change mitigation and adaptation planning.
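    The quantile-trend idea used above can be sketched with the standard quantile-regression linear program: for each level tau, fit a linear trend in that quantile of the series. The data below are synthetic stand-ins for the 1980-2014 station records (an upper tail that widens over time), and only scipy is assumed.

```python
# Quantile trends in a synthetic annual rainfall series, fitted by the
# exact LP formulation of quantile regression.
import numpy as np
from scipy.optimize import linprog

def quantreg(X, y, tau):
    """min_b tau*sum(u) + (1-tau)*sum(v)  s.t.  y - X@b = u - v, u,v >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    return linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs").x[:p]

rng = np.random.default_rng(0)
t = np.arange(35, dtype=float)            # years since 1980
# upper tail widens over time: a multiplicative gamma term grows with t
rain = 2000.0 + t * rng.gamma(2.0, 5.0, size=t.size) \
       + rng.normal(0.0, 50.0, size=t.size)

X = np.column_stack([np.ones_like(t), t])
slopes = {tau: quantreg(X, rain, tau)[1] for tau in (0.1, 0.5, 0.9)}
```

    A larger slope at tau = 0.9 than at tau = 0.1 is exactly the "distributional change without (much) mean change" signature the study looks for.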

  6. Removing technical variability in RNA-seq data using conditional quantile normalization.

    PubMed

    Hansen, Kasper D; Irizarry, Rafael A; Wu, Zhijin

    2012-04-01

    The ability to measure gene expression on a genome-wide scale is one of the most promising accomplishments in molecular biology. Microarrays, the technology that first permitted this, were riddled with problems due to unwanted sources of variability. Many of these problems are now mitigated, after a decade's worth of statistical methodology development. The recently developed RNA sequencing (RNA-seq) technology has generated much excitement in part due to claims of reduced variability in comparison to microarrays. However, we show that RNA-seq data demonstrate unwanted and obscuring variability similar to what was first observed in microarrays. In particular, we find guanine-cytosine content (GC-content) has a strong sample-specific effect on gene expression measurements that, if left uncorrected, leads to false positives in downstream results. We also report on commonly observed data distortions that demonstrate the need for data normalization. Here, we describe a statistical methodology that improves precision by 42% without loss of accuracy. Our resulting conditional quantile normalization algorithm combines robust generalized regression to remove systematic bias introduced by deterministic features such as GC-content and quantile normalization to correct for global distortions.
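    One of the two ingredients of the conditional quantile normalization method described above, plain (unconditional) quantile normalization across samples, can be sketched in a few lines; the robust regression step that removes GC-content bias is omitted here, and the tiny expression matrix is a made-up example.

```python
# Plain quantile normalization: force every column (sample) to share the
# same distribution, namely the mean of the per-rank values across columns.
import numpy as np

def quantile_normalize(mat):
    order = np.argsort(mat, axis=0)      # rank order within each sample
    ranked = np.sort(mat, axis=0)
    target = ranked.mean(axis=1)         # reference distribution per rank
    out = np.empty_like(mat, dtype=float)
    for j in range(mat.shape[1]):
        out[order[:, j], j] = target     # k-th smallest value -> target[k]
    return out

expr = np.array([[5.0, 4.0, 3.0],
                 [2.0, 1.0, 4.0],
                 [3.0, 4.0, 6.0],
                 [4.0, 2.0, 8.0]])
norm = quantile_normalize(expr)
```

    After normalization every column has exactly the same sorted values, which is what corrects the global distortions mentioned in the abstract; cqn additionally conditions this correction on covariates such as GC-content.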

  7. Censored Quantile Instrumental Variable Estimates of the Price Elasticity of Expenditure on Medical Care

    PubMed Central

    Kowalski, Amanda

    2015-01-01

    Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member’s injury to induce variation in an individual’s own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from −0.76 to −1.49, which are an order of magnitude larger than previous estimates. PMID:26977117

  8. GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA

    PubMed Central

    Zheng, Qi; Peng, Limin; He, Xuming

    2015-01-01

    Quantile regression has become a valuable tool to analyze heterogeneous covaraite-response associations that are often encountered in practice. The development of quantile regression methodology for high dimensional covariates primarily focuses on examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantiles levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal. PMID:26604424
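    The basic object being penalized here, an L1-penalized check-loss fit at one quantile level, can be solved exactly as a linear program by splitting the coefficient vector into positive and negative parts. This sketch covers a single tau with an arbitrary penalty weight; the paper's adaptive weights and uniform tuning-parameter selector over a range of quantile levels are not reproduced.

```python
# L1-penalized quantile regression at a single level tau, as an exact LP.
import numpy as np
from scipy.optimize import linprog

def l1_quantreg(X, y, tau, lam):
    """Split beta = b_plus - b_minus so the lasso penalty stays linear."""
    n, p = X.shape
    c = np.concatenate([lam * np.ones(2 * p),
                        tau * np.ones(n), (1.0 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * p + 2 * n), method="highs")
    return res.x[:p] - res.x[p:2 * p]

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [2.0, -1.5]                  # sparse truth: 2 active signals
y = X @ beta_true + rng.normal(size=n)
beta = l1_quantreg(X, y, tau=0.5, lam=5.0)
```

    With a suitable penalty the two true signals dominate the fitted coefficient vector while the noise coefficients are shrunk toward zero, which is the sparsity behavior the penalization framework above studies uniformly over tau.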

  9. magicaxis: Pretty scientific plotting with minor-tick and log minor-tick support

    NASA Astrophysics Data System (ADS)

    Robotham, Aaron S. G.

    2016-04-01

    The R package magicaxis produces useful, attractive scientific plots, providing functions for base plotting with particular emphasis on clear axis labelling in circumstances commonly encountered in scientific figures. It also includes functions for generating images and contours that reflect the 2D quantile levels of the data, designed particularly for the output of MCMC posteriors, where visualizing the location of the 68% and 95% 2D quantiles of covariant parameters is a necessary part of post-MCMC analysis. In addition, it can generate low and high error bars, and allows clipping of values, rejection of bad values, and log stretching.
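    The "2D quantile levels" that such contour functions visualize can be computed directly from a sample cloud: find the density thresholds whose super-level sets enclose 68% and 95% of the points. This numpy-only sketch uses a crude histogram density estimate and a synthetic correlated-Gaussian "posterior"; it illustrates the quantity, not magicaxis itself.

```python
# Density thresholds enclosing 68% / 95% of a 2D MCMC-like sample cloud.
import numpy as np

rng = np.random.default_rng(3)
samples = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]],
                                  size=20000)

# crude density estimate: bin the samples on a 60x60 grid
H, xe, ye = np.histogram2d(samples[:, 0], samples[:, 1], bins=60)
ix = np.clip(np.searchsorted(xe, samples[:, 0]) - 1, 0, 59)
iy = np.clip(np.searchsorted(ye, samples[:, 1]) - 1, 0, 59)
dens = H[ix, iy]                      # bin count at each sample's location

# thresholds whose super-level sets hold ~68% / ~95% of the samples;
# contouring H at these values draws the 2D quantile regions
levels = {q: np.quantile(dens, 1.0 - q) for q in (0.68, 0.95)}
inside68 = np.mean(dens >= levels[0.68])
inside95 = np.mean(dens >= levels[0.95])
```

    Passing `levels` to any contouring routine (e.g. matplotlib's `contour` over `H`) then draws the 68%/95% credible regions for the covariant parameter pair.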

  10. Quality of life in breast cancer patients--a quantile regression analysis.

    PubMed

    Pourhoseingholi, Mohamad Amin; Safaee, Azadeh; Moghimi-Dehkordi, Bijan; Zeighami, Bahram; Faghihzadeh, Soghrat; Tabatabaee, Hamid Reza; Pourhoseingholi, Asma

    2008-01-01

    Quality of life studies play an important role in health care, especially for chronic diseases, in clinical judgment and in the allocation of medical resources. Statistical tools like linear regression are widely used to assess the predictors of quality of life, but when the response is not normally distributed the results can be misleading. The aim of this study is to determine the predictors of quality of life in breast cancer patients using a quantile regression model and to compare it with linear regression. A cross-sectional study was conducted on 119 breast cancer patients admitted and treated in the chemotherapy ward of Namazi hospital in Shiraz. We used the QLQ-C30 questionnaire to assess quality of life in these patients. A quantile regression was employed to assess the associated factors, and the results were compared to linear regression. All analyses were carried out using SAS. The mean score for the global health status for breast cancer patients was 64.92+/-11.42. Linear regression showed that only grade of tumor, occupational status, menopausal status, financial difficulties and dyspnea were statistically significant. In contrast to linear regression, financial difficulties were not significant in the quantile regression analysis, and dyspnea was significant only for the first quartile. Emotional functioning and duration of disease also statistically predicted the QOL score in the third quartile. The results demonstrate that quantile regression leads to better interpretation and richer inference about the predictors of breast cancer patients' quality of life.

  11. Measuring racial/ethnic disparities across the distribution of health care expenditures.

    PubMed

    Cook, Benjamin Lê; Manning, Willard G

    2009-10-01

    To assess whether black-white and Hispanic-white disparities increase or abate in the upper quantiles of total health care expenditure, conditional on covariates. Nationally representative adult population of non-Hispanic whites, African Americans, and Hispanics from the 2001-2005 Medical Expenditure Panel Surveys. We examine unadjusted racial/ethnic differences across the distribution of expenditures. We apply quantile regression to measure disparities at the median, 75th, 90th, and 95th quantiles, testing for differences over the distribution of health care expenditures and across income and education categories. We test the sensitivity of the results to comparisons based only on health status and estimate a two-part model to ensure that results are not driven by an extremely skewed distribution of expenditures with a large zero mass. Black-white and Hispanic-white disparities diminish in the upper quantiles of expenditure, but expenditures for blacks and Hispanics remain significantly lower than for whites throughout the distribution. For most education and income categories, disparities exist at the median and decline, but remain significant even with increased education and income. Blacks and Hispanics receive significantly disparate care at high expenditure levels, suggesting prioritization of improved access to quality care among minorities with critical health issues.

  12. Analysis of the labor productivity of enterprises via quantile regression

    NASA Astrophysics Data System (ADS)

    Türkan, Semra

    2017-07-01

    In this study, we have analyzed the factors that affect the performance of Turkey's Top 500 Industrial Enterprises using quantile regression. Labor productivity of the enterprises is taken as the dependent variable and assets as the independent variable. The distribution of labor productivity across enterprises is right-skewed. When the dependent distribution is skewed, linear regression cannot capture important aspects of the relationships between the dependent variable and its predictors because it models only the conditional mean. Hence quantile regression, which allows modeling any quantile of the dependent distribution, including the median, is useful. It examines whether relationships between dependent and independent variables differ for low, medium, and high percentiles. The analysis shows that the effect of total assets is relatively constant over the entire distribution, except for the upper tail, where it has a moderately stronger effect.

  13. Simulating Quantile Models with Applications to Economics and Management

    NASA Astrophysics Data System (ADS)

    Machado, José A. F.

    2010-05-01

    The massive increase in the speed of computers over the past forty years has changed the way social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they can feasibly tackle. The new methods that make intensive use of computing power go by the names "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. I will then turn to my own research on computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions Using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
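    The simulation step behind the cited decomposition method (Machado and Mata, 2005) is short enough to sketch: having a conditional quantile process Q_y(u | x), draw quantile levels u uniformly, draw covariates x from the data, and evaluate x'beta(u) to obtain draws from the implied marginal of y. The coefficient functions below are toy assumptions standing in for estimated ones.

```python
# Simulating a marginal distribution from a conditional quantile process.
import numpy as np

rng = np.random.default_rng(7)

# toy conditional quantile process Q_y(u | x) = b0(u) + b1(u) * x;
# both coefficient functions increase in u, so Q is a valid quantile
# function for every x >= 0 (these are illustrative, not estimated)
def b0(u): return 1.0 + 0.5 * u
def b1(u): return 2.0 + u

x = rng.uniform(0, 1, size=5000)     # covariates resampled from the "data"
u = rng.uniform(0, 1, size=5000)     # random quantile levels
y_sim = b0(u) + b1(u) * x            # draws from the implied marginal of y
```

    Counterfactual analysis then amounts to re-running the same draw with a different covariate distribution (or a different coefficient process) and comparing the two simulated marginals.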

  14. Impact of body mass on job quality.

    PubMed

    Kim, Tae Hyun; Han, Euna

    2015-04-01

    The current study explores the association between body mass and job quality, a composite measurement of job characteristics, for adults. We use nationally representative data from the Korean Labor and Income Panel Study for the years 2005, 2007, and 2008 with 7282 person-year observations for men and 4611 for women. A Quality of Work Index (QWI) is calculated based on work content, job security, the possibilities for improvement, compensation, work conditions, and interpersonal relationships at work. The key independent variable is the body mass index (kg/m2) splined at 18.5, 25, and 30. For men, BMI is positively associated with the QWI only in the normal weight segment (+0.19 percentage points at the 10th, +0.28 at the 50th, +0.32 at the 75th, +0.34 at the 90th, and +0.48 at the 95th quantiles). A unit increase in the BMI for women is associated with a lower QWI at the lower quantiles in the normal weight segment (-0.28 at the 5th, -0.19 at the 10th, and -0.25 percentage points at the 25th quantiles) and at the upper quantiles in the overweight segment (-1.15 at the 90th and -1.66 percentage points at the 95th quantiles). The results imply a spill-over cost of overweight or obesity beyond its impact on health in terms of success in the labor market. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR method is a robust alternative to the widely-used local polynomial method, and has been well studied for stationary time series. In this paper, we relax the stationarity restriction on the model and allow the regressors to be generated by a general Harris recurrent Markov process, which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate of the estimator in the nonstationary case is slower than that in the stationary case. Furthermore, a weighted local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894
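    The CQR objective itself is simple to illustrate in its plain linear form: one shared slope plus K quantile-specific intercepts, estimated jointly by minimizing the summed check loss over K levels. The sketch below solves this as a single linear program on heavy-tailed synthetic data; the paper's local-polynomial smoothing and Harris-recurrence machinery are omitted.

```python
# Linear composite quantile regression (shared slope, K intercepts) as an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n = 100
taus = np.linspace(1 / 6, 5 / 6, 5)
x = rng.uniform(-2, 2, n)
y = 1.0 + 3.0 * x + rng.standard_t(df=3, size=n)   # heavy-tailed noise

K = taus.size
rows = K * n
# variables: [intercepts b_k (K), shared slope s, u (rows), v (rows)]
A_b = np.kron(np.eye(K), np.ones((n, 1)))          # block of ones per level
A_s = np.tile(x, K)[:, None]                       # same x for every level
A_eq = np.hstack([A_b, A_s, np.eye(rows), -np.eye(rows)])
w = np.repeat(taus, n)
c = np.concatenate([np.zeros(K + 1), w, 1 - w])    # summed check losses
bounds = [(None, None)] * (K + 1) + [(0, None)] * (2 * rows)
res = linprog(c, A_eq=A_eq, b_eq=np.tile(y, K), bounds=bounds, method="highs")
intercepts, slope_hat = res.x[:K], res.x[K]
```

    Sharing one slope across the K check losses is what gives CQR its robustness-with-efficiency character relative to a single median fit.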

  16. Interquantile Shrinkage in Regression Models

    PubMed Central

    Jiang, Liewen; Wang, Huixia Judy; Bondell, Howard D.

    2012-01-01

    Conventional analysis using quantile regression typically focuses on fitting the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, joint modeling of multiple quantiles to accommodate the commonality often leads to more efficient estimation. One example of common features is that a predictor may have a constant effect over one region of quantile levels but varying effects in other regions. To automatically perform estimation and detection of the interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods will shrink the slopes towards constant and thus improve the estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods lead to estimations with competitive or higher efficiency than the standard quantile regression estimation in finite samples. Supplemental materials for the article are available online. PMID:24363546

  17. Quantile Regression for Recurrent Gap Time Data

    PubMed Central

    Luo, Xianghua; Huang, Chiung-Yu; Wang, Lan

    2014-01-01

    Summary Evaluating covariate effects on gap times between successive recurrent events is of interest in many medical and public health studies. While most existing methods for recurrent gap time analysis focus on modeling the hazard function of gap times, a direct interpretation of the covariate effects on the gap times is not available through these methods. In this article, we consider quantile regression that can provide direct assessment of covariate effects on the quantiles of the gap time distribution. Following the spirit of the weighted risk-set method by Luo and Huang (2011, Statistics in Medicine 30, 301–311), we extend the martingale-based estimating equation method considered by Peng and Huang (2008, Journal of the American Statistical Association 103, 637–649) for univariate survival data to analyze recurrent gap time data. The proposed estimation procedure can be easily implemented in existing software for univariate censored quantile regression. Uniform consistency and weak convergence of the proposed estimators are established. Monte Carlo studies demonstrate the effectiveness of the proposed method. An application to data from the Danish Psychiatric Central Register is presented to illustrate the methods developed in this article. PMID:23489055

  18. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    PubMed

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.

  19. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
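    The univariate building block that MBCn generalizes, empirical quantile mapping, fits in a few lines: pass each model value through the model's empirical CDF, then through the observed inverse CDF. The gamma-distributed "observed" and "model" series below are illustrative assumptions, not climate data.

```python
# Empirical quantile mapping: x -> F_obs^{-1}(F_mod(x)).
import numpy as np

def quantile_map(model_hist, obs_hist, model_out):
    mh = np.sort(model_hist)
    # empirical non-exceedance probability of each value to be corrected
    p = np.searchsorted(mh, model_out, side="right") / mh.size
    p = np.clip(p, 1.0 / mh.size, 1.0)
    return np.quantile(obs_hist, p)        # observed inverse CDF

rng = np.random.default_rng(11)
obs = rng.gamma(2.0, 5.0, size=3000)       # "observed" precipitation
mod = rng.gamma(2.0, 4.0, size=3000)       # biased model output (too dry)
corrected = quantile_map(mod, obs, mod)
```

    Applied variable by variable, this corrects every marginal quantile but, as the abstract notes, leaves inter-variable dependence untouched; that gap is exactly what the N-dimensional transform in MBCn addresses.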

  20. Spatially Modeling the Effects of Meteorological Drivers of PM2.5 in the Eastern United States via a Local Linear Penalized Quantile Regression Estimator.

    PubMed

    Russell, Brook T; Wang, Dewei; McMahan, Christopher S

    2017-08-01

    Fine particulate matter (PM 2.5 ) poses a significant risk to human health, with long-term exposure being linked to conditions such as asthma, chronic bronchitis, lung cancer, atherosclerosis, etc. In order to improve current pollution control strategies and to better shape public policy, the development of a more comprehensive understanding of this air pollutant is necessary. To this end, this work attempts to quantify the relationship between certain meteorological drivers and the levels of PM 2.5 . It is expected that the set of important meteorological drivers will vary both spatially and within the conditional distribution of PM 2.5 levels. To account for these characteristics, a new local linear penalized quantile regression methodology is developed. The proposed estimator uniquely selects the set of important drivers at every spatial location and for each quantile of the conditional distribution of PM 2.5 levels. The performance of the proposed methodology is illustrated through simulation, and it is then used to determine the association between several meteorological drivers and PM 2.5 over the Eastern United States (US). This analysis suggests that the primary drivers throughout much of the Eastern US tend to differ based on season and geographic location, with similarities existing between "typical" and "high" PM 2.5 levels.

  1. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A majorization-minimization (MM) algorithm is developed for the optimization. A Monte Carlo simulation study is conducted to assess the performance of the RNN. Comparison with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrates the advantage of the newly proposed procedure.
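    The ingredient that makes such a network "robust" is training with the quantile (check) loss instead of squared error. This numpy-only toy trains a one-hidden-layer network by plain subgradient descent on the median check loss; the paper's MM algorithm is not reproduced, and the architecture, learning rate, and Cauchy-contaminated data are illustrative assumptions.

```python
# One-hidden-layer median regression trained with the check (pinball) loss.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(x) + 0.3 * rng.standard_cauchy(size=(400, 1))  # outlier-prone

def check_loss(y, pred, tau):
    r = y - pred
    return float(np.mean(np.where(r >= 0, tau * r, (tau - 1) * r)))

tau = 0.5                                # median regression
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

loss_start = check_loss(y, np.tanh(x @ W1 + b1) @ W2 + b2, tau)

lr = 0.01
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    # subgradient of the mean check loss w.r.t. the prediction
    g = np.where(y - pred > 0, -tau, 1 - tau) / x.shape[0]
    gh = (g @ W2.T) * (1 - h ** 2)       # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (x.T @ gh); b1 -= lr * gh.sum(0)

loss_end = check_loss(y, np.tanh(x @ W1 + b1) @ W2 + b2, tau)
```

    Because the check-loss subgradient is bounded, single large outliers cannot dominate the updates, which is the robustness property the abstract highlights; fitting two levels (e.g. tau = 0.05 and 0.95) gives the prediction bands.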

  2. The 2011 heat wave in Greater Houston: Effects of land use on temperature.

    PubMed

    Zhou, Weihe; Ji, Shuang; Chen, Tsun-Hsuan; Hou, Yi; Zhang, Kai

    2014-11-01

    The effects of land use on temperatures during severe heat waves have rarely been studied. This paper examines land use-temperature associations during the 2011 heat wave in Greater Houston. We obtained high-resolution satellite-derived land use data from the US National Land Cover Database, and temperature observations at 138 weather stations from Weather Underground, Inc. (WU) during August 2011, which was the hottest month in Houston since 1889. Land use regression and quantile regression methods were applied to the monthly averages of daily maximum/mean/minimum temperatures and 114 land use-related predictors. Although the selected variables vary with the temperature metric, distance to the coastline consistently appears in all models. Other variables are generally related to high developed intensity, open water or wetlands. In addition, our quantile regression analysis shows that distance to the coastline and high developed intensity areas have larger impacts on daily average temperatures at higher quantiles, and open water area has greater impacts on daily minimum temperatures at lower quantiles. By utilizing both land use regression and quantile regression on a recent heat wave in one of the largest US metropolitan areas, this paper provides a new perspective on the impacts of land use on temperatures. Our models can provide estimates of heat exposure for epidemiological studies, and our findings can be combined with demographic variables, air conditioning and relevant disease information to identify 'hot spots' of population vulnerability for public health interventions to reduce heat-related health effects during heat waves. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Modeling energy expenditure in children and adolescents using quantile regression

    PubMed Central

    Yang, Yunwen; Adolph, Anne L.; Puyau, Maurice R.; Vohra, Firoz A.; Zakeri, Issa F.

    2013-01-01

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). Study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obese children. First, QR models will be developed to predict minute-by-minute awake EE at different quantile levels based on heart rate (HR) and physical activity (PA) accelerometry counts, and child characteristics of age, sex, weight, and height. Second, the QR models will be used to evaluate the covariate effects of weight, PA, and HR across the conditional EE distribution. QR and ordinary least squares (OLS) regressions are estimated in 109 children, aged 5–18 yr. QR modeling of EE outperformed OLS regression for both nonobese and obese populations. Average prediction errors for QR compared with OLS were not only smaller at the median τ = 0.5 (18.6 vs. 21.4%), but also substantially smaller at the tails of the distribution (10.2 vs. 39.2% at τ = 0.1 and 8.7 vs. 19.8% at τ = 0.9). Covariate effects of weight, PA, and HR on EE for the nonobese and obese children differed across quantiles (P < 0.05). The associations (linear and quadratic) between PA and HR with EE were stronger for the obese than nonobese population (P < 0.05). In conclusion, QR provided more accurate predictions of EE compared with conventional OLS regression, especially at the tails of the distribution, and revealed substantially different covariate effects of weight, PA, and HR on EE in nonobese and obese children. PMID:23640591

  4. Data quantile-quantile plots: quantifying the time evolution of space climatology

    NASA Astrophysics Data System (ADS)

    Tindale, Elizabeth; Chapman, Sandra

    2017-04-01

    The solar wind is inherently variable across a wide range of spatio-temporal scales; embedded in the flow are the signatures of distinct non-linear physical processes from evolving turbulence to the dynamical solar corona. In-situ satellite observations of solar wind magnetic field and velocity are at minute and below time resolution and now extend over several solar cycles. Each solar cycle is unique, and the space climatology challenge is to quantify how solar wind variability changes within, and across, each distinct solar cycle, and how this in turn drives space weather at earth. We will demonstrate a novel statistical method, that of data-data quantile-quantile (DQQ) plots, which quantifies how the underlying statistical distribution of a given observable is changing in time. Importantly this method does not require any assumptions concerning the underlying functional form of the distribution and can identify multi-component behaviour that is changing in time. This can be used to determine when a sub-range of a given observable is undergoing a change in statistical distribution, or where the moments of the distribution only are changing and the functional form of the underlying distribution is not changing in time. The method is quite general; for this application we use data from the WIND satellite to compare the solar wind across the minima and maxima of solar cycles 23 and 24 [1], and how these changes are manifest in parameters that quantify coupling to the earth's magnetosphere. [1] Tindale, E., and S.C. Chapman (2016), Geophys. Res. Lett., 43(11), doi: 10.1002/2016GL068920.
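    The data-data QQ construction described above reduces to computing matched quantiles of two observing periods and plotting one against the other. This sketch uses synthetic lognormal series as stand-ins for the two solar-cycle samples; a pure location shift in log-space then shows up as a constant log-ratio of matched quantiles, which is the kind of "moments change, functional form does not" signature the method detects without assuming a distribution.

```python
# Data-data quantile-quantile comparison of two samples.
import numpy as np

rng = np.random.default_rng(23)
cycle23 = rng.lognormal(mean=1.5, sigma=0.5, size=4000)  # stand-in series
cycle24 = rng.lognormal(mean=1.3, sigma=0.5, size=4000)  # "weaker" cycle

probs = np.linspace(0.01, 0.99, 99)
q23 = np.quantile(cycle23, probs)
q24 = np.quantile(cycle24, probs)

# plotting q24 against q23 gives the DQQ plot; points on the 1:1 line mean
# identical distributions.  Here the change is a shift in log-space, so the
# log-ratio of matched quantiles is roughly constant across probs:
log_ratio = np.log(q24) - np.log(q23)
```

    Curvature or a break in the DQQ curve, by contrast, would flag a sub-range of the observable whose underlying distributional form is itself changing between the two periods.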

  5. Linear Regression Quantile Mapping (RQM) - A new approach to bias correction with consistent quantile trends

    NASA Astrophysics Data System (ADS)

    Passow, Christian; Donner, Reik

    2017-04-01

    Quantile mapping (QM) is an established concept that makes it possible to correct systematic biases in multiple quantiles of the distribution of a climatic observable. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods that adjust only the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM were introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and to include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959 - 2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sorbie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015. A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016.
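    The classical QM step that RQM builds on can be sketched in a few lines: a model value is mapped to the observed value occupying the same quantile of the model's own historical distribution. This is a minimal empirical sketch, not the authors' RQM implementation; all function names are hypothetical.

```python
import bisect

def ecdf(sample, x):
    """Empirical CDF value of x under the sample."""
    s = sorted(sample)
    return bisect.bisect_right(s, x) / len(s)

def equantile(sample, p):
    """Empirical quantile with linear interpolation between order statistics."""
    s = sorted(sample)
    idx = p * (len(s) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (idx - lo) * (s[hi] - s[lo])

def quantile_map(x, model_hist, obs):
    """Classical QM: send a model value to the observed value that
    occupies the same quantile in the model's historical distribution."""
    p = ecdf(model_hist, x)
    # keep the probability strictly inside (0, 1)
    eps = 0.5 / len(model_hist)
    p = min(max(p, eps), 1 - eps)
    return equantile(obs, p)
```

    Because the mapping is frozen from fixed historical samples, it is time-independent, which is exactly the limitation that the time-dependent, QR-based RQM approach targets.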

  6. Estimating geographic variation on allometric growth and body condition of Blue Suckers with quantile regression

    USGS Publications Warehouse

    Cade, B.S.; Terrell, J.W.; Neely, B.C.

    2011-01-01

    Increasing our understanding of how environmental factors affect fish body condition and improving its utility as a metric of aquatic system health require reliable estimates of spatial variation in condition (weight at length). We used three statistical approaches that varied in how they accounted for heterogeneity in allometric growth to estimate differences in body condition of blue suckers Cycleptus elongatus across 19 large-river locations in the central USA. Quantile regression of an expanded allometric growth model provided the most comprehensive estimates, including variation in exponents within and among locations (range = 2.88–4.24). Blue suckers from more-southerly locations had the largest exponents. Mixed-effects mean regression of a similar expanded allometric growth model allowed exponents to vary among locations (range = 3.03–3.60). Mean relative weights compared across selected intervals of total length (TL = 510–594 and 594–692 mm) in a multiplicative model involved the implicit assumption that allometric exponents within and among locations were similar to the exponent (3.46) for the standard weight equation. Proportionate differences in the quantiles of weight at length for adult blue suckers (TL = 510, 594, 644, and 692 mm) compared with their average across locations ranged from 1.08 to 1.30 for southern locations (Texas, Mississippi) and from 0.84 to 1.00 for northern locations (Montana, North Dakota); proportionate differences for mean weight ranged from 1.13 to 1.17 and from 0.87 to 0.95, respectively, and those for mean relative weight ranged from 1.10 to 1.18 and from 0.86 to 0.98, respectively. Weights for fish at longer lengths varied by 600–700 g within a location and by as much as 2,000 g among southern and northern locations. 
Estimates for the Wabash River, Indiana (0.96–1.07 times the average; greatest increases for lower weights at shorter TLs), and for the Missouri River from Blair, Nebraska, to Sioux City, Iowa (0.90–1.00 times the average; greatest decreases for lower weights at longer TLs), were examined in detail to explain the additional information provided by quantile estimates.
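    The allometric growth model underlying these comparisons is W = aL^b, which is linear on the log scale: log W = log a + b log L. A minimal mean-regression sketch of estimating the exponent b follows; the paper's quantile-regression version replaces least squares with asymmetric absolute-error minimisation, and the function name here is hypothetical.

```python
import math

def allometric_exponent(lengths, weights):
    """Least-squares slope of log(weight) on log(length),
    i.e. the exponent b in the allometric model W = a * L**b."""
    x = [math.log(v) for v in lengths]
    y = [math.log(v) for v in weights]
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx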

  7. Multi-element stochastic spectral projection for high quantile estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Jordan, E-mail: jordan.ko@mac.com; Garnier, Josselin

    2013-06-15

    We investigate quantile estimation by a multi-element generalized Polynomial Chaos (gPC) metamodel, where the exact numerical model is approximated by complementary metamodels in overlapping domains that mimic the model’s exact response. The gPC metamodel is constructed by the non-intrusive stochastic spectral projection approach, and function evaluation on the gPC metamodel can be considered essentially free. Thus, a large number of Monte Carlo samples from the metamodel can be used to estimate the α-quantile, for moderate values of α. As the gPC metamodel is an expansion about the means of the inputs, its accuracy may worsen away from these mean values, where the extreme events may occur. By increasing the approximation accuracy of the metamodel, we may eventually improve the accuracy of quantile estimation, but this is very expensive. A multi-element approach is therefore proposed by combining a global metamodel in the standard normal space with supplementary local metamodels constructed in bounded domains about the design points corresponding to the extreme events. To improve the accuracy and to minimize the sampling cost, sparse-tensor and anisotropic-tensor quadratures are tested in addition to the full-tensor Gauss quadrature in the construction of local metamodels; different bounds of the gPC expansion are also examined. The global and local metamodels are combined in the multi-element gPC (MEgPC) approach, and it is shown that MEgPC can be more accurate than Monte Carlo or importance sampling methods for high quantile estimation for input dimensions roughly below N=8, a limit that is very much case- and α-dependent.
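    Once a metamodel is cheap to evaluate, the α-quantile can simply be read off a large Monte Carlo sample of its outputs. A toy sketch under the assumption of a single standard-normal input; the metamodel argument here is a stand-in callable, not an actual gPC expansion.

```python
import random

def mc_quantile(metamodel, alpha, n=100_000, seed=1):
    """Estimate the alpha-quantile of metamodel(X), X ~ N(0, 1),
    from a large Monte Carlo sample of the (cheap) metamodel."""
    rng = random.Random(seed)
    ys = sorted(metamodel(rng.gauss(0.0, 1.0)) for _ in range(n))
    return ys[min(int(alpha * n), n - 1)]
```

    With the identity metamodel this recovers the standard-normal quantile (about 1.96 at α = 0.975); accuracy in the far tail is limited by sample size, which is why the paper adds local metamodels around the extreme-event design points.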

  8. Asymptotics of nonparametric L-1 regression models with dependent data

    PubMed Central

    ZHAO, ZHIBIAO; WEI, YING; LIN, DENNIS K.J.

    2013-01-01

    We investigate asymptotic properties of least-absolute-deviation or median quantile estimates of the location and scale functions in nonparametric regression models with dependent data from multiple subjects. Under a general dependence structure that allows for longitudinal data and some spatially correlated data, we establish uniform Bahadur representations for the proposed median quantile estimates. The obtained Bahadur representations provide deep insights into the asymptotic behavior of the estimates. Our main theoretical development is based on studying the modulus of continuity of kernel weighted empirical process through a coupling argument. Progesterone data is used for an illustration. PMID:24955016

  9. [Spatial heterogeneity in body condition of small yellow croaker in Yellow Sea and East China Sea based on mixed-effects model and quantile regression analysis].

    PubMed

    Liu, Zun-Lei; Yuan, Xing-Wei; Yan, Li-Ping; Yang, Lin-Lin; Cheng, Jia-Hua

    2013-09-01

    Using 2008-2010 survey data on the body condition of small yellow croaker in the offshore waters of the southern Yellow Sea (SYS), open waters of the northern East China Sea (NECS), and offshore waters of the middle East China Sea (MECS), this paper analyzed the spatial heterogeneity in the body length-body mass relationships of juvenile and adult small yellow croakers using mean regression and quantile regression models. The results showed that the residual standard errors from the analysis of covariance (ANCOVA) and the linear mixed-effects model were similar, while those from the simple linear regression were the highest. For the juvenile small yellow croakers, the mean body mass in SYS and NECS estimated by the mixed-effects mean regression model was higher than the overall average mass across the three regions, while the mean body mass in MECS was below the overall average. For the adult small yellow croakers, the mean body mass in NECS was higher than the overall average, while the mean body mass in SYS and MECS was below the overall average. The results from quantile regression indicated substantial differences in the allometric relationships of juvenile small yellow croakers between SYS, NECS, and MECS, with the estimated mean exponent of the allometric relationship in SYS being 2.85 and the interquartile range being from 2.63 to 2.96, indicating heterogeneity of body form. The results from ANCOVA showed that the allometric body length-body mass relationships were significantly different between the 25th and 75th percentile exponent values (F=6.38, df=1737, P<0.01) and between the 25th percentile and median exponent values (F=2.35, df=1737, P=0.039). The relationship was marginally different between the median and 75th percentile exponent values (F=2.21, df=1737, P=0.051). The estimated body length-body mass exponent of adult small yellow croakers in SYS was 3.01 (10th and 95th percentiles = 2.77 and 3.1, respectively).
The estimated body length-body mass relationships differed significantly between the lower and upper quantiles of the exponent (F=3.31, df=2793, P=0.01) and between the median and upper quantiles (F=3.56, df=2793, P<0.01), while no significant difference was observed between the lower and median quantiles (F=0.98, df=2793, P=0.43).

  10. Composite marginal quantile regression analysis for longitudinal adolescent body mass index data.

    PubMed

    Yang, Chi-Chuan; Chen, Yi-Hau; Chang, Hsing-Yi

    2017-09-20

    Overweight and obesity in childhood and adolescence, which may be quantified through the body mass index (BMI), are strongly associated with adult obesity and other health problems. Motivated by the Child and Adolescent Behaviors in Long-term Evolution (CABLE) study, we are interested in individual, family, and school factors associated with marginal quantiles of longitudinal adolescent BMI values. We propose a new method for composite marginal quantile regression analysis of longitudinal outcome data, which performs marginal quantile regressions at multiple quantile levels simultaneously. The proposed method extends the quantile regression coefficient modeling method introduced by Frumento and Bottai (Biometrics 2016; 72:74-84) to longitudinal data, accounting suitably for the correlation structure in longitudinal observations. A goodness-of-fit test for the proposed modeling is also developed. Simulation results show that the proposed method can be much more efficient than an analysis that does not take the correlation into account and than separate quantile regressions at different quantile levels. The application to the longitudinal adolescent BMI data from the CABLE study demonstrates the practical utility of our proposal. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Shrinkage Estimation of Varying Covariate Effects Based On Quantile Regression

    PubMed Central

    Peng, Limin; Xu, Jinfeng; Kutner, Nancy

    2013-01-01

    Varying covariate effects often manifest meaningful heterogeneity in covariate-response associations. In this paper, we adopt a quantile regression model that assumes linearity at a continuous range of quantile levels as a tool to explore such data dynamics. The consideration of potential non-constancy of covariate effects necessitates a new perspective for variable selection, which, under the assumed quantile regression model, is to retain variables that have effects on all quantiles of interest as well as those that influence only part of quantiles considered. Current work on l1-penalized quantile regression either does not concern varying covariate effects or may not produce consistent variable selection in the presence of covariates with partial effects, a practical scenario of interest. In this work, we propose a shrinkage approach by adopting a novel uniform adaptive LASSO penalty. The new approach enjoys easy implementation without requiring smoothing. Moreover, it can consistently identify the true model (uniformly across quantiles) and achieve the oracle estimation efficiency. We further extend the proposed shrinkage method to the case where responses are subject to random right censoring. Numerical studies confirm the theoretical results and support the utility of our proposals. PMID:25332515

  12. Variability of daily UV index in Jokioinen, Finland, in 1995-2015

    NASA Astrophysics Data System (ADS)

    Heikkilä, A.; Uusitalo, K.; Kärhä, P.; Vaskuri, A.; Lakkala, K.; Koskela, T.

    2017-02-01

    The UV Index is a measure of UV radiation harmful to human skin, developed and used to promote sun awareness and sun protection. Monitoring programs conducted around the world have produced a number of long-term time series of UV irradiance. One of the longest time series of solar spectral UV irradiance in Europe has been obtained from the continuous measurements of the Brewer #107 spectrophotometer in Jokioinen (lat. 60°44'N, lon. 23°30'E), Finland, over the years 1995-2015. We have used descriptive statistics and estimates of cumulative distribution functions, quantiles and probability density functions in the analysis of the time series of daily UV Index maxima. Seasonal differences in the estimated distributions and in the trends of the estimated quantiles are found.

  13. Using the Quantile Mapping to improve a weather generator

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Themessl, M.; Gobiet, A.

    2012-04-01

    We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression, and empirical orthogonal functions (EOFs). One of the important steps in the WG is QM, through which all variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore operate on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realizations; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be produced in a relatively reasonable way. Evaluation experiments with WGQM have been carried out in the area of Austria, and the evaluation results will be presented.
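    The QM step that maps an arbitrarily distributed variable onto a normal distribution can be sketched as a rank-based normal-score transform. This is a hypothetical minimal version, not the WGQM code.

```python
from statistics import NormalDist

def normal_score_transform(x):
    """Rank-based quantile mapping of a sample onto N(0, 1): each value
    is replaced by the normal quantile of its plotting position."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order):
        # Hazen plotting position keeps probabilities strictly inside (0, 1)
        z[i] = nd.inv_cdf((rank + 0.5) / n)
    return z
```

    The transform is monotone, so ranks (and hence quantile structure) are preserved; the inverse mapping back to the original marginal distribution is the corresponding empirical quantile lookup.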

  14. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

    The ability of quantile mapping to accurately bias correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.

  15. Accelerating Approximate Bayesian Computation with Quantile Regression: application to cosmological redshift distributions

    NASA Astrophysics Data System (ADS)

    Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.

    2018-02-01

    Approximate Bayesian Computation (ABC) is a method for obtaining a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with quantile regression. In this method, we create a model of quantiles of the distance measure as a function of input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior; other regions are then immediately rejected. This procedure is repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as the basic ABC. It uses, however, only 20% of the number of simulations of basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
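    The prescreening idea can be caricatured in one dimension: estimate a low quantile of the ABC distance over regions of parameter space and reject regions whose quantile is too high before spending further simulations there. This sketch substitutes binned empirical quantiles for the paper's fitted quantile-regression model; all names and defaults are hypothetical.

```python
import statistics

def screen_regions(thetas, distances, tau=0.2, n_bins=5, threshold=None):
    """Bin pilot draws of the parameter, estimate the tau-quantile of the
    ABC distance in each bin, and flag bins worth further simulation."""
    lo, hi = min(thetas), max(thetas)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for th, d in zip(thetas, distances):
        idx = min(int((th - lo) / width), n_bins - 1)
        bins[idx].append(d)
    # low quantile of the distance per bin (inf for near-empty bins)
    qs = [statistics.quantiles(b, n=10)[int(tau * 10) - 1] if len(b) >= 2
          else float("inf") for b in bins]
    if threshold is None:
        threshold = statistics.median(qs)
    return [q <= threshold for q in qs]
```

    Regions flagged False are dropped from further sampling; as more simulations accumulate, the screen can be refit, mirroring the iterative refinement described in the abstract.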

  16. Economic policy uncertainty, equity premium and dependence between their quantiles: Evidence from quantile-on-quantile approach

    NASA Astrophysics Data System (ADS)

    Raza, Syed Ali; Zaighum, Isma; Shah, Nida

    2018-02-01

    This paper examines the relationship between economic policy uncertainty (EPU) and the equity premium in G7 countries, using monthly data from January 1989 to December 2015 and a novel technique, quantile-on-quantile (QQ) regression, proposed by Sim and Zhou (2015). Based on the QQ approach, we estimate how the quantiles of economic policy uncertainty affect the quantiles of the equity premium, which provides a more comprehensive picture of the overall dependence structure between the equity premium and economic policy uncertainty than traditional techniques like OLS or quantile regression. Overall, our empirical evidence suggests a negative association between the equity premium and EPU in all G7 countries, predominantly in the extreme low and extreme high tails. However, differences exist among countries and across different quantiles of EPU and the equity premium within each country. This heterogeneity among countries is due to differences in their dependence on economic policy, in other stock markets, and in the linkages with other countries' equity markets.

  17. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    USGS Publications Warehouse

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. 
Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variables with fixed upper and lower bounds.
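    The jitter-logit-average recipe above can be sketched for the simplest case of an intercept-only model, where fitting a linear quantile regression reduces to taking a sample quantile. This is a hypothetical minimal version; the paper fits covariate-dependent regressions, and the function names and defaults here are illustrative only.

```python
import math
import random
import statistics

def logit(u):
    return math.log(u / (1.0 - u))

def inv_logit(z):
    return 1.0 / (1.0 + math.exp(-z))

def jittered_logit_quantile(counts, tau, lower=0, upper=3, reps=50, seed=0):
    """Quantile of a bounded count via jittering plus a logit transform."""
    rng = random.Random(seed)
    span = (upper + 1) - lower        # jittered values live on (lower, upper + 1)
    ests = []
    for _ in range(reps):
        # 1. jitter the discrete counts to a continuous variable
        y = [c + rng.random() for c in counts]
        # 2. logit-transform onto an unbounded scale (clamped away from 0 and 1)
        u = [min(max((v - lower) / span, 1e-9), 1.0 - 1e-9) for v in y]
        z = [logit(v) for v in u]
        # 3. take the tau-quantile on the transformed scale
        ests.append(statistics.quantiles(z, n=100)[int(tau * 100) - 1])
    # 4. average over jitter replicates, then back-transform; quantiles are
    # equivariant to monotone transformations, so the bounds are respected
    return lower + span * inv_logit(sum(ests) / len(ests))
```

    The back-transformed estimate always falls inside the specified bounds, which is the property that makes the logistic formulation attractive for fledgling counts in 0-3.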

  18. Stochastic simulation of soil particle-size curves in heterogeneous aquifer systems through a Bayes space approach

    NASA Astrophysics Data System (ADS)

    Menafoglio, A.; Guadagnini, A.; Secchi, P.

    2016-08-01

    We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curves and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) effectively deal with the data dimensionality and constraints and (b) to develop a simulation method for PSCs based upon a suitable and well defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.

  19. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide basic information for the planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and hence the results of conventional analysis would become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows data to be modeled in the presence of non-stationarity and/or linear and non-linear dependence on covariates. A Markov Chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e. for each quantile we choose the degree and number of knots of the adequate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
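    The B-spline building block of such models is the Cox-de Boor recursion; each covariate effect is expanded over basis functions of this kind. This is a standard textbook sketch, not the authors' code.

```python
def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of degree k on knot vector t, evaluated at x."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, x)
    return left + right
```

    On the interior of the knot vector the basis functions are non-negative and sum to one (partition of unity), so the fitted quantile curve is a well-behaved local combination of the regression coefficients.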

  20. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
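    The quantile-binning weights can be sketched with empirical frequencies in place of fitted models: bin the exposure at its quantiles, then form stabilised weights Pr(bin) / Pr(bin | confounder). This minimal version assumes a single categorical confounder; the function names are hypothetical.

```python
import statistics
from collections import Counter

def quantile_bins(x, n_bins=4):
    """Assign each value to a quantile-based exposure category."""
    cuts = statistics.quantiles(x, n=n_bins)     # n_bins - 1 cut points
    return [sum(v > c for c in cuts) for v in x]

def stabilized_weights(exposure, confounder, n_bins=4):
    """Stabilised IPW: Pr(bin) / Pr(bin | confounder), with both
    probabilities estimated by empirical frequencies."""
    bins = quantile_bins(exposure, n_bins)
    n = len(bins)
    marg = Counter(bins)
    joint = Counter(zip(bins, confounder))
    conf = Counter(confounder)
    return [(marg[b] / n) / (joint[(b, c)] / conf[c])
            for b, c in zip(bins, confounder)]
```

    When every bin-confounder combination is observed, the stabilised weights average exactly to one, a useful diagnostic; in practice the conditional probability would come from a fitted (e.g. multinomial) model rather than raw frequencies.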

  1. Groundwater depth prediction in a shallow aquifer in north China by a quantile regression model

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Wei, Wan; Zhao, Yong; Qiao, Jiale

    2017-01-01

    There is a close relationship between the groundwater level in a shallow aquifer and the surface ecological environment; hence, it is important to accurately simulate and predict the groundwater level in eco-environmental construction projects. The multiple linear regression (MLR) model is one of the most useful methods for predicting groundwater level (depth); however, the values predicted by this model reflect only the mean of the distribution of the observations and cannot effectively fit extreme data (outliers). The study reported here builds a prediction model of groundwater-depth dynamics in a shallow aquifer using the quantile regression (QR) method on the basis of observed groundwater depth and related factors. The proposed approach was applied to five sites in Tianjin city, north China, and the groundwater depth was calculated at different quantiles, from which the optimal quantile was screened out according to the box plot method and compared to the values predicted by the MLR model. The results showed that the related factors at the five sites did not follow the standard normal distribution and that there were outliers in the precipitation and last-month (initial state) groundwater-depth factors, so the basic assumptions of the MLR model could not be satisfied, thereby introducing errors. These conditions had no effect on the QR model, which could more effectively describe the distribution of the original data and fitted the outliers with higher precision.

  2. Modeling energy expenditure in children and adolescents using quantile regression

    USDA-ARS?s Scientific Manuscript database

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). Study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obes...

  3. Quantile regression applied to spectral distance decay

    USGS Publications Warehouse

    Rocchini, D.; Cade, B.S.

    2008-01-01

    Remotely sensed imagery has long been recognized as a powerful support for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance allows us to quantitatively estimate the amount of turnover in species composition with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological data sets are characterized by a high number of zeroes that add noise to the regression model. Quantile regressions can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this letter, we used ordinary least squares (OLS) and quantile regressions to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.01), considering both OLS and quantile regressions. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when the spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this letter, we demonstrated the power of using quantile regressions applied to spectral distance decay to reveal species diversity patterns otherwise lost or underestimated by OLS regression. © 2008 IEEE.
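    Quantile regression estimates the τth conditional quantile by minimising asymmetrically weighted absolute residuals (the "pinball" or check loss) instead of squared residuals. A brute-force intercept-only sketch, whose minimiser is simply a sample τ-quantile; the helper names are hypothetical.

```python
def pinball_loss(residual, tau):
    """Check (pinball) loss used by quantile regression."""
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def fit_constant_quantile(y, tau):
    """Intercept-only quantile regression by brute force: the constant
    minimising the total pinball loss is a sample tau-quantile."""
    return min(sorted(y),
               key=lambda c: sum(pinball_loss(yi - c, tau) for yi in y))
```

    Fitting a slope as well (similarity versus spectral distance, at, say, τ = 0.9) minimises the same loss over the regression coefficients, which is why upper-quantile trends are unaffected by the mass of zero-similarity pairs that drags the OLS mean fit downward.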

  4. Spectral distance decay: Assessing species beta-diversity by quantile regression

    USGS Publications Warehouse

    Rocchinl, D.; Nagendra, H.; Ghate, R.; Cade, B.S.

    2009-01-01

    Remotely sensed data represents key information for characterizing and estimating biodiversity. Spectral distance among sites has proven to be a powerful approach for detecting species composition variability. Regression analysis of species similarity versus spectral distance may allow us to quantitatively estimate how beta-diversity in species changes with respect to spectral and ecological variability. In classical regression analysis, the residual sum of squares is minimized for the mean of the dependent variable distribution. However, many ecological datasets are characterized by a high number of zeroes that can add noise to the regression model. Quantile regression can be used to evaluate trend in the upper quantiles rather than a mean trend across the whole distribution of the dependent variable. In this paper, we used ordinary least squares (OLS) and quantile regression to estimate the decay of species similarity versus spectral distance. The achieved decay rates were statistically nonzero (p < 0.05) considering both OLS and quantile regression. Nonetheless, the OLS regression estimate of the mean decay rate was only half the decay rate indicated by the upper quantiles. Moreover, the intercept value, representing the similarity reached when spectral distance approaches zero, was very low compared with the intercepts of the upper quantiles, which detected high species similarity when habitats are more similar. In this paper we demonstrated the power of using quantile regression applied to spectral distance decay in order to reveal species diversity patterns otherwise lost or underestimated by ordinary least squares regression. © 2009 American Society for Photogrammetry and Remote Sensing.

  5. Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis.

    PubMed

    Nieves, Jeri W; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J Americo M; Sorenson, Eric J; D'Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi

    2016-12-01

    There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). The objective was to evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. Nutrient intake was measured using a modified Block Food Frequency Questionnaire (FFQ). ALS function was measured using the ALS Functional Rating Scale-Revised (ALSFRS-R), and respiratory function using percentage of predicted forced vital capacity (FVC). Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5-68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices, constructed using the weighted quantile sum regression method from "good" micronutrients and "good" food groups, were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet.
Those responsible for nutritional care of the patient with ALS should consider promoting fruit and vegetable intake, since fruits and vegetables are high in antioxidants and carotenes.
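
    The "empirically weighted index" idea can be sketched in a few lines: each component is scored by its quantile bin, and the bins are combined with weights into a single index that is then regressed on the outcome. All numbers and weights below are hypothetical (the actual WQS method estimates the weights from the data under a sum-to-one constraint); this only illustrates the index construction.

    ```python
    def quantile_score(values, n_quantiles=4):
        """Score each value by its quantile bin (0 .. n_quantiles-1), the
        per-component input to a weighted quantile sum (WQS) index."""
        s = sorted(values)
        cuts = [s[int(k * len(s) / n_quantiles)] for k in range(1, n_quantiles)]
        return [sum(v >= c for c in cuts) for v in values]

    # hypothetical intakes for 4 subjects
    nutrients = {
        "antioxidants": [10, 40, 80, 120],
        "carotenes":    [1, 2, 3, 4],
    }
    # hypothetical weights; WQS estimates these empirically, constrained to sum to 1
    weights = {"antioxidants": 0.6, "carotenes": 0.4}

    scores = {k: quantile_score(v) for k, v in nutrients.items()}
    index = [sum(weights[k] * scores[k][i] for k in nutrients) for i in range(4)]
    print(index)   # per-subject "good nutrient" index, then regressed on e.g. ALSFRS-R
    ```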

  6. Assessment tools for urban catchments: developing biological indicators based on benthic macroinvertebrates

    USGS Publications Warehouse

    Purcell, A.H.; Bressler, D.W.; Paul, M.J.; Barbour, M.T.; Rankin, E.T.; Carter, J.L.; Resh, V.H.

    2009-01-01

    Biological indicators, particularly benthic macroinvertebrates, are widely used and effective measures of the impact of urbanization on stream ecosystems. A multimetric biological index of urbanization was developed using a large benthic macroinvertebrate dataset (n = 1,835) from the Baltimore, Maryland, metropolitan area and then validated with datasets from Cleveland, Ohio (n = 79); San Jose, California (n = 85); and a different subset of the Baltimore data (n = 85). The biological metrics used to develop the multimetric index were selected using several criteria and were required to represent ecological attributes of macroinvertebrate assemblages including taxonomic composition and richness (number of taxa in the insect orders of Ephemeroptera, Plecoptera, and Trichoptera), functional feeding group (number of taxa designated as filterers), and habit (percent of individuals which cling to the substrate). Quantile regression was used to select metrics and characterize the relationship between the final biological index and an urban gradient (composed of population density, road density, and urban land use). Although more complex biological indices exist, this simplified multimetric index showed a consistent relationship between biological indicators and urban conditions (as measured by quantile regression) in three climatic regions of the United States and can serve as an assessment tool for environmental managers to prioritize urban stream sites for restoration and protection.

  7. Realistic sampling of anisotropic correlogram parameters for conditional simulation of daily rainfields

    NASA Astrophysics Data System (ADS)

    Gyasi-Agyei, Yeboah

    2018-01-01

    This paper establishes a link between the spatial structure of radar rainfall, which captures spatial structure more robustly than gauges alone, and gauge rainfall, for improved daily rainfield simulation conditioned on limited gauged data in regions with or without radar records. A two-dimensional anisotropic exponential function that has parameters of major and minor axes lengths, and direction, is used to describe the correlogram (spatial structure) of daily rainfall in the Gaussian domain. The link is a copula-based joint distribution of the radar-derived correlogram parameters that uses the gauge-derived correlogram parameters and maximum daily temperature as covariates of the Box-Cox power exponential margins and Gumbel copula. While the gauge-derived, radar-derived and the copula-derived correlogram parameters reproduced the mean estimates similarly using leave-one-out cross-validation of ordinary kriging, the gauge-derived parameters yielded higher standard deviation (SD) of the Gaussian quantile, which reflects uncertainty, in over 90% of cases. However, the distribution of the SD generated by the radar-derived and the copula-derived parameters could not be distinguished. For the validation case, the percentage of cases of higher SD by the gauge-derived parameter sets decreased to 81.2% and 86.6% for the non-calibration and the calibration periods, respectively. It has been observed that a 1% reduction in the Gaussian quantile SD can cause over a 39% reduction in the SD of the median rainfall estimate, the actual reduction being dependent on the distribution of rainfall of the day. Hence the main advantage of using the radar-derived correlogram parameters is to reduce the uncertainty associated with conditional simulations that rely on SD through kriging.

  8. A quantile regression approach can reveal the effect of fruit and vegetable consumption on plasma homocysteine levels.

    PubMed

    Verly, Eliseu; Steluti, Josiane; Fisberg, Regina Mara; Marchioni, Dirce Maria Lobo

    2014-01-01

    A reduction in homocysteine concentration due to the use of supplemental folic acid is well recognized, although evidence of the same effect for natural folate sources, such as fruits and vegetables (FV), is lacking. Traditional statistical approaches, which model only the conditional mean, provide no further information. As an alternative, quantile regression allows for the exploration of the effects of covariates through percentiles of the conditional distribution of the dependent variable. To investigate how the associations of FV intake with plasma total homocysteine (tHcy) differ through percentiles in the distribution using quantile regression. A cross-sectional population-based survey was conducted among 499 residents of Sao Paulo City, Brazil. The participants provided food intake and fasting blood samples. Fruit and vegetable intake was predicted by adjusting for day-to-day variation using a proper measurement error model. We performed a quantile regression to verify the association between tHcy and the predicted FV intake. The predicted values of tHcy for each percentile model were calculated considering an increase of 200 g in the FV intake for each percentile. The results showed that tHcy was inversely associated with FV intake when assessed by linear regression, whereas the association differed across percentiles when using quantile regression. The relationship with FV consumption was inverse and significant for almost all percentiles of tHcy. The coefficients increased as the percentile of tHcy increased. A simulated increase of 200 g in the FV intake could decrease the tHcy levels in the overall percentiles, but the higher percentiles of tHcy benefited more. Using this statistical approach, the study confirms that the effect of FV intake on lowering tHcy depends on the level of tHcy. From a public health point of view, encouraging people to increase FV intake would benefit people with high levels of tHcy.
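
    The pattern of coefficients growing with the percentile is what linear quantile regression is built to detect. As an illustrative sketch (not the study's code; data synthetic), a linear quantile fit can be obtained by subgradient descent on the pinball loss; when the spread of the response grows with the covariate, the fitted slope increases with the quantile level:

    ```python
    import random

    def fit_linear_quantile(xs, ys, tau, lr=0.05, epochs=5000):
        """Linear quantile regression via full-batch subgradient descent on the
        pinball loss (a simple sketch; production software solves a linear program)."""
        a = b = 0.0                                # intercept, slope
        n = len(xs)
        for t in range(1, epochs + 1):
            ga = gb = 0.0
            for x, y in zip(xs, ys):
                r = y - (a + b * x)
                g = -tau if r > 0 else (1 - tau)   # subgradient of the pinball loss
                ga += g
                gb += g * x
            step = lr / t ** 0.5                   # diminishing step size
            a -= step * ga / n
            b -= step * gb / n
        return a, b

    random.seed(0)
    xs = [i / 5 for i in range(1, 101)]                      # covariate, 0.2 .. 20
    ys = [2 + 0.5 * x + x * random.random() for x in xs]     # spread grows with x

    _, slope_lo = fit_linear_quantile(xs, ys, 0.1)
    _, slope_hi = fit_linear_quantile(xs, ys, 0.9)
    print(slope_lo < slope_hi)   # upper-quantile slope is steeper under heteroscedasticity
    ```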

  9. Quantile equivalence to evaluate compliance with habitat management objectives

    USGS Publications Warehouse

    Cade, Brian S.; Johnson, Pamela R.

    2011-01-01

    Equivalence estimated with linear quantile regression was used to evaluate compliance with habitat management objectives at Arapaho National Wildlife Refuge based on monitoring data collected in upland (5,781 ha; n = 511 transects) and riparian and meadow (2,856 ha, n = 389 transects) habitats from 2005 to 2008. Quantiles were used because the management objectives specified proportions of the habitat area that needed to comply with vegetation criteria. The linear model was used to obtain estimates that were averaged across 4 y. The equivalence testing framework allowed us to interpret confidence intervals for estimated proportions with respect to intervals of vegetative criteria (equivalence regions) in either a liberal, benefit-of-doubt or conservative, fail-safe approach associated with minimizing alternative risks. Simple Boolean conditional arguments were used to combine the quantile equivalence results for individual vegetation components into a joint statement for the multivariable management objectives. For example, management objective 2A required at least 809 ha of upland habitat with a shrub composition ≥0.70 sagebrush (Artemisia spp.), 20–30% canopy cover of sagebrush ≥25 cm in height, ≥20% canopy cover of grasses, and ≥10% canopy cover of forbs on average over 4 y. Shrub composition and canopy cover of grass each were readily met on >3,000 ha under either conservative or liberal interpretations of sampling variability. However, there were only 809–1,214 ha (conservative to liberal) with ≥10% forb canopy cover and 405–1,098 ha with 20–30% canopy cover of sagebrush ≥25 cm in height. Only 91–180 ha of uplands simultaneously met criteria for all four components, primarily because canopy cover of sagebrush and forbs was inversely related when considered at the spatial scale (30 m) of a sample transect.
We demonstrate how the quantile equivalence analyses also can help refine the numerical specification of habitat objectives and explore specification of spatial scales for objectives with respect to sampling scales used to evaluate those objectives.
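
    The "simple Boolean conditional arguments" combining per-component equivalence results can be sketched directly. All confidence intervals below are hypothetical, not the refuge estimates; the component criteria mirror objective 2A, and the liberal vs. conservative readings differ in whether the interval must merely overlap, or sit entirely inside, the equivalence region:

    ```python
    def liberal(ci, region):
        """Benefit-of-doubt: the confidence interval overlaps the equivalence region."""
        return ci[0] <= region[1] and region[0] <= ci[1]

    def conservative(ci, region):
        """Fail-safe: the whole confidence interval lies inside the equivalence region."""
        return region[0] <= ci[0] and ci[1] <= region[1]

    # hypothetical quantile-estimate CIs vs objective-2A criteria (% cover / proportion)
    components = {
        "shrub_composition": ((0.72, 0.81), (0.70, 1.00)),   # >= 0.70 sagebrush
        "sagebrush_cover":   ((18.0, 24.0), (20.0, 30.0)),   # 20-30% canopy, >= 25 cm tall
        "grass_cover":       ((22.0, 27.0), (20.0, 100.0)),  # >= 20% canopy
        "forb_cover":        ((8.0, 12.0),  (10.0, 100.0)),  # >= 10% canopy
    }
    meets_liberal = all(liberal(ci, reg) for ci, reg in components.values())
    meets_conservative = all(conservative(ci, reg) for ci, reg in components.values())
    print(meets_liberal, meets_conservative)   # True False
    ```

    The joint statement is simply the conjunction, which is why a single marginal component (here forb and sagebrush cover) can fail the conservative reading while the liberal reading still passes.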

  10. Predicting Word Reading Ability: A Quantile Regression Study

    ERIC Educational Resources Information Center

    McIlraith, Autumn L.

    2018-01-01

    Predictors of early word reading are well established. However, it is unclear if these predictors hold for readers across a range of word reading abilities. This study used quantile regression to investigate predictive relationships at different points in the distribution of word reading. Quantile regression analyses used preschool and…

  11. Disturbance automated reference toolset (DART): Assessing patterns in ecological recovery from energy development on the Colorado Plateau

    USGS Publications Warehouse

    Nauman, Travis; Duniway, Michael C.; Villarreal, Miguel; Poitras, Travis

    2017-01-01

    A new disturbance automated reference toolset (DART) was developed to monitor human land surface impacts using soil-type and ecological context. DART identifies reference areas with similar soils, topography, and geology; and compares the disturbance condition to the reference area condition using a quantile-based approach based on a satellite vegetation index. DART was able to represent 26–55% of variation of relative differences in bare ground and 26–41% of variation in total foliar cover when comparing sites with nearby ecological reference areas using the Soil Adjusted Total Vegetation Index (SATVI). Assessment of ecological recovery at oil and gas pads on the Colorado Plateau with DART revealed that more than half of well-pads were below the 25th percentile of reference areas. Machine learning trend analysis of poorly recovering well-pads (quantile < 0.23) had out-of-bag error rates between 37 and 40% indicating moderate association with environmental and management variables hypothesized to influence recovery. Well-pads in grasslands (median quantile [MQ] = 13%), blackbrush (Coleogyne ramosissima) shrublands (MQ = 18%), arid canyon complexes (MQ = 18%), warmer areas with more summer-dominated precipitation, and state administered areas (MQ = 12%) had low recovery rates. Results showcase the usefulness of DART for assessing discrete surface land disturbances, and highlight the need for more targeted rehabilitation efforts at oil and gas well-pads in the arid southwest US.
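
    The core quantile-based comparison DART performs, ranking a disturbed site's vegetation index within the distribution of its matched reference areas, reduces to an empirical CDF lookup. A minimal sketch with hypothetical SATVI values:

    ```python
    import bisect

    def reference_quantile(value, reference_values):
        """Empirical quantile of `value` within the reference-area distribution:
        the fraction of reference observations at or below it."""
        s = sorted(reference_values)
        return bisect.bisect_right(s, value) / len(s)

    # hypothetical SATVI values from ecologically matched reference areas
    reference_satvi = [0.30, 0.32, 0.35, 0.36, 0.38, 0.40, 0.41, 0.43, 0.45, 0.47]
    well_pad_satvi = 0.33
    q = reference_quantile(well_pad_satvi, reference_satvi)
    print(q)   # 0.2: below the 25th percentile of reference areas -> poorly recovered
    ```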

  12. Disturbance automated reference toolset (DART): Assessing patterns in ecological recovery from energy development on the Colorado Plateau.

    PubMed

    Nauman, Travis W; Duniway, Michael C; Villarreal, Miguel L; Poitras, Travis B

    2017-04-15

    A new disturbance automated reference toolset (DART) was developed to monitor human land surface impacts using soil-type and ecological context. DART identifies reference areas with similar soils, topography, and geology; and compares the disturbance condition to the reference area condition using a quantile-based approach based on a satellite vegetation index. DART was able to represent 26-55% of variation of relative differences in bare ground and 26-41% of variation in total foliar cover when comparing sites with nearby ecological reference areas using the Soil Adjusted Total Vegetation Index (SATVI). Assessment of ecological recovery at oil and gas pads on the Colorado Plateau with DART revealed that more than half of well-pads were below the 25th percentile of reference areas. Machine learning trend analysis of poorly recovering well-pads (quantile < 0.23) had out-of-bag error rates between 37 and 40% indicating moderate association with environmental and management variables hypothesized to influence recovery. Well-pads in grasslands (median quantile [MQ] = 13%), blackbrush (Coleogyne ramosissima) shrublands (MQ = 18%), arid canyon complexes (MQ = 18%), warmer areas with more summer-dominated precipitation, and state administered areas (MQ = 12%) had low recovery rates. Results showcase the usefulness of DART for assessing discrete surface land disturbances, and highlight the need for more targeted rehabilitation efforts at oil and gas well-pads in the arid southwest US. Published by Elsevier B.V.

  13. A Quantile Regression Approach to Understanding the Relations Between Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students

    PubMed Central

    Tighe, Elizabeth L.; Schatschneider, Christopher

    2015-01-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in Adult Basic Education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. PMID:25351773

  14. Applying quantile regression for modeling equivalent property damage only crashes to identify accident blackspots.

    PubMed

    Washington, Simon; Haque, Md Mazharul; Oh, Jutaek; Lee, Dongmin

    2014-05-01

    Hot spot identification (HSID) aims to identify potential sites-roadway segments, intersections, crosswalks, interchanges, ramps, etc.-with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high risk site as safe (false negative), and consequently lead to the misuse of available public funds, to poor investment decisions, and to inefficient risk management practice. Current HSID methods suffer from issues like underreporting of minor injury and property damage only (PDO) crashes, challenges of accounting for crash severity in the methodology, and selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean like most methods in practice, which more closely corresponds with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression.
Application of a quantile regression model on equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to the society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional NB model in dealing with the preponderance-of-zeros problem and right-skewed datasets. Copyright © 2014 Elsevier Ltd. All rights reserved.
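
    The two building blocks, severity-weighted equivalent-PDO scores and a high-quantile cutoff, can be sketched as follows. The weights and crash counts are hypothetical (equivalency weights vary by jurisdiction), and the paper fits a quantile regression with covariates rather than the raw empirical quantile used here:

    ```python
    # hypothetical PDO-equivalency weights (fatal/serious crashes count as many PDOs)
    SEVERITY_WEIGHTS = {"fatal": 12.0, "serious": 3.0, "pdo": 1.0}

    def epdo(site):
        """Equivalent-PDO score: severity-weighted crash count for one site."""
        return sum(SEVERITY_WEIGHTS[k] * n for k, n in site.items())

    sites = [
        {"fatal": 0, "serious": 0, "pdo": 2},
        {"fatal": 0, "serious": 1, "pdo": 5},
        {"fatal": 1, "serious": 2, "pdo": 10},
        {"fatal": 0, "serious": 0, "pdo": 0},
        {"fatal": 0, "serious": 0, "pdo": 1},
    ]
    scores = [epdo(s) for s in sites]                  # [2.0, 8.0, 28.0, 0.0, 1.0]
    cutoff = sorted(scores)[int(0.8 * len(scores))]    # crude empirical 0.8 quantile
    hotspots = [i for i, s in enumerate(scores) if s >= cutoff]
    print(hotspots)   # [2]
    ```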

  15. A novel generalized normal distribution for human longevity and other negatively skewed data.

    PubMed

    Robertson, Henry T; Allison, David B

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by (1) an intuitively straightforward genesis; (2) closed forms for the pdf, cdf, mode, quantile, and hazard functions; and (3) accessibility to non-statisticians, based on its close relationship to the normal distribution.

  16. A Novel Generalized Normal Distribution for Human Longevity and other Negatively Skewed Data

    PubMed Central

    Robertson, Henry T.; Allison, David B.

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by (1) an intuitively straightforward genesis; (2) closed forms for the pdf, cdf, mode, quantile, and hazard functions; and (3) accessibility to non-statisticians, based on its close relationship to the normal distribution. PMID:22623974
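
    The closed-form quantile function highlighted here builds on the ordinary normal distribution, whose pdf, cdf, and quantile function are available directly in Python's standard library. The parameters below are hypothetical longevity-like values, for illustration only (the proposed distribution itself conditions the scale on attained age and is not implemented here):

    ```python
    from statistics import NormalDist

    d = NormalDist(mu=85, sigma=8)      # hypothetical modal age and spread
    print(d.inv_cdf(0.5))               # 85.0: median = mode = mean under symmetry
    print(round(d.inv_cdf(0.9), 2))     # the 0.9 quantile, in closed form
    ```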

  17. Influences of spatial and temporal variation on fish-habitat relationships defined by regression quantiles

    Treesearch

    Jason B. Dunham; Brian S. Cade; James W. Terrell

    2002-01-01

    We used regression quantiles to model potentially limiting relationships between the standing crop of cutthroat trout Oncorhynchus clarki and measures of stream channel morphology. Regression quantile models indicated that variation in fish density was inversely related to the width:depth ratio of streams but not to stream width or depth alone. The...

  18. Superquantile/CVaR Risk Measures: Second-Order Theory

    DTIC Science & Technology

    2015-07-31

    This report develops second-order superquantile risk minimization as well as superquantile regression, a proposed second-order version of quantile regression. Superquantiles are deeply tied to generalized regression; the joint formula (3) is central to quantile regression, a well-known alternative…

  19. An application of quantile random forests for predictive mapping of forest attributes

    Treesearch

    E.A. Freeman; G.G. Moisen

    2015-01-01

    Increasingly, random forest models are used in predictive mapping of forest attributes. Traditional random forests output the mean prediction from the random trees. Quantile regression forests (QRF) is an extension of random forests developed by Nicolai Meinshausen that provides non-parametric estimates of the median predicted value as well as prediction quantiles. It...
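
    The key idea of quantile regression forests, per Meinshausen, is that each leaf stores all training responses that fall into it rather than just their mean, so any quantile of the conditional distribution can be read off at prediction time. A one-split caricature of a single tree (hypothetical data, no forest averaging):

    ```python
    from collections import defaultdict

    def fit_stump(xs, ys, threshold):
        """A one-split 'tree' whose leaves keep every training response,
        giving a full conditional sample per leaf instead of a leaf mean."""
        leaves = defaultdict(list)
        for x, y in zip(xs, ys):
            leaves[x <= threshold].append(y)
        return leaves

    def predict_quantile(leaves, threshold, x, tau):
        """Predict the tau-th quantile of the responses stored in x's leaf."""
        ys = sorted(leaves[x <= threshold])
        return ys[min(int(tau * len(ys)), len(ys) - 1)]

    xs = [1, 2, 3, 10, 11, 12]
    ys = [5, 6, 7, 50, 60, 70]
    leaves = fit_stump(xs, ys, threshold=5)
    print(predict_quantile(leaves, 5, x=11, tau=0.5))   # 60: leaf median
    print(predict_quantile(leaves, 5, x=11, tau=0.9))   # 70: upper prediction quantile
    ```

    A real QRF pools the leaf samples across many randomized trees before taking the quantile; the single-tree version above only shows why keeping the leaf observations is what makes prediction quantiles possible.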

  20. An Affine Invariant Bivariate Version of the Sign Test.

    DTIC Science & Technology

    1987-06-01

    Key words: affine invariance, bivariate quantile, bivariate symmetry, generalized median, influence function, permutation test, normal efficiency… We calculate a bivariate version of the influence function; the resulting form is bounded, as is the case for the univariate sign test, and shows the behavior of the test in terms of a bivariate analogue of Hampel's (1974) influence function. The latter, though usually defined as a von Mises derivative of certain…

  1. Regularized quantile regression for SNP marker estimation of pig growth curves.

    PubMed

    Barroso, L M A; Nascimento, M; Nascimento, A C C; Silva, F F; Serão, N V L; Cruz, C D; Resende, M D V; Silva, F L; Azevedo, C F; Lopes, P S; Guimarães, S E F

    2017-01-01

    Genomic growth curves are generally defined only in terms of population mean; an alternative approach that has not yet been exploited in genomic analyses of growth curves is the Quantile Regression (QR). This methodology allows for the estimation of marker effects at different levels of the variable of interest. We aimed to propose and evaluate a regularized quantile regression for SNP marker effect estimation of pig growth curves, as well as to identify the chromosome regions of the most relevant markers and to estimate the genetic individual weight trajectory over time (genomic growth curve) under different quantiles (levels). The regularized quantile regression (RQR) enabled the discovery, at different levels of interest (quantiles), of the most relevant markers allowing for the identification of QTL regions. We found the same relevant markers simultaneously affecting different growth curve parameters (mature weight and maturity rate): two (ALGA0096701 and ALGA0029483) for RQR(0.2), one (ALGA0096701) for RQR(0.5), and one (ALGA0003761) for RQR(0.8). Three average genomic growth curves were obtained and the behavior was explained by the curve in quantile 0.2, which differed from the others. RQR allowed for the construction of genomic growth curves, which is the key to identifying and selecting the most desirable animals for breeding purposes. Furthermore, the proposed model enabled us to find, at different levels of interest (quantiles), the most relevant markers for each trait (growth curve parameter estimates) and their respective chromosomal positions (identification of new QTL regions for growth curves in pigs). These markers can be exploited under the context of marker assisted selection while aiming to change the shape of pig growth curves.
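
    The objective behind RQR combines the pinball loss at a chosen level with a penalty that shrinks marker effects. A minimal sketch of one common regularization choice, an L1 (lasso-type) penalty (the toy data and the unpenalized-intercept convention are assumptions for illustration, not the paper's specification):

    ```python
    def rqr_objective(tau, lam, y, X, beta):
        """Regularized quantile-regression objective: pinball loss at level tau
        plus an L1 penalty on the marker effects; beta[0] is the intercept
        and is left unpenalized."""
        total = 0.0
        for yi, xi in zip(y, X):
            r = yi - sum(b * x for b, x in zip(beta, xi))
            total += tau * r if r >= 0 else (tau - 1) * r
        return total + lam * sum(abs(b) for b in beta[1:])

    # toy data: 2 animals, design = [intercept, one SNP column] (hypothetical)
    print(rqr_objective(0.5, 0.1, [1, 2], [[1, 0], [1, 1]], [1, 1]))   # 0.1
    ```

    Minimizing this objective at several tau values (e.g. 0.2, 0.5, 0.8) yields the level-specific marker effects the abstract describes.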

  2. Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Gutiérrez, J. M.

    2018-05-01

    This work assesses the suitability of a first simple attempt for process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet, further research on the suitability of the application of similar approaches to the one considered here for other regions, seasons and/or variables is needed.
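
    Empirical quantile-quantile mapping conditioned on a circulation index can be sketched compactly: keep one model/observed climatology pair per index phase, locate the raw forecast's quantile in the matching model climatology, and read off the observation at that quantile. All climatology numbers below are hypothetical, and the published method uses Box-Cox power exponential margins rather than this bare empirical mapping:

    ```python
    import bisect

    def qq_map(value, model_clim, obs_clim):
        """Empirical quantile-quantile mapping: find value's quantile in the
        model climatology, return the observation at the same quantile."""
        m, o = sorted(model_clim), sorted(obs_clim)
        q = bisect.bisect_right(m, value) / len(m)
        return o[min(int(q * len(o)), len(o) - 1)]

    # hypothetical DJF precipitation climatologies, split by SOI phase
    clim = {
        "positive": (list(range(0, 100, 10)), list(range(0, 200, 20))),
        "negative": (list(range(0, 50, 5)),  list(range(0, 300, 30))),
    }

    def corrected(raw, soi):
        """SOI-conditioned correction: pick the climatology for the current phase."""
        phase = "positive" if soi >= 0 else "negative"
        model_clim, obs_clim = clim[phase]
        return qq_map(raw, model_clim, obs_clim)

    print(corrected(45, soi=1.2))    # 100
    print(corrected(20, soi=-0.5))   # 150
    ```

    The conditioning matters because the same raw value maps to different corrected values depending on which phase's bias structure applies.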

  3. A simulation study of nonparametric total deviation index as a measure of agreement based on quantile regression.

    PubMed

    Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael

    2016-01-01

    Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution of the difference is not normal, especially when it has a heavy tail.
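
    In its simplest covariate-free form, the nonparametric TDI is just an empirical quantile of the absolute paired differences (the paper estimates it via quantile regression so covariates can enter; the differences below are hypothetical):

    ```python
    def tdi(differences, p):
        """Nonparametric total deviation index: the p-th empirical quantile of
        the absolute differences between paired readings from two raters/methods."""
        a = sorted(abs(d) for d in differences)
        return a[min(int(p * len(a)), len(a) - 1)]

    # hypothetical paired differences (method A reading minus method B reading)
    diffs = [-0.4, 0.1, 0.2, -0.1, 0.3, 0.0, -0.2, 0.5, 0.1, -0.3]
    print(tdi(diffs, 0.8))   # 80% of paired readings differ by no more than this
    ```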

  4. A Quantile Regression Approach to Understanding the Relations Among Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students.

    PubMed

    Tighe, Elizabeth L; Schatschneider, Christopher

    2016-07-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82%-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. © Hammill Institute on Disabilities 2014.

  5. Desertification, salinization, and biotic homogenization in a dryland river ecosystem

    USGS Publications Warehouse

    Miyazono, S.; Patino, Reynaldo; Taylor, C.M.

    2015-01-01

    This study determined long-term changes in fish assemblages, river discharge, salinity, and local precipitation, and examined hydrological drivers of biotic homogenization in a dryland river ecosystem, the Trans-Pecos region of the Rio Grande/Rio Bravo del Norte (USA/Mexico). Historical (1977-1989) and current (2010-2011) fish assemblages were analyzed by rarefaction analysis (species richness), nonmetric multidimensional scaling (composition/variability), multiresponse permutation procedures (composition), and paired t-test (variability). Trends in hydrological conditions (1970s-2010s) were examined by Kendall tau and quantile regression, and associations between streamflow and specific conductance (salinity) by generalized linear models. Since the 1970s, species richness and variability of fish assemblages decreased in the Rio Grande below the confluence with the Rio Conchos (Mexico), a major tributary, but not above it. There was increased representation of lower-flow/higher-salinity tolerant species, thus making fish communities below the confluence taxonomically and functionally more homogeneous to those above it. Unlike findings elsewhere, this biotic homogenization was due primarily to changes in the relative abundances of native species. While Rio Conchos discharge was >2-fold higher than Rio Grande discharge above their confluence, Rio Conchos discharge decreased during the study period causing Rio Grande discharge below the confluence to also decrease. Rio Conchos salinity is lower than Rio Grande salinity above their confluence and, as Rio Conchos discharge decreased, it caused Rio Grande salinity below the confluence to increase (reduced dilution). Trends in discharge did not correspond to trends in precipitation except at extreme-high (90th quantile) levels. In conclusion, decreasing discharge from the Rio Conchos has led to decreasing flow and increasing salinity in the Rio Grande below the confluence.
This spatially uneven desertification and salinization of the Rio Grande has in turn led to a region-wide homogenization of hydrological conditions and of taxonomic and functional attributes of fish assemblages.

  6. Desertification, salinization, and biotic homogenization in a dryland river ecosystem.

    PubMed

    Miyazono, Seiji; Patiño, Reynaldo; Taylor, Christopher M

    2015-04-01

    This study determined long-term changes in fish assemblages, river discharge, salinity, and local precipitation, and examined hydrological drivers of biotic homogenization in a dryland river ecosystem, the Trans-Pecos region of the Rio Grande/Rio Bravo del Norte (USA/Mexico). Historical (1977-1989) and current (2010-2011) fish assemblages were analyzed by rarefaction analysis (species richness), nonmetric multidimensional scaling (composition/variability), multiresponse permutation procedures (composition), and paired t-test (variability). Trends in hydrological conditions (1970s-2010s) were examined by Kendall tau and quantile regression, and associations between streamflow and specific conductance (salinity) by generalized linear models. Since the 1970s, species richness and variability of fish assemblages decreased in the Rio Grande below the confluence with the Rio Conchos (Mexico), a major tributary, but not above it. There was increased representation of lower-flow/higher-salinity tolerant species, thus making fish communities below the confluence taxonomically and functionally more similar to those above it. Unlike findings elsewhere, this biotic homogenization was due primarily to changes in the relative abundances of native species. While Rio Conchos discharge was >2-fold higher than Rio Grande discharge above their confluence, Rio Conchos discharge decreased during the study period, causing Rio Grande discharge below the confluence to also decrease. Rio Conchos salinity is lower than Rio Grande salinity above their confluence and, as Rio Conchos discharge decreased, it caused Rio Grande salinity below the confluence to increase (reduced dilution). Trends in discharge did not correspond to trends in precipitation except at extreme-high (90th quantile) levels. In conclusion, decreasing discharge from the Rio Conchos has led to decreasing flow and increasing salinity in the Rio Grande below the confluence. 
This spatially uneven desertification and salinization of the Rio Grande has in turn led to a region-wide homogenization of hydrological conditions and of taxonomic and functional attributes of fish assemblages. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Examining the Reliability of Student Growth Percentiles Using Multidimensional IRT

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2015-01-01

    Student growth percentiles (SGPs, Betebenner, 2009) are used to locate a student's current score in a conditional distribution based on the student's past scores. Currently, following Betebenner (2009), quantile regression (QR) is most often used operationally to estimate the SGPs. Alternatively, multidimensional item response theory (MIRT) may…

  8. Regionalisation of a distributed method for flood quantiles estimation: Revaluation of local calibration hypothesis to enhance the spatial structure of the optimised parameter

    NASA Astrophysics Data System (ADS)

    Odry, Jean; Arnaud, Patrick

    2016-04-01

    The SHYREG method (Aubert et al., 2014) associates a stochastic rainfall generator and a rainfall-runoff model to produce rainfall and flood quantiles on a 1 km2 mesh covering the whole French territory. The rainfall generator is based on the description of rainy events by descriptive variables following probability distributions and is characterised by a high stability. This stochastic generator is fully regionalised, and the rainfall-runoff transformation is calibrated with a single parameter. Thanks to the stability of the approach, calibration can be performed against only flood quantiles associated with observed frequencies, which can be extracted from relatively short time series. The aggregation of SHYREG flood quantiles to the catchment scale is performed using an areal reduction factor technique unique on the whole territory. Past studies demonstrated the accuracy of SHYREG flood quantile estimation for catchments where flow data are available (Arnaud et al., 2015). Nevertheless, the parameter of the rainfall-runoff model is independently calibrated for each target catchment. As a consequence, this parameter plays a corrective role and compensates for approximations and modelling errors, which makes it difficult to identify its proper spatial pattern. It is an inherent objective of the SHYREG approach to be completely regionalised in order to provide a complete and accurate flood quantile database throughout France. Consequently, it appears necessary to identify the model configuration in which the calibrated parameter could be regionalised with acceptable performance. The revaluation of some of the method's hypotheses is a necessary step before the regionalisation. In particular, including or modifying the spatial variability of imposed parameters (like production and transfer reservoir size, base flow addition and quantile aggregation function) should lead to more realistic values of the only calibrated parameter. 
The objective of the work presented here is to develop a SHYREG evaluation scheme focusing on both local and regional performances. Indeed, it is necessary to maintain the accuracy of at-site flood quantile estimation while identifying a configuration leading to a satisfactory spatial pattern of the calibrated parameter. This ability to be regionalised can be appraised by the association of common regionalisation techniques and split-sample validation tests on a set of around 1,500 catchments representing the whole diversity of French physiography. Also, the presence of many nested catchments and a size-based split-sample validation make it possible to assess the relevance of the calibrated parameter's spatial structure inside the largest catchments. The application of this multi-objective evaluation leads to the selection of a version of SHYREG more suitable for regionalisation. References: Arnaud, P., Cantet, P., Aubert, Y., 2015. Relevance of an at-site flood frequency analysis method for extreme events based on stochastic simulation of hourly rainfall. Hydrological Sciences Journal: in press. DOI:10.1080/02626667.2014.965174 Aubert, Y., Arnaud, P., Ribstein, P., Fine, J.A., 2014. The SHYREG flow method-application to 1605 basins in metropolitan France. Hydrological Sciences Journal, 59(5): 993-1005. DOI:10.1080/02626667.2014.902061

  9. Implementation and Evaluation of the Streamflow Statistics (StreamStats) Web Application for Computing Basin Characteristics and Flood Peaks in Illinois

    USGS Publications Warehouse

    Ishii, Audrey L.; Soong, David T.; Sharpe, Jennifer B.

    2010-01-01

    Illinois StreamStats (ILSS) is a Web-based application for computing selected basin characteristics and flood-peak quantiles based on the most recently (2010) published (Soong and others, 2004) regional flood-frequency equations at any rural stream location in Illinois. Limited streamflow statistics including general statistics, flow durations, and base flows also are available for U.S. Geological Survey (USGS) streamflow-gaging stations. ILSS can be accessed on the Web at http://streamstats.usgs.gov/ by selecting the State Applications hyperlink and choosing Illinois from the pull-down menu. ILSS was implemented for Illinois by obtaining and projecting ancillary geographic information system (GIS) coverages; populating the StreamStats database with streamflow-gaging station data; hydroprocessing the 30-meter digital elevation model (DEM) for Illinois to conform to streams represented in the National Hydrographic Dataset 1:100,000 stream coverage; and customizing the Web-based Extensible Markup Language (XML) programs for computing basin characteristics for Illinois. The basin characteristics computed by ILSS then were compared to the basin characteristics used in the published study, and adjustments were applied to the XML algorithms for slope and basin length. Testing of ILSS was accomplished by comparing flood quantiles computed by ILSS at an approximately random sample of 170 streamflow-gaging stations with the published flood quantile estimates. Differences between the log-transformed flood quantiles were not statistically significant at the 95-percent confidence level for the State as a whole, nor by the regions determined by each equation, except for region 1, in the northwest corner of the State. In region 1, the average difference in flood quantile estimates ranged from 3.76 percent for the 2-year flood quantile to 4.27 percent for the 500-year flood quantile. 
The total number of stations in region 1 was small (21) and the mean difference is not large (less than one-tenth of the average prediction error for the regression-equation estimates). The sensitivity of the flood-quantile estimates to differences in the computed basin characteristics is determined and presented in tables. A test of usage consistency was conducted by having at least 7 new users compute flood quantile estimates at 27 locations. The average maximum deviation of the estimate from the mode value at each site was 1.31 percent after four mislocated sites were removed. A comparison of manual 100-year flood-quantile computations with ILSS at 34 sites indicated no statistically significant difference. ILSS appears to be an accurate, reliable, and effective tool for flood-quantile estimates.

  10. Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

    2013-09-01

    The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
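    The quantile-to-quantile transform mentioned above (often called quantile mapping) replaces each raw forecast value with the observation at the matching climatological quantile. A minimal sketch with invented climatologies, not the paper's operational implementation:

    ```python
    import bisect

    def quantile_map(value, forecast_climatology, observed_climatology):
        """Map a raw forecast value to the observed value at the same
        empirical (non-exceedance) quantile."""
        fc = sorted(forecast_climatology)
        ob = sorted(observed_climatology)
        # empirical non-exceedance probability of the forecast value
        p = bisect.bisect_right(fc, value) / (len(fc) + 1)
        # read that same probability off the observed empirical CDF
        idx = min(int(p * len(ob)), len(ob) - 1)
        return ob[idx]

    # toy example: the forecast climatology runs ~2 units too wet,
    # so a raw forecast of 7 is pulled back toward the observations
    obs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    fcs = [x + 2 for x in obs]
    corrected = quantile_map(7, fcs, obs)
    ```

    Because the transform only matches marginal distributions, it removes unconditional bias (as the abstract notes) but cannot by itself fix conditional biases that vary with lead time or amount.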

  11. Effect of psychosocial stressors on patients with Crohn's disease: threatening life experiences and family relations.

    PubMed

    Slonim-Nevo, Vered; Sarid, Orly; Friger, Michael; Schwartz, Doron; Chernin, Elena; Shahar, Ilana; Sergienko, Ruslan; Vardi, Hillel; Rosenthal, Alexander; Mushkalo, Alexander; Dizengof, Vitaly; Ben-Yakov, Gil; Abu-Freha, Naim; Munteanu, Daniella; Gaspar, Nava; Eidelman, Leslie; Segal, Arik; Fich, Alexander; Greenberg, Dan; Odes, Shmuel

    2016-09-01

    Threatening life experiences and adverse family relations are major psychosocial stressors affecting mental and physical health in chronic illnesses, but their influence in Crohn's disease (CD) is unclear. We assessed whether these stressors would predict the psychological and medical condition of CD patients. Consecutive adult CD patients completed a series of instruments including demography, Patient Harvey-Bradshaw Index (P-HBI), Short Inflammatory Bowel Disease Questionnaire (SIBDQ), short-form survey instrument (SF-36), brief symptom inventory (BSI), family assessment device (FAD), and list of threatening life experiences (LTE). Associations of FAD and LTE with P-HBI, SIBDQ, SF-36, and BSI were examined by multiple linear and quantile regression analyses. The cohort included 391 patients, mean age 38.38±13.95 years, 59.6% women, with intermediate economic status. The median scores were as follows: P-HBI 4 (2-8), FAD 1.67 (1.3-2.1), LTE 1 (0-3), SF-36 physical health 43.75 (33.7-51.0), SF-36 mental health 42.99 (34.1-51.9), and BSI-Global Severity Index 0.81 (0.4-1.4). The SIBDQ was 47.27±13.9. LTE was associated with increased P-HBI in all quantiles and FAD in the 50% quantile. FAD and LTE were associated with reduced SIBDQ (P<0.001). Higher LTE was associated with lower SF-36 physical and mental health (P<0.001); FAD was associated with reduced mental health (P<0.001). FAD and LTE were associated positively with GSI in all quantiles; age was associated negatively. CD patients with more threatening life experiences and adverse family relations were less healthy both physically and mentally. Physicians offering patients sociopsychological therapy should relate to threatening life experiences and family relations.

  12. Association Between Dietary Intake and Function in Amyotrophic Lateral Sclerosis

    PubMed Central

    Nieves, Jeri W.; Gennings, Chris; Factor-Litvak, Pam; Hupf, Jonathan; Singleton, Jessica; Sharf, Valerie; Oskarsson, Björn; Fernandes Filho, J. Americo M.; Sorenson, Eric J.; D’Amico, Emanuele; Goetz, Ray; Mitsumoto, Hiroshi

    2017-01-01

    IMPORTANCE There is growing interest in the role of nutrition in the pathogenesis and progression of amyotrophic lateral sclerosis (ALS). OBJECTIVE To evaluate the associations between nutrients, individually and in groups, and ALS function and respiratory function at diagnosis. DESIGN, SETTING, AND PARTICIPANTS A cross-sectional baseline analysis of the Amyotrophic Lateral Sclerosis Multicenter Cohort Study of Oxidative Stress study was conducted from March 14, 2008, to February 27, 2013, at 16 ALS clinics throughout the United States among 302 patients with ALS symptom duration of 18 months or less. EXPOSURES Nutrient intake, measured using a modified Block Food Frequency Questionnaire (FFQ). MAIN OUTCOMES AND MEASURES Amyotrophic lateral sclerosis function, measured using the ALS Functional Rating Scale–Revised (ALSFRS-R), and respiratory function, measured using percentage of predicted forced vital capacity (FVC). RESULTS Baseline data were available on 302 patients with ALS (median age, 63.2 years [interquartile range, 55.5–68.0 years]; 178 men and 124 women). Regression analysis of nutrients found that higher intakes of antioxidants and carotenes from vegetables were associated with higher ALSFRS-R scores or percentage FVC. Empirically weighted indices using the weighted quantile sum regression method of “good” micronutrients and “good” food groups were positively associated with ALSFRS-R scores (β [SE], 2.7 [0.69] and 2.9 [0.9], respectively) and percentage FVC (β [SE], 12.1 [2.8] and 11.5 [3.4], respectively) (all P < .001). Positive and significant associations with ALSFRS-R scores (β [SE], 1.5 [0.61]; P = .02) and percentage FVC (β [SE], 5.2 [2.2]; P = .02) for selected vitamins were found in exploratory analyses. CONCLUSIONS AND RELEVANCE Antioxidants, carotenes, fruits, and vegetables were associated with higher ALS function at baseline by regression of nutrient indices and weighted quantile sum regression analysis. 
We also demonstrated the usefulness of the weighted quantile sum regression method in the evaluation of diet. Those responsible for nutritional care of the patient with ALS should consider promoting fruit and vegetable intake since they are high in antioxidants and carotenes. PMID:27775751

  13. Quantile regression via vector generalized additive models.

    PubMed

    Yee, Thomas W

    2004-07-30

    One of the most popular methods for quantile regression is the LMS method of Cole and Green. The method naturally falls within a penalized likelihood framework, and consequently allows for considerable flexibility because all three parameters may be modelled by cubic smoothing splines. The model is also very understandable: for a given value of the covariate, the LMS method applies a Box-Cox transformation to the response in order to transform it to standard normality; to obtain the quantiles, an inverse Box-Cox transformation is applied to the quantiles of the standard normal distribution. The purposes of this article are three-fold. Firstly, LMS quantile regression is presented within the framework of the class of vector generalized additive models. This confers a number of advantages such as a unifying theory and estimation process. Secondly, a new LMS method based on the Yeo-Johnson transformation is proposed, which has the advantage that the response is not restricted to be positive. Lastly, this paper describes a software implementation of three LMS quantile regression methods in the S language. This includes the LMS-Yeo-Johnson method, which is estimated efficiently by a new numerical integration scheme. The LMS-Yeo-Johnson method is illustrated by way of a large cross-sectional data set from a New Zealand working population. Copyright 2004 John Wiley & Sons, Ltd.
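    The inverse Box-Cox step described above has a closed form: with Box-Cox power L, median M and coefficient of variation S at a given covariate value, the 100·alpha quantile is M(1 + L·S·z_alpha)^(1/L), with the lognormal limit at L = 0. A sketch with invented parameter values:

    ```python
    import math
    from statistics import NormalDist

    def lms_quantile(alpha, L, M, S):
        """Quantile of the response under the LMS model: apply the
        inverse Box-Cox transform to the standard-normal quantile."""
        z = NormalDist().inv_cdf(alpha)   # standard-normal quantile z_alpha
        if abs(L) < 1e-12:                # L -> 0 limiting (lognormal) case
            return M * math.exp(S * z)
        return M * (1.0 + L * S * z) ** (1.0 / L)

    # alpha = 0.5 gives z = 0, so the median M is returned for any L
    q50 = lms_quantile(0.50, L=0.7, M=12.0, S=0.1)
    q90 = lms_quantile(0.90, L=0.7, M=12.0, S=0.1)  # lies above the median
    ```

    In the full method, L, M and S are themselves smooth functions of the covariate (cubic smoothing splines), so the formula is evaluated pointwise along the covariate axis.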

  14. Quantile uncertainty and value-at-risk model risk.

    PubMed

    Alexander, Carol; Sarabia, José María

    2012-08-01

    This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks. © 2012 Society for Risk Analysis.
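    The quantile estimate underlying Value-at-Risk can be illustrated with the simplest estimator, historical simulation: the empirical alpha-quantile of the profit-and-loss series, reported as a positive loss. A toy sketch of that baseline only, not the authors' model-risk adjustment:

    ```python
    def value_at_risk(pnl, alpha=0.01):
        """Historical-simulation VaR: the alpha-quantile of daily P&L,
        sign-flipped so the result reads as a positive loss."""
        s = sorted(pnl)
        idx = int(alpha * len(s))   # lower empirical quantile index
        return -s[idx]

    # 100 equally likely daily P&L outcomes from -50 to +49
    pnl = list(range(-50, 50))
    var95 = value_at_risk(pnl, alpha=0.05)   # 5% quantile of P&L
    ```

    The article's contribution sits on top of such an estimate: the quantile is adjusted relative to a benchmark model, and the gap determines the regulatory capital add-on.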

  15. Ecological limit functions relating fish community response to hydrologic departures of the ecological flow regime in the Tennessee River basin, United States

    USGS Publications Warehouse

    Knight, Rodney R.; Murphy, Jennifer C.; Wolfe, William J.; Saylor, Charles F.; Wales, Amy K.

    2014-01-01

    Ecological limit functions relating streamflow and aquatic ecosystems remain elusive despite decades of research. We investigated functional relationships between species richness and changes in streamflow characteristics at 662 fish sampling sites in the Tennessee River basin. Our approach included the following: (1) a brief summary of relevant literature on functional relations between fish and streamflow, (2) the development of ecological limit functions that describe the strongest discernible relationships between fish species richness and streamflow characteristics, (3) the evaluation of proposed definitions of hydrologic reference conditions, and (4) an investigation of the internal structures of wedge-shaped distributions underlying ecological limit functions. Twenty-one ecological limit functions were developed across three ecoregions that relate the species richness of 11 fish groups and departures from hydrologic reference conditions using multivariate and quantile regression methods. Each negatively sloped function is described using up to four streamflow characteristics expressed in terms of cumulative departure from hydrologic reference conditions. Negative slopes indicate that increased departure results in decreased species richness. Sites with the highest measured fish species richness generally had near-reference hydrologic conditions for a given ecoregion. Hydrology did not generally differ between sites with the highest and lowest fish species richness, indicating that other environmental factors likely limit species richness at sites with reference hydrology. Use of ecological limit functions to make decisions regarding proposed hydrologic regime changes, although commonly presented as a management tool, is not as straightforward or informative as often assumed. 
We contend that statistical evaluation of the internal wedge structure below limit functions may provide a probabilistic understanding of how aquatic ecology is influenced by altered hydrology and may serve as the basis for evaluating the potential effect of proposed hydrologic changes.

  16. Estimating risks to aquatic life using quantile regression

    USGS Publications Warehouse

    Schmidt, Travis S.; Clements, William H.; Cade, Brian S.

    2012-01-01

    One of the primary goals of biological assessment is to assess whether contaminants or other stressors limit the ecological potential of running waters. It is important to interpret responses to contaminants relative to other environmental factors, but necessity or convenience limit quantification of all factors that influence ecological potential. In these situations, the concept of limiting factors is useful for data interpretation. We used quantile regression to measure risks to aquatic life exposed to metals by including all regression quantiles (τ = 0.05–0.95, by increments of 0.05), not just the upper limit of density (e.g., 90th quantile). We measured population densities (individuals/0.1 m2) of 2 mayflies (Rhithrogena spp., Drunella spp.) and a caddisfly (Arctopsyche grandis), aqueous metal mixtures (Cd, Cu, Zn), and other limiting factors (basin area, site elevation, discharge, temperature) at 125 streams in Colorado. We used a model selection procedure to test which factor was most limiting to density. Arctopsyche grandis was limited by other factors, whereas metals limited most quantiles of density for the 2 mayflies. Metals reduced mayfly densities most at sites where other factors were not limiting. Where other factors were limiting, low mayfly densities were observed regardless of metal concentrations. Metals affected mayfly densities most at quantiles above the mean and not just at the upper limit of density. Risk models developed from quantile regression showed that mayfly densities observed at background metal concentrations are improbable when metal mixtures are at US Environmental Protection Agency criterion continuous concentrations. We conclude that metals limit potential density, not realized average density. The most obvious effects on mayfly populations were at upper quantiles and not mean density. 
Therefore, we suggest that policy developed from mean-based measures of effects may not be as useful as policy based on the concept of limiting factors.
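    The estimator behind each of these regression quantiles is the check (pinball) loss. A minimal sketch with a toy sample, showing that minimizing the check loss recovers the empirical τ-quantile (the intercept-only special case of quantile regression; the data are invented):

    ```python
    def pinball_loss(tau, y, q):
        """Check loss summed over the sample: the objective that
        quantile regression minimizes at quantile level tau."""
        return sum((tau - (yi < q)) * (yi - q) for yi in y)

    def sample_quantile(tau, y):
        """Minimize the check loss over candidate values drawn from the
        sample; the minimizer is an empirical tau-quantile of y."""
        return min(sorted(y), key=lambda q: pinball_loss(tau, y, q))

    y = list(range(1, 11))             # toy densities 1..10
    q90 = sample_quantile(0.90, y)     # upper quantile, cf. tau = 0.90
    q05 = sample_quantile(0.05, y)     # lower quantile, cf. tau = 0.05
    ```

    With covariates added, q becomes a linear function of the predictors and the same loss is minimized over its coefficients, which is why the fitted surfaces differ across τ.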

  17. Quantile Regression in the Study of Developmental Sciences

    PubMed Central

    Petscher, Yaacov; Logan, Jessica A. R.

    2014-01-01

    Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome, but across multiple points of the outcome’s distribution. Using data from the High School and Beyond and U.S. Sustained Effects Study databases, quantile regression is demonstrated and contrasted with linear regression when considering models with: (a) one continuous predictor, (b) one dichotomous predictor, (c) a continuous and a dichotomous predictor, and (d) a longitudinal application. Results from each example exhibited the differential inferences which may be drawn using linear or quantile regression. PMID:24329596

  18. Managing more than the mean: Using quantile regression to identify factors related to large elk groups

    USGS Publications Warehouse

    Brennan, Angela K.; Cross, Paul C.; Creely, Scott

    2015-01-01

    Synthesis and applications. Our analysis of elk group size distributions using quantile regression suggests that private land, irrigation, open habitat, elk density and wolf abundance can affect large elk group sizes. Thus, to manage larger groups by removal or dispersal of individuals, we recommend incentivizing hunting on private land (particularly if irrigated) during the regular and late hunting seasons, promoting tolerance of wolves on private land (if elk aggregate in these areas to avoid wolves) and creating more winter range and varied habitats. Relationships to the variables of interest also differed by quantile, highlighting the importance of using quantile regression to examine response variables more completely to uncover relationships important to conservation and management.

  19. Asymmetric impact of rainfall on India's food grain production: evidence from quantile autoregressive distributed lag model

    NASA Astrophysics Data System (ADS)

    Pal, Debdatta; Mitra, Subrata Kumar

    2018-01-01

    This study used a quantile autoregressive distributed lag (QARDL) model to capture the asymmetric impact of rainfall on food production in India. It was found that the coefficient corresponding to rainfall in the QARDL increased till the 75th quantile and started decreasing thereafter, though it remained in positive territory. Another interesting finding is that at the 90th quantile and above, the coefficient of rainfall, though it remained positive, was not statistically significant; therefore, the benefit of high rainfall on crop production was not conclusive. However, the impact of other determinants, such as fertilizer and pesticide consumption, is quite uniform over the whole range of the distribution of food grain production.

  20. Bayesian estimation of extreme flood quantiles using a rainfall-runoff model and a stochastic daily rainfall generator

    NASA Astrophysics Data System (ADS)

    Costa, Veber; Fernandes, Wilson

    2017-11-01

    Extreme flood estimation has been a key research topic in hydrological sciences. Reliable estimates of such events are necessary as structures for flood conveyance are continuously evolving in size and complexity and, as a result, their failure-associated hazards become more and more pronounced. Due to this fact, several estimation techniques intended to improve flood frequency analysis and reduce uncertainty in extreme quantile estimation have been addressed in the literature in the last decades. In this paper, we develop a Bayesian framework for the indirect estimation of extreme flood quantiles from rainfall-runoff models. In the proposed approach, an ensemble of long daily rainfall series is simulated with a stochastic generator, which models extreme rainfall amounts with an upper-bounded distribution function, namely, the 4-parameter lognormal model. The rationale behind the generation model is that physical limits for rainfall amounts, and consequently for floods, exist and, by imposing an appropriate upper bound for the probabilistic model, more plausible estimates can be obtained for those rainfall quantiles with very low exceedance probabilities. Daily rainfall time series are converted into streamflows by routing each realization of the synthetic ensemble through a conceptual hydrologic model, the Rio Grande rainfall-runoff model. Calibration of parameters is performed through a nonlinear regression model, by means of the specification of a statistical model for the residuals that is able to accommodate autocorrelation, heteroscedasticity and nonnormality. By combining the outlined steps in a Bayesian structure of analysis, one is able to properly summarize the resulting uncertainty and estimate more accurate credible intervals for a set of flood quantiles of interest. The method for indirect estimation of extreme floods was applied to the American river catchment, at the Folsom dam, in the state of California, USA. 
Results show that most floods, including exceptionally large non-systematic events, were reasonably estimated with the proposed approach. In addition, by accounting for uncertainties in each modeling step, one is able to obtain a better understanding of the influential factors in large flood formation dynamics.

  1. 78 FR 16808 - Connect America Fund; High-Cost Universal Service Support

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-19

    ... to use one regression to generate a single cap on total loop costs for each study area. A single cap.... * * * A preferable, and simpler, approach would be to develop one conditional quantile model for aggregate.... Total universal service support for such carriers was approaching $2 billion annually--more than 40...

  2. Flood quantile estimation at ungauged sites by Bayesian networks

    NASA Astrophysics Data System (ADS)

    Mediero, L.; Santillán, D.; Garrote, L.

    2012-04-01

    Estimating flood quantiles at a site for which no observed measurements are available is essential for water resources planning and management. Ungauged sites have no observations of the magnitude of floods, but some site and basin characteristics are known. The most common technique used is multiple regression analysis, which relates physical and climatic basin characteristics to flood quantiles. Regression equations are fitted from flood frequency data and basin characteristics at gauged sites. Regression equations are a rigid technique that assumes linear relationships between variables and cannot take measurement errors into account. In addition, the prediction intervals are estimated in a very simplistic way from the variance of the residuals in the estimated model. Bayesian networks are a probabilistic computational structure taken from the field of Artificial Intelligence, which has been widely and successfully applied to many scientific fields like medicine and informatics, but application to the field of hydrology is recent. Bayesian networks infer the joint probability distribution of several related variables from observations through nodes, which represent random variables, and links, which represent causal dependencies between them. A Bayesian network is more flexible than regression equations, as it captures non-linear relationships between variables. In addition, the probabilistic nature of Bayesian networks allows taking the different sources of estimation uncertainty into account, as they give a probability distribution as a result. A homogeneous region in the Tagus Basin was selected as a case study. A regression equation was fitted taking the basin area, the annual maximum 24-hour rainfall for a given recurrence interval and the mean height as explanatory variables. Flood quantiles at ungauged sites were estimated by Bayesian networks. Bayesian networks need to be learnt from a sufficiently large data set. 
As observational data are reduced, a stochastic generator of synthetic data was developed. Synthetic basin characteristics were randomised, keeping the statistical properties of observed physical and climatic variables in the homogeneous region. The synthetic flood quantiles were stochastically generated taking the regression equation as basis. The learnt Bayesian network was validated by the reliability diagram, the Brier Score and the ROC diagram, which are common measures used in the validation of probabilistic forecasts. Summarising, the flood quantile estimations through Bayesian networks supply information about the prediction uncertainty as a probability distribution function of discharges is given as result. Therefore, the Bayesian network model has application as a decision support for water resources and planning management.
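
    The regression-equation step described above can be sketched in a few lines. This is a hypothetical single-predictor illustration (a log-linear fit of a flood quantile against basin area only; the paper also uses 24-hour rainfall and mean height as predictors), with made-up synthetic data:

```python
import math
import random

random.seed(0)

# hypothetical gauged sites: basin area (km^2) and 100-year flood (m^3/s),
# generated from an assumed power law Q100 = 3 * A^0.7 with lognormal scatter
areas = [random.uniform(50, 2000) for _ in range(40)]
q100 = [3.0 * a ** 0.7 * math.exp(random.gauss(0, 0.15)) for a in areas]

# ordinary least squares on the log-log scale: log Q = b0 + b1 * log A
x = [math.log(a) for a in areas]
y = [math.log(q) for q in q100]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar

# flood quantile estimate at an "ungauged" site of 800 km^2
q_ungauged = math.exp(b0 + b1 * math.log(800.0))
```

    The fit recovers the assumed power-law exponent, which illustrates the abstract's point about rigidity: the relationship is forced to be linear in the logs, and prediction intervals would come only from the residual variance.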

  3. More green space is related to less antidepressant prescription rates in the Netherlands: A Bayesian geoadditive quantile regression approach.

    PubMed

    Helbich, Marco; Klein, Nadja; Roberts, Hannah; Hagedoorn, Paulien; Groenewegen, Peter P

    2018-06-20

    Exposure to green space seems to be beneficial for self-reported mental health. In this study we used an objective health indicator, namely antidepressant prescription rates. Current studies rely exclusively upon mean regression models assuming linear associations. It is, however, plausible that the presence of green space is non-linearly related to different quantiles of the outcome, antidepressant prescription rates. These restrictions may contribute to inconsistent findings. Our aims were: a) to assess antidepressant prescription rates in relation to green space, and b) to analyze how the relationship varies non-linearly across different quantiles of antidepressant prescription rates. We used cross-sectional data for the year 2014 at the municipality level in the Netherlands. Ecological Bayesian geoadditive quantile regressions were fitted for the 15%, 50%, and 85% quantiles to estimate green space-prescription rate correlations, controlling for physical activity levels, socio-demographics, urbanicity, etc. The results suggested that green space was overall inversely and non-linearly associated with antidepressant prescription rates. More importantly, the associations differed across the quantiles, although the variation was modest. Significant non-linearities were apparent: the associations were slightly positive in the lower quantile and strongly negative in the upper one. Our findings imply that an increased availability of green space within a municipality may contribute to a reduction in the number of antidepressant prescriptions dispensed. Green space is thus a central health and community asset, although a minimum level of 28% appears to be needed for health gains, with the highest effectiveness occurring at a municipality surface percentage above 79%. This inverse dose-dependent relation has important implications for setting future community-level health and planning policies. Copyright © 2018 Elsevier Inc. All rights reserved.
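
    As a rough illustration of why quantile-specific effects matter, the following sketch (synthetic data with invented effect sizes, not the paper's Bayesian geoadditive model) compares the 15%, 50% and 85% empirical quantiles of a simulated outcome between low and high green-space groups:

```python
import random

random.seed(1)

def rate(green):
    # assumed data-generating process: more green space shrinks the upper
    # tail of the outcome far more than the lower tail (invented effect)
    return 5.0 + random.expovariate(1.0) * (3.0 - 2.0 * green)  # green in [0, 1]

low = [rate(0.2) for _ in range(2000)]   # low green-space municipalities
high = [rate(0.8) for _ in range(2000)]  # high green-space municipalities

def quantile(data, tau):
    # empirical tau-quantile (the minimiser of the check loss)
    s = sorted(data)
    return s[int(tau * (len(s) - 1))]

# quantile-specific "effects" of high vs low green space
effects = {tau: quantile(high, tau) - quantile(low, tau)
           for tau in (0.15, 0.50, 0.85)}
```

    A mean regression would average these three very different effects into a single slope; quantile regression reports each of them separately.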

  4. Heritability Across the Distribution: An Application of Quantile Regression

    PubMed Central

    Petrill, Stephen A.; Hart, Sara A.; Schatschneider, Christopher; Thompson, Lee A.; Deater-Deckard, Kirby; DeThorne, Laura S.; Bartlett, Christopher

    2016-01-01

    We introduce a new method for analyzing twin data called quantile regression. Through the application presented here, quantile regression is able to assess the genetic and environmental etiology of any skill or ability, at multiple points in the distribution of that skill or ability. This method is compared to the Cherny et al. (Behav Genet 22:153–162, 1992) method in an application to four different reading-related outcomes in 304 pairs of first-grade same sex twins enrolled in the Western Reserve Reading Project. Findings across the two methods were similar; both indicated some variation across the distribution of the genetic and shared environmental influences on non-word reading. However, quantile regression provides more details about the location and size of the measured effect. Applications of the technique are discussed. PMID:21877231

  5. Analysis of Impact of Geographical Environment and Socio-economic Factors on the Spatial Distribution of Kaohsiung Dengue Fever Epidemic

    NASA Astrophysics Data System (ADS)

    Hsu, Wei-Yin; Wen, Tzai-Hung; Yu, Hwa-Lung

    2013-04-01

    Taiwan is located in subtropical and tropical regions with high temperature and high humidity in the summer. This kind of climatic condition is a hotbed for the propagation and spread of the dengue vector mosquito. Kaohsiung City has been the city worst affected by dengue fever epidemics in Taiwan. During the study period, from January 1998 to December 2011, the Taiwan CDC recorded 7071 locally acquired dengue cases in Kaohsiung City and 118 imported cases. Our research uses quantile regression to analyze the spatial distribution of the dengue epidemic and its correlation with geographic environmental factors and social factors in Kaohsiung. According to our statistics, agriculture and natural forest have a positive relation to dengue fever (5.5~34.39 and 3.91~15.52): the epidemic rises as the ratio of agriculture and natural forest increases. The residential ratio has a negative relation for quantiles 0.1 to 0.4 (-0.005~-0.78) and a positive relation for quantiles 0.5 to 0.9 (0.01~18.0). Mean income is also a significant socio-economic factor, with a negative relation to dengue fever (-0.01~-0.04). We conclude that the main factor affecting the degree of dengue fever in predilection areas is the residential proportion, while the ratio of agriculture and natural forest plays an important role in non-predilection areas. Moreover, the serious epidemic areas located by the regression model match the actual conditions in Kaohsiung. This model can be used to predict serious dengue epidemic areas and to provide references for health agencies.

  6. Quantifying Population-Level Risks Using an Individual-Based Model: Sea Otters, Harlequin Ducks, and the Exxon Valdez Oil Spill

    PubMed Central

    Harwell, Mark A; Gentile, John H; Parker, Keith R

    2012-01-01

    Ecological risk assessments need to advance beyond evaluating risks to individuals that are largely based on toxicity studies conducted on a few species under laboratory conditions, to assessing population-level risks to the environment, including considerations of variability and uncertainty. Two individual-based models (IBMs), recently developed to assess current risks to sea otters and seaducks in Prince William Sound more than 2 decades after the Exxon Valdez oil spill (EVOS), are used to explore population-level risks. In each case, the models had previously shown that there were essentially no remaining risks to individuals from polycyclic aromatic hydrocarbons (PAHs) derived from the EVOS. New sensitivity analyses are reported here in which hypothetical environmental exposures to PAHs were heuristically increased until assimilated doses reached toxicity reference values (TRVs) derived at the no-observed-adverse-effects and lowest-observed-adverse-effects levels (NOAEL and LOAEL, respectively). For the sea otters, this was accomplished by artificially increasing the number of sea otter pits that would intersect remaining patches of subsurface oil residues by orders of magnitude over actual estimated rates. Similarly, in the seaduck assessment, the PAH concentrations in the constituents of diet, sediments, and seawater were increased in proportion to their relative contributions to the assimilated doses by orders of magnitude over measured environmental concentrations, to reach the NOAEL and LOAEL thresholds. The stochastic IBMs simulated millions of individuals. From these outputs, frequency distributions were derived of assimilated doses for populations of 500 000 sea otters or seaducks in each of 7 or 8 classes, respectively. Doses to several selected quantiles were analyzed, ranging from the 1-in-1000th most-exposed individuals (99.9% quantile) to the median-exposed individuals (50% quantile). The resulting families of quantile curves provide the basis for characterizing the environmental thresholds below which no population-level effects could be detected and above which population-level effects would be expected to become manifest. This approach provides risk managers an enhanced understanding of the risks to populations under various conditions and assumptions, whether under hypothetically increased exposure regimes, as demonstrated here, or in situations in which actual exposures are near toxic effects levels. This study shows that individual-based models are especially amenable and appropriate for conducting population-level risk assessments, and that they can readily be used to answer questions about the risks to individuals and populations across a variety of exposure conditions. Integr Environ Assess Manag 2012; 8: 503–522. © 2012 SETAC PMID:22275071
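
    The quantile-extraction step from such a simulation can be sketched as follows. The lognormal dose distribution here is an assumption for illustration only; in the paper, the dose distributions come from detailed individual-based exposure modelling:

```python
import random

random.seed(2)

N = 500_000  # simulated population size, as in the paper
# hypothetical assimilated PAH doses with lognormal-like scatter
doses = sorted(random.lognormvariate(0.0, 1.0) for _ in range(N))

def quantile(sorted_data, tau):
    return sorted_data[int(tau * (len(sorted_data) - 1))]

median_dose = quantile(doses, 0.50)    # median-exposed individual
extreme_dose = quantile(doses, 0.999)  # 1-in-1000 most-exposed individual
ratio = extreme_dose / median_dose
```

    Comparing each quantile's dose against the NOAEL and LOAEL thresholds then shows which fraction of the population, if any, approaches a toxic effects level.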

  7. Quantifying population-level risks using an individual-based model: sea otters, Harlequin Ducks, and the Exxon Valdez oil spill.

    PubMed

    Harwell, Mark A; Gentile, John H; Parker, Keith R

    2012-07-01

    Ecological risk assessments need to advance beyond evaluating risks to individuals that are largely based on toxicity studies conducted on a few species under laboratory conditions, to assessing population-level risks to the environment, including considerations of variability and uncertainty. Two individual-based models (IBMs), recently developed to assess current risks to sea otters and seaducks in Prince William Sound more than 2 decades after the Exxon Valdez oil spill (EVOS), are used to explore population-level risks. In each case, the models had previously shown that there were essentially no remaining risks to individuals from polycyclic aromatic hydrocarbons (PAHs) derived from the EVOS. New sensitivity analyses are reported here in which hypothetical environmental exposures to PAHs were heuristically increased until assimilated doses reached toxicity reference values (TRVs) derived at the no-observed-adverse-effects and lowest-observed-adverse-effects levels (NOAEL and LOAEL, respectively). For the sea otters, this was accomplished by artificially increasing the number of sea otter pits that would intersect remaining patches of subsurface oil residues by orders of magnitude over actual estimated rates. Similarly, in the seaduck assessment, the PAH concentrations in the constituents of diet, sediments, and seawater were increased in proportion to their relative contributions to the assimilated doses by orders of magnitude over measured environmental concentrations, to reach the NOAEL and LOAEL thresholds. The stochastic IBMs simulated millions of individuals. From these outputs, frequency distributions were derived of assimilated doses for populations of 500,000 sea otters or seaducks in each of 7 or 8 classes, respectively. Doses to several selected quantiles were analyzed, ranging from the 1-in-1000th most-exposed individuals (99.9% quantile) to the median-exposed individuals (50% quantile). The resulting families of quantile curves provide the basis for characterizing the environmental thresholds below which no population-level effects could be detected and above which population-level effects would be expected to become manifest. This approach provides risk managers an enhanced understanding of the risks to populations under various conditions and assumptions, whether under hypothetically increased exposure regimes, as demonstrated here, or in situations in which actual exposures are near toxic effects levels. This study shows that individual-based models are especially amenable and appropriate for conducting population-level risk assessments, and that they can readily be used to answer questions about the risks to individuals and populations across a variety of exposure conditions. Copyright © 2012 SETAC.

  8. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
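
    A toy sketch of the underlying idea, not the authors' actual estimator or tuning: represent the median function as a kernel expansion f(x) = Σᵢ αᵢ K(xᵢ, x), fit the coefficients by crude subgradient descent on the check (pinball) loss with a squared-norm penalty, then threshold small coefficients so only a subset of the training points remains in the representation. All data and constants here are made up:

```python
import math
import random

random.seed(6)

n, tau, lam, step = 40, 0.5, 1e-3, 0.01
xs = [random.uniform(-2.0, 2.0) for _ in range(n)]
ys = [math.sin(x) + random.gauss(0.0, 0.2) for x in xs]

def K(a, b, gamma=2.0):  # RBF kernel
    return math.exp(-gamma * (a - b) ** 2)

Kmat = [[K(xi, xj) for xj in xs] for xi in xs]
alpha = [0.0] * n  # kernel coefficients

def objective():
    # check (pinball) loss at level tau, plus a squared-norm penalty
    loss = lam * sum(a * a for a in alpha)
    for j in range(n):
        r = ys[j] - sum(alpha[i] * Kmat[j][i] for i in range(n))
        loss += r * (tau if r > 0 else tau - 1.0)
    return loss

loss_start = objective()
for _ in range(200):  # fixed-step subgradient descent
    resid = [ys[j] - sum(alpha[i] * Kmat[j][i] for i in range(n))
             for j in range(n)]
    psi = [tau if r > 0 else tau - 1.0 for r in resid]
    for i in range(n):
        g = -sum(psi[j] * Kmat[j][i] for j in range(n)) + 2.0 * lam * alpha[i]
        alpha[i] -= step * g
loss_end = objective()

# "data sparsity": keep only kernel functions with non-negligible weight
kept = [i for i in range(n) if abs(alpha[i]) > 0.01]
```

    The thresholding in the last line is the data selection step the abstract describes: the fitted quantile function then needs only the training points in `kept`, rather than all n of them.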

  9. Forecasting peak asthma admissions in London: an application of quantile regression models.

    PubMed

    Soyiri, Ireneous N; Reidpath, Daniel D; Sarran, Christophe

    2013-07-01

    Asthma is a chronic condition of great public health concern globally. The associated morbidity, mortality and healthcare utilisation place an enormous burden on healthcare infrastructure and services. This study demonstrates a multistage quantile regression approach to predicting excess demand for health care services, in the form of daily asthma admissions in London, using retrospective data from the Hospital Episode Statistics, weather and air quality. Trivariate quantile regression models (QRM) of daily asthma admissions were fitted to a 14-day range of lags of environmental factors, accounting for seasonality in a hold-in sample of the data. Representative lags were pooled to form multivariate predictive models, selected through a systematic backward stepwise reduction approach. Models were cross-validated using a hold-out sample of the data, and their respective root mean square error measures, sensitivity, specificity and predictive values compared. Two of the predictive models were able to detect extreme numbers of daily asthma admissions at sensitivity levels of 76% and 62%, with specificities of 66% and 76%. Their positive predictive values were slightly higher for the hold-out sample (29% and 28%) than for the hold-in model development sample (16% and 18%). QRMs can be used in multiple stages to select suitable variables for forecasting extreme asthma events. The associations between asthma and environmental factors, including temperature, ozone and carbon monoxide, can be exploited in predicting future events using QRMs.

  10. Forecasting peak asthma admissions in London: an application of quantile regression models

    NASA Astrophysics Data System (ADS)

    Soyiri, Ireneous N.; Reidpath, Daniel D.; Sarran, Christophe

    2013-07-01

    Asthma is a chronic condition of great public health concern globally. The associated morbidity, mortality and healthcare utilisation place an enormous burden on healthcare infrastructure and services. This study demonstrates a multistage quantile regression approach to predicting excess demand for health care services, in the form of daily asthma admissions in London, using retrospective data from the Hospital Episode Statistics, weather and air quality. Trivariate quantile regression models (QRM) of daily asthma admissions were fitted to a 14-day range of lags of environmental factors, accounting for seasonality in a hold-in sample of the data. Representative lags were pooled to form multivariate predictive models, selected through a systematic backward stepwise reduction approach. Models were cross-validated using a hold-out sample of the data, and their respective root mean square error measures, sensitivity, specificity and predictive values compared. Two of the predictive models were able to detect extreme numbers of daily asthma admissions at sensitivity levels of 76% and 62%, with specificities of 66% and 76%. Their positive predictive values were slightly higher for the hold-out sample (29% and 28%) than for the hold-in model development sample (16% and 18%). QRMs can be used in multiple stages to select suitable variables for forecasting extreme asthma events. The associations between asthma and environmental factors, including temperature, ozone and carbon monoxide, can be exploited in predicting future events using QRMs.
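
    The validation metrics quoted in this record (sensitivity, specificity, positive predictive value) reduce to a 2x2 classification of days. A minimal sketch with made-up flags:

```python
# observed marks days with extremely high admissions; flagged marks days a
# hypothetical quantile regression model predicted as extreme (made-up data)
observed = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
flagged  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0]

tp = sum(1 for o, f in zip(observed, flagged) if o == 1 and f == 1)
fn = sum(1 for o, f in zip(observed, flagged) if o == 1 and f == 0)
fp = sum(1 for o, f in zip(observed, flagged) if o == 0 and f == 1)
tn = sum(1 for o, f in zip(observed, flagged) if o == 0 and f == 0)

sensitivity = tp / (tp + fn)  # share of true extreme days detected
specificity = tn / (tn + fp)  # share of normal days correctly left unflagged
ppv = tp / (tp + fp)          # share of flagged days that were truly extreme
```

    Computing these on a hold-out sample, as the study does, guards against the optimism of evaluating a model on the same data used to build it.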

  11. A Simple Formula for Quantiles on the TI-82/83 Calculator.

    ERIC Educational Resources Information Center

    Eisner, Milton P.

    1997-01-01

    The concept of percentile is a fundamental part of every course in basic statistics. Many such courses are now taught to students and require them to have TI-82 or TI-83 calculators. The functions defined in these calculators enable students to easily find the percentiles of a discrete data set. (PVD)
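
    One common textbook convention for such a percentile formula (conventions differ; some interpolate between order statistics) can be sketched as:

```python
import math

def percentile(data, p):
    """pth percentile (0 < p <= 100) of a discrete data set: take the
    ceil(n * p / 100)-th smallest value (no interpolation)."""
    s = sorted(data)
    k = max(math.ceil(len(s) * p / 100), 1)
    return s[k - 1]

scores = [42, 78, 55, 91, 67, 73, 88, 60, 79, 85]
median_score = percentile(scores, 50)  # 5th smallest of 10 values
p90 = percentile(scores, 90)           # 9th smallest of 10 values
```

    This is the kind of one-line formula the article shows students how to evaluate on the calculator's sorted-list functions.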

  12. Adult Literacy, Heterogeneity and Returns to Schooling in Chile

    ERIC Educational Resources Information Center

    Patrinos, Harry Anthony; Sakellariou, Chris

    2015-01-01

    We examine the importance of adult functional literacy skills for individuals using a quantile regression methodology. The inclusion of the direct measure of basic skills reduces the return to schooling by 27%, equivalent to two additional years of schooling, while a one standard deviation increase in the score increases earnings by 20%. For those…

  13. Nonparametric methods for drought severity estimation at ungauged sites

    NASA Astrophysics Data System (ADS)

    Sadri, S.; Burn, D. H.

    2012-12-01

    The objective of frequency analysis is to estimate, for an extreme event such as drought severity or duration, the relationship between the magnitude of that event and the associated return period at a catchment. Neural networks and other artificial intelligence approaches to function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment, drought severities are extracted and fitted to a Pearson type III distribution, which provides the observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolation capacity.

  14. Non-inferiority tests for anti-infective drugs using control group quantiles.

    PubMed

    Fay, Michael P; Follmann, Dean A

    2016-12-01

    In testing for non-inferiority of anti-infective drugs, the primary endpoint is often the difference in the proportion of failures between the test and control group at a landmark time. The landmark time is chosen to approximately correspond to the qth historic quantile of the control group, and the non-inferiority margin is selected to be reasonable for the target level q. For designing these studies, a troubling issue is that the landmark time must be pre-specified, but there is no guarantee that the proportion of control failures at the landmark time will be close to the target level q. If the landmark time is far from the target control quantile, then the pre-specified non-inferiority margin may no longer be reasonable. Exact variable margin tests have been developed by Röhmel and Kieser to address this problem, but these tests can have poor power if the observed control failure rate at the landmark time is far from its historic value. We develop a new variable margin non-inferiority test where we continue sampling until a pre-specified proportion of failures, q, have occurred in the control group, where q is the target quantile level. The test does not require any assumptions on the failure time distributions, and hence no knowledge of the true qth control quantile for the study is needed. Our new test is exact and has power comparable to (or greater than) its competitors when the true control quantile from the study equals (or differs moderately from) its historic value. Our nivm R package performs the test and gives confidence intervals on the difference in failure rates at the true target control quantile. The tests can be applied to time to cure or other numeric variables as well. A substantial proportion of new anti-infective drugs being developed use non-inferiority tests in their development, and typically, a pre-specified landmark time and its associated difference margin are set at the design stage to match a specific target control quantile. If, through a changing standard of care or selection of a different population, the target quantile for the control group changes from its historic value, then the appropriateness of the pre-specified margin at the landmark time may be questionable. Our proposed test avoids this problem by sampling until a pre-specified proportion of the controls have failed. © The Author(s) 2016.

  15. Analysis of the Influence of Quantile Regression Model on Mainland Tourists' Service Satisfaction Performance

    PubMed Central

    Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen

    2014-01-01

    It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models. PMID:24574916

  16. Analysis of the influence of quantile regression model on mainland tourists' service satisfaction performance.

    PubMed

    Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen

    2014-01-01

    It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models.

  17. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
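
    The distinction this abstract emphasizes can be made concrete: the quantile of a mixture is found by inverting the mixture CDF (a weighted sum of component CDFs), and it is generally not a weighted combination of the component quantiles. A bisection sketch:

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mixture_quantile(tau, weights, mus, sigmas, lo=-50.0, hi=50.0):
    """tau-quantile of a finite normal mixture via bisection on its CDF."""
    def cdf(x):
        return sum(w * norm_cdf(x, m, s)
                   for w, m, s in zip(weights, mus, sigmas))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# symmetric two-component mixture 0.5*N(-2,1) + 0.5*N(2,1): median is 0 ...
q_med = mixture_quantile(0.50, [0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])
# ... but its 0.85-quantile is about 2.52, far from the weighted average
# (about 1.04) of the two component 0.85-quantiles
q85_mix = mixture_quantile(0.85, [0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])
q85_avg = 0.5 * (-2.0 + 1.0364) + 0.5 * (2.0 + 1.0364)
```

    A linear combination of normal random variables is again normal, so its quantiles are a linear combination of the component quantiles; a mixture of normal densities is not normal, and its quantiles must be computed numerically as above.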

  18. Environmental determinants of different blood lead levels in children: a quantile analysis from a nationwide survey.

    PubMed

    Etchevers, Anne; Le Tertre, Alain; Lucas, Jean-Paul; Bretin, Philippe; Oulhote, Youssef; Le Bot, Barbara; Glorennec, Philippe

    2015-01-01

    Blood lead levels (BLLs) have substantially decreased in recent decades in children in France. However, further reducing exposure is a public health goal because there is no clear toxicological threshold. The identification of the environmental determinants of BLLs, as well as risk factors associated with high BLLs, is important to update prevention strategies. We aimed to estimate the contribution of environmental sources of lead to different BLLs in children in France. We enrolled 484 children aged from 6 months to 6 years in a nationwide cross-sectional survey in 2008-2009. We measured lead concentrations in blood and environmental samples (water, soils, household settled dusts, paints, cosmetics and traditional cookware). We fitted two models: a multivariate generalized additive model on the geometric mean (GM), and a quantile regression model on the 10th, 25th, 50th, 75th and 90th quantiles of BLLs. The GM of BLLs was 13.8μg/L (=1.38μg/dL) (95% confidence interval (CI): 12.7-14.9) and the 90th quantile was 25.7μg/L (CI: 24.2-29.5). Household and common area dust, tap water, interior paint, ceramic cookware, traditional cosmetics, playground soil and dust, and environmental tobacco smoke were associated with the GM of BLLs. Household dust and tap water made the largest contributions to both the GM and the 90th quantile of BLLs. The concentration of lead in dust was positively correlated with all quantiles of BLLs, even at low concentrations. Lead concentrations in tap water above 5μg/L were also positively correlated with the GM, 75th and 90th quantiles of BLLs in children drinking tap water. Preventive actions must target household settled dust and tap water to reduce the BLLs of children in France. The use of traditional cosmetics should be avoided, whereas ceramic cookware should be limited to decorative purposes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Growth curves of preschool children in the northeast of iran: a population based study using quantile regression approach.

    PubMed

    Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad

    2013-01-14

    Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during the important early months of life. The objectives of this study were to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology, and to compare the results with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. A total of 70737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004, over 20 days, from those attending community clinics for routine health checks as part of a national survey. Anthropometric measurements were made by trained health staff using WHO methodology. A nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used to estimate the curves and normal values. The weight-for-age growth curves for boys and girls aged 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to those obtained by the semi-parametric LMS method on the same data. Among all age groups from 0 to 5 years, the median weights of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. The differences between the growth patterns of children living in the northeast of Iran and international references necessitate the use of local and regional growth charts, as international normal values may not properly identify the populations at risk for growth problems among Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for estimating reference curves and normal values.

  20. Growth Curves of Preschool Children in the Northeast of Iran: A Population Based Study Using Quantile Regression Approach

    PubMed Central

    Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad

    2013-01-01

    Introduction: Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during the important early months of life. The objectives of this study were to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology, and to compare the results with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. Methods: A total of 70737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004, over 20 days, from those attending community clinics for routine health checks as part of a national survey. Anthropometric measurements were made by trained health staff using WHO methodology. A nonparametric quantile regression method, based on local constant kernel estimation of conditional quantile curves, was used to estimate the curves and normal values. Results: The weight-for-age growth curves for boys and girls aged 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to those obtained by the semi-parametric LMS method on the same data. Among all age groups from 0 to 5 years, the median weights of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. Conclusion: The differences between the growth patterns of children living in the northeast of Iran and international references necessitate the use of local and regional growth charts, as international normal values may not properly identify the populations at risk for growth problems among Iranian children. Quantile regression (QR), a flexible method that does not require restrictive assumptions, is proposed for estimating reference curves and normal values. PMID:23618470
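
    A local constant kernel estimator of a conditional quantile, the method named in this record, can be sketched as a kernel-weighted empirical quantile. The age-weight data and bandwidth below are invented for illustration:

```python
import math
import random

random.seed(4)

# synthetic sample: age in months -> weight in kg, true median 3 + 0.2 * age
ages = [random.uniform(0.0, 60.0) for _ in range(3000)]
wt_kg = [3.0 + 0.2 * a + random.gauss(0.0, 1.0) for a in ages]

def local_quantile(x0, tau, h=3.0):
    """Local constant conditional quantile at age x0: a kernel-weighted
    empirical tau-quantile with Gaussian weights of bandwidth h."""
    w = [math.exp(-0.5 * ((a - x0) / h) ** 2) for a in ages]
    order = sorted(range(len(ages)), key=lambda i: wt_kg[i])
    target = tau * sum(w)
    cum = 0.0
    for i in order:
        cum += w[i]
        if cum >= target:
            return wt_kg[i]
    return wt_kg[order[-1]]

median_at_24 = local_quantile(24.0, 0.5)  # true value is 3 + 0.2*24 = 7.8
```

    Evaluating `local_quantile` on a grid of ages, for several tau levels, traces out the percentile curves of a growth chart without assuming any parametric form.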

  1. Spatial quantile regression using INLA with applications to childhood overweight in Malawi.

    PubMed

    Mtambo, Owen P L; Masangwi, Salule J; Kazembe, Lawrence N M

    2015-04-01

    Analyses of childhood overweight have mainly used mean regression. However, quantile regression is more appropriate, as it provides the flexibility to analyse the determinants of overweight at the quantiles of interest. The main objective of this study was to fit a Bayesian additive quantile regression model with structured spatial effects for childhood overweight in Malawi using the 2010 Malawi DHS data. Inference was fully Bayesian using the R-INLA package. The significant determinants of childhood overweight ranged from socio-demographic factors such as type of residence to child and maternal factors such as child age and maternal BMI. We observed significant positive structured spatial effects on childhood overweight in some districts of Malawi. We recommend that childhood malnutrition policy makers consider timely interventions based on the risk factors identified in this paper, including spatially targeted interventions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Quantile rank maps: a new tool for understanding individual brain development.

    PubMed

    Chen, Huaihou; Kelly, Clare; Castellanos, F Xavier; He, Ye; Zuo, Xi-Nian; Reiss, Philip T

    2015-05-01

    We propose a novel method for neurodevelopmental brain mapping that displays how an individual's values for a quantity of interest compare with age-specific norms. By estimating smoothly age-varying distributions at a set of brain regions of interest, we derive age-dependent region-wise quantile ranks for a given individual, which can be presented in the form of a brain map. Such quantile rank maps could potentially be used for clinical screening. Bootstrap-based confidence intervals are proposed for the quantile rank estimates. We also propose a recalibrated Kolmogorov-Smirnov test for detecting group differences in the age-varying distribution. This test is shown to be more robust to model misspecification than a linear regression-based test. The proposed methods are applied to brain imaging data from the Nathan Kline Institute Rockland Sample and from the Autism Brain Imaging Data Exchange (ABIDE) sample. Copyright © 2015 Elsevier Inc. All rights reserved.
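The core computation behind a quantile rank map can be sketched simply: given a reference (norm) sample appropriate to an individual's age and brain region, the quantile rank is the fraction of reference values at or below the individual's value, and a bootstrap gives a confidence interval. The sketch below uses a hypothetical norm sample; the paper's smoothly age-varying distribution estimator is more elaborate.

```python
import numpy as np

def quantile_rank(norm_values, value):
    """Empirical quantile rank: fraction of the reference sample <= value."""
    return float(np.mean(norm_values <= value))

def bootstrap_ci(norm_values, value, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the quantile rank."""
    rng = np.random.default_rng(seed)
    ranks = [quantile_rank(rng.choice(norm_values, norm_values.size), value)
             for _ in range(n_boot)]
    return np.quantile(ranks, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
norms = rng.normal(100.0, 15.0, 500)   # hypothetical age-matched reference sample
value = 115.0                          # an individual's region-wise measurement

print("quantile rank:", quantile_rank(norms, value))
print("95% CI:", bootstrap_ci(norms, value))
```

Repeating this per region of interest and colouring each region by its rank produces the brain-map presentation described above.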

  3. A quantile regression model for failure-time data with time-dependent covariates

    PubMed Central

    Gorfine, Malka; Goldberg, Yair; Ritov, Ya’acov

    2017-01-01

    Summary Since survival data occur over time, important covariates that we wish to consider often also change over time. Such covariates are referred to as time-dependent covariates. Quantile regression offers flexible modeling of survival data by allowing covariate effects to vary across quantiles. This article provides a novel quantile regression model accommodating time-dependent covariates for analyzing survival data subject to right censoring. Our simple estimation technique assumes the existence of instrumental variables. In addition, we present a doubly-robust estimator in the sense of Robins and Rotnitzky (1992, Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell, N. P., Dietz, K. and Farewell, V. T. (editors), AIDS Epidemiology. Boston: Birkhäuser, pp. 297–331.). The asymptotic properties of the estimators are rigorously studied. Finite-sample properties are demonstrated by a simulation study. The utility of the proposed methodology is demonstrated using the Stanford heart transplant dataset. PMID:27485534

  4. Regional estimation of extreme suspended sediment concentrations using watershed characteristics

    NASA Astrophysics Data System (ADS)

    Tramblay, Yves; Ouarda, Taha B. M. J.; St-Hilaire, André; Poulin, Jimmy

    2010-01-01

    Summary The number of stations monitoring daily suspended sediment concentration (SSC) has been decreasing since the 1980s in North America, even though suspended sediment is considered a key variable for water quality. The objective of this study is to test the feasibility of regionalising extreme SSC, i.e. estimating extreme SSC values for ungauged basins. Annual maximum SSC for 72 rivers in Canada and the USA were modelled with probability distributions in order to estimate quantiles corresponding to different return periods. Regionalisation techniques, originally developed for flood prediction in ungauged basins, were tested using the climatic, topographic, land cover and soil attributes of the watersheds. Two approaches were compared, using either physiographic characteristics or the seasonality of extreme SSC to delineate the regions. Multiple regression models to estimate SSC quantiles as a function of watershed characteristics were built in each region and compared to a global model including all sites. Regional estimates of SSC quantiles were compared with the local values. Results show that regional estimation of extreme SSC is more efficient than a global regression model including all sites. Groups/regions of stations were identified, using either the watershed characteristics or the seasonality of occurrence of extreme SSC values, providing a method to better describe extreme SSC events. The most important variables for predicting extreme SSC are the percentage of clay in the soils, precipitation intensity and forest cover.
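Estimating quantiles for given return periods from a series of annual maxima can be sketched with an extreme-value fit. The example below fits a Gumbel (EV1) distribution by the method of moments on hypothetical data; the study's choice of distribution and fitting method may differ.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_quantiles(annual_max, return_periods):
    """Fit a Gumbel (EV1) distribution to annual maxima by the method of
    moments and return the quantiles for the given return periods (years)."""
    beta = np.sqrt(6.0) * np.std(annual_max, ddof=1) / np.pi   # scale
    mu = np.mean(annual_max) - EULER_GAMMA * beta              # location
    T = np.asarray(return_periods, dtype=float)
    # Quantile of the Gumbel CDF at non-exceedance probability 1 - 1/T.
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Hypothetical series of annual maximum SSC (mg/L) at one station.
rng = np.random.default_rng(2)
ssc_max = rng.gumbel(loc=300.0, scale=80.0, size=40)

q = gumbel_quantiles(ssc_max, [2, 10, 100])
print(dict(zip([2, 10, 100], q.round(1))))
```

In the regional setting described above, such at-site quantiles become the response in regressions on watershed characteristics.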

  5. Birthweight Related Factors in Northwestern Iran: Using Quantile Regression Method.

    PubMed

    Fallah, Ramazan; Kazemnejad, Anoshirvan; Zayeri, Farid; Shoghli, Alireza

    2015-11-18

    Birthweight is one of the most important predictors of health status in adulthood. Ensuring balanced birthweights is a priority of the health system in most industrialized and developed countries, and this indicator is used to assess the growth and health status of infants. The aim of this study was to assess the birthweight of neonates in Zanjan province using quantile regression. This descriptive analytical study was carried out using pre-registered (March 2010 - March 2012) data on neonates from urban/rural health centers of Zanjan province, collected by multiple-stage cluster sampling. Data were analyzed using multiple linear regression and the quantile regression method in SAS 9.2 statistical software. Of the 8456 newborns, 4146 (49%) were female. The mean age of the mothers was 27.1±5.4 years. The mean birthweight of the neonates was 3104 ± 431 grams. Five hundred and seventy-three (6.8%) of the neonates weighed less than 2500 grams. In all quantiles, gestational age of the neonates (p<0.05) and weight and educational level of the mothers (p<0.05) showed a significant linear relationship with the birthweight of the neonates. However, sex and birth rank of the neonates, mothers' age, place of residence (urban/rural) and occupation were not significant in all quantiles (p>0.05). This study revealed that the results of multiple linear regression and quantile regression were not identical. We strongly recommend the use of quantile regression when the response variable is asymmetric or the data contain outliers.

  6. Birthweight Related Factors in Northwestern Iran: Using Quantile Regression Method

    PubMed Central

    Fallah, Ramazan; Kazemnejad, Anoshirvan; Zayeri, Farid; Shoghli, Alireza

    2016-01-01

    Introduction: Birthweight is one of the most important predictors of health status in adulthood. Ensuring balanced birthweights is a priority of the health system in most industrialized and developed countries, and this indicator is used to assess the growth and health status of infants. The aim of this study was to assess the birthweight of neonates in Zanjan province using quantile regression. Methods: This descriptive analytical study was carried out using pre-registered (March 2010 - March 2012) data on neonates from urban/rural health centers of Zanjan province, collected by multiple-stage cluster sampling. Data were analyzed using multiple linear regression and the quantile regression method in SAS 9.2 statistical software. Results: Of the 8456 newborns, 4146 (49%) were female. The mean age of the mothers was 27.1±5.4 years. The mean birthweight of the neonates was 3104 ± 431 grams. Five hundred and seventy-three (6.8%) of the neonates weighed less than 2500 grams. In all quantiles, gestational age of the neonates (p<0.05) and weight and educational level of the mothers (p<0.05) showed a significant linear relationship with the birthweight of the neonates. However, sex and birth rank of the neonates, mothers' age, place of residence (urban/rural) and occupation were not significant in all quantiles (p>0.05). Conclusion: This study revealed that the results of multiple linear regression and quantile regression were not identical. We strongly recommend the use of quantile regression when the response variable is asymmetric or the data contain outliers. PMID:26925889
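The contrast drawn here between multiple linear regression and quantile regression rests on the latter minimizing the pinball (check) loss at each quantile level. A minimal numpy sketch on hypothetical heteroscedastic data, where the quantile slopes genuinely differ, is shown below; production analyses would use a dedicated solver (e.g. SAS PROC QUANTREG, as in this study, or statsmodels' QuantReg).

```python
import numpy as np

def fit_quantile(x, y, tau, lr=0.1, epochs=5000):
    """Linear quantile regression by full-batch subgradient descent on the
    pinball (check) loss. A minimal sketch; dedicated solvers based on
    linear programming are preferable in practice."""
    Xb = np.column_stack([np.ones_like(y), x])        # intercept + covariate
    beta = np.zeros(2)
    for _ in range(epochs):
        resid = y - Xb @ beta
        psi = np.where(resid > 0, tau, tau - 1.0)     # pinball subgradient
        beta += lr * (Xb.T @ psi) / len(y)
    return beta

# Hypothetical heteroscedastic data: the spread of y grows with x, so the
# slope of the tau-th conditional quantile differs across tau.
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 2.0, 4000)
y = 2.0 + 3.0 * x + (0.5 + x) * rng.normal(0.0, 1.0, 4000)

slopes = {tau: fit_quantile(x, y, tau)[1] for tau in (0.1, 0.5, 0.9)}
print({tau: round(s, 2) for tau, s in slopes.items()})
```

Under homoscedastic errors all three slopes would coincide with the mean-regression slope; their divergence here is what makes the quantile and linear-regression results "not identical".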

  7. Association between the Infant and Child Feeding Index (ICFI) and nutritional status of 6- to 35-month-old children in rural western China.

    PubMed

    Qu, Pengfei; Mi, Baibing; Wang, Duolao; Zhang, Ruo; Yang, Jiaomei; Liu, Danmeng; Dang, Shaonong; Yan, Hong

    2017-01-01

    The objective of this study was to determine the relationship between the quality of feeding practices and children's nutritional status in rural western China. A sample of 12,146 pairs of 6- to 35-month-old children and their mothers was recruited using stratified multistage cluster random sampling in rural western China. Quantile regression was used to analyze the relationship between the Infant and Child Feeding Index (ICFI) and children's nutritional status. In rural western China, 24.37% of all infants and young children suffered from malnutrition; 19.57%, 8.74% and 4.63% were classified as stunted, underweight and wasted, respectively. After adjusting for covariates, the quantile regression results suggested that a qualified ICFI (ICFI > 13.8) was associated with all length and HAZ quantiles (P<0.05) and had a greater effect on poor length and HAZ: the β-estimates (length) ranged from 0.76 cm (95% CI: 0.53 to 0.99 cm) to 0.34 cm (95% CI: 0.09 to 0.59 cm) and the β-estimates (HAZ) from 0.17 (95% CI: 0.10 to 0.24) to 0.11 (95% CI: 0.04 to 0.19). A qualified ICFI was also associated with most weight quantiles (P<0.05 except the 80th and 90th quantiles) and with poor and intermediate WAZ quantiles (P<0.05 for the 10th, 20th, 30th and 40th quantiles). Additionally, a qualified ICFI had a greater effect on poor weight and WAZ quantiles, for which the β-estimates (weight) ranged from 0.20 kg (95% CI: 0.14 to 0.26 kg) to 0.06 kg (95% CI: 0.00 to 0.12 kg) and the β-estimates (WAZ) from 0.14 (95% CI: 0.08 to 0.21) to 0.05 (95% CI: 0.01 to 0.10). Feeding practices were associated with the physical development of infants and young children, and proper feeding practices had a greater effect on poor physical development. For mothers in rural western China, proper guidelines and messaging on complementary feeding practices are necessary.

  8. Pressure Points in Reading Comprehension: A Quantile Multiple Regression Analysis

    ERIC Educational Resources Information Center

    Logan, Jessica

    2017-01-01

    The goal of this study was to examine how selected pressure points or areas of vulnerability are related to individual differences in reading comprehension and whether the importance of these pressure points varies as a function of the level of children's reading comprehension. A sample of 245 third-grade children were given an assessment battery…

  9. A Generalized Approach to the Two Sample Problem: The Quantile Approach.

    DTIC Science & Technology

    1981-04-01

    advantages in this regard as remarked in Parzen (1979) and Wilk and Gnanadesikan (1968). One explanation of its statistical virtues is the fact that Q...differences between male and female right congruence kneecap angles. Wilk and Gnanadesikan (1968) have named a plot of q versus G⁻¹[F(q)] a Q-Q plot and...function techniques. 5.3.5 Comparison Function Techniques Wilk and Gnanadesikan (1968) stimulated research in the area of probability plotting where they
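A two-sample Q-Q plot of the kind named by Wilk and Gnanadesikan pairs the p-th quantile of one sample with the p-th quantile of the other; points close to the line y = x indicate that the samples share a distribution. A small illustrative sketch:

```python
import numpy as np

def qq_points(sample_a, sample_b, n_points=99):
    """Matched quantile pairs for a two-sample Q-Q plot: the p-th quantile
    of one sample plotted against the p-th quantile of the other."""
    probs = np.linspace(0.01, 0.99, n_points)
    return np.quantile(sample_a, probs), np.quantile(sample_b, probs)

rng = np.random.default_rng(4)
a = rng.normal(10.0, 2.0, 1000)
b = rng.normal(10.0, 2.0, 1000)   # same distribution -> points near y = x

qa, qb = qq_points(a, b)
print("correlation of paired quantiles:", round(float(np.corrcoef(qa, qb)[0, 1]), 4))
```

Systematic departures from the diagonal (curvature, offset, changing slope) reveal differences in shape, location, or scale between the two samples.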

  10. Intersection of All Top Quantile

    EPA Pesticide Factsheets

    This layer combines the Top quantiles of the CES, CEVA, and EJSM layers so that viewers can see the overlap of “hot spots” for each method. This layer was created by James Sadd of Occidental College of Los Angeles

  11. Intersection of Screening Methods High Quantile

    EPA Pesticide Factsheets

    This layer combines the high quantiles of the CES, CEVA, and EJSM layers so that viewers can see the overlap of “hot spots” for each method. This layer was created by James Sadd of Occidental College of Los Angeles

  12. Influences of spatial and temporal variation on fish-habitat relationships defined by regression quantiles

    USGS Publications Warehouse

    Dunham, J.B.; Cade, B.S.; Terrell, J.W.

    2002-01-01

    We used regression quantiles to model potentially limiting relationships between the standing crop of cutthroat trout Oncorhynchus clarki and measures of stream channel morphology. Regression quantile models indicated that variation in fish density was inversely related to the width:depth ratio of streams but not to stream width or depth alone. The temporal and spatial stability of model predictions was examined across years and streams, respectively. Variation in fish density with width:depth ratio (10th-90th regression quantiles) modeled for streams sampled in 1993-1997 predicted the variation observed in 1998-1999, indicating similar habitat relationships across years. Both linear and nonlinear models described the limiting relationships well, the latter performing slightly better. Although estimated relationships were transferable in time, results were strongly dependent on the influence of spatial variation in fish density among streams. Density changes with width:depth ratio in a single stream were responsible for the significant (P < 0.10) negative slopes estimated for the higher quantiles (>80th). This suggests that stream-scale factors other than width:depth ratio play a more direct role in determining population density. Much of the variation in densities of cutthroat trout among streams was attributed to the occurrence of nonnative brook trout Salvelinus fontinalis (a possible competitor) or connectivity to migratory habitats. Regression quantiles can be useful for estimating the effects of limiting factors when ecological responses are highly variable, but our results indicate that spatiotemporal variability in the data should be explicitly considered. In this study, data from individual streams and stream-specific characteristics (e.g., the occurrence of nonnative species and habitat connectivity) strongly affected our interpretation of the relationship between width:depth ratio and fish density.

  13. Early Warning Signals of Financial Crises with Multi-Scale Quantile Regressions of Log-Periodic Power Law Singularities.

    PubMed

    Zhang, Qun; Zhang, Qunzhi; Sornette, Didier

    2016-01-01

    We augment the existing literature using the Log-Periodic Power Law Singular (LPPLS) structures in the log-price dynamics to diagnose financial bubbles by providing three main innovations. First, we introduce the quantile regression to the LPPLS detection problem. This allows us to disentangle (at least partially) the genuine LPPLS signal and the a priori unknown complicated residuals. Second, we propose to combine the many quantile regressions with a multi-scale analysis, which aggregates and consolidates the obtained ensembles of scenarios. Third, we define and implement the so-called DS LPPLS Confidence™ and Trust™ indicators that enrich considerably the diagnostic of bubbles. Using a detailed study of the "S&P 500 1987" bubble and presenting analyses of 16 historical bubbles, we show that the quantile regression of LPPLS signals contributes useful early warning signals. The comparison between the constructed signals and the price development in these 16 historical bubbles demonstrates their significant predictive ability around the real critical time when the burst/rally occurs.

  14. Quantile regression and clustering analysis of standardized precipitation index in the Tarim River Basin, Xinjiang, China

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Xia, Jun; Zhang, Yongyong; Han, Jian; Wu, Xia

    2017-11-01

    Because drought is a very common and widespread natural disaster, it has attracted a great deal of academic interest. Based on 12-month time scale standardized precipitation indices (SPI12) calculated from precipitation data recorded between 1960 and 2015 at 22 weather stations in the Tarim River Basin (TRB), this study aims to identify the trends of SPI and of drought duration, severity, and frequency at various quantiles, and to perform cluster analysis of drought events in the TRB. The results indicated that (1) both precipitation and temperature at most stations in the TRB exhibited significant positive trends during 1960-2015; (2) multiple scales of SPIs changed significantly around 1986; (3) based on quantile regression analysis of temporal drought changes, the positive SPI slopes indicated less severe and less frequent droughts at lower quantiles, but clear variation was detected in the drought frequency; and (4) trends in the frequency of severe droughts differed significantly from trends in overall drought frequency.

  15. Quantile regression for the statistical analysis of immunological data with many non-detects.

    PubMed

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an application to real data from a clinical trial. We show that by using quantile regression, groups can be compared and meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
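The reason quantile methods tolerate many non-detects is that a quantile above the censoring fraction depends only on the ordering of the observations, not on the exact values below the detection limit. A small sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(5)
true_vals = rng.lognormal(mean=1.0, sigma=1.0, size=1000)  # hypothetical titres
lod = np.quantile(true_vals, 0.40)                         # ~40% non-detects

# Substitute non-detects with an arbitrary value below the detection limit.
observed = np.where(true_vals < lod, lod / 2.0, true_vals)

# Quantiles above the censoring fraction are unaffected by the substitution:
print("median, true vs observed:",
      np.quantile(true_vals, 0.5), np.quantile(observed, 0.5))
# ...whereas quantiles below the censoring fraction are distorted:
print("25th,   true vs observed:",
      np.quantile(true_vals, 0.25), np.quantile(observed, 0.25))
```

The same invariance carries over to quantile regression: the fitted median line is unchanged by how non-detects are coded, provided fewer than half of the observations are censored at each covariate value.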

  16. Probabilistic forecasting for extreme NO2 pollution episodes.

    PubMed

    Aznarte, José L

    2017-10-01

    In this study, we investigate the suitability of quantile regression for predicting extreme concentrations of NO2. In contrast to usual point forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows prediction of the full probability distribution, which in turn makes it possible to build models specifically fit for the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measurements, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we show that the predictions are accurate, reliable and sharp. In addition, we study the relative importance of the independent variables involved, and show how the variables important for the median quantile differ from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceeding thresholds, which is a simple and comprehensible way to present probabilistic forecasts while maximizing their usefulness. Copyright © 2017 Elsevier Ltd. All rights reserved.
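Given a forecast expressed as a grid of predicted quantiles, the probability of exceeding a threshold can be obtained by inverting the piecewise-linear predictive quantile function. The sketch below uses hypothetical NO2 quantiles; the paper's exact construction may differ.

```python
import numpy as np

def exceedance_probability(quantile_levels, quantile_values, threshold):
    """P(variable > threshold) read off a probabilistic forecast given as a
    grid of predicted quantiles, by linearly interpolating the predictive
    CDF (clamped to 0/1 outside the quantile grid)."""
    p = np.interp(threshold, quantile_values, quantile_levels,
                  left=0.0, right=1.0)   # CDF value at the threshold
    return 1.0 - p

levels = np.linspace(0.05, 0.95, 19)                     # quantile levels
values = 120.0 + 60.0 * np.log(levels / (1.0 - levels))  # hypothetical NO2 quantiles (ug/m3)

for thr in (100.0, 180.0):
    print(f"P(NO2 > {thr:.0f}) =", round(exceedance_probability(levels, values, thr), 3))
```

Reporting such exceedance probabilities for regulatory thresholds is the "simple and comprehensible" presentation of a probabilistic forecast that the abstract describes.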

  17. A perturbation approach for assessing trends in precipitation extremes across Iran

    NASA Astrophysics Data System (ADS)

    Tabari, Hossein; AghaKouchak, Amir; Willems, Patrick

    2014-11-01

    Extreme precipitation events have attracted a great deal of attention among the scientific community because of their devastating consequences on human livelihood and socio-economic development. To assess changes in precipitation extremes in a given region, it is essential to analyze decadal oscillations in precipitation extremes. This study examines temporal oscillations in precipitation data in several sub-regions of Iran using a novel quantile perturbation method during 1980-2010. Precipitation data from NASA's Modern-Era Retrospective Analysis for Research and Applications-Land (MERRA-Land) are used in this study. The results indicate significant anomalies in precipitation extremes in the northwest and southeast regions of Iran. Analysis of extreme precipitation perturbations reveals that perturbations for the monthly aggregation level are generally lower than the annual perturbations. Furthermore, high-oscillation and low-oscillation periods are found in extreme precipitation quantiles across different seasons. In all selected regions, a significant anomaly (i.e., extreme wet/dry conditions) in precipitation extremes is observed during spring.

  18. Comparison of different hydrological similarity measures to estimate flow quantiles

    NASA Astrophysics Data System (ADS)

    Rianna, M.; Ridolfi, E.; Napolitano, F.

    2017-07-01

    This paper aims to evaluate the influence of hydrological similarity measures on the definition of homogeneous regions. To this end, several attribute sets have been analyzed in the context of the Region of Influence (ROI) procedure. Several combinations of geomorphological, climatological, and geographical characteristics are also used to cluster potentially homogeneous regions. To verify the goodness of the resulting pooled sites, homogeneity tests are carried out. Through a Monte Carlo simulation and a jack-knife procedure, flow quantiles are estimated for the regions that effectively result as homogeneous. The analysis is performed in both the so-called gauged and ungauged scenarios to analyze the effect of hydrological similarity measures on flow quantile estimation.

  19. Quantile regression in the presence of monotone missingness with sensitivity analysis

    PubMed Central

    Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.

    2016-01-01

    In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008

  20. Quantile Regression with Censored Data

    ERIC Educational Resources Information Center

    Lin, Guixian

    2009-01-01

    The Cox proportional hazards model and the accelerated failure time model are frequently used in survival data analysis. They are powerful, yet have limitation due to their model assumptions. Quantile regression offers a semiparametric approach to model data with possible heterogeneity. It is particularly powerful for censored responses, where the…

  1. Measuring disparities across the distribution of mental health care expenditures.

    PubMed

    Le Cook, Benjamin; Manning, Willard; Alegria, Margarita

    2013-03-01

    Previous mental health care disparities studies predominantly compare mean mental health care use across racial/ethnic groups, leaving policymakers with little information on disparities among those with higher levels of expenditures. Our aims were to identify racial/ethnic disparities among individuals at varying quantiles of mental health care expenditures, and to assess whether disparities in the upper quantiles of expenditure differ by insurance status, income and education. Data were analyzed from a nationally representative sample of white, black and Latino adults 18 years and older (n=83,878). Our dependent variable was total mental health care expenditure. We measured disparities in any mental health care expenditures, disparities in mental health care expenditure at the 95th, 97.5th, and 99th expenditure quantiles of the full population using quantile regression, and at the 50th, 75th, and 95th quantiles for positive users. In the full population, we tested interaction coefficients between race/ethnicity and income, insurance, and education levels to determine whether racial/ethnic disparities in the upper quantiles differed by income, insurance and education. Significant Black-White and Latino-White disparities were identified in any mental health care expenditures. In the full population, moving up the quantiles of mental health care expenditures, Black-White and Latino-White disparities were reduced but remained statistically significant. No statistically significant disparities were found in analyses of positive users only. The magnitude of Black-White disparities was smaller among those enrolled in public insurance programs compared to the privately insured and uninsured in the 97.5th and 99th quantiles. Disparities persist in the upper quantiles among those in higher income categories and after excluding psychiatric inpatient and emergency department (ED) visits.
    Disparities exist in any mental health care and among those who use the most mental health care resources, but much of the disparity seems to be driven by lack of access. The data do not allow us to disentangle whether disparities were related to white respondents' overuse or underuse compared with minority groups. The cross-sectional data allow us to make only associational claims about the role of insurance, income, and education in disparities. With these limitations in mind, we identified a persistence of disparities in overall expenditures even among those in the highest income categories, after controlling for mental health status and observable sociodemographic characteristics. Interventions are needed to equalize resource allocation to racial/ethnic minority patients regardless of their income, with emphasis on outreach interventions to address the disparities in access that are responsible for no/low expenditures even for Latinos at higher levels of illness severity. Increased policy efforts are needed to reduce the gap in health insurance for Latinos and to improve outreach programs to enroll those in need into mental health care services. Future studies that conclusively disentangle overuse and appropriate use in these populations are warranted.

  2. Variability in reaction time performance of younger and older adults.

    PubMed

    Hultsch, David F; MacDonald, Stuart W S; Dixon, Roger A

    2002-03-01

    Age differences in three basic types of variability were examined: variability between persons (diversity), variability within persons across tasks (dispersion), and variability within persons across time (inconsistency). Measures of variability were based on latency performance from four measures of reaction time (RT) performed by a total of 99 younger adults (ages 17-36 years) and 763 older adults (ages 54-94 years). Results indicated that all three types of variability were greater in older compared with younger participants even when group differences in speed were statistically controlled. Quantile-quantile plots showed age and task differences in the shape of the inconsistency distributions. Measures of within-person variability (dispersion and inconsistency) were positively correlated. Individual differences in RT inconsistency correlated negatively with level of performance on measures of perceptual speed, working memory, episodic memory, and crystallized abilities. Partial set correlation analyses indicated that inconsistency predicted cognitive performance independent of level of performance. The results indicate that variability of performance is an important indicator of cognitive functioning and aging.

  3. Estimation of peak discharge quantiles for selected annual exceedance probabilities in Northeastern Illinois.

    DOT National Transportation Integrated Search

    2016-06-01

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively).
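The recurrence intervals quoted in the report are simply the reciprocals of the annual exceedance probabilities:

```python
# Recurrence interval (years) is the reciprocal of the annual exceedance probability.
aeps = [0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, 0.002]
recurrence = [round(1.0 / p) for p in aeps]
print(recurrence)  # [2, 5, 10, 25, 50, 100, 200, 500]
```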

  4. Quantile Regression in the Study of Developmental Sciences

    ERIC Educational Resources Information Center

    Petscher, Yaacov; Logan, Jessica A. R.

    2014-01-01

    Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome, but across multiple points of…

  5. Principles of Quantile Regression and an Application

    ERIC Educational Resources Information Center

    Chen, Fang; Chalhoub-Deville, Micheline

    2014-01-01

    Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…

  6. Hospital charges associated with motorcycle crash factors: a quantile regression analysis.

    PubMed

    Olsen, Cody S; Thomas, Andrea M; Cook, Lawrence J

    2014-08-01

    Previous studies of motorcycle crash (MC) related hospital charges use trauma registries and hospital records, and do not adjust for the number of motorcyclists not requiring medical attention. This may lead to conservative estimates of helmet use effectiveness. MC records were probabilistically linked with emergency department and hospital records to obtain total hospital charges. Missing data were imputed. Multivariable quantile regression estimated reductions in hospital charges associated with helmet use and other crash factors. Motorcycle helmets were associated with reduced median hospital charges of $256 (42% reduction) and reduced 98th percentile of $32,390 (33% reduction). After adjusting for other factors, helmets were associated with reductions in charges in all upper percentiles studied. Quantile regression models described homogenous and heterogeneous associations between other crash factors and charges. Quantile regression comprehensively describes associations between crash factors and hospital charges. Helmet use among motorcyclists is associated with decreased hospital charges. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  7. Early Warning Signals of Financial Crises with Multi-Scale Quantile Regressions of Log-Periodic Power Law Singularities

    PubMed Central

    Zhang, Qun; Zhang, Qunzhi; Sornette, Didier

    2016-01-01

    We augment the existing literature using the Log-Periodic Power Law Singular (LPPLS) structures in the log-price dynamics to diagnose financial bubbles by providing three main innovations. First, we introduce the quantile regression to the LPPLS detection problem. This allows us to disentangle (at least partially) the genuine LPPLS signal and the a priori unknown complicated residuals. Second, we propose to combine the many quantile regressions with a multi-scale analysis, which aggregates and consolidates the obtained ensembles of scenarios. Third, we define and implement the so-called DS LPPLS Confidence™ and Trust™ indicators that enrich considerably the diagnostic of bubbles. Using a detailed study of the “S&P 500 1987” bubble and presenting analyses of 16 historical bubbles, we show that the quantile regression of LPPLS signals contributes useful early warning signals. The comparison between the constructed signals and the price development in these 16 historical bubbles demonstrates their significant predictive ability around the real critical time when the burst/rally occurs. PMID:27806093

  8. Impact of climate change on Gironde Estuary

    NASA Astrophysics Data System (ADS)

    Laborie, Vanessya; Hissel, François; Sergent, Philippe

    2014-05-01

    Within the THESEUS European project, a simplified mathematical model for storm surge levels in the Bay of Biscay was adjusted on 10 events at Le Verdon using wind and pressure fields from CLM/SGA, so that the water levels at Le Verdon have the same statistical quantiles as the observed tide records for the period [1960-2000]. The analysis of future storm surge levels shows a decrease in their quantiles at Le Verdon, whereas there is an increase in the quantiles of total water levels. This increase is smaller than the sea level rise and gets even smaller as one moves farther upstream in the estuary. A numerical model of the Gironde Estuary was then used to evaluate future water levels at 6 locations of the estuary from Le Verdon to Bordeaux and to assess the changes in the quantiles of water levels during the 21st century using ONERC's pessimistic scenario for sea level rise (60 cm). The model was fed by several data sources: wind fields at Royan and Mérignac interpolated from the grid of the European climate model CLM/SGA, a tide signal at Le Verdon, and the discharges of the Garonne (at La Réole), the Dordogne (at Pessac) and the Isle (at Libourne). A series of flood maps for different return periods between 2 and 100 years and for four time periods ([1960-1999], [2010-2039], [2040-2069] and [2070-2099]) have been built for the region of Bordeaux. Quantiles of water levels in the floodplain have also been calculated. The impact of climate change on the evolution of flooded areas in the Gironde Estuary and on quantiles of water levels in the floodplain mainly depends on the sea level rise. Areas which are not currently flooded for low return periods will be inundated in 2100. The influence of river discharges and dike breaching should also be taken into account for more accurate results.

  9. Comparing least-squares and quantile regression approaches to analyzing median hospital charges.

    PubMed

    Olsen, Cody S; Clark, Amy E; Thomas, Andrea M; Cook, Lawrence J

    2012-07-01

    Emergency department (ED) and hospital charges obtained from administrative data sets are useful descriptors of injury severity and the burden to EDs and the health care system. However, charges are typically positively skewed due to costly procedures, long hospital stays, and complicated or prolonged treatment for a few patients. The median is not affected by extreme observations and is useful in describing and comparing distributions of hospital charges. A least-squares analysis employing a log transformation is one approach for estimating median hospital charges, corresponding confidence intervals (CIs), and differences between groups; however, this method requires certain distributional properties. An alternative method is quantile regression, which allows estimation and inference related to the median without making distributional assumptions. The objective was to compare the log-transformation least-squares method to the quantile regression approach for estimating median hospital charges, differences in median charges between groups, and associated CIs. The authors performed simulations using repeated sampling of observed statewide ED and hospital charges and charges randomly generated from a hypothetical lognormal distribution. The median and 95% CI and the multiplicative difference between the median charges of two groups were estimated using both least-squares and quantile regression methods. Performance of the two methods was evaluated. In contrast to least squares, quantile regression produced estimates that were unbiased and had smaller mean square errors in simulations of observed ED and hospital charges. Both methods performed well in simulations of hypothetical charges that met least-squares method assumptions. When the data did not follow the assumed distribution, least-squares estimates were often biased, and the associated CIs had lower than expected coverage as sample size increased. 
Quantile regression analyses of hospital charges provide unbiased estimates even when lognormal and equal variance assumptions are violated. These methods may be particularly useful in describing and analyzing hospital charges from administrative data sets. © 2012 by the Society for Academic Emergency Medicine.

  10. Gender difference in the association between food away-from-home consumption and body weight outcomes among Chinese adults.

    PubMed

    Du, Wen-Wen; Zhang, Bing; Wang, Hui-Jun; Wang, Zhi-Hong; Su, Chang; Zhang, Ji-Guo; Zhang, Ji; Jia, Xiao-Fang; Jiang, Hong-Ru

    2016-11-01

    The present study aimed to explore the associations between food away-from-home (FAFH) consumption and body weight outcomes among Chinese adults. FAFH was defined as food prepared at restaurants and the percentage of energy from FAFH was calculated. Measured BMI and waist circumference (WC) were used as body weight outcomes. Quantile regression models for BMI and WC were performed separately by gender. Information on demographic, socio-economic, diet and health parameters at individual, household and community levels was collected in twelve provinces of China. A cross-sectional sample of 7738 non-pregnant individuals aged 18-60 years from the China Health and Nutrition Survey 2011 was analysed. For males, quantile regression models showed that percentage of energy from FAFH was associated with an increase in BMI of 0·01, 0·01, 0·01, 0·02, 0·02 and 0·03 kg/m2 at the 5th, 25th, 50th, 75th, 90th and 95th quantile, and an increase in WC of 0·04, 0·06, 0·06, 0·04, 0·06, 0·05 and 0·07 cm at the 5th, 10th, 25th, 50th, 75th, 90th and 95th quantile. For females, percentage of energy from FAFH was associated with a 0·01, 0·01, 0·01 and 0·02 kg/m2 increase in BMI at the 10th, 25th, 90th and 95th quantile, and with a 0·05, 0·04, 0·03 and 0·03 cm increase in WC at the 5th, 10th, 25th and 75th quantile. Our findings suggest that FAFH consumption is more strongly associated with BMI and WC among males than among females in China. Public health initiatives are needed to encourage Chinese adults to make healthy food choices when eating out.

  11. Trends of VOC exposures among a nationally representative sample: Analysis of the NHANES 1988 through 2004 data sets

    NASA Astrophysics Data System (ADS)

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2011-09-01

    Exposures to volatile organic compounds (VOCs) are ubiquitous due to emissions from personal, commercial and industrial products, but quantitative and representative information regarding long term exposure trends is lacking. This study characterizes trends from 1988 to 2004 for the 15 VOCs measured in blood in five cohorts of the National Health and Nutrition Examination Survey (NHANES), a large and representative sample of U.S. adults. Trends were evaluated at various percentiles using linear quantile regression (QR) models, which were adjusted for solvent-related occupations and cotinine levels. Most VOCs showed decreasing trends at all quantiles, e.g., median exposures declined by 2.5 (m,p-xylene) to 6.4 (tetrachloroethene) percent per year over the 15 year period. Trends varied by VOC and quantile, and were grouped into three patterns: similar decreases at all quantiles (including benzene, toluene); most rapid decreases at upper quantiles (ethylbenzene, m,p-xylene, o-xylene, styrene, chloroform, tetrachloroethene); and fastest declines at central quantiles (1,4-dichlorobenzene). These patterns reflect changes in exposure sources, e.g., upper-percentile exposures may result mostly from occupational exposure, while lower percentile exposures arise from general environmental sources. Both VOC emissions aggregated at the national level and VOC concentrations measured in ambient air also have declined substantially over the study period and are supportive of the exposure trends, although the NHANES data suggest the importance of indoor sources and personal activities on VOC exposures. 
Piecewise QR models suggest that exposures to several VOCs decreased little, if at all, during the 1990s and then declined more rapidly from 1999 to 2004. However, the modest correlation between VOC levels in blood and in personal air collected in the 1999/2000 cohort raises questions about the reliability of the VOC data in several of the NHANES cohorts and their applicability as exposure indicators. Despite some limitations, the NHANES data provide a unique, long-term and direct measurement of VOC exposures and trends.

  12. Predictors of High Profit and High Deficit Outliers under SwissDRG of a Tertiary Care Center

    PubMed Central

    Mehra, Tarun; Müller, Christian Thomas Benedikt; Volbracht, Jörk; Seifert, Burkhardt; Moos, Rudolf

    2015-01-01

    Principles Case weights of Diagnosis Related Groups (DRGs) are determined by the average cost of cases from a previous billing period. However, a significant number of cases are largely over- or underfunded. We therefore decided to analyze earning outliers of our hospital to search for predictors that would enable better grouping under SwissDRG. Methods 28,893 inpatient cases without additional private insurance discharged from our hospital in 2012 were included in our analysis. Outliers were defined by the interquartile range method. Predictors for deficit and profit outliers were determined with logistic regressions. Predictors were shortlisted with the LASSO regularized logistic regression method and compared to results of Random forest analysis. 10 of these parameters were selected for quantile regression analysis to quantify their impact on earnings. Results Psychiatric diagnosis and admission as an emergency case were significant predictors for higher deficit with negative regression coefficients for all analyzed quantiles (p<0.001). Admission from an external health care provider was a significant predictor for a higher deficit in all but the 90% quantile (p<0.001 for Q10, Q20, Q50, Q80 and p = 0.0017 for Q90). Burns predicted higher earnings for cases which were favorably remunerated (p<0.001 for the 90% quantile). Osteoporosis predicted a higher deficit in the most underfunded cases, but did not predict differences in earnings for balanced or profitable cases (Q10 and Q20: p<0.00, Q50: p = 0.10, Q80: p = 0.88 and Q90: p = 0.52). ICU stay, mechanical and patient clinical complexity level score (PCCL) predicted higher losses at the 10% quantile but also higher profits at the 90% quantile (p<0.001). Conclusion We suggest considering psychiatric diagnosis, admission as an emergency case and admission from an external health care provider as DRG split criteria as they predict large, consistent and significant losses. PMID:26517545

  13. Relationship between Urbanization and Cancer Incidence in Iran Using Quantile Regression.

    PubMed

    Momenyan, Somayeh; Sadeghifar, Majid; Sarvi, Fatemeh; Khodadost, Mahmoud; Mosavi-Jarrahi, Alireza; Ghaffari, Mohammad Ebrahim; Sekhavati, Eghbal

    2016-01-01

    Quantile regression is an efficient method for predicting and estimating the relationship between explanatory variables and percentile points of the response distribution, particularly for extreme percentiles of the distribution. To study the relationship between urbanization and cancer morbidity, we here applied quantile regression. This cross-sectional study was conducted for 9 cancers in 345 cities in 2007 in Iran. Data were obtained from the Ministry of Health and Medical Education and the relationship between urbanization and cancer morbidity was investigated using quantile regression and least squares regression. Fitted models were compared using the AIC criterion. R (3.0.1) software and the Quantreg package were used for statistical analysis. With the quantile regression model, all percentiles for breast, colorectal, prostate, lung and pancreas cancers demonstrated increasing incidence rates with urbanization. The maximum increase for breast cancer was in the 90th percentile (β=0.13, p-value<0.001), for colorectal cancer in the 75th percentile (β=0.048, p-value<0.001), for prostate cancer in the 95th percentile (β=0.55, p-value<0.001), for lung cancer in the 95th percentile (β=0.52, p-value=0.006), and for pancreas cancer in the 10th percentile (β=0.011, p-value<0.001). For gastric, esophageal and skin cancers, the incidence rate decreased with increasing urbanization. The maximum decrease for gastric cancer was in the 90th percentile (β=0.003, p-value<0.001), for esophageal cancer in the 95th (β=0.04, p-value=0.4) and for skin cancer also in the 95th (β=0.145, p-value=0.071). The AIC showed that for upper percentiles, the fit of quantile regression was better than that of least squares regression. 
According to the results of this study, the significant impact of urbanization on cancer morbidity requires more effort and planning by policymakers and administrators in order to reduce risk factors such as pollution in urban areas and ensure proper nutrition recommendations are made.

  14. Predictors of High Profit and High Deficit Outliers under SwissDRG of a Tertiary Care Center.

    PubMed

    Mehra, Tarun; Müller, Christian Thomas Benedikt; Volbracht, Jörk; Seifert, Burkhardt; Moos, Rudolf

    2015-01-01

    Case weights of Diagnosis Related Groups (DRGs) are determined by the average cost of cases from a previous billing period. However, a significant number of cases are largely over- or underfunded. We therefore decided to analyze earning outliers of our hospital to search for predictors that would enable better grouping under SwissDRG. 28,893 inpatient cases without additional private insurance discharged from our hospital in 2012 were included in our analysis. Outliers were defined by the interquartile range method. Predictors for deficit and profit outliers were determined with logistic regressions. Predictors were shortlisted with the LASSO regularized logistic regression method and compared to results of Random forest analysis. 10 of these parameters were selected for quantile regression analysis to quantify their impact on earnings. Psychiatric diagnosis and admission as an emergency case were significant predictors for higher deficit with negative regression coefficients for all analyzed quantiles (p<0.001). Admission from an external health care provider was a significant predictor for a higher deficit in all but the 90% quantile (p<0.001 for Q10, Q20, Q50, Q80 and p = 0.0017 for Q90). Burns predicted higher earnings for cases which were favorably remunerated (p<0.001 for the 90% quantile). Osteoporosis predicted a higher deficit in the most underfunded cases, but did not predict differences in earnings for balanced or profitable cases (Q10 and Q20: p<0.00, Q50: p = 0.10, Q80: p = 0.88 and Q90: p = 0.52). ICU stay, mechanical and patient clinical complexity level score (PCCL) predicted higher losses at the 10% quantile but also higher profits at the 90% quantile (p<0.001). We suggest considering psychiatric diagnosis, admission as an emergency case and admission from an external health care provider as DRG split criteria as they predict large, consistent and significant losses.

  15. Association Between Awareness of Hypertension and Health-Related Quality of Life in a Cross-Sectional Population-Based Study in Rural Area of Northwest China.

    PubMed

    Mi, Baibing; Dang, Shaonong; Li, Qiang; Zhao, Yaling; Yang, Ruihai; Wang, Duolao; Yan, Hong

    2015-07-01

    Hypertensive patients have more complex health care needs and are more likely to have poorer health-related quality of life than normotensive people. Awareness of hypertension could be related to reduced health-related quality of life. We propose the use of quantile regression to explore more detailed relationships between awareness of hypertension and health-related quality of life. In a cross-sectional, population-based study, 2737 participants (including 1035 hypertensive patients and 1702 normotensive participants) completed the Short-Form Health Survey. A quantile regression model was employed to investigate the association of physical component summary scores and mental component summary scores with awareness of hypertension and to evaluate the associated factors. Patients who were aware of hypertension (N = 554) had lower scores than patients who were unaware of hypertension (N = 481). The median (IQR) of physical component summary scores: 48.20 (13.88) versus 53.27 (10.79), P < 0.01; the mental component summary scores: 50.68 (15.09) versus 51.70 (10.65), P = 0.03. After adjusting for covariates, the quantile regression results suggest that awareness of hypertension was associated with most physical component summary score quantiles (P < 0.05 except at the 10th and 20th quantiles), with β-estimates ranging from -2.14 (95% CI: -3.80 to -0.48) to -1.45 (95% CI: -2.42 to -0.47); a similar significant trend held for some of the poorer mental component summary score quantiles, with β-estimates ranging from -3.47 (95% CI: -6.65 to -0.39) to -2.18 (95% CI: -4.30 to -0.06). Awareness of hypertension had a greater effect on those with intermediate physical component summary status: the β-estimate was -2.04 (95% CI: -3.51 to -0.57, P < 0.05) at the 40th quantile and attenuated to -1.45 (95% CI: -2.42 to -0.47, P < 0.01) at the 90th quantile. 
Awareness of hypertension was negatively related to health-related quality of life in hypertensive patients in rural western China, with a greater effect on mental component summary scores among those with poorer status and on physical component summary scores among those with intermediate status.

  16. Measuring Disparities across the Distribution of Mental Health Care Expenditures

    PubMed Central

    Cook, Benjamin Lê; Manning, Willard; Alegría, Margarita

    2013-01-01

    Background Previous mental health care disparities studies predominantly compare mean mental health care use across racial/ethnic groups, leaving policymakers with little information on disparities among those with a higher level of expenditures. Aims of the Study To identify racial/ethnic disparities among individuals at varying quantiles of mental health care expenditures. To assess whether disparities in the upper quantiles of expenditure differ by insurance status, income and education. Methods Data were analyzed from a nationally representative sample of white, black and Latino adults 18 years and older (n=83,878). Our dependent variable was total mental health care expenditure. We measured disparities in any mental health care expenditures, disparities in mental health care expenditure at the 95th, 97.5th, and 99th expenditure quantiles of the full population using quantile regression, and at the 50th, 75th, and 95th quantiles for positive users. In the full population, we tested interaction coefficients between race/ethnicity and income, insurance, and education levels to determine whether racial/ethnic disparities in the upper quantiles differed by income, insurance and education. Results Significant Black-white and Latino-white disparities were identified in any mental health care expenditures. In the full population, moving up the quantiles of mental health care expenditures, Black-White and Latino-White disparities were reduced but remained statistically significant. No statistically significant disparities were found in analyses of positive users only. The magnitude of black-white disparities was smaller among those enrolled in public insurance programs compared to the privately insured and uninsured in the 97.5th and 99th quantiles. Disparities persist in the upper quantiles among those in higher income categories and after excluding psychiatric inpatient and emergency department (ED) visits. 
Discussion Disparities exist in any mental health care and among those that use the most mental health care resources, but much of the disparity seems to be driven by lack of access. The data do not allow us to disentangle whether disparities were related to white respondents’ overuse or underuse as compared to minority groups. The cross-sectional data allow us to make only associational claims about the role of insurance, income, and education in disparities. With these limitations in mind, we identified a persistence of disparities in overall expenditures even among those in the highest income categories, after controlling for mental health status and observable sociodemographic characteristics. Implications for Health Care Provision and Use Interventions are needed to equalize resource allocation to racial/ethnic minority patients regardless of their income, with emphasis on outreach interventions to address the disparities in access that are responsible for no or low expenditures even among Latinos at higher levels of illness severity. Implications for Health Policies Increased policy efforts are needed to reduce the gap in health insurance for Latinos and improve outreach programs to enroll those in need into mental health care services. Implications for Further Research Future studies that conclusively disentangle overuse and appropriate use in these populations are warranted. PMID:23676411

  17. Seasonal effects of wind conditions on migration patterns of soaring American white pelican.

    PubMed

    Gutierrez Illan, Javier; Wang, Guiming; Cunningham, Fred L; King, D Tommy

    2017-01-01

    Energy and time expenditures are determinants of bird migration strategies. Soaring birds have developed migration strategies to minimize these costs, optimizing the use of all the available resources to facilitate their displacement. We analysed the effects of different wind factors (tailwind, turbulence, vertical updrafts) on the migratory flying strategies adopted by 24 satellite-tracked American white pelicans (Pelecanus erythrorhynchos) throughout spring and autumn in North America. We hypothesize that different wind conditions encountered along migration routes between spring and autumn induce pelicans to adopt different flying strategies and use of these wind resources. Using quantile regression and fine-scale atmospheric data, we found that the pelicans optimized the use of available wind resources, flying faster and more direct routes in spring than in autumn. They actively selected tailwinds in both spring and autumn displacements but relied on available updrafts predominantly in their spring migration, when they needed to arrive at the breeding regions. These effects varied depending on the flying speed of the pelicans. We found significant directional correlations between the pelican migration flights and wind direction. In light of our results, we suggest plasticity of migratory flight strategies by pelicans is likely to enhance their ability to cope with the effects of ongoing climate change and the alteration of wind regimes. Here, we also demonstrate the usefulness and applicability of quantile regression techniques to investigate complex ecological processes such as variable effects of atmospheric conditions on soaring migration.

  18. Seasonal effects of wind conditions on migration patterns of soaring American white pelican

    PubMed Central

    Wang, Guiming; Cunningham, Fred L.; King, D. Tommy

    2017-01-01

    Energy and time expenditures are determinants of bird migration strategies. Soaring birds have developed migration strategies to minimize these costs, optimizing the use of all the available resources to facilitate their displacement. We analysed the effects of different wind factors (tailwind, turbulence, vertical updrafts) on the migratory flying strategies adopted by 24 satellite-tracked American white pelicans (Pelecanus erythrorhynchos) throughout spring and autumn in North America. We hypothesize that different wind conditions encountered along migration routes between spring and autumn induce pelicans to adopt different flying strategies and use of these wind resources. Using quantile regression and fine-scale atmospheric data, we found that the pelicans optimized the use of available wind resources, flying faster and more direct routes in spring than in autumn. They actively selected tailwinds in both spring and autumn displacements but relied on available updrafts predominantly in their spring migration, when they needed to arrive at the breeding regions. These effects varied depending on the flying speed of the pelicans. We found significant directional correlations between the pelican migration flights and wind direction. In light of our results, we suggest plasticity of migratory flight strategies by pelicans is likely to enhance their ability to cope with the effects of ongoing climate change and the alteration of wind regimes. Here, we also demonstrate the usefulness and applicability of quantile regression techniques to investigate complex ecological processes such as variable effects of atmospheric conditions on soaring migration. PMID:29065188

  19. Determinants of Academic Attainment in the United States: A Quantile Regression Analysis of Test Scores

    ERIC Educational Resources Information Center

    Haile, Getinet Astatike; Nguyen, Anh Ngoc

    2008-01-01

    We investigate the determinants of high school students' academic attainment in mathematics, reading and science in the United States; focusing particularly on possible differential impacts of ethnicity and family background across the distribution of test scores. Using data from the NELS2000 and employing quantile regression, we find two…

  20. Longitudinal analysis of the strengths and difficulties questionnaire scores of the Millennium Cohort Study children in England using M-quantile random-effects regression.

    PubMed

    Tzavidis, Nikos; Salvati, Nicola; Schmid, Timo; Flouri, Eirini; Midouhas, Emily

    2016-02-01

    Multilevel modelling is a popular approach for longitudinal data analysis. Statistical models conventionally target a parameter at the centre of a distribution. However, when the distribution of the data is asymmetric, modelling other location parameters, e.g. percentiles, may be more informative. We present a new approach, M-quantile random-effects regression, for modelling multilevel data. The proposed method is used for modelling location parameters of the distribution of the strengths and difficulties questionnaire scores of children in England who participate in the Millennium Cohort Study. Quantile mixed models are also considered. The analyses offer insights to child psychologists about the differential effects of risk factors on children's outcomes.

  1. Heterogeneous effects of oil shocks on exchange rates: evidence from a quantile regression approach.

    PubMed

    Su, Xianfang; Zhu, Huiming; You, Wanhai; Ren, Yinghua

    2016-01-01

    The determinants of exchange rates have attracted considerable attention among researchers over the past several decades. Most studies, however, ignore the possibility that the impact of oil shocks on exchange rates could vary across the exchange rate returns distribution. We employ a quantile regression approach to address this issue. Our results indicate that the effect of oil shocks on exchange rates is heterogeneous across quantiles. A large US depreciation or appreciation tends to heighten the effects of oil shocks on exchange rate returns. Positive oil demand shocks lead to appreciation pressures in oil-exporting countries and this result is robust across lower and upper return distributions. These results offer rich and useful information for investors and decision-makers.

  2. Association of Perceived Stress with Stressful Life Events, Lifestyle and Sociodemographic Factors: A Large-Scale Community-Based Study Using Logistic Quantile Regression

    PubMed Central

    Feizi, Awat; Aliyari, Roqayeh; Roohafza, Hamidreza

    2012-01-01

    Objective. The present paper aimed at investigating the association between perceived stress and major life event stressors in the Iranian general population. Methods. In a cross-sectional, large-scale, community-based study, 4583 people aged 19 and older, living in Isfahan, Iran, were investigated. Logistic quantile regression was used for modeling perceived stress, measured by the GHQ questionnaire, as the bounded outcome (dependent) variable and as a function of the most important stressful life events, as predictor variables, controlling for major lifestyle and sociodemographic factors. This model provides empirical evidence of heterogeneity in the predictors' effects depending on an individual's location in the distribution of perceived stress. Results. The results showed that among four stressful life events, family conflicts and social problems were most strongly correlated with the level of perceived stress. Higher levels of education were negatively associated with perceived stress and their coefficients monotonically decrease beyond the 30th percentile. Also, higher levels of physical activity were associated with perception of low levels of stress. The pattern of the gender coefficient across the majority of quantiles implied that females are more affected by stressors. Also, high perceived stress was associated with low or middle levels of income. Conclusions. The results of the current research suggest that in a developing society with a high prevalence of stress, interventions targeted toward promoting financial and social equality, social skills training, and healthy lifestyles may have potential benefits for large parts of the population, most notably women and less educated people. PMID:23091560

  3. Taxonomy, Traits, and Environment Determine Isoprenoid Emission from an Evergreen Tropical forest.

    NASA Astrophysics Data System (ADS)

    Taylor, T.; Alves, E. G.; Tota, J.; Oliveira Junior, R. C.; Camargo, P. B. D.; Saleska, S. R.

    2016-12-01

    Volatile isoprenoid emissions from the leaves of tropical forest trees significantly affect atmospheric chemistry, aerosols, and cloud dynamics, as well as the physiology of the emitting leaves. Emission is associated with plant tolerance to heat and drought stress. Despite a potentially central role of isoprenoid emissions in tropical forest-climate interactions, we have a poor understanding of the relationship between emissions and ecological axes of forest function. We used a custom instrument to quantify leaf isoprenoid emission rates from over 200 leaves and 80 trees at a site in the eastern Brazilian Amazon. We related standardized leaf emission capacity (EC: leaf emission rate at 1000 PAR) to tree taxonomy, height, light environment, wood traits, and leaf traits. Taxonomy was the strongest predictor of EC, and non-emitters could be found throughout the canopy. But we found that environment and leaf carbon economics constrained the upper bound of EC. For example, the relationship between EC and specific leaf area (SLA; fresh leaf area / dry mass) is described by an envelope with a decreasing upper bound on EC as SLA increases (quantile regression: 85th quantile, p<0.01). That result suggests a limitation on emissions related to leaf carbon investment strategies. EC was highest in the mid-canopy and in leaves growing under less direct light. While inferences of ecosystem emissions focus on environmental conditions in the canopy, our results suggest that sub-canopy leaves are more responsive. This work allows us to develop an ecological understanding of isoprenoid emissions from forests, the basis for a predictive model of emissions that depends on both environmental factors and biological emission capacity that is grounded in plant traits and phylogeny.

  4. Statistical Models and Inference Procedures for Structural and Materials Reliability

    DTIC Science & Technology

    1990-12-01

    as an official Department of the Army position, policy, or decision, unless so designated by other documentation. 12a. DISTRIBUTION/AVAILABILITY...Some general stress-strength models were also developed and applied to the failure of systems subject to cyclic loading. Involved in the failure of...process control ideas and sequential design and analysis methods. Finally, smooth nonparametric quantile function estimators were studied. All of

  5. Probabilistic properties of the date of maximum river flow, an approach based on circular statistics in lowland, highland and mountainous catchment

    NASA Astrophysics Data System (ADS)

    Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz

    2018-04-01

    Probabilistic properties of the dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. Circular measures of location and dispersion were applied to long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by the Maximum Likelihood method. Goodness of fit was assessed using both the correlation between quantiles and versions of Kuiper's and Watson's tests. Results show that the number of components varied between catchments and differed for seasonal and annual maxima. Differences between catchments in circular characteristics were explained by climatic factors such as precipitation and temperature. Further studies may include grouping catchments by similarity between circular distribution functions, and the linkage between dates of maximum precipitation and maximum flow.
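The circular measures of location and dispersion mentioned above treat each date as an angle on the annual circle. A minimal sketch of the idea (the function name and the choice of a 365.25-day period are assumptions for illustration; fitting the von Mises mixture itself would require a dedicated package):

```python
import numpy as np

def circular_stats(day_of_year, period=365.25):
    # Map each date onto the unit circle, then compute the circular
    # mean date and the mean resultant length r in [0, 1]:
    # r near 1 means the dates of maxima cluster tightly in the year,
    # r near 0 means they are spread around the calendar.
    theta = 2.0 * np.pi * np.asarray(day_of_year, dtype=float) / period
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    r = np.hypot(C, S)
    mean_day = (np.arctan2(S, C) % (2.0 * np.pi)) * period / (2.0 * np.pi)
    return mean_day, r
```

The mean resultant length r plays the role of a circular dispersion measure: dates half a year apart cancel each other out, driving r toward zero.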

  6. Using nonlinear quantile regression to estimate the self-thinning boundary curve

    Treesearch

    Quang V. Cao; Thomas J. Dean

    2015-01-01

    The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...

  7. Gender Gaps in Mathematics, Science and Reading Achievements in Muslim Countries: Evidence from Quantile Regression Analyses

    ERIC Educational Resources Information Center

    Shafiq, M. Najeeb

    2011-01-01

    Using quantile regression analyses, this study examines gender gaps in mathematics, science, and reading in Azerbaijan, Indonesia, Jordan, the Kyrgyz Republic, Qatar, Tunisia, and Turkey among 15-year-old students. The analyses show that girls in Azerbaijan achieve as well as boys in mathematics and science and overachieve in reading. In Jordan,…

  8. Gender Gaps in Mathematics, Science and Reading Achievements in Muslim Countries: A Quantile Regression Approach

    ERIC Educational Resources Information Center

    Shafiq, M. Najeeb

    2013-01-01

    Using quantile regression analyses, this study examines gender gaps in mathematics, science, and reading in Azerbaijan, Indonesia, Jordan, the Kyrgyz Republic, Qatar, Tunisia, and Turkey among 15-year-old students. The analyses show that girls in Azerbaijan achieve as well as boys in mathematics and science and overachieve in reading. In Jordan,…

  9. A Quantile Regression Approach to Understanding the Relations among Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students

    ERIC Educational Resources Information Center

    Tighe, Elizabeth L.; Schatschneider, Christopher

    2016-01-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological…

  10. Trait Mindfulness as a Limiting Factor for Residual Depressive Symptoms: An Explorative Study Using Quantile Regression

    PubMed Central

    Radford, Sholto; Eames, Catrin; Brennan, Kate; Lambert, Gwladys; Crane, Catherine; Williams, J. Mark G.; Duggan, Danielle S.; Barnhofer, Thorsten

    2014-01-01

    Mindfulness has been suggested to be an important protective factor for emotional health. However, this effect might vary with regard to context. This study applied a novel statistical approach, quantile regression, in order to investigate the relation between trait mindfulness and residual depressive symptoms in individuals with a history of recurrent depression, while taking into account symptom severity and number of episodes as contextual factors. Rather than fitting to a single indicator of central tendency, quantile regression allows exploration of relations across the entire range of the response variable. Analysis of self-report data from 274 participants with a history of three or more previous episodes of depression showed that relatively higher levels of mindfulness were associated with relatively lower levels of residual depressive symptoms. This relationship was most pronounced near the upper end of the response distribution and was moderated by the number of previous episodes of depression at the higher quantiles. The findings suggest that with lower levels of mindfulness, residual symptoms are less constrained and more likely to be influenced by other factors. Further, the limiting effect of mindfulness on residual symptoms is most salient in those with higher numbers of episodes. PMID:24988072

  11. Confidence intervals for expected moments algorithm flood quantile estimates

    USGS Publications Warehouse

    Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.

    2001-01-01

    Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.

  12. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    PubMed

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

    In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.

  13. Robust and efficient estimation with weighted composite quantile regression

    NASA Astrophysics Data System (ADS)

    Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng

    2016-09-01

    In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.

  14. Approximating Long-Term Statistics Early in the Global Precipitation Measurement Era

    NASA Technical Reports Server (NTRS)

    Stanley, Thomas; Kirschbaum, Dalia B.; Huffman, George J.; Adler, Robert F.

    2017-01-01

    Long-term precipitation records are vital to many applications, especially the study of extreme events. The Tropical Rainfall Measuring Mission (TRMM) has served this need, but TRMM's successor mission, Global Precipitation Measurement (GPM), does not yet provide a long-term record. Quantile mapping, the conversion of values across paired empirical distributions, offers a simple, established means to approximate such long-term statistics, but only within appropriately defined domains. This method was applied to a case study in Central America, demonstrating that quantile mapping between TRMM and GPM data maintains the performance of a real-time landslide model. Use of quantile mapping could bring the benefits of the latest satellite-based precipitation dataset to existing user communities such as those for hazard assessment, crop forecasting, numerical weather prediction, and disease tracking.
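Quantile mapping, as described above, passes a value through the source distribution's empirical CDF and back through the reference distribution's inverse CDF. A minimal numpy sketch (the function name and the 101-point probability grid are choices made for illustration, not details from the study):

```python
import numpy as np

def quantile_map(src_sample, ref_sample, values, n_probs=101):
    # Empirical CDF matching: a value at the p-th quantile of the
    # source distribution is replaced by the p-th quantile of the
    # reference distribution.
    probs = np.linspace(0.0, 1.0, n_probs)
    src_q = np.quantile(src_sample, probs)
    ref_q = np.quantile(ref_sample, probs)
    # Piecewise-linear interpolation through the paired quantiles;
    # values outside the source range are clamped to the endpoints.
    return np.interp(values, src_q, ref_q)
```

For example, if the reference sample is simply the source sample shifted by a constant, the mapping reproduces that shift.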

  15. The use of quantile regression to forecast higher than expected respiratory deaths in a daily time series: a study of New York City data 1987-2000.

    PubMed

    Soyiri, Ireneous N; Reidpath, Daniel D

    2013-01-01

    Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data taken from 70,830 deaths occurring in New York were used. Temporal, weather and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal/temporal model plus lags of weather and air quality (Model 3), and a seasonal/temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out-of-sample data. Performance was measured as the proportionate reduction in the weighted sum of absolute deviations of a conditional over the unconditional model, i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model of between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provided a new approach to predict higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept.

  16. The Use of Quantile Regression to Forecast Higher Than Expected Respiratory Deaths in a Daily Time Series: A Study of New York City Data 1987-2000

    PubMed Central

    Soyiri, Ireneous N.; Reidpath, Daniel D.

    2013-01-01

    Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data taken from 70,830 deaths occurring in New York were used. Temporal, weather and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal/temporal model plus lags of weather and air quality (Model 3), and a seasonal/temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out-of-sample data. Performance was measured as the proportionate reduction in the weighted sum of absolute deviations of a conditional over the unconditional model, i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model of between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provided a new approach to predict higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept. PMID:24147122
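The performance measure in this study, the proportionate reduction in the weighted sum of absolute deviations (their R1), can be sketched as follows, assuming the weighted absolute deviations are the usual pinball-loss terms of quantile regression (the Koenker–Machado goodness-of-fit criterion); the function names are invented for illustration:

```python
import numpy as np

def pinball(y, q, tau=0.9):
    # Weighted sum of absolute deviations (check-function loss)
    # of predictions q for outcomes y at quantile tau.
    r = np.asarray(y) - q
    return np.sum(np.maximum(tau * r, (tau - 1.0) * r))

def r1(y, q_conditional, tau=0.9):
    # Proportionate reduction in weighted absolute deviations of the
    # conditional model relative to the unconditional tau-quantile:
    # 0 = no improvement over a constant, 1 = perfect fit.
    q0 = np.quantile(y, tau)
    return 1.0 - pinball(y, q_conditional, tau) / pinball(y, q0, tau)
```

A conditional model that exactly reproduces each observation scores 1, while predicting the unconditional 90th percentile everywhere scores 0, mirroring how the study's Models 2-4 are compared against Model 1.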

  17. [Socioeconomic factors conditioning obesity in adults. Evidence based on quantile regression and panel data].

    PubMed

    Temporelli, Karina L; Viego, Valentina N

    2016-08-01

    Objective To measure the effect of socioeconomic variables on the prevalence of obesity. Factors such as income level, urbanization, incorporation of women into the labor market and access to unhealthy foods are considered in this paper. Method Econometric estimates of the proportion of obese men and women by country were calculated using models based on panel data and quantile regressions, with data from 192 countries for the period 2002-2005. Levels of per capita income, urbanization, the income/Big Mac price ratio and labor indicators for the female population were considered as explanatory variables. Results The factors that influence obesity in adults differ between men and women; accessibility of fast food is related to male obesity, while the employment mode causes higher rates in women. The underlying socioeconomic factors for obesity also differ depending on the magnitude of the problem in each country; in countries with low prevalence, a greater level of income favors the transition to obesogenic habits, while a higher income level mitigates the problem in countries with high rates of obesity. Discussion Identifying the socioeconomic causes of the significant increase in the prevalence of obesity is essential for the implementation of effective prevention strategies, since this condition not only affects the quality of life of those who suffer from it but also puts pressure on health systems due to the treatment costs of associated diseases.

  18. Updating estimates of low streamflow statistics to account for possible trends

    NASA Astrophysics Data System (ADS)

    Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.

    2017-12-01

    Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of the low-flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow that is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of the data subset selection approaches were greater for higher-magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, using only a subset of the flow record (about 30 years) can update 7Q10 estimators to better reflect current streamflow conditions.
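A hypothetical sketch of the nonparametric 7Q10 estimator described above: compute each year's minimum 7-day moving-average flow, then take the 10th percentile (the flow exceeded in nine out of ten years on average) over only the most recent years of record. The function names and the interpolation behaviour of np.quantile are implementation choices, not the authors' exact procedure:

```python
import numpy as np

def annual_7day_min(daily_flow):
    # Minimum of the 7-day moving-average flow within one year.
    ma = np.convolve(daily_flow, np.ones(7) / 7.0, mode="valid")
    return ma.min()

def estimate_7q10(annual_minima, window=30):
    # Nonparametric 7Q10: the 0.1 non-exceedance quantile of the
    # annual 7-day minima, computed from only the most recent
    # `window` years so the estimate reflects current conditions.
    recent = np.asarray(annual_minima, dtype=float)[-window:]
    return np.quantile(recent, 0.10)
```

Restricting the quantile to the trailing window is exactly the subset-selection idea the study evaluates: under a trend, the last 30 years are more representative of current low-flow conditions than the full record.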

  19. No causal impact of serum vascular endothelial growth factor level on temporal changes in body mass index in Japanese male workers: a five-year longitudinal study.

    PubMed

    Imatoh, Takuya; Kamimura, Seiichiro; Miyazaki, Motonobu

    2017-03-01

    It has been reported that adipocytes secrete vascular endothelial growth factor. Therefore, we conducted a 5-year longitudinal epidemiological study to further elucidate the association between vascular endothelial growth factor levels and temporal changes in body mass index. Our study subjects were Japanese male workers who had regular health check-ups. Vascular endothelial growth factor levels were measured at baseline. To examine the association between vascular endothelial growth factor levels and overweight, we calculated the odds ratio using a multivariate logistic regression model. Moreover, linear mixed-effect models were used to assess the association between vascular endothelial growth factor level and temporal changes in body mass index during the 5-year follow-up period. Vascular endothelial growth factor levels were marginally higher in subjects with a body mass index greater than 25 kg/m² than in those with a body mass index less than 25 kg/m² (505.4 vs. 465.5 pg/mL, P = 0.1) and were weakly correlated with leptin levels (β: 0.05, P = 0.07). In multivariate logistic regression, subjects in the highest vascular endothelial growth factor quantile had a significantly increased risk of overweight compared with those in the lowest quantile (odds ratio 1.65, 95% confidence interval: 1.10-2.50), and the trend was significant (P for trend = 0.003). However, the linear mixed-effect model revealed that vascular endothelial growth factor levels were not associated with changes in body mass index over the 5-year period (quantile 2, β: 0.06, P = 0.46; quantile 3, β: -0.06, P = 0.45; quantile 4, β: -0.10, P = 0.22; quantile 1 as reference). Our results suggested that high vascular endothelial growth factor levels were significantly associated with overweight in Japanese males but did not necessarily cause obesity.

  20. Trends of VOC exposures among a nationally representative sample: Analysis of the NHANES 1988 through 2004 data sets

    PubMed Central

    Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart

    2015-01-01

    Exposures to volatile organic compounds (VOCs) are ubiquitous due to emissions from personal, commercial and industrial products, but quantitative and representative information regarding long-term exposure trends is lacking. This study characterizes trends from 1988 to 2004 for the 15 VOCs measured in blood in five cohorts of the National Health and Nutrition Examination Survey (NHANES), a large and representative sample of U.S. adults. Trends were evaluated at various percentiles using linear quantile regression (QR) models, which were adjusted for solvent-related occupations and cotinine levels. Most VOCs showed decreasing trends at all quantiles, e.g., median exposures declined by 2.5 (m, p-xylene) to 6.4 (tetrachloroethene) percent per year over the 15-year period. Trends varied by VOC and quantile, and were grouped into three patterns: similar decreases at all quantiles (including benzene, toluene); most rapid decreases at upper quantiles (ethylbenzene, m, p-xylene, o-xylene, styrene, chloroform, tetrachloroethene); and fastest declines at central quantiles (1,4-dichlorobenzene). These patterns reflect changes in exposure sources, e.g., upper-percentile exposures may result mostly from occupational exposure, while lower-percentile exposures arise from general environmental sources. Both VOC emissions aggregated at the national level and VOC concentrations measured in ambient air also have declined substantially over the study period and support the exposure trends, although the NHANES data suggest the importance of indoor sources and personal activities on VOC exposures.
While piecewise QR models suggest that exposures to several VOCs decreased little if at all during the 1990s, followed by more rapid decreases from 1999 to 2004, questions are raised concerning the reliability of the VOC data in several of the NHANES cohorts and their applicability as an exposure indicator, as demonstrated by the modest correlation between VOC levels in blood and personal air collected in the 1999/2000 cohort. Despite some limitations, the NHANES data provide a unique, long-term and direct measurement of VOC exposures and trends. PMID:25705111

  1. Understanding Child Stunting in India: A Comprehensive Analysis of Socio-Economic, Nutritional and Environmental Determinants Using Additive Quantile Regression

    PubMed Central

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A.

    2013-01-01

    Background Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. Objective We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Design Using cross-sectional data for children aged 0–24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. Results At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Conclusions Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role. PMID:24223839

  2. Understanding child stunting in India: a comprehensive analysis of socio-economic, nutritional and environmental determinants using additive quantile regression.

    PubMed

    Fenske, Nora; Burns, Jacob; Hothorn, Torsten; Rehfuess, Eva A

    2013-01-01

    Most attempts to address undernutrition, responsible for one third of global child deaths, have fallen behind expectations. This suggests that the assumptions underlying current modelling and intervention practices should be revisited. We undertook a comprehensive analysis of the determinants of child stunting in India, and explored whether the established focus on linear effects of single risks is appropriate. Using cross-sectional data for children aged 0-24 months from the Indian National Family Health Survey for 2005/2006, we populated an evidence-based diagram of immediate, intermediate and underlying determinants of stunting. We modelled linear, non-linear, spatial and age-varying effects of these determinants using additive quantile regression for four quantiles of the Z-score of standardized height-for-age and logistic regression for stunting and severe stunting. At least one variable within each of eleven groups of determinants was significantly associated with height-for-age in the 35% Z-score quantile regression. The non-modifiable risk factors child age and sex, and the protective factors household wealth, maternal education and BMI showed the largest effects. Being a twin or multiple birth was associated with dramatically decreased height-for-age. Maternal age, maternal BMI, birth order and number of antenatal visits influenced child stunting in non-linear ways. Findings across the four quantile and two logistic regression models were largely comparable. Our analysis confirms the multifactorial nature of child stunting. It emphasizes the need to pursue a systems-based approach and to consider non-linear effects, and suggests that differential effects across the height-for-age distribution do not play a major role.

  3. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Treesearch

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  4. Statistical downscaling modeling with quantile regression using lasso to estimate extreme rainfall

    NASA Astrophysics Data System (ADS)

    Santri, Dewi; Wigena, Aji Hamim; Djuraidah, Anik

    2016-02-01

    Rainfall is one of the climatic elements with high diversity, and extreme rainfall in particular has many negative impacts; methods are therefore required to minimize the damage that may occur. So far, global circulation models (GCMs) are the best method to forecast global climate change, including extreme rainfall. Statistical downscaling (SD) is a technique to develop the relationship between GCM output as global-scale independent variables and rainfall as a local-scale response variable. Using GCM output directly is difficult when assessed against observations because it is high-dimensional and exhibits multicollinearity between variables. Common methods to handle this problem are principal component analysis (PCA) and partial least squares regression. A newer alternative is the lasso, which has the advantage of simultaneously controlling the variance of the fitted coefficients and performing automatic variable selection. Quantile regression is a method that can be used to detect extreme rainfall at the dry and wet extremes. The objective of this study is to model SD using quantile regression with the lasso to predict extreme rainfall in Indramayu. The results showed that extreme rainfall (extreme wet in January, February and December) in Indramayu could be predicted properly by the model at the 90th quantile.
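Quantile regression with a lasso penalty can be posed as a linear program: minimise the pinball loss plus an L1 penalty on the coefficients. The sketch below is a generic formulation, not the authors' code; the function name, variable layout, and penalty level are illustrative. Each quantity is split into non-negative parts so scipy's linprog can solve it with a single shared bound:

```python
import numpy as np
from scipy.optimize import linprog

def lasso_quantile_regression(X, y, tau=0.9, alpha=0.1):
    # Solve: min_a,b  sum_i pinball_tau(y_i - a - X_i.b) + alpha*||b||_1
    # as a linear program. Variable layout:
    #   [a+, a-, b+ (p), b- (p), u+ (n), u- (n)]
    # where u+ and u- are the positive and negative residual parts.
    n, p = X.shape
    c = np.concatenate([[0.0, 0.0],          # intercept is unpenalised
                        np.full(2 * p, alpha),
                        np.full(n, tau),
                        np.full(n, 1.0 - tau)])
    # Equality constraints: y_i = a + X_i.b + u+_i - u-_i for every i.
    A_eq = np.hstack([np.ones((n, 1)), -np.ones((n, 1)),
                      X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    intercept = res.x[0] - res.x[1]
    beta = res.x[2:2 + p] - res.x[2 + p:2 + 2 * p]
    return intercept, beta
```

As alpha grows, coefficients are driven exactly to zero, which is the automatic variable selection the study relies on for high-dimensional GCM output.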

  5. On the Computation of Optimal Designs for Certain Time Series Models with Applications to Optimal Quantile Selection for Location or Scale Parameter Estimation.

    DTIC Science & Technology

    1981-07-01

    process is observed over all of (0,1], the reproducing kernel Hilbert space (RKHS) techniques developed by Parzen (1961a, 1961b) may be used to construct...covariance kernel, R, for the process (1.1) is the reproducing kernel for a reproducing kernel Hilbert space (RKHS) which will be denoted as H(R) (c.f...2.6), it is known that (c.f. Eubank, Smith and Smith (1981a, 1981b)), i) H(R) is a Hilbert function space consisting of functions which satisfy for f∈H

  6. CO-occurring exposure to perchlorate, nitrate and thiocyanate alters thyroid function in healthy pregnant women

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horton, Megan K., E-mail: megan.horton@mssm.edu; Blount, Benjamin C.; Valentin-Blasini, Liza

    Background: Adequate maternal thyroid function during pregnancy is necessary for normal fetal brain development, making pregnancy a critical window of vulnerability to thyroid-disrupting insults. Sodium/iodide symporter (NIS) inhibitors, namely perchlorate, nitrate, and thiocyanate, have individually been shown to competitively inhibit uptake of iodine by the thyroid. Several epidemiologic studies have examined the association between these individual exposures and thyroid function, but few have examined the effect of this chemical mixture on thyroid function during pregnancy. Objectives: We examined the cross-sectional association between urinary perchlorate, thiocyanate and nitrate concentrations and thyroid function among healthy pregnant women living in New York City using weighted quantile sum (WQS) regression. Methods: We measured thyroid stimulating hormone (TSH) and free thyroxine (FreeT4) in blood samples, and perchlorate, thiocyanate, nitrate and iodide in urine samples collected from 284 pregnant women at 12 (±2.8) weeks gestation. We examined associations between urinary analyte concentrations and TSH or FreeT4 using linear regression or WQS, adjusting for gestational age, urinary iodide and creatinine. Results: Individual analyte concentrations in urine were significantly correlated (Spearman's r 0.4–0.5, p<0.001). Linear regression analyses did not suggest associations between individual concentrations and thyroid function. WQS regression revealed a significant positive association between the weighted sum of urinary concentrations of the three analytes and increased TSH. Perchlorate had the largest weight in the index, indicating the largest contribution to the WQS. Conclusions: Co-exposure to perchlorate, nitrate and thiocyanate may alter maternal thyroid function, specifically TSH, during pregnancy. - Highlights: • Perchlorate, nitrate, thiocyanate and iodide measured in maternal urine. • Thyroid function (TSH and Free T4) measured in maternal blood.
• Weighted quantile sum (WQS) regression examined complex mixture effect. • WQS identified an association between the exposure mixture and maternal TSH. • Perchlorate indicated as the 'bad actor' of the mixture.
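
    The core of WQS regression (scoring each exposure into quantiles, then estimating simplex-constrained weights for their weighted sum) can be sketched on synthetic data. Everything below, from the three-analyte setup to the optimizer choice, is an illustrative assumption, not the study's code:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 284                                   # sample size borrowed from the study
X = rng.lognormal(size=(n, 3))            # stand-ins for perchlorate, nitrate, thiocyanate
true_w = np.array([0.7, 0.2, 0.1])        # first analyte dominates, by construction

# Step 1: score each analyte into quartiles (0..3)
Q = np.column_stack([
    np.searchsorted(np.quantile(X[:, j], [0.25, 0.5, 0.75]), X[:, j])
    for j in range(3)
])
y = 1.0 + 0.5 * (Q @ true_w) + rng.normal(scale=0.5, size=n)  # synthetic "TSH"

# Step 2: least squares over intercept, index coefficient, and weights,
# with the weights constrained to be non-negative and sum to one
def sse(p):
    b0, b1, w = p[0], p[1], p[2:]
    return np.sum((y - (b0 + b1 * (Q @ w))) ** 2)

res = minimize(sse, x0=[0.0, 1.0, 1/3, 1/3, 1/3], method="SLSQP",
               bounds=[(None, None), (None, None), (0, 1), (0, 1), (0, 1)],
               constraints=[{"type": "eq", "fun": lambda p: p[2:].sum() - 1}])
w_hat = res.x[2:]
print("estimated weights:", np.round(w_hat, 2))
```

    The analyte with the largest estimated weight is then read as the main contributor to the mixture effect, the role perchlorate plays in the abstract above.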

  7. Global Climate Model Simulated Hydrologic Droughts and Floods in the Nelson-Churchill Watershed

    NASA Astrophysics Data System (ADS)

    Vieira, M. J. F.; Stadnyk, T. A.; Koenig, K. A.

    2014-12-01

    There is uncertainty surrounding the duration, magnitude and frequency of historical hydroclimatic extremes such as hydrologic droughts and floods prior to the observed record. In regions where paleoclimatic studies are less reliable, Global Climate Models (GCMs) can provide useful information about past hydroclimatic conditions. This study evaluates the use of Coupled Model Intercomparison Project Phase 5 (CMIP5) GCMs to enhance the understanding of historical droughts and floods across the Canadian Prairie region in the Nelson-Churchill Watershed (NCW). The NCW is approximately 1.4 million km² in size and drains into Hudson Bay in Northern Manitoba, Canada. One hundred years of observed hydrologic records show extended dry and wet periods in this region; however, paleoclimatic studies suggest that longer, more severe droughts have occurred in the past. In Manitoba, where hydropower is the primary source of electricity, droughts are of particular interest as they are important for future resource planning. Twenty-three GCMs with daily runoff are evaluated using 16 metrics for skill in reproducing historic annual runoff patterns. A common 56-year historic period of 1950-2005 is used for this evaluation to capture wet and dry periods. GCM runoff is then routed at a grid resolution of 0.25° using the WATFLOOD hydrological model storage-routing algorithm to develop streamflow scenarios. Reservoir operation is naturalized and a consistent temperature scenario is used to determine ice-on and ice-off conditions. These streamflow simulations are compared with the historic record to remove bias using quantile mapping of empirical distribution functions. GCM runoff data from pre-industrial and future projection experiments are also bias corrected to obtain extended streamflow simulations. GCM streamflow simulations of more than 650 years include a stationary (pre-industrial) period and future periods forced by radiative forcing scenarios.
Quantile mapping adjusts for magnitude only while maintaining the GCM's sequencing of events, allowing for the examination of differences in historic and future hydroclimatic extremes. These bias corrected streamflow scenarios provide an alternative to stochastic simulations for hydrologic data analysis and can aid future resource planning and environmental studies.
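
    The quantile mapping step described above can be written in a few lines: each simulated value is passed through the simulation's empirical CDF and back through the observed one. The data and names below are synthetic illustrations, not the study's streamflow records:

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=50.0, size=2000)       # "observed" flows
sim = rng.gamma(shape=2.0, scale=35.0, size=2000) + 10  # biased "GCM" flows

def quantile_map(x, sim_ref, obs_ref, n_q=99):
    """Send x through sim_ref's empirical CDF and back through obs_ref's."""
    probs = np.linspace(0.01, 0.99, n_q)
    return np.interp(x, np.quantile(sim_ref, probs), np.quantile(obs_ref, probs))

corrected = quantile_map(sim, sim, obs)
# Magnitudes are adjusted, but the rank order (the GCM's sequencing of wet
# and dry events) is preserved, as the abstract notes:
print(bool(np.all(np.diff(corrected[np.argsort(sim)]) >= 0)))  # True
```

    Because np.interp with increasing nodes is monotone, dry spells stay dry spells and flood sequences stay flood sequences; only their magnitudes move toward the observed distribution.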

  8. A User’s Guide to BISAM (BIvariate SAMple): The Bivariate Data Modeling Program.

    DTIC Science & Technology

    1983-08-01

    method for the null case specified and is then used to form the bivariate density-quantile function as described in section 4. If D(U) in stage...employed assigns average ranks for tied observations. Other methods for assigning ranks to tied observations are often employed but are not attempted...observations will weaken the results obtained since underlying continuous distributions are assumed. One should avoid such situations if possible. Two methods

  9. Secure Learning and Learning for Security: Research in the Intersection

    DTIC Science & Technology

    2010-05-13

    researchers to consider how Machine Learning and Statistics might be leveraged for constructing intelligent attacks. In a similar vein, security... [figure residue from Q-Q plots: "Theoretical Quantiles" vs. "Sample Quantiles" axes for residuals in Flow 144, and a panel "Comparing Actual and Synthetic"]

  10. Strategies to take into account variations in extreme rainfall events for design storms in urban area: an example over Naples (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Mercogliano, P.; Rianna, G.

    2017-12-01

    Eminent works have highlighted that available observations display ongoing increases in extreme rainfall events, while climate models project further increases for the future. Although constraints in rainfall observation networks and uncertainties in climate modelling currently affect such investigations in a significant way, the huge impacts potentially induced by climate change (CC) suggest adopting effective adaptation measures in order to take proper precautions. In this regard, design storms are used by engineers to size hydraulic infrastructures potentially affected by direct (e.g. pluvial/urban flooding) and indirect (e.g. river flooding) effects of extreme rainfall events. Usually they are expressed as IDF curves, mathematical relationships between rainfall Intensity, Duration, and the return period (frequency, F). They are estimated by interpreting past rainfall records through extreme-value statistical theories (ETST) under the assumption of stationary conditions, and are therefore unsuitable under climate change. In this work, a methodology to estimate future variations in IDF curves is presented and carried out for the city of Naples (Southern Italy). To this end, the Equidistance Quantile Matching approach proposed by Srivastav et al. (2014) is adopted. According to it, daily and sub-daily maximum precipitation observations [a] and the analogous daily data provided by climate projections for current [b] and future time spans [c] are interpreted in IDF terms through the Generalized Extreme Value (GEV) approach. Then a quantile-based mapping approach is used to establish a statistical relationship between the cumulative distribution functions resulting from the GEV fits of [a] and [b] (spatial downscaling) and of [b] and [c] (temporal downscaling). Coupling the so-obtained relations permits generating IDF curves under the CC assumption.
To account for uncertainties in future projections, all climate simulations available for the area in the Euro-Cordex multimodel ensemble at 0.11° (about 12 km) are considered under three different concentration scenarios (RCP2.6, RCP4.5 and RCP8.5). The results appear largely influenced by models, RCPs and the time horizon of interest; nevertheless, clear indications of increases are detectable, although with different magnitudes for the different precipitation durations.
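
    A hedged sketch of the quantile-matching chain just described, with GEV fits standing in for the observed series [a], the model's current climate [b], and its future climate [c]. The synthetic annual maxima, the parameter values, and the clipping guard are my own choices:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
obs = genextreme.rvs(-0.1, loc=50, scale=12, size=60, random_state=rng)  # [a] station maxima
cur = genextreme.rvs(-0.1, loc=40, scale=10, size=60, random_state=rng)  # [b] model, current
fut = genextreme.rvs(-0.1, loc=48, scale=11, size=60, random_state=rng)  # [c] model, future

g_obs = genextreme(*genextreme.fit(obs))
g_cur = genextreme(*genextreme.fit(cur))
g_fut = genextreme(*genextreme.fit(fut))

def eqm(p):
    """Map the future-model p-quantile into station space: find its
    probability under the current-model fit, then read the station fit."""
    pe = float(np.clip(g_cur.cdf(g_fut.ppf(p)), 1e-6, 1 - 1e-6))
    return float(g_obs.ppf(pe))

x100 = eqm(1 - 1 / 100)   # downscaled 100-year intensity
print(round(x100, 1))
```

    Repeating this for several durations and probabilities yields the projected IDF curves; the clip merely guards against probabilities of exactly 0 or 1 arising from the fitted tails.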

  11. An Investigation of Factors Influencing Nurses' Clinical Decision-Making Skills.

    PubMed

    Wu, Min; Yang, Jinqiu; Liu, Lingying; Ye, Benlan

    2016-08-01

    This study aims to investigate the factors influencing nurses' clinical decision-making (CDM) skills. A cross-sectional nonexperimental research design was conducted in the medical, surgical, and emergency departments of two university hospitals, between May and June 2014. We used a quantile regression method to identify the influencing factors across different quantiles of the CDM skills distribution and compared the results with the corresponding ordinary least squares (OLS) estimates. Our findings revealed that nurses were best at the skills of managing themselves. Educational level, experience, and total structural empowerment had significant positive impacts on nurses' CDM skills, while the nurse-patient relationship, patient care and interaction, formal empowerment, and information empowerment were negatively correlated with nurses' CDM skills. These variables explained no more than 30% of the variance in nurses' CDM skills and mainly explained the lower quantiles of the CDM skills distribution. © The Author(s) 2016.
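
    Quantile regression, as used here, minimises the check (pinball) loss instead of squared error. A generic numpy/scipy sketch on synthetic heteroscedastic data (not the nursing data) shows why different quantiles can tell a different story than a single OLS fit:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1 + 0.3 * x, size=n)  # spread grows with x

def fit_quantile(tau):
    """Linear quantile regression for level tau via the check (pinball) loss."""
    def loss(beta):
        r = y - (beta[0] + beta[1] * x)
        return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))
    return minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead",
                    options={"maxiter": 10000, "fatol": 1e-8}).x

b_lo, b_med, b_hi = fit_quantile(0.1), fit_quantile(0.5), fit_quantile(0.9)
# The upper-quantile slope exceeds the lower-quantile slope, a pattern the
# conditional mean alone cannot show:
print(bool(b_lo[1] < b_med[1] < b_hi[1]))  # True
```

    In the study's setting the same idea lets covariate effects differ between nurses at the low and high ends of the CDM skills distribution.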

  12. Effects of export concentration on CO2 emissions in developed countries: an empirical analysis.

    PubMed

    Apergis, Nicholas; Can, Muhlis; Gozgor, Giray; Lau, Chi Keung Marco

    2018-03-08

    This paper provides evidence on the short- and long-run effects of export product concentration on the level of CO2 emissions in 19 developed (high-income) economies, spanning the period 1962-2010. To this end, the paper makes use of nonlinear panel unit root and cointegration tests with multiple endogenous structural breaks. It also considers mean group estimations, the autoregressive distributed lag model, and panel quantile regression estimations. The findings illustrate that the environmental Kuznets curve (EKC) hypothesis is valid in the panel dataset of 19 developed economies. In addition, the paper documents that a higher level of export product concentration leads to lower CO2 emissions. The results from the panel quantile regressions also indicate that the effect of export product concentration on per capita CO2 emissions is relatively high at the higher quantiles.

  13. Heterogeneity in Smokers' Responses to Tobacco Control Policies.

    PubMed

    Nesson, Erik

    2017-02-01

    This paper uses unconditional quantile regression to estimate whether smokers' responses to tobacco control policies change across the distribution of smoking levels. I measure smoking behavior with the number of cigarettes smoked per day and also with serum cotinine levels, a continuous biomarker of nicotine exposure, using individual-level repeated cross-section data from the National Health and Nutrition Examination Surveys. I find that cigarette taxes lead to reductions in both the number of cigarettes smoked per day and in smokers' cotinine levels. These reductions are most pronounced in the middle quantiles of both distributions in terms of marginal effects, but most pronounced in the lower quantiles in terms of tax elasticities. I do not find that higher cigarette taxes lead to statistically significant changes in the amount of nicotine smokers ingest from each cigarette. Copyright © 2015 John Wiley & Sons, Ltd.
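
    Unconditional quantile regression is commonly implemented as an OLS regression of the recentered influence function (RIF) of the quantile on covariates, in the spirit of Firpo, Fortin and Lemieux. A minimal sketch on synthetic data; the "tax" covariate and all parameters are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
tax = rng.uniform(0, 2, n)                        # hypothetical "tax" covariate
y = np.maximum(0.0, 20 - 3 * tax + rng.normal(scale=6, size=n))  # cigarettes/day

tau = 0.5
q = np.quantile(y, tau)
h = 1.06 * y.std() * n ** -0.2                    # Silverman bandwidth
f_q = np.mean(np.exp(-0.5 * ((y - q) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
rif = q + (tau - (y <= q)) / f_q                  # recentered influence function

# OLS of the RIF on the covariate estimates the unconditional quantile effect
X = np.column_stack([np.ones(n), tax])
beta = np.linalg.lstsq(X, rif, rcond=None)[0]
print(round(float(beta[1]), 1))  # close to the -3 effect built into the data
```

    Running this with tau set to 0.1, 0.5, 0.9 would trace out the effect across the smoking distribution, which is the comparison the paper makes.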

  14. Bias correction of daily satellite precipitation data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

    Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process aims to reduce the bias of CHIRP. However, biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated across different seasons and elevation levels. The experiments revealed that the scheme robustly reduced the bias in variance (around 100% reduction) and led to reductions of the first- and second-quantile biases. However, the bias in the third quantile was only reduced during dry months. Across elevation levels, the performance of the bias correction process differed significantly only on skewness indicators.
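
    A toy rendition of the scheme: a nonlinear power transformation x' = a·x^b whose parameters are tuned by a very small genetic algorithm to match the observed mean and standard deviation. The distributions, population size, and mutation scale are all illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 6.0, 1000)           # "station" wet-day rainfall
sat = rng.gamma(2.0, 6.0, 1000) ** 0.8    # distorted "satellite" rainfall

def fitness(a, b):
    corrected = a * sat ** b
    return -((corrected.mean() - obs.mean()) ** 2 +
             (corrected.std() - obs.std()) ** 2)

# Tiny GA over (a, b): selection of the fittest, cloning, Gaussian mutation
pop = rng.uniform([0.1, 0.5], [3.0, 2.0], size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(a, b) for a, b in pop])
    elite = pop[np.argsort(scores)[-10:]]                # keep the 10 best
    children = elite[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, 2))
    pop = np.vstack([elite, children])

a, b = max(pop, key=lambda p: fitness(p[0], p[1]))
corrected = a * sat ** b
print(bool(abs(corrected.mean() - obs.mean()) < abs(sat.mean() - obs.mean())))  # True
```

    Because the elite is carried over unchanged each generation, the best fitness never degrades, so the final transform is at least as good as the best random start.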

  15. An observationally centred method to quantify local climate change as a distribution

    NASA Astrophysics Data System (ADS)

    Stainforth, David; Chapman, Sandra; Watkins, Nicholas

    2013-04-01

    For planning and adaptation, guidance on trends in local climate is needed at the specific thresholds relevant to particular impact or policy endeavours. This requires quantifying trends at specific quantiles in distributions of variables such as daily temperature or precipitation. These non-normal distributions vary both geographically and in time. The trends in the relevant quantiles may not simply follow the trend in the distribution mean. We present a method[1] for analysing local climatic timeseries data to assess which quantiles of the local climatic distribution show the greatest and most robust trends. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily temperature from specific locations across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions to changing climate. Geographical location and temperature are treated as independent variables; we thus obtain as outputs how the trend or sensitivity varies with temperature (or occurrence likelihood) and with geographical location. These sensitivities are found to be geographically varying across Europe, as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We find as an output many regionally consistent patterns of response of potential value in adaptation planning. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This also quantifies the level of detail needed from climate models if they are to be used as tools to assess climate change impact.
[1] S C Chapman, D A Stainforth, N W Watkins, 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, in press [2] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New. 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res (Atmospheres), 113, D20119, doi:10.1029/2008JD10201
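
    The essence of the method, comparing empirical quantiles between an earlier and a later period rather than comparing means, can be shown in a few lines on synthetic daily temperatures (the warming pattern below is invented, not the E-OBS result):

```python
import numpy as np

rng = np.random.default_rng(6)
early = rng.normal(10.0, 5.0, 3650)   # a decade of daily temperatures, earlier
late = rng.normal(11.0, 4.5, 3650)    # later decade: mean up, spread down

probs = np.linspace(0.05, 0.95, 19)
shift = np.quantile(late, probs) - np.quantile(early, probs)
# The cold tail warms more than the warm tail in this construction: exactly
# the quantile-dependent sensitivity the method is built to expose.
print(bool(shift[0] > shift[-1]))  # True
```

    A trend in the mean alone would report a uniform 1 °C change and miss this structure entirely.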

  16. Future extreme water levels and floodplains in Gironde Estuary considering climate change

    NASA Astrophysics Data System (ADS)

    Laborie, V.; Hissel, F.; Sergent, P.

    2012-04-01

    Within the THESEUS European project, an overflowing model of the Gironde Estuary has been used to evaluate future surge levels at Le Verdon and future water levels at 6 specific sites of the estuary: Le Verdon, Richard, Laména, Pauillac, Le Marquis and Bordeaux. It was then used to study the evolution of floodplains' location and areas towards 2100 in the entire estuary. In this study, no breaching and no modification in the elevation of the dikes was considered. The model was fed by several data sources: wind fields at Royan and Mérignac interpolated from the grid of the European Climatologic Model CLM/SGA, a tide signal at Le Verdon, and the discharges of the Garonne (at La Réole), the Dordogne (at Pessac) and the Isle (at Libourne). A simplified mathematical model of surge levels has been adjusted at Le Verdon with 10 surge storms and by using wind and pressure fields given by CLM/SGA. This adjustment was carried out so that the statistical analysis of the global signal at Le Verdon gives the same quantiles as the same analysis driven on tide-gauge observations for the period [1960; 2000]. The assumption used for sea level rise was the pessimistic one of the French national institute for climate change: 60 cm by 2100. The model was then used to study the evolution of extreme water levels towards 2100. The analysis of surge levels at Le Verdon shows a decrease in quantiles which is coherent with the analysis of climatologic fields. The analysis of water levels shows that the increase in mean water level quantiles represents only a part of sea level rise in the Gironde Estuary. Moreover, this effect seems to decrease from the maritime limit of the model towards upstream. Concerning floodplains, those corresponding to return periods from 2 to 100 years for present conditions and for the three slices [2010; 2039], [2040; 2069] and [2070; 2099] have been mapped for 3 areas in the Gironde Estuary: around Le Verdon, at the confluence between the Garonne and the Dordogne, and near Bordeaux.
Concerning the evolution of floodplains in the Gironde Estuary, taking into account IPCC scenario A1B, under the same assumptions, it appears that the impact of climate change on the quantiles of water levels in floodplains depends on the sea level rise over the period considered ([2010; 2039], [2040; 2069], [2070; 2099]) and that areas which are not flooded today for weak return periods become submerged towards 2100. The neighborhood of Le Verdon undergoes a negative impact only in the medium and long term. For the period [2010; 2039], a small reduction of floodplains can be observed in quantiles of water levels for all return periods. Under those assumptions, in the area of Bordeaux, significant effects would be felt along the road RN230 towards 2100. The effects of the discharges and of dike breaching will have to be studied in order to refine these results.

  17. Geostatistical Interpolation of Particle-Size Curves in Heterogeneous Aquifers

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Menafoglio, A.; Secchi, P.

    2013-12-01

    We address the problem of predicting the spatial field of particle-size curves (PSCs) from measurements associated with soil samples collected at a discrete set of locations within an aquifer system. Proper estimates of the full PSC are relevant to applications related to groundwater hydrology, soil science and geochemistry, and aimed at modeling physical and chemical processes occurring in heterogeneous earth systems. Hence, we focus on providing kriging estimates of the entire PSC at unsampled locations. To this end, we treat particle-size curves as cumulative distribution functions, model their densities as functional compositional data, and analyze them by embedding these into the Hilbert space of compositional functions endowed with the Aitchison geometry. On this basis, we develop a new geostatistical methodology for the analysis of spatially dependent functional compositional data. Our functional compositional kriging (FCK) approach provides predictions of the entire particle-size curve at unsampled locations, together with a quantification of the associated uncertainty, by fully exploiting both the functional form of the data and their compositional nature. This is a key advantage of our approach with respect to traditional methodologies, which treat only a set of selected features (e.g., quantiles) of PSCs. Embedding the full PSC into a geostatistical analysis enables one to provide a complete characterization of the spatial distribution of lithotypes in a reservoir, eventually leading to improved predictions of soil hydraulic attributes through pedotransfer functions as well as of soil geochemical parameters which are relevant in sorption/desorption and cation exchange processes. We test our new method on PSCs sampled along a borehole located within an alluvial aquifer near the city of Tuebingen, Germany. The quality of FCK predictions is assessed through leave-one-out cross-validation.
A comparison between hydraulic conductivity estimates obtained via the FCK approach and those predicted by classical kriging of effective particle diameters (i.e., quantiles of the PSCs) is finally performed.
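
    The "selected features" alternative the authors compare against reduces a particle-size curve to a few quantiles. Treating the PSC as a CDF, effective diameters such as d10 and d60 are read off by inverse interpolation; the sieve sizes and passing fractions below are made up for illustration:

```python
import numpy as np

diam_mm = np.array([0.002, 0.06, 0.2, 0.6, 2.0, 6.0])    # sieve sizes (mm)
passing = np.array([0.03, 0.15, 0.42, 0.70, 0.93, 1.0])  # cumulative fraction finer

def d_quantile(p):
    """Invert the PSC (a CDF) by interpolating on log-diameter."""
    return float(np.exp(np.interp(p, passing, np.log(diam_mm))))

d10, d60 = d_quantile(0.10), d_quantile(0.60)
cu = d60 / d10   # uniformity coefficient Cu = d60/d10
print(round(d10, 3), round(d60, 3), round(cu, 1))
```

    Classical kriging would interpolate these scalar diameters in space; the FCK approach of the abstract instead kriges the whole curve at once.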

  18. Copula-based assessment of the relationship between flood peaks and flood volumes using information on historical floods by Bayesian Monte Carlo Markov Chain simulations

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Szolgay, Ján; Bacigál, Tomáš; Kohnová, Silvia

    2010-05-01

    Copula-based estimation methods of hydro-climatological extremes have increasingly been gaining the attention of researchers and practitioners in the last couple of years. Unlike traditional estimation methods, which are based on bivariate cumulative distribution functions (CDFs), copulas are a relatively flexible statistical tool that allows modelling dependencies between two or more variables, such as flood peaks and flood volumes, without making strict assumptions on the marginal distributions. The dependence structure and the reliability of the joint estimates of hydro-climatological extremes, mainly in the right tail of the joint CDF, depend not only on the particular copula adopted but also on the data available for the estimation of the marginal distributions of the individual variables. Generally, data samples for frequency modelling have limited temporal extent, which is a considerable drawback of frequency analyses in practice. Therefore, it is advisable to use statistical methods that improve any part of the process of copula construction and result in more reliable design values of hydrological variables. The scarcity of the data sample, mostly in the extreme tail of the joint CDF, can be bypassed, e.g., by using a considerably larger amount of simulated data from rainfall-runoff analysis or by including historical information on the variables under study. The latter approach of data extension is used here to make the quantile estimates of the individual marginals of the copula more reliable. In the presented paper it is proposed to use historical information in the frequency analysis of the marginal distributions in the framework of Bayesian Monte Carlo Markov Chain (MCMC) simulations. Generally, a Bayesian approach allows for a straightforward combination of different sources of information on floods (e.g.
flood data from systematic measurements and historical flood records, respectively) in terms of a product of the corresponding likelihood functions. On the other hand, the MCMC algorithm is a numerical approach for sampling from the likelihood distributions. Bayesian MCMC methods therefore provide an attractive way to estimate the uncertainty in parameters and quantiles of frequency distributions. The applicability of the method is demonstrated in a case study of the hydroelectric power station Orlík on the Vltava River. This site has a key role in the flood prevention of Prague, the capital of the Czech Republic. The record length of the available flood data is 126 years from the period 1877-2002, while the flood event observed in 2002, which caused extensive damage and numerous casualties, is treated as a historic one. To estimate the joint probabilities of flood peaks and volumes, different copulas are fitted and their goodness-of-fit is evaluated by bootstrap simulations. Finally, selected quantiles of flood volumes conditioned on given flood peaks are derived and compared with those obtained by the traditional method used in the practice of water management specialists of the Vltava River.
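
    For readers unfamiliar with the copula machinery, a minimal numeric illustration: a Gumbel-Hougaard copula (a common choice for flood peak-volume pairs because of its upper-tail dependence) with an invented parameter, used to compare a joint exceedance probability against the independence assumption:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v) = exp(-((-ln u)^t + (-ln v)^t)^(1/t))."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

theta = 2.5            # illustrative dependence between peak and volume
u = 0.99               # peak at its 100-year quantile
v = 0.99               # volume at its 100-year quantile

# Joint exceedance P(U > u, V > v) via the inclusion-exclusion identity
p_joint = 1 - u - v + gumbel_copula(u, v, theta)
p_indep = (1 - u) * (1 - v)
print(bool(p_joint > p_indep))   # True: joint extremes are more likely than under independence
```

    The study's Bayesian MCMC step then sharpens the marginal quantile estimates that u and v are read from, which is where the historical 2002 flood enters.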

  19. Quantile regression of microgeographic variation in population characteristics of an invasive vertebrate predator

    USGS Publications Warehouse

    Siers, Shane R.; Savidge, Julie A.; Reed, Robert

    2017-01-01

    Localized ecological conditions have the potential to induce variation in population characteristics such as size distributions and body conditions. The ability to generalize the influence of ecological characteristics on such population traits may be particularly meaningful when those traits influence prospects for successful management interventions. To characterize variability in invasive Brown Treesnake population attributes within and among habitat types, we conducted systematic and seasonally-balanced surveys, collecting 100 snakes from each of 18 sites: three replicates within each of six major habitat types comprising 95% of Guam’s geographic expanse. Our study constitutes one of the most comprehensive and controlled samplings of any published snake study. Quantile regression on snake size and body condition indicated significant ecological heterogeneity, with a general trend of relative consistency of size classes and body conditions within and among scrub and Leucaena forest habitat types and more heterogeneity among ravine forest, savanna, and urban residential sites. Larger and more robust snakes were found within some savanna and urban habitat replicates, likely due to relative availability of larger prey. Compared to more homogeneous samples in the wet season, variability in size distributions and body conditions was greater during the dry season. Although there is evidence of habitat influencing Brown Treesnake populations at localized scales (e.g., the higher prevalence of larger snakes—particularly males—in savanna and urban sites), the level of variability among sites within habitat types indicates little ability to make meaningful predictions about these traits at unsampled locations. Seasonal variability within sites and habitats indicates that localized population characterization should include sampling in both wet and dry seasons. 
Extreme values at single replicates occasionally influenced overall habitat patterns, while pooling replicates masked variability among sites. A full understanding of population characteristics should include an assessment of variability both at the site and habitat level.

  20. Quantile regression of microgeographic variation in population characteristics of an invasive vertebrate predator

    PubMed Central

    Siers, Shane R.; Savidge, Julie A.; Reed, Robert N.

    2017-01-01

    Localized ecological conditions have the potential to induce variation in population characteristics such as size distributions and body conditions. The ability to generalize the influence of ecological characteristics on such population traits may be particularly meaningful when those traits influence prospects for successful management interventions. To characterize variability in invasive Brown Treesnake population attributes within and among habitat types, we conducted systematic and seasonally-balanced surveys, collecting 100 snakes from each of 18 sites: three replicates within each of six major habitat types comprising 95% of Guam’s geographic expanse. Our study constitutes one of the most comprehensive and controlled samplings of any published snake study. Quantile regression on snake size and body condition indicated significant ecological heterogeneity, with a general trend of relative consistency of size classes and body conditions within and among scrub and Leucaena forest habitat types and more heterogeneity among ravine forest, savanna, and urban residential sites. Larger and more robust snakes were found within some savanna and urban habitat replicates, likely due to relative availability of larger prey. Compared to more homogeneous samples in the wet season, variability in size distributions and body conditions was greater during the dry season. Although there is evidence of habitat influencing Brown Treesnake populations at localized scales (e.g., the higher prevalence of larger snakes—particularly males—in savanna and urban sites), the level of variability among sites within habitat types indicates little ability to make meaningful predictions about these traits at unsampled locations. Seasonal variability within sites and habitats indicates that localized population characterization should include sampling in both wet and dry seasons. 
Extreme values at single replicates occasionally influenced overall habitat patterns, while pooling replicates masked variability among sites. A full understanding of population characteristics should include an assessment of variability both at the site and habitat level. PMID:28570632

  1. Quantile regression of microgeographic variation in population characteristics of an invasive vertebrate predator.

    PubMed

    Siers, Shane R; Savidge, Julie A; Reed, Robert N

    2017-01-01

    Localized ecological conditions have the potential to induce variation in population characteristics such as size distributions and body conditions. The ability to generalize the influence of ecological characteristics on such population traits may be particularly meaningful when those traits influence prospects for successful management interventions. To characterize variability in invasive Brown Treesnake population attributes within and among habitat types, we conducted systematic and seasonally-balanced surveys, collecting 100 snakes from each of 18 sites: three replicates within each of six major habitat types comprising 95% of Guam's geographic expanse. Our study constitutes one of the most comprehensive and controlled samplings of any published snake study. Quantile regression on snake size and body condition indicated significant ecological heterogeneity, with a general trend of relative consistency of size classes and body conditions within and among scrub and Leucaena forest habitat types and more heterogeneity among ravine forest, savanna, and urban residential sites. Larger and more robust snakes were found within some savanna and urban habitat replicates, likely due to relative availability of larger prey. Compared to more homogeneous samples in the wet season, variability in size distributions and body conditions was greater during the dry season. Although there is evidence of habitat influencing Brown Treesnake populations at localized scales (e.g., the higher prevalence of larger snakes, particularly males, in savanna and urban sites), the level of variability among sites within habitat types indicates little ability to make meaningful predictions about these traits at unsampled locations. Seasonal variability within sites and habitats indicates that localized population characterization should include sampling in both wet and dry seasons.
Extreme values at single replicates occasionally influenced overall habitat patterns, while pooling replicates masked variability among sites. A full understanding of population characteristics should include an assessment of variability both at the site and habitat level.

  2. A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates

    NASA Astrophysics Data System (ADS)

    Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh

    2016-10-01

    We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km² flood-prone basin in Southeast Brazil. The results show a significant reduction in the uncertainty of flood quantile estimates relative to the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method.
The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.
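
    As a point of reference for the uncertainty discussion above, here is the plain single-site version: a GEV fit to one gauge's annual maxima with a parametric-bootstrap band for a flood quantile. The record is synthetic; the hierarchical pooling across sites is what the paper adds on top of this baseline:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
am = genextreme.rvs(-0.1, loc=1000, scale=300, size=40, random_state=rng)  # annual maxima

c, loc, scale = genextreme.fit(am)
q50 = genextreme.ppf(1 - 1 / 50, c, loc=loc, scale=scale)   # 50-year flood quantile

# Parametric bootstrap: refit on samples drawn from the fitted model
boot = []
for _ in range(200):
    sample = genextreme.rvs(c, loc=loc, scale=scale, size=am.size, random_state=rng)
    cb, lb, sb = genextreme.fit(sample)
    boot.append(genextreme.ppf(1 - 1 / 50, cb, loc=lb, scale=sb))
lo, hi = np.percentile(boot, [5, 95])
print(bool(lo < q50 < hi))  # the point estimate sits inside its 90% band
```

    With only 40 years of data this band is wide, especially beyond the record length, which is exactly the regime where the paper's regional pooling and Bayesian credible intervals pay off.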

  3. A nonparametric method for assessment of interactions in a median regression model for analyzing right censored data.

    PubMed

    Lee, MinJae; Rahbar, Mohammad H; Talebi, Hooshang

    2018-01-01

    We propose a nonparametric test for interactions when we are concerned with investigation of the simultaneous effects of two or more factors in a median regression model with right censored survival data. Our approach is developed to detect interaction in special situations, when the covariates have a finite number of levels with a limited number of observations in each level, and it allows varying levels of variance and censorship at different levels of the covariates. Through simulation studies, we compare the power of detecting an interaction between the study group variable and a covariate using our proposed procedure with that of the Cox Proportional Hazard (PH) model and the censored quantile regression model. We also assess the impact of censoring rate and type on the standard error of the estimators of parameters. Finally, we illustrate application of our proposed method to real-life data from the Prospective Observational Multicenter Major Trauma Transfusion (PROMMTT) study to test an interaction effect between type of injury and study sites using the median time for a trauma patient to receive three units of red blood cells. The results from simulation studies indicate that our procedure performs better than both the Cox PH model and the censored quantile regression model in terms of statistical power for detecting the interaction, especially when the number of observations is small. It is also relatively less sensitive to censoring rates, or even to the presence of conditionally independent censoring given the levels of the covariates.

  4. Identifying Factors That Predict Promotion Time to E-4 and Re-Enlistment Eligibility for U.S. Marine Corps Field Radio Operators

    DTIC Science & Technology

    2014-12-01

    Primary Military Occupational Specialty PRO Proficiency Q-Q Quantile - Quantile RSS Residual Sum of Squares SI Shop Information T&R Training and...construct multivariate linear regression models to estimate Marines’ Computed Tier Score and time to achieve E-4 based on their individual personal...Science (GS) score, ASVAB Mathematics Knowledge (MK) score, ASVAB Paragraph Comprehension (PC) score, weight , and whether a Marine receives a weight

  5. Orosensory responsiveness and alcohol behaviour.

    PubMed

    Thibodeau, Margaret; Bajec, Martha; Pickering, Gary

    2017-08-01

    Consumption of alcoholic beverages is widespread through much of the world, and significantly impacts human health and well-being. We sought to determine the contribution of orosensation ('taste') to several alcohol intake measures by examining general responsiveness to taste and somatosensory stimuli in a convenience sample of 435 adults recruited from six cohorts. Each cohort was divided into quantiles based on their responsiveness to sweet, sour, bitter, salty, umami, metallic, and astringent stimuli, and the resulting quantiles pooled for analysis (Kruskal-Wallis ANOVA). Responsiveness to bitter and astringent stimuli was associated in a non-linear fashion with intake of all alcoholic beverage types, with the highest consumption observed in middle quantiles. Sourness responsiveness tended to be inversely associated with all measures of alcohol consumption. Regardless of sensation, the most responsive quantiles tended to drink less, although sweetness showed little relationship between responsiveness and intake. For wine, increased umami and metallic responsiveness tended to predict lower total consumption and frequency. A limited examination of individuals who abstain from all alcohol indicated a tendency toward higher responsiveness than alcohol consumers to sweetness, sourness, bitterness, and saltiness (biserial correlation), suggesting that broadly-tuned orosensory responsiveness may be protective against alcohol use and possibly misuse. Overall, these findings confirm the importance of orosensory responsiveness in mediating consumption of alcohol, and indicate areas for further research. Copyright © 2017. Published by Elsevier Inc.

  6. Incremental Treatment Costs Attributable to Overweight and Obesity in Patients with Diabetes: Quantile Regression Approach.

    PubMed

    Lee, Seung-Mi; Choi, In-Sun; Han, Euna; Suh, David; Shin, Eun-Kyung; Je, Seyunghe; Lee, Sung Su; Suh, Dong-Churl

    2018-01-01

    This study aimed to estimate treatment costs attributable to overweight and obesity in patients with diabetes who were less than 65 years of age in the United States. This study used data from the Medical Expenditure Panel Survey from 2001 to 2013. Patients with diabetes were identified by using the International Classification of Diseases, Ninth Revision, Clinical Modification code (250), clinical classification codes (049 and 050), or self-reported physician diagnoses. Total treatment costs attributable to overweight and obesity were calculated as the differences in the adjusted costs compared with individuals with diabetes and normal weight. Adjusted costs were estimated by using generalized linear models or unconditional quantile regression models. The mean annual treatment costs attributable to obesity were $1,852 higher than those attributable to normal weight, while costs attributable to overweight were $133 higher. The unconditional quantile regression results indicated that the impact of obesity on total treatment costs gradually became more significant as treatment costs approached the upper quantile. Among patients with diabetes who were less than 65 years of age, patients with diabetes and obesity have significantly higher treatment costs than patients with diabetes and normal weight. The economic burden of diabetes to society will continue to increase unless more proactive preventive measures are taken to effectively treat patients with overweight or obesity. © 2017 The Obesity Society.
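    The unconditional quantile regressions above can be illustrated with the recentered influence function (RIF) approach of Firpo, Fortin and Lemieux: regress RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f_Y(q_tau) on covariates by OLS. The data below are synthetic stand-ins, not MEPS records:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical skewed treatment costs with an obesity indicator (illustration only).
n = 2000
obese = rng.integers(0, 2, n)
cost = np.exp(7.0 + 0.3 * obese + rng.normal(0.0, 0.8, n))

def rif_quantile_regression(y, X, tau):
    # OLS on the recentered influence function of the tau-th quantile.
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]                  # density of y at the quantile
    rif = q + (tau - (y <= q)) / f_q
    Xd = np.column_stack([np.ones_like(y), X])   # add an intercept
    beta, *_ = np.linalg.lstsq(Xd, rif, rcond=None)
    return beta                                  # [intercept, effect of obesity]

for tau in (0.25, 0.50, 0.90):
    b = rif_quantile_regression(cost, obese, tau)
    print(f"tau={tau:.2f}: obesity effect on the unconditional quantile = {b[1]:.0f}")
```

    With a multiplicative cost shift the estimated effect should grow toward the upper quantiles, mirroring the pattern the record reports for obesity and treatment costs.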

  7. Customized Fetal Growth Charts for Parents' Characteristics, Race, and Parity by Quantile Regression Analysis: A Cross-sectional Multicenter Italian Study.

    PubMed

    Ghi, Tullio; Cariello, Luisa; Rizzo, Ludovica; Ferrazzi, Enrico; Periti, Enrico; Prefumo, Federico; Stampalija, Tamara; Viora, Elsa; Verrotti, Carla; Rizzo, Giuseppe

    2016-01-01

    The purpose of this study was to construct fetal biometric charts between 16 and 40 weeks' gestation that were customized for parental characteristics, race, and parity, using quantile regression analysis. In a multicenter cross-sectional study, 8070 sonographic examinations from low-risk pregnancies between 16 and 40 weeks' gestation were analyzed. The fetal measurements obtained were biparietal diameter, head circumference, abdominal circumference, and femur diaphysis length. Quantile regression was used to examine the impact of parental height and weight, parity, and race across biometric percentiles for the fetal measurements considered. Paternal and maternal height were significant covariates for all of the measurements considered (P < .05). Maternal weight significantly influenced head circumference, abdominal circumference, and femur diaphysis length. Parity was significantly associated with biparietal diameter and head circumference. Central African race was associated with head circumference and femur diaphysis length, whereas North African race was only associated with femur diaphysis length. In this study we constructed customized biometric growth charts using quantile regression in a large cohort of low-risk pregnancies. These charts offer the advantage of defining individualized normal ranges of fetal biometric parameters at each specific percentile corrected for parental height and weight, parity, and race. This study supports the importance of including these variables in routine sonographic screening for fetal growth abnormalities.

  8. A Study of Alternative Quantile Estimation Methods in Newsboy-Type Problems

    DTIC Science & Technology

    1980-03-01

    decision maker selects to have on hand. The newsboy cost equation may be formulated as a two-piece continuous linear function in the following manner. C(S...number of observations, some approximations may be possible. Three points which are near each other can be assumed to be linear and some estimator using...respectively. Define the value r as: r = [nq + 0.5] , (6) where [X] denotes the largest integer of X. Let us consider an estimate of X as the linear

  9. An evaluation of two-channel ChIP-on-chip and DNA methylation microarray normalization strategies

    PubMed Central

    2012-01-01

    Background The combination of chromatin immunoprecipitation with two-channel microarray technology enables genome-wide mapping of binding sites of DNA-interacting proteins (ChIP-on-chip) or sites with methylated CpG di-nucleotides (DNA methylation microarray). These powerful tools are the gateway to understanding gene transcription regulation. Since the goals of such studies, the sample preparation procedures, the microarray content and the study design all differ from those of transcriptomics microarrays, the data pre-processing strategies traditionally applied to transcriptomics microarrays may not be appropriate. In particular, the main challenge of the normalization of "regulation microarrays" is (i) to make the data of individual microarrays quantitatively comparable and (ii) to keep the signals of the enriched probes, representing DNA sequences from the precipitate, as distinguishable as possible from the signals of the un-enriched probes, representing DNA sequences largely absent from the precipitate. Results We compare several widely used normalization approaches (VSN, LOWESS, quantile, T-quantile, Tukey's biweight scaling, Peng's method) applied to a selection of regulation microarray datasets, ranging from DNA methylation to transcription factor binding and histone modification studies. Through comparison of the data distributions of control probes and gene promoter probes before and after normalization, and assessment of the power to identify known enriched genomic regions after normalization, we demonstrate that there are clear differences in performance between normalization procedures. Conclusion T-quantile normalization applied separately on the channels and Tukey's biweight scaling outperform other methods in terms of the conservation of enriched and un-enriched signal separation, as well as in identification of genomic regions known to be enriched. T-quantile normalization is preferable as it additionally improves comparability between microarrays.
In contrast, popular normalization approaches like quantile, LOWESS, Peng's method and VSN normalization alter the data distributions of regulation microarrays to such an extent that using these approaches will impact the reliability of the downstream analysis substantially. PMID:22276688
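    A minimal numpy sketch of plain quantile normalization, one of the approaches compared above (the T-quantile variant the authors prefer applies the same operation separately per channel; the data here are synthetic):

```python
import numpy as np

def quantile_normalize(x):
    # Give every column (microarray) the same empirical distribution:
    # rank values within each column, then replace each value by the mean
    # across columns of the values sharing that rank.
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)  # 0-based rank per column
    mean_by_rank = np.sort(x, axis=0).mean(axis=1)     # reference distribution
    return mean_by_rank[ranks]

rng = np.random.default_rng(1)
# Three synthetic arrays with deliberately different intensity scales.
arrays = rng.lognormal(mean=[0.0, 0.5, 1.0], sigma=1.0, size=(1000, 3))
normed = quantile_normalize(arrays)
```

    After normalization the sorted values of every column coincide exactly, which is precisely the property that can flatten the enriched/un-enriched separation the authors warn about.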

  10. Quantile-Specific Penetrance of Genes Affecting Lipoproteins, Adiposity and Height

    PubMed Central

    Williams, Paul T.

    2012-01-01

    Quantile-dependent penetrance is proposed to occur when the phenotypic expression of a SNP depends upon the population percentile of the phenotype. To illustrate the phenomenon, quantiles of height, body mass index (BMI), and plasma lipids and lipoproteins were compared to genetic risk scores (GRS) derived from single nucleotide polymorphisms (SNPs) having established genome-wide significance: 180 SNPs for height, 32 for BMI, 37 for low-density lipoprotein (LDL)-cholesterol, 47 for high-density lipoprotein (HDL)-cholesterol, 52 for total cholesterol, and 31 for triglycerides in 1930 subjects. Both phenotypes and GRSs were adjusted for sex, age, study, and smoking status. Quantile regression showed that the slope of the genotype-phenotype relationships increased with the percentile of BMI (P = 0.002), LDL-cholesterol (P = 3×10−8), HDL-cholesterol (P = 5×10−6), total cholesterol (P = 2.5×10−6), and triglyceride distribution (P = 7.5×10−6), but not height (P = 0.09). Compared to a GRS's phenotypic effect at the 10th population percentile, its effect at the 90th percentile was 4.2-fold greater for BMI, 4.9-fold greater for LDL-cholesterol, 1.9-fold greater for HDL-cholesterol, 3.1-fold greater for total cholesterol, and 3.3-fold greater for triglycerides. Moreover, the effect of the rs1558902 (FTO) risk allele was 6.7-fold greater at the 90th than the 10th percentile of the BMI distribution, and that of the rs3764261 (CETP) risk allele was 2.4-fold greater at the 90th than the 10th percentile of the HDL-cholesterol distribution. Conceptually, it may be useful to distinguish environmental effects on the phenotype that in turn alter a gene's phenotypic expression (quantile-dependent penetrance) from environmental effects affecting the gene's phenotypic expression directly (gene-environment interaction). PMID:22235250

  11. A Study on Regional Rainfall Frequency Analysis for Flood Simulation Scenarios

    NASA Astrophysics Data System (ADS)

    Jung, Younghun; Ahn, Hyunjun; Joo, Kyungwon; Heo, Jun-Haeng

    2014-05-01

    Climate change has recently been observed in Korea, as in the rest of the world: rainstorms have gradually intensified and the resulting damage has grown, making the management of flood control facilities increasingly important. For managing flood control facilities in at-risk regions, data sets such as elevation, gradient, channel, land use and soil data should be compiled. Using this information, disaster situations can be simulated to secure evacuation routes under various rainfall scenarios. The aim of this study is to investigate and determine extreme rainfall quantile estimates in Uijeongbu City using the index flood method with L-moments parameter estimation. Regional frequency analysis trades space for time by using annual maximum rainfall data from nearby or similar sites to derive estimates for any given site in a homogeneous region. Regional frequency analysis based on pooled data is recommended for estimation of rainfall quantiles at sites with record lengths less than 5T, where T is the return period of interest. Many variables relevant to precipitation can be used for grouping regions in regional frequency analysis. For regionalization of the Han River basin, the k-means method is applied to group regions by meteorological and geomorphological variables. The resulting regions are compared using various probability distributions, and in the final step of the regionalization analysis a goodness-of-fit measure is used to evaluate the accuracy of a set of candidate distributions. Rainfall quantiles are then obtained by the index flood method from the selected distribution, and rainfall quantiles for the various scenarios are used as input data for disaster simulations.
Keywords: Regional Frequency Analysis; Scenarios of Rainfall Quantile
Acknowledgements: This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
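    The pooling step of the index flood method can be sketched with sample L-moments (Hosking and Wallis probability-weighted moments); the two sites below are hypothetical, not Han River gauges:

```python
import numpy as np

def l_moments_12(x):
    # First two sample L-moments via probability-weighted moments:
    # l1 = b0 (mean), l2 = 2*b1 - b0, with b1 = (1/n) * sum_i (i/(n-1)) * x_(i).
    x = np.sort(x)
    n = x.size
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n
    return b0, 2.0 * b1 - b0

rng = np.random.default_rng(7)
site_a = rng.gumbel(100.0, 30.0, 60)   # hypothetical annual maxima, site A
site_b = rng.gumbel(250.0, 75.0, 60)   # site B: same shape, larger index flood

# Index flood idea: divide each site by its mean (the "index flood"), pool the
# dimensionless data, and fit one regional growth curve to the pooled sample.
pooled = np.concatenate([site_a / site_a.mean(), site_b / site_b.mean()])
l1, l2 = l_moments_12(pooled)
print(f"regional mean = {l1:.3f}, regional L-CV = {l2 / l1:.3f}")
```

    A site's quantile estimate is then its own index flood multiplied by the regional growth curve evaluated at the target return period.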

  12. Effect of threatening life experiences and adverse family relations in ulcerative colitis: analysis using structural equation modeling and comparison with Crohn's disease.

    PubMed

    Slonim-Nevo, Vered; Sarid, Orly; Friger, Michael; Schwartz, Doron; Sergienko, Ruslan; Pereg, Avihu; Vardi, Hillel; Singer, Terri; Chernin, Elena; Greenberg, Dan; Odes, Shmuel

    2017-05-01

    We published that threatening life experiences and adverse family relations impact Crohn's disease (CD) adversely. In this study, we examine the influence of these stressors in ulcerative colitis (UC). Patients completed demography, economic status (ES), the Patient-Simple Clinical Colitis Activity Index (P-SCCAI), the Short Inflammatory Bowel Disease Questionnaire (SIBDQ), the Short-Form Health Survey (SF-36), the Brief Symptom Inventory (BSI), the Family Assessment Device (FAD), and the List of Threatening Life Experiences (LTE). Analysis included multiple linear and quantile regressions and structural equation modeling, with comparison to CD. UC patients (N=148, age 47.55±16.04 years, 50.6% women) had scores [median (interquartile range)] as follows: SCAAI, 2 (0.3-4.8); FAD, 1.8 (1.3-2.2); LTE, 1.0 (0-2.0); SF-36 Physical Health, 49.4 (36.8-55.1); SF-36 Mental Health, 45 (33.6-54.5); Brief Symptom Inventory-Global Severity Index (GSI), 0.5 (0.2-1.0). SIBDQ was 49.76±14.91. There were significant positive associations for LTE and SCAAI (25, 50, 75% quantiles), FAD and SF-36 Mental Health, FAD and LTE with GSI (50, 75, 90% quantiles), and ES with SF-36 and SIBDQ. The negative associations were as follows: LTE with SF-36 Physical/Mental Health, SIBDQ with FAD and LTE, ES with GSI (all quantiles), and P-SCCAI (75, 90% quantiles). In structural equation modeling analysis, LTE impacted ES negatively and ES impacted GSI negatively; LTE impacted GSI positively and GSI impacted P-SCCAI positively. In a split model, ES had a greater effect on GSI in UC than in CD, whereas other path magnitudes were similar. Threatening life experiences, adverse family relations, and poor ES make UC patients less healthy both physically and mentally. The impact of ES is worse in UC than in CD.

  13. Regional L-Moment-Based Flood Frequency Analysis in the Upper Vistula River Basin, Poland

    NASA Astrophysics Data System (ADS)

    Rutkowska, A.; Żelazny, M.; Kohnová, S.; Łyp, M.; Banasik, K.

    2017-02-01

    The Upper Vistula River basin was divided into pooling groups with similar dimensionless frequency distributions of annual maximum river discharge. The cluster analysis and the Hosking and Wallis (HW) L-moment-based method were used to divide the set of 52 mid-sized catchments into disjoint clusters with similar morphometric, land use, and rainfall variables, and to test the homogeneity within clusters. Finally, three and four pooling groups were obtained alternatively. Two methods for identification of the regional distribution function were used, the HW method and the method of Kjeldsen and Prosdocimi based on a bivariate extension of the HW measure. Subsequently, the flood quantile estimates were calculated using the index flood method. The ordinary least squares (OLS) and the generalised least squares (GLS) regression techniques were used to relate the index flood to catchment characteristics. Predictive performance of the regression scheme for the southern part of the Upper Vistula River basin was improved by using GLS instead of OLS. The results of the study can be recommended for the estimation of flood quantiles at ungauged sites, in flood risk mapping applications, and in engineering hydrology to help design flood protection structures.

  14. An actuarial approach to retrofit savings in buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subbarao, Krishnappa; Etingov, Pavel V.; Reddy, T. A.

    An actuarial method has been developed for determining energy savings from retrofits using energy use data for a number of buildings. This method should be contrasted with the traditional method of using pre- and post-retrofit data on the same building. This method supports the U.S. Department of Energy Building Performance Database of real building performance data and related tools that enable engineering and financial practitioners to evaluate retrofits. The actuarial approach derives, from the database, probability density functions (PDFs) for energy savings from retrofits by creating peer groups for the user's pre- and post-retrofit buildings. From the energy use distributions of the two groups, the savings PDF is derived. This provides the basis for engineering analysis as well as financial risk analysis leading to investment decisions. Several technical issues are addressed: The savings PDF is obtained from the pre- and post-retrofit PDFs through a convolution. Smoothing using kernel density estimation is applied to make the PDF more realistic. The low data density problem can be mitigated through a neighborhood methodology. Correlations between pre and post buildings are addressed to improve the savings PDF. Sample size effects are addressed through Kolmogorov-Smirnov tests and quantile-quantile plots.
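    The convolution-plus-KDE step can be sketched as follows: smooth the pre- and post-retrofit peer groups with kernel density estimates, then form the savings distribution by differencing independent draws (a Monte Carlo convolution of the two PDFs). All numbers are hypothetical:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical peer groups: annual energy use intensity (kWh/m2).
pre_group = rng.normal(180.0, 25.0, 400)    # peers of the pre-retrofit building
post_group = rng.normal(150.0, 20.0, 350)   # peers of the post-retrofit building

# Smooth each group with a KDE, then difference independent draws to get the
# savings PDF (equivalent to convolving the two smoothed densities).
pre_kde = gaussian_kde(pre_group)
post_kde = gaussian_kde(post_group)
savings = pre_kde.resample(20000)[0] - post_kde.resample(20000)[0]

lo, hi = np.percentile(savings, [5, 95])
print(f"mean savings {savings.mean():.1f} kWh/m2, 90% band [{lo:.1f}, {hi:.1f}]")
```

    Treating the two groups as independent is the simplification here; as the record notes, modeling the pre/post correlation tightens the savings PDF.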

  15. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

    One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extend the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about the regression parameters but also about the noise parameters.

  16. Bias and Variance Approximations for Estimators of Extreme Quantiles

    DTIC Science & Technology

    1988-11-01


  17. Socioeconomic and ethnic inequalities in exposure to air and noise pollution in London.

    PubMed

    Tonne, Cathryn; Milà, Carles; Fecht, Daniela; Alvarez, Mar; Gulliver, John; Smith, James; Beevers, Sean; Ross Anderson, H; Kelly, Frank

    2018-06-01

    Transport-related air and noise pollution, exposures linked to adverse health outcomes, varies within cities, potentially resulting in exposure inequalities. Relatively little is known regarding inequalities in personal exposure to air pollution or transport-related noise. Our objectives were to quantify socioeconomic and ethnic inequalities in London in 1) air pollution exposure at residence compared to personal exposure; and 2) transport-related noise at residence from different sources. We used individual-level data from the London Travel Demand Survey (n = 45,079) between 2006 and 2010. We modeled residential (CMAQ-urban) and personal (London Hybrid Exposure Model) exposure to particulate matter <2.5 μm and nitrogen dioxide (NO2), modeled road-traffic noise at residence (TRANEX), and identified those within 50 dB noise contours of railways and Heathrow airport. We analyzed relationships between household income, area-level income deprivation and ethnicity with air and noise pollution using quantile and logistic regression. We observed inverse patterns in inequalities in air pollution when estimated at residence versus personal exposure with respect to household income (categorical, 8 groups). Compared to the lowest income group (<£10,000), the highest group (>£75,000) had lower residential NO2 (-1.3 (95% CI -2.1, -0.6) μg/m3 in the 95th exposure quantile) but higher personal NO2 exposure (1.9 (95% CI 1.6, 2.3) μg/m3 in the 95th quantile), which was driven largely by transport mode and duration. Inequalities in residential exposure to NO2 with respect to area-level deprivation were larger at lower exposure quantiles (e.g. estimate for NO2 5.1 (95% CI 4.6, 5.5) at quantile 0.15 versus 1.9 (95% CI 1.1, 2.6) at quantile 0.95), reflecting low-deprivation, high residential NO2 areas in the city centre. Air pollution exposure at residence consistently overestimated personal exposure; this overestimation varied with age, household income, and area-level income deprivation.
Inequalities in road traffic noise were generally small. In logistic regression models, the odds of living within a 50 dB contour of aircraft noise were highest in individuals with the highest household income, white ethnicity, and with the lowest area-level income deprivation. Odds of living within a 50 dB contour of rail noise were 19% (95% CI 3, 37) higher for black compared to white individuals. Socioeconomic inequalities in air pollution exposure were different for modeled residential versus personal exposure, which has important implications for environmental justice and confounding in epidemiology studies. Exposure misclassification was dependent on several factors related to health, a potential source of bias in epidemiological studies. Quantile regression revealed that socioeconomic and ethnic inequalities in air pollution are often not uniform across the exposure distribution. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Assessing the pollution risk of a groundwater source field at western Laizhou Bay under seawater intrusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong, E-mail: wangdong@nju.edu.cn

    Coastal areas are of great significance for human habitation and for economic and social development. With the rapid increase of pressures from human activities and climate change, the safety of groundwater resources in coastal areas is under threat from seawater intrusion. The area of Laizhou Bay is one of the most seriously seawater-intruded areas in China; the phenomenon was first recognized there in the mid-1970s. This study assessed the pollution risk of a groundwater source field of the western Laizhou Bay area by inferring the probability distribution of groundwater Cl− concentration. The numerical model of the seawater intrusion process is built by using SEAWAT4. The parameter uncertainty of this model is evaluated by Markov Chain Monte Carlo (MCMC) simulation, and DREAM(ZS) is used as the sampling algorithm. Then, the predictive distribution of Cl− concentration at the groundwater source field is inferred by using the samples of model parameters obtained from MCMC. After that, the pollution risk of the groundwater source field is assessed by the predictive quantiles of Cl− concentration. The results of model calibration and verification demonstrate that the DREAM(ZS)-based MCMC is efficient and reliable for estimating model parameters under current observations. At the 95% confidence level, the groundwater source field will not be polluted by seawater intrusion in the next five years (2015–2019). In addition, the 2.5% and 97.5% predictive quantiles show that the Cl− concentration of the groundwater source field always varies between 175 mg/l and 200 mg/l. - Highlights: • The parameter uncertainty of the seawater intrusion model is evaluated by MCMC. • The groundwater source field won't be polluted by seawater intrusion in the next 5 years. • The pollution risk is assessed by the predictive quantiles of Cl− concentration.

  19. Matching a Distribution by Matching Quantiles Estimation

    PubMed Central

    Sgouropoulos, Nikolaos; Yao, Qiwei; Yastremiz, Claudia

    2015-01-01

    Motivated by the problem of selecting representative portfolios for backtesting counterparty credit risks, we propose a matching quantiles estimation (MQE) method for matching a target distribution by that of a linear combination of a set of random variables. An iterative procedure based on the ordinary least-squares estimation (OLS) is proposed to compute MQE. MQE can be easily modified by adding a LASSO penalty term if a sparse representation is desired, or by restricting the matching within certain range of quantiles to match a part of the target distribution. The convergence of the algorithm and the asymptotic properties of the estimation, both with or without LASSO, are established. A measure and an associated statistical test are proposed to assess the goodness-of-match. The finite sample properties are illustrated by simulation. An application in selecting a counterparty representative portfolio with a real dataset is reported. The proposed MQE also finds applications in portfolio tracking, which demonstrates the usefulness of combining MQE with LASSO. PMID:26692592
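    The iterative OLS procedure for MQE can be sketched as follows: sort the target sample once, then alternate between ranking the current fitted values and refitting OLS against the correspondingly reordered target. Data and dimensions are illustrative, not a backtesting portfolio:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical target Y and four candidate variables X (columns).
n, p = 1500, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.5, 0.0, 2.0])
Y = X @ beta_true + rng.normal(0.0, 0.1, n)  # only Y's distribution is matched

def mqe(Y, X, iters=50):
    # Matching quantiles estimation: reorder the sorted target so its ranks
    # follow the current fit X @ beta, refit by OLS, and iterate.
    Ys = np.sort(Y)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)      # OLS starting value
    for _ in range(iters):
        order = np.argsort(np.argsort(X @ beta))      # ranks of fitted values
        beta, *_ = np.linalg.lstsq(X, Ys[order], rcond=None)
    return beta

beta_hat = mqe(Y, X)
# Goodness-of-match: sorted fitted values should track sorted target values.
gap = np.max(np.abs(np.sort(X @ beta_hat) - np.sort(Y)))
print(f"max quantile gap = {gap:.3f}")
```

    A sparse variant, as in the paper, would replace the inner OLS with a LASSO-penalized fit.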

  20. A method to preserve trends in quantile mapping bias correction of climate modeled temperature

    NASA Astrophysics Data System (ADS)

    Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2017-09-01

    Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized and a residual component relative to the modeled reference period climatology, in order to adjust the biases only for the former and preserve the signal of the latter. The results show that this method allows for the preservation of the originally modeled long-term signal in the mean, the standard deviation and higher and lower percentiles of temperature. To illustrate the improvements, the methodology is tested on daily time series obtained from five Euro CORDEX regional climate models (RCMs).
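    A minimal sketch of the idea on synthetic daily temperatures: plain empirical quantile mapping, plus the simplest signal-preserving variant (correct anomalies relative to the modeled reference mean, then restore the long-term change). This simplifies the paper's normalized/residual decomposition rather than reproducing it exactly:

```python
import numpy as np

def quantile_map(model_ref, obs_ref, values):
    # Empirical quantile mapping: find each value's quantile in the model
    # reference distribution, return the same quantile of the observations.
    taus = np.linspace(0.01, 0.99, 99)
    return np.interp(values, np.quantile(model_ref, taus), np.quantile(obs_ref, taus))

rng = np.random.default_rng(11)
obs_ref = rng.normal(15.0, 4.0, 3000)                   # observed daily T (deg C)
model_ref = obs_ref + 2.0 + rng.normal(0.0, 0.5, 3000)  # model runs ~2 K too warm
model_fut = model_ref + 3.0                             # future run with +3 K signal

# Mapping the raw future series would clip the warming in np.interp's flat
# tails; correcting anomalies and restoring the signal preserves the +3 K.
signal = model_fut.mean() - model_ref.mean()
corrected = quantile_map(model_ref, obs_ref, model_fut - signal) + signal
print(f"preserved change: {corrected.mean() - obs_ref.mean():.2f} K")
```

    The naive mapping's tail clipping is one concrete way quantile mapping "modifies the long-term statistics" as described above.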

  1. Comparability of a short food frequency questionnaire to assess diet quality: the DISCOVER study.

    PubMed

    Dehghan, Mahshid; Ge, Yipeng; El Sheikh, Wala; Bawor, Monica; Rangarajan, Sumathy; Dennis, Brittany; Vair, Judith; Sholer, Heather; Hutchinson, Nichole; Iordan, Elizabeth; Mackie, Pam; Samaan, Zainab

    2017-09-01

    This study aims to assess the comparability of a short food frequency questionnaire (SFFQ) used in the Determinants of Suicide: Conventional and Emergent Risk Study (DISCOVER Study) with a validated comprehensive FFQ (CFFQ). A total of 127 individuals completed the SFFQ and the CFFQ. Healthy eating was measured using the Healthy Eating Score (HES). Estimated food intake and healthy eating assessed by the SFFQ were compared with the CFFQ. For most food groups and the HES, Spearman's rank correlation coefficients between the two FFQs were r > .60. For macro-nutrients, the correlations exceeded 0.4. Cross-classification quantile analysis showed that between 46% and 81% of participants were classified into the exact same quantiles, while 10% or less were misclassified into opposite quantiles. The Bland-Altman plots showed an acceptable level of agreement between the two dietary measurement methods. The SFFQ can be used for Canadians with psychiatric disorders to rank them based on their dietary intake.
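    The cross-classification check reported above can be reproduced on synthetic intake data with pandas quantile bins (all numbers are illustrative, not DISCOVER data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical intakes: a comprehensive FFQ and a noisier short FFQ, n = 127.
cffq = rng.gamma(4.0, 50.0, 127)               # reference intake (g/day)
sffq = cffq * rng.lognormal(0.0, 0.25, 127)    # short FFQ with multiplicative noise

q_c = pd.qcut(cffq, 4, labels=False)           # quartile codes 0..3
q_s = pd.qcut(sffq, 4, labels=False)

exact = float(np.mean(q_s == q_c))                  # same quartile on both FFQs
opposite = float(np.mean(np.abs(q_s - q_c) == 3))   # extreme misclassification
print(f"exact agreement {exact:.0%}, opposite quartile {opposite:.0%}")
```

    High exact agreement with near-zero opposite-quartile misclassification is the pattern a usable short instrument should show.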

  2. Examining Predictive Validity of Oral Reading Fluency Slope in Upper Elementary Grades Using Quantile Regression.

    PubMed

    Cho, Eunsoo; Capin, Philip; Roberts, Greg; Vaughn, Sharon

    2017-07-01

    Within multitiered instructional delivery models, progress monitoring is a key mechanism for determining whether a child demonstrates an adequate response to instruction. One measure commonly used to monitor the reading progress of students is oral reading fluency (ORF). This study examined the extent to which ORF slope predicts reading comprehension outcomes for fifth-grade struggling readers ( n = 102) participating in an intensive reading intervention. Quantile regression models showed that ORF slope significantly predicted performance on a sentence-level fluency and comprehension assessment, regardless of the students' reading skills, controlling for initial ORF performance. However, ORF slope was differentially predictive of a passage-level comprehension assessment based on students' reading skills when controlling for initial ORF status. Results showed that ORF explained unique variance for struggling readers whose posttest performance was at the upper quantiles at the end of the reading intervention, but slope was not a significant predictor of passage-level comprehension for students whose reading problems were the most difficult to remediate.

  3. Statistical bias correction method applied on CMIP5 datasets over the Indian region during the summer monsoon season for climate change applications

    NASA Astrophysics Data System (ADS)

    Prasanna, V.

    2018-01-01

    This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulations with respect to observations is discussed in detail. Non-linear statistical bias correction is a suitable method for climate change data because it is simple and does not add artificial uncertainty to the impact assessment of climate change scenarios for climate change application studies (agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method, and the two methods are comparable. The bias correction is applied to temperature and precipitation for the present climate and for future projections, for use in a simple statistical model to understand future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for the Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project the changes in agricultural yield over India under the RCP4.5 and RCP8.5 scenarios. 
The results revealed a better convergence of model projections in the bias-corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding future changes in agricultural productivity with an agro-economic or a simple statistical model. The statistical model indicated that total food grain yield will increase over the Indian region in the future: by approximately 50 kg/ha under the RCP4.5 scenario from 2001 until the end of 2100, and by approximately 90 kg/ha under the RCP8.5 scenario over the same period. Many studies have used bias correction techniques, but this study applies bias correction to future climate scenario data from CMIP5 models and then applies the corrected data to crop statistics to find future crop yield changes over the Indian region.
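    The percentile-based quantile mapping described above can be sketched as follows: each future model value is located in the model-historical CDF and replaced by the observed value at the same percentile. The gamma-distributed synthetic "observed" and "model" rainfall series are purely illustrative assumptions, not the CMIP5 data used in the study.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_q=99):
    """Percentile-based quantile mapping: map each future model value
    through the model-historical CDF onto the observed quantiles."""
    probs = np.linspace(0.01, 0.99, n_q)
    mod_q = np.quantile(model_hist, probs)
    obs_q = np.quantile(obs_hist, probs)
    # position of each future value in the model-historical distribution
    p = np.interp(model_future, mod_q, probs)
    # replace it with the observed value at the same quantile
    return np.interp(p, probs, obs_q)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, 3000)                 # "observed" monsoon rainfall
mod = rng.gamma(2.0, 5.0, 3000) * 1.3 + 2.0     # biased model, historical run
fut = rng.gamma(2.0, 6.0, 3000) * 1.3 + 2.0     # future run with the same bias
corrected = quantile_map(mod, obs, fut)
```

    Mapping the historical model run through its own quantiles reproduces the observed distribution, while the future run keeps its projected change signal but loses the systematic offset.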

  4. Flood frequency analysis - the challenge of using historical data

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn

    2015-04-01

    Estimates of high flood quantiles are needed for many applications, e.g., dam safety assessments are based on the 1000-year flood, whereas the dimensioning of important infrastructure requires estimates of the 200-year flood. The flood quantiles are estimated by fitting a parametric distribution to a dataset of high flows comprising either annual maximum values or peaks over a selected threshold. Since record lengths are short compared to the return periods of the desired flood quantiles, the estimated flood magnitudes involve a high degree of extrapolation. For example, the longest time series available in Norway are around 120 years, so any estimate of a 1000-year flood requires extrapolation. One solution is to extend the temporal dimension of a data series by including information about historical floods that occurred before streamflow was systematically gauged. Such information could be flood marks or written documentation about flood events. The aim of this study was to evaluate the added value of using historical flood data for at-site flood frequency estimation. The historical floods were included in two ways by assuming: (1) the size of (all) floods above a high threshold within a time interval is known; and (2) the number of floods above a high threshold for a time interval is known. We used a Bayesian model formulation, with MCMC used for model estimation. This estimation procedure allowed us to estimate the predictive uncertainty of flood quantiles (i.e., both sampling and parameter uncertainty are accounted for). We tested the methods using 123 years of systematic data from Bulken in western Norway. In 2014 the largest flood in the systematic record was observed. From written documentation and flood marks we had information about three severe floods in the 18th century that likely exceeded the 2014 flood. We evaluated the added value in two ways. 
First we used the 123-year streamflow time series and investigated the effect of having several shorter series that could be supplemented with a limited number of known large flood events. Then we used the three historical floods from the 18th century combined with the whole record and with subsets of the 123 years of systematic observations. In the latter case several challenges were identified: (i) the possibility of transferring water levels to river streamflows, owing to man-made changes in the river profile; (ii) the stationarity of the data might be questioned, since the three largest historical floods occurred during the "Little Ice Age" under different climatic conditions than today.
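    The second way of including historical information, where only the number of threshold exceedances in the historical period is known, can be illustrated with a maximum likelihood variant (the study itself uses a Bayesian MCMC formulation): the systematic annual maxima contribute their density, and the historical period contributes a binomial term for the count of exceedances. The Gumbel distribution, the synthetic 123-year record, and the historical counts below are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

# synthetic systematic record of 123 annual maxima (m3/s)
sys_max = gumbel_r.rvs(loc=500, scale=120, size=123, random_state=3)
# historical info: 3 floods exceeded a perception threshold of 1100 in 150 years
h_years, h_exceed, h_thresh = 150, 3, 1100

def neg_loglik(theta):
    loc, scale = theta
    if scale <= 0:
        return np.inf
    ll = gumbel_r.logpdf(sys_max, loc, scale).sum()
    p = gumbel_r.sf(h_thresh, loc, scale)  # annual exceedance probability
    # binomial term for the historical period (constant factor omitted)
    ll += h_exceed * np.log(p) + (h_years - h_exceed) * np.log1p(-p)
    return -ll

res = minimize(neg_loglik, x0=[np.mean(sys_max), np.std(sys_max)],
               method="Nelder-Mead")
loc, scale = res.x
q1000 = gumbel_r.ppf(1 - 1 / 1000, loc, scale)  # 1000-year flood estimate
```

    A Bayesian version would place priors on `loc` and `scale` and sample this same likelihood with MCMC, which is what yields the predictive uncertainty of the flood quantiles.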

  5. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.

  6. Air pollution and daily mortality in Erfurt, east Germany, 1980-1989.

    PubMed Central

    Spix, C; Heinrich, J; Dockery, D; Schwartz, J; Völksch, G; Schwinkowski, K; Cöllen, C; Wichmann, H E

    1993-01-01

    In Erfurt, Germany, unfavorable geography and emissions from coal burning lead to very high ambient pollution (up to about 4000 micrograms/m3 SO2 in 1980-89). To assess possible health effects of these exposures, total daily mortality was obtained for this same period. A multivariate model was fitted, including corrections for long-term fluctuations, influenza epidemics, and meteorology, before analyzing the effect of pollution. The best fit for pollution was obtained for log (SO2 daily mean) with a lag of 2 days. Daily mortality increased by 10% for an increase in SO2 from 23 to 929 micrograms/m3 (5% quantile to 95% quantile). A harvesting effect (fewer people die on a given day if more deaths occurred in the last 15 days) may modify this by +/- 2%. The effect for particulates (SP, 1988-89 only) was stronger than the effect of SO2. Log SP (daily mean) increasing from 15 micrograms/m3 to 331 micrograms/m3 (5% quantile to 95% quantile) was associated with a 22% increase in mortality. Depending on harvesting, the observable effect may lie between 14% and 27%. There is no indication of a threshold or synergism. The effects of air pollution are smaller than the effects of influenza epidemics and are of the same size as meteorologic effects. The results for the lower end of the dose range are in agreement with linear models fitted in studies of moderate air pollution and episode studies. PMID:8137781

  7. Air pollution and daily mortality in Erfurt, east Germany, 1980-1989.

    PubMed

    Spix, C; Heinrich, J; Dockery, D; Schwartz, J; Völksch, G; Schwinkowski, K; Cöllen, C; Wichmann, H E

    1993-11-01

    In Erfurt, Germany, unfavorable geography and emissions from coal burning lead to very high ambient pollution (up to about 4000 micrograms/m3 SO2 in 1980-89). To assess possible health effects of these exposures, total daily mortality was obtained for this same period. A multivariate model was fitted, including corrections for long-term fluctuations, influenza epidemics, and meteorology, before analyzing the effect of pollution. The best fit for pollution was obtained for log (SO2 daily mean) with a lag of 2 days. Daily mortality increased by 10% for an increase in SO2 from 23 to 929 micrograms/m3 (5% quantile to 95% quantile). A harvesting effect (fewer people die on a given day if more deaths occurred in the last 15 days) may modify this by +/- 2%. The effect for particulates (SP, 1988-89 only) was stronger than the effect of SO2. Log SP (daily mean) increasing from 15 micrograms/m3 to 331 micrograms/m3 (5% quantile to 95% quantile) was associated with a 22% increase in mortality. Depending on harvesting, the observable effect may lie between 14% and 27%. There is no indication of a threshold or synergism. The effects of air pollution are smaller than the effects of influenza epidemics and are of the same size as meteorologic effects. The results for the lower end of the dose range are in agreement with linear models fitted in studies of moderate air pollution and episode studies.

  8. Long Term Discharge Estimation for Ogoué River Basin

    NASA Astrophysics Data System (ADS)

    Seyler, F.; Linguet, L.; Calmant, S.

    2014-12-01

    The Ogoué river basin is one of the last preserved tropical rain forest basins in the world. The river basin covers about 75% of Gabon. A study of a wall-to-wall forest cover map using Landsat images (Fichet et al., 2014) found a net forest loss of 0.38% between 1990 and 2000 and essentially the same loss rate between 2000 and 2010. However, the country recently launched an ambitious development plan, with communication infrastructure, agriculture and forestry as well as mining projects. A hydrological cycle response to these changes may be expected, in both quantitative and qualitative terms. Unfortunately, the monitoring gauging stations stopped functioning in the seventies, so Gabon is unable to evaluate, mitigate and adapt adequately to these environmental challenges. Historical data were recorded during 42 years at Lambaréné (from 1929 to 1974) and during 10 to 20 years at 17 other ground stations. The quantile function approach (Tourian et al., 2013) has been tested to estimate discharge from J2 and ERS/Envisat/AltiKa virtual stations. This is an opportunity to assess long-term discharge patterns in order to monitor land-use change effects and eventual disturbances in runoff. Figure 1: Ogoué River basin: J2 (red) and ERS/ENVISAT/ALTIKa (purple) virtual stations. Fichet, L. V., Sannier, C., Massard Makaga, E. K., Seyler, F. (2013). Assessing the accuracy of forest cover maps for 1990, 2000 and 2010 at national scale in Gabon. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, in press. Tourian, M. J., Sneeuw, N., & Bárdossy, A. (2013). A quantile function approach to discharge estimation from satellite altimetry (ENVISAT). Water Resources Research, 49(7), 4174-4186. doi:10.1002/wrcr.20348
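    A minimal sketch of the quantile function approach of Tourian et al. (2013), assuming a monotone (and here invented) stage-discharge behaviour: each altimetric water level is mapped to its percentile in the stage record, and discharge is read off the historical discharge quantile function at the same percentile. The two records need not overlap in time, which is what makes the method attractive when gauging stopped decades ago.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical gauged discharge record (m3/s) from the historical period
hist_q = rng.lognormal(8.0, 0.4, 4000)

def stage(q):
    """Hypothetical monotone stage-discharge behaviour at the virtual station."""
    return 2.0 + 0.5 * (q / 1000.0) ** 0.6

# satellite altimetry: water levels observed for an unknown discharge series
q_sat = rng.lognormal(8.0, 0.4, 300)
sat_level = stage(q_sat) + rng.normal(0, 0.02, 300)  # small altimetric noise

probs = np.linspace(0.01, 0.99, 99)
lvl_quant = np.quantile(sat_level, probs)  # quantile function of altimetric stage
dis_quant = np.quantile(hist_q, probs)     # quantile function of discharge

def discharge_from_level(level):
    p = np.interp(level, lvl_quant, probs)  # percentile of the observed stage
    return np.interp(p, probs, dis_quant)   # discharge at the same percentile

est = discharge_from_level(sat_level)
```

    The key assumption is that the distribution of discharge has not changed between the gauged and the altimetric periods; monitoring departures from that assumption is exactly the land-use-change question raised in the abstract.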

  9. Relationships between lead biomarkers and diurnal salivary cortisol indices in pregnant women from Mexico City: a cross-sectional study

    PubMed Central

    2014-01-01

    Background Lead (Pb) exposure during pregnancy may increase the risk of adverse maternal, infant, or childhood health outcomes by interfering with hypothalamic-pituitary-adrenal (HPA) axis function. We examined relationships between maternal blood or bone Pb concentrations and features of diurnal cortisol profiles in 936 pregnant women from Mexico City. Methods From 2007 to 2011 we recruited women from hospitals/clinics affiliated with the Mexican Social Security System. Pb was measured in blood (BPb) during the second trimester and in mothers’ tibia and patella 1 month postpartum. We characterized maternal HPA-axis function using 10 timed salivary cortisol measurements collected over 2 days (mean: 19.7, range: 14–35 weeks gestation). We used linear mixed models to examine the relationship between Pb biomarkers and cortisol area under the curve (AUC), awakening response (CAR), and diurnal slope. Results After adjustment for confounders, women in the highest quintile of BPb concentrations had a reduced CAR (Ratio: −13%; Confidence Interval [CI]: −24, 1; p-value for trend < 0.05) compared to women in the lowest quintile. Tibia/patella Pb concentrations were not associated with CAR, but diurnal cortisol slopes were suggestively flatter among women in the highest patella Pb quantile compared to women in the lowest quantile (Ratio: 14%; CI: −2, 33). BPb and bone Pb concentrations were not associated with cortisol AUC. Conclusions Concurrent blood Pb levels were associated with the cortisol awakening response in these pregnant women, which might help explain adverse health outcomes associated with Pb. Further research is needed to confirm these results and determine whether other environmental chemicals disrupt HPA-axis function during pregnancy. PMID:24916609

  10. Improving medium-range ensemble streamflow forecasts through statistical post-processing

    NASA Astrophysics Data System (ADS)

    Mendoza, Pablo; Wood, Andy; Clark, Elizabeth; Nijssen, Bart; Clark, Martyn; Ramos, Maria-Helena; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    Probabilistic hydrologic forecasts are a powerful source of information for decision-making in water resources operations. A common approach is the hydrologic model-based generation of streamflow forecast ensembles, which can be implemented to account for different sources of uncertainties - e.g., from initial hydrologic conditions (IHCs), weather forecasts, and hydrologic model structure and parameters. In practice, hydrologic ensemble forecasts typically have biases and spread errors stemming from errors in the aforementioned elements, resulting in a degradation of probabilistic properties. In this work, we compare several statistical post-processing techniques applied to medium-range ensemble streamflow forecasts obtained with the System for Hydromet Applications, Research and Prediction (SHARP). SHARP is a fully automated prediction system for the assessment and demonstration of short-term to seasonal streamflow forecasting applications, developed by the National Center for Atmospheric Research, University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. The suite of post-processing techniques includes linear blending, quantile mapping, extended logistic regression, quantile regression, ensemble analogs, and the generalized linear model post-processor (GLMPP). We assess and compare these techniques using multi-year hindcasts in several river basins in the western US. This presentation discusses preliminary findings about the effectiveness of the techniques for improving probabilistic skill, reliability, discrimination, sharpness and resolution.

  11. Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments

    USGS Publications Warehouse

    Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.

    2004-01-01

    The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] performs as well as maximum likelihood estimators at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.

  12. Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments

    NASA Astrophysics Data System (ADS)

    Griffis, V. W.; Stedinger, J. R.; Cohn, T. A.

    2004-07-01

    The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] performs as well as maximum likelihood estimators at estimating log-Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.

  13. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with nonparametric local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model proves superior to the EMD and Holt-Winters methods in predicting stock closing prices.

  14. Analysis and trends of precipitation lapse rate and extreme indices over north Sikkim eastern Himalayas under CMIP5ESM-2M RCPs experiments

    NASA Astrophysics Data System (ADS)

    Singh, Vishal; Goyal, Manish Kumar

    2016-01-01

    This paper highlights the spatial and temporal variability of the precipitation lapse rate (PLR) and precipitation extreme indices (PEIs) through a mesoscale characterization of the Teesta river catchment in the north Sikkim eastern Himalayas. The PLR is an important variable for snowmelt runoff models. In a mountainous region, the PLR can vary from the lower to the higher elevation parts; here it was computed by accounting for elevation differences ranging from around 1500 m to 7000 m. Precipitation variability and extremes were analysed using multiple methods, viz. quantile regression, spatial mean, spatial standard deviation, the Mann-Kendall test and Sen's slope estimator. Daily precipitation from observed gridded data for the historical period (1980-2005) and from projections for the 21st century (2006-2100) simulated by the CMIP5 ESM-2M model (Coupled Model Intercomparison Project Phase 5 Earth System Model 2M) under three different radiative forcing scenarios (Representative Concentration Pathways, RCPs) was used. The outcomes suggest that the PLR varies significantly from the lower to the higher elevation parts. The PEI-based analysis showed that extreme high-intensity events increase significantly, especially after the 2040s, and that the number of wet days increases under all RCPs. The quantile regression plots showed significant increments in the upper and lower quantiles of the various extreme indices. The Mann-Kendall test and Sen's estimator clearly indicated significant changes in the frequency and intensity of the precipitation indices across all sub-basins and RCP scenarios on an intra-decadal time scale. RCP8.5 showed the most extreme projected outcomes.
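    The Mann-Kendall test statistic and Sen's slope estimator used above can be sketched as follows; the no-ties variance formula is used, and the trending synthetic series is purely illustrative.

```python
import numpy as np

def mann_kendall_sen(x):
    """Mann-Kendall S statistic, normal-approximation z, and Sen's slope."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(n)
    s = 0.0
    slopes = []
    for i in range(n - 1):
        diff = x[i + 1:] - x[i]
        s += np.sign(diff).sum()            # count concordant minus discordant pairs
        slopes.extend(diff / (t[i + 1:] - t[i]))  # all pairwise slopes
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s)         # continuity-corrected z score
    return s, z, np.median(slopes)                # Sen's slope = median pairwise slope

rng = np.random.default_rng(9)
series = 50 + 0.8 * np.arange(60) + rng.normal(0, 5, 60)  # synthetic upward trend
s, z, sen = mann_kendall_sen(series)
```

    The test is rank-based, so it detects monotone trends without assuming normal residuals, while Sen's slope gives a robust estimate of the trend magnitude.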

  15. Identifying the Safety Factors over Traffic Signs in State Roads using a Panel Quantile Regression Approach.

    PubMed

    Šarić, Željko; Xu, Xuecai; Duan, Li; Babić, Darko

    2018-06-20

    This study investigated the interactions between accident rate and traffic signs on state roads in Croatia, accommodating heterogeneity attributed to unobserved factors. Data from 130 state roads between 2012 and 2016 were collected from the Traffic Accident Database System maintained by the Republic of Croatia Ministry of the Interior. To address the heterogeneity, a panel quantile regression model was proposed: the quantile regression component offers a more complete view and a highly comprehensive analysis of the relationship between accident rate and traffic signs, while the panel data component accommodates heterogeneity attributed to unobserved factors. Results revealed that (1) low visibility increased the accident rate for both material damage (MD) and death or injury (DI) accidents; (2) the number of mandatory signs and the number of warning signs were more likely to reduce the accident rate; and (3) average speed limit and the number of invalid traffic signs per km were associated with a high accident rate. To our knowledge, this is the first attempt to analyze the interactions between accident consequences and traffic signs with a panel quantile regression model. By involving visibility, the study demonstrates that low visibility carries a relatively higher risk of MD and DI accidents; it is noteworthy that average speed limit is positively associated with accident rate, while the numbers of mandatory and warning signs are more likely to reduce it. The number of invalid traffic signs per km is significant for accident rate, so regular maintenance should be performed for a safer roadway environment.

  16. Use of Flood Seasonality in Pooling-Group Formation and Quantile Estimation: An Application in Great Britain

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Bell, Victoria; Stewart, Elizabeth

    2018-02-01

    Regional flood frequency analysis is one of the most commonly applied methods for estimating extreme flood events at ungauged sites or locations with short measurement records. It is based on: (i) the definition of a homogeneous group (pooling-group) of catchments, and on (ii) the use of the pooling-group data to estimate flood quantiles. Although many methods to define a pooling-group (pooling schemes, PS) are based on catchment physiographic similarity measures, in the last decade methods based on flood seasonality similarity have been contemplated. In this paper, two seasonality-based PS are proposed and tested both in terms of the homogeneity of the pooling-groups they generate and in terms of the accuracy in estimating extreme flood events. The method has been applied in 420 catchments in Great Britain (considered as both gauged and ungauged) and compared against the current Flood Estimation Handbook (FEH) PS. Results for gauged sites show that, compared to the current PS, the seasonality-based PS performs better both in terms of homogeneity of the pooling-group and in terms of the accuracy of flood quantile estimates. For ungauged locations, a national-scale hydrological model has been used for the first time to quantify flood seasonality. Results show that in 75% of the tested locations the seasonality-based PS provides an improvement in the accuracy of the flood quantile estimates. The remaining 25% were located in highly urbanized, groundwater-dependent catchments. The promising results support the aspiration that large-scale hydrological models complement traditional methods for estimating design floods.

  17. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: first, that the facility is in a top quantile (e.g., quintile or quartile); and second, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. We illustrate an alternative continuous-valued metric for profiling facilities, the probability that a facility is in a top quantile, and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (e.g., a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (e.g., being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
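    The probability metric can be computed directly from posterior simulation replications: within each replication, flag which facilities fall in the top quantile, then average the flags per facility. The sketch below uses synthetic normal draws in place of the WinBUGS output; the facility count matches the 112 nursing homes, but everything else is invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n_fac, n_draws = 112, 4000

# hypothetical posterior draws of each facility's composite quality score
true_quality = rng.normal(0.0, 1.0, n_fac)
draws = true_quality + rng.normal(0.0, 0.6, (n_draws, n_fac))

# flag, within each simulation replication, the facilities in the top quintile
cut = np.quantile(draws, 0.8, axis=1)[:, None]
in_top = draws >= cut

# the probability metric: Pr(facility is in the top 20%), one value per facility
p_top = in_top.mean(axis=0)
```

    Unlike a yes/no top-quintile designation, `p_top` grades facilities continuously, so a facility that lands in the top quintile in 55% of replications is distinguished from one that does so in 99%.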

  18. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    NASA Astrophysics Data System (ADS)

    Mehran, A.; AghaKouchak, A.; Phillips, T. J.

    2014-02-01

    The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies and biases for both entire distributions and their upper tails. The results of the volumetric hit index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas but that their replication of observed precipitation over arid regions and certain subcontinental regions (e.g., northern Eurasia, eastern Russia, and central Australia) is problematic. Overall, the VHI of the multimodel ensemble mean and median are also superior to that of the individual CMIP5 models. However, at high quantiles of reference data (75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and Central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g., western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, intermodel variations in bias over Australia and Amazonia are considerable. The quantile bias analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. It is found that a simple mean field bias removal improves the overall B and VHI values but does not make a significant improvement at high quantiles of precipitation.

  19. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
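    The package's exact test for over-representation of short distances can be approximated by Monte Carlo simulation; the sketch below (in Python rather than R, with invented sequence lengths) compares the observed number of short gaps between successes against gap counts from uniformly random placements.

```python
import numpy as np

rng = np.random.default_rng(6)

def short_gaps(pos, d):
    """Number of gaps between ordered success positions that are <= d."""
    return int((np.diff(np.sort(pos)) <= d).sum())

n, k, d = 1000, 20, 5                 # trials, successes, "short" gap cutoff
clustered = rng.choice(100, size=k, replace=False)  # successes crowded together
obs = short_gaps(clustered, d)

# null distribution: k successes placed uniformly at random among n trials
null = np.array([short_gaps(rng.choice(n, size=k, replace=False), d)
                 for _ in range(2000)])
p_value = (null >= obs).mean()        # Monte Carlo p-value for clustering
```

    The R package replaces this simulation with the exact probability mass function of the gap distribution, which matters precisely in the small-sample settings where normal approximations break down.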

  20. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
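    The full procedure (GPD fitting, Anderson-Darling p-values, bootstrapping) is beyond a short example, but its core loop can be sketched with an exponential tail model (a GPD with zero shape) standing in for the general fit; `select_threshold` is a hypothetical name, and the selection rule is simplified to "smallest A² statistic" rather than the paper's p-value criterion:

```python
import math

def ad_statistic(excesses, cdf):
    """Anderson-Darling statistic of a sample against a fitted CDF."""
    x = sorted(excesses)
    n = len(x)
    s = 0.0
    for i, xi in enumerate(x, start=1):
        u = min(max(cdf(xi), 1e-12), 1 - 1e-12)
        v = min(max(cdf(x[n - i]), 1e-12), 1 - 1e-12)
        s += (2 * i - 1) * (math.log(u) + math.log(1 - v))
    return -n - s / n

def select_threshold(data, candidates):
    """Return the candidate threshold whose exponential fit to the
    excesses gives the smallest A-D statistic (simplified rule)."""
    best = None
    for u in candidates:
        exc = [x - u for x in data if x > u]
        if len(exc) < 10:           # too few excesses to fit
            continue
        scale = sum(exc) / len(exc)  # exponential MLE
        a2 = ad_statistic(exc, lambda t: 1 - math.exp(-t / scale))
        if best is None or a2 < best[0]:
            best = (a2, u)
    return best[1]
```

    In the paper's methodology the exponential fit would be replaced by a full GPD fit, the statistic by its goodness-of-fit p-value, and the whole loop wrapped in a bootstrap to quantify threshold uncertainty.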

  1. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the question of how to optimally combine this wealth of information. In particular, the use of deterministic and probabilistic forecasts with sometimes widely divergent predictions of future streamflow makes it even more difficult for decision makers to sift out the relevant information. In this study, multiple sources of streamflow forecast information are aggregated based on several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR) technique, also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) are applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, are compared and the differences between the raw and optimally combined forecasts are highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
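    The quantile score and the pooling idea can be sketched in a few lines (a minimal illustration; the actual BLP method additionally applies a beta transform to the pooled CDF, which is omitted here):

```python
def pinball(tau, q, y):
    """Quantile (pinball) score of a quantile forecast q at level tau,
    given observation y; lower is better."""
    return (tau - (1 if y < q else 0)) * (y - q)

def linear_pool(cdfs, weights, x):
    """Plain linear pooling: weighted average of member predictive CDFs at x."""
    return sum(w * F(x) for w, F in zip(weights, cdfs))
```

    Averaging the quantile score over many forecast cases gives the QS used to rank the combination methods; the CRPS is its integral over all levels tau.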

  2. Nonparametric Fine Tuning of Mixtures: Application to Non-Life Insurance Claims Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Sardet, Laure; Patilea, Valentin

    When pricing a specific insurance premium, the actuary needs to evaluate the claims cost distribution for the warranty. Traditional actuarial methods use parametric specifications to model the claims distribution, such as the lognormal, Weibull, and Pareto laws. Mixtures of such distributions improve the flexibility of the parametric approach and seem well adapted to capturing the skewness, the long tails, and the unobserved heterogeneity among the claims. In this paper, instead of looking for a finely tuned mixture with many components, we choose a parsimonious mixture model, typically with two or three components. Next, we use the mixture cumulative distribution function (CDF) to transform the data into the unit interval, where we apply a beta-kernel smoothing procedure. A bandwidth rule adapted to our methodology is proposed. Finally, the beta-kernel density estimate is back-transformed to recover an estimate of the original claims density. The beta-kernel smoothing provides an automatic fine-tuning of the parsimonious mixture and thus avoids inference in more complex mixture models with many parameters. We investigate the empirical performance of the new method in the estimation of quantiles with simulated nonnegative data and of the quantiles of the individual claims distribution in a non-life insurance application.

  3. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    PubMed

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.
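    The scenario-generation step described above (fit an ECDF to the quantile points, then sample from it by inverse transform) can be sketched as follows; `sample_from_quantiles` is a hypothetical helper that interpolates linearly between the known quantile levels and does not extrapolate into the tails:

```python
import bisect
import random

def sample_from_quantiles(levels, values, n, seed=0):
    """Inverse-transform sampling from a piecewise-linear interpolation of
    (probability level, quantile value) points, e.g. wind-power quantiles."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.uniform(levels[0], levels[-1])  # stay within known levels
        j = bisect.bisect_left(levels, u)
        if j == 0:
            out.append(values[0])
            continue
        t = (u - levels[j - 1]) / (levels[j] - levels[j - 1])
        out.append(values[j - 1] + t * (values[j] - values[j - 1]))
    return out
```

    Each call produces one set of scenarios for a given hour; repeating it per hour yields the scenario paths fed into the stochastic SCUC model.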

  4. Using Quantile and Asymmetric Least Squares Regression for Optimal Risk Adjustment.

    PubMed

    Lorenz, Normann

    2017-06-01

    In this paper, we analyze optimal risk adjustment for direct risk selection (DRS). Integrating insurers' activities for risk selection into a discrete choice model of individuals' health insurance choice shows that DRS has the structure of a contest. For the contest success function (csf) used in most of the contest literature (the Tullock-csf), optimal transfers for a risk adjustment scheme have to be determined by means of a restricted quantile regression, irrespective of whether insurers are primarily engaged in positive DRS (attracting low risks) or negative DRS (repelling high risks). This is at odds with the common practice of determining transfers by means of a least squares regression. However, this common practice can be rationalized for a new csf, but only if positive and negative DRS are equally important; if they are not, optimal transfers have to be calculated by means of a restricted asymmetric least squares regression. Using data from German and Swiss health insurers, we find considerable differences between the three types of regressions. Optimal transfers therefore critically depend on which csf represents insurers' incentives for DRS and, if it is not the Tullock-csf, whether insurers are primarily engaged in positive or negative DRS. Copyright © 2016 John Wiley & Sons, Ltd.
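    On the mechanics: an asymmetric least squares (expectile) fit weights positive and negative residuals differently, just as quantile regression weights them in the pinball loss. A minimal intercept-only sketch (the hypothetical `expectile` helper below; the paper's restricted regressions with covariates are considerably more involved):

```python
def expectile(y, tau, iters=100):
    """The tau-expectile of a sample: the scalar m minimizing the
    asymmetric squared loss sum_i |tau - 1{y_i <= m}| * (y_i - m)**2.
    It satisfies a weighted-mean fixed point, solved here by iteration."""
    m = sum(y) / len(y)
    for _ in range(iters):
        w = [tau if yi > m else 1 - tau for yi in y]
        m = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return m
```

    With tau = 0.5 the weights are symmetric and the expectile is the ordinary mean, i.e. the intercept of a least squares fit.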

  5. Brief Intervention Decreases Drinking Frequency in HIV-Infected, Heavy Drinking Women: Results of a Randomized Controlled Trial

    PubMed Central

    Chander, Geetanjali; Hutton, Heidi E.; Lau, Bryan; Xu, Xiaoqiang; McCaul, Mary E.

    2015-01-01

    Objective Hazardous alcohol use by HIV-infected women is associated with poor HIV outcomes and HIV transmission risk behaviors. We examined the effectiveness of a brief alcohol intervention (BI) among hazardous drinking women receiving care in an urban HIV clinic. Methods Women were randomized to a 2-session BI or usual care. Outcomes assessed at baseline, 3, 6 and 12 months included 90-day frequency of any alcohol use and heavy/binge drinking (≥4 drinks per occasion), and average drinks per drinking episode. Secondary outcomes included HIV medication and appointment adherence, HIV1-RNA suppression, and days of unprotected vaginal sex. We examined intervention effectiveness using generalized mixed effect models and quantile regression. Results Of 148 eligible women, 74 were randomized to each arm. In mixed effects models, 90-day drinking frequency decreased in the intervention group compared to the control group, with women in the intervention condition less likely to have a drinking day (OR: 0.42; 95% CI: 0.23–0.75). Heavy/binge drinking days and drinks per drinking day did not differ significantly between groups. Quantile regression demonstrated a decrease in drinking frequency in the middle to upper ranges of the distribution of drinking days and heavy/binge drinking days that differed significantly between intervention and control conditions. At follow-up, the intervention group had significantly fewer episodes of unprotected vaginal sex. No intervention effects were observed for other outcomes. Conclusions Brief alcohol intervention reduces the frequency of alcohol use and unprotected vaginal sex among HIV-infected women. More intensive services may be needed to lower drinks per drinking day and enhance care for more severely affected drinkers. PMID:25967270

  6. Prenatal Lead Exposure and Fetal Growth: Smaller Infants Have Heightened Susceptibility

    PubMed Central

    Rodosthenous, Rodosthenis S.; Burris, Heather H.; Svensson, Katherine; Amarasiriwardena, Chitra J.; Cantoral, Alejandra; Schnaas, Lourdes; Mercado-García, Adriana; Coull, Brent A.; Wright, Robert O.; Téllez-Rojo, Martha M.; Baccarelli, Andrea A.

    2016-01-01

    Background As population lead levels decrease, the toxic effects of lead may be distributed to more sensitive populations, such as infants with poor fetal growth. Objectives To determine the association of prenatal lead exposure and fetal growth; and to evaluate whether infants with poor fetal growth are more susceptible to lead toxicity than those with normal fetal growth. Methods We examined the association of second trimester maternal blood lead levels (BLL) with birthweight-for-gestational age (BWGA) z-score in 944 mother-infant participants of the PROGRESS cohort. We determined the association between maternal BLL and BWGA z-score by using both linear and quantile regression. We estimated odds ratios for small-for-gestational age (SGA) infants between maternal BLL quartiles using logistic regression. Maternal age, body mass index, socioeconomic status, parity, household smoking exposure, hemoglobin levels, and infant sex were included as confounders. Results While linear regression showed a negative association between maternal BLL and BWGA z-score (β=−0.06 z-score units per log2 BLL increase; 95% CI: −0.13, 0.003; P=0.06), quantile regression revealed larger magnitudes of this association in the <30th percentiles of BWGA z-score (β range [−0.08, −0.13] z-score units per log2 BLL increase; all P values <0.05). Mothers in the highest BLL quartile had an odds ratio of 1.62 (95% CI: 0.99–2.65) for having a SGA infant compared to the lowest BLL quartile. Conclusions While both linear and quantile regression showed a negative association between prenatal lead exposure and birthweight, quantile regression revealed that smaller infants may represent a more susceptible subpopulation. PMID:27923585

  7. Diagnostic Imaging Services in Magnet and Non-Magnet Hospitals: Trends in Utilization and Costs.

    PubMed

    Jayawardhana, Jayani; Welton, John M

    2015-12-01

    The purpose of this study was to better understand trends in utilization and costs of diagnostic imaging services at Magnet hospitals (MHs) and non-Magnet hospitals (NMHs). A data set was created by merging hospital-level data from the American Hospital Association's annual survey and Medicare cost reports, individual-level inpatient data from the Healthcare Cost and Utilization Project, and Magnet recognition status data from the American Nurses Credentialing Center. A descriptive analysis was conducted to evaluate the trends in utilization and costs of CT, MRI, and ultrasound procedures among MHs and NMHs in urban locations between 2000 and 2006 from the following ten states: Arizona, California, Colorado, Florida, Iowa, Maryland, North Carolina, New Jersey, New York, and Washington. When matched by bed size, severity of illness (case mix index), and clinical technological sophistication (Saidin index) quantiles, MHs in higher quantiles indicated higher rates of utilization of imaging services for MRI, CT, and ultrasound in comparison with NMHs in the same quantiles. However, average costs of MRI, CT, and ultrasounds were lower at MHs in comparison with NMHs in the same quantiles. Overall, MHs that are larger in size (number of beds), serve more severely ill patients (case mix index), and are more technologically sophisticated (Saidin index) show higher utilization of diagnostic imaging services, although costs per procedure at MHs are lower in comparison with similar NMHs, indicating possible cost efficiency at MHs. Further research is necessary to understand the relationship between the utilization of diagnostic imaging services among MHs and its impact on patient outcomes. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  8. Topological and canonical kriging for design flood prediction in ungauged catchments: an improvement over a traditional regional regression approach?

    USGS Publications Warehouse

    Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.

    2013-01-01

    In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. The literature reports successful applications of two techniques: canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks, taking both the catchment area and the nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimate flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.

  9. Removing Batch Effects from Longitudinal Gene Expression - Quantile Normalization Plus ComBat as Best Approach for Microarray Transcriptome Data

    PubMed Central

    Müller, Christian; Schillert, Arne; Röthemeier, Caroline; Trégouët, David-Alexandre; Proust, Carole; Binder, Harald; Pfeiffer, Norbert; Beutel, Manfred; Lackner, Karl J.; Schnabel, Renate B.; Tiret, Laurence; Wild, Philipp S.; Blankenberg, Stefan

    2016-01-01

    Technical variation plays an important role in microarray-based gene expression studies, and batch effects explain a large proportion of this noise. It is therefore mandatory to eliminate technical variation while maintaining biological variability. Several strategies have been proposed for the removal of batch effects, although they have not been evaluated in large-scale longitudinal gene expression data. In this study, we aimed at identifying a suitable method for batch effect removal in a large study of microarray-based longitudinal gene expression. Monocytic gene expression was measured in 1092 participants of the Gutenberg Health Study at baseline and 5-year follow up. Replicates of selected samples were measured at both time points to identify technical variability. Deming regression, Passing-Bablok regression, linear mixed models, non-linear models as well as ReplicateRUV and ComBat were applied to eliminate batch effects between replicates. In a second step, quantile normalization prior to batch effect correction was performed for each method. Technical variation between batches was evaluated by principal component analysis. Associations between body mass index and transcriptomes were calculated before and after batch removal. Results from association analyses were compared to evaluate maintenance of biological variability. Quantile normalization, separately performed in each batch, combined with ComBat successfully reduced batch effects and maintained biological variability. ReplicateRUV performed perfectly in the replicate data subset of the study, but failed when applied to all samples. All other methods did not substantially reduce batch effects in the replicate data subset. Quantile normalization plus ComBat appears to be a valuable approach for batch correction in longitudinal gene expression data. PMID:27272489
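    The quantile normalization step referred to here (applied separately within each batch before ComBat) follows the standard rank-based recipe, sketched below under the simplifying assumption of no tied values within a sample:

```python
def quantile_normalize(cols):
    """Quantile-normalize samples (one list per sample): replace each
    sample's i-th smallest value by the across-sample mean of all i-th
    smallest values, forcing identical marginal distributions.
    Assumes no ties within a sample."""
    n = len(cols[0])
    sorted_cols = [sorted(c) for c in cols]
    ref = [sum(sc[i] for sc in sorted_cols) / len(cols) for i in range(n)]
    out = []
    for c in cols:
        rank = {v: i for i, v in enumerate(sorted(c))}
        out.append([ref[rank[v]] for v in c])
    return out
```

    After normalization, every sample has exactly the same sorted values (the reference distribution) while each gene keeps its within-sample rank.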

  10. Use of Quantile Regression to Determine the Impact on Total Health Care Costs of Surgical Site Infections Following Common Ambulatory Procedures.

    PubMed

    Olsen, Margaret A; Tian, Fang; Wallace, Anna E; Nickel, Katelin B; Warren, David K; Fraser, Victoria J; Selvam, Nandini; Hamilton, Barton H

    2017-02-01

    To determine the impact of surgical site infections (SSIs) on health care costs following common ambulatory surgical procedures throughout the cost distribution. Data on costs of SSIs following ambulatory surgery are sparse, particularly variation beyond just mean costs. We performed a retrospective cohort study of persons undergoing cholecystectomy, breast-conserving surgery, anterior cruciate ligament reconstruction, and hernia repair from December 31, 2004 to December 31, 2010 using commercial insurer claims data. SSIs within 90 days post-procedure were identified; infections during a hospitalization or requiring surgery were considered serious. We used quantile regression, controlling for patient, operative, and postoperative factors to examine the impact of SSIs on 180-day health care costs throughout the cost distribution. The incidence of serious and nonserious SSIs was 0.8% and 0.2%, respectively, after 21,062 anterior cruciate ligament reconstruction, 0.5% and 0.3% after 57,750 cholecystectomy, 0.6% and 0.5% after 60,681 hernia, and 0.8% and 0.8% after 42,489 breast-conserving surgery procedures. Serious SSIs were associated with significantly higher costs than nonserious SSIs for all 4 procedures throughout the cost distribution. The attributable cost of serious SSIs increased for both cholecystectomy and hernia repair as the quantile of total costs increased ($38,410 for cholecystectomy with serious SSI vs no SSI at the 70th percentile of costs, up to $89,371 at the 90th percentile). SSIs, particularly serious infections resulting in hospitalization or surgical treatment, were associated with significantly increased health care costs after 4 common surgical procedures. Quantile regression illustrated the differential effect of serious SSIs on health care costs at the upper end of the cost distribution.

  11. Prenatal lead exposure and fetal growth: Smaller infants have heightened susceptibility.

    PubMed

    Rodosthenous, Rodosthenis S; Burris, Heather H; Svensson, Katherine; Amarasiriwardena, Chitra J; Cantoral, Alejandra; Schnaas, Lourdes; Mercado-García, Adriana; Coull, Brent A; Wright, Robert O; Téllez-Rojo, Martha M; Baccarelli, Andrea A

    2017-02-01

    As population lead levels decrease, the toxic effects of lead may be distributed to more sensitive populations, such as infants with poor fetal growth. To determine the association of prenatal lead exposure and fetal growth; and to evaluate whether infants with poor fetal growth are more susceptible to lead toxicity than those with normal fetal growth. We examined the association of second trimester maternal blood lead levels (BLL) with birthweight-for-gestational age (BWGA) z-score in 944 mother-infant participants of the PROGRESS cohort. We determined the association between maternal BLL and BWGA z-score by using both linear and quantile regression. We estimated odds ratios for small-for-gestational age (SGA) infants between maternal BLL quartiles using logistic regression. Maternal age, body mass index, socioeconomic status, parity, household smoking exposure, hemoglobin levels, and infant sex were included as confounders. While linear regression showed a negative association between maternal BLL and BWGA z-score (β=-0.06 z-score units per log2 BLL increase; 95% CI: -0.13, 0.003; P=0.06), quantile regression revealed larger magnitudes of this association in the <30th percentiles of BWGA z-score (β range [-0.08, -0.13] z-score units per log2 BLL increase; all P values <0.05). Mothers in the highest BLL quartile had an odds ratio of 1.62 (95% CI: 0.99-2.65) for having a SGA infant compared to the lowest BLL quartile. While both linear and quantile regression showed a negative association between prenatal lead exposure and birthweight, quantile regression revealed that smaller infants may represent a more susceptible subpopulation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    DOE PAGES

    Mehran, Ali; AghaKouchak, Amir; Phillips, Thomas J.

    2014-02-25

    Numerous studies have emphasized that climate simulations are subject to various biases and uncertainties. The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies and biases for both entire data distributions and their upper tails. The results of the Volumetric Hit Index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas, but that their replication of observed precipitation over arid regions and certain sub-continental regions (e.g., northern Eurasia, eastern Russia, central Australia) is problematical. Overall, the VHI of the multi-model ensemble mean and median also are superior to that of the individual CMIP5 models. However, at high quantiles of reference data (e.g., the 75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g. western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, inter-model variations in bias over Australia and Amazonia are considerable. The Quantile Bias (QB) analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. Lastly, we found that a simple mean-field bias removal improves the overall B and VHI values, but does not make a significant improvement in these model performance metrics at high quantiles of precipitation.
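    As an illustration of a quantile-conditioned bias metric, a simplified form (an assumption for illustration; the paper's exact VHI and QB definitions are not reproduced here) restricts the bias ratio to months where the observations exceed a reference quantile:

```python
def quantile_bias(sim, obs, q):
    """Ratio of simulated to observed precipitation totals over the time
    steps where obs exceeds its q-th empirical quantile (simplified,
    hypothetical form of a quantile-conditioned bias; obs assumed > 0)."""
    threshold = sorted(obs)[int(q * (len(obs) - 1))]
    idx = [i for i, o in enumerate(obs) if o >= threshold]
    return sum(sim[i] for i in idx) / sum(obs[i] for i in idx)
```

    A value above 1 indicates overestimation in the upper tail, even when the all-months bias is small.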

  13. Socio-demographic, clinical characteristics and utilization of mental health care services associated with SF-6D utility scores in patients with mental disorders: contributions of the quantile regression.

    PubMed

    Prigent, Amélie; Kamendje-Tchokobou, Blaise; Chevreul, Karine

    2017-11-01

    Health-related quality of life (HRQoL) is a widely used concept in the assessment of health care. Some generic HRQoL instruments, based on specific algorithms, can generate utility scores which reflect the preferences of the general population for the different health states described by the instrument. This study aimed to investigate the relationships between utility scores and potentially associated factors in patients with mental disorders followed in inpatient and/or outpatient care settings using two statistical methods. Patients were recruited in four psychiatric sectors in France. Patient responses to the SF-36 generic HRQoL instrument were used to calculate SF-6D utility scores. The relationships between utility scores and patient socio-demographic characteristics, clinical characteristics, and mental health care utilization, considered as potentially associated factors, were studied using OLS and quantile regressions. One hundred and seventy-six patients were included. Women, severely ill patients, and those hospitalized full-time tended to report lower utility scores, whereas psychotic disorders (as opposed to mood disorders) and part-time care were associated with higher scores. The quantile regression highlighted that the size of the associations between the utility scores and some patient characteristics varied along the utility score distribution, and it provided more accurate estimates than OLS regression. Quantile regression may therefore constitute a relevant complement for the analysis of factors associated with utility scores. For policy decision-making, the association of full-time hospitalization with lower utility scores, while part-time care was associated with higher scores, supports the further development of alternatives to full-time hospitalization.

  14. Use of Quantile Regression to Determine the Impact on Total Health Care Costs of Surgical Site Infections Following Common Ambulatory Procedures

    PubMed Central

    Olsen, Margaret A.; Tian, Fang; Wallace, Anna E.; Nickel, Katelin B.; Warren, David K.; Fraser, Victoria J.; Selvam, Nandini; Hamilton, Barton H.

    2017-01-01

    Objective To determine the impact of surgical site infections (SSIs) on healthcare costs following common ambulatory surgical procedures throughout the cost distribution. Background Data on costs of SSIs following ambulatory surgery are sparse, particularly variation beyond just mean costs. Methods We performed a retrospective cohort study of persons undergoing cholecystectomy, breast-conserving surgery (BCS), anterior cruciate ligament reconstruction (ACL), and hernia repair from 12/31/2004–12/31/2010 using commercial insurer claims data. SSIs within 90 days post-procedure were identified; infections during a hospitalization or requiring surgery were considered serious. We used quantile regression, controlling for patient, operative, and postoperative factors to examine the impact of SSIs on 180-day healthcare costs throughout the cost distribution. Results The incidence of serious and non-serious SSIs was 0.8% and 0.2% after 21,062 ACL, 0.5% and 0.3% after 57,750 cholecystectomy, 0.6% and 0.5% after 60,681 hernia, and 0.8% and 0.8% after 42,489 BCS procedures. Serious SSIs were associated with significantly higher costs than non-serious SSIs for all 4 procedures throughout the cost distribution. The attributable cost of serious SSIs increased for both cholecystectomy and hernia repair as the quantile of total costs increased ($38,410 for cholecystectomy with serious SSI vs. no SSI at the 70th percentile of costs, up to $89,371 at the 90th percentile). Conclusions SSIs, particularly serious infections resulting in hospitalization or surgical treatment, were associated with significantly increased healthcare costs after 4 common surgical procedures. Quantile regression illustrated the differential effect of serious SSIs on healthcare costs at the upper end of the cost distribution. PMID:28059961

  15. Estimating normative limits of Heidelberg Retina Tomograph optic disc rim area with quantile regression.

    PubMed

    Artes, Paul H; Crabb, David P

    2010-01-01

    To investigate why the specificity of the Moorfields Regression Analysis (MRA) of the Heidelberg Retina Tomograph (HRT) varies with disc size, and to derive accurate normative limits for neuroretinal rim area to address this problem. Two datasets from healthy subjects (Manchester, UK, n = 88; Halifax, Nova Scotia, Canada, n = 75) were used to investigate the physiological relationship between the optic disc and neuroretinal rim area. Normative limits for rim area were derived by quantile regression (QR) and compared with those of the MRA (derived by linear regression). Logistic regression analyses were performed to quantify the association between disc size and positive classifications with the MRA, as well as with the QR-derived normative limits. In both datasets, the specificity of the MRA depended on optic disc size. The odds of observing a borderline or outside-normal-limits classification increased by approximately 10% for each 0.1 mm(2) increase in disc area (P < 0.1). The lower specificity of the MRA with large optic discs could be explained by the failure of linear regression to model the extremes of the rim area distribution (observations far from the mean). In comparison, the normative limits predicted by QR were larger for smaller discs (less specific, more sensitive), and smaller for larger discs, such that false-positive rates became independent of optic disc size. Normative limits derived by quantile regression appear to remove the size-dependence of specificity with the MRA. Because quantile regression does not rely on the restrictive assumptions of standard linear regression, it may be a more appropriate method for establishing normative limits in other clinical applications where the underlying distributions are nonnormal or have nonconstant variance.

  16. SMOS brightness temperature assimilation into the Community Land Model

    NASA Astrophysics Data System (ADS)

    Rains, Dominik; Han, Xujun; Lievens, Hans; Montzka, Carsten; Verhoest, Niko E. C.

    2017-11-01

    SMOS (Soil Moisture and Ocean Salinity mission) brightness temperatures at a single incident angle are assimilated into the Community Land Model (CLM) across Australia to improve soil moisture simulations. To this end, the data assimilation system DasPy is coupled to the local ensemble transform Kalman filter (LETKF) as well as to the Community Microwave Emission Model (CMEM). Brightness temperature climatologies are precomputed to enable the assimilation of brightness temperature anomalies, making use of 6 years of SMOS data (2010-2015). Mean correlation R with in situ measurements increases moderately from 0.61 to 0.68 (11 %) for upper soil layers if the root zone is included in the updates. A reduced improvement of 5 % is achieved if the assimilation is restricted to the upper soil layers. Root-zone simulations improve by 7 % when updating both the top layers and root zone, and by 4 % when only updating the top layers. Mean increments and increment standard deviations are compared for the experiments. The long-term assimilation impact is analysed by looking at a set of quantiles computed for soil moisture at each grid cell. Within hydrological monitoring systems, extreme dry or wet conditions are often defined via their relative occurrence, adding great importance to assimilation-induced quantile changes. Although such time series are still limited, longer L-band radiometer records will become available, making model output that is improved by assimilating these data more usable for extreme-event statistics.

  17. Solvency supervision based on a total balance sheet approach

    NASA Astrophysics Data System (ADS)

    Pitselis, Georgios

    2009-11-01

    In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here: the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered as an internal industry equation (norm), which explains a systematic relation between a dependent variable (such as own funds) and independent variables (e.g. financial characteristics, such as assets, provisions, etc.). The above two methods are implemented with two data sets.

  18. Evaluation of normalization methods in mammalian microRNA-Seq data

    PubMed Central

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated by next-generation sequencing technology. However, a systematic evaluation of normalization methods for microRNA sequencing data has so far been lacking. We comprehensively evaluate seven commonly used normalization methods: global normalization, Lowess normalization, the trimmed mean of M-values (TMM) method, quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods against results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method developed for RNA-Seq normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
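Quantile normalization, one of the two recommended methods, forces every sample (column) to share a common reference distribution: each column's values are replaced, rank for rank, by the across-sample mean of the sorted values. A minimal numpy sketch with toy counts (values hypothetical; tie handling deliberately simplified):

```python
import numpy as np

def quantile_normalize(x):
    """Quantile-normalize a (features x samples) matrix: every column's
    sorted values are replaced by the mean, across columns, of the
    values at the same rank."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # rank of each entry per column
    mean_sorted = np.sort(x, axis=0).mean(axis=1)       # shared reference distribution
    return mean_sorted[ranks]

counts = np.array([[5.0, 4.0, 3.0],
                   [2.0, 1.0, 4.0],
                   [3.0, 4.0, 6.0],
                   [4.0, 2.0, 8.0]])
norm = quantile_normalize(counts)
# after normalization every column has identical sorted values
```

Ties are broken arbitrarily here; production implementations average the reference values over tied ranks.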

  19. Bumps in river profiles: uncertainty assessment and smoothing using quantile regression techniques

    NASA Astrophysics Data System (ADS)

    Schwanghart, Wolfgang; Scherler, Dirk

    2017-12-01

    The analysis of longitudinal river profiles is an important tool for studying landscape evolution. However, characterizing river profiles based on digital elevation models (DEMs) suffers from errors and artifacts that particularly prevail along valley bottoms. The aim of this study is to characterize uncertainties that arise from the analysis of river profiles derived from different, near-globally available DEMs. We devised new algorithms - quantile carving and the CRS algorithm - that rely on quantile regression to enable hydrological correction and the uncertainty quantification of river profiles. We find that globally available DEMs commonly overestimate river elevations in steep topography. The distributions of elevation errors become increasingly wider and right skewed if adjacent hillslope gradients are steep. Our analysis indicates that the AW3D DEM has the highest precision and lowest bias for the analysis of river profiles in mountainous topography. The new 12 m resolution TanDEM-X DEM has a very low precision, most likely due to the combined effect of steep valley walls and the presence of water surfaces in valley bottoms. Compared to the conventional approaches of carving and filling, we find that our new approach is able to reduce the elevation bias and errors in longitudinal river profiles.

  20. Quantification of Uncertainty in the Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.

    2017-12-01

    Flood frequency analysis (FFA) is usually carried out for the planning and design of water resources and hydraulic structures. Owing to variability in sample representation, distribution selection, and distribution parameter estimation, flood quantile estimates are inherently uncertain. Hence, suitable approaches must be developed to quantify this uncertainty in the form of a prediction interval as an alternative to the deterministic approach. The framework developed in the present study to include uncertainty in FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of distribution parameters is identified to carry out FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and Banff, Canada) are used. The major focus of the present study was to evaluate the changes in the magnitude of flood quantiles due to the extreme flood event that occurred in 2013. In addition, the efficacy of the proposed method was verified against standard bootstrap-based sampling approaches, and the proposed method was found to be more reliable in modeling extreme floods than the bootstrap methods.
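The benchmark side of this comparison, a bootstrap prediction interval for a flood quantile, can be sketched generically. The code below is a plain nonparametric bootstrap on synthetic Gumbel-distributed annual maxima, not the authors' multi-objective optimization procedure, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic annual-maximum series standing in for a gauge record
ams = rng.gumbel(loc=100.0, scale=30.0, size=80)

def flood_quantile(sample, return_period=100):
    """T-year flood estimate: the (1 - 1/T) empirical quantile."""
    return np.quantile(sample, 1.0 - 1.0 / return_period)

# resample the record with replacement and re-estimate the quantile
boot = np.array([flood_quantile(rng.choice(ams, size=ams.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.quantile(boot, [0.05, 0.95])   # 90 % interval for the 100-year flood
```

With only 80 annual maxima the 100-year quantile sits near the top order statistics, which is exactly why the interval is wide and why parametric or optimization-based alternatives are attractive.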

  1. Spatio-temporal characteristics of the extreme precipitation by L-moment-based index-flood method in the Yangtze River Delta region, China

    NASA Astrophysics Data System (ADS)

    Yin, Yixing; Chen, Haishan; Xu, Chong-Yu; Xu, Wucheng; Chen, Changchun; Sun, Shanlei

    2016-05-01

    The regionalization methods, which "trade space for time" by pooling information from different locations in the frequency analysis, are efficient tools to enhance the reliability of extreme quantile estimates. This paper aims at improving the understanding of the regional frequency of extreme precipitation by using regionalization methods, and at providing scientific background and practical assistance in formulating regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region. To achieve these goals, the L-moment-based index-flood (LMIF) method, one of the most popular regionalization methods, is used in the regional frequency analysis of extreme precipitation, with special attention paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered by most studies using the LMIF method. Extensive data screening for stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on goodness-of-fit statistics and L-moment ratio diagrams, generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best-fitting distributions for most of the sub-regions, and estimated quantiles for each region were obtained. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates taking inter-site dependence into consideration. The results showed that the root-mean-square errors (RMSEs) were larger and the 90 % error bounds wider with inter-site dependence than without it, for both the regional growth curve and the quantile curve. The spatial patterns of extreme precipitation with a return period of 100 years were finally obtained, indicating two regions with the highest precipitation extremes and a large region with low precipitation extremes. However, the regions with low precipitation extremes are among the most developed and densely populated regions of the country, and floods there can cause great loss of human life and property damage due to high vulnerability. The study methods and procedures demonstrated in this paper provide a useful reference for frequency analysis of precipitation extremes in large regions, and the findings will be beneficial for flood control and management in the study area.
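The L-moments at the core of the LMIF method are linear combinations of probability-weighted moments of the ordered sample. A small numpy sketch of the first two sample L-moments and the L-CV ratio (toy data; function and variable names are illustrative):

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments (l1, l2) via the unbiased
    probability-weighted moment estimators b0 and b1."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # weight (j-1)/(n-1) on x_(j)
    l1 = b0              # L-location (the index-flood scaling factor)
    l2 = 2 * b1 - b0     # L-scale
    return l1, l2

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
l1, l2 = sample_l_moments(x)
t = l2 / l1              # L-CV, a key ratio in homogeneity testing
```

Higher-order ratios (L-skewness, L-kurtosis) extend the same pattern with b2 and b3 and drive the distribution choice on the L-moment ratio diagram.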

  2. Robust Inference of Risks of Large Portfolios

    PubMed Central

    Fan, Jianqing; Han, Fang; Liu, Han; Vickers, Byron

    2016-01-01

    We propose a bootstrap-based robust high-confidence level upper bound (Robust H-CLUB) for assessing the risks of large portfolios. The proposed approach exploits rank-based and quantile-based estimators, and can be viewed as a robust extension of the H-CLUB procedure (Fan et al., 2015). Such an extension allows us to handle possibly misspecified models and heavy-tailed data, which are stylized features in financial returns. Under mixing conditions, we analyze the proposed approach and demonstrate its advantage over H-CLUB. We further provide thorough numerical results to back up the developed theory, and also apply the proposed method to analyze a stock market dataset. PMID:27818569

  3. The Determinants of Federal and State Enforcement of Workplace Safety Regulations: OSHA Inspections 1990-2010*

    PubMed Central

    Jung, Juergen

    2013-01-01

    We explore the determinants of inspection outcomes across 1.6 million Occupational Safety and Health Administration (OSHA) audits from 1990 through 2010. We find that discretion in enforcement differs between state and federally conducted inspections. State agencies are more sensitive to local economic conditions, finding fewer standard violations and fewer serious violations as unemployment increases. Larger companies receive greater lenience in multiple dimensions. Inspector-issued fines and final fines, after negotiated reductions, are both smaller during Republican presidencies. Quantile regression analysis reveals that Presidential and Congressional party affiliations have their greatest impact on the largest negotiated reductions in fines. PMID:24659856

  4. (When and where) Do extreme climate events trigger extreme ecosystem responses? - Development and initial results of a holistic analysis framework

    NASA Astrophysics Data System (ADS)

    Hauber, Eva K.; Donner, Reik V.

    2015-04-01

    In the context of ongoing climate change, extremes are likely to increase in magnitude and frequency. One of the most important consequences of these changes is that the associated ecological risks and impacts are potentially rising as well. In order to better anticipate and understand these impacts, it therefore becomes increasingly crucial to understand the general connection between climate extremes and the response and functionality of ecosystems. Among other regions of the world, Europe presents an excellent test case for studies concerning the interaction between climate and biosphere, since it lies in the transition region between cold polar and warm tropical air masses and thus covers a great variety of different climatic zones and associated terrestrial ecosystems. The large temperature differences across the continent make this region particularly interesting for investigating the effects of climate change on biosphere-climate interactions. However, previously used methods for defining an extreme event typically fail to take seasonality and seasonal variance appropriately into account. Furthermore, most studies have focused on the impacts of individual extreme events instead of considering a whole inventory of extremes with their respective spatio-temporal extents. In order to overcome the aforementioned research gaps, this work introduces a new approach to studying climate-biosphere interactions associated with extreme events, which comprises three consecutive steps: (1) Since Europe exhibits climatic conditions characterized by marked seasonality, a novel method is developed to define extreme events taking into account the seasonality in all quantiles of the probability distribution of the respective variable of interest. This is achieved by considering kernel density estimates individually for each observation date during the year, including the properly weighted information from adjacent dates.
By this procedure, we obtain a seasonal cycle for each quantile of the distribution, which can be used for a fully data-adaptive definition of extremes as exceedances above this time-dependent quantile function. (2) Having thus identified the extreme events, we analyze their distribution in both space and time. Following a procedure recently proposed by Lloyd-Hughes (2012) and further exploited by Zscheischler et al. (2013), extremes observed at neighboring points in space and time are considered to form connected sets. Investigating the size distribution of these sets provides novel insights into the development and dynamical characteristics of spatio-temporally extended climate and ecosystem extremes. (3) Finally, the timing of such spatio-temporally extended extremes in different climatic as well as ecological variables is tested pairwise to rule out that co-occurrences of extremes have emerged solely due to chance. For this purpose, the recently developed framework of coincidence analysis (Donges et al., 2011; Rammig et al., 2014) is applied. The corresponding analysis allows us to identify potential causal linkages between climatic extremes and extreme ecosystem responses and, thus, to study their mechanisms and spatial as well as seasonal distribution in great detail. In this work, the described method is exemplified by using different climate data from the ERA-Interim reanalysis as well as remote sensing-based land surface temperature data. References: Donges et al., PNAS, 108, 20422, 2011; Lloyd-Hughes, Int. J. Climatol., 32, 406, 2012; Rammig et al., Biogeosci. Discuss., 11, 2537, 2014; Zscheischler et al., Ecol. Inform., 15, 66, 2013.
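Step (1), a seasonally varying quantile threshold, can be approximated with a simple day-of-year pooling scheme; the kernel-weighted estimate the authors describe would replace the hard window below with smooth weights. A synthetic-data sketch (all names, the window width, and the quantile level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
doy = np.tile(np.arange(365), 40)                       # 40 years of daily data
temp = 10 + 8 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 2, doy.size)

def seasonal_threshold(values, days, q=0.95, half_window=7):
    """q-quantile per calendar day, pooling a +/- half_window day
    circular window across all years."""
    thr = np.empty(365)
    for d in range(365):
        dist = np.minimum(np.abs(days - d), 365 - np.abs(days - d))
        thr[d] = np.quantile(values[dist <= half_window], q)
    return thr

thr = seasonal_threshold(temp, doy)
extreme = temp > thr[doy]      # exceedances of the time-dependent quantile
# by construction roughly 5 % of days are flagged, in every season
```

The point of the construction is that a midwinter warm spell can be "extreme" even though its absolute value is unremarkable by summer standards.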

  5. Mapping the changing pattern of local climate as an observed distribution

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nicholas

    2013-04-01

    It is at local scales that the impacts of climate change will be felt directly and at which adaptation planning decisions must be made. This requires quantifying the geographical patterns in trends at specific quantiles in distributions of variables such as daily temperature or precipitation. Here we focus on these local changes and on the way observational data can be analysed to inform us about the pattern of local climate change. We present a method[1] for analysing local climatic timeseries data to assess which quantiles of the local climatic distribution show the greatest and most robust trends. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily temperature from specific locations across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions to changing climate. Geographical location and temperature are treated as independent variables; we thus obtain as outputs the pattern of variation in sensitivity with temperature (or occurrence likelihood), and with geographical location. We find many regionally consistent patterns of response that are of potential value in adaptation planning. We discuss methods to quantify and map the robustness of these observed sensitivities and their statistical likelihood. This also quantifies the level of detail needed from climate models if they are to be used as tools to assess climate change impact. [1] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2013: On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, in press. [2] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New, 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res. (Atmospheres), 113, D20119, doi:10.1029/2008JD10201
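The quantile-by-quantile comparison of two periods can be sketched as follows. This is a generic illustration on synthetic data; the paper's method additionally separates natural variability from secular change, which this toy example omits:

```python
import numpy as np

rng = np.random.default_rng(7)
# two synthetic 30-year daily-temperature samples; the later period is
# both warmer (shifted mean) and more variable (larger spread)
early = rng.normal(10.0, 4.0, 365 * 30)
late = rng.normal(11.0, 4.5, 365 * 30)

qs = np.linspace(0.05, 0.95, 19)
shift = np.quantile(late, qs) - np.quantile(early, qs)
# with a widened distribution the upper quantiles shift more than the median,
# so the "sensitivity to change" differs across quantiles
```

Mapping `shift` as a function of quantile (or of the temperature value itself) at each grid point gives the kind of geographical sensitivity pattern the abstract describes.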

  6. Detection of relationships between SUDOSCAN with estimated glomerular filtration rate (eGFR) in Chinese patients with type 2 diabetes.

    PubMed

    Mao, Fei; Zhu, Xiaoming; Lu, Bin; Li, Yiming

    2018-04-01

    SUDOSCAN (Impeto Medical, Paris, France) has been shown to be a novel, non-invasive method for detecting renal dysfunction in type 2 diabetes mellitus (T2DM) patients. In this study, we sought to compare the diabetic kidney dysfunction score (DKD-score) of SUDOSCAN with the estimated glomerular filtration rate (eGFR) using quantile regression analysis, an approach entirely different from previous studies. A total of 223 Chinese T2DM patients were enrolled in the study. SUDOSCAN, renal function tests (including blood urea nitrogen, creatinine and uric acid) and 99mTc-diethylenetriamine pentaacetic acid (99mTc-DTPA) renal dynamic imaging were performed in all T2DM patients. The DKD-score of SUDOSCAN was compared with eGFR detected by 99mTc-DTPA renal dynamic imaging through quantile regression analysis. Its validity and utility were further determined through bias and precision tests. The quantile regression analysis demonstrated that the relationship with eGFR was inverse and significant for almost all percentiles of DKD-score. The coefficients decreased as the percentile of DKD-score increased. In the validation data set, both bias and precision increased with eGFR (median difference, -21.2 ml/min/1.73 m² for all individuals vs. -4.6 ml/min/1.73 m² for eGFR between 0 and 59 ml/min/1.73 m²; interquartile range [IQR] for the difference, -25.4 ml/min/1.73 m² vs. -14.7 ml/min/1.73 m²). The eGFR category misclassification rates were 10% in the eGFR 0-59 ml/min/1.73 m² group, 57.3% in the 60-90 group, and 87.2% in the eGFR > 90 ml/min/1.73 m² group. The DKD-score of SUDOSCAN could be used to detect renal dysfunction in T2DM patients. A higher prognostic value of DKD-score was detected when the eGFR level was lower. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Methods for estimating selected low-flow statistics and development of annual flow-duration statistics for Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Kula, Stephanie P.

    2013-01-01

    This report presents the results of a study to develop methods for estimating selected low-flow statistics and for determining annual flow-duration statistics for Ohio streams. Regression techniques were used to develop equations for estimating 10-year recurrence-interval (10-percent annual-nonexceedance probability) low-flow yields, in cubic feet per second per square mile, with averaging periods of 1, 7, 30, and 90-day(s), and for estimating the yield corresponding to the long-term 80-percent duration flow. These equations, which estimate low-flow yields as a function of a streamflow-variability index, are based on previously published low-flow statistics for 79 long-term continuous-record streamgages with at least 10 years of data collected through water year 1997. When applied to the calibration dataset, average absolute percent errors for the regression equations ranged from 15.8 to 42.0 percent. The regression results have been incorporated into the U.S. Geological Survey (USGS) StreamStats application for Ohio (http://water.usgs.gov/osw/streamstats/ohio.html) in the form of a yield grid to facilitate estimation of the corresponding streamflow statistics in cubic feet per second. Logistic-regression equations also were developed and incorporated into the USGS StreamStats application for Ohio for selected low-flow statistics to help identify occurrences of zero-valued statistics. Quantiles of daily and 7-day mean streamflows were determined for annual and annual-seasonal (September–November) periods for each complete climatic year of streamflow-gaging station record for 110 selected streamflow-gaging stations with 20 or more years of record. The quantiles determined for each climatic year were the 99-, 98-, 95-, 90-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 10-, 5-, 2-, and 1-percent exceedance streamflows. 
Selected exceedance percentiles of the annual-exceedance percentiles were subsequently computed and tabulated to help facilitate consideration of the annual risk of exceedance or nonexceedance of annual and annual-seasonal-period flow-duration values. The quantiles are based on streamflow data collected through climatic year 2008.
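An annual flow-duration quantile of the kind tabulated here is simply a quantile of the daily flows at the complementary probability: the p-percent exceedance flow is the (100 - p)-percent quantile. A generic numpy sketch on synthetic data (not the USGS computation or dataset; distribution parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# one synthetic climatic year of daily streamflow (lognormal is a common toy model)
daily_flow = rng.lognormal(mean=2.0, sigma=1.0, size=365)

# p-percent exceedance flow: the discharge equalled or exceeded p % of the time
exceed_p = np.array([99, 95, 90, 75, 50, 25, 10, 5, 1])
duration_flows = np.quantile(daily_flow, 1 - exceed_p / 100)
# higher exceedance percentages correspond to lower (more dependable) flows,
# so duration_flows is nondecreasing as exceed_p decreases
```

Repeating this for every complete climatic year of record, then taking quantiles of the per-year values, gives the "exceedance percentiles of the annual-exceedance percentiles" the report tabulates.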

  8. Trajectories of HbA1c Levels in Children and Youth with Type 1 Diabetes

    PubMed Central

    Pinhas-Hamiel, Orit; Hamiel, Uri; Boyko, Valentina; Graph-Barel, Chana; Reichman, Brian; Lerner-Geva, Liat

    2014-01-01

    Purpose To illustrate the distribution of Hemoglobin A1c (HbA1c) levels according to age and gender among children, adolescents and youth with type 1 diabetes (T1DM). Methods Consecutive HbA1c measurements of 349 patients, aged 2 to 30 years, with T1DM were obtained from 1995 through 2010. Measurements from patients diagnosed with celiac disease (n = 20), eating disorders (n = 41) and hemoglobinopathy (n = 1) were excluded. The study sample comprised 4815 measurements of HbA1c from 287 patients. Regression percentiles of HbA1c were calculated as a function of age and gender by the quantile regression method using the SAS procedure QUANTREG. Results Crude percentiles of HbA1c as a function of age and gender and the modeled curves produced using quantile regression showed good concordance. The curves show a decline in HbA1c levels from age 2 to 4 years at each percentile. Thereafter, there is a gradual increase during the prepubertal years with a peak at ages 12 to 14 years. HbA1c levels subsequently decline to the lowest values in the third decade. Curves of females and males followed closely, with females having HbA1c levels about 0.1% (1.1 mmol/mol) higher at the 25th, 50th, and 75th percentiles. Conclusion We constructed age-specific distribution curves for HbA1c levels for patients with T1DM. These percentiles may be used to track an individual patient's measurements longitudinally against those of age-matched patients. PMID:25275650

  9. Opportunities of probabilistic flood loss models

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno

    2016-04-01

    Oftentimes, traditional uni-variate damage models such as depth-damage curves fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of the model results. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely bagging decision trees and Bayesian networks, with traditional stage-damage functions. For model evaluation we use empirical damage data compiled via computer-aided telephone interviews after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), the sharpness of the predictions, and reliability, represented by the proportion of observations that fall within the 5- to 95-quantile predictive interval. The comparison of the uni-variate stage-damage function and the multi-variate model approach emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variate model reveals an additional source of uncertainty. However, the predictive performance in terms of bias (MBE), precision (MAE) and reliability (HR) is clearly improved in comparison to the uni-variate stage-damage function. Overall, probabilistic models provide quantitative information about prediction uncertainty, which is crucial to assess the reliability of model predictions and improves the usefulness of model results.
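The reliability (hit-rate) measure described above can be sketched as the empirical coverage of a central 90 % predictive interval. The code uses synthetic toy data, not the flood-damage dataset, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
# toy ensemble of relative-damage predictions: rows = buildings,
# columns = ensemble members (e.g. bootstrap trees or posterior samples)
ensemble = rng.beta(2, 5, size=(500, 200))
observed = rng.beta(2, 5, size=500)      # matching toy observations

lo = np.quantile(ensemble, 0.05, axis=1)   # 5-quantile prediction per building
hi = np.quantile(ensemble, 0.95, axis=1)   # 95-quantile prediction per building
hit_rate = np.mean((observed >= lo) & (observed <= hi))
# a reliable model covers roughly 90 % of observations with this interval
```

Coverage far below the nominal 90 % indicates overconfident (too sharp) predictions; coverage far above it indicates intervals that are wider than they need to be.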

  10. The importance of hydrological uncertainty assessment methods in climate change impact studies

    NASA Astrophysics Data System (ADS)

    Honti, M.; Scheidegger, A.; Stamm, C.

    2014-08-01

    Climate change impact assessments have become increasingly popular in hydrology since the mid-1980s, with a recent boost after the publication of the IPCC AR4 report. From hundreds of impact studies a quasi-standard methodology has emerged, to a large extent shaped by the growing public demand for predicting how water resources management or flood protection should change in the coming decades. The "standard" workflow relies on a model cascade from global circulation model (GCM) predictions for selected IPCC scenarios to future catchment hydrology. Uncertainty is present at each level and propagates through the model cascade. There is an emerging consensus among many studies on the relative importance of the different uncertainty sources. The prevailing perception is that GCM uncertainty dominates hydrological impact studies. Our hypothesis was that the relative importance of climatic and hydrologic uncertainty is (among other factors) heavily influenced by the uncertainty assessment method. To test this we carried out a climate change impact assessment and estimated the relative importance of the uncertainty sources. The study was performed on two small catchments in the Swiss Plateau with a lumped conceptual rainfall runoff model. In the climatic part we applied the standard ensemble approach to quantify uncertainty, but in hydrology we used formal Bayesian uncertainty assessment with two different likelihood functions. One was a time series error model that was able to deal with the complicated statistical properties of hydrological model residuals. The second was an approximate likelihood function for the flow quantiles. The results showed that the expected climatic impact on flow quantiles was small compared to prediction uncertainty. The choice of uncertainty assessment method actually determined what sources of uncertainty could be identified at all.
This demonstrated that one could arrive at rather different conclusions about the causes behind predictive uncertainty for the same hydrological model and calibration data when considering different objective functions for calibration.

  11. The N-shaped environmental Kuznets curve: an empirical evaluation using a panel quantile regression approach.

    PubMed

    Allard, Alexandra; Takman, Johanna; Uddin, Gazi Salah; Ahmed, Ali

    2018-02-01

    We evaluate the N-shaped environmental Kuznets curve (EKC) using panel quantile regression analysis. We investigate the relationship between CO2 emissions and GDP per capita for 74 countries over the period 1994-2012. We include additional explanatory variables, such as renewable energy consumption, technological development, trade, and institutional quality. We find evidence for the N-shaped EKC in all income groups, except for the upper-middle-income countries. Heterogeneous characteristics are, however, observed over the N-shaped EKC. Finally, we find a negative relationship between renewable energy consumption and CO2 emissions, which highlights the importance of promoting greener energy in order to combat global warming.

  12. The Income-Health Relationship 'Beyond the Mean': New Evidence from Biomarkers.

    PubMed

    Carrieri, Vincenzo; Jones, Andrew M

    2017-07-01

    The relationship between income and health is one of the most explored topics in health economics, but less is known about this relationship at different points of the health distribution. Analysis based solely on the mean may miss important information in other parts of the distribution. This is especially relevant when clinical concern is focused on the tail of the distribution, and when evaluating the income gradient at different points of the distribution and decomposing income-related inequalities in health is of interest. We use the unconditional quantile regression approach to analyse the income gradient across the entire distribution of objectively measured blood-based biomarkers. We apply an Oaxaca-Blinder decomposition at various quantiles of the biomarker distributions to analyse gender differentials in biomarkers and to measure the contribution of income (and other covariates) to these differentials. Using data from the Health Survey for England, we find a non-linear relationship between income and health and a strong gradient with respect to income at the highest quantiles of the biomarker distributions. We find that there is heterogeneity in the association between health and income across genders, which accounts for a substantial percentage of the gender differentials in observed health. Copyright © 2016 John Wiley & Sons, Ltd.
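Unconditional quantile regression is commonly implemented by regressing a recentered influence function (RIF) of the quantile on covariates (the approach of Firpo, Fortin and Lemieux). A sketch of the RIF construction itself, on synthetic data with a Gaussian-kernel density estimate; the bandwidth rule and all names are illustrative, and the subsequent OLS step on covariates is omitted:

```python
import numpy as np

rng = np.random.default_rng(11)
y = rng.normal(size=5000)        # synthetic stand-in for a biomarker

tau = 0.9
q = np.quantile(y, tau)

# kernel density estimate of f(q) (Gaussian kernel, Silverman-style bandwidth)
h = 1.06 * y.std() * y.size ** (-0.2)
f_q = np.mean(np.exp(-0.5 * ((y - q) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

# RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau)
rif = q + (tau - (y <= q)) / f_q
# the RIF averages back to the quantile; regressing it on covariates by OLS
# estimates the covariates' effects on that unconditional quantile
```

The same construction at several values of `tau` is what allows an Oaxaca-Blinder decomposition to be applied quantile by quantile.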

  13. Analysis of regional natural flow for evaluation of flood risk according to RCP climate change scenarios

    NASA Astrophysics Data System (ADS)

    Lee, J. Y.; Chae, B. S.; Wi, S.; KIm, T. W.

    2017-12-01

    Various climate change scenarios project that rainfall in South Korea will increase by 3-10% in the future. This increase in rainfall will also have a significant effect on future flood frequency. This study analyzed the probability of future floods to investigate the stability of existing and newly installed hydraulic structures and the possibility of increasing flood damage in mid-sized watersheds in South Korea. To achieve this goal, we first clarified the relationship between flood quantiles acquired from flood-frequency analysis (FFA) and design rainfall-runoff analysis (DRRA) in gauged watersheds. Then, after synthetically generating regional natural flow data according to RCP climate change scenarios, we developed mathematical formulas to estimate future flood quantiles based on the regression between DRRA and FFA incorporated with regional natural flows in ungauged watersheds. Finally, we developed a flood risk map to investigate the change of flood risk in terms of the return period for the past, present, and future. The results identified that future flood quantiles and risks would increase under the RCP climate change scenarios. Because regional flood risk was found to increase in the future compared with the present status, comprehensive flood control will be needed to cope with future extreme floods.

  14. Regional maximum rainfall analysis using L-moments at the Titicaca Lake drainage, Peru

    NASA Astrophysics Data System (ADS)

    Fernández-Palomino, Carlos Antonio; Lavado-Casimiro, Waldo Sven

    2017-08-01

    The present study investigates the application of the index flood L-moments-based regional frequency analysis procedure (RFA-LM) to the annual maximum 24-h rainfall (AM) of 33 rainfall gauge stations (RGs) to estimate rainfall quantiles at the Titicaca Lake drainage (TL). The study region was chosen because it is characterised by common floods that affect agricultural production and infrastructure. First, detailed quality analyses and verification of the RFA-LM assumptions were conducted. For this purpose, different tests for outlier verification, homogeneity, stationarity, and serial independence were employed. Then, the application of RFA-LM procedure allowed us to consider the TL as a single, hydrologically homogeneous region, in terms of its maximum rainfall frequency. That is, this region can be modelled by a generalised normal (GNO) distribution, chosen according to the Z test for goodness-of-fit, L-moments (LM) ratio diagram, and an additional evaluation of the precision of the regional growth curve. Due to the low density of RG in the TL, it was important to produce maps of the AM design quantiles estimated using RFA-LM. Therefore, the ordinary Kriging interpolation (OK) technique was used. These maps will be a useful tool for determining the different AM quantiles at any point of interest for hydrologists in the region.

  15. Assessment of Planetary-Boundary-Layer Schemes in the Weather Research and Forecasting Model Within and Above an Urban Canopy Layer

    NASA Astrophysics Data System (ADS)

    Ferrero, Enrico; Alessandrini, Stefano; Vandenberghe, Francois

    2018-03-01

    We tested several planetary-boundary-layer (PBL) schemes available in the Weather Research and Forecasting (WRF) model against measured wind speed and direction, temperature and turbulent kinetic energy (TKE) at three levels (5, 9, 25 m). The Urban Turbulence Project dataset, gathered from the outskirts of Turin, Italy and used for the comparison, provides measurements made by sonic anemometers for more than 1 year. In contrast to other similar studies, which have mainly focused on short-time periods, we considered 2 months of measurements (January and July) representing both the seasonal and the daily variabilities. To understand how the WRF-model PBL schemes perform in an urban environment, often characterized by low wind-speed conditions, we first compared six PBL schemes against observations taken by the highest anemometer located in the inertial sub-layer. The availability of the TKE measurements allows us to directly evaluate the performances of the model; results of the model evaluation are presented in terms of quantile versus quantile plots and statistical indices. Secondly, we considered WRF-model PBL schemes that can be coupled to the urban-surface exchange parametrizations and compared the simulation results with measurements from the two lower anemometers located inside the canopy layer. We find that the PBL schemes accounting for TKE are more accurate and the model representation of the roughness sub-layer improves when the urban model is coupled to each PBL scheme.

  16. Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk

    PubMed Central

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
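    The body-burden index at the core of WQS regression is a weighted sum of quantile-scored exposures. The sketch below builds such an index with fixed, illustrative weights; in actual WQS regression the weights are estimated (typically by bootstrap optimization subject to nonnegativity and sum-to-one constraints), and the exposure values here are hypothetical.

```python
def quantile_scores(values, n_quantiles=4):
    """Assign each observation its quantile bin (0 .. n_quantiles - 1)."""
    ranked = sorted(values)
    cuts = [ranked[int(len(ranked) * k / n_quantiles)]
            for k in range(1, n_quantiles)]
    return [sum(v >= c for c in cuts) for v in values]

def wqs_index(exposure_rows, weights):
    """Weighted quantile sum per subject; weights are nonnegative, sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    cols = list(zip(*exposure_rows))                     # one list per chemical
    scored = list(zip(*[quantile_scores(c) for c in cols]))
    return [sum(w * s for w, s in zip(weights, row)) for row in scored]

# Hypothetical exposures: 8 subjects, 2 correlated chemicals
rows = [[1, 10], [2, 22], [3, 31], [4, 39], [5, 52], [6, 60], [7, 75], [8, 81]]
index = wqs_index(rows, [0.7, 0.3])   # illustrative weights, not estimated
```

    Because correlated chemicals share a single index, the collinearity problems described above for separate regression coefficients do not arise in the index itself.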

  17. Assessment of weighted quantile sum regression for modeling chemical mixtures and cancer risk.

    PubMed

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case-control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome.

  18. Calibration of limited-area ensemble precipitation forecasts for hydrological predictions

    NASA Astrophysics Data System (ADS)

    Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana

    2015-04-01

    The main objective of this study is to investigate the impact of calibration on limited-area ensemble precipitation forecasts used for driving discharge predictions up to 5 days in advance. A reforecast dataset spanning 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS), was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method was preferred because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade when only a limited training dataset is available; a sensitivity test on the length of the training dataset over which to perform the analog search was therefore carried out. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not strong for the available training dataset. A comparison between calibration based on the deterministic reforecast and calibration based on the full operational ensemble used as training dataset was also considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given their remarkable computational cost. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
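    Of the three techniques, quantile-to-quantile mapping is the simplest to sketch: a forecast value is replaced by the observed-climatology value at the same empirical non-exceedance probability. The minimal version below omits the pooling, seasonal windowing, and tail treatment a real implementation would need, and the climatology values are hypothetical.

```python
import bisect

def quantile_map(forecast_value, forecast_clim, observed_clim):
    """Empirical quantile-to-quantile mapping: map a forecast value to the
    observed-climatology quantile at the same CDF position."""
    fc = sorted(forecast_clim)
    ob = sorted(observed_clim)
    # empirical CDF rank of the forecast value in its own climatology
    rank = bisect.bisect_right(fc, forecast_value)
    # corresponding index in the observed climatology (integer arithmetic
    # avoids float rounding at quantile boundaries)
    idx = min(rank * len(ob) // len(fc), len(ob) - 1)
    return ob[idx]
```

    If the model systematically under-forecasts heavy precipitation, the observed climatology sits above the forecast climatology in the upper quantiles, and the mapping inflates large forecast values accordingly.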

  19. Extreme climatic events drive mammal irruptions: regression analysis of 100-year trends in desert rainfall and temperature

    PubMed Central

    Greenville, Aaron C; Wardle, Glenda M; Dickman, Chris R

    2012-01-01

    Extreme climatic events, such as flooding rains, extended decadal droughts and heat waves have been identified increasingly as important regulators of natural populations. Climate models predict that global warming will drive changes in rainfall and increase the frequency and severity of extreme events. Consequently, to anticipate how organisms will respond we need to document how changes in extremes of temperature and rainfall compare to trends in the mean values of these variables and over what spatial scales the patterns are consistent. Using the longest historical weather records available for central Australia – 100 years – and quantile regression methods, we investigate if extreme climate events have changed at similar rates to median events, if annual rainfall has increased in variability, and if the frequency of large rainfall events has increased over this period. Specifically, we compared local (individual weather stations) and regional (Simpson Desert) spatial scales, and quantified trends in median (50th quantile) and extreme weather values (5th, 10th, 90th, and 95th quantiles). We found that median and extreme annual minimum and maximum temperatures have increased at both spatial scales over the past century. Rainfall changes have been inconsistent across the Simpson Desert; individual weather stations showed increases in annual rainfall, increased frequency of large rainfall events or more prolonged droughts, depending on the location. In contrast to our prediction, we found no evidence that intra-annual rainfall had become more variable over time. Using long-term live-trapping records (22 years) of desert small mammals as a case study, we demonstrate that irruptive events are driven by extreme rainfalls (>95th quantile) and that increases in the magnitude and frequency of extreme rainfall events are likely to drive changes in the populations of these species through direct and indirect changes in predation pressure and wildfires. PMID:23170202
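    The quantile regression used to compare median and extreme trends rests on the check (pinball) loss: the value minimizing it for a given tau is the tau-quantile. The sketch below demonstrates this for a constant predictor via a grid search; the study's actual models regress quantiles on time and are estimated by linear programming, which is beyond this illustration.

```python
def pinball_loss(tau, y, q):
    """Average check (pinball) loss of predicting the constant q for sample y."""
    return sum((tau * (v - q)) if v >= q else ((tau - 1) * (v - q))
               for v in y) / len(y)

def sample_quantile_by_loss(tau, y, grid):
    """The tau-quantile minimizes the pinball loss; search a candidate grid."""
    return min(grid, key=lambda q: pinball_loss(tau, y, q))
```

    Repeating this with tau = 0.05, 0.5, and 0.95 on, say, annual rainfall against year is exactly how trends in extremes can diverge from trends in the median.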

  20. Differences in BMI z-Scores between Offspring of Smoking and Nonsmoking Mothers: A Longitudinal Study of German Children from Birth through 14 Years of Age

    PubMed Central

    Fenske, Nora; Müller, Manfred J.; Plachta-Danielzik, Sandra; Keil, Thomas; Grabenhenrich, Linus; von Kries, Rüdiger

    2014-01-01

    Background: Children of mothers who smoked during pregnancy have a lower birth weight but have a higher chance to become overweight during childhood. Objectives: We followed children longitudinally to assess the age when higher body mass index (BMI) z-scores became evident in the children of mothers who smoked during pregnancy, and to evaluate the trajectory of changes until adolescence. Methods: We pooled data from two German cohort studies that included repeated anthropometric measurements until 14 years of age and information on smoking during pregnancy and other risk factors for overweight. We used longitudinal quantile regression to estimate age- and sex-specific associations between maternal smoking and the 10th, 25th, 50th, 75th, and 90th quantiles of the BMI z-score distribution in study participants from birth through 14 years of age, adjusted for potential confounders. We used additive mixed models to estimate associations with mean BMI z-scores. Results: Mean and median (50th quantile) BMI z-scores at birth were smaller in the children of mothers who smoked during pregnancy compared with children of nonsmoking mothers, but BMI z-scores were significantly associated with maternal smoking beginning at the age of 4–5 years, and differences increased over time. For example, the difference in the median BMI z-score between the daughters of smokers versus nonsmokers was 0.12 (95% CI: 0.01, 0.21) at 5 years, and 0.30 (95% CI: 0.08, 0.39) at 14 years of age. For lower BMI z-score quantiles, the association with smoking was more pronounced in girls, whereas in boys the association was more pronounced for higher BMI z-score quantiles. Conclusions: A clear difference in BMI z-score (mean and median) between children of smoking and nonsmoking mothers emerged at 4–5 years of age. The shape and size of age-specific effect estimates for maternal smoking during pregnancy varied by age and sex across the BMI z-score distribution. 
Citation: Riedel C, Fenske N, Müller MJ, Plachta-Danielzik S, Keil T, Grabenhenrich L, von Kries R. 2014. Differences in BMI z-scores between offspring of smoking and nonsmoking mothers: a longitudinal study of German children from birth through 14 years of age. Environ Health Perspect 122:761–767; http://dx.doi.org/10.1289/ehp.1307139 PMID:24695368

  1. Spatio-temporal analysis of the extreme precipitation by the L-moment-based index-flood method in the Yangtze River Delta region, China

    NASA Astrophysics Data System (ADS)

    Yin, Yixing; Chen, Haishan; Xu, Chongyu; Xu, Wucheng; Chen, Changchun

    2014-05-01

    Regionalization methods, which 'trade space for time' by pooling several at-site data records in the frequency analysis, are an efficient tool for improving the reliability of extreme quantile estimates. With the main aims of improving the understanding of the regional frequency of extreme precipitation and providing scientific and practical support for regional water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region, this paper applies the L-moment-based index-flood (LMIF) method, one of the most popular regionalization methods, to the regional frequency analysis of extreme precipitation. Particular attention was paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered in most studies using the LMIF method. Extensive data screening for stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on goodness-of-fit statistics and L-moment ratio diagrams, the generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best-fit distributions for most of the subregions, and quantile estimates were obtained for each region. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates while taking inter-site dependence into consideration. The results showed that the root mean square errors (RMSEs) were larger and the 90% error bounds wider with inter-site dependence than without it, for both the regional growth curve and the quantile curve. The spatial patterns of extreme precipitation with a return period of 100 years indicated two regions with the highest precipitation extremes (the southeastern coastal area of Zhejiang Province and the southwestern part of Anhui Province) and a large region with low precipitation extremes covering the north and middle parts of Zhejiang Province, Shanghai City, and Jiangsu Province. However, the central areas with low precipitation extremes are the most developed and densely populated parts of the study area, so floods there can cause great loss of life and property. These findings will help policymakers and stakeholders in water resource management formulate regional development strategies against the menace of frequent floods.
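    The Monte Carlo accuracy assessment described here boils down to repeatedly simulating samples, re-estimating the quantile, and accumulating squared errors. The stand-alone sketch below does this for the empirical p-quantile of an Exp(1) parent (a convenient stand-in; the study simulates from fitted regional distributions with inter-site dependence, which is not reproduced here).

```python
import math
import random

def mc_quantile_rmse(n, p, reps=500, seed=1):
    """Monte-Carlo RMSE of the empirical p-quantile estimated from
    samples of size n drawn from an Exp(1) distribution."""
    rng = random.Random(seed)
    truth = -math.log(1.0 - p)          # true Exp(1) quantile
    sse = 0.0
    for _ in range(reps):
        xs = sorted(rng.expovariate(1.0) for _ in range(n))
        est = xs[min(int(p * n), n - 1)]
        sse += (est - truth) ** 2
    return math.sqrt(sse / reps)
```

    As expected, the RMSE shrinks as the sample size grows, which is the basic argument for pooling sites: regionalization effectively enlarges n for the quantile being estimated.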

  2. Modelling the behaviour of unemployment rates in the US over time and across space

    NASA Astrophysics Data System (ADS)

    Holmes, Mark J.; Otero, Jesús; Panagiotidis, Theodore

    2013-11-01

    This paper provides evidence that unemployment rates across US states are stationary and therefore behave according to the natural rate hypothesis. We provide new insights by considering the effect of key variables on the speed of adjustment associated with unemployment shocks. A high-dimensional VAR analysis of the half-lives associated with shocks to unemployment rates in pairs of states suggests that the distance between states and vacancy rates exert a positive and a negative influence, respectively. We find that higher homeownership rates do not lead to longer half-lives. When the symmetry assumption is relaxed through quantile regression, support for the Oswald hypothesis, through a positive relationship between homeownership rates and half-lives, is found at the higher quantiles.
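    The half-life of a shock has a simple closed form in the first-order autoregressive case: with persistence rho, the impulse response after h periods is rho**h, so the half-life solves rho**h = 1/2. The one-liner below illustrates this (the paper's VAR half-lives generalize the same idea to richer dynamics).

```python
import math

def half_life(rho):
    """Half-life of a shock in an AR(1) process y_t = rho * y_{t-1} + e_t:
    the horizon h at which the impulse response rho**h falls to one half."""
    assert 0 < rho < 1
    return math.log(0.5) / math.log(rho)
```

    A persistence of 0.9 implies a half-life of roughly 6.6 periods, so slowly-adjusting states show up as high-rho, long-half-life pairs in this framework.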

  3. Mandatory universal drug plan, access to health care and health: Evidence from Canada.

    PubMed

    Wang, Chao; Li, Qing; Sweetman, Arthur; Hurley, Jeremiah

    2015-12-01

    This paper examines the impacts of a mandatory, universal prescription drug insurance program on health care utilization and health outcomes in a public health care system with free physician and hospital services. Using the Canadian National Population Health Survey from 1994 to 2003 and implementing a difference-in-differences estimation strategy, we find that the mandatory program substantially increased drug coverage among the general population. The program also increased medication use and general practitioner visits but had little effect on specialist visits and hospitalization. Findings from quantile regressions suggest that there was a large improvement in the health status of less healthy individuals. Further analysis by pre-policy drug insurance status and the presence of chronic conditions reveals a marked increase in the probability of taking medication and visiting a general practitioner among the previously uninsured and those with a chronic condition. Copyright © 2015 Elsevier B.V. All rights reserved.
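    The difference-in-differences logic behind the estimation strategy can be reduced to four group means: the change in the treated group minus the change in the control group. A minimal sketch with made-up outcome values (the study's actual estimation conditions on covariates and survey structure):

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the treated group's pre/post change
    net of the control group's pre/post change."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
```

    Under the parallel-trends assumption, the control group's change absorbs common time effects, leaving the program's effect on the treated group.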

  4. Threshold exceedance risk assessment in complex space-time systems

    NASA Astrophysics Data System (ADS)

    Angulo, José M.; Madrid, Ana E.; Romero, José L.

    2015-04-01

    Environmental and health impact risk assessment studies most often involve analysis and characterization of complex spatio-temporal dynamics. Recent developments in this context are addressed, among other objectives, to proper representation of structural heterogeneities, heavy-tailed processes, long-range dependence, intermittency, scaling behavior, etc. Extremal behaviour related to spatial threshold exceedances can be described in terms of geometrical characteristics and distribution patterns of excursion sets, which are the basis for construction of risk-related quantities, such as in the case of evolutionary study of 'hotspots' and long-term indicators of occurrence of extremal episodes. Derivation of flexible techniques, suitable for both the application under general conditions and the interpretation on singularities, is important for practice. Modern risk theory, a developing discipline motivated by the need to establish solid general mathematical-probabilistic foundations for rigorous definition and characterization of risk measures, has led to the introduction of a variety of classes and families, ranging from some conceptually inspired by specific fields of applications, to some intended to provide generality and flexibility to risk analysts under parametric specifications, etc. Quantile-based risk measures, such as Value-at-Risk (VaR), Average Value-at-Risk (AVaR), and generalization to spectral measures, are of particular interest for assessment under very general conditions. In this work, we study the application of quantile-based risk measures in the spatio-temporal context in relation to certain geometrical characteristics of spatial threshold exceedance sets. In particular, we establish a closed-form relationship between VaR, AVaR, and the expected value of threshold exceedance areas and excess volumes. 
    Conditional simulation allows us, by means of empirical global and local spatial cumulative distributions, to derive various statistics of practical interest and subsequently construct dynamic risk maps. Further, we study the implementation of static and dynamic spatial deformation under this setup, which is meaningful, among other aspects, for incorporating heterogeneities and/or covariate effects, or for considering external factors in risk measurement. We illustrate this approach through environment and health applications. This work is partially supported by grant MTM2012-32666 of the Spanish Ministry of Economy and Competitiveness (co-financed by FEDER).
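    The two quantile-based risk measures named above have simple empirical forms: VaR at level alpha is the alpha-quantile of the loss distribution, and AVaR is the mean of the losses at or beyond that quantile. A minimal empirical sketch (the loss values are hypothetical; the paper works with spatial exceedance functionals, not a plain sample):

```python
def var_avar(losses, alpha):
    """Empirical Value-at-Risk and Average Value-at-Risk at level alpha."""
    xs = sorted(losses)
    n = len(xs)
    # index of the alpha-quantile; small epsilon guards float rounding
    k = min(int(alpha * n + 1e-9), n - 1)
    var = xs[k]                      # alpha-quantile of the losses
    tail = xs[k:]                    # losses at or beyond VaR
    avar = sum(tail) / len(tail)     # tail mean (expected shortfall)
    return var, avar
```

    AVaR is always at least VaR, and unlike VaR it is a coherent risk measure, which is one reason spectral generalizations build on tail averages rather than single quantiles.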

  5. Adjustment of Cell-Type Composition Minimizes Systematic Bias in Blood DNA Methylation Profiles Derived by DNA Collection Protocols

    PubMed Central

    Shiwa, Yuh; Hachiya, Tsuyoshi; Furukawa, Ryohei; Ohmomo, Hideki; Ono, Kanako; Kudo, Hisaaki; Hata, Jun; Hozawa, Atsushi; Iwasaki, Motoki; Matsuda, Koichi; Minegishi, Naoko; Satoh, Mamoru; Tanno, Kozo; Yamaji, Taiki; Wakai, Kenji; Hitomi, Jiro; Kiyohara, Yutaka; Kubo, Michiaki; Tanaka, Hideo; Tsugane, Shoichiro; Yamamoto, Masayuki; Sobue, Kenji; Shimizu, Atsushi

    2016-01-01

    Differences in DNA collection protocols may be a potential confounder in epigenome-wide association studies (EWAS) using a large number of blood specimens from multiple biobanks and/or cohorts. Here we show that pre-analytical procedures involved in DNA collection can induce systematic bias in the DNA methylation profiles of blood cells, and that this bias can be adjusted by cell-type composition variables. In Experiment 1, whole blood from 16 volunteers was collected to examine the effect of a 24 h storage period at 4°C on DNA methylation profiles as measured using the Infinium HumanMethylation450 BeadChip array. Our statistical analysis showed that the P-value distribution of more than 450,000 CpG sites was similar to the theoretical distribution (in a quantile-quantile plot, λ = 1.03) when comparing two control replicates, but deviated markedly from the theoretical distribution (λ = 1.50) when comparing control and storage conditions. We then considered cell-type composition as a possible cause of the observed bias in DNA methylation profiles and found that the bias associated with the cold storage condition was largely reduced (λadjusted = 1.14) by taking a cell-type composition variable into account. We therefore compared four sample collection protocols used in large-scale Japanese biobanks or cohorts, as well as two control replicates. Systematic biases in DNA methylation profiles were observed between the control and three of the four protocols without adjustment for cell-type composition (λ = 1.12–1.45), and no remarkable biases were seen after adjusting for cell-type composition in all four protocols (λadjusted = 1.00–1.17). These results have important implications for comparing DNA methylation profiles between blood specimens from different sources and may aid the discovery of disease-associated DNA methylation markers and the development of DNA methylation profile-based predictive risk models. PMID:26799745
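    The inflation factor λ reported throughout this abstract is the quantile-quantile comparison reduced to a single number: the median of the observed association test statistics divided by the median expected under the null. For 1-df chi-square statistics that null median is about 0.4549. A minimal sketch (the statistic values below are made up):

```python
import statistics

def genomic_lambda(chisq_stats):
    """Genomic inflation factor: median observed chi-square (1 df) statistic
    over the theoretical chi-square(1) median."""
    CHI2_1DF_MEDIAN = 0.4549364
    return statistics.median(chisq_stats) / CHI2_1DF_MEDIAN
```

    λ near 1 means the bulk of the P-value distribution matches the null, as seen for the control replicates; λ = 1.50 signals the systematic shift that cell-type adjustment removed.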

  6. Adjustment of Cell-Type Composition Minimizes Systematic Bias in Blood DNA Methylation Profiles Derived by DNA Collection Protocols.

    PubMed

    Shiwa, Yuh; Hachiya, Tsuyoshi; Furukawa, Ryohei; Ohmomo, Hideki; Ono, Kanako; Kudo, Hisaaki; Hata, Jun; Hozawa, Atsushi; Iwasaki, Motoki; Matsuda, Koichi; Minegishi, Naoko; Satoh, Mamoru; Tanno, Kozo; Yamaji, Taiki; Wakai, Kenji; Hitomi, Jiro; Kiyohara, Yutaka; Kubo, Michiaki; Tanaka, Hideo; Tsugane, Shoichiro; Yamamoto, Masayuki; Sobue, Kenji; Shimizu, Atsushi

    2016-01-01

    Differences in DNA collection protocols may be a potential confounder in epigenome-wide association studies (EWAS) using a large number of blood specimens from multiple biobanks and/or cohorts. Here we show that pre-analytical procedures involved in DNA collection can induce systematic bias in the DNA methylation profiles of blood cells, and that this bias can be adjusted by cell-type composition variables. In Experiment 1, whole blood from 16 volunteers was collected to examine the effect of a 24 h storage period at 4°C on DNA methylation profiles as measured using the Infinium HumanMethylation450 BeadChip array. Our statistical analysis showed that the P-value distribution of more than 450,000 CpG sites was similar to the theoretical distribution (in a quantile-quantile plot, λ = 1.03) when comparing two control replicates, but deviated markedly from the theoretical distribution (λ = 1.50) when comparing control and storage conditions. We then considered cell-type composition as a possible cause of the observed bias in DNA methylation profiles and found that the bias associated with the cold storage condition was largely reduced (λadjusted = 1.14) by taking a cell-type composition variable into account. We therefore compared four sample collection protocols used in large-scale Japanese biobanks or cohorts, as well as two control replicates. Systematic biases in DNA methylation profiles were observed between the control and three of the four protocols without adjustment for cell-type composition (λ = 1.12-1.45), and no remarkable biases were seen after adjusting for cell-type composition in all four protocols (λadjusted = 1.00-1.17). These results have important implications for comparing DNA methylation profiles between blood specimens from different sources and may aid the discovery of disease-associated DNA methylation markers and the development of DNA methylation profile-based predictive risk models.

  7. Design Life Level: Quantifying risk in a changing climate

    NASA Astrophysics Data System (ADS)

    Rootzén, Holger; Katz, Richard W.

    2013-09-01

    In the past, the concepts of return levels and return periods have been standard and important tools for engineering design. However, these concepts are based on the assumption of a stationary climate and do not apply to a changing climate, whether local or global. In this paper, we propose a refined concept, Design Life Level, which quantifies risk in a nonstationary climate and can serve as the basis for communication. In current practice, typical hydrologic risk management focuses on a standard (e.g., in terms of a high quantile corresponding to the specified probability of failure for a single year). Nevertheless, the basic information needed for engineering design should consist of (i) the design life period (e.g., the next 50 years, say 2015-2064); and (ii) the probability (e.g., 5% chance) of a hazardous event (typically, in the form of the hydrologic variable exceeding a high level) occurring during the design life period. Capturing both of these design characteristics, the Design Life Level is defined as an upper quantile (e.g., 5%) of the distribution of the maximum value of the hydrologic variable (e.g., water level) over the design life period. We relate this concept and variants of it to existing literature and illustrate how they, and some useful complementary plots, may be computed and used. One practically important consideration concerns quantifying the statistical uncertainty in estimating a high quantile under nonstationarity.
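    The Design Life Level defined here is the upper quantile of the distribution of the design-life maximum, which lends itself to simulation: draw the annual maxima for each year of the design life (allowing the distribution to change over time), take the maximum, repeat, and read off the quantile. The sketch below uses a hypothetical nonstationary Gumbel-like sampler with an upward location drift; the drift rate and parameters are invented for illustration.

```python
import math
import random

def design_life_level(annual_max_sampler, design_life_years, exceed_prob,
                      reps=2000, seed=42):
    """Upper quantile of the simulated distribution of the maximum of a
    (possibly nonstationary) hydrologic variable over the design life."""
    rng = random.Random(seed)
    maxima = sorted(
        max(annual_max_sampler(rng, year) for year in range(design_life_years))
        for _ in range(reps)
    )
    # level exceeded with probability exceed_prob during the design life
    k = min(int((1.0 - exceed_prob) * reps + 1e-9), reps - 1)
    return maxima[k]

# Hypothetical nonstationary annual maxima: Gumbel draws whose location
# drifts upward over the design life (a stand-in for a climate trend)
def annual_max(rng, year):
    u = rng.random()
    return (10.0 + 0.05 * year) - 2.0 * math.log(-math.log(u))

dll = design_life_level(annual_max, 50, 0.05)  # 5% exceedance chance in 50 yr
```

    Unlike a stationary return level, the result depends on both the chosen design life period and the trend, which is exactly the information the concept is meant to capture.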

  8. How important are determinants of obesity measured at the individual level for explaining geographic variation in body mass index distributions? Observational evidence from Canada using Quantile Regression and Blinder-Oaxaca Decomposition.

    PubMed

    Dutton, Daniel J; McLaren, Lindsay

    2016-04-01

    Obesity prevalence varies between geographic regions in Canada. The reasons for this variation are unclear but most likely implicate both individual-level and population-level factors. The objective of this study was to examine whether equalising correlates of body mass index (BMI) across these geographic regions could be reasonably expected to reduce differences in BMI distributions between regions. Using data from three cycles of the Canadian Community Health Survey (CCHS) 2001, 2003 and 2007 for males and females, we modelled between-region BMI cross-sectionally using quantile regression and Blinder-Oaxaca decomposition of the quantile regression results. We show that while individual-level variables (ie, age, income, education, physical activity level, fruit and vegetable consumption, smoking status, drinking status, family doctor status, rural status, employment in the past 12 months and marital status) may be important correlates of BMI within geographic regions, those variables are not capable of explaining variation in BMI between regions. Equalisation of common correlates of BMI between regions cannot be reasonably expected to reduce differences in the BMI distributions between regions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
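    The Blinder-Oaxaca idea used here splits a between-group outcome gap into an "explained" part (differences in covariate levels) and an "unexplained" part (differences in coefficients). The sketch below shows the classic two-fold decomposition at the mean with a single covariate; the study applies the decomposition to quantile regression results, which is more involved, and the data values here are invented.

```python
def ols_1d(x, y):
    """Slope and intercept of a simple least-squares regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

def oaxaca_mean_gap(xa, ya, xb, yb):
    """Two-fold Blinder-Oaxaca decomposition of the mean outcome gap
    (group A minus group B), using group B's coefficients as reference."""
    ba, aa = ols_1d(xa, ya)
    bb, ab = ols_1d(xb, yb)
    mxa, mxb = sum(xa) / len(xa), sum(xb) / len(xb)
    explained = bb * (mxa - mxb)                  # covariate-level differences
    unexplained = (aa - ab) + mxa * (ba - bb)     # coefficient differences
    return explained, unexplained
```

    The two parts sum to the raw mean gap; a gap that is almost entirely "unexplained", as in this study's between-region comparisons, is exactly why equalising covariates would not close it.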

  9. Moisture availability constraints on the leaf area to sapwood area ratio: analysis of measurements on Australian evergreen angiosperm trees

    NASA Astrophysics Data System (ADS)

    Togashi, Henrique; Prentice, Colin; Evans, Bradley; Forrester, David; Drake, Paul; Feikema, Paul; Brooksbank, Kim; Eamus, Derek; Taylor, Daniel

    2014-05-01

    The leaf area to sapwood area ratio (LA:SA) is a key plant trait that links photosynthesis to transpiration. Pipe model theory states that the sapwood cross-sectional area of a stem or branch at any point should scale isometrically with the area of leaves distal to that point. Optimization theory further suggests that LA:SA should decrease towards drier climates. Although acclimation of LA:SA to climate has been reported within species, much less is known about the scaling of this trait with climate among species. We compiled LA:SA measurements from 184 species of Australian evergreen angiosperm trees. The pipe model was broadly confirmed, based on measurements on branches and trunks of trees from one to 27 years old. We found considerable scatter in LA:SA among species. However, quantile regression showed strong (0.2 < R1 < 0.65) positive relationships between two climatic moisture indices and the lowermost and uppermost quantiles of log LA:SA, suggesting that moisture availability constrains the envelope of minimum and maximum values of LA:SA typical for any given climate.

  10. Morphological and moisture availability controls of the leaf area-to-sapwood area ratio: analysis of measurements on Australian trees.

    PubMed

    Togashi, Henrique Furstenau; Prentice, Iain Colin; Evans, Bradley John; Forrester, David Ian; Drake, Paul; Feikema, Paul; Brooksbank, Kim; Eamus, Derek; Taylor, Daniel

    2015-03-01

    The leaf area-to-sapwood area ratio (LA:SA) is a key plant trait that links photosynthesis to transpiration. The pipe model theory states that the sapwood cross-sectional area of a stem or branch at any point should scale isometrically with the area of leaves distal to that point. Optimization theory further suggests that LA:SA should decrease toward drier climates. Although acclimation of LA:SA to climate has been reported within species, much less is known about the scaling of this trait with climate among species. We compiled LA:SA measurements from 184 species of Australian evergreen angiosperm trees. The pipe model was broadly confirmed, based on measurements on branches and trunks of trees from one to 27 years old. Despite considerable scatter in LA:SA among species, quantile regression showed strong (0.2 < R1 < 0.65) positive relationships between two climatic moisture indices and the lowermost (5%) and uppermost (5-15%) quantiles of log LA:SA, suggesting that moisture availability constrains the envelope of minimum and maximum values of LA:SA typical for any given climate. Interspecific differences in plant hydraulic conductivity are probably responsible for the large scatter of values in the mid-quantile range and may be an important determinant of tree morphology.

  11. Morphological and moisture availability controls of the leaf area-to-sapwood area ratio: analysis of measurements on Australian trees

    PubMed Central

    Togashi, Henrique Furstenau; Prentice, Iain Colin; Evans, Bradley John; Forrester, David Ian; Drake, Paul; Feikema, Paul; Brooksbank, Kim; Eamus, Derek; Taylor, Daniel

    2015-01-01

    The leaf area-to-sapwood area ratio (LA:SA) is a key plant trait that links photosynthesis to transpiration. The pipe model theory states that the sapwood cross-sectional area of a stem or branch at any point should scale isometrically with the area of leaves distal to that point. Optimization theory further suggests that LA:SA should decrease toward drier climates. Although acclimation of LA:SA to climate has been reported within species, much less is known about the scaling of this trait with climate among species. We compiled LA:SA measurements from 184 species of Australian evergreen angiosperm trees. The pipe model was broadly confirmed, based on measurements on branches and trunks of trees from one to 27 years old. Despite considerable scatter in LA:SA among species, quantile regression showed strong (0.2 < R1 < 0.65) positive relationships between two climatic moisture indices and the lowermost (5%) and uppermost (5–15%) quantiles of log LA:SA, suggesting that moisture availability constrains the envelope of minimum and maximum values of LA:SA typical for any given climate. Interspecific differences in plant hydraulic conductivity are probably responsible for the large scatter of values in the mid-quantile range and may be an important determinant of tree morphology. PMID:25859331

  12. The heterogeneous effects of urbanization and income inequality on CO2 emissions in BRICS economies: evidence from panel quantile regression.

    PubMed

    Zhu, Huiming; Xia, Hang; Guo, Yawei; Peng, Cheng

    2018-04-12

    This paper empirically examines the effects of urbanization and income inequality on CO2 emissions in the BRICS economies (i.e., Brazil, Russia, India, China, and South Africa) during the period 1994-2013. The method used is panel quantile regression, which takes into account unobserved individual heterogeneity and distributional heterogeneity. Our empirical results indicate that urbanization has a significant and negative impact on carbon emissions, except in the 80th, 90th, and 95th quantiles. We also quantitatively investigate the direct and indirect effects of urbanization on carbon emissions, and the results show that we may underestimate urbanization's effect on carbon emissions if we ignore its indirect effect. In addition, in middle- and high-emission countries, income inequality has a significant and positive impact on carbon emissions. The results of our study indicate that in the BRICS economies, there is an inverted U-shaped environmental Kuznets curve (EKC) between GDP per capita and carbon emissions. The conclusions of this study have important policy implications: policymakers should try to narrow the income gap between the rich and the poor to improve environmental quality, and the BRICS economies can speed up urbanization to reduce carbon emissions, but they must improve energy efficiency and use clean energy to the greatest extent possible in the process.
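    The paper's central point, that a covariate's effect can differ across quantiles of the outcome distribution, can be illustrated on synthetic heteroscedastic data: check-loss fits at several quantiles recover different slopes. This is a toy sketch, not the authors' panel estimator:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic data in which the covariate's effect on the response grows
# toward the upper quantiles (purely illustrative).
x = rng.uniform(0.0, 2.0, 400)
y = 2.0 + 1.0 * x + rng.normal(0.0, 0.5 * (1.0 + x))

def check_loss(beta, x, y, tau):
    """Mean pinball loss for the linear quantile model y ~ b0 + b1*x."""
    r = y - (beta[0] + beta[1] * x)
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

slopes = {}
for tau in (0.1, 0.5, 0.9):
    fit = minimize(check_loss, x0=[0.0, 0.0], args=(x, y, tau),
                   method="Nelder-Mead")
    slopes[tau] = fit.x[1]   # the covariate's estimated effect at this quantile
```

    On this data the fitted slope increases from the 10th to the 90th quantile, the kind of distributional heterogeneity a mean regression would average away.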

  13. Modeling soil organic carbon with Quantile Regression: Dissecting predictors' effects on carbon stocks

    NASA Astrophysics Data System (ADS)

    Lombardo, Luigi; Saia, Sergio; Schillaci, Calogero; Mai, P. Martin; Huser, Raphaël

    2018-05-01

    Soil Organic Carbon (SOC) estimation is crucial to manage both natural and anthropic ecosystems and has recently been put under the magnifying glass after the 2016 Paris Agreement due to its relationship with greenhouse gases. Statistical applications have dominated SOC stock mapping at the regional scale so far. However, the community has hardly ever attempted to implement Quantile Regression (QR) to spatially predict the SOC distribution. In this contribution, we test QR to estimate SOC stock (0-30 cm depth) in the agricultural areas of a highly variable semi-arid region (Sicily, Italy, around 25,000 km2) by using topographic and remotely sensed predictors. We also compare the results with those from available SOC stock measurements. The QR models produced robust performances and allowed us to recognize dominant effects among the predictors with respect to the considered quantile. This information, currently lacking, suggests that QR can discern predictor influences on SOC stock at specific sub-domains of each predictor. In this work, the predictive map generated at the median shows lower errors than those of the Joint Research Centre and International Soil Reference and Information Centre benchmarks. The results suggest the use of QR as a comprehensive and effective method to map SOC using legacy data in agro-ecosystems. The R code scripted in this study for QR is included.

  14. The role of ensemble post-processing for modeling the ensemble tail

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Over the past decades, the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecast and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied to real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change in predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time, model assumptions and the use of different covariates (ensemble mean, maximum, spread, ...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peak-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail. Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol. Soc. 134: 2051-2066. Buizza and Leutbecher, 2015: The forecast skill horizon, Q. J. R. Meteorol. Soc. 141: 3366-3382. Ferro, 2007: A probability model for verifying deterministic forecasts of extreme events. Weather and Forecasting 22 (5), 1089-1100. Friederichs, 2010: Statistical downscaling of extreme precipitation events using extreme value theory. Extremes 13, 109-132. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141: 807-818.
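    The peak-over-threshold tail modeling mentioned above (a generalized Pareto distribution fitted to exceedances over a high threshold, then inverted for extreme quantiles) can be sketched as follows. The sample is synthetic and the threshold choice is illustrative:

```python
import numpy as np
from scipy import stats

# Synthetic heavy-tailed sample standing in for a forecast variable.
sample = stats.genpareto.rvs(c=0.2, scale=1.0, size=5000, random_state=2)

# Peak-over-threshold: fit a generalized Pareto distribution (GPD) to
# exceedances above a high (here the 90th-percentile) threshold.
threshold = np.quantile(sample, 0.90)
exceed = sample[sample > threshold] - threshold
c_hat, _, scale_hat = stats.genpareto.fit(exceed, floc=0.0)

# Extreme-quantile estimate via the fitted tail: solve
# P(X > q) = p_exceed * P(GPD > q - threshold) = 0.001.
p_exceed = (sample > threshold).mean()
q999 = threshold + stats.genpareto.ppf(1.0 - 0.001 / p_exceed,
                                       c_hat, scale=scale_hat)
```

    In the conditional setting discussed in the abstract, the GPD scale (and possibly shape) would additionally be regressed on covariates such as the ensemble mean or spread.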

  15. Factors Associated with Adherence to Adjuvant Endocrine Therapy Among Privately Insured and Newly Diagnosed Breast Cancer Patients: A Quantile Regression Analysis.

    PubMed

    Farias, Albert J; Hansen, Ryan N; Zeliadt, Steven B; Ornelas, India J; Li, Christopher I; Thompson, Beti

    2016-08-01

    Adherence to adjuvant endocrine therapy (AET) for estrogen receptor-positive breast cancer remains suboptimal, which suggests that women are not getting the full benefit of the treatment to reduce breast cancer recurrence and mortality. The majority of studies on adherence to AET focus on identifying factors among those women at the highest levels of adherence and provide little insight on factors that influence medication use across the distribution of adherence. To understand how factors influence adherence among women across low and high levels of adherence. A retrospective evaluation was conducted using the Truven Health MarketScan Commercial Claims and Encounters Database from 2007-2011. Privately insured women aged 18-64 years who were recently diagnosed and treated for breast cancer and who initiated AET within 12 months of primary treatment were assessed. Adherence was measured as the proportion of days covered (PDC) over a 12-month period. Simultaneous multivariable quantile regression was used to assess the association between treatment and demographic factors, use of mail order pharmacies, medication switching, and out-of-pocket costs and adherence. The effect of each variable was examined at the 40th, 60th, 80th, and 95th quantiles. Among the 6,863 women in the cohort, mail order pharmacies had the greatest influence on adherence at the 40th quantile, associated with a 29.6% (95% CI = 22.2-37.0) higher PDC compared with retail pharmacies. Out-of-pocket cost for a 30-day supply of AET greater than $20 was associated with an 8.6% (95% CI = 2.8-14.4) lower PDC versus $0-$9.99. The main factors that influenced adherence at the 95th quantile were mail order pharmacies, associated with a 4.4% higher PDC (95% CI = 3.8-5.0) versus retail pharmacies, and switching AET medication 2 or more times, associated with a 5.6% lower PDC versus not switching (95% CI = 2.3-9.0). Factors associated with adherence differed across quantiles. 
Addressing the use of mail order pharmacies and out-of-pocket costs for AET may have the greatest influence on improving adherence among those women with low adherence. This research was supported by a Ruth L. Kirschstein National Research Service Award for Individual Predoctoral Fellowship grant from the National Cancer Institute (grant number F31 CA174338), which was awarded to Farias. Additionally, Farias was funded by a Postdoctoral Fellowship at the University of Texas School of Public Health Cancer Education and Career Development Program through the National Cancer Institute (NIH Grant R25 CA57712). The other authors declare no conflicts of interest. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health. Farias was primarily responsible for the study concept and design, along with Hansen and Zeliadt and with assistance from the other authors. Farias, Hansen, and Zeliadt took the lead in data interpretation, assisted by the other authors. The manuscript was written by Farias, along with Thompson and assisted by the other authors, and was revised by Ornelas, Li, and Farias, with assistance from the other authors.
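    The adherence measure used here, proportion of days covered (PDC), is the fraction of days in a fixed observation window on which medication was available according to pharmacy fill records. A minimal sketch; the fill dates and day supplies below are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical fills: (fill date, days supplied) over a 365-day window.
fills = [
    (date(2010, 1, 1), 90),
    (date(2010, 4, 15), 90),   # a 14-day gap precedes this fill
    (date(2010, 8, 1), 90),
    (date(2010, 11, 1), 90),   # supply runs past the window and is clipped
]
start = date(2010, 1, 1)
window = 365

# Mark each covered day (as an offset from the window start) exactly once,
# so overlapping supplies are not double-counted.
covered = set()
for fill_date, supply in fills:
    for d in range(supply):
        offset = (fill_date + timedelta(days=d) - start).days
        if 0 <= offset < window:
            covered.add(offset)

pdc = len(covered) / window   # proportion of days covered
```

    For this hypothetical record, 331 of 365 days are covered, giving a PDC of about 0.91, above the 0.80 cutoff often used to define adherence.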

  16. An evaluation of the effectiveness of a risk-based monitoring approach implemented with clinical trials involving implantable cardiac medical devices.

    PubMed

    Diani, Christopher A; Rock, Angie; Moll, Phil

    2017-12-01

    Background Risk-based monitoring is a concept endorsed by the Food and Drug Administration to improve clinical trial data quality by focusing monitoring efforts on critical data elements and higher risk investigator sites. BIOTRONIK approached this by implementing a comprehensive strategy that assesses risk and data quality through a combination of operational controls and data surveillance. This publication demonstrates the effectiveness of a data-driven risk assessment methodology when used in conjunction with a tailored monitoring plan. Methods We developed a data-driven risk assessment system to rank 133 investigator sites comprising 3442 subjects and identify those sites that pose a potential risk to the integrity of data collected in implantable cardiac device clinical trials. This included identification of specific risk factors and a weighted scoring mechanism. We conducted trend analyses for risk assessment data collected over 1 year to assess the overall impact of our data surveillance process combined with other operational monitoring efforts. Results Trending analyses of key risk factors revealed an improvement in the quality of data collected during the observation period. The three risk factors follow-up compliance rate, unavailability of critical data, and noncompliance rate correspond closely with Food and Drug Administration's risk-based monitoring guidance document. Among these three risk factors, 100% (12/12) of quantiles analyzed showed an increase in data quality. Of these, 67% (8/12) of the improving trends in worst performing quantiles had p-values less than 0.05, and 17% (2/12) had p-values between 0.05 and 0.06. Among the poorest performing site quantiles, there was a statistically significant decrease in subject follow-up noncompliance rates, protocol noncompliance rates, and incidence of missing critical data. 
Conclusion One year after implementation of a comprehensive strategy for risk-based monitoring, including a data-driven risk assessment methodology to target on-site monitoring visits, statistically significant improvement was seen in a majority of measurable risk factors at the worst performing site quantiles. For the three risk factors most critical to the overall compliance of cardiac rhythm management medical device studies (follow-up compliance rate, unavailability of critical data, and noncompliance rate), we measured significant improvement in data quality. Although improvement at the worst performing site quantiles was not significant for some risk factors, such as subject attrition, the data-driven risk assessment highlighted key areas on which to continue focusing both on-site and centralized monitoring efforts. Data-driven surveillance of clinical trial performance provides actionable observations that can improve site performance. Clinical trials utilizing risk-based monitoring by leveraging a data-driven quality assessment combined with specific operational procedures may lead to improvements in data quality and resource efficiencies.

  17. Explaining the relation between pathological gambling and depression: Rumination as an underlying common cause.

    PubMed

    Krause, Kristian; Bischof, Anja; Lewin, Silvia; Guertler, Diana; Rumpf, Hans-Jürgen; John, Ulrich; Meyer, Christian

    2018-05-30

    Background and aims Symptoms of pathological gambling (SPG) and depression often co-occur. The nature of this relationship remains unclear. Rumination, which is well known to be associated with depression, might act as a common underlying factor explaining the frequent co-occurrence of both conditions. The aim of this study is to analyze associations between the rumination subfactors brooding and reflection and SPG. Methods Participants aged 14-64 years were recruited within an epidemiological study on pathological gambling in Germany. Cross-sectional data of 506 (80.4% male) individuals with a history of gambling problems were analyzed. The assessment included a standardized clinical interview. To examine the effects of rumination across different levels of problem gambling severity, sequential quantile regression was used to analyze the association between the rumination subfactors and SPG. Results Brooding (p = .005) was positively associated with the severity of problem gambling after adjusting for reflection, depressive symptoms, and sociodemographic variables. Along the distribution of problem gambling severity, findings hold for all but the lowest severity level. Reflection (p = .347) was not associated with the severity of problem gambling at the median. Along the distribution of problem gambling severity, there was an inverse association at only one quantile. Discussion and conclusions Brooding might be important in the development and maintenance of problem gambling. With its relations to depression and problem gambling, it might be crucial when it comes to explaining the high comorbidity rates between SPG and depression. The role of reflection in SPG remains inconclusive.

  18. Simulation of extreme rainfall and projection of future changes using the GLIMCLIM model

    NASA Astrophysics Data System (ADS)

    Rashid, Md. Mamunur; Beecham, Simon; Chowdhury, Rezaul Kabir

    2017-10-01

    In this study, the performance of the Generalized LInear Modelling of daily CLImate sequence (GLIMCLIM) statistical downscaling model was assessed in simulating extreme rainfall indices and annual maximum daily rainfall (AMDR), using daily rainfall downscaled from National Centers for Environmental Prediction (NCEP) reanalysis and Coupled Model Intercomparison Project Phase 5 (CMIP5) general circulation model (GCM) output datasets (four GCMs and two scenarios); changes were then estimated for the future period 2041-2060. The model was able to reproduce the monthly variations in the extreme rainfall indices reasonably well when forced by the NCEP reanalysis datasets. Frequency Adapted Quantile Mapping (FAQM) was used to remove bias in the simulated daily rainfall when forced by CMIP5 GCMs, which reduced the discrepancy between observed and simulated extreme rainfall indices. Although the observed AMDR were within the 2.5th and 97.5th percentiles of the simulated AMDR, the model consistently under-predicted the inter-annual variability of AMDR. A non-stationary model was developed using the generalized linear model for location, shape and scale to estimate the AMDR with an annual exceedance probability of 0.01. The study shows that, in general, AMDR is likely to decrease in the future. The Onkaparinga catchment will also experience drier conditions due to an increase in consecutive dry days coinciding with decreases in heavy (>long-term 90th percentile) rainfall days, the empirical 90th quantile of rainfall and maximum 5-day consecutive total rainfall for the future period (2041-2060) compared to the base period (1961-2000).
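    Quantile mapping of the kind used here for bias correction replaces each model value with the observed value at the same empirical quantile, so the corrected series inherits the observed distribution. A minimal empirical-CDF sketch on synthetic gamma-distributed "rainfall" (not the FAQM implementation from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Pseudo-observations and a wet-biased pseudo-model series (illustrative).
obs = rng.gamma(shape=2.0, scale=5.0, size=2000)
model = rng.gamma(shape=2.0, scale=7.0, size=2000)

def quantile_map(values, model_ref, obs_ref):
    """Map each value to the observed quantile at its model-ECDF rank."""
    ranks = np.searchsorted(np.sort(model_ref), values) / len(model_ref)
    ranks = np.clip(ranks, 0.0005, 0.9995)   # keep ranks strictly inside (0, 1)
    return np.quantile(obs_ref, ranks)

corrected = quantile_map(model, model, obs)
```

    After mapping, the wet bias in the mean is removed; variants such as FAQM additionally adjust the wet-day frequency before mapping.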

  19. Overweight and Obesity in Southern Italy: their association with social and life-style characteristics and their effect on levels of biologic markers.

    PubMed

    Osella, Alberto R; Díaz, María Del Pilar; Cozzolongo, Rafaelle; Bonfiglio, Caterina; Franco, Isabella; Abrescia, Daniela Isabel; Bianco, Antonella; Giampiero, Elba Silvana; Petruzzi, José; Elsa, Lanzilota; Mario, Correale; Mastrosimni, Anna María; Giocchino, Leandro

    2014-01-01

    In the last decades, overweight and obesity have been transformed from minor public health issues into a major threat to public health, affecting the most affluent societies as well as less developed ones. The aims were to estimate overweight and obesity prevalence in adults and their association with some social determinants, and to assess the effect of these two conditions on levels of biologic and biochemical characteristics, by means of a population-based study. A random sample of the general population of Putignano was drawn. All participants completed a general pre-coded questionnaire and a Food Frequency Questionnaire; anthropometric measures were taken and a venous blood sample was drawn. All subjects underwent liver ultrasonography. Data were described by means of tables and quantile regression was then performed. Overall prevalence of overweight and obesity was 34.5% and 16.1%, respectively. Both overweight and obesity were more frequent among male, married and low socio-economic position subjects. Normal weight was increasingly frequent at higher levels of education. Overweight and obese subjects more frequently had nonalcoholic fatty liver disease, hypertension and altered biochemical markers. Quantile regression showed a statistically significant association of age with overweight and obesity (maximum at about 64.8 years old), gender (female) and low levels of education in both overweight and obesity. More than 10 g/day of wine intake was associated with overweight. The prevention and treatment of overweight/obesity on a population-wide basis are needed. Population-based strategies should also improve social and physical environmental contexts for healthful lifestyles.

  20. Inequities in utilization of reproductive and maternal health services in Ethiopia.

    PubMed

    Bobo, Firew Tekle; Yesuf, Elias Ali; Woldie, Mirkuzie

    2017-06-19

    Disparities in health services utilization within and between regional states of countries with diverse socio-cultural and economic conditions, such as Ethiopia, are a frequent encounter. Understanding and taking measures to address unnecessary and avoidable differences in the use of reproductive and maternal health services is a key concern in Ethiopia. The aim of the study was to examine the degree of equity in reproductive and maternal health services utilization in Ethiopia. Data from the Ethiopia Demographic and Health Survey 2014 were analyzed. We assessed inequities in utilization of modern contraceptive methods, antenatal care, facility-based delivery and postnatal checkup. Four standard equity measurement methods were used: equity gaps, rate ratios, the concentration curve and the concentration index. Inequities in service utilization were exhibited favoring women in developed regions, urban residents, the most educated and the wealthy. Antenatal care by a skilled provider was three times higher among women with post-secondary education than among mothers with no education. Women in the highest wealth quantile had about 12 times higher skilled birth attendance than those in the lowest wealth quantile. The rate of postnatal care use among urban residents was about 6 times that of women in rural areas. Use of modern contraceptive methods was the most equitably utilized service, while birth at a health facility was less equitable across all economic levels, favoring the wealthy. Considerable inequity between and within regions of Ethiopia in the use of maternal health services was demonstrated. Strategically targeting social determinants of health, with special emphasis on women's education and economic empowerment, will substantially contribute to altering the current situation favorably.
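    The concentration index used above summarizes whether a service is disproportionately used by the wealthy: it is commonly computed as C = 2*cov(h, r)/mean(h), where h is the health/use variable and r the fractional wealth rank, with C > 0 indicating pro-rich inequity. A sketch on synthetic household data (illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic households: service use rises with wealth rank (illustrative).
n = 1000
wealth = rng.uniform(size=n)
wealth_rank = (wealth.argsort().argsort() + 0.5) / n   # fractional ranks in (0, 1)
use = rng.binomial(1, 0.2 + 0.6 * wealth_rank)         # pro-rich use pattern

# Concentration index via the covariance formula; positive C means the
# service is concentrated among the wealthy.
conc_index = 2.0 * np.cov(use, wealth_rank)[0, 1] / use.mean()
```

    For this construction the index is clearly positive, mirroring the pro-rich pattern the study reports for facility-based delivery.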

  1. A Data Centred Method to Estimate and Map Changes in the Full Distribution of Daily Precipitation and Its Exceedances

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles or thresholds in distributions of variables such as daily temperature or precipitation. We develop a method[1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, to specifically address the challenges presented by 'heavy-tailed' variables such as daily precipitation. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation in those extreme precipitation days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining which quantiles and geographical locations show the greatest change, but also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. 
Our results identify regionally consistent patterns which, dependent on location, show systematic increase in precipitation on the wettest days, shifts in precipitation patterns to less moderate days and more heavy days, and drying across all days which is of potential value in adaptation planning. [1] S C Chapman, D A Stainforth, N W Watkins, 2013 Phil. Trans. R. Soc. A, 371 20120287; D. A. Stainforth, S. C. Chapman, N. W. Watkins, 2013 Environ. Res. Lett. 8, 034031 [2] Haylock et al. 2008 J. Geophys. Res (Atmospheres), 113, D20119
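    In its simplest form, assessing which quantiles change most between two periods reduces to comparing per-quantile estimates from the two samples. A synthetic sketch (gamma samples standing in for daily precipitation; this is not the authors' method, which additionally separates natural variability from secular change):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily "precipitation" from two periods; in the second period
# the wet days are intensified (scale increased), so upper quantiles shift
# more than the median (illustrative only).
period1 = rng.gamma(shape=0.8, scale=5.0, size=3000)
period2 = rng.gamma(shape=0.8, scale=6.0, size=3000)

taus = np.array([0.50, 0.90, 0.95, 0.99])
q1 = np.quantile(period1, taus)
q2 = np.quantile(period2, taus)
change = q2 - q1   # absolute change at each quantile
```

    The change is largest at the highest quantiles, the "increase on the wettest days" signature described above; in practice one would also bootstrap the sampling uncertainty of each quantile, which grows rapidly in the tail.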

  2. Incense Burning during Pregnancy and Birth Weight and Head Circumference among Term Births: The Taiwan Birth Cohort Study.

    PubMed

    Chen, Le-Yu; Ho, Christine

    2016-09-01

    Incense burning for rituals or religious purposes is an important tradition in many countries. However, incense smoke contains particulate matter and gas products such as carbon monoxide, sulfur, and nitrogen dioxide, which are potentially harmful to health. We analyzed the relationship between prenatal incense burning and birth weight and head circumference at birth using the Taiwan Birth Cohort Study. We also analyzed whether the associations varied by sex and along the distribution of birth outcomes. We performed ordinary least squares (OLS) and quantile regression analyses on a sample of 15,773 term births (> 37 gestational weeks; 8,216 boys and 7,557 girls) in Taiwan in 2005. The associations were estimated separately for boys and girls as well as for the population as a whole. We controlled extensively for factors that may be correlated with incense burning and birth weight and head circumference, such as parental religion, demographics, and health characteristics, as well as pregnancy-related variables. Findings from fully adjusted OLS regressions indicated that exposure to incense was associated with lower birth weight in boys (-18 g; 95% CI: -36, -0.94) but not girls (1 g; 95% CI: -17, 19; interaction p-value = 0.31). Associations with head circumference were negative for boys (-0.95 mm; 95% CI: -1.8, -0.16) and girls (-0.71 mm; 95% CI: -1.5, 0.11; interaction p-value = 0.73). Quantile regression results suggested that the negative associations were larger among the lower quantiles of birth outcomes. OLS regressions showed that prenatal incense burning was associated with lower birth weight for boys and smaller head circumference for boys and girls. The associations were more pronounced among the lower quantiles of birth outcomes. Further research is necessary to confirm whether incense burning has differential effects by sex. Chen LY, Ho C. 2016. 
Incense burning during pregnancy and birth weight and head circumference among term births: The Taiwan Birth Cohort Study. Environ Health Perspect 124:1487-1492; http://dx.doi.org/10.1289/ehp.1509922.

  3. Incense Burning during Pregnancy and Birth Weight and Head Circumference among Term Births: The Taiwan Birth Cohort Study

    PubMed Central

    Chen, Le-Yu; Ho, Christine

    2016-01-01

    Background: Incense burning for rituals or religious purposes is an important tradition in many countries. However, incense smoke contains particulate matter and gas products such as carbon monoxide, sulfur, and nitrogen dioxide, which are potentially harmful to health. Objectives: We analyzed the relationship between prenatal incense burning and birth weight and head circumference at birth using the Taiwan Birth Cohort Study. We also analyzed whether the associations varied by sex and along the distribution of birth outcomes. Methods: We performed ordinary least squares (OLS) and quantile regression analyses on a sample of 15,773 term births (> 37 gestational weeks; 8,216 boys and 7,557 girls) in Taiwan in 2005. The associations were estimated separately for boys and girls as well as for the population as a whole. We controlled extensively for factors that may be correlated with incense burning and birth weight and head circumference, such as parental religion, demographics, and health characteristics, as well as pregnancy-related variables. Results: Findings from fully adjusted OLS regressions indicated that exposure to incense was associated with lower birth weight in boys (–18 g; 95% CI: –36, –0.94) but not girls (1 g; 95% CI: –17, 19; interaction p-value = 0.31). Associations with head circumference were negative for boys (–0.95 mm; 95% CI: –1.8, –0.16) and girls (–0.71 mm; 95% CI: –1.5, 0.11; interaction p-value = 0.73). Quantile regression results suggested that the negative associations were larger among the lower quantiles of birth outcomes. Conclusions: OLS regressions showed that prenatal incense burning was associated with lower birth weight for boys and smaller head circumference for boys and girls. The associations were more pronounced among the lower quantiles of birth outcomes. Further research is necessary to confirm whether incense burning has differential effects by sex. Citation: Chen LY, Ho C. 2016. 
Incense burning during pregnancy and birth weight and head circumference among term births: The Taiwan Birth Cohort Study. Environ Health Perspect 124:1487–1492; http://dx.doi.org/10.1289/ehp.1509922 PMID:26967367

  4. Stochastic extreme downscaling model for an assessment of changes in rainfall intensity-duration-frequency curves over South Korea using multiple regional climate models

    NASA Astrophysics Data System (ADS)

    So, Byung-Jin; Kim, Jin-Young; Kwon, Hyun-Han; Lima, Carlos H. R.

    2017-10-01

    A conditional copula-function-based downscaling model in a fully Bayesian framework is developed in this study to evaluate future changes in intensity-duration-frequency (IDF) curves in South Korea. The model incorporates a quantile mapping approach for bias correction, while integrated Bayesian inference allows accounting for parameter uncertainties. The proposed approach is used to temporally downscale expected changes in daily rainfall, inferred from multiple CORDEX-RCMs based on Representative Concentration Pathways (RCPs) 4.5 and 8.5 scenarios, into sub-daily temporal scales. Among the CORDEX-RCMs, a noticeable increase in rainfall intensity is observed in the HadGem3-RA (9%), RegCM (28%), and SNU_WRF (13%) on average, whereas no noticeable changes are observed in the GRIMs (-2%) for the period 2020-2050. More specifically, a 5-30% increase in rainfall intensity is expected in all of the CORDEX-RCMs for 50-year return values under the RCP 8.5 scenario. Uncertainty in simulated rainfall intensity gradually decreases toward the longer durations, which is largely associated with the enhanced strength of the relationship with the 24-h annual maximum rainfalls (AMRs). A primary advantage of the proposed model is that projected changes in future rainfall intensities are well preserved.
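    Return values like the 50-year intensities discussed above are quantiles of a fitted extreme-value distribution: the T-year return value is the (1 - 1/T) quantile of the annual-maximum distribution. A sketch using a GEV fit to synthetic annual maxima (illustrative only, not the study's Bayesian copula model):

```python
import numpy as np
from scipy import stats

# Synthetic annual maximum daily rainfall for 60 "years", GEV-distributed
# (parameter values are illustrative).
annual_max = stats.genextreme.rvs(c=-0.1, loc=50.0, scale=15.0,
                                  size=60, random_state=5)
shape, loc, scale = stats.genextreme.fit(annual_max)

def return_value(T):
    """T-year return value = (1 - 1/T) quantile of the fitted GEV."""
    return stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)

rv10 = return_value(10)   # 10-year return value
rv50 = return_value(50)   # 50-year return value
```

    Repeating the fit per duration (1 h, 6 h, 24 h, ...) yields the points of an IDF curve; a Bayesian treatment would replace the point fit with a posterior over (shape, loc, scale) to carry parameter uncertainty into the return values.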

  5. Estimation of local extreme suspended sediment concentrations in California Rivers.

    PubMed

    Tramblay, Yves; Saint-Hilaire, André; Ouarda, Taha B M J; Moatar, Florentina; Hecht, Barry

    2010-09-01

    The total amount of suspended sediment load carried by a stream during a year is usually transported during one or several extreme events related to high river flow and intense rainfall, leading to very high suspended sediment concentrations (SSCs). In this study, quantiles of SSC derived from annual maxima and the 99th percentile of SSC series are estimated locally in a site-specific approach using regional information. Analyses of relationships between physiographic characteristics and the selected indicators were undertaken using the localities within a 5-km radius draining to each sampling site. Multiple regression models were built to test the regional estimation of these indicators of suspended sediment transport. To assess the accuracy of the estimates, a jackknife re-sampling procedure was used to compute the relative bias and root mean square error of the models. Results show that for the 19 stations considered in California, the extreme SSCs can be estimated with 40-60% uncertainty, depending on the presence of flow regulation in the basin. This modelling approach is likely to prove functional in other Mediterranean-climate watersheds since it appears useful in California, where geologic, climatic, physiographic, and land-use conditions are highly variable.
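    The jackknife accuracy assessment described above refits the regional model with each site left out in turn and scores the prediction at the held-out site. A minimal sketch with synthetic site data (the predictor, target, and simple linear model are placeholders for the paper's multiple regressions):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic regional dataset: one physiographic predictor per site and an
# extreme-SSC indicator to be estimated (illustrative names and values).
n_sites = 19
drainage = rng.uniform(1.0, 100.0, n_sites)
ssc = 200.0 + 2.0 * drainage + rng.normal(0.0, 20.0, n_sites)

# Leave-one-out (jackknife): refit without site i, predict site i.
errors = []
for i in range(n_sites):
    mask = np.arange(n_sites) != i
    slope, intercept = np.polyfit(drainage[mask], ssc[mask], 1)
    pred = intercept + slope * drainage[i]
    errors.append((pred - ssc[i]) / ssc[i])   # relative error at held-out site

errors = np.array(errors)
rel_bias = errors.mean()                      # relative bias
rel_rmse = np.sqrt((errors ** 2).mean())      # relative RMSE
```

    Because RMSE is the root of (variance + bias squared), the relative RMSE always bounds the absolute relative bias; reporting both separates systematic from random error.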

  6. North American wintertime temperature anomalies: the role of El Niño diversity and differential teleconnections

    NASA Astrophysics Data System (ADS)

    Beyene, Mussie T.; Jain, Shaleen

    2018-06-01

    El Niño-Southern Oscillation (ENSO) teleconnections induce wintertime surface air temperature (SAT) anomalies over North America that show inter-event variability, asymmetry, and nonlinearity. This diagnostic study appraises the assumption that ENSO-induced teleconnections are adequately characterized as symmetric shifts in the SAT probability distributions for North American locations. To this end, a new conditional quantile estimation approach presented here (a) incorporates the detailed nature of the location and amplitude of SST anomalies, in particular the Eastern Pacific (EP) and Central Pacific (CP) ENSO events, based on their two leading principal components, and (b) characterizes the differential sensitivity to ENSO over the entire range of SATs. Statistical significance is assessed using a wild bootstrap approach. Conditional risk of upper and lower quartile SATs, conditioned on archetypical ENSO states, is derived. There is marked asymmetry in ENSO effects on the likelihood of upper and lower quartile winter SATs for most North American regions. CP El Niño patterns show a 20-80% decrease in the likelihood of lower quartile SATs for Canada and the US west coast and a 20-40% increase across the southeastern US. However, the upper quartile SAT for large swathes of Canada shows no sensitivity to CP El Niño. Similarly, EP El Niño is linked to a 40-80% increase in the probability of upper quartile winter SATs for Canada and the northern US and a 20% decrease for southern US and northern Mexico regions; however, there is little or no change in the risk of lower quartile winter temperatures for southern parts of North America. Localized estimates of ENSO-related risk are also presented.

  7. Traffic Predictive Control: Case Study and Evaluation

    DOT National Transportation Integrated Search

    2017-06-26

    This project developed a quantile regression method for predicting future traffic flow at a signalized intersection by combining both historical and real-time data. The algorithm exploits nonlinear correlations in historical measurements and efficien...

  8. The effect of smoking habit changes on body weight: Evidence from the UK.

    PubMed

    Pieroni, Luca; Salmasi, Luca

    2016-03-01

    This paper evaluates the causal relationship between smoking and body weight through two waves (2004-2006) of the British Household Panel Survey. We model the effect of changes in smoking habits, such as quitting or reducing, and account for the heterogeneous responses of individuals located at different points of the body mass distribution by quantile regression. We test our results by means of a large set of control groups and investigate their robustness by using the changes-in-changes estimator and accounting for different thresholds to define smoking reductions. Our results reveal the positive effect of quitting smoking on weight changes, which is also found to increase in the highest quantiles, whereas the decision to reduce smoking does not affect body weight. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Using instant messaging to enhance the interpersonal relationships of Taiwanese adolescents: evidence from quantile regression analysis.

    PubMed

    Lee, Yueh-Chiang; Sun, Ya Chung

    2009-01-01

    Even though use of the internet by adolescents has grown exponentially, little is known about the correlation between their interaction via Instant Messaging (IM) and the evolution of their interpersonal relationships in real life. In the present study, 369 junior high school students in Taiwan responded to questions regarding their IM usage and their dispositional measures of real-life interpersonal relationships. Descriptive statistics, factor analysis, and quantile regression methods were used to analyze the data. Results indicate that (1) IM helps define adolescents' self-identity (forming and maintaining individual friendships) and social identity (belonging to a peer group), and (2) the development of interpersonal relationships is positively affected by the use of IM, since adolescents appear to use IM to improve their interpersonal relationships in real life.

  10. Streamflow trends in the United States

    USGS Publications Warehouse

    Lins, H.F.; Slack, J.R.

    1999-01-01

    Secular trends in streamflow are evaluated for 395 climate-sensitive streamgaging stations in the conterminous United States using the non-parametric Mann-Kendall test. Trends are calculated for selected quantiles of discharge, from the 0th to the 100th percentile, to evaluate differences between low-, medium-, and high-flow regimes during the twentieth century. Two general patterns emerge: trends are most prevalent in the annual minimum (Q0) to median (Q50) flow categories and least prevalent in the annual maximum (Q100) category; and, at all but the highest quantiles, streamflow has increased across broad sections of the United States. Decreases appear only in parts of the Pacific Northwest and the Southeast. Systematic patterns are less apparent in the Q100 flow. Hydrologically, these results indicate that the conterminous U.S. is getting wetter, but less extreme.
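
    A minimal implementation of the Mann-Kendall test used above (without the tie correction a production version would need) might look like:

```python
def mann_kendall(series):
    """Return (S, Z): the Mann-Kendall statistic and its normal
    approximation, ignoring tie corrections for simplicity."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)   # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / var_s ** 0.5
    elif s < 0:
        z = (s + 1) / var_s ** 0.5
    else:
        z = 0.0
    return s, z

# A strictly increasing series: S is maximal and Z exceeds the 5% critical value.
s, z = mann_kendall(list(range(10)))
```

    A |Z| above 1.96 rejects the no-trend null at the 5% level, which is how trends in each discharge quantile would be flagged.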

  11. A Study on Regional Frequency Analysis using Artificial Neural Network - the Sumjin River Basin

    NASA Astrophysics Data System (ADS)

    Jeong, C.; Ahn, J.; Ahn, H.; Heo, J. H.

    2017-12-01

    Regional frequency analysis compensates for the short record lengths that limit at-site frequency analysis by pooling data across a region. A regional rainfall quantile depends on the identification of hydrologically homogeneous regions, so regional classification under the assumption of hydrological homogeneity is very important. For regional clustering of rainfall, multidimensional variables related to geographical features and meteorological factors are considered, such as mean annual precipitation, the number of days with precipitation in a year, and the average maximum daily precipitation in a month. The Self-Organizing Feature Map (SOM), an unsupervised-learning artificial neural network algorithm, handles N-dimensional, nonlinear problems and presents its results simply, as a data visualization technique. In this study, cluster analysis was performed for the Sumjin river basin in South Korea based on the SOM method, using high-dimensional geographical features and meteorological factors as input data. Then, to evaluate the homogeneity of the resulting regions, the L-moment based discordancy and heterogeneity measures were used. Rainfall quantiles were estimated with the index flood method, a standard approach in regional rainfall frequency analysis. The clustering obtained with the SOM method and the consequent variation in rainfall quantiles were analyzed. This research was supported by a grant (2017-MPSS31-001) from the Supporting Technology Development Program for Disaster Management funded by the Ministry of Public Safety and Security (MPSS) of the Korean government.
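
    As a rough illustration of the SOM idea only, a one-dimensional toy map with a hard-coded neighbourhood can be trained in a few lines; the study's actual SOM operates on multidimensional geographic and meteorological inputs, so treat this purely as a sketch with made-up data.

```python
import random

def train_som(data, n_units=3, epochs=100, lr=0.3, seed=0):
    """Train a tiny 1-D SOM: units compete for each input, and the
    best-matching unit (plus, at half strength, its neighbours) is
    pulled toward the input with a decaying learning rate."""
    rng = random.Random(seed)
    units = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)   # decaying learning rate
        for x in data:
            bmu = min(range(n_units), key=lambda k: abs(units[k] - x))
            for k in range(n_units):
                h = 1.0 if k == bmu else (0.5 if abs(k - bmu) == 1 else 0.0)
                units[k] += rate * h * (x - units[k])
    return sorted(units)

# Two well-separated clusters: the outer units should settle near them.
data = [0.0, 0.1, -0.1, 10.0, 10.1, 9.9]
units = train_som(data)
```

    After training, the extreme units sit near the two data clusters, which is the behaviour regional clustering relies on.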

  12. On the distortion of elevation dependent warming signals by quantile mapping

    NASA Astrophysics Data System (ADS)

    Jury, Martin W.; Mendlik, Thomas; Maraun, Douglas

    2017-04-01

    Elevation dependent warming (EDW), the amplification of warming under climate change with elevation, is likely to accelerate changes in, e.g., cryospheric and hydrological systems. EDW results from a mixture of processes including the snow albedo feedback, cloud formation, and the location of aerosols. The degree to which these processes are incorporated varies across state-of-the-art climate models. In a recent study we prepared bias-corrected model output of CMIP5 GCMs and CORDEX RCMs over the Himalayan region for the glacier modelling community. In a first attempt we used quantile mapping (QM) to generate this data. A prior model evaluation showed that more than two thirds of the 49 included climate models were able to reproduce the positive winter trend differences between areas of higher and lower elevations that are clearly visible in all five of the observational datasets used. Regrettably, we noticed that the height-dependent trend signals provided by the models were distorted by QM, most of the time in the direction of less EDW, sometimes even reversing EDW signals present in the models before the bias correction. As a consequence, we refrained from using quantile mapping for our task, since EDW is an important factor influencing high-altitude climate in both the nearer and the more distant future, and instead used a climate-change-signal-preserving bias correction approach. Here we present our findings on the distortion of the EDW temperature change by QM and discuss the influence of QM on different statistical properties as well as their modifications.

  13. The repeatability of mean defect with size III and size V standard automated perimetry.

    PubMed

    Wall, Michael; Doyle, Carrie K; Zamba, K D; Artes, Paul; Johnson, Chris A

    2013-02-15

    The mean defect (MD) of the visual field is a global statistical index used to monitor overall visual field change over time. Our goal was to investigate the relationship of MD and its variability for two clinically used strategies (Swedish Interactive Threshold Algorithm [SITA] standard size III and full threshold size V) in glaucoma patients and controls. We tested one eye, at random, for 46 glaucoma patients and 28 ocularly healthy subjects with Humphrey program 24-2 SITA standard for size III and full threshold for size V each five times over a 5-week period. The standard deviation of MD was regressed against the MD for the five repeated tests, and quantile regression was used to show the relationship of variability and MD. A Wilcoxon test was used to compare the standard deviations of the two testing methods following quantile regression. Both types of regression analysis showed increasing variability with increasing visual field damage. Quantile regression showed modestly smaller MD confidence limits. There was a 15% decrease in SD with size V in glaucoma patients (P = 0.10) and a 12% decrease in ocularly healthy subjects (P = 0.08). The repeatability of size V MD appears to be slightly better than size III SITA testing. When using MD to determine visual field progression, a change of 1.5 to 4 decibels (dB) is needed to be outside the normal 95% confidence limits, depending on the size of the stimulus and the amount of visual field damage.

  14. New PCOS-like phenotype in older infertile women of likely autoimmune adrenal etiology with high AMH but low androgens.

    PubMed

    Gleicher, Norbert; Kushnir, Vitaly A; Darmon, Sarah K; Wang, Qi; Zhang, Lin; Albertini, David F; Barad, David H

    2017-03-01

    How anti-Müllerian hormone (AMH) and testosterone (T) interrelate in infertile women is currently largely unknown. We, therefore, in a retrospective cohort study investigated how infertile women with high-AMH (AMH ≥75th quantile; n=144) and with normal-AMH (25th-75th quantile; n=313), stratified for low-T (total testosterone ≤19.0ng/dL), normal-T (19.0-29.0ng/dL) and high-T (>29.0ng/dL), behaved phenotypically. Patient age, follicle stimulating hormone (FSH), dehydroepiandrosterone (DHEA), DHEA sulphate (DHEAS), cortisol (C), adrenocorticotrophic hormone (ACTH), IVF outcomes, as well as inflammatory and immune panels were then compared between groups, with AMH and T as variables. We identified a previously unknown infertile PCOS-like phenotype, characterized by high-AMH but, atypically, low-T, with predisposition toward autoimmunity. It presents with incompatible high-AMH and low-T (<19.0ng/dL), is restricted to lean PCOS-like patients, and presents delayed for tertiary fertility services. Since also characterized by low DHEAS, the low-T is likely of adrenal origin and a consequence of autoimmune adrenal insufficiency, as it is also accompanied by low-C and evidence of autoimmunity. DHEA supplementation in such patients raises low-T to normal-T and normalizes IVF cycle outcomes. Once recognized, this high-AMH/low-T phenotype is surprisingly common in tertiary fertility centers but currently goes unrecognized. Its likely adrenal autoimmune etiology offers interesting new directions for investigations of adrenal control over ovarian function via adrenal androgen production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. A python module to normalize microarray data by the quantile adjustment method.

    PubMed

    Baber, Ibrahima; Tamby, Jean Philippe; Manoukis, Nicholas C; Sangaré, Djibril; Doumbia, Seydou; Traoré, Sekou F; Maiga, Mohamed S; Dembélé, Doulaye

    2011-06-01

    Microarray technology is widely used for gene expression research targeting the development of new drug treatments. In the case of a two-color microarray, the process starts with labeling DNA samples with fluorescent markers (cyanine 635 or Cy5 and cyanine 532 or Cy3), then mixing and hybridizing them on a chemically treated glass printed with probes, or fragments of genes. The level of hybridization between a strand of labeled DNA and a probe present on the array is measured by scanning the fluorescence of spots in order to quantify the expression based on the quality and number of pixels for each spot. The intensity data generated from these scans are subject to errors due to differences in fluorescence efficiency between Cy5 and Cy3, as well as variation in human handling and quality of the sample. Consequently, data have to be normalized to correct for variations which are not related to the biological phenomena under investigation. Among many existing normalization procedures, we have implemented the quantile adjustment method using the python computer language, and produced a module which can be run via an HTML dynamic form. This module is composed of different functions for data files reading, intensity and ratio computations and visualization. The current version of the HTML form allows the user to visualize the data before and after normalization. It also gives the option to subtract background noise before normalizing the data. The output results of this module are in agreement with the results of other normalization tools. Published by Elsevier B.V.
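
    The core quantile adjustment (quantile normalization) step can be sketched without the module's file I/O, background subtraction, or HTML front end: every array is forced to share a common reference distribution, namely the rank-wise mean of the sorted arrays.

```python
def quantile_normalize(arrays):
    """Quantile-normalize equal-length arrays: each value is replaced
    by the mean, across arrays, of the values at its rank."""
    n = len(arrays[0])
    sorted_arrays = [sorted(a) for a in arrays]
    # Reference distribution: mean of the k-th smallest values.
    reference = [sum(s[k] for s in sorted_arrays) / len(arrays)
                 for k in range(n)]
    result = []
    for a in arrays:
        ranks = sorted(range(n), key=lambda i: a[i])  # indices by value
        out = [0.0] * n
        for rank, i in enumerate(ranks):
            out[i] = reference[rank]
        result.append(out)
    return result

# Toy two-channel example (stand-ins for Cy5/Cy3 intensities).
red, green = quantile_normalize([[5.0, 2.0, 3.0], [4.0, 1.0, 2.0]])
```

    After normalization both channels share exactly the same distribution, removing dye-efficiency differences while preserving within-array ranks.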

  16. Development of Hydrological Model of Klang River Valley for flood forecasting

    NASA Astrophysics Data System (ADS)

    Mohammad, M.; Andras, B.

    2012-12-01

    This study reviews the impact of climate change and land use on flooding along the Klang River and compares the changes in the existing river system in the Klang River Basin with the Stormwater Management and Road Tunnel (SMART), which is now operating in the city centre of Kuala Lumpur. The Klang River Basin is the most urbanized region in Malaysia; more than half of the basin has been urbanized on land that is prone to flooding. Numerous flood mitigation projects and studies have been carried out to enhance the existing flood forecasting and mitigation works. The objective of this study is to develop a hydrological model for flood forecasting in the Klang Basin, Malaysia. Hydrological modelling generally requires a large set of input data, which is often a challenge in a developing country. Given this limitation, rainfall measurements from the Tropical Rainfall Measuring Mission (TRMM), initiated by the US space agency NASA and the Japanese space agency JAXA, were used in this study. The TRMM data were transformed and corrected by quantile-to-quantile transformation. However, transforming the data based on ground measurements did not yield a significant improvement, and the statistical comparison shows only a 10% difference. The conceptual HYMOD model was used in this study and calibrated using the ROPE algorithm. Calibrating on the whole observed time series for this area, however, resulted in insufficient performance. The depth function used in the ROPE algorithm was therefore applied to identify unusual events, and the model was calibrated on those events only in order to assess the improvement in model efficiency.

  17. Independent technical review and analysis of hydraulic modeling and hydrology under low-flow conditions of the Des Plaines River near Riverside, Illinois

    USGS Publications Warehouse

    Over, Thomas M.; Straub, Timothy D.; Hortness, Jon E.; Murphy, Elizabeth A.

    2012-01-01

    The U.S. Geological Survey (USGS) has operated a streamgage and published daily flows for the Des Plaines River at Riverside since Oct. 1, 1943. A HEC-RAS model has been developed to estimate the effect of the removal of Hofmann Dam near the gage on low-flow elevations in the reach approximately 3 miles upstream from the dam. The Village of Riverside, the Illinois Department of Natural Resources-Office of Water Resources (IDNR-OWR), and the U. S. Army Corps of Engineers-Chicago District (USACE-Chicago) are interested in verifying the performance of the HEC-RAS model for specific low-flow conditions, and obtaining an estimate of selected daily flow quantiles and other low-flow statistics for a selected period of record that best represents current hydrologic conditions. Because the USGS publishes streamflow records for the Des Plaines River system and provides unbiased analyses of flows and stream hydraulic characteristics, the USGS served as an Independent Technical Reviewer (ITR) for this study.

  18. Characterizing the relationship between health utility and renal function after kidney transplantation in UK and US: a cross-sectional study

    PubMed Central

    2012-01-01

    Background Chronic allograft nephropathy (CAN) occurs in a large share of transplant recipients and it is the leading cause of graft loss despite the introduction of new and effective immunosuppressants. The reduction in renal function secondary to immunologic and non-immunologic CAN leads to several complications, including anemia and calcium-phosphorus metabolism imbalance, and may be associated with worsening Health-Related Quality of Life. We sought to evaluate the relationship between kidney function and Euro-Qol 5 Dimension Index (EQ-5Dindex) scores after kidney transplantation and evaluate whether cross-cultural differences exist between UK and US. Methods This study is a secondary analysis of existing data gathered from two cross-sectional studies. We enrolled 233 and 209 subjects aged 18–74 years who received a kidney transplant in US and UK respectively. For the present analysis we excluded recipients with multiple or multi-organ transplantation, creatine kinase ≥200 U/L, acute renal failure, and without creatinine assessments in 3 months pre-enrollment, leaving 281 subjects overall. The questionnaires were administered independently in the two centers. Both packets included the EQ-5Dindex and socio-demographic items. We augmented the analytical dataset with information abstracted from clinical charts and administrative records, including selected comorbidities and biochemistry test results. We used ordinary least squares and quantile regression adjusted for socio-demographic and clinical characteristics to assess the association between EQ-5Dindex and severity of chronic kidney disease (CKD). Results CKD severity was negatively associated with EQ-5Dindex in both samples (UK: ρ= −0.20, p=0.02; US: ρ= −0.21, p=0.02). The mean adjusted disutility associated with CKD stage 5 compared to CKD stage 1–2 was Δ= −0.38 in the UK sample, Δ= −0.11 in the US sample and Δ= −0.22 in the whole sample. 
The adjusted median disutility associated with CKD stage 5 compared to CKD stage 1–2 for the whole sample was 0.18 (p<0.01, quantile regression). Center effect was not statistically significant. Conclusions Impaired renal function is associated with reduced health-related quality of life independent of possible confounders, center-effect and analytic framework. PMID:23173709

  19. Ecological impacts of invasive alien species along temperature gradients: testing the role of environmental matching.

    PubMed

    Iacarella, Josephine C; Dick, Jaimie T A; Alexander, Mhairi E; Ricciardi, Anthony

    2015-04-01

    Invasive alien species (IAS) can cause substantive ecological impacts, and the role of temperature in mediating these impacts may become increasingly significant in a changing climate. Habitat conditions and physiological optima offer predictive information for IAS impacts in novel environments. Here, using meta-analysis and laboratory experiments, we tested the hypothesis that the impacts of IAS in the field are inversely correlated with the difference in their ambient and optimal temperatures. A meta-analysis of 29 studies of consumptive impacts of IAS in inland waters revealed that the impacts of fishes and crustaceans are higher at temperatures that more closely match their thermal growth optima. In particular, the maximum impact potential was constrained by increased differences between ambient and optimal temperatures, as indicated by the steeper slope of a quantile regression on the upper 25th percentile of impact data compared to that of a weighted linear regression on all data with measured variances. We complemented this study with an experimental analysis of the functional response (the relationship between predation rate and prey supply) of two invasive predators (freshwater mysid shrimp, Hemimysis anomala and Mysis diluviana) across relevant temperature gradients; both of these species have previously been found to exert strong community-level impacts that are corroborated by their functional responses to different prey items. The functional response experiments showed that maximum feeding rates of H. anomala and M. diluviana have distinct peaks near their respective thermal optima. Although variation in impacts may be caused by numerous abiotic or biotic habitat characteristics, both our analyses point to temperature as a key mediator of IAS impact levels in inland waters and suggest that IAS management should prioritize habitats in the invaded range that more closely match the thermal optima of targeted invaders.

  20. CADDIS Volume 4. Data Analysis: Basic Analyses

    EPA Pesticide Factsheets

    Use of statistical tests to determine whether an observation is outside the normal range of expected values. Details of CART, regression analysis, use of quantile regression analysis, CART in causal analysis, and simplifying or pruning the resulting trees.

  1. Detecting Long-term Trend of Water Quality Indices of Dong-gang River, Taiwan Using Quantile Regression

    NASA Astrophysics Data System (ADS)

    Yang, D.; Shiau, J.

    2013-12-01

    Surface water quality is an essential issue for water supply and for sustaining healthy river ecosystems. However, river water quality is easily influenced by anthropogenic activities such as urban development and wastewater disposal. Long-term monitoring can assess whether river water quality is deteriorating. Taiwan is a densely populated area that depends heavily on surface water for domestic, industrial, and agricultural uses. The Dong-gang River is one of the major water resources in southern Taiwan for agricultural requirements. Water-quality data from four monitoring stations on the Dong-gang River for the period 2000-2012 are selected for trend analysis. The parameters used to characterize river water quality include biochemical oxygen demand (BOD), dissolved oxygen (DO), suspended solids (SS), and ammonia nitrogen (NH3-N). These four water-quality parameters are integrated into an index called the river pollution index (RPI) to indicate the pollution level of rivers. Although the widely used non-parametric Mann-Kendall test and linear regression are computationally efficient for identifying trends in water-quality indices, such approaches are sensitive to outliers and estimate only the conditional mean. Quantile regression, capable of identifying changes over time in any percentile, is employed in this study to detect long-term trends of water-quality indices for the Dong-gang River in southern Taiwan. The results for the monthly data of the four stations from 2000 to 2012 show that at the Long-dong bridge station, NH3-N and BOD5 trend downward, DO and SS trend upward, and the river pollution index (RPI) trends downward. 
The results for the Chau-Jhou station show similar trends. Closer to the upstream Sing-She station, where livestock raising is concentrated, NH3-N trends upward, BOD5 shows no significant trend, DO and SS trend upward, and the RPI shows a slight downward trend. Sewer construction in the Dong-gang River Basin is progressing slowly; to reduce pollution in this river, efforts should be made toward regulatory reform of livestock waste control and acceleration of sewer construction. Keywords: quantile regression analysis, BOD5, RPI
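
    The reason quantile regression can track any percentile is the check ("pinball") loss it minimizes; the constant that minimizes this loss is exactly the sample tau-quantile, which is also why the method is robust to outliers. A sketch (constants only, not the full per-quantile trend lines fitted in such studies):

```python
def pinball_loss(tau, y, q):
    """Mean check loss of predicting the constant q for observations y."""
    return sum(((tau * (v - q)) if v >= q else ((tau - 1) * (v - q)))
               for v in y) / len(y)

y = [1.0, 2.0, 3.0, 4.0, 10.0]   # an outlier-heavy sample
tau = 0.5

# Among candidate constants, the median (3.0) achieves the smallest loss,
# unlike the mean (4.0), which squared loss would pick.
losses = {q: pinball_loss(tau, y, q) for q in [1.0, 2.0, 3.0, 4.0, 10.0]}
best = min(losses, key=losses.get)
```

    Setting tau to 0.9 instead would make the loss asymmetric and pull the minimizer toward the upper tail, which is how trends in high percentiles are estimated.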

  2. Teaching for All? Teach For America’s Effects across the Distribution of Student Achievement

    PubMed Central

    Penner, Emily K.

    2016-01-01

    This paper examines the effect of Teach For America (TFA) on the distribution of student achievement in elementary school. It extends previous research by estimating quantile treatment effects (QTE) to examine how student achievement in TFA and non-TFA classrooms differs across the broader distribution of student achievement. It also updates prior distributional work on TFA by correcting for previously unidentified missing data and estimating unconditional, rather than conditional QTE. Consistent with previous findings, results reveal a positive impact of TFA teachers across the distribution of math achievement. In reading, however, relative to veteran non-TFA teachers, students at the bottom of the reading distribution score worse in TFA classrooms, and students in the upper half of the distribution perform better. PMID:27668032

  3. [Determinants of equity in financing medicines in Argentina: an empirical study].

    PubMed

    Dondo, Mariana; Monsalvo, Mauricio; Garibaldi, Lucas A

    2016-01-01

    Medicines are an important part of household health spending. A progressive system for financing drugs is thus essential for an equitable health system. Some authors have proposed that the determinants of equity in drug financing are socioeconomic, demographic, and associated with public interventions, but little progress has been made in the empirical evaluation and quantification of their relative importance. The current study estimated quantile regressions at the provincial level in Argentina and found that old age (> 65 years), unemployment, the existence of a public pharmaceutical laboratory, treatment transfers, and a health system orientated to primary care were important predictors of progressive payment schemes. Low income, weak institutions, and insufficient infrastructure and services were associated with the most regressive social responses to health needs, thereby aggravating living conditions and limiting development opportunities.

  4. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE PAGES

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
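
    The flavour of a quantile-based scenario comparison can be sketched with made-up numbers: each candidate design is scored by a conservative quantile of its cost across scenarios rather than by its mean cost. All names and values below are hypothetical, not taken from the paper.

```python
def quantile(sample, p):
    """Linear-interpolation sample quantile, p in [0, 1]."""
    s = sorted(sample)
    idx = p * (len(s) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
    frac = idx - lo
    return s[lo] * (1 - frac) + s[hi] * frac

# Cost of each hypothetical design under five demand/supply scenarios.
costs = {
    "design_A": [70, 80, 90, 100, 300],    # cheap on average, one disaster
    "design_B": [120, 125, 130, 135, 140], # steady
}

# Mean cost favours A; the 90th-percentile (risk-averse) criterion favours B.
by_mean = min(costs, key=lambda d: sum(costs[d]) / len(costs[d]))
by_q90 = min(costs, key=lambda d: quantile(costs[d], 0.9))
```

    Ranking by an upper quantile instead of the mean penalizes designs whose cost blows up in unfavourable scenarios, which is the core robustness idea behind the quantile-based scenario analysis.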

  5. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.

  6. CASAS: Cancer Survival Analysis Suite, a web based application

    PubMed Central

    Rupji, Manali; Zhang, Xinyan; Kowalski, Jeanne

    2017-01-01

    We present CASAS, a Shiny R-based tool for interactive survival analysis and visualization of results. The tool provides a web-based one-stop shop to perform the following types of survival analysis: quantile, landmark and competing risks, in addition to standard survival analysis. The interface makes it easy to perform such survival analyses and obtain results using the interactive Kaplan-Meier and cumulative incidence plots. Univariate analysis can be performed on one or several user-specified variable(s) simultaneously, the results of which are displayed in a single table that includes log rank p-values and hazard ratios along with their significance. For several quantile survival analyses from multiple cancer types, a single summary grid is constructed. The CASAS package has been implemented in R and is available via http://shinygispa.winship.emory.edu/CASAS/. The developmental repository is available at https://github.com/manalirupji/CASAS/. PMID:28928946

  7. The gender gap reloaded: are school characteristics linked to labor market performance?

    PubMed

    Konstantopoulos, Spyros; Constant, Amelie

    2008-06-01

    This study examines the wage gender gap of young adults in the 1970s, 1980s, and 2000 in the US. Using quantile regression we estimate the gender gap across the entire wage distribution. We also study the importance of high school characteristics in predicting future labor market performance. We conduct analyses for three major racial/ethnic groups in the US: Whites, Blacks, and Hispanics, employing data from two rich longitudinal studies: NLS and NELS. Our results indicate that while some school characteristics are positive and significant predictors of future wages for Whites, they are less so for the two minority groups. We find significant wage gender disparities favoring men across all three surveys in the 1970s, 1980s, and 2000. The wage gender gap is more pronounced in higher paid jobs (90th quantile) for all groups, indicating the presence of a persistent and alarming "glass ceiling."

  8. CASAS: Cancer Survival Analysis Suite, a web based application.

    PubMed

    Rupji, Manali; Zhang, Xinyan; Kowalski, Jeanne

    2017-01-01

    We present CASAS, a Shiny R-based tool for interactive survival analysis and visualization of results. The tool provides a web-based one-stop shop for the following types of survival analysis: quantile, landmark and competing risks, in addition to standard survival analysis. The interface makes it easy to perform such survival analyses and obtain results using interactive Kaplan-Meier and cumulative incidence plots. Univariate analysis can be performed on one or several user-specified variables simultaneously; the results are displayed in a single table that includes log-rank p-values and hazard ratios along with their significance. For quantile survival analyses spanning multiple cancer types, a single summary grid is constructed. The CASAS package has been implemented in R and is available via http://shinygispa.winship.emory.edu/CASAS/. The developmental repository is available at https://github.com/manalirupji/CASAS/.

  9. Do Our Means of Inquiry Match our Intentions?

    PubMed Central

    Petscher, Yaacov

    2016-01-01

    A key stage of the scientific method is the analysis of data, yet despite the variety of methods available to researchers, analyses are most frequently distilled to a model that focuses on the average relation between variables. Although research questions are frequently conceived with broad inquiry in mind, most regression methods are limited in comprehensively evaluating how observed behaviors relate to each other. Quantile regression is a largely unknown yet well-suited analytic technique, similar to traditional regression analysis, that allows a more systematic approach to understanding complex associations among observed phenomena in the psychological sciences. Data from the National Education Longitudinal Study of 1988/2000 are used to illustrate how quantile regression overcomes the limitations of average associations in linear regression by showing that psychological well-being and sex each differentially relate to reading achievement depending on one’s level of reading achievement. PMID:27486410
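    The building block behind this technique can be made concrete. Quantile regression replaces the squared-error objective of mean regression with the asymmetric check (pinball) loss, whose minimizer over a constant is the empirical quantile itself. A minimal pure-Python sketch of that idea (function names are illustrative, not from the article):

```python
def check_loss(u, tau):
    """Koenker-Bassett check (pinball) loss for residual u at quantile tau:
    positive residuals weighted by tau, negative residuals by (tau - 1)."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def fit_constant_quantile(y, tau):
    """Minimize the total check loss over a constant c. The minimizer is the
    empirical tau-quantile of y; quantile regression generalizes this to a
    function of covariates."""
    return min(sorted(y), key=lambda c: sum(check_loss(yi - c, tau) for yi in y))

y = [1, 2, 3, 4, 100]
assert fit_constant_quantile(y, 0.5) == 3    # median: robust to the outlier
assert fit_constant_quantile(y, 0.9) == 100  # upper quantile tracks the outlier
```

    The contrast in the last two lines is the point of the abstract: the same data yield different fitted values at different quantiles, information a single average-based model cannot show.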

  10. Health care expenditures among working-age adults with physical disabilities: variations by disability spans.

    PubMed

    Pumkam, Chaiporn; Probst, Janice C; Bennett, Kevin J; Hardin, James; Xirasagar, Sudha

    2013-10-01

    Data on health care costs for working-age adults with physical disabilities are sparse, and the dynamic nature of disability is not captured. To assess the effect of 3 types of disability status (persistent disability, temporary disability, and no disability) on health care expenditures, out-of-pocket (OOP) spending, and financial burden. Data from Medical Expenditure Panel Survey panel 12 (2007-2008) were used. Respondents were classified into 3 groups. Weighted medians of average annual expenditures, OOP expenditures, and financial ratios were computed. The R statistical package was used for quantile regression analyses. Fifteen percent of the working-age population reported persistent disabilities and 7% had temporary disabilities. The persistent disability group had the greatest unadjusted annual medians for total expenditures ($4234), OOP expenses ($591), and financial burden ratios (1.59), followed by the temporary disability group ($1612, $388, 0.71 respectively). The persistent disability group paid approximately 15% of total health care expenditures out-of-pocket, while the temporary disability group and the no disability group each paid 22% out-of-pocket. After adjusting for other factors, quantile regression shows that the persistent disability group had significantly higher total expenditures, OOP expenses, and financial burden ratios (coefficients 1664, 156, 0.58 respectively) relative to the no disability group at the 50th percentile. Results for the temporary disability group show a similar trend except for OOP expenses. People who have disabling conditions for a longer period have better financial protection against OOP health care expenses but face greater financial burdens because of their higher out-of-pocket expenditures and their socioeconomic disadvantages. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Medicaid Expenditures for Children Remaining at Home After a First Finding of Child Maltreatment

    PubMed Central

    Telford, S. Russell; Cook, Lawrence J.; Waitzman, Norman J.; Keenan, Heather T.

    2016-01-01

    BACKGROUND: Child maltreatment is associated with physical and mental health problems. The objective of this study was to compare Medicaid expenditures based on a first-time finding of child maltreatment by Child Protective Services (CPS). METHODS: This retrospective cohort study included children aged 0 to 14 years enrolled in Utah Medicaid between January 2007 and December 2009. The exposed group included children enrolled in Medicaid during the month of a first-time CPS finding of maltreatment not resulting in out-of-home placement. The unexposed group included children enrolled in Medicaid in the same months without CPS involvement. Quantile regression was used to describe differences in average nonpharmacy Medicaid expenditures per child-year associated with a first-time CPS finding of maltreatment. RESULTS: A total of 6593 exposed children and 39 181 unexposed children contributed 20 670 and 105 982 child-years to this analysis, respectively. In adjusted quantile regression, exposed children at the 50th percentile of health care spending had annual expenditures $78 (95% confidence interval [CI], 65 to 90) higher than unexposed children. This difference increased to $336 (95% CI, 283 to 389) and $1038 (95% CI, 812 to 1264) at the 75th and 90th percentiles of health care spending. Differences were higher among older children, children with mental health diagnoses, and children with repeated episodes of CPS involvement; differences were lower among children with severe chronic health conditions. CONCLUSIONS: Maltreatment is associated with increased health care expenditures, but these costs are not evenly distributed. Better understanding of the reasons for and outcomes associated with differences in health care costs for children with a history of maltreatment is needed. PMID:27511948

  12. Assessing the impact of local meteorological variables on surface ozone in Hong Kong during 2000-2015 using quantile and multiple linear regression models

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Fan, Shaojia; Guo, Hai; Gao, Bo; Sun, Jiaren; Chen, Laiguo

    2016-11-01

    The quantile regression (QR) method has been increasingly introduced to atmospheric environmental studies to explore the non-linear relationship between local meteorological conditions and ozone mixing ratios. In this study, we applied QR for the first time, together with multiple linear regression (MLR), to analyze the dominant meteorological parameters influencing the mean, 10th percentile, 90th percentile and 99th percentile of maximum daily 8-h average (MDA8) ozone concentrations in 2000-2015 in Hong Kong. Dominance analysis (DA) was used to assess the relative importance of meteorological variables in the regression models. Results showed that the MLR models worked better at suburban and rural sites than at urban sites, and better in winter than in summer. The QR models performed better in summer for the 99th and 90th percentiles, better in autumn and winter for the 10th percentile, and better in suburban and rural areas for the 10th percentile. The top three dominant variables associated with MDA8 ozone concentrations, which changed with season and region, were most frequently drawn from six meteorological parameters: boundary layer height, humidity, wind direction, surface solar radiation, total cloud cover and sea level pressure. Temperature rarely emerged as a significant variable in any season, which could partly explain the peak of monthly average ozone concentrations in October in Hong Kong. We also found that the effect of solar radiation was enhanced during extreme ozone pollution episodes (i.e., the 99th percentile). Finally, meteorological effects on MDA8 ozone showed no significant changes before and after the 2010 Asian Games.
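    Why fit separate models at the 10th, 90th and 99th percentiles rather than just the mean? On heteroscedastic data the slope that minimizes the pinball loss differs across quantiles. The brute-force grid search below is a toy illustration with made-up data, not the estimation method used in the study:

```python
def check_loss(u, tau):
    # Pinball loss: tau * u for u >= 0, (tau - 1) * u for u < 0
    return tau * u if u >= 0 else (tau - 1.0) * u

def fit_slope(xs, ys, tau, grid):
    """Toy quantile regression through the origin: pick the slope on `grid`
    minimizing the total pinball loss over the data."""
    return min(grid, key=lambda b: sum(check_loss(y - b * x, tau)
                                       for x, y in zip(xs, ys)))

# Synthetic heteroscedastic data: spread around y = x grows with x
xs = list(range(1, 11))
ys = [0.5 * x if x % 2 else 1.5 * x for x in xs]  # odd x low, even x high

grid = [i / 10 for i in range(1, 21)]
assert fit_slope(xs, ys, 0.9, grid) == 1.5  # upper quantile follows the high cases
assert fit_slope(xs, ys, 0.1, grid) == 0.5  # lower quantile follows the low cases
```

    A single mean-based slope would sit between these two fits and hide the fact that the upper tail responds differently from the lower tail, which is exactly the distinction the QR analysis of extreme ozone percentiles exploits.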

  13. Medicaid Expenditures for Children Remaining at Home After a First Finding of Child Maltreatment.

    PubMed

    Campbell, Kristine A; Telford, S Russell; Cook, Lawrence J; Waitzman, Norman J; Keenan, Heather T

    2016-09-01

    Child maltreatment is associated with physical and mental health problems. The objective of this study was to compare Medicaid expenditures based on a first-time finding of child maltreatment by Child Protective Services (CPS). This retrospective cohort study included children aged 0 to 14 years enrolled in Utah Medicaid between January 2007 and December 2009. The exposed group included children enrolled in Medicaid during the month of a first-time CPS finding of maltreatment not resulting in out-of-home placement. The unexposed group included children enrolled in Medicaid in the same months without CPS involvement. Quantile regression was used to describe differences in average nonpharmacy Medicaid expenditures per child-year associated with a first-time CPS finding of maltreatment. A total of 6593 exposed children and 39 181 unexposed children contributed 20 670 and 105 982 child-years to this analysis, respectively. In adjusted quantile regression, exposed children at the 50th percentile of health care spending had annual expenditures $78 (95% confidence interval [CI], 65 to 90) higher than unexposed children. This difference increased to $336 (95% CI, 283 to 389) and $1038 (95% CI, 812 to 1264) at the 75th and 90th percentiles of health care spending. Differences were higher among older children, children with mental health diagnoses, and children with repeated episodes of CPS involvement; differences were lower among children with severe chronic health conditions. Maltreatment is associated with increased health care expenditures, but these costs are not evenly distributed. Better understanding of the reasons for and outcomes associated with differences in health care costs for children with a history of maltreatment is needed. Copyright © 2016 by the American Academy of Pediatrics.

  14. Impact of Community-Based HIV/AIDS Treatment on Household Incomes in Uganda

    PubMed Central

    Feulefack, Joseph F.; Luckert, Martin K.; Mohapatra, Sandeep; Cash, Sean B.; Alibhai, Arif; Kipp, Walter

    2013-01-01

    Though health benefits to households in developing countries from antiretroviral treatment (ART) programs are widely reported in the literature, specific estimates regarding impacts of treatments on household incomes are rare. This type of information is important to governments and donors, as it is an indication of returns to their ART investments, and to better understand the role of HIV/AIDS in development. The objective of this study is to estimate the impact of a community-based ART program on household incomes in a previously underserved rural region of Uganda. A community-based ART program, based largely on labor contributions from community volunteers, was implemented and evaluated. All households with HIV/AIDS patients enrolled in the treatment program (n = 134 households) were surveyed five times; once at the beginning of the treatment and every three months thereafter for a period of one year. Data were collected on household income from cash earnings and value of own production. The analysis, using ordinary least squares and quantile regressions, identifies the impact of the ART program on household incomes over the first year of the treatment, while controlling for heterogeneity in household characteristics and temporal changes. As a result of the treatment, health conditions of virtually all patients improved, and household incomes increased by approximately 30% to 40%, regardless of household income quantile. These increases in income, however, varied significantly depending on socio-demographic and socio-economic control variables. Overall, results show large and significant impacts of the ART program on household incomes, suggesting large returns to public investments in ART, and that treating HIV/AIDS is an important precondition for development. Moreover, development programs that invest in human capital and build wealth are important complements that can increase the returns to ART programs. PMID:23840347

  15. Early origins of inflammation: An examination of prenatal and childhood social adversity in a prospective cohort study.

    PubMed

    Slopen, Natalie; Loucks, Eric B; Appleton, Allison A; Kawachi, Ichiro; Kubzansky, Laura D; Non, Amy L; Buka, Stephen; Gilman, Stephen E

    2015-01-01

    Children exposed to social adversity carry a greater risk of poor physical and mental health into adulthood. This increased risk is thought to be due, in part, to inflammatory processes associated with early adversity that contribute to the etiology of many adult illnesses. The current study asks whether aspects of the prenatal social environment are associated with levels of inflammation in adulthood, and whether prenatal and childhood adversity both contribute to adult inflammation. We examined associations of prenatal and childhood adversity assessed through direct interviews of participants in the Collaborative Perinatal Project between 1959 and 1974 with blood levels of C-reactive protein in 355 offspring interviewed in adulthood (mean age=42.2 years). Linear and quantile regression models were used to estimate the effects of prenatal adversity and childhood adversity on adult inflammation, adjusting for age, sex, and race and other potential confounders. In separate linear regression models, high levels of prenatal and childhood adversity were associated with higher CRP in adulthood. When prenatal and childhood adversity were analyzed together, our results support the presence of an effect of prenatal adversity on (log) CRP level in adulthood (β=0.73, 95% CI: 0.26, 1.20) that is independent of childhood adversity and potential confounding factors including maternal health conditions reported during pregnancy. Supplemental analyses revealed similar findings using quantile regression models and logistic regression models that used a clinically-relevant CRP threshold (>3mg/L). In a fully-adjusted model that included childhood adversity, high prenatal adversity was associated with a 3-fold elevated odds (95% CI: 1.15, 8.02) of having a CRP level in adulthood that indicates high risk of cardiovascular disease. Social adversity during the prenatal period is a risk factor for elevated inflammation in adulthood independent of adversities during childhood. 
This evidence is consistent with studies demonstrating that adverse exposures in the maternal environment during gestation have lasting effects on development of the immune system. If these results reflect causal associations, they suggest that interventions to improve the social and environmental conditions of pregnancy would promote health over the life course. It remains necessary to identify the mechanisms that link maternal conditions during pregnancy to the development of fetal immune and other systems involved in adaptation to environmental stressors. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Education and inequalities in risk scores for coronary heart disease and body mass index: evidence for a population strategy.

    PubMed

    Liu, Sze Yan; Kawachi, Ichiro; Glymour, M Maria

    2012-09-01

    Concerns have been raised that education may have greater benefits for persons at high risk of coronary heart disease (CHD) than for those at low risk. We estimated the association of education (less than high school, high school, or college graduates) with 10-year CHD risk and body mass index (BMI), using linear and quantile regression models, in the following two nationally representative datasets: the 2006 wave of the Health and Retirement Survey and the 2003-2008 National Health and Nutrition Examination Survey (NHANES). Higher educational attainment was associated with lower 10-year CHD risk for all groups. However, the magnitude of this association varied considerably across quantiles for some subgroups. For example, among women in NHANES, a high school degree was associated with 4% (95% confidence interval = -9% to 1%) and 17% (-24% to -8%) lower CHD risk in the 10th and 90th percentiles, respectively. For BMI, a college degree was associated with uniform decreases across the distribution for women, but with varying increases for men. Compared with those who had not completed high school, male college graduates in the NHANES sample had a BMI that was 6% greater (2% to 11%) at the 10th percentile of the BMI distribution and 7% lower (-10% to -3%) at the 90th percentile (ie, overweight/obese). Estimates from the Health and Retirement Survey sample and the marginal quantile regression models showed similar patterns. Conventional regression methods may mask important variations in the associations between education and CHD risk.

  17. Flood Change Assessment and Attribution in Austrian alpine Basins

    NASA Astrophysics Data System (ADS)

    Claps, Pierluigi; Allamano, Paola; Como, Anastasia; Viglione, Alberto

    2016-04-01

    The present paper investigates the sensitivity of flood peaks to global warming in Austrian alpine basins. A group of 97 Austrian watersheds, with areas ranging from 14 to 6000 km2 and average elevations ranging from 1000 to 2900 m a.s.l., is considered. Annual maximum floods are available for the basins from 1890 to 2007, with two densities of observation: until 1950, an average of 42 concurrent records of flood peaks is available; from 1951 to 2007 the density of observation increases to an average of 85 concurrent peaks. This information is very important with reference to the statistical tool used for the empirical assessment of change over time, namely linear quantile regression. Application of this tool to the data set reveals trends in extreme events, confirmed by statistical testing, for the 0.75 and 0.95 empirical quantiles. All applications use specific discharges (discharge/area). Similarly to a previous approach, multiple quantile regressions have also been applied, confirming the presence of trends even when the possible interference of specific discharge with morphoclimatic parameters (i.e., mean elevation and catchment area) is accounted for. Application of the geomorphoclimatic model by Allamano et al. (2009) makes it possible to assess to what extent the observed increases in air temperature and annual rainfall can justify the attribution of the changes detected by the empirical statistical tools. A comparison with data from Swiss alpine basins treated in a previous paper is finally undertaken.

  18. Improving Global Forecast System of extreme precipitation events with regional statistical model: Application of quantile-based probabilistic forecasts

    NASA Astrophysics Data System (ADS)

    Shastri, Hiteshri; Ghosh, Subimal; Karmakar, Subhankar

    2017-02-01

    Forecasting of extreme precipitation events at a regional scale is of high importance due to their severe impacts on society. The impacts are stronger in urban regions due to high flood potential as well as high population density, leading to high vulnerability. Although significant scientific improvements have taken place in the global models for weather forecasting, they are still not adequate at a regional scale (e.g., for an urban region), with high false alarm rates and low detection. There has been a need to improve weather forecast skill at a local scale with a probabilistic outcome. Here we develop a methodology based on quantile regression, in which reliably simulated variables from the Global Forecast System are used as predictors and different quantiles of rainfall are generated for each set of predictors. We apply this method to Mumbai, a flood-prone coastal city of India that has experienced severe floods in recent years. We find significant improvements in the forecast, with high detection and skill scores. We apply the methodology to 10 ensemble members of the Global Ensemble Forecast System and find a reduction in ensemble uncertainty of precipitation across realizations with respect to the original precipitation forecasts. We validate our model for the monsoon seasons of 2006 and 2007, which are independent of the training/calibration data set used in the study. We find promising results and emphasize the need to implement such data-driven methods for better probabilistic forecasts at an urban scale, primarily for early flood warning.
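    Quantile forecasts like these are commonly scored with the pinball loss, averaged over the issued quantiles; it penalizes under-prediction of a tau-quantile by tau and over-prediction by (1 - tau). The sketch below uses made-up forecast values, not the verification procedure of the study:

```python
def pinball(obs, pred, tau):
    """Pinball (check) score for a single tau-quantile forecast; lower is better.
    Under-prediction is penalized by tau, over-prediction by (1 - tau)."""
    return tau * (obs - pred) if obs >= pred else (1.0 - tau) * (pred - obs)

# Hypothetical one-day rainfall forecast issued as three quantiles (mm),
# verified against an observation of 40 mm (all numbers are illustrative).
forecast = {0.1: 5.0, 0.5: 20.0, 0.9: 55.0}
obs = 40.0
score = sum(pinball(obs, pred, tau) for tau, pred in forecast.items()) / len(forecast)

assert abs(pinball(obs, 55.0, 0.9) - 1.5) < 1e-9  # over-prediction at tau=0.9: small penalty
assert abs(pinball(obs, 5.0, 0.1) - 3.5) < 1e-9   # under-prediction at tau=0.1
# The 0.1 and 0.9 quantiles bound an 80% interval that covers the observation:
assert forecast[0.1] <= obs <= forecast[0.9]
```

    Averaging the score over many days and ensemble members gives a single number per forecasting system, which is one way a reduction in ensemble uncertainty can be quantified.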

  19. Patient characteristics associated with differences in radiation exposure from pediatric abdomen-pelvis CT scans: a quantile regression analysis.

    PubMed

    Cooper, Jennifer N; Lodwick, Daniel L; Adler, Brent; Lee, Choonsik; Minneci, Peter C; Deans, Katherine J

    2017-06-01

    Computed tomography (CT) is a widely used diagnostic tool in pediatric medicine. However, due to concerns regarding radiation exposure, it is essential to identify patient characteristics associated with higher radiation burden from CT imaging, in order to more effectively target efforts towards dose reduction. Our objective was to identify the effects of various demographic and clinical patient characteristics on radiation exposure from single abdomen/pelvis CT scans in children. CT scans performed at our institution between January 2013 and August 2015 in patients under 16 years of age were processed using a software tool that estimates patient-specific organ and effective doses and merges these estimates with data from the electronic health record and billing record. Quantile regression models at the 50th, 75th, and 90th percentiles were used to estimate the effects of patients' demographic and clinical characteristics on effective dose. A total of 2390 abdomen/pelvis CT scans (median effective dose 1.52 mSv) were included. Of all characteristics examined, only older age, female gender, higher BMI, and whether the scan was a multiphase exam or an exam that required repeating for movement were significant predictors of higher effective dose at each quantile examined (all p<0.05). The effects of obesity and multiphase or repeat scanning on effective dose were magnified in higher dose scans. Older age, female gender, obesity, and multiphase or repeat scanning are all associated with increased effective dose from abdomen/pelvis CT. Targeted efforts to reduce dose from abdominal CT in these groups should be undertaken. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Data-driven modeling of surface temperature anomaly and solar activity trends

    USGS Publications Warehouse

    Friedel, Michael J.

    2012-01-01

    A novel two-step modeling scheme is used to reconstruct and analyze surface temperature and solar activity data at global, hemispheric, and regional scales. First, the self-organizing map (SOM) technique is used to extend annual modern climate data from the century to millennial scale. The SOM component planes are used to identify and quantify the strength of nonlinear relations among modern surface temperature anomalies (<150 years), tropical and extratropical teleconnections, and Palmer Drought Severity Indices (0–2000 years). Cross-validation of global sea and land surface temperature anomalies verifies that the SOM is an unbiased estimator with less uncertainty than the magnitude of anomalies. Second, quantile modeling of the SOM reconstructions reveals trends and periods in surface temperature anomaly and solar activity whose timing agrees with published studies. Temporal features in surface temperature anomalies, such as the Medieval Warm Period, Little Ice Age, and Modern Warming Period, appear at all spatial scales, with magnitudes that increase when moving from ocean to land, from global to regional scales, and from southern to northern regions. Some caveats that apply when interpreting these data are the high-frequency filtering of climate signals based on quantile model selection and increased uncertainty when paleoclimatic data are limited. Even so, all models find the rate and magnitude of Modern Warming Period anomalies to be greater than those during the Medieval Warm Period. Lastly, quantile trends among reconstructed equatorial Pacific temperature profiles support the recent assertion of two primary El Niño Southern Oscillation types. These results demonstrate the efficacy of this alternative modeling approach for reconstructing and interpreting scale-dependent climate variables.

  1. A data centred method to estimate and map how the local distribution of daily precipitation is changing

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nick

    2014-05-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles in distributions of variables such as daily temperature or precipitation. Here we focus on these local changes and on a method to transform daily observations of precipitation into patterns of local climate change. We develop a method[1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, to specifically address the challenges presented by daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation in those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining not only which quantiles and geographical locations show the greatest change but also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. 
Our results show regionally consistent patterns of systematic increase in precipitation on the wettest days, and of drying across all days which is of potential value in adaptation planning. [1] S C Chapman, D A Stainforth, N W Watkins, 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, 371 20120287; D. A. Stainforth, 2013, S. C. Chapman, N. W. Watkins, Mapping climate change in European temperature distributions, Environ. Res. Lett. 8, 034031 [2] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New. 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res (Atmospheres), 113, D20119
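    The core quantity in such an assessment, the change in a given quantile of daily precipitation between two observation periods, can be sketched as follows (the interpolation convention and the data are illustrative, not taken from E-OBS):

```python
def empirical_quantile(sample, q):
    """Empirical q-quantile with linear interpolation between order statistics
    (the 'linear' convention used by most statistics packages)."""
    s = sorted(sample)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def quantile_change(period1, period2, q):
    """Change in the q-quantile of daily precipitation between two periods."""
    return empirical_quantile(period2, q) - empirical_quantile(period1, q)

# Illustrative daily precipitation amounts (mm) for two periods: the second
# period is uniformly 20% wetter, so every quantile shifts up by 20%.
p1 = list(range(1, 11))       # 1..10 mm
p2 = [1.2 * x for x in p1]

assert empirical_quantile(p1, 0.5) == 5.5
assert abs(quantile_change(p1, p2, 0.9) - 1.82) < 1e-9  # 10.92 - 9.1 mm
```

    In practice the interesting part, which the sketch omits, is attribution: deciding how much of such a difference reflects secular change rather than natural sampling variability, for example by comparing it against the spread obtained under resampling.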

  2. Development of a local-scale urban stream assessment method using benthic macroinvertebrates: An example from the Santa Clara Basin, California

    USGS Publications Warehouse

    Carter, J.L.; Purcell, A.H.; Fend, S.V.; Resh, V.H.

    2009-01-01

    Research that explores the biological response to urbanization on a site-specific scale is necessary for management of urban basins. Recent studies have proposed a method to characterize the biological response of benthic macroinvertebrates along an urban gradient for several climatic regions in the USA. Our study demonstrates how this general framework can be refined and applied on a smaller scale to an urbanized basin, the Santa Clara Basin (surrounding San Jose, California, USA). Eighty-four sampling sites on 14 streams in the Santa Clara Basin were used for assessing local stream conditions. First, an urban index composed of human population density, road density, and urban land cover was used to determine the extent of urbanization upstream from each sampling site. Second, a multimetric biological index was developed to characterize the response of macroinvertebrate assemblages along the urban gradient. The resulting biological index included metrics from 3 ecological categories: taxonomic composition (Ephemeroptera, Plecoptera, and Trichoptera), functional feeding group (shredder richness), and habit (clingers). The 90th-quantile regression line was used to define the best available biological conditions along the urban gradient, which we define as the predicted biological potential. This descriptor was then used to determine the relative condition of sites throughout the basin. Hierarchical partitioning of variance revealed that several site-specific variables (dissolved O2 and temperature) were significantly related to a site's deviation from its predicted biological potential. Spatial analysis of each site's deviation from its biological potential indicated geographic heterogeneity in the distribution of impaired sites. The presence and operation of local dams optimize water use, but modify natural flow regimes, which in turn influence stream habitat, dissolved O2, and temperature. Current dissolved O2 and temperature regimes deviate from natural conditions and appear to affect benthic macroinvertebrate assemblages. The assessment methods presented in our study provide finer-scale assessment tools for managers in urban basins. © North American Benthological Society.

  3. Health status convergence at the local level: empirical evidence from Austria

    PubMed Central

    2011-01-01

    Introduction Health is an important dimension of welfare comparisons across individuals, regions and states. Particularly from a long-term perspective, within-country convergence of the health status has rarely been investigated by applying methods well established in other scientific fields. In the following paper we study the relation between initial levels of the health status and its improvement at the local community level in Austria in the time period 1969-2004. Methods We use age standardized mortality rates from 2381 Austrian communities as an indicator for the health status and analyze the convergence/divergence of overall mortality for (i) the whole population, (ii) females, (iii) males and (iv) the gender mortality gap. Convergence/Divergence is studied by applying different concepts of cross-regional inequality (weighted standard deviation, coefficient of variation, Theil-Coefficient of inequality). Various econometric techniques (weighted OLS, Quantile Regression, Kendall's Rank Concordance) are used to test for absolute and conditional beta-convergence in mortality. Results Regarding sigma-convergence, we find rather mixed results. While the weighted standard deviation indicates an increase in equality for all four variables, the picture appears less clear when correcting for the decreasing mean in the distribution. However, we find highly significant coefficients for absolute and conditional beta-convergence between the periods. While these results are confirmed by several robustness tests, we also find evidence for the existence of convergence clubs. Conclusions The highly significant beta-convergence across communities might be caused by (i) the efforts to harmonize and centralize the health policy at the federal level in Austria since the 1970s, (ii) the diminishing returns of the input factors in the health production function, which might lead to convergence, as the general conditions (e.g. income, education etc.) 
improve over time, and (iii) the mobility of people across regions, as people tend to move to regions/communities which exhibit more favorable living conditions. JEL classification: I10, I12, I18 PMID:21864364
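    The absolute beta-convergence test described above amounts to regressing the change in (log) mortality on its initial level, with a significantly negative slope indicating convergence. A minimal sketch with made-up community data (all numbers hypothetical, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: age-standardized mortality rates for 200 communities
# at the start and end of an observation period.
m0 = rng.uniform(800, 1600, size=200)                 # initial mortality
growth = -0.4 * np.log(m0) + rng.normal(0, 0.05, 200)  # built-in convergence
m1 = m0 * np.exp(growth)                               # final mortality

# Absolute beta-convergence: regress the log change on the initial log level.
# A significantly negative slope (beta < 0) indicates convergence.
y = np.log(m1) - np.log(m0)
X = np.column_stack([np.ones_like(m0), np.log(m0)])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"beta = {beta[1]:.3f}")   # negative -> communities converge
```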

  4. Modelling probabilities of heavy precipitation by regional approaches

    NASA Astrophysics Data System (ADS)

    Gaal, L.; Kysely, J.

    2009-09-01

    Extreme precipitation events are associated with large negative consequences for human society, mainly as they may trigger floods and landslides. The recent series of flash floods in central Europe (affecting several isolated areas) on June 24-28, 2009, the worst in the Czech Republic in several decades in terms of the number of persons killed and the extent of damage to buildings and infrastructure, is an example. Estimates of growth curves and design values (corresponding e.g. to 50-yr and 100-yr return periods) of precipitation amounts, together with their uncertainty, are important in hydrological modelling and other applications. The interest in high quantiles of precipitation distributions is also related to possible climate change effects, as climate model simulations tend to project increased severity of precipitation extremes in a warmer climate. The present study compares, by means of Monte Carlo simulation experiments, several methods for modelling probabilities of precipitation extremes that make use of ‘regional approaches’: the estimation of distributions of extremes takes into account data in a ‘region’ (‘pooling group’), in which one may assume that the distributions at individual sites are identical apart from a site-specific scaling factor (this condition is referred to as ‘regional homogeneity’). In other words, all data in a region, often weighted in some way, are taken into account when estimating the probability distribution of extremes at a given site. The advantage is that sampling variations in the estimates of model parameters and high quantiles are to a large extent reduced compared to a single-site analysis. We focus on the ‘region-of-influence’ (ROI) method, which is based on the identification of unique pooling groups (forming the database for the estimation) for each site under study. The similarity of sites is evaluated in terms of a set of site attributes related to the distributions of extremes. 
    The issue of the size of the region is linked with a built-in test of regional homogeneity of the data. Once a pooling group is delineated, weights based on a dissimilarity measure are assigned to the individual sites in the group, and all (weighted) data are employed in the estimation of model parameters and high quantiles at a given location. The ROI method is compared with the Hosking-Wallis (HW) regional frequency analysis, which is based on delineating fixed regions (instead of flexible pooling groups) and assigning unit weights to all sites in a region. The comparison of the performance of the individual regional models makes use of data on annual maxima of 1-day precipitation amounts at 209 stations covering the Czech Republic, with altitudes ranging from 150 to 1490 m a.s.l. We conclude that the ROI methodology is superior to the HW analysis, particularly for very high quantiles (100-yr return values). Another advantage of the ROI approach is that subjective decisions, which are unavoidable when fixed regions are formed in the HW analysis, can largely be avoided, and almost all settings of the ROI method may be justified by results of the simulation experiments. The differences between (any) regional method and single-site analysis are very pronounced and suggest that at-site estimation is highly unreliable. The ROI method is then applied to estimate high quantiles of precipitation amounts at individual sites. The estimates and their uncertainty are compared with those from a single-site analysis. We focus on the eastern part of the Czech Republic, i.e. an area with complex orography and a particularly pronounced role of Mediterranean cyclones in producing precipitation extremes. The design values are compared with precipitation amounts recorded during the recent heavy precipitation events, including the one associated with the flash flood on June 24, 2009. 
We also show that the ROI methodology may easily be transferred to the analysis of precipitation extremes in climate model outputs. It efficiently reduces (random) variations in the estimates of parameters of the extreme value distributions in individual gridboxes that result from large spatial variability of heavy precipitation, and represents a straightforward tool for ‘weighting’ data from neighbouring gridboxes within the estimation procedure. The study is supported by the Grant Agency of AS CR under project B300420801.
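    The index-flood idea shared by the HW and ROI approaches can be sketched as follows. The sites, scale factors and unweighted pooling below are invented for illustration; a real ROI analysis would additionally weight sites by their dissimilarity to the target site, and would fit a parametric extreme-value distribution rather than an empirical quantile.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pooling group: annual 1-day precipitation maxima at 5 sites,
# identical distributions up to a site-specific scale factor.
scales = np.array([40.0, 55.0, 35.0, 60.0, 45.0])
n_years = 40
samples = [s * (1 + rng.gumbel(0.0, 0.3, n_years)) for s in scales]

# Index-flood step: rescale each record by its at-site mean, then pool.
growth_data = np.concatenate([x / x.mean() for x in samples])

# Regional growth-curve quantile (empirical 98th percentile, i.e. roughly
# the 50-yr return level for annual maxima).
growth_q = np.quantile(growth_data, 0.98)

# Quantile estimate at a target site = site index value * regional quantile.
site_mean = samples[0].mean()
q50yr_site = site_mean * growth_q
print(f"regional growth quantile: {growth_q:.2f}, site estimate: {q50yr_site:.1f} mm")
```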

  5. Evaluating potentials for future generation off-shore wind-power outside Norway

    NASA Astrophysics Data System (ADS)

    Benestad, R. E.; Haugen, J.; Haakenstad, H.

    2012-12-01

    With today's critical need for renewable energy sources, it is natural to look towards wind power. Given Norway's long coastline, there is a large potential for offshore wind farms. Although offshore wind energy installations pose more challenges than wind farms on land, the offshore wind is generally stronger, and wind speeds persist longer in the power-generating classes. In planning offshore wind farms, the wind resources, the wind climatology and possible future changes need to be evaluated. For this purpose, we use data from regional climate model runs performed in the European ENSEMBLES project (van der Linden and Mitchell, 2009). Despite the increased reliability of RCMs in recent years, the simulations still suffer from systematic model errors, so the data have to be corrected before being used in wind resource analyses. To correct the wind speeds from the RCMs, we use wind speeds from a Norwegian high-resolution wind and wave hindcast archive, NORA10 (Reistad et al. 2010), to perform quantile mapping (Themessl et al. 2012). The quantile mapping is performed individually for each ERA40-driven regional simulation from the ENSEMBLES project, corrected against NORA10. The same calibration is then applied to the corresponding regional climate scenario. The calibration is done for each grid cell in the domain and for each day of the year, using a +/-15 day window centered on that day to build an empirical cumulative distribution function. The quantile mapping of the scenarios provides a new wind speed data set for the future that is more realistic than the raw ENSEMBLES scenarios. References: Reistad M., Ø. Breivik, H. Haakenstad, O. J. Aarnes, B. R. Furevik and J-R Bidlo, 2010, A high-resolution hindcast of wind and waves for The North Sea, The Norwegian Sea and The Barents Sea. J. Geophys. Res., 116. doi:10.1029/2010JC006402. Themessl M. J., A. Gobiet and A. 
Leuprecht, 2012, Empirical-statistical downscaling and error correction of regional climate models and its impact on the climate change signal. Climatic Change 112: 449-468, DOI 10.1007/s10584-011-0224-4. Van der Linden P. and J.F.B. Mitchell, 2009, ENSEMBLES: Climate Change and its Impacts: Summary and results from the ENSEMBLES project. Met Office Hadley Centre, FitzRoy Road, Exeter EX1 3PB, UK.
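    An empirical quantile-mapping correction of the kind described can be sketched as below. The gamma-distributed wind speeds and the simple transfer function are illustrative assumptions, not the NORA10/ENSEMBLES setup (which additionally works per grid cell and per day-of-year window).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration data: daily wind speeds from an RCM (biased)
# and from a reference hindcast (here both synthetic).
sim_cal = rng.gamma(2.0, 3.0, 5000)        # model climate, calibration period
obs_cal = rng.gamma(2.5, 3.5, 5000)        # reference climate, same period

def quantile_map(x, sim_cal, obs_cal):
    """Empirical quantile mapping: replace each value by the reference
    value at the same empirical non-exceedance probability."""
    p = np.searchsorted(np.sort(sim_cal), x) / len(sim_cal)
    return np.quantile(obs_cal, np.clip(p, 0.0, 1.0))

# Apply the calibration-period transfer function to a (future) scenario run.
sim_future = rng.gamma(2.0, 3.3, 5000)
corrected = quantile_map(sim_future, sim_cal, obs_cal)
print(f"raw mean {sim_future.mean():.2f} -> corrected mean {corrected.mean():.2f}")
```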

  6. Impact of bias correction and downscaling through quantile mapping on simulated climate change signal: a case study over Central Italy

    NASA Astrophysics Data System (ADS)

    Sangelantoni, Lorenzo; Russo, Aniello; Gennaretti, Fabio

    2018-02-01

    Quantile mapping (QM) represents a common post-processing technique used to connect climate simulations to impact studies at different spatial scales. Depending on the simulation-observation spatial scale mismatch, QM can be used for two different applications. The first application uses only the bias correction component, establishing transfer functions between observations and simulations at similar spatial scales. The second application includes a statistical downscaling component when point-scale observations are considered. However, knowledge of alterations to the climate change signal (CCS) resulting from these two applications is limited. This study investigates QM impacts on the original temperature and precipitation CCSs when applied according to a bias-correction-only (BC-only) and a bias correction plus downscaling (BC + DS) application over reference stations in Central Italy. The BC-only application is used to adjust regional climate model (RCM) simulations having the same resolution as the observation grid. The QM BC + DS application adjusts the same simulations to point-wise observations. QM applications alter the CCS mainly for temperature. The BC-only application produces a median CCS about 1 °C lower than the original (about 4.5 °C). The BC + DS application produces a CCS closer to the original, except for the summer 95th percentile, where the original CCS is substantially amplified. The impacts of the two applications are connected to the ratio between the observed and the simulated standard deviation (STD) of the calibration period. For precipitation, the original CCS is essentially preserved in both applications. Yet the calibration-period STD ratio cannot predict the QM impact on the precipitation CCS when the simulated STD and mean are similarly misrepresented.
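    The role of the calibration-period STD ratio can be illustrated with a linear (Gaussian) variant of QM, under which the temperature CCS is scaled by exactly that ratio. All numbers below are synthetic and chosen only to mimic the magnitudes in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily temperatures: simulated control and scenario runs,
# plus observations for the calibration (control) period.
sim_ctl = rng.normal(12.0, 4.0, 10000)
sim_scn = sim_ctl + 4.5                    # imposed original CCS of 4.5 degC
obs_ctl = rng.normal(11.0, 3.0, 10000)     # observed STD < simulated STD

# Gaussian (linear) quantile mapping fitted on the calibration period:
# x' = mu_obs + (x - mu_sim) * sd_obs / sd_sim
ratio = obs_ctl.std() / sim_ctl.std()

def qm(x):
    return obs_ctl.mean() + (x - sim_ctl.mean()) * ratio

ccs_raw = sim_scn.mean() - sim_ctl.mean()
ccs_qm = qm(sim_scn).mean() - qm(sim_ctl).mean()   # = ratio * ccs_raw
print(f"original CCS {ccs_raw:.2f} degC, after QM {ccs_qm:.2f} degC")
```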

  7. What do we gain with Probabilistic Flood Loss Models?

    NASA Astrophysics Data System (ADS)

    Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.

    2015-12-01

    The reliability of flood loss models is a prerequisite for their practical usefulness. Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising for capturing and quantifying the uncertainty involved and thus for improving the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, with traditional stage-damage functions cast in a probabilistic framework. For model evaluation we use empirical damage data compiled via computer-aided telephone interviews after the floods of 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out at the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) and reliability, represented by the proportion of observations that fall within the 5%- to 95%-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
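    The reliability measure used here, coverage of the central predictive interval, can be sketched as follows with synthetic ensemble predictions (the beta distributions are illustrative, not the study's fitted models):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical probabilistic loss model output: for each of 500 buildings,
# an ensemble of predicted relative damage values (e.g. from bagged trees).
ensemble = rng.beta(2, 8, size=(500, 200))
observed = rng.beta(2, 8, size=500)

# Reliability as used in the study: share of observations falling inside
# the central 5%-95% predictive interval.
lo = np.quantile(ensemble, 0.05, axis=1)
hi = np.quantile(ensemble, 0.95, axis=1)
coverage = np.mean((observed >= lo) & (observed <= hi))
print(f"coverage of the 5-95% interval: {coverage:.2f}")
```

A well-calibrated model should cover roughly 90% of observations with this interval; markedly lower coverage indicates overconfident predictions.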

  8. A Protein Domain and Family Based Approach to Rare Variant Association Analysis.

    PubMed

    Richardson, Tom G; Shihab, Hashem A; Rivas, Manuel A; McCarthy, Mark I; Campbell, Colin; Timpson, Nicholas J; Gaunt, Tom R

    2016-01-01

    It has become common practice to analyse large-scale sequencing data with statistical approaches based around the aggregation of rare variants within the same gene. We applied a novel approach to rare variant analysis by collapsing variants together using protein domain and family coordinates, regarded as a more discrete definition of a biologically functional unit. Using Pfam definitions, we collapsed rare variants (Minor Allele Frequency ≤ 1%) together in three different ways: 1) variants within single genomic regions which map to individual protein domains; 2) variants within two individual protein domain regions which are predicted to be responsible for a protein-protein interaction; 3) all variants within combined regions from multiple genes responsible for coding the same protein domain (i.e. protein families). A conventional collapsing analysis using gene coordinates was also undertaken for comparison. We used UK10K sequence data and investigated associations between regions of variants and lipid traits using the sequence kernel association test (SKAT). We observed no strong evidence of association between regions of variants based on Pfam domain definitions and lipid traits. Quantile-quantile plots illustrated that the overall distributions of p-values from the protein domain analyses were comparable to that of a conventional gene-based approach. Deviations from this distribution suggested that collapsing by either protein domain or gene definitions may be favourable depending on the trait analysed. We have collapsed rare variants together using protein domain and family coordinates as an alternative to collapsing across conventionally used gene-based regions. Although no strong evidence of association was detected in these analyses, future studies may still find value in adopting these approaches to detect previously unidentified association signals.
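    A minimal sketch of the quantile-quantile diagnostic for region-based p-values, using simulated null p-values and a crude median-ratio inflation statistic (illustrative only; it is not the chi-square-based genomic inflation factor used in practice):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical region-based test results: p-values for 2000 protein-domain
# regions under the null (no association), as examined via a Q-Q plot.
pvals = np.sort(rng.uniform(0, 1, 2000))
expected = (np.arange(1, 2001) - 0.5) / 2000

# On the -log10 scale, points should hug the diagonal if the test is well
# calibrated; systematic deviation suggests inflation or deflation.
obs_log = -np.log10(pvals)
exp_log = -np.log10(expected)
lambda_ratio = np.median(obs_log) / np.median(exp_log)  # crude inflation check
print(f"median observed/expected ratio ~ {lambda_ratio:.2f}")
```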

  9. Disentangling the effects of low pH and metal mixture toxicity on macroinvertebrate diversity

    USGS Publications Warehouse

    Fornaroli, Riccardo; Ippolito, Alessio; Tolkkinen, Mari J.; Mykrä, Heikki; Muotka, Timo; Balistrieri, Laurie S.; Schmidt, Travis S.

    2018-01-01

    One of the primary goals of biological assessment of streams is to identify which of a suite of chemical stressors is limiting their ecological potential. Elevated metal concentrations in streams are often associated with low pH, yet the effects of these two potentially limiting factors of freshwater biodiversity are rarely considered to interact beyond the effects of pH on metal speciation. Using a dataset from two continents, a biogeochemical model of the toxicity of metal mixtures (Al, Cd, Cu, Pb, Zn) and quantile regression, we addressed the relative importance of both pH and metals as limiting factors for macroinvertebrate communities. Current environmental quality standards for metals proved to be protective of stream macroinvertebrate communities and were used as a starting point to assess metal mixture toxicity. A model of metal mixture toxicity accounting for metal interactions was a better predictor of macroinvertebrate responses than a model considering individual metal toxicity. We showed that the direct limiting effect of pH on richness was of the same magnitude as that of chronic metal toxicity, independent of its influence on the availability and toxicity of metals. By accounting for the direct effect of pH on macroinvertebrate communities, we were able to determine that acidic streams supported less diverse communities than neutral streams even when metals were below no-effect thresholds. Through a multivariate quantile model, we untangled the limiting effect of both pH and metals and predicted the maximum diversity that could be expected at other sites as a function of these variables. This model can be used to identify which of the two stressors is more limiting to the ecological potential of running waters.

  10. Association between Physical Activity and Teacher-Reported Academic Performance among Fifth-Graders in Shanghai: A Quantile Regression

    PubMed Central

    Zhang, Yunting; Zhang, Donglan; Jiang, Yanrui; Sun, Wanqi; Wang, Yan; Chen, Wenjuan; Li, Shenghui; Shi, Lu; Shen, Xiaoming; Zhang, Jun; Jiang, Fan

    2015-01-01

    Introduction A growing body of literature reveals the causal pathways between physical activity and brain function, indicating that increasing physical activity among children could improve rather than undermine their scholastic performance. However, past studies of physical activity and scholastic performance among students often relied on parent-reported grade information, and did not explore whether the association varied across different levels of scholastic performance. Our study among fifth-grade students in Shanghai sought to determine the association between regular physical activity and teacher-reported academic performance scores (APS), with special attention to the differential associational patterns across different strata of scholastic performance. Method A total of 2,225 students were chosen through stratified random sampling, and a complete sample of 1,470 observations was used for analysis. We used quantile regression analysis to explore whether the association between physical activity and teacher-reported APS differs across the distribution of APS. Results Minimal-intensity physical activity such as walking was positively associated with academic performance scores (β = 0.13, SE = 0.04). The magnitude of the association tends to be larger at the lower end of the APS distribution (β = 0.24, SE = 0.08) than at the higher end (β = 0.00, SE = 0.07). Conclusion Based upon teacher-reported student academic performance, there is no evidence that spending time on frequent physical activity undermines students’ APS. Students who are below average in academic performance could fall further behind if they gave up minimal-intensity physical activity. Therefore, cutting physical activity time in schools could hurt scholastic performance among those students who are already at higher risk of dropping out due to inadequate APS. PMID:25774525
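    The tail-dependent pattern reported above (a larger slope at low quantiles than at high quantiles) can be reproduced with quantile regression on synthetic data. The sketch below fits linear quantile regression by subgradient descent on the pinball loss; the variable names, effect sizes and heteroscedastic noise are invented, not the study's data or estimation method.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical data: walking time helps low-performing students more, so the
# quantile-regression slope shrinks toward zero at the top of the distribution.
n = 1500
walk = rng.uniform(0, 10, n)                       # hours/week of walking
aps = 60 + 0.3 * walk + rng.normal(0, 1, n) * (4 - 0.25 * walk)

def fit_quantile(x, y, tau, lr=0.005, steps=40000):
    """Linear quantile regression via subgradient descent on the pinball loss."""
    b0, b1 = np.median(y), 0.0
    for _ in range(steps):
        r = y - b0 - b1 * x
        g = np.where(r > 0, -tau, 1 - tau)          # subgradient wrt prediction
        b0 -= lr * g.mean()
        b1 -= lr * (g * x).mean()
    return b0, b1

_, slope_low = fit_quantile(walk, aps, 0.1)         # lower tail of scores
_, slope_high = fit_quantile(walk, aps, 0.9)        # upper tail of scores
print(f"slope at tau=0.1: {slope_low:.2f}, at tau=0.9: {slope_high:.2f}")
```

In practice one would use a dedicated solver (e.g. linear programming) rather than subgradient descent; the sketch only illustrates how the fitted slope varies with the quantile level.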

  11. Disentangling the effects of low pH and metal mixture toxicity on macroinvertebrate diversity.

    PubMed

    Fornaroli, Riccardo; Ippolito, Alessio; Tolkkinen, Mari J; Mykrä, Heikki; Muotka, Timo; Balistrieri, Laurie S; Schmidt, Travis S

    2018-04-01

    One of the primary goals of biological assessment of streams is to identify which of a suite of chemical stressors is limiting their ecological potential. Elevated metal concentrations in streams are often associated with low pH, yet the effects of these two potentially limiting factors of freshwater biodiversity are rarely considered to interact beyond the effects of pH on metal speciation. Using a dataset from two continents, a biogeochemical model of the toxicity of metal mixtures (Al, Cd, Cu, Pb, Zn) and quantile regression, we addressed the relative importance of both pH and metals as limiting factors for macroinvertebrate communities. Current environmental quality standards for metals proved to be protective of stream macroinvertebrate communities and were used as a starting point to assess metal mixture toxicity. A model of metal mixture toxicity accounting for metal interactions was a better predictor of macroinvertebrate responses than a model considering individual metal toxicity. We showed that the direct limiting effect of pH on richness was of the same magnitude as that of chronic metal toxicity, independent of its influence on the availability and toxicity of metals. By accounting for the direct effect of pH on macroinvertebrate communities, we were able to determine that acidic streams supported less diverse communities than neutral streams even when metals were below no-effect thresholds. Through a multivariate quantile model, we untangled the limiting effect of both pH and metals and predicted the maximum diversity that could be expected at other sites as a function of these variables. This model can be used to identify which of the two stressors is more limiting to the ecological potential of running waters.

  12. Physical activity and the association with fatigue and sleep in Danish patients with rheumatoid arthritis.

    PubMed

    Løppenthin, K; Esbensen, B A; Østergaard, M; Jennum, P; Tolver, A; Aadahl, M; Thomsen, T; Midtgaard, J

    2015-10-01

    The aim of this study was to examine physical activity behavior in patients with rheumatoid arthritis and to identify potential correlates of regular physical activity, including fatigue, sleep, pain, physical function and disease activity. A total of 443 patients were recruited from a rheumatology outpatient clinic and included in this cross-sectional study. Physical activity was assessed by a four-class questionnaire, in addition to the Physical Activity Scale. Other instruments included the Multidimensional Fatigue Inventory (MFI), the Pittsburgh Sleep Quality Index and the Health Assessment Questionnaire (HAQ). Disease activity was obtained from a nationwide clinical database. Of the included patients, 80 % were female, and the mean age was 60 years (range 21-88). Of these, 22 % (n = 96) were regularly physically active, and 78 % (n = 349) were mainly sedentary or had a low level of physical activity. An inverse univariate association was found between moderate to vigorous physical activity and fatigue (MFI mental, MFI activity, MFI physical and MFI general), sleep, diabetes, depression, pain, patient global assessment, HAQ and disease activity. The multivariate prediction model demonstrated that fatigue-related reduced activity and physical fatigue were selected in >95 % of the bootstrap samples, with median odds ratios 0.89 (2.5-97.5 % quantiles: 0.78-1.00) and 0.91 (2.5-97.5 % quantiles: 0.81-0.97), respectively, while disease activity was selected in 82 % of the bootstrap samples with median odds ratio 0.90. Moderate to vigorous physical activity in patients with rheumatoid arthritis is associated with the absence of several RA-related factors, the most important correlates being reduced activity due to fatigue, physical fatigue and disease activity.
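    The bootstrap summary used above (median odds ratio with 2.5-97.5 % quantiles) can be sketched as follows on synthetic data, using a simple 2x2-table odds ratio in place of the study's multivariate logistic model; all probabilities and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical binary data: regular physical activity (outcome) and high
# physical fatigue (exposure) for 443 patients.
n = 443
fatigue = rng.binomial(1, 0.5, n)
active = rng.binomial(1, np.where(fatigue == 1, 0.15, 0.30))  # fatigue lowers odds

def odds_ratio(act, fat):
    # 2x2-table odds ratio with a 0.5 continuity correction.
    a = np.sum((act == 1) & (fat == 1)) + 0.5
    b = np.sum((act == 0) & (fat == 1)) + 0.5
    c = np.sum((act == 1) & (fat == 0)) + 0.5
    d = np.sum((act == 0) & (fat == 0)) + 0.5
    return (a / b) / (c / d)

# Bootstrap: resample patients, report median OR and 2.5-97.5 % quantiles.
idx = rng.integers(0, n, size=(2000, n))
ors = np.array([odds_ratio(active[i], fatigue[i]) for i in idx])
print(f"median OR {np.median(ors):.2f} "
      f"({np.quantile(ors, 0.025):.2f}-{np.quantile(ors, 0.975):.2f})")
```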

  13. Uncertainty analysis of an inflow forecasting model: extension of the UNEEC machine learning-based method

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri

    2010-05-01

    This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of this model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method that encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide a more realistic estimation of model predictions.
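    The core UNEEC-P step, pooling residuals across Monte Carlo parameter samples and reading off empirical prediction quantiles, can be sketched for a toy linear model as below. All numbers are synthetic, and the final machine-learning encapsulation step of UNEEC is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy setup: a linear model y = a*x with uncertain parameter a. MC sampling
# of a yields N residual realizations whose pooled empirical distribution
# gives the prediction quantiles (the UNEEC-P idea).
x = np.linspace(0, 10, 200)
y_obs = 2.0 * x + rng.normal(0, 1, x.size)

a_samples = rng.normal(2.0, 0.1, 500)              # parameter uncertainty
residuals = y_obs[None, :] - a_samples[:, None] * x[None, :]

# 5% and 95% prediction quantiles from the pooled residual realizations.
q05, q95 = np.quantile(residuals, [0.05, 0.95])
print(f"UNEEC-P 90% residual band: [{q05:.2f}, {q95:.2f}]")

# Compare with residuals of the single "optimal" model only (standard UNEEC).
res_opt = y_obs - 2.0 * x
q05o, q95o = np.quantile(res_opt, [0.05, 0.95])
print(f"optimal-model band: [{q05o:.2f}, {q95o:.2f}]")
```

Consistent with the abstract, the pooled band is wider because it also carries the spread induced by parameter uncertainty.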

  14. Ecoclimatic indicators to study crop suitability in present and future climatic conditions

    NASA Astrophysics Data System (ADS)

    Caubel, Julie; Garcia de Cortazar Atauri, Inaki; Huard, Frédéric; Launay, Marie; Ripoche, Dominique; Gouache, David; Bancal, Marie-Odile; Graux, Anne-Isabelle; De Noblet, Nathalie

    2013-04-01

    Climate change is expected to affect both regional and global food production through changes in overall agroclimatic conditions. It is therefore necessary to develop simple tools for diagnosing crop suitability in a given area, so that stakeholders can envisage land use adaptations under climate change conditions. The most common way to investigate potential impacts of climate on the evolution of agrosystems is to make use of an array of agroclimatic indicators, which provide synthetic information derived from climatic variables and calculated within fixed periods (e.g. January 1st - July 31st). However, indicators computed over such fixed periods do not take the plant's response to climate into account. In this work, we present some results of the research program ORACLE (Opportunities and Risks of Agrosystems & forests in response to CLimate, socio-economic and policy changEs in France and Europe). We propose a suite of relevant ecoclimatic indicators, based on temperature and rainfall, in order to evaluate crop suitability for both present and new climatic conditions. Ecoclimatic indicators are agroclimatic indicators (e.g., grain heat stress) calculated during specific phenological phases so as to take account of the plant response to climate (e.g., the grain-filling period, from flowering to harvest). These indicators are linked with the ecophysiological processes they characterize (e.g., grain filling). To illustrate this methodology, we studied the suitability of winter wheat under future climatic conditions at three French sites: Toulouse, Dijon and Versailles. Indicators have been calculated using climatic data from 1950 to 2100 simulated by the global climate model ARPEGE forced by a greenhouse effect corresponding to the SRES A1B scenario. The Quantile-Quantile downscaling method was applied to obtain data for the three locations. 
Phenological stages (emergence, ear 1 cm, flowering, beginning of grain filling and harvest) have been simulated by the STICS, CERES and PANORAMIX crop models with the same input climatic data. Results showed that phenological stages tend to be reached earlier in the future. Significant differences were noted between indicators calculated for invariable calendar periods and indicators calculated during phenological phases. Therefore, ecoclimatic indicators are relevant to provide accurate information about crop suitability in the context of climate change. Whereas most of the indicators do not indicate any significant future changes, plant mortality due to frost risk from emergence to ear 1 cm tends to decrease and water supply tends to become more limiting. These indicators do not replace models but represent additional tools for understanding and spatializing some results obtained by models. Their use can provide a spatial distribution of crops according to their suitability in present or future climatic conditions and enables us to minimize the risk of crop failure. It would be interesting to assess how uncertainties in future climate projections, explored through different greenhouse gas emission scenarios and downscaling methods, affect the indicator responses.
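    The difference between a fixed-calendar agroclimatic indicator and a phenology-based ecoclimatic indicator can be sketched as follows; the temperature series, the heat-stress threshold and the stage dates are all invented for illustration, not output of the crop models named above.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical daily maximum temperatures over one year (day-of-year 1-365).
doy = np.arange(1, 366)
tmax = 12 + 12 * np.sin((doy - 105) * 2 * np.pi / 365) + rng.normal(0, 3, 365)

def heat_stress_days(tmax, start, end):
    """Count days with Tmax > 25 degC between two day-of-year bounds."""
    return int(np.sum(tmax[start - 1:end] > 25.0))

# Agroclimatic indicator: fixed calendar window (here June 1 - July 31).
fixed = heat_stress_days(tmax, 152, 212)

# Ecoclimatic indicator: the same index over a simulated phenological phase
# (flowering to harvest), which would shift earlier in a warmer climate.
flowering, harvest = 140, 190                      # hypothetical model output
pheno = heat_stress_days(tmax, flowering, harvest)
print(f"fixed window: {fixed} days, phenological window: {pheno} days")
```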

  15. A stochastic electricity market clearing formulation with consistent pricing properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavala, Victor M.; Kim, Kibaek; Anitescu, Mihai

    We argue that deterministic market clearing formulations introduce arbitrary distortions between day-ahead and expected real-time prices that bias economic incentives. We extend and analyze a previously proposed stochastic clearing formulation in which the social surplus function induces penalties between day-ahead and real-time quantities. We prove that the formulation yields bounded price distortions, and we show that adding a similar penalty term to transmission flows and phase angles ensures boundedness throughout the network. We prove that when the price distortions are zero, day-ahead quantities equal a quantile of their real-time counterparts. The undesired effects of price distortions suggest that stochastic settings provide significant benefits over deterministic ones that go beyond social surplus improvements. Finally, we propose additional metrics to evaluate these benefits.
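    The quantile property mentioned above mirrors the classical result that minimizing an expected asymmetric (pinball/newsvendor) penalty yields a quantile: with per-unit penalties c_under for clearing below real time and c_over for clearing above, the optimum is the c_under/(c_under+c_over) quantile. A numerical sketch with invented costs and demand scenarios:

```python
import numpy as np

rng = np.random.default_rng(10)

# Real-time quantity scenarios and asymmetric per-unit penalties (invented).
demand = rng.gamma(5.0, 20.0, 20000)
c_under, c_over = 3.0, 1.0
tau = c_under / (c_under + c_over)                  # = 0.75

# Grid search for the day-ahead quantity minimizing the expected penalty.
grid = np.linspace(demand.min(), demand.max(), 1000)
cost = [np.mean(np.where(demand > q, c_under * (demand - q),
                         c_over * (q - demand))) for q in grid]
q_star = grid[int(np.argmin(cost))]
q_tau = np.quantile(demand, tau)
print(f"cost-minimizing quantity: {q_star:.1f}, {tau:.0%} quantile: {q_tau:.1f}")
```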

  16. A stochastic electricity market clearing formulation with consistent pricing properties

    DOE PAGES

    Zavala, Victor M.; Kim, Kibaek; Anitescu, Mihai; ...

    2017-03-16

    We argue that deterministic market clearing formulations introduce arbitrary distortions between day-ahead and expected real-time prices that bias economic incentives. We extend and analyze a previously proposed stochastic clearing formulation in which the social surplus function induces penalties between day-ahead and real-time quantities. We prove that the formulation yields bounded price distortions, and we show that adding a similar penalty term to transmission flows and phase angles ensures boundedness throughout the network. We prove that when the price distortions are zero, day-ahead quantities equal a quantile of their real-time counterparts. The undesired effects of price distortions suggest that stochastic settings provide significant benefits over deterministic ones that go beyond social surplus improvements. Finally, we propose additional metrics to evaluate these benefits.

  17. Climate change impact on the management of water resources in the Seine River basin, France

    NASA Astrophysics Data System (ADS)

    Dorchies, David; Thirel, Guillaume; Chauveau, Mathilde; Jay-Allemand, Maxime; Perrin, Charles; Dehay, Florine

    2013-04-01

    It is today commonly accepted that adaptation strategies will be needed to cope with the hydrological consequences of projected climate change. The main objective of the IWRM-Net Climaware project is to design adaptation strategies for various socio-economic sectors and evaluate their relevance at the European scale. Within the project, the Seine case study focuses on dam management. The Seine River basin at Paris (43,800 km²) is of major socio-economic importance in France. Due to its large and growing population, the number of industries depending on water resources or located along the river, and the well-developed agricultural sector, the consequences of droughts and floods may be dramatic. To mitigate extreme hydrological events, a system of four large multi-purpose reservoirs was built in the upstream part of the basin between 1949 and 1990. The IPCC reports indicate future modifications of climate conditions in northern France. An increase in mean temperature is very likely, and rainfall patterns could change: the uncertainty about future trends is still high, but summers could receive less rainfall. Anticipating these changes is crucial: will the present reservoir system be suited to these conditions? Here we propose to evaluate the capacity of the Seine River reservoirs to withstand future projected climate conditions using the current management rules. For this study, a modeling chain was designed. We used two hydrological models: GR4J, a lumped model used as a benchmark, and TGR, a semi-distributed model. TGR was tuned to explicitly account for reservoir management rules. Seven climate models, forced by the moderate IPCC A1B scenario and downscaled using a weather-type method (DSCLIM; Pagé et al., 2009), were used. A quantile-quantile type method was applied to correct bias in the climate simulations. A model to mimic the way reservoirs are managed was also developed. 
The evolution of low flows, high flows and annual flows were assessed under natural condition (i.e. without the inclusion of the reservoirs in the models). Then, the impact of reservoirs and their management were accounted for in the modeling chain. Results will be discussed relatively to future hydro-climatic conditions and current mitigation objectives within the basin. Reference: Pagé, C., L. Terray et J. Boé, 2009: dsclim: A software package to downscale climate scenarios at regional scale using a weather-typing based statistical methodology. Technical Report TR/CMGC/09/21, SUC au CERFACS, URA CERFACS/CNRS No1875, Toulouse, France. Link : http://www.cerfacs.fr/~page/dsclim/dsclim_doc-latest.pdf
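    The quantile-quantile correction step mentioned above can be sketched as empirical quantile mapping. The sketch below uses illustrative gamma-distributed "rainfall" rather than the project's actual climate simulations:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_q=99):
    """Empirical quantile mapping: replace each model value by the observed
    value occupying the same rank in the historical (calibration) period."""
    qs = np.linspace(0.005, 0.995, n_q)
    mq = np.quantile(model_hist, qs)   # model-historical quantiles
    oq = np.quantile(obs_hist, qs)     # observed quantiles
    # piecewise-linear transfer function from model space to observed space
    return np.interp(model_future, mq, oq)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, 5000)      # "observed" daily rainfall (illustrative)
model = rng.gamma(2.0, 2.5, 5000)    # model output with a wet bias
corrected = quantile_map(model, obs, model)
```

    After correction the simulated distribution matches the observed one quantile by quantile over the calibrated range; values outside the 0.5-99.5% range are clamped to the end quantiles.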

  18. Censored data treatment using additional information in intelligent medical systems

    NASA Astrophysics Data System (ADS)

    Zenkova, Z. N.

    2015-11-01

    Statistical procedures are a very important and significant part of modern intelligent medical systems. They are used for processing, mining and analysis of different types of data about patients and their diseases; they help in making various decisions regarding diagnosis, treatment, medication or surgery, etc. In many cases the data can be censored or incomplete. It is a well-known fact that censoring considerably reduces the efficiency of statistical procedures. In this paper the author gives a brief review of approaches that allow improvement of the procedures using additional information, and describes a modified estimator of an unknown cumulative distribution function that involves additional information about a quantile which is known exactly. The additional information is used by projecting a classical estimator onto a set of estimators with certain properties. The Kaplan-Meier estimator is considered as the estimator of the unknown cumulative distribution function, and the properties of the modified estimator are investigated for the case of single right censoring by means of simulations.
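    As a rough sketch of the idea: estimate the CDF with the Kaplan-Meier estimator, then recalibrate it so the known quantile is matched exactly. The piecewise rescaling below is a simple stand-in for the projection described in the paper, and the data are simulated:

```python
import numpy as np

def km_cdf(times, events, grid):
    """Kaplan-Meier estimate of F(t) = 1 - S(t) under right censoring
    (assumes no tied event times, for brevity)."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(t) - np.arange(len(t))            # at risk just before t[i]
    factors = np.where(d == 1, 1.0 - 1.0 / at_risk, 1.0)
    surv = np.concatenate([[1.0], np.cumprod(factors)])
    idx = np.searchsorted(t, grid, side="right")    # events/censorings <= g
    return 1.0 - surv[idx]

def match_quantile(F, grid, xq, p):
    """Rescale the estimated CDF so that F(xq) = p exactly: a simple
    recalibration standing in for the paper's projection estimator."""
    Fq = np.interp(xq, grid, F)
    return np.where(grid <= xq,
                    F * (p / Fq),
                    p + (F - Fq) * (1.0 - p) / (1.0 - Fq))

rng = np.random.default_rng(2)
n = 400
life = rng.exponential(1.0, n)              # true lifetimes
cens = rng.exponential(2.0, n)              # right-censoring times
times = np.minimum(life, cens)
events = (life <= cens).astype(int)
grid = np.linspace(0.0, 3.0, 301)
F_km = km_cdf(times, events, grid)
F_adj = match_quantile(F_km, grid, xq=np.log(2.0), p=0.5)  # known Exp(1) median
```

    The adjusted estimator remains a valid (monotone) CDF and passes exactly through the known quantile.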

  19. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this threshold is given a priori, it is sensible to incorporate it into the estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross-validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
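    The locally weighted score equations can be sketched as a kernel-reweighted Newton-Raphson fit. The threshold, bandwidth, and simulated data below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_logistic(X, y, w, iters=25):
    """Solve the weighted logistic score equations X'W(y - p) = 0
    by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ beta)
        grad = X.T @ (w * (y - p))
        H = X.T @ (X * (w * p * (1 - p))[:, None])
        beta = beta + np.linalg.solve(H, grad)
    return beta

def locally_weighted_fit(X, y, threshold=0.3, bandwidth=0.2):
    """Refit with Gaussian kernel weights centred where the initial fit is
    near the classification threshold; a sketch of the idea, not the
    authors' exact estimator."""
    beta0 = weighted_logistic(X, y, np.ones(len(y)))
    w = np.exp(-0.5 * ((sigmoid(X @ beta0) - threshold) / bandwidth) ** 2)
    return weighted_logistic(X, y, w)

rng = np.random.default_rng(11)
n = 1500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = (rng.uniform(size=n) < sigmoid(0.5 - 1.0 * x)).astype(float)
beta_lw = locally_weighted_fit(X, y)
```

    Under a correctly specified model the weighted score equations stay unbiased at the true coefficients; the payoff of the weighting appears under misspecification, where fit near the threshold is prioritised.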

  20. Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data.

    PubMed

    Abram, Samantha V; Helwig, Nathaniel E; Moodie, Craig A; DeYoung, Colin G; MacDonald, Angus W; Waller, Niels G

    2016-01-01

    Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks.
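    A simplified version of the bootstrap quantile (QNT) idea can be sketched with a coordinate-descent lasso: refit on bootstrap resamples and keep predictors whose central bootstrap coefficient interval excludes zero. The penalty value and simulated data are illustrative assumptions:

```python
import numpy as np

def lasso_cd(X, y, lam, sweeps=100):
    """Lasso via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]     # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

def bootstrap_quantile_select(X, y, lam, B=50, alpha=0.05, seed=3):
    """Select predictors whose bootstrap coefficient distribution keeps zero
    outside its central 1 - alpha quantile interval (a simplified sketch of
    the bootstrap quantile approach)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    betas = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, n)                  # bootstrap resample
        betas[b] = lasso_cd(X[idx], y[idx], lam)
    lo = np.quantile(betas, alpha / 2, axis=0)
    hi = np.quantile(betas, 1 - alpha / 2, axis=0)
    return (lo > 0) | (hi < 0)

rng = np.random.default_rng(4)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)    # only predictor 0 is active
selected = bootstrap_quantile_select(X, y, lam=50.0)
```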

  1. Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data

    PubMed Central

    Abram, Samantha V.; Helwig, Nathaniel E.; Moodie, Craig A.; DeYoung, Colin G.; MacDonald, Angus W.; Waller, Niels G.

    2016-01-01

    Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks. PMID:27516732

  2. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t}, where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position t along a tract in the brain. In one example, the response is disease status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
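    The pointwise quantile transformation mentioned above can be sketched directly: at each time point, replace X(t) by its empirical quantile rank among the n observed curves, so every tensor-product B-spline has data on its support. A minimal sketch on simulated curves:

```python
import numpy as np

def pointwise_quantile_transform(X):
    """For an n x T matrix of functional observations, map each column
    (time point) to empirical quantile ranks in (0, 1)."""
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # 0..n-1 per column
    return (ranks + 0.5) / n

rng = np.random.default_rng(5)
curves = np.cumsum(rng.normal(size=(40, 100)), axis=1)  # 40 random-walk curves
G = pointwise_quantile_transform(curves)
```

    Each column of G is, by construction, an even spread of values over (0, 1), regardless of how the raw curves are distributed at that time point.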

  3. On the use of high-frequency SCADA data for improved wind turbine performance monitoring

    NASA Astrophysics Data System (ADS)

    Gonzalez, E.; Stephen, B.; Infield, D.; Melero, J. J.

    2017-11-01

    SCADA-based condition monitoring of wind turbines facilitates the move from costly corrective repairs towards more proactive maintenance strategies. In this work, we advocate the use of high-frequency SCADA data and quantile regression to build a cost effective performance monitoring tool. The benefits of the approach are demonstrated through the comparison between state-of-the-art deterministic power curve modelling techniques and the suggested probabilistic model. Detection capabilities are compared for low and high-frequency SCADA data, providing evidence for monitoring at higher resolutions. Operational data from healthy and faulty turbines are used to provide a practical example of usage with the proposed tool, effectively achieving the detection of an incipient gearbox malfunction at a time horizon of more than one month prior to the actual occurrence of the failure.
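    A probabilistic power curve of the kind described can be sketched with per-wind-speed-bin empirical quantiles, a simple nonparametric stand-in for the quantile-regression model in the paper; the simulated turbine data are illustrative:

```python
import numpy as np

def binned_power_quantiles(wind, power, n_bins=12, qs=(0.05, 0.5, 0.95)):
    """Quantile power curves from per-bin empirical quantiles of power."""
    edges = np.linspace(wind.min(), wind.max(), n_bins + 1)
    idx = np.clip(np.digitize(wind, edges) - 1, 0, n_bins - 1)
    curves = np.full((len(qs), n_bins), np.nan)
    for b in range(n_bins):
        vals = power[idx == b]
        if vals.size:
            curves[:, b] = np.quantile(vals, qs)
    return edges, idx, curves

rng = np.random.default_rng(6)
wind = rng.uniform(3.0, 15.0, 5000)                      # wind speed (m/s)
power = np.minimum(2000.0, 1.2 * wind ** 3) + rng.normal(0.0, 80.0, 5000)
edges, idx, curves = binned_power_quantiles(wind, power)
# monitoring rule: flag observations below the lower (5%) band of their bin
alarms = power < curves[0, idx]
```

    Persistent runs of alarms (rather than the ~5% expected by construction) would indicate underperformance relative to the healthy power curve.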

  4. Restoration of Monotonicity Respecting in Dynamic Regression

    PubMed Central

    Huang, Yijian

    2017-01-01

    Dynamic regression models, including the quantile regression model and Aalen’s additive hazards model, are widely adopted to investigate evolving covariate effects. Yet lack of monotonicity respecting with standard estimation procedures remains an outstanding issue. Advances have recently been made, but none provides a complete resolution. In this article, we propose a novel adaptive interpolation method to restore monotonicity respecting, by successively identifying and then interpolating nearest monotonicity-respecting points of an original estimator. Under mild regularity conditions, the resulting regression coefficient estimator is shown to be asymptotically equivalent to the original. Our numerical studies demonstrate that the proposed estimator is much smoother and may have better finite-sample efficiency than the original, as well as other competing monotonicity-respecting estimators where these are available (only in special cases). Illustration with a clinical study is provided. PMID:29430068
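    For a nondecreasing target, the interpolation idea can be sketched as: keep the points where the estimate attains its running maximum (the monotonicity-respecting points) and interpolate linearly between them. This is a simplified caricature of the paper's adaptive procedure, on simulated data:

```python
import numpy as np

def restore_monotone(t, beta):
    """Keep points where the estimate attains its running maximum and
    linearly interpolate between them, yielding a monotone curve."""
    keep = beta >= np.maximum.accumulate(beta)   # true where beta[i] is the max so far
    return np.interp(t, t[keep], beta[keep])

t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(7)
raw = 2.0 * t + rng.normal(0.0, 0.08, t.size)    # noisy estimate of an increasing effect
smooth = restore_monotone(t, raw)
```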

  5. Prediction of flood quantiles at ungaged watersheds in Louisiana : final report.

    DOT National Transportation Integrated Search

    1989-12-01

    Four popular regional flood frequency methods were compared using Louisiana stream flow series. The state was divided into four homogeneous regions and all undistorted, long-term stream gages were used in the analysis. The GEV, TCEV, regional LP3 and...

  6. Factors associated with the income distribution of full-time physicians: a quantile regression approach.

    PubMed

    Shih, Ya-Chen Tina; Konrad, Thomas R

    2007-10-01

    Physician income is generally high, but quite variable; hence, physicians have divergent perspectives regarding health policy initiatives and market reforms that could affect their incomes. We investigated factors underlying the distribution of income within the physician population. Full-time physicians (N=10,777) from the restricted version of the 1996-1997 Community Tracking Study Physician Survey (CTS-PS), 1996 Area Resource File, and 1996 health maintenance organization penetration data. We conducted separate analyses for primary care physicians (PCPs) and specialists. We employed least squares and quantile regression models to examine factors associated with physician incomes at the mean and at various points of the income distribution, respectively. We accounted for the complex survey design for the CTS-PS data using appropriate weighted procedures and explored endogeneity using an instrumental variables method. We detected widespread and subtle effects of many variables on physician incomes at different points (10th, 25th, 75th, and 90th percentiles) in the distribution that were undetected when employing regression estimations focusing on only the means or medians. Our findings show that the effects of managed care penetration are demonstrable at the mean of specialist incomes, but are more pronounced at higher levels. Conversely, a gender gap in earnings occurs at all levels of income of both PCPs and specialists, but is more pronounced at lower income levels. The quantile regression technique offers an analytical tool to evaluate policy effects beyond the means. A longitudinal application of this approach may enable health policy makers to identify winners and losers among segments of the physician workforce and assess how market dynamics and health policy initiatives affect the overall physician income distribution over various time intervals.

  7. Differences in nutrient and energy contents of commonly consumed dishes prepared in restaurants v. at home in Hunan Province, China.

    PubMed

    Jia, Xiaofang; Liu, Jiawu; Chen, Bo; Jin, Donghui; Fu, Zhongxi; Liu, Huilin; Du, Shufa; Popkin, Barry M; Mendez, Michelle A

    2018-05-01

    Eating away from home is associated with poor diet quality, in part due to less healthy food choices and larger portions. However, few studies account for the potential additional contribution of differences in food composition between restaurant- and home-prepared dishes. The present study aimed to investigate differences in nutrients of dishes prepared in restaurants v. at home. Eight commonly consumed dishes were collected in twenty of each of the following types of locations: small and large restaurants, and urban and rural households. In addition, two fast-food items were collected from ten KFC, McDonald's and food stalls. Five samples per dish were randomly pooled from every location. Nutrients were analysed and energy was calculated in composite samples. Differences in nutrients of dishes by preparation location were determined. Hunan Province, China. Na, K, protein, total fat, fatty acids, carbohydrate and energy in dishes. On average, both the absolute and relative fat contents, SFA and Na:K ratio were higher in dishes prepared in restaurants than households (P < 0·05). Protein was 15 % higher in animal food-based dishes prepared in households than restaurants (P<0·05). Quantile regression models found that, at the 90th quantile, restaurant preparation was consistently negatively associated with protein and positively associated with the percentage of energy from fat in all dishes. Moreover, restaurant preparation also positively influenced the SFA content in dishes, except at the highest quantiles. These findings suggest that compared with home preparation, dishes prepared in restaurants in China may differ in concentrations of total fat, SFA, protein and Na:K ratio, which may further contribute, beyond food choices, to less healthy nutrient intakes linked to eating away from home.

  8. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method can allow researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework can allow researchers to assimilate the site-specific secondary information where the observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  9. Differences in nutrient and energy content of commonly-consumed dishes prepared in restaurants vs. at home in Hunan province, China

    PubMed Central

    Jia, Xiaofang; Liu, Jiawu; Chen, Bo; Jin, Donghui; Fu, Zhongxi; Liu, Huilin; Du, Shufa; Popkin, Barry M.; Mendez, Michelle A.

    2017-01-01

    Objective Eating away from home is associated with poor diet quality, in part due to less healthy food choices and larger portions. However, few studies take into account the potential additional contribution of differences in food composition between restaurant- and home-prepared dishes. This study aimed to investigate differences in nutrients of dishes prepared in restaurants vs. at home. Design Eight commonly consumed dishes were collected in 20 of each of the following types of locations: small and large restaurants, and urban and rural households. In addition, two fast-food items were collected from 10 KFC’s, McDonald’s, and food stalls. Five samples per dish were randomly pooled from every location. Nutrients were analyzed and energy was calculated in composite samples. Differences in nutrients of dishes by preparation location were determined. Setting Urban and rural. Subjects Sodium, potassium, protein, total fat, fatty acids, carbohydrate, and energy in dishes. Results On average, both the absolute and relative fat content, saturated fatty acid (SFA) and sodium/potassium ratio were higher in dishes prepared in restaurants than households (P<0.05). Protein was 15% higher in animal food-based dishes prepared in households than restaurants (P <0.05). Quantile regression models found that, at the 90th quantile, restaurant preparation was consistently negatively associated with protein and positively associated with the percentage energy from fat in all dishes. Moreover, restaurant preparation also positively influenced the SFA content in dishes, except at the highest quantiles. Conclusions These findings suggest that compared to home preparation, dishes prepared in restaurants in China may differ in concentrations of total fat, SFA, protein, and sodium/potassium ratio, which may further contribute, beyond food choices, to less healthy nutrient intake linked to eating away from home. PMID:29306339

  10. Subjective wellbeing, suicide and socioeconomic factors: an ecological analysis in Hong Kong.

    PubMed

    Hsu, C-Y; Chang, S-S; Yip, P S F

    2018-04-10

    There has recently been an increased interest in mental health indicators for the monitoring of population wellbeing, which is among the targets of Sustainable Development Goals adopted by the United Nations. Levels of subjective wellbeing and suicide rates have been proposed as indicators of population mental health, but prior research is limited. Data on individual happiness and life satisfaction were sourced from a population-based survey in Hong Kong (2011). Suicide data were extracted from Coroner's Court files (2005-2013). Area characteristic variables included local poverty rate and four factors derived from a factor analysis of 21 variables extracted from the 2011 census. The associations between mean happiness and life satisfaction scores and suicide rates were assessed using Pearson correlation coefficient at two area levels: 18 districts and 30 quantiles of large street blocks (LSBs; n = 1620). LSB is a small area unit with a higher level of within-unit homogeneity compared with districts. Partial correlations were used to control for area characteristics. Happiness and life satisfaction demonstrated weak inverse associations with suicide rate at the district level (r = -0.32 and -0.36, respectively) but very strong associations at the LSB quantile level (r = -0.83 and -0.84, respectively). There were generally very weak or weak negative correlations across sex/age groups at the district level but generally moderate to strong correlations at the LSB quantile level. The associations were markedly attenuated or became null after controlling for area characteristics. Subjective wellbeing is strongly associated with suicide at a small area level; socioeconomic factors can largely explain this association. Socioeconomic factors could play an important role in determining the wellbeing of the population, and this could inform policies aimed at enhancing population wellbeing.
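    The attenuation reported above once area characteristics are controlled for is what a partial correlation captures. A minimal simulated sketch with poverty as the shared driver (values are illustrative, not the Hong Kong data):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out z from both."""
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(8)
n = 2000
poverty = rng.normal(size=n)                    # area poverty (illustrative)
wellbeing = -poverty + rng.normal(0, 0.5, n)    # poverty lowers wellbeing
suicide = poverty + rng.normal(0, 0.5, n)       # poverty raises suicide rate
raw = np.corrcoef(wellbeing, suicide)[0, 1]     # strongly negative
adj = partial_corr(wellbeing, suicide, poverty) # near zero once poverty is removed
```

    The raw wellbeing-suicide correlation is strong, but vanishes once the shared socioeconomic driver is partialled out, mirroring the pattern in the study.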

  11. Bias correction of surface downwelling longwave and shortwave radiation for the EWEMBI dataset

    NASA Astrophysics Data System (ADS)

    Lange, Stefan

    2018-05-01

    Many meteorological forcing datasets include bias-corrected surface downwelling longwave and shortwave radiation (rlds and rsds). Methods used for such bias corrections range from multi-year monthly mean value scaling to quantile mapping at the daily timescale. An additional downscaling is necessary if the data to be corrected have a higher spatial resolution than the observational data used to determine the biases. This was the case when EartH2Observe (E2OBS; Calton et al., 2016) rlds and rsds were bias-corrected using more coarsely resolved Surface Radiation Budget (SRB; Stackhouse Jr. et al., 2011) data for the production of the meteorological forcing dataset EWEMBI (Lange, 2016). This article systematically compares various parametric quantile mapping methods designed specifically for this purpose, including those used for the production of EWEMBI rlds and rsds. The methods vary in the timescale at which they operate, in their way of accounting for physical upper radiation limits, and in their approach to bridging the spatial resolution gap between E2OBS and SRB. It is shown how temporal and spatial variability deflation related to bilinear interpolation and other deterministic downscaling approaches can be overcome by downscaling the target statistics of quantile mapping from the SRB to the E2OBS grid such that the sub-SRB-grid-scale spatial variability present in the original E2OBS data is retained. Cross validations at the daily and monthly timescales reveal that it is worthwhile to take empirical estimates of physical upper limits into account when adjusting either radiation component and that, overall, bias correction at the daily timescale is more effective than bias correction at the monthly timescale if sampling errors are taken into account.

  12. Environmental influence on mussel (Mytilus edulis) growth - A quantile regression approach

    NASA Astrophysics Data System (ADS)

    Bergström, Per; Lindegarth, Mats

    2016-03-01

    The need for methods for sustainable management and use of coastal ecosystems has increased in the last century. A key aspect of achieving ecologically and economically sustainable aquaculture in threatened coastal areas is the need for geographic information on growth and potential production capacity. Growth varies over time and space and depends on a complex pattern of interactions between the bivalve and a diverse range of environmental factors (e.g. temperature, salinity, food availability). Understanding these processes and modelling the environmental control of bivalve growth has been central in aquaculture. In contrast to most conventional modelling techniques, quantile regression can handle cases where not all factors are measured, makes it possible to estimate effects at different levels of the response distribution, and therefore gives a more complete picture of the relationship between environmental factors and the biological response. Examination of the relationships between environmental factors and growth of the bivalve Mytilus edulis revealed patterns that varied both among levels of growth rate and within the range of the environmental variables along the Swedish west coast. The strongest patterns were found for water oxygen concentration, which had a negative effect on growth at all oxygen levels and growth levels. However, these patterns coincided with differences in growth among periods, and very little of the remaining variability within periods could be explained, indicating that interactive processes masked the importance of the individual variables. By using quantile regression and local regression (LOESS), this study was able to provide valuable information on environmental factors influencing the growth of M. edulis and important insights for the development of ecosystem-based management tools for aquaculture activities, their use in mitigation efforts, and the successful management of human use of coastal areas.

  13. Factors Associated with the Income Distribution of Full-Time Physicians: A Quantile Regression Approach

    PubMed Central

    Shih, Ya-Chen Tina; Konrad, Thomas R

    2007-01-01

    Objective Physician income is generally high, but quite variable; hence, physicians have divergent perspectives regarding health policy initiatives and market reforms that could affect their incomes. We investigated factors underlying the distribution of income within the physician population. Data Sources Full-time physicians (N=10,777) from the restricted version of the 1996–1997 Community Tracking Study Physician Survey (CTS-PS), 1996 Area Resource File, and 1996 health maintenance organization penetration data. Study Design We conducted separate analyses for primary care physicians (PCPs) and specialists. We employed least squares and quantile regression models to examine factors associated with physician incomes at the mean and at various points of the income distribution, respectively. We accounted for the complex survey design for the CTS-PS data using appropriate weighted procedures and explored endogeneity using an instrumental variables method. Principal Findings We detected widespread and subtle effects of many variables on physician incomes at different points (10th, 25th, 75th, and 90th percentiles) in the distribution that were undetected when employing regression estimations focusing on only the means or medians. Our findings show that the effects of managed care penetration are demonstrable at the mean of specialist incomes, but are more pronounced at higher levels. Conversely, a gender gap in earnings occurs at all levels of income of both PCPs and specialists, but is more pronounced at lower income levels. Conclusions The quantile regression technique offers an analytical tool to evaluate policy effects beyond the means. A longitudinal application of this approach may enable health policy makers to identify winners and losers among segments of the physician workforce and assess how market dynamics and health policy initiatives affect the overall physician income distribution over various time intervals. PMID:17850525

  14. A hybrid hydrologically complemented warning model for shallow landslides induced by extreme rainfall in Korean Mountain

    NASA Astrophysics Data System (ADS)

    Singh Pradhan, Ananta Man; Kang, Hyo-Sub; Kim, Yun-Tae

    2016-04-01

    This study uses a physically based approach to evaluate the factor of safety of hillslopes under different hydrological conditions in Mt Umyeon, south of Seoul. The hydrological conditions were determined using rainfall intensity and duration from a Korea-wide landslide inventory. A quantile regression method was used to establish probabilistic warning levels on the basis of rainfall thresholds. Physically based models are easily interpreted and have high predictive capability, but they rely on spatially explicit and accurate parameterization, which is commonly not achievable. Statistical probabilistic methods can include other causative factors that influence slope stability, such as forest, soil and geology, but rely on good landslide inventories for the site. This study describes a hybrid approach that combines physically based landslide susceptibility estimates for the different hydrological conditions. A presence-only maximum entropy model was used for the hybridization and to analyze the relation of landslides to the conditioning factors. About 80% of the landslides were among the unstable sites identified by the proposed model, demonstrating its effectiveness and accuracy in determining unstable areas and areas that require evacuation. The derived cumulative rainfall thresholds provide a valuable reference to guide disaster prevention authorities in issuing warning levels, with the potential to reduce losses and save lives.
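    A rainfall intensity-duration threshold at a low quantile can be sketched as a power law fitted by minimizing the check (pinball) loss. The coarse grid search below is a simple stand-in for a proper quantile-regression solver, and the data are simulated:

```python
import numpy as np

def pinball_loss(res, q):
    """Mean check (pinball) loss for residuals at quantile q."""
    return np.mean(np.where(res >= 0, q * res, (q - 1.0) * res))

def id_threshold(duration, intensity, q=0.05):
    """Fit the power-law threshold I = exp(a) * D**b at quantile q by grid
    search over (a, b) in log-log space."""
    x, y = np.log(duration), np.log(intensity)
    best = (np.inf, 0.0, 0.0)
    for b in np.linspace(-1.5, 0.0, 61):
        res0 = y - b * x
        for a in np.linspace(res0.min(), res0.max(), 201):
            loss = pinball_loss(res0 - a, q)
            if loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]

rng = np.random.default_rng(9)
D = rng.uniform(1.0, 48.0, 600)                          # duration (h)
I = np.exp(1.0 - 0.6 * np.log(D) + rng.normal(0, 0.3, 600))  # intensity (mm/h)
a, b = id_threshold(D, I, q=0.05)
below = np.mean(np.log(I) < a + b * np.log(D))           # events below threshold
```

    The fitted lower envelope leaves roughly 5% of events below it; sliding q up or down yields the graduated warning levels described above.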

  15. Comparison of NEXRAD multisensor precipitation estimates to rain gage observations in and near DuPage County, Illinois, 2002–12

    USGS Publications Warehouse

    Spies, Ryan R.; Over, Thomas M.; Ortel, Terry W.

    2018-05-21

    In this report, precipitation data from 2002 to 2012 from the hourly gridded Next-Generation Radar (NEXRAD)-based Multisensor Precipitation Estimate (MPE) precipitation product are compared to precipitation data from two rain gage networks—an automated tipping-bucket network of 25 rain gages operated by the U.S. Geological Survey (USGS) and 51 rain gages from the volunteer-operated Community Collaborative Rain, Hail, and Snow (CoCoRaHS) network—in and near DuPage County, Illinois, at a daily time step to test for long-term differences in space, time, and distribution. The NEXRAD–MPE data that are used are from the fifty 2.5-mile grid cells overlying the rain gages from the other networks. Because of the challenges of measuring frozen precipitation, the analysis period is separated into days with and without the chance of freezing conditions. The NEXRAD–MPE and tipping-bucket rain gage precipitation data are adjusted to account for undercatch by multiplying by a previously determined factor of 1.14. Under nonfreezing conditions, the three precipitation datasets are broadly similar in cumulative depth and distribution of daily values when the data are combined spatially across the networks. However, the NEXRAD–MPE data indicate a significant trend relative to both rain gage networks as a function of distance from the NEXRAD radar just south of the study area. During freezing conditions, only the heated gages of the USGS network were considered, and these gages indicate substantial mean undercatch of 50 and 61 percent compared to the NEXRAD–MPE and the CoCoRaHS gages, respectively. The heated USGS rain gages also indicate substantially lower quantile values during freezing conditions, except during the most extreme (highest) events. Because NEXRAD precipitation products are continually evolving, the report concludes with a discussion of recent changes in those products and their potential for improved precipitation estimation. 
An appendix provides an analysis of spatially combined NEXRAD–MPE precipitation data as a function of temperature at an hourly time scale and indicates, among other results, that most precipitation in the study area occurs at moderate temperatures of 30 to 74 degrees Fahrenheit. However, when precipitation does occur, its intensity increases with temperature to about 86 degrees Fahrenheit.

  16. Inclusion of historical information in flood frequency analysis using a Bayesian MCMC technique: a case study for the power dam Orlík, Czech Republic

    NASA Astrophysics Data System (ADS)

    Gaál, Ladislav; Szolgay, Ján; Kohnová, Silvia; Hlavčová, Kamila; Viglione, Alberto

    2010-01-01

    The paper deals with at-site flood frequency estimation in the case when information on hydrological events of extraordinary magnitude from the past is also available. For the joint frequency analysis of systematic observations and historical data, the Bayesian framework is chosen, which, through adequately defined likelihood functions, allows different sources of hydrological information to be incorporated, e.g., maximum annual flood peaks, historical events, and measurement errors. The distribution of the parameters of the fitted distribution function and the confidence intervals of the flood quantiles are derived by means of the Markov chain Monte Carlo (MCMC) simulation technique. The paper presents a sensitivity analysis related to the choice of the most influential parameters of the statistical model: the length of the historical period h and the perception threshold X0. These are involved in the statistical model under the assumption that, except for the events termed 'historical', none of the (unknown) peak discharges from the historical period h exceeded the threshold X0. Both higher values of h and lower values of X0 lead to narrower confidence intervals of the estimated flood quantiles; however, one should be prudent in selecting these parameters, to avoid making inferences based on wrong assumptions about the unknown hydrological events of the past. The Bayesian MCMC methodology is presented on the example of the maximum discharges observed during the warm half-year at the station Vltava-Kamýk (Czech Republic) in the period 1877-2002.
Although the 2002 flood peak, which is related to the vast flooding that affected a large part of Central Europe at that time, occurred in the recent past, the analysis treats it as a virtual 'historical' event in order to illustrate some crucial aspects of including information on extreme historical floods in at-site flood frequency analyses.
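The censored-likelihood idea described above (for the h historical years, only the k exceedances of the perception threshold X0 are known) can be sketched with a random-walk Metropolis sampler. Everything below is a hypothetical illustration, not the paper's data or model: a Gumbel parent, synthetic discharges, and arbitrary proposal scales.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical data (values are illustrative, not from the paper) ---
sys_peaks = rng.gumbel(loc=1000.0, scale=300.0, size=60)  # systematic annual maxima (m^3/s)
h, X0 = 120, 2500.0                 # historical period length, perception threshold
hist_peaks = np.array([2700.0, 3100.0])  # the k known 'historical' exceedances
k = len(hist_peaks)

def gumbel_logpdf(x, loc, scale):
    z = (x - loc) / scale
    return -np.log(scale) - z - np.exp(-z)

def gumbel_logcdf(x, loc, scale):
    return -np.exp(-(x - loc) / scale)

def log_likelihood(loc, scale):
    """Systematic peaks + historical exceedances; the h-k unknown historical
    peaks contribute only the information that they stayed below X0."""
    if scale <= 0:
        return -np.inf
    ll = gumbel_logpdf(sys_peaks, loc, scale).sum()
    ll += gumbel_logpdf(hist_peaks, loc, scale).sum()
    ll += (h - k) * gumbel_logcdf(X0, loc, scale)  # censored historical years
    return ll

# --- Random-walk Metropolis over (loc, scale), flat priors ---
theta = np.array([1000.0, 300.0])
ll_cur = log_likelihood(*theta)
chain = []
for _ in range(4000):
    prop = theta + rng.normal(0.0, [30.0, 15.0])
    ll_prop = log_likelihood(*prop)
    if np.log(rng.uniform()) < ll_prop - ll_cur:
        theta, ll_cur = prop, ll_prop
    chain.append(theta)
chain = np.array(chain[1000:])      # discard burn-in

# Posterior distribution of the 100-year flood quantile
q100 = chain[:, 0] - chain[:, 1] * np.log(-np.log(1 - 1 / 100.0))
lo, med, hi = np.percentile(q100, [5, 50, 95])
print(round(med, 1), round(hi - lo, 1))
```

The spread `hi - lo` is the quantity the sensitivity analysis tracks: raising h or lowering X0 tightens it, at the price of stronger assumptions about the unrecorded past.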

  17. Differential and Long-Term Language Impact on Math

    ERIC Educational Resources Information Center

    Chen, Fang; Chalhoub-Deville, Micheline

    2016-01-01

    Literature provides consistent evidence of a strong relationship between language proficiency and math achievement. However, research results conflict, supporting either an increasing or a decreasing longitudinal relationship between the two. This study explored longitudinal data and adopted quantile regression analyses to…

  18. Solvency II solvency capital requirement for life insurance companies based on expected shortfall.

    PubMed

    Boonen, Tim J

    2017-01-01

    This paper examines the consequences for a life annuity insurance company if the Solvency II solvency capital requirements (SCR) are calibrated based on expected shortfall (ES) instead of value-at-risk (VaR). We focus on the risk modules of the SCRs for three risk classes: equity risk, interest rate risk, and longevity risk. The stress scenarios are determined using the calibration method proposed by EIOPA in 2014. We apply the stress scenarios for these three risk classes to a fictitious life annuity insurance company. We find that, for EIOPA's current 99.5% VaR quantile, the stress scenarios of the various risk classes based on ES are close to those based on VaR. Should EIOPA choose to calibrate the stress scenarios to a smaller quantile, the longevity SCR would be relatively larger and the equity SCR relatively smaller if ES were used instead of VaR. We reach the same conclusion if stress scenarios are determined from empirical stress scenarios.
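The VaR/ES distinction at the heart of the comparison can be illustrated on simulated losses. The heavy-tailed loss distribution below is a hypothetical stand-in for an annuity portfolio, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical one-year loss distribution of a stylized annuity portfolio:
# Student-t draws give a heavy tail; the scaling is purely illustrative.
losses = rng.standard_t(df=4, size=200_000) * 1.0e6

def var(losses, level):
    """Value-at-risk: the `level` quantile of the loss distribution."""
    return np.quantile(losses, level)

def expected_shortfall(losses, level):
    """Expected shortfall: mean loss at or beyond the VaR at `level`."""
    v = var(losses, level)
    return losses[losses >= v].mean()

for level in (0.990, 0.995):
    print(level, round(var(losses, level)), round(expected_shortfall(losses, level)))
```

ES at a given level always exceeds VaR at the same level; an EIOPA-style recalibration would instead search for the (lower) ES level whose stress matches the 99.5% VaR stress.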

  19. Do High Consumers of Sugar-Sweetened Beverages Respond Differently to Price Changes? A Finite Mixture IV-Tobit Approach.

    PubMed

    Etilé, Fabrice; Sharma, Anurag

    2015-09-01

    This study compares the impact of a tax on sugar-sweetened beverages (SSBs) between moderate and high consumers in Australia. The key methodological contribution is that price response heterogeneity is identified while controlling for censoring of consumption at zero and endogeneity of expenditure by using a finite mixture instrumental variable Tobit model. The SSB price elasticity estimates show a decreasing trend across increasing consumption quantiles, from -2.3 at the median to -0.2 at the 95th quantile. Although high consumers of SSBs have a less elastic demand for SSBs, their very high consumption levels imply that a tax would achieve a larger reduction in their consumption and larger health gains. Our results also suggest that an SSB tax would represent a small fiscal burden for consumers whatever their pre-policy level of consumption, and that an excise tax should be preferred to an ad valorem tax. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Income elasticity of health expenditures in Iran.

    PubMed

    Zare, Hossein; Trujillo, Antonio J; Leidman, Eva; Buttorff, Christine

    2013-09-01

    Because of its policy implications, the income elasticity of health care expenditures is a subject of much debate. Governments may have an interest in subsidizing the care of those with low income. Using more than two decades of data from the Iran Household Expenditure and Income Survey, this article investigates the relationship between income and health care expenditure in urban and rural areas of Iran, a resource-rich, upper-middle-income country. We implemented spline and quantile regression techniques to obtain a more robust description of the relationship of interest. This study finds non-uniform effects of income on health expenditures. Although the results show that health care is a necessity for all income brackets, spline regression estimates indicate that the income elasticity is lowest for the poorest Iranians in both urban and rural areas. This suggests that they will show little flexibility in medical expenses as income fluctuates. Further, a quantile regression model assessing the effect of income at different levels of medical expenditure suggests that households with lower medical expenses are less elastic.
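In a log-log specification, the slope of a linear quantile regression of expenditure on income is exactly the income elasticity at that quantile of the expenditure distribution. The sketch below is illustrative only: synthetic data, hypothetical parameter values, and a crude grid-search fit of the pinball (quantile) loss rather than the article's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical log-income / log-health-expenditure data with heteroscedastic
# noise, so the elasticity (slope) genuinely differs across quantiles.
log_income = rng.uniform(8.0, 12.0, size=2000)
log_exp = 1.0 + 0.4 * log_income + 0.05 * (log_income - 8.0) * rng.normal(size=2000)

def pinball(residuals, tau):
    """Quantile ('pinball') loss: asymmetric absolute error."""
    return np.mean(np.where(residuals >= 0, tau * residuals, (tau - 1) * residuals))

def quantile_fit(x, y, tau, slopes=np.linspace(0.0, 1.0, 201)):
    """Crude linear quantile regression: grid-search the slope; for a fixed
    slope the pinball-optimal intercept is the tau-quantile of the residuals."""
    best = None
    for b in slopes:
        a = np.quantile(y - b * x, tau)
        loss = pinball(y - a - b * x, tau)
        if best is None or loss < best[0]:
            best = (loss, a, b)
    return best[1], best[2]

for tau in (0.1, 0.5, 0.9):
    a, b = quantile_fit(log_income, log_exp, tau)
    print(f"tau={tau:.1f}  elasticity={b:.2f}")
```

Production analyses would use a proper quantile regression solver (linear programming or interior-point methods) instead of a grid; the grid keeps the sketch dependency-free.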

  1. Structured Additive Quantile Regression for Assessing the Determinants of Childhood Anemia in Rwanda.

    PubMed

    Habyarimana, Faustin; Zewotir, Temesgen; Ramroop, Shaun

    2017-06-17

    Childhood anemia is among the most significant health problems faced by public health departments in developing countries. This study aims at assessing the determinants and possible spatial effects associated with childhood anemia in Rwanda. The 2014/2015 Rwanda Demographic and Health Survey (RDHS) data were used. The analysis was done using a structured spatial additive quantile regression model. The findings of this study revealed that the child's age; the duration of breastfeeding; the gender of the child; the nutritional status of the child (whether underweight and/or wasting); whether the child had a fever or a cough in the two weeks prior to the survey; whether the child received vitamin A supplementation in the six weeks before the survey; the household wealth index; the literacy of the mother; the mother's anemia status; and the mother's age at birth are all significant factors associated with childhood anemia in Rwanda. Furthermore, significant structured spatial location effects on childhood anemia were found.

  2. Intraday return inefficiency and long memory in the volatilities of forex markets and the role of trading volume

    NASA Astrophysics Data System (ADS)

    Shahzad, Syed Jawad Hussain; Hernandez, Jose Areola; Hanif, Waqas; Kayani, Ghulam Mujtaba

    2018-09-01

    We investigate the dynamics of efficiency and long memory, and the impact of trading volume on the efficiency of returns and volatilities of four major traded currencies, namely, the EUR, GBP, CHF and JPY. We do so by implementing full sample and rolling window multifractal detrended fluctuation analysis (MF-DFA) and a quantile-on-quantile (QQ) approach. This paper sheds new light by employing high frequency (5-min interval) data spanning from Jan 1, 2007 to Dec 31, 2016. Realized volatilities are estimated using Andersen et al.'s (2001) measure, while the QQ method employed is drawn from Sim and Zhou (2015). We find evidence of higher efficiency levels in the JPY and CHF currency markets. The impact of trading volume on efficiency is only significant for the JPY and CHF currencies. The GBP currency appears to be the least efficient, followed by the EUR. Implications of the results are discussed.

  3. Cross-platform normalization of microarray and RNA-seq data for machine learning applications

    PubMed Central

    Thompson, Jeffrey A.; Tan, Jie

    2016-01-01

    Large, publicly available gene expression datasets are often analyzed with the aid of machine learning algorithms. Although RNA-seq is increasingly the technology of choice, a wealth of expression data already exist in the form of microarray data. If machine learning models built from legacy data can be applied to RNA-seq data, larger, more diverse training datasets can be created and validation can be performed on newly generated data. We developed Training Distribution Matching (TDM), which transforms RNA-seq data for use with models constructed from legacy platforms. We evaluated TDM, as well as quantile normalization, nonparanormal transformation, and a simple log2 transformation, on both simulated and biological datasets of gene expression. Our evaluation included both supervised and unsupervised machine learning approaches. We found that TDM exhibited consistently strong performance across settings and that quantile normalization also performed well in many circumstances. We also provide a TDM package for the R programming language. PMID:26844019
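Quantile normalization, one of the baselines evaluated above, forces every sample (column) to share a common reference distribution, conventionally the mean of the sorted columns. A minimal sketch on a toy genes-by-samples matrix; ties are assigned by rank order rather than averaged, which is a simplification of the usual treatment:

```python
import numpy as np

def quantile_normalize(X):
    """X: genes x samples expression matrix. Returns a normalized copy in
    which every column has the same empirical distribution."""
    order = np.argsort(X, axis=0)            # per-column sort order
    ref = np.sort(X, axis=0).mean(axis=1)    # reference distribution (sorted)
    out = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        out[order[:, j], j] = ref            # map each rank back to the reference
    return out

# Toy matrix: 4 genes x 3 samples, hypothetical expression values
X = np.array([[5., 4., 3.],
              [2., 1., 4.],
              [3., 4., 6.],
              [4., 2., 8.]])
print(quantile_normalize(X))
```

After normalization, sorting any column yields the same reference vector, which is what makes cross-platform distributions directly comparable.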

  4. Self-Concept Predicts Academic Achievement Across Levels of the Achievement Distribution: Domain Specificity for Math and Reading.

    PubMed

    Susperreguy, Maria Ines; Davis-Kean, Pamela E; Duckworth, Kathryn; Chen, Meichu

    2017-09-18

    This study examines whether self-concept of ability in math and reading predicts later math and reading attainment across different levels of achievement. Data from three large-scale longitudinal data sets, the Avon Longitudinal Study of Parents and Children, National Institute of Child Health and Human Development-Study of Early Child Care and Youth Development, and Panel Study of Income Dynamics-Child Development Supplement, were used to answer this question by employing quantile regression analyses. After controlling for demographic variables, child characteristics, and early ability, the findings indicate that self-concept of ability in math and reading predicts later achievement in each respective domain across all quantile levels of achievement. These results were replicated across the three data sets representing different populations and provide robust evidence for the role of self-concept of ability in understanding achievement from early childhood to adolescence across the spectrum of performance (low to high). © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  5. Statistics of concentrations due to single air pollution sources to be applied in numerical modelling of pollutant dispersion

    NASA Astrophysics Data System (ADS)

    Tumanov, Sergiu

    A goodness-of-fit test based on rank statistics was applied to verify the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. To this end, the pollutant concentration was treated as an integer quantity, which is acceptable if one properly chooses the unit of measurement (in this case μg m-3) and if account is taken of the limited accuracy of the measurements. The results of the test being satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters (e.g., quantiles and cumulative probabilities of threshold concentrations being exceeded) at the grid points of a network covering the area of interest. This requires only accurate estimates of the means and variances of the concentration series, which can readily be obtained through routine air pollution dispersion modelling.
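The Eggenberger-Polya law is the negative binomial distribution, so it is fully determined by a mean and a variance via moment matching; that is what makes the approach workable on routine dispersion-model output. A sketch with hypothetical SO2 statistics (the numbers are illustrative, not from the paper):

```python
from math import exp, lgamma, log

def polya_params(m, v):
    """Moment-match the Eggenberger-Polya (negative binomial) law to a
    mean m and variance v (requires overdispersion, v > m)."""
    p = m / v           # 'success' probability
    r = m * m / (v - m) # shape parameter
    return r, p

def polya_pmf(k, r, p):
    """P(X = k) for the negative binomial with parameters (r, p)."""
    return exp(lgamma(k + r) - lgamma(r) - lgamma(k + 1)
               + r * log(p) + k * log(1 - p))

def exceedance_prob(threshold, m, v):
    """P(concentration > threshold) under the fitted Polya law."""
    r, p = polya_params(m, v)
    return 1.0 - sum(polya_pmf(k, r, p) for k in range(threshold + 1))

# Hypothetical hourly SO2 series statistics (ug/m3) at one grid point
m, v = 40.0, 900.0
print(round(exceedance_prob(125, m, v), 4))  # chance of exceeding 125 ug/m3
```

This is exactly the per-grid-point computation the abstract describes: dispersion modelling supplies (m, v), and the fitted law supplies quantiles and threshold exceedance probabilities.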

  6. Observed and predicted sensitivities of extreme surface ozone to meteorological drivers in three US cities

    NASA Astrophysics Data System (ADS)

    Fix, Miranda J.; Cooley, Daniel; Hodzic, Alma; Gilleland, Eric; Russell, Brook T.; Porter, William C.; Pfister, Gabriele G.

    2018-03-01

    We conduct a case study of observed and simulated maximum daily 8-h average (MDA8) ozone (O3) in three US cities for summers during 1996-2005. The purpose of this study is to evaluate the ability of a high resolution atmospheric chemistry model to reproduce observed relationships between meteorology and high or extreme O3. We employ regional coupled chemistry-transport model simulations to make three types of comparisons between simulated and observational data, comparing (1) tails of the O3 response variable, (2) distributions of meteorological predictor variables, and (3) sensitivities of high and extreme O3 to meteorological predictors. This last comparison is made using two methods: quantile regression, for the 0.95 quantile of O3, and tail dependence optimization, which is used to investigate even higher O3 extremes. Across all three locations, we find substantial differences between simulations and observational data in both meteorology and meteorological sensitivities of high and extreme O3.

  7. Ecoclimatic indicators to study crop suitability in present and future climatic conditions

    NASA Astrophysics Data System (ADS)

    Caubel, Julie; Garcia de Cortazar Atauri, Inaki; Huard, Frédéric; Launay, Marie; Ripoche, Dominique; Gouache, David; Bancal, Marie-Odile; Graux, Anne-Isabelle; De Noblet, Nathalie

    2013-04-01

    Climate change is expected to affect both regional and global food production through changes in overall agroclimatic conditions. It is therefore necessary to develop simple tools for diagnosing crop suitability in a given area so that stakeholders can envisage land use adaptations under climate change. The most common way to investigate potential impacts of climate on the evolution of agrosystems is to use an array of agroclimatic indicators, which provide synthetic information derived from climatic variables and calculated within fixed periods (e.g., January 1 to July 31). However, the information obtained over such fixed periods does not take into account the plant's response to climate. In this work, we present some results of the research program ORACLE (Opportunities and Risks of Agrosystems and forests in response to CLimate, socio-economic and policy changEs in France and Europe). We propose a suite of relevant ecoclimatic indicators, based on temperature and rainfall, to evaluate crop suitability under both present and new climatic conditions. Ecoclimatic indicators are agroclimatic indicators (e.g., grain heat stress) calculated during specific phenological phases (e.g., the grain filling period, from flowering to harvest) so as to take into account the plant's response to climate; each indicator is linked to the ecophysiological process it characterizes (e.g., grain filling). To illustrate this methodology, we studied the suitability of winter wheat under future climatic conditions at three distinct French sites: Toulouse, Dijon, and Versailles. Indicators were calculated using climatic data from 1950 to 2100 simulated by the global climate model ARPEGE forced by a greenhouse effect corresponding to the SRES A1B scenario. The quantile-quantile downscaling method was applied to obtain data for the three locations.
Phenological stages (emergence, ear at 1 cm, flowering, beginning of grain filling, and harvest) were simulated by the STICS, CERES, and PANORAMIX crop models with the same input climatic data. Results showed that phenological stages tend to be reached earlier in the future. Significant differences were noted between indicators calculated over fixed calendar periods and indicators calculated during phenological phases; ecoclimatic indicators are therefore relevant for providing accurate information about crop suitability in the context of climate change. Whereas most of the indicators do not show any significant changes in the future, plant mortality due to frost risk between emergence and ear at 1 cm tends to decrease, and water supply tends to become more limiting. These indicators do not replace models but represent additional tools for understanding and spatializing some results obtained by models. Their use can provide a spatial distribution of crops according to their suitability under present or future climatic conditions and enable us to minimize the risk of crop failure. It would be interesting to consider response uncertainties arising from the uncertainties in future climatic projections by using different greenhouse gas emission scenarios and downscaling methods.

  8. Soil Moisture as an Estimator for Crop Yield in Germany

    NASA Astrophysics Data System (ADS)

    Peichl, Michael; Meyer, Volker; Samaniego, Luis; Thober, Stephan

    2015-04-01

    Annual crop yield depends on various factors such as soil properties, management decisions, and meteorological conditions. Unfavorable weather conditions, e.g. droughts, can drastically diminish crop yield in rain-fed agriculture. For example, the 2003 drought caused direct losses of 1.5 billion EUR in Germany alone. Predicting crop yields allows the negative effects of weather extremes, which are expected to occur more often in the future due to climate change, to be mitigated. A standard approach in economics is to predict the impact of climate change on agriculture as a function of temperature and precipitation. This approach has been developed further using concepts like growing degree days; other econometric models use nonlinear functions of heat or vapor pressure deficit. However, none of these approaches uses soil moisture to predict crop yield. We hypothesize that soil moisture is a better indicator of stress on plant growth than estimates based on precipitation and temperature, because the latter variables do not explicitly account for the available water content in the root zone, which is the primary source of water supply for plant growth. In this study, a reduced-form panel approach is applied to estimate a multivariate econometric production function for the years 1999 to 2010. Annual yield data for various crops at the administrative district level serve as dependent variables. The explanatory variable of major interest is the Soil Moisture Index (SMI), which quantifies anomalies in root zone soil moisture. The SMI is computed by the mesoscale Hydrological Model (mHM, www.ufz.de/mhm) and represents the monthly soil water quantile at a 4 km2 grid resolution covering all of Germany. A reduced-form model is suitable because the SMI is the result of a stochastic weather process and can therefore be considered exogenous. For ease of interpretation, a linear functional form is preferred.
Meteorological, phenological, geological, agronomic, and socio-economic variables are also considered to extend the model in order to reveal the proper causal relation. First results show that both dry and wet extremes of the SMI have a negative impact on crop yield for winter wheat. This indicates that soil moisture has at least a limiting effect on crop production.

  9. Quantifying how the full local distribution of daily precipitation is changing and its uncertainties

    NASA Astrophysics Data System (ADS)

    Stainforth, David; Chapman, Sandra; Watkins, Nicholas

    2016-04-01

    The study of the consequences of global warming would benefit from quantification of geographical patterns of change at specific thresholds or quantiles, and from a better understanding of the intrinsic uncertainties in such quantities. For precipitation, a range of indices have been developed which focus on high percentiles (e.g. rainfall falling on days above the 99th percentile) and on absolute extremes (e.g. maximum annual one-day precipitation), but scientific assessments are best undertaken in the context of changes in the whole climatic distribution. Furthermore, the relevant thresholds for climate-vulnerable policy decisions, adaptation planning, and impact assessments vary according to the specific sector and location of interest. We present a methodology which maintains the flexibility to provide information at different thresholds for different downstream users, both scientists and decision makers. We develop a method[1,2] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes in daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the amount of precipitation on those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing: not only determining which quantiles and geographical locations show the greatest and smallest changes, but also those at which uncertainty undermines the ability to make confident statements about any change there may be.
We demonstrate this approach using E-OBS gridded data[3], timeseries of local daily precipitation across Europe over the last 60+ years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the geographical pattern of change at given thresholds of precipitation. This information is model-independent, thus providing data of direct value in model calibration and assessment. [1] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2013: On estimating local long-term climate trends. Phil. Trans. R. Soc. A, 371, 20120287. [2] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2015: Limits to the quantification of local climate change. Environ. Res. Lett., 10, 094018. [3] M. R. Haylock et al., 2008: A European daily high-resolution gridded data set of surface temperature and precipitation. J. Geophys. Res. (Atmospheres), 113, D20119.
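The core of such an analysis can be approximated in a simple form: compare empirical quantiles of daily precipitation between two periods and bootstrap the difference to flag quantiles at which natural variability undermines confidence in any change. The data below are synthetic (gamma draws with a slightly wetter later period), not E-OBS, and the bootstrap is a stand-in for the papers' more careful treatment:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily precipitation (mm) for two 30-year periods at one grid
# point; the later period is given a slightly larger scale (wetter tail).
early = rng.gamma(2.0, 4.0, size=30 * 365)
late = rng.gamma(2.0, 4.3, size=30 * 365)

taus = np.array([0.5, 0.9, 0.99])

def quantile_change(a, b, taus, n_boot=500):
    """Change in each quantile between samples a and b, with a bootstrap
    5-95% interval quantifying sampling variability of that change."""
    delta = np.quantile(b, taus) - np.quantile(a, taus)
    boots = np.empty((n_boot, len(taus)))
    for i in range(n_boot):
        ra = rng.choice(a, size=len(a))   # resample each period with replacement
        rb = rng.choice(b, size=len(b))
        boots[i] = np.quantile(rb, taus) - np.quantile(ra, taus)
    lo, hi = np.percentile(boots, [5, 95], axis=0)
    return delta, lo, hi

delta, lo, hi = quantile_change(early, late, taus)
for t, d, l, h in zip(taus, delta, lo, hi):
    print(f"q{t:.2f}: change {d:+.2f} mm  [{l:+.2f}, {h:+.2f}]")
```

For a pure scale change the estimated shift grows with the quantile level, while the bootstrap interval widens toward the extremes, which is precisely the trade-off between signal and confidence the abstract discusses.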

  10. The quality and value of seasonal precipitation forecasts for an early warning of large-scale droughts and floods in West Africa

    NASA Astrophysics Data System (ADS)

    Bliefernicht, Jan; Seidel, Jochen; Salack, Seyni; Waongo, Moussa; Laux, Patrick; Kunstmann, Harald

    2017-04-01

    Seasonal precipitation forecasts are a crucial source of information for early warning of hydro-meteorological extremes in West Africa. However, the current seasonal forecasting system used by the West African weather services in the framework of the West African Climate Outlook Forum (PRESAO) is limited to probabilistic precipitation forecasts with a 1-month lead time. To improve this provision, we use an ensemble-based quantile-quantile transformation for bias correction of precipitation forecasts provided by a global seasonal ensemble prediction system, the Climate Forecast System Version 2 (CFS2). The statistical technique eliminates systematic differences between global forecasts and observations while having the potential to preserve the signal from the model. The technique also has the advantage that it can easily be implemented at national weather services with low capacities. It is used to generate probabilistic forecasts of monthly and seasonal precipitation amounts and of other precipitation indices useful for early warning of large-scale droughts and floods in West Africa. The evaluation of the statistical technique is done using CFS hindcasts (1982 to 2009) in cross-validation mode to determine the performance of the precipitation forecasts at several lead times, focusing on drought and flood events over the Volta and Niger basins. In addition, operational forecasts provided by PRESAO are analyzed from 1998 to 2015. The precipitation forecasts are compared to low-skill reference forecasts generated from gridded observations (i.e. GPCC, CHIRPS) and from a novel in-situ gauge database from national observation networks (see Poster EGU2017-10271). The forecasts are evaluated using state-of-the-art verification techniques to determine specific quality attributes of probabilistic forecasts such as reliability, accuracy, and skill.
In addition, cost-loss approaches are used to determine the value of the probabilistic forecasts for multiple users in warning situations. The outcomes of the hindcast experiment for the Volta basin illustrate that the statistical technique can clearly improve the CFS precipitation forecasts, with the potential to provide skillful and valuable early precipitation warnings for large-scale drought and flood situations several months ahead. In this presentation we give a detailed overview of the ensemble-based quantile-quantile transformation, its validation and verification, and the possibilities of this technique to complement PRESAO. We also highlight the performance of this technique for extremes such as the Sahel drought of the 1980s and in comparison to the various reference data sets (e.g. CFS2, PRESAO, observational data sets) used in this study.
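The basic quantile-quantile transformation underlying this kind of bias correction maps each forecast value to the observed value occupying the same quantile in the training distributions. A minimal sketch on synthetic data (the gamma distributions and bias are hypothetical stand-ins for model output and gauge observations, not CFS2 or the gauge database):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical training data: the "model" precipitation is biased (too wet)
# relative to the "observations"; all values are illustrative (mm/month).
obs = rng.gamma(shape=2.0, scale=5.0, size=3000)
mod = rng.gamma(shape=2.0, scale=8.0, size=3000) + 3.0

def qq_transform(forecast, mod_train, obs_train):
    """Quantile-quantile mapping: find each forecast's quantile in the model
    training sample, then return the observed value at that same quantile."""
    q = np.searchsorted(np.sort(mod_train), forecast) / len(mod_train)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_train, q)

raw = rng.gamma(shape=2.0, scale=8.0, size=500) + 3.0  # new biased forecasts
corrected = qq_transform(raw, mod, obs)
print(round(raw.mean(), 1), round(corrected.mean(), 1), round(obs.mean(), 1))
```

Because the mapping is monotone, the rank ordering of the forecasts (the model's signal) is preserved while the systematic difference from the observed climatology is removed, which is the property the abstract emphasizes.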

  11. Mathematical and Statistical Software Index.

    DTIC Science & Technology

    1986-08-01

    …(geometric) mean; HMEAN - harmonic mean; MEDIAN - median; MODE - mode; QUANT - quantiles; OGIVE - distribution curve; IQRNG - interpercentile range; RANGE - range… multiphase pivoting algorithm; cross-classification; multiple discriminant analysis; cross-tabulation; multiple-objective model; curve fitting… *RANGEX (Correct Correlations for Curtailment of Range)… *RUMMAGE II (Analysis…

  12. Differential Language Influence on Math Achievement

    ERIC Educational Resources Information Center

    Chen, Fang

    2010-01-01

    New models are commonly designed to overcome limitations of existing ones. Quantile regression is introduced in this paper because it can provide information that a regular mean regression misses. This research aims to demonstrate its utility in the educational research and measurement field for questions that may not be detectable otherwise.…

  13. Effects of Individual Development Accounts (IDAs) on Household Wealth and Saving Taste

    ERIC Educational Resources Information Center

    Huang, Jin

    2010-01-01

    This study examines effects of individual development accounts (IDAs) on household wealth of low-income participants. Methods: This study uses longitudinal survey data from the American Dream Demonstration (ADD) involving experimental design (treatment group = 537, control group = 566). Results: Results from quantile regression analysis indicate…

  14. Yield and yield gaps in central U.S. corn production systems

    USDA-ARS?s Scientific Manuscript database

    The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield. Quantile regression analysis was applied to county maize (Zea mays L.) yields (1972 – 2011) from Kentucky, Iowa and Nebraska (irrigated) (total of 115 counties) to e...

  15. Yield gaps and yield relationships in US soybean production systems

    USDA-ARS?s Scientific Manuscript database

    The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield to meet the food demands of future populations. Quantile regression analysis was applied to county soybean [Glycine max (L.) Merrill] yields (1971 – 2011) from Kentuc...

  16. A Bibliography for the ABLUE.

    DTIC Science & Technology

    1982-06-01

    …scale based on two symmetric quantiles. Sankhya A 30, 335-336. [S] Gupta, S. S. and Gnanadesikan, M. (1966). Estimation of the parameters of the logistic… and Cheng (1971, 1972, 1974); Chan, Cheng, Mead and Panjer (1973); Cheng (1975); Eubank (1979, 1981a,b); Gupta and Gnanadesikan (1966); Hassanein (1969b…

  17. Teacher Salaries and Teacher Aptitude: An Analysis Using Quantile Regressions

    ERIC Educational Resources Information Center

    Gilpin, Gregory A.

    2012-01-01

    This study investigates the relationship between salaries and scholastic aptitude for full-time public high school humanities and mathematics/sciences teachers. For identification, we rely on variation in salaries between adjacent school districts within the same state. The results indicate that teacher aptitude is positively correlated with…

  18. Geographic variability in elevation and topographic constraints on the distribution of native and nonnative trout in the Great Basin

    USGS Publications Warehouse

    Warren, Dana R.; Dunham, Jason B.; Hockman-Wert, David

    2014-01-01

    Understanding local and geographic factors influencing species distributions is a prerequisite for conservation planning. Our objective in this study was to model local and geographic variability in elevations occupied by native and nonnative trout in the northwestern Great Basin, USA. To this end, we analyzed a large existing data set of trout presence (5,156 observations) to evaluate two fundamental factors influencing occupied elevations: climate-related gradients in geography and local constraints imposed by topography. We applied quantile regression to model upstream and downstream distribution elevation limits for each trout species commonly found in the region (two native and two nonnative species). With these models in hand, we simulated an upstream shift in elevation limits of trout distributions to evaluate potential consequences of habitat loss. Downstream elevation limits were inversely associated with latitude, reflecting regional gradients in temperature. Upstream limits were positively related to maximum stream elevation as expected. Downstream elevation limits were constrained topographically by valley bottom elevations in northern streams but not in southern streams, where limits began well above valley bottoms. Elevation limits were similar among species. Upstream shifts in elevation limits for trout would lead to more habitat loss in the north than in the south, a result attributable to differences in topography. Because downstream distributions of trout in the north extend into valley bottoms with reduced topographic relief, trout in more northerly latitudes are more likely to experience habitat loss associated with an upstream shift in lower elevation limits. 
By applying quantile regression to relatively simple information (species presence, elevation, geography, topography), we were able to identify elevation limits for trout in the Great Basin and explore the effects of potential shifts in these limits that could occur in response to changing climate conditions that alter streams directly (e.g., through changes in temperature and precipitation) or indirectly (e.g., through changing water use).

  19. Effect of uncertainties on probabilistic-based design capacity of hydrosystems

    NASA Astrophysics Data System (ADS)

    Tung, Yeou-Koung

    2018-02-01

Hydrosystems engineering designs involve analysis of hydrometric data (e.g., rainfall, floods) and use of hydrologic/hydraulic models, all of which contribute various degrees of uncertainty to the design process. Uncertainties in hydrosystem designs can generally be categorized as aleatory or epistemic. The former arises from the natural randomness of hydrologic processes, whereas the latter stems from knowledge deficiencies in model formulation and model parameter specification. This study shows that the presence of epistemic uncertainties induces uncertainty in determining the design capacity. The designer therefore needs to quantify the uncertainty features of the design capacity to determine a capacity with a stipulated performance reliability under the design condition. Using detention basin design as an example, the study illustrates a methodological framework that considers aleatory uncertainty from rainfall and epistemic uncertainties from the runoff coefficient, the curve number, and sampling error in the design rainfall magnitude. The effects of including different items of uncertainty, and of performance reliability, on the design detention capacity are examined. A numerical example shows that the mean design capacity of the detention basin increases with the design return period, and this relation is found to be practically the same regardless of the uncertainty types considered. The standard deviation associated with the design capacity, when subject to epistemic uncertainty, increases with both design frequency and the number of epistemic uncertainty items involved. The epistemic uncertainty due to sampling error in rainfall quantiles should not be ignored: even with a sample size of 80 (relatively large for a hydrologic application), including sampling error in rainfall quantiles resulted in a standard deviation about 2.5 times higher than that obtained when considering only the uncertainty of the runoff coefficient and curve number.
Furthermore, the presence of epistemic uncertainties in the design would result in underestimation of the annual failure probability of the hydrosystem, effectively discounting the anticipated design return period.

  20. Assessment of probabilistic areal reduction factors of precipitations for the entire French territory with gridded rainfall data.

    NASA Astrophysics Data System (ADS)

    Fouchier, Catherine; Maire, Alexis; Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2016-04-01

The starting point of our study was the availability of maps of rainfall quantiles covering the entire French mainland territory at a spatial resolution of 1 km². These maps display the rainfall amounts estimated for different rainfall durations (from 15 minutes to 72 hours) and different return periods (from 2 years up to 1,000 years). They are provided by a regionalized stochastic hourly point-rainfall generator, the SHYREG method, previously developed by Irstea (Arnaud et al., 2007; Cantet and Arnaud, 2014). Because it is calibrated independently on numerous rain gauges (with an average density across the country of one rain gauge per 200 km²), this method suffers from a limitation common to point-process rainfall generators: it can only reproduce point rainfall patterns and cannot generate rainfall fields. It therefore cannot provide areal rainfall quantiles, although such estimates are needed for constructing design rainfall and for diagnosing observed events. One means of bridging the gap between our local rainfall quantiles and areal rainfall quantiles is the concept of probabilistic areal reduction factors (ARF) of rainfall, as defined by Omolayo (1993). This concept makes it possible to estimate areal rainfall of a particular frequency within a certain amount of time from point rainfalls of the same frequency and duration. Assessing such ARF for the whole French territory is of particular interest, since it would allow us to compute areal rainfall quantiles, and eventually watershed rainfall quantiles, from the already available grids of statistical point rainfall of the SHYREG method. Our purpose was then to assess these ARF from long time series of spatial rainfall data.
We used two sets of rainfall fields: i) hourly rainfall fields from a 10-year reference database of Quantitative Precipitation Estimation (QPE) over France (Tabary et al., 2012); ii) daily rainfall fields resulting from a 53-year high-resolution atmospheric reanalysis over France with the SAFRAN gauge-based analysis system (Vidal et al., 2010). We then built samples of maximum rainfalls for each cell location (the "point" rainfalls) and for different areas centered on each cell location (the areal rainfalls) of these gridded data. To compute rainfall quantiles, we fitted a Gumbel law, using the L-moment method, to each of these samples. Our daily and hourly ARF then showed four main trends: i) a sensitivity to the return period, with ARF values decreasing as the return period increases; ii) a sensitivity to the rainfall duration, with ARF values decreasing as the rainfall duration decreases; iii) a sensitivity to the season, with ARF values smaller in summer than in winter; iv) a sensitivity to geographical location, with low ARF values in the French Mediterranean area and ARF values close to 1 in the climatic zones of northern and western France (oceanic to semi-continental climate). The results of this data-intensive study, conducted for the first time over the whole French territory, are in agreement with studies conducted abroad (e.g., Allen and DeGaetano, 2005; Overeem et al., 2010) and confirm and extend the results of previous studies carried out in France on smaller areas and with fewer rainfall durations (e.g., Ramos et al., 2006; Neppel et al., 2003).
References
Allen R. J. and DeGaetano A. T. (2005). Areal reduction factors for two eastern United States regions with high rain-gauge density. Journal of Hydrologic Engineering 10(4): 327-335.
Arnaud P., Fine J.-A. and Lavabre J. (2007). An hourly rainfall generation model applicable to all types of climate. Atmospheric Research 85(2): 230-242.
Cantet P. and Arnaud P. (2014). Extreme rainfall analysis by a stochastic model: impact of the copula choice on the sub-daily rainfall generation. Stochastic Environmental Research and Risk Assessment 28(6): 1479-1492.
Neppel L., Bouvier C. and Lavabre J. (2003). Areal reduction factor probabilities for rainfall in Languedoc-Roussillon. IAHS-AISH Publication (278): 276-283.
Omolayo A. S. (1993). On the transposition of areal reduction factors for rainfall frequency estimation. Journal of Hydrology 145(1-2): 191-205.
Overeem A., Buishand T. A., Holleman I. and Uijlenhoet R. (2010). Extreme value modeling of areal rainfall from weather radar. Water Resources Research 46(9): 10 p.
Ramos M.-H., Leblois E. and Creutin J.-D. (2006). From point to areal rainfall: Linking the different approaches for the frequency characterisation of rainfalls in urban areas. Water Science and Technology 54(6-7): 33-40.
Tabary P., Dupuy P., L'Henaff G., Gueguen C., Moulin L., Laurantin O., Merlier C. and Soubeyroux J.-M. (2012). A 10-year (1997-2006) reanalysis of Quantitative Precipitation Estimation over France: methodology and first results. IAHS-AISH Publication (351): 255-260.
Vidal J.-P., Martin E., Franchistéguy L., Baillon M. and Soubeyroux J.-M. (2010). A 50-year high-resolution atmospheric reanalysis over France with the Safran system. International Journal of Climatology 30(11): 1627-1644.
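The quantile computation described above (a Gumbel fit by the L-moment method, then the ratio of areal to point quantiles) can be sketched as follows. This is a hedged illustration: the two annual-maximum series are synthetic stand-ins, not the SHYREG or radar data, and the helper names are invented.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_lmoments(sample):
    # Sample L-moments via probability-weighted moments (ascending order)
    x = np.sort(sample)
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    # Gumbel relations: l1 = xi + gamma * alpha, l2 = alpha * ln(2)
    alpha = l2 / np.log(2)
    xi = l1 - EULER_GAMMA * alpha
    return xi, alpha

def gumbel_quantile(xi, alpha, T):
    # Rainfall depth with return period T (years)
    return xi - alpha * np.log(-np.log(1 - 1 / T))

# Hypothetical 53-year annual-maximum series (mm): areal averages are
# smoother than point values, so the ARF should fall below 1
rng = np.random.default_rng(1)
point_max = rng.gumbel(50, 12, 53)
areal_max = rng.gumbel(42, 9, 53)

T = 10
arf = (gumbel_quantile(*gumbel_lmoments(areal_max), T)
       / gumbel_quantile(*gumbel_lmoments(point_max), T))
```

The ratio at matching duration and return period is the probabilistic ARF in the sense of Omolayo (1993); repeating this for grids of durations, return periods, and area sizes yields the trends discussed above.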

  1. Projections of meteorological and snow conditions in the Pyrenees using adjusted EURO-CORDEX climate projections

    NASA Astrophysics Data System (ADS)

    Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Soubeyroux, Jean-Michel; Lafaysse, Matthieu

    2017-04-01

Current and future availability of seasonal snow is a recurring topic in mountain regions such as the Pyrenees, where winter tourism and hydropower production are large contributors to the regional revenues in France, Spain and Andorra. Associated changes in river discharges, their consequences for water storage management, the future vulnerability of Pyrenean ecosystems, and the occurrence of climate-related hazards such as debris flows and avalanches are also under consideration. However, to generate projections of snow conditions, a traditional dynamical downscaling approach with spatial resolutions typically between 10 and 50 km is not sufficient to capture the fine-scale processes and thresholds at play. In particular, the altitudinal resolution matters, since the phase of precipitation is mainly controlled by temperature, which is altitude-dependent. Moreover, simulations from general circulation models (GCMs) and regional climate models (RCMs) suffer from biases compared to local observations, and often provide outputs at too coarse a time resolution to drive impact models. RCM simulations must therefore be adjusted before they can be used to drive specific models such as land surface models. In this study, time series of hourly temperature, precipitation, wind speed, humidity, and short- and longwave radiation were generated over the Pyrenees for the period 1950-2100 using a new approach (named ADAMONT, for ADjustment of RCM outputs to MOuNTain regions) based on quantile mapping applied to daily data, followed by time disaggregation accounting for weather-pattern selection. The meteorological observations used for the quantile mapping consist of the regional-scale reanalysis SAFRAN, which operates at the scale of homogeneous areas on the order of 1000 km², within which meteorological conditions vary only with elevation. SAFRAN combines large-scale NWP reanalyses (ERA-40, ARPEGE) with in-situ meteorological observations.
The SAFRAN reanalysis is available over the entire Pyrenean chain since 1980. Outputs from EURO-CORDEX simulations spanning six different RCMs forced by six different GCMs under three Representative Concentration Pathway scenarios (RCP2.6, RCP4.5 and RCP8.5) over Europe were downscaled at the massif scale and for 300 m elevation bands, and statistically adjusted against the SAFRAN reanalysis. These corrected fields were then used to force the SURFEX/ISBA-Crocus land surface model over the Pyrenees. Here we present, as an example, a reanalysis and future projections (using adjusted EURO-CORDEX data) of meteorological and snow conditions obtained with this method at the site of La Mongie in the French Pyrenees, which we compare to in-situ observations carried out since the 1970s. These results further enable us to identify and apportion the main drivers of changes in snow conditions at the site, and the various uncertainty components at play. This work is a direct contribution to the French GICC ADAMONT project and to the Interreg project "Clim'Py", which aim to develop the Pyrenean Observatory of Climate Change.
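The core of the adjustment step, quantile mapping, can be sketched as below. This is a minimal empirical version under stated assumptions: ADAMONT's actual procedure is more elaborate (daily mapping per weather pattern followed by hourly disaggregation), and the temperature series here are synthetic, not SAFRAN or EURO-CORDEX data.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    # Empirical quantile mapping: locate each new model value's quantile
    # in the historical model distribution, then read off the observed
    # value at that same quantile
    q = np.interp(model_new,
                  np.sort(model_hist),
                  np.linspace(0.0, 1.0, len(model_hist)))
    return np.quantile(obs_hist, q)

# Hypothetical daily temperatures (degC): the model runs ~2 degC cold
rng = np.random.default_rng(2)
obs = rng.normal(5.0, 6.0, 3000)                      # reference (reanalysis-like)
model_hist = obs - 2.0 + rng.normal(0.0, 1.0, 3000)   # biased model, past period
model_fut = model_hist + 3.0                          # same model, warmer climate
corrected = quantile_map(model_hist, obs, model_fut)
```

Mapping the historical model series through the transfer function reproduces the observed distribution, while the future series keeps its simulated warming signal but loses the systematic cold bias.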

  2. AERMOD performance evaluation for three coal-fired electrical generating units in Southwest Indiana.

    PubMed

    Frost, Kali D

    2014-03-01

An evaluation of the steady-state dispersion model AERMOD was conducted to determine its accuracy at predicting hourly ground-level concentrations of sulfur dioxide (SO2) by comparing model-predicted concentrations to a full year of monitored SO2 data. The two study sites comprise three coal-fired electrical generating units (EGUs) located in southwest Indiana. The sites are characterized by tall, buoyant stacks, flat terrain, multiple SO2 monitors, and relatively isolated locations. AERMOD v12060 and AERMOD v12345 with BETA options were evaluated at each study site. For the six monitor-receptor pairs evaluated, AERMOD showed generally good agreement with monitored values for the hourly 99th percentile SO2 design value, with design value ratios that ranged from 0.92 to 1.99. AERMOD was within acceptable performance limits for the Robust Highest Concentration (RHC) statistic (RHC ratios ranged from 0.54 to 1.71) at all six monitors. Analysis of the top 5% of hourly concentrations at the six monitor-receptor sites, paired in time and space, indicated poor model performance in the upper concentration range. The proportion of hourly model predictions within a factor of 2 of observations at these higher concentrations ranged from 14% to 43% across the six sites. Analysis of subsets of the data showed consistent overprediction during low-wind-speed, unstable meteorological conditions, and underprediction during stable, low-wind conditions. Hourly paired comparisons represent a stringent measure of model performance; however, given the potential application of hourly model predictions to the SO2 NAAQS design value, this may be appropriate. At these two sites, the AERMOD v12345 BETA options do not improve model performance.
A regulatory evaluation of AERMOD utilizing quantile-quantile (Q-Q) plots, the RHC statistic, and 99th percentile design value concentrations indicates that model performance is acceptable according to widely accepted regulatory performance limits. However, a scientific evaluation examining hourly paired monitor and model values at concentrations of interest indicates overprediction and underprediction biases that fall outside acceptable model performance measures. Overprediction of 1-hr SO2 concentrations by AERMOD has major ramifications for state and local permitting authorities when establishing emission limits.
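The unpaired Q-Q comparison behind such regulatory evaluations can be sketched as follows. This is an illustrative sketch only: the concentration distributions are hypothetical lognormals, the function name is invented, and the RHC statistic and design-value calculations are omitted.

```python
import numpy as np

def qq_ratios(pred, obs, taus=(0.5, 0.9, 0.99)):
    # Unpaired Q-Q comparison: model/monitor concentration ratio at
    # matching quantiles; values near 1 indicate distributional agreement
    return {t: np.quantile(pred, t) / np.quantile(obs, t) for t in taus}

# Hypothetical hourly SO2 concentrations (ug/m3) for one year (8760 h)
rng = np.random.default_rng(3)
obs = rng.lognormal(mean=2.0, sigma=0.8, size=8760)
pred = rng.lognormal(mean=2.1, sigma=0.9, size=8760)  # overpredicts the tail

ratios = qq_ratios(pred, obs)
```

Because the quantiles are matched by rank rather than by hour, a model can look acceptable in this view while still failing the stricter hour-by-hour paired comparison described in the abstract.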

  3. Vegetation Productivity in Natural vs. Cultivated Systems along Water Availability Gradients in the Dry Subtropics.

    PubMed

    Baldi, Germán; Texeira, Marcos; Murray, Francisco; Jobbágy, Esteban G

    2016-01-01

The dry subtropics are subject to a rapid expansion of crops and pastures over vast areas of natural woodlands and savannas. In this paper, we explored the effect of this transformation on vegetation productivity (its magnitude and its seasonal and long-term variability) along aridity gradients spanning semiarid to subhumid conditions, considering exclusively areas with summer rains (>66%). Vegetation productivity was characterized with the proxy metric "Enhanced Vegetation Index" (EVI) (2000 to 2012 period) at 6,186 natural and cultivated sampling points on five continents, combined with a global climatology database by means of additive models for quantile regressions. Globally and regionally, cultivation amplified the seasonal and inter-annual variability of EVI without affecting its magnitude. Natural and cultivated systems maintained a similar, continuous increase of EVI with increasing water availability, yet achieved it in contrasting ways. In natural systems, the productivity peak and the growing-season length displayed concurrent steady increases with water availability, while in cultivated systems the productivity peak increased from semiarid to dry-subhumid conditions and stabilized thereafter, giving way to an increase in growing-season length towards wetter conditions. Our results help to understand and predict the ecological impacts of deforestation on vegetation productivity, a key ecosystem process linked to a broad range of services.

  4. Distributional Analysis in Educational Evaluation: A Case Study from the New York City Voucher Program

    ERIC Educational Resources Information Center

    Bitler, Marianne; Domina, Thurston; Penner, Emily; Hoynes, Hilary

    2015-01-01

    We use quantile treatment effects estimation to examine the consequences of the random-assignment New York City School Choice Scholarship Program across the distribution of student achievement. Our analyses suggest that the program had negligible and statistically insignificant effects across the skill distribution. In addition to contributing to…

  5. Body Mass Index, Nutrient Intakes, Health Behaviours and Nutrition Knowledge: A Quantile Regression Application in Taiwan

    ERIC Educational Resources Information Center

    Chen, Shih-Neng; Tseng, Jauling

    2010-01-01

    Objective: To assess various marginal effects of nutrient intakes, health behaviours and nutrition knowledge on the entire distribution of body mass index (BMI) across individuals. Design: Quantitative and distributional study. Setting: Taiwan. Methods: This study applies Becker's (1965) model of health production to construct an individual's BMI…

  6. Decision Making in Education: Returns to Schooling, Uncertainty, and Child-Parent Interactions

    ERIC Educational Resources Information Center

    Giustinelli, Pamela

    2010-01-01

    This dissertation is composed of two related parts. Chapter 1 studies identification of a pre-specified alpha-th quantile of a distribution of potential outcomes under weaker and more credible assumptions than those usually maintained in analogous settings of treatment-response, and obtains results of partial identification. On the theoretical…

  7. Heterogenous Effects of Sports Participation on Education and Labor Market Outcomes

    ERIC Educational Resources Information Center

    Gorry, Devon

    2016-01-01

    This paper analyzes the distribution of education and labor market benefits from sports participation. Results show that effects are similar across gender, but differ on other dimensions. In particular, participants in team sports show greater gains than those in individual sports. Quantile regressions show that educational gains are larger for…

  8. Explaining Variation in Instructional Time: An Application of Quantile Regression

    ERIC Educational Resources Information Center

    Corey, Douglas Lyman; Phelps, Geoffrey; Ball, Deborah Loewenberg; Demonte, Jenny; Harrison, Delena

    2012-01-01

    This research is conducted in the context of a large-scale study of three nationally disseminated comprehensive school reform projects (CSRs) and examines how school- and classroom-level factors contribute to variation in instructional time in English language arts and mathematics. When using mean-based OLS regression techniques such as…

  9. Improving flash flood frequency analyses by using non-systematic dendrogeomorphic data

    NASA Astrophysics Data System (ADS)

    Mediero, Luis; María Bodoque, Jose; Garrote, Julio; Ballesteros-Cánovas, Juan Antonio; Aroca-Jimenez, Estefania

    2017-04-01

Flash floods have a rapid hydrological response in catchments with short lag times, characterized by "peaky" hydrographs. Peak flows are reached within a few hours, giving little or no advance warning to prevent and mitigate flood damage. As a result, flash floods may pose a high social risk, as shown for instance by the 1997 Biescas disaster in Spain. The analysis and management of flood risk are clearly conditioned by data availability, especially in mountain areas, where flash floods usually occur. In mountain basins, however, the available data series are often short and therefore lack statistical significance. In addition, even where flow data are available, annual maximum values are generally less reliable than average flow values, since conventional stream gauge stations may fail to record the most extreme floods, leaving gaps in the time series. Dendrogeomorphology has been shown to be especially useful for improving flood frequency analyses in catchments where short flood series limit the use of conventional hydrological methods. This study presents the pros and cons of using a given probability distribution function, such as the Generalized Extreme Value (GEV) distribution, together with Bayesian Markov chain Monte Carlo (MCMC) methods to account for non-systematic data provided by dendrogeomorphic techniques, in order to assess the accuracy of flood quantile estimates. To this end, we considered a set of locations in central Spain where systematic flow records available at gauging sites can be extended with non-systematic data obtained through dendrogeomorphic techniques.
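The flood-frequency step can be illustrated with a short synthetic record. This sketch fits a GEV distribution by maximum likelihood and estimates a 100-year quantile; it is an assumption-laden stand-in, using neither the study's Spanish data nor its Bayesian MCMC machinery for incorporating non-systematic dendrogeomorphic evidence.

```python
import numpy as np
from scipy import stats

# Hypothetical short systematic record of annual peak flows (m3/s)
rng = np.random.default_rng(4)
peaks = stats.genextreme.rvs(c=-0.1, loc=100.0, scale=30.0, size=40,
                             random_state=rng)

# Fit a GEV distribution by maximum likelihood and estimate the
# 100-year flood quantile (non-exceedance probability 0.99)
c, loc, scale = stats.genextreme.fit(peaks)
q100 = stats.genextreme.ppf(0.99, c, loc=loc, scale=scale)
```

With only 40 years of data the shape parameter, and hence the upper quantiles, is poorly constrained; this is exactly the sampling problem that extending the record with non-systematic flood evidence is meant to mitigate.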

  10. Height-income association in developing countries: Evidence from 14 countries.

    PubMed

    Patel, Pankaj C; Devaraj, Srikant

    2017-12-28

The purpose of this study was to assess whether the height-income association is positive in developing countries, and whether income differences between shorter and taller individuals in developing countries are explained by differences in endowment (i.e., taller individuals have a higher income than shorter individuals because of characteristics such as better social skills) or by discrimination (i.e., shorter individuals have a lower income despite having comparable characteristics). Instrumental variable regression, Oaxaca-Blinder decomposition, quantile regression, and quantile decomposition analyses were applied to a sample of 45,108 respondents from 14 developing countries represented in the Research on Early Life and Aging Trends and Effects (RELATE) study. For a one-centimeter increase in country- and sex-adjusted median height, real income adjusted for purchasing power parity increased by 1.37%. The income differential between shorter and taller individuals was explained by discrimination and not by differences in endowments; however, the effect of discrimination decreased at higher values of country- and sex-adjusted height. Taller individuals in developing countries may realize higher income despite having characteristics similar to those of shorter individuals. © 2017 Wiley Periodicals, Inc.

  11. Public health impacts of ecosystem change in the Brazilian Amazon

    PubMed Central

    Bauch, Simone C.; Birkenbach, Anna M.; Pattanayak, Subhrendu K.; Sills, Erin O.

    2015-01-01

    The claim that nature delivers health benefits rests on a thin empirical evidence base. Even less evidence exists on how specific conservation policies affect multiple health outcomes. We address these gaps in knowledge by combining municipal-level panel data on diseases, public health services, climatic factors, demographics, conservation policies, and other drivers of land-use change in the Brazilian Amazon. To fully exploit this dataset, we estimate random-effects and quantile regression models of disease incidence. We find that malaria, acute respiratory infection (ARI), and diarrhea incidence are significantly and negatively correlated with the area under strict environmental protection. Results vary by disease for other types of protected areas (PAs), roads, and mining. The relationships between diseases and land-use change drivers also vary by quantile of the disease distribution. Conservation scenarios based on estimated regression results suggest that malaria, ARI, and diarrhea incidence would be reduced by expanding strict PAs, and malaria could be further reduced by restricting roads and mining. Although these relationships are complex, we conclude that interventions to preserve natural capital can deliver cobenefits by also increasing human (health) capital. PMID:26082548

  12. Decreased femoral arterial flow during simulated microgravity in the rat

    NASA Technical Reports Server (NTRS)

    Roer, Robert D.; Dillaman, Richard M.

    1994-01-01

To determine whether the blood supply to the hindlimbs of rats is altered by the tail-suspension model of weightlessness, rats were chronically instrumented for the measurement of femoral artery flow. Ultrasonic transit-time flow probes were implanted into 8-wk-old Wistar-Furth rats under ketamine-xylazine anesthesia, and, after 24 h of recovery, flow was measured in the normal ambulatory posture. Next, rats were suspended, and flow was measured immediately and then daily over the next 4-7 days. Rats were subsequently returned to normal posture, and flow was monitored daily for 1-3 days. Mean arterial flow decreased immediately upon suspension and continued to decrease until a new steady state of approximately 60% of control values was attained at 5 days. Upon return to normal posture, flow increased to levels observed before suspension. Quantile-quantile plots of blood flow data revealed a decrease in flow during both systole and diastole. The observed decrease in hindlimb blood flow during suspension suggests a possible role in the etiology of muscular atrophy and bone loss in microgravity.

  13. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies.

    PubMed

    Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong

    2016-12-01

Coronary artery disease has become one of the most dangerous diseases threatening human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods struggle with complex vascular texture owing to the projective nature of conventional coronary angiography. Given the large amount of data and complex vascular shapes, manual annotation has become increasingly unrealistic, so a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method for automatic coronary artery segmentation of angiography images based on reliable boundaries obtained via multi-domain remapping and robust discrepancy correction via distance balance and quantile regression. The proposed method not only segments overlapping vascular structures robustly but also achieves good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels in comparison with existing methods. The overall segmentation performance measures si, fnvf, fvpf and tpvf were 95.135%, 3.733%, 6.113% and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Quantile regression analysis of body mass and wages.

    PubMed

    Johar, Meliyanni; Katayama, Hajime

    2012-05-01

Using the National Longitudinal Survey of Youth 1979, we explore the relationship between body mass and wages. We use quantile regression to provide a broad description of the relationship across the wage distribution. We also allow the relationship to vary by the degree of social skills involved in different jobs. We find that for female workers, body mass and wages are negatively correlated at all points in the wage distribution, and the strength of the relationship is greater at higher wage levels. For male workers, the relationship is relatively constant across the wage distribution but heterogeneous across ethnic groups. When controlling for the endogeneity of body mass, we find that additional body mass has a negative causal impact on the wages of white females earning more than the median wage and of white males around the median wage. Among these workers, the wage penalties are larger for those employed in jobs that require extensive social skills. These findings suggest that labor markets may reward white workers for good physical shape differently, depending on the level of wages and the type of job a worker has. Copyright © 2011 John Wiley & Sons, Ltd.

  15. X-Ray Processing of ChaMPlane Fields: Methods and Initial Results for Selected Anti-Galactic Center Fields

    NASA Astrophysics Data System (ADS)

    Hong, JaeSub; van den Berg, Maureen; Schlegel, Eric M.; Grindlay, Jonathan E.; Koenig, Xavier; Laycock, Silas; Zhao, Ping

    2005-12-01

    We describe the X-ray analysis procedure of the ongoing Chandra Multiwavelength Plane (ChaMPlane) Survey and report the initial results from the analysis of 15 selected anti-Galactic center observations (90deg

  16. The nonlinear relations of the approximate number system and mathematical language to early mathematics development.

    PubMed

    Purpura, David J; Logan, Jessica A R

    2015-12-01

Both mathematical language and the approximate number system (ANS) have been identified as strong predictors of early mathematics performance. Yet these relations may differ depending on a child's developmental level. The purpose of this study was to evaluate the relations between these domains across different levels of ability. Participants included 114 children who were assessed in the fall and spring of preschool on a battery of academic and cognitive tasks. Children were 3.12 to 5.26 years old (M = 4.18, SD = 0.58) and 53.6% were girls. Both mixed-effect and quantile regressions were conducted. The mixed-effect regressions indicated that mathematical language, but neither the ANS nor other cognitive domains, predicted mathematics performance. However, the quantile regression analyses revealed a more nuanced relation among the domains: mathematical language and the ANS predicted mathematical performance at different points on the ability continuum. These dual nonlinear relations indicate that different mechanisms may enhance mathematical acquisition depending on children's developmental abilities. (c) 2015 APA, all rights reserved.

  17. Vegetation role in controlling the ecoenvironmental conditions for sustainable urban environments: a comparison of Beijing and Islamabad

    NASA Astrophysics Data System (ADS)

    Naeem, Shahid; Cao, Chunxiang; Waqar, Mirza Muhammad; Wei, Chen; Acharya, Bipin Kumar

    2018-01-01

    The rapid increase in urbanization due to population growth leads to the degradation of vegetation in major cities. This study investigated the spatial patterns of the ecoenvironmental conditions of inhabitants of two distinct Asian capital cities, Beijing of China and Islamabad of Pakistan, by utilizing Earth observation data products. The significance of urban vegetation for the cooling effect was studied in local climate zones, i.e., urban, suburban, and rural areas within 1-km2 quantiles. Landsat-8 (OLI) and Gaofen-1 satellite imagery were used to assess vegetation cover and land surface temperature, while population datasets were used to evaluate environmental impact. Comparatively, a higher cooling effect of vegetation presence was observed in rural and suburban zones of Beijing as compared to Islamabad, while the urban zone of Islamabad was found comparatively cooler than Beijing's urban zone. The urban thermal field variance index calculated from satellite imagery was ranked into the ecological evaluation index. The worst ecoenvironmental conditions were found in urban zones of both cities where the fraction of vegetation is very low. Meanwhile, this condition is more serious in Beijing, as more than 90% of the total population is living under the worst ecoenvironment conditions, while only 7% of the population is enjoying comfortable conditions. Ecoenvironmental conditions of Islamabad are comparatively better than Beijing where ˜61% of the total population live under the worst ecoenvironmental conditions, and ˜24% are living under good conditions. Thus, Islamabad at this early growth stage can learn from Beijing's ecoenvironmental conditions to improve the quality of living by controlling the associated factors in the future.

  18. Some Probabilistic and Statistical Properties of the Seismic Regime of Zemmouri (Algeria) Seismoactive Zone

    NASA Astrophysics Data System (ADS)

    Baddari, Kamel; Bellalem, Fouzi; Baddari, Ibtihel; Makdeche, Said

    2016-10-01

Statistical tests have been used to adjust the Zemmouri seismic data using a distribution function. The Pareto law was used, and the probabilities of various expected earthquakes were computed. A mathematical expression giving the quantiles was established. The limiting law of extreme values confirmed the accuracy of the adjustment method. Using the moment magnitude scale, a probabilistic model was built to predict the occurrence of strong earthquakes. The seismic structure has been characterized by the slope of the recurrence plot γ, the fractal dimension D, the concentration parameter K_sr, and the Hurst exponents H_r and H_t. The values of D, γ, K_sr, H_r, and H_t diminished many months before the principal seismic shock (M = 6.9) of the studied seismoactive zone occurred. Three stages of deformation of the geophysical medium are manifested in the variation of the coefficient G% of the clustering of minor seismic events.

  19. A data centred method to estimate and map changes in the full distribution of daily surface temperature

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nicholas

    2016-04-01

    Characterizing how our climate is changing includes extracting local information which can inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles or thresholds in distributions of variables such as daily surface temperature. Here we focus on these local changes and on a model-independent method to transform daily observations into patterns of local climate change. Our method [1] is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of the distributions are changing. This involves determining not only which quantiles and geographical locations show the greatest change, but also those at which any change is highly uncertain. For temperature, changes in the distribution itself can yield robust results [2]. We demonstrate how the fundamental timescales of anthropogenic climate change limit the identification of societally relevant aspects of changes. We show that it is nevertheless possible to extract, solely from observations, some confident quantified assessments of change at certain thresholds and locations [3]. We demonstrate this approach using E-OBS gridded data [4]: timeseries of local daily surface temperature from specific locations across Europe over the last 60 years. [1] Chapman, S. C., D. A. Stainforth, N. W. Watkins, On estimating long term local climate trends, Phil. Trans. Royal Soc. A, 371, 20120287 (2013). [2] Stainforth, D. A., S. C. Chapman, N. W. Watkins, Mapping climate change in European temperature distributions, ERL, 8, 034031 (2013). [3] Chapman, S. C., Stainforth, D. A., Watkins, N. W., Limits to the quantification of local climate change, ERL, 10, 094018 (2015). [4] Haylock, M. R., et al., A European daily high-resolution gridded dataset of surface temperature and precipitation, J. Geophys. Res. (Atmospheres), 113, D20119 (2008).
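    The quantile-by-quantile view the abstract describes can be illustrated with synthetic data: instead of comparing means, one compares the empirical quantiles of two observation periods directly (all numbers below are fabricated for illustration, not E-OBS values):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Two synthetic "observation periods" of daily temperature (deg C).
    # Period 2 warms slightly overall and much more in its upper tail.
    period1 = rng.normal(10.0, 4.0, size=5000)
    period2 = np.concatenate([rng.normal(10.5, 4.0, size=4000),
                              rng.normal(16.0, 4.0, size=1000)])

    probs = np.linspace(0.05, 0.95, 19)
    q1 = np.quantile(period1, probs)
    q2 = np.quantile(period2, probs)
    shift = q2 - q1   # estimated change at each quantile, not just in the mean
    ```

    Plotting `shift` against `probs` shows at a glance which quantiles changed most; in this synthetic example the upper quantiles shift far more than the lower ones, a pattern a change in the mean alone would hide.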

  20. Profound Effect of Profiling Platform and Normalization Strategy on Detection of Differentially Expressed MicroRNAs – A Comparative Study

    PubMed Central

    Meyer, Swanhild U.; Kaiser, Sebastian; Wagner, Carola; Thirion, Christian; Pfaffl, Michael W.

    2012-01-01

    Background Adequate normalization minimizes the effects of systematic technical variation and is a prerequisite for detecting meaningful biological changes. However, reports on the performance of miRNA normalization methods, and the resulting recommendations, are inconsistent. Thus, we investigated the impact of seven different normalization methods (reference gene index, global geometric mean, quantile, invariant selection, loess, loessM, and generalized procrustes analysis) on the intra- and inter-platform performance of two distinct and commonly used miRNA profiling platforms. Methodology/Principal Findings We included data from miRNA profiling analyses derived from a hybridization-based platform (Agilent Technologies) and an RT-qPCR platform (Applied Biosystems). Furthermore, we validated a subset of miRNAs by individual RT-qPCR assays. Our analyses incorporated data on the effect of differentiation and tumor necrosis factor alpha treatment on primary human skeletal muscle cells and a murine skeletal muscle cell line. The normalization methods differed in their impact on (i) standard deviations, (ii) the area under the receiver operating characteristic (ROC) curve, and (iii) the similarity of differential expression. Loess, loessM, and quantile normalization were most effective in minimizing standard deviations on the Agilent and TLDA platforms. Moreover, loess, loessM, invariant selection, and generalized procrustes analysis increased the area under the ROC curve, a measure of the statistical performance of a test. The Jaccard index revealed that inter-platform concordance of differential expression tended to be increased by loess, loessM, quantile, and GPA normalization of AGL and TLDA data, as well as by RGI normalization of TLDA data. Conclusions/Significance We recommend the application of loess, loessM, and GPA normalization for miRNA Agilent arrays and qPCR cards, as these normalization approaches were shown to (i) effectively reduce standard deviations, (ii) increase the sensitivity and accuracy of differential miRNA expression detection, and (iii) increase inter-platform concordance. The results also demonstrate the successful adaptation of loessM and generalized procrustes analysis to one-color miRNA profiling experiments. PMID:22723911
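    Quantile normalization, one of the seven methods compared, forces every sample onto a common reference distribution. A minimal numpy sketch on a toy 4-feature by 3-sample matrix (invented numbers, not the study's data; ties are broken arbitrarily in this simple version):

    ```python
    import numpy as np

    def quantile_normalize(x):
        """Quantile-normalize the columns of a (features x samples) matrix.

        Every sample (column) is mapped onto a common reference distribution:
        the mean of the columns after each has been sorted.
        """
        x = np.asarray(x, dtype=float)
        ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # per-column rank of each value
        reference = np.sort(x, axis=0).mean(axis=1)         # mean sorted profile across samples
        return reference[ranks]

    # Toy matrix: 4 miRNAs (rows) measured on 3 arrays (columns).
    data = np.array([[5.0, 4.0, 3.0],
                     [2.0, 1.0, 4.0],
                     [3.0, 4.0, 6.0],
                     [4.0, 2.0, 8.0]])
    normalized = quantile_normalize(data)
    # After normalization each column contains exactly the same set of values,
    # so between-array (technical) differences in distribution are removed
    # while the within-array ordering of features is preserved.
    ```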

  1. Bayesian Non-Stationary Flood Frequency Estimation at Ungauged Basins Using Climate Information and a Scaling Model

    NASA Astrophysics Data System (ADS)

    Lima, C. H.; Lall, U.

    2010-12-01

    Flood frequency statistical analysis most often relies on stationarity assumptions, under which distribution moments (e.g. mean, standard deviation) and the associated flood quantiles do not change over time. In this sense, one expects that flood magnitudes and their frequency of occurrence will remain as observed in the historical record. However, evidence of inter-annual and decadal climate variability and anthropogenic change, as well as an apparent increase in the number and magnitude of flood events across the globe, have made the stationarity assumption questionable. Here, we show how to estimate flood quantiles (e.g. the 100-year flood) at ungauged basins without assuming stationarity. A statistical model based on the well-known flow-area scaling law is proposed to estimate flood flows at ungauged basins. The slope and intercept coefficients of the scaling law are assumed to be time varying, and a hierarchical Bayesian model is used to include climate information and reduce parameter uncertainties. Cross-validated results from 34 streamflow gauges located in a nested basin in Brazil show that the proposed model is able to estimate flood quantiles at ungauged basins with remarkable skill compared with data-based estimates using the full record. The model as developed in this work is also able to simulate sequences of flood flows under global climate change, provided that an appropriate climate index derived from a General Circulation Model is used as a predictor. The time-varying flood frequency estimates can be used to price insurance, to prepare for flooding in a forecast mode, and to time and locate infrastructure investments. [Figure: non-stationary 95% interval estimate for the 100-year flood (shaded gray region) and 95% interval for the 100-year flood estimated from data (horizontal dashed and solid lines); the average distribution of the 100-year flood is shown in green on the right.]

  2. Delivery of Essential Medicines to Primary Care Institutions and its Association with Procurement Volume and Price: A Case Study in Hubei Province, China.

    PubMed

    Tang, Yuqing; Liu, Chaojie; Zhang, Xinping

    2017-02-01

    The low availability of essential medicines is a worldwide issue of concern. In 2009, China introduced a National Essential Medicines List (NEML), with NEML medicines being purchased in bulk at contracted prices established by tenders conducted at the provincial level. The availability of essential medicines in the public sector largely relies on commercial supply chains. The objectives of this paper were to analyze the delivery performance of essential medicines under the NEML provincial procurement arrangements, and to determine whether the procurement volume and price of medicines are associated with the delivery performance of suppliers. We reviewed 9390 recorded orders of 1099 essential medicines in Hubei province from August 2011 to April 2012. The reliability of medicine delivery in full and on time (DIFOT) was taken as the performance indicator; Spearman correlation analyses were used to explore associations between DIFOT and procurement price and volume, and quantile regressions were performed to quantify those associations. DIFOT was positively correlated with procurement price and volume. The Spearman rank correlation coefficients between price and DIFOT were 0.114, 0.34 and 0.25 for medicines in the low, middle and high thirds of procurement volume, respectively. The quantile regression analysis revealed a positive association between price and DIFOT across all quantiles of DIFOT; although significant positive associations between volume and DIFOT were found only at the 25th percentile of DIFOT, volume showed significant interactions with price at both the 25th and 50th percentiles of DIFOT. A higher procurement price is associated with better delivery performance of essential medicines; however, it is important to link procurement price with procurement volume. Increasing procurement volume may alleviate the negative effect of low price on delivery performance. Variation in the volumes of repeated orders imposes uncertainty and may jeopardize the delivery of essential medicines.
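    Quantile regression, as used here for DIFOT, minimizes the "pinball" (check) loss rather than squared error. A minimal numpy-only sketch with synthetic data (not the Hubei order records) verifies that the pinball-loss minimizer recovers the empirical quantile:

    ```python
    import numpy as np

    def pinball_loss(y, q, tau):
        """Check (pinball) loss whose minimizer is the tau-th quantile of y."""
        u = y - q
        return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

    rng = np.random.default_rng(1)
    y = rng.exponential(scale=2.0, size=2001)   # stand-in for a skewed outcome

    tau = 0.75
    candidates = np.linspace(y.min(), y.max(), 4001)
    losses = np.array([pinball_loss(y, q, tau) for q in candidates])
    best = candidates[int(np.argmin(losses))]
    # `best` coincides (up to the grid spacing) with the empirical 0.75 quantile;
    # quantile regression performs the same minimization with the constant q
    # replaced by a linear predictor x @ beta.
    ```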

  3. A downscaling method for the assessment of local climate change

    NASA Astrophysics Data System (ADS)

    Bruno, E.; Portoghese, I.; Vurro, M.

    2009-04-01

    The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their spatial resolution (hundreds of kilometres) is too coarse to describe the variability of extreme events at the basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between climate scenarios and the usual scale of the inputs to hydrological prediction models is a fundamental requisite for evaluating climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit climate observations. Identifying local climate scenarios for impact analysis implies defining more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods, we consider the statistical approach of Déqué (2007), reported as a 'variable correction method', in which model outputs are corrected by a function built from the observation dataset that operates a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed based on a stochastic description of the arrival-duration-intensity processes, consistent with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. The corrected PRP parameters are then used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistence of daily observations for the reference period. The PRP parameters are then forced with the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are taken from the STARDEX collection of extreme indices.
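    The Q-Q transform at the core of the 'variable correction method' can be sketched as empirical quantile mapping: each model value is replaced by the observed value at the same quantile of the reference-period distributions. A toy example with a purely synthetic 2-degree bias (a generic quantile-mapping sketch, not Déqué's exact procedure):

    ```python
    import numpy as np

    def quantile_map(model_values, model_ref, obs_ref, n_quantiles=100):
        """Empirical Q-Q (quantile-quantile) correction.

        Each model value is mapped to the observed value found at the same
        quantile of the reference-period distributions.
        """
        probs = np.linspace(0.0, 1.0, n_quantiles)
        model_q = np.quantile(model_ref, probs)   # model reference quantiles
        obs_q = np.quantile(obs_ref, probs)       # observed reference quantiles
        # Piecewise-linear map through the points (model_q, obs_q); values
        # outside the reference range are clipped to the endpoints.
        return np.interp(model_values, model_q, obs_q)

    rng = np.random.default_rng(2)
    obs_ref = rng.normal(15.0, 5.0, size=3000)   # synthetic observed reference period
    model_ref = obs_ref - 2.0                    # model output with a 2-degree cold bias
    future = rng.normal(14.0, 5.0, size=1000)    # new model values to be corrected
    corrected = quantile_map(future, model_ref, obs_ref)
    # The correction recovers the +2 shift for values inside the reference range.
    ```

    As the abstract notes, such a marginal-distribution correction leaves the temporal structure (dry-wet sequencing) untouched, which is why the PRP-based alternative is proposed.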

  4. Modelling average maximum daily temperature using r largest order statistics: An application to South African data

    PubMed Central

    2018-01-01

    Natural hazards (events that may cause actual disasters) are established in the literature as major causes of massive and destructive problems worldwide. The occurrence of earthquakes, floods and heat waves affects millions of people through impacts including hospitalisation, loss of life and economic hardship. The focus of this study is on reducing the risk of disasters that occur because of extremely high temperatures and heat waves. Modelling average maximum daily temperature (AMDT) guards against disaster risk and may also help countries prepare for extreme heat. This study discusses the use of the r largest order statistics approach of extreme value theory for modelling AMDT over a period of 11 years, 2000–2010. A generalised extreme value distribution for the r largest order statistics is fitted to the annual maxima in order to study the behaviour of the r largest order statistics. The method of maximum likelihood is used to estimate the target parameters, and the frequency of occurrence of the hottest days is assessed. The study presents a case study of South Africa in which data for the non-winter season (September–April of each year) are used. The meteorological data used are the AMDT collected by the South African Weather Service and provided by Eskom. The estimate of the shape parameter reveals evidence of a Weibull class as an appropriate distribution for modelling AMDT in South Africa. The extreme quantiles for specified return periods are estimated using the quantile function, and the best model is chosen using the deviance statistic with the support of graphical diagnostic tools. The Entropy Difference Test (EDT) is used as a specification test for diagnosing the fit of the models to the data.
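    Return-period quantiles follow directly from the GEV quantile function once the parameters are estimated. A sketch using Coles' sign convention, with made-up parameters (not the fitted South African values); a negative shape gives the bounded Weibull class reported in the abstract:

    ```python
    import numpy as np

    def gev_return_level(mu, sigma, xi, return_period):
        """T-year return level of a GEV(mu, sigma, xi), Coles' sign convention.

        xi < 0 is the bounded Weibull class; xi -> 0 recovers the Gumbel limit.
        """
        p = 1.0 / return_period          # annual exceedance probability
        y = -np.log(1.0 - p)
        if abs(xi) < 1e-9:               # Gumbel limit of the quantile function
            return mu - sigma * np.log(y)
        return mu - (sigma / xi) * (1.0 - y ** (-xi))

    # Illustrative parameters only (not estimated from the paper's data):
    mu, sigma, xi = 32.0, 1.5, -0.2
    z20 = gev_return_level(mu, sigma, xi, 20)     # 20-year AMDT return level
    z100 = gev_return_level(mu, sigma, xi, 100)   # 100-year AMDT return level
    upper_bound = mu - sigma / xi                 # finite upper end point for xi < 0
    ```

    With xi < 0 the return levels increase with return period but remain below the finite upper bound, which is exactly the bounded-tail behaviour the Weibull class implies for maximum temperatures.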

  5. Quantifying variability in fast and slow solar wind: From turbulence to extremes

    NASA Astrophysics Data System (ADS)

    Tindale, E.; Chapman, S. C.; Moloney, N.; Watkins, N. W.

    2017-12-01

    Fast and slow solar wind exhibit variability across a wide range of spatiotemporal scales, with evolving turbulence producing fluctuations on sub-hour timescales and the irregular solar cycle modulating the system over many years. Here, we apply the data quantile-quantile (DQQ) method [Tindale and Chapman 2016, 2017] to over 20 years of Wind data to study the time evolution of the statistical distribution of plasma parameters in fast and slow solar wind. This model-independent method allows us to simultaneously explore the evolution of fluctuations across all scales. We find a two-part functional form for the statistical distributions of the interplanetary magnetic field (IMF) magnitude and its components, with each region of the distribution evolving separately over the solar cycle. Up to a value of 8 nT, turbulent fluctuations dominate the distribution of the IMF, generating the approximately lognormal shape found by Burlaga [2001]. The mean of this core-turbulence region tracks solar cycle activity, while its variance remains constant, independent of the fast or slow state of the solar wind. However, when we test the lognormality of this core-turbulence component over time, we find the model provides a poor description of the data at solar maximum, where sharp peaks in the distribution dominate over the lognormal shape. At IMF values above 8 nT, we find a separate, extremal distribution component whose moments are sensitive to the solar cycle phase, the peak activity of the cycle, and the solar wind state. We further investigate these 'extremal' values using burst analysis, where a burst is defined as a continuous period of exceedance over a predefined threshold. This form of extreme value statistics allows us to study the stochastic process underlying the time series, potentially supporting a probabilistic forecast of high-energy events. Tindale, E., and S. C. Chapman (2016), Geophys. Res. Lett., 43(11). Tindale, E., and S. C. Chapman (2017), submitted. Burlaga, L. F. (2001), J. Geophys. Res., 106(A8).
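    Burst analysis as defined here, a burst being a maximal run of consecutive samples above a threshold, can be implemented in a few lines; the 8 nT threshold and the toy IMF series below are illustrative only:

    ```python
    import numpy as np

    def bursts(series, threshold):
        """Return (start_index, length) for every maximal run of consecutive
        samples strictly above `threshold`."""
        above = np.asarray(series) > threshold
        padded = np.concatenate(([False], above, [False])).astype(int)
        edges = np.diff(padded)
        starts = np.where(edges == 1)[0]    # False -> True transitions
        ends = np.where(edges == -1)[0]     # True -> False transitions
        return list(zip(starts, ends - starts))

    # Toy |B| series (nT); 8 nT is the core/extremal break quoted in the abstract.
    imf = np.array([3.0, 9.0, 10.0, 4.0, 8.5, 12.0, 9.1, 2.0, 7.0])
    result = bursts(imf, threshold=8.0)
    ```

    The resulting burst durations (and, with a little more bookkeeping, burst sizes, i.e. integrated exceedance) are the quantities whose statistics feed a probabilistic forecast of high-energy events.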

  6. Classification of Satellite Derived Chlorophyll a Space-Time Series by Means of Quantile Regression: An Application to the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Girardi, P.; Pastres, R.; Gaetan, C.; Mangin, A.; Taji, M. A.

    2015-12-01

    In this paper, we present the results of a classification of Adriatic waters based on spatial time series of remotely sensed Chlorophyll type-a. The study was carried out using a clustering procedure combining quantile smoothing with an agglomerative clustering algorithm. The smoothing function includes a seasonal term, thus allowing one to classify areas according to "similar" seasonal evolution as well as according to "similar" trends. This methodology, which is here applied for the first time to Ocean Colour data, is more robust than other classical methods, as it does not require any assumption on the probability distribution of the data. The approach was applied to the classification of an eleven-year time series, from January 2002 to December 2012, of monthly values of Chlorophyll type-a concentrations covering the whole Adriatic Sea. The data set was made available by ACRI (http://hermes.acri.fr) in the framework of the Glob-Colour Project (http://www.globcolour.info). Data were obtained by calibrating Ocean Colour data provided by different satellite missions, such as MERIS, SeaWiFS and MODIS. The results clearly show the presence of North-South and West-East gradients in the level of Chlorophyll, which is consistent with literature findings. This analysis could provide a sound basis for the identification of "water bodies" and of Chlorophyll type-a thresholds defining their Good Ecological Status, in terms of trophic level, as required by the implementation of the Marine Strategy Framework Directive. The forthcoming availability of Sentinel-3 OLCI data, in continuity with the previous missions and with the prospect of a monitoring system spanning more than 15 years, offers a real opportunity to expand our study in strong support of the implementation of both the EU Marine Strategy Framework Directive and the UNEP-MAP Ecosystem Approach in the Mediterranean.

  7. Change in the Body Mass Index Distribution for Women: Analysis of Surveys from 37 Low- and Middle-Income Countries

    PubMed Central

    Razak, Fahad; Corsi, Daniel J.; Subramanian, S. V.

    2013-01-01

    Background There are well-documented global increases in mean body mass index (BMI) and prevalence of overweight (BMI≥25.0 kg/m2) and obese (BMI≥30.0 kg/m2). Previous analyses, however, have failed to report whether this weight gain is shared equally across the population. We examined the change in BMI across all segments of the BMI distribution in a wide range of countries, and assessed whether the BMI distribution is changing between cross-sectional surveys conducted at different time points. Methods and Findings We used nationally representative surveys of women between 1991–2008, in 37 low- and middle-income countries from the Demographic Health Surveys ([DHS] n = 732,784). There were a total of 96 country-survey cycles, and the number of survey cycles per country varied between two (21/37) and five (1/37). Using multilevel regression models, between countries and within countries over survey cycles, the change in mean BMI was used to predict the standard deviation of BMI, the prevalence of underweight, overweight, and obese. Changes in median BMI were used to predict the 5th and 95th percentile of the BMI distribution. Quantile-quantile plots were used to examine the change in the BMI distribution between surveys conducted at different times within countries. At the population level, increasing mean BMI is related to increasing standard deviation of BMI, with the BMI at the 95th percentile rising at approximately 2.5 times the rate of the 5th percentile. Similarly, there is an approximately 60% excess increase in prevalence of overweight and 40% excess in obese, relative to the decline in prevalence of underweight. Quantile-quantile plots demonstrate a consistent pattern of unequal weight gain across percentiles of the BMI distribution as mean BMI increases, with increased weight gain at high percentiles of the BMI distribution and little change at low percentiles. 
Major limitations of these results are that repeated cross-sectional surveys cannot examine weight gain within an individual over time; that most of the countries had data from only two surveys; and that the study sample contains only women in low- and middle-income countries, potentially limiting the generalizability of the findings. Conclusions Mean changes in BMI, or changes in single parameters such as percent overweight, do not capture the divergence in the degree of weight gain occurring between BMI at low and high percentiles. Population weight gain is occurring disproportionately among groups with already high baseline BMI levels. Studies that characterize population change should examine patterns of change across the entire distribution and not just average trends or single parameters. Please see later in the article for the Editors' Summary. PMID:23335861
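    The quantile-quantile comparison between two survey rounds can be sketched with synthetic BMI data in which weight gain increases with baseline BMI (all distributions fabricated for illustration; not DHS data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # Synthetic BMI samples for two survey rounds of the same population.
    survey_a = rng.gamma(shape=25.0, scale=0.9, size=8000)            # earlier round
    # Later round: everyone gains 0.5 units, plus extra gain above BMI 20.
    survey_b = survey_a + 0.5 + 0.08 * np.clip(survey_a - 20.0, 0.0, None)

    probs = np.linspace(0.05, 0.95, 19)
    qa = np.quantile(survey_a, probs)
    qb = np.quantile(survey_b, probs)
    gain = qb - qa   # vertical distance of each Q-Q point from the identity line
    # In a Q-Q plot of (qa, qb), points at high BMI sit further above the
    # identity line than points at low BMI: unequal gain across quantiles,
    # which a comparison of means alone would not reveal.
    ```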

  8. Overview of progesterone profiles in dairy cows.

    PubMed

    Blavy, P; Derks, M; Martin, O; Höglund, J K; Friggens, N C

    2016-09-01

    The aim of this study was to gain a better understanding of the variability in the shape and features of progesterone profiles during estrus cycles in cows, and to create templates for cycle shapes and features as a base for further research. Milk progesterone data from 1418 estrus cycles, coming from 1009 lactations, were obtained from the Danish Cattle Research Centre in Foulum, Denmark. Milk samples were analyzed daily using a Ridgeway ELISA kit. Estrus cycles with fewer than 10 data points or shorter than 4 days were discarded, after which 1006 cycles remained in the analysis. A median kernel of three data points was used to smooth the progesterone time series. The time between the start of the progesterone rise and the end of the progesterone decline was identified by fitting a simple model consisting of a base length and a quadratic curve to the progesterone data, and this luteal-like phase (LLP) was used for further analysis. The data set of 1006 LLPs was divided into five quantiles based on length. Within quantiles, a cluster analysis was performed on the basis of shape distance. Height, upward and downward slope, and progesterone level on Day 5 were compared between quantiles. The ratio of typical versus atypical shapes was also described, using a reference curve based on the data in Q1-Q4. The main results were that (1) most of the progesterone profiles showed a typical shape, including those that exceeded the optimum cycle length of 24 days; and (2) cycles in Q2 and Q3 had steeper slopes and higher peak progesterone levels than cycles in Q1 and Q4 but, when normalized, had a similar shape. The results were used to define differences between quantiles that can be used as templates. Compared to Q1, LLPs in Q2 had a shape that is 1.068 times steeper and 1.048 times higher. LLPs in Q3 were 1.053 times steeper and 1.018 times higher. LLPs in Q4 were 0.977 times steeper and 0.973 times higher than LLPs in Q1. This article adds to our knowledge of the variability of progesterone profiles and their shape differences. The profile clustering procedure described in this article can be used to classify progesterone profiles without recourse to an a priori set of rules that arbitrarily segment the natural variability in these profiles. Using data-derived profile shapes may allow a more accurate assessment of the effects of, e.g., nutritional management or breeding system on progesterone profiles. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Quantum Bio-Informatics IV

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi; Freudenberg, Wolfgang; Ohya, Masanori

    2011-01-01

    The QP-DYN algorithms / L. Accardi, M. Regoli and M. Ohya -- Study of transcriptional regulatory network based on Cis module database / S. Akasaka ... [et al.] -- On Lie group-Lie algebra correspondences of unitary groups in finite von Neumann algebras / H. Ando, I. Ojima and Y. Matsuzawa -- On a general form of time operators of a Hamiltonian with purely discrete spectrum / A. Arai -- Quantum uncertainty and decision-making in game theory / M. Asano ... [et al.] -- New types of quantum entropies and additive information capacities / V. P. Belavkin -- Non-Markovian dynamics of quantum systems / D. Chruscinski and A. Kossakowski -- Self-collapses of quantum systems and brain activities / K.-H. Fichtner ... [et al.] -- Statistical analysis of random number generators / L. Accardi and M. Gabler -- Entangled effects of two consecutive pairs in residues and its use in alignment / T. Ham, K. Sato and M. Ohya -- The passage from digital to analogue in white noise analysis and applications / T. Hida -- Remarks on the degree of entanglement / D. Chruscinski ... [et al.] -- A completely discrete particle model derived from a stochastic partial differential equation by point systems / K.-H. Fichtner, K. Inoue and M. Ohya -- On quantum algorithm for exptime problem / S. Iriyama and M. Ohya -- On sufficient algebraic conditions for identification of quantum states / A. Jamiolkowski -- Concurrence and its estimations by entanglement witnesses / J. Jurkowski -- Classical wave model of quantum-like processing in brain / A. Khrennikov -- Entanglement mapping vs. quantum conditional probability operator / D. Chruscinski ... [et al.] -- Constructing multipartite entanglement witnesses / M. Michalski -- On Kadison-Schwarz property of quantum quadratic operators on M[symbol](C) / F. Mukhamedov and A. Abduganiev -- On phase transitions in quantum Markov chains on Cayley Tree / L. Accardi, F. Mukhamedov and M. Saburov -- Space(-time) emergence as symmetry breaking effect / I. 
Ojima -- Use of cryptographic ideas to interpret biological phenomena (and vice versa) / M. Regoli -- Discrete approximation to operators in white noise analysis / Si Si -- Bogoliubov type equations via infinite-dimensional equations for measures / V. V. Kozlov and O. G. Smolyanov -- Analysis of several categorical data using measure of proportional reduction in variation / K. Yamamoto ... [et al.] -- The electron reservoir hypothesis for two-dimensional electron systems / K. Yamada ... [et al.] -- On the correspondence between Newtonian and functional mechanics / E. V. Piskovskiy and I. V. Volovich -- Quantile-quantile plots: An approach for the inter-species comparison of promoter architecture in eukaryotes / K. Feldmeier ... [et al.] -- Entropy type complexities in quantum dynamical processes / N. Watanabe -- A fair sampling test for Ekert protocol / G. Adenier, A. Yu. Khrennikov and N. Watanabe -- Brownian dynamics simulation of macromolecule diffusion in a protocell / T. Ando and J. Skolnick -- Signaling network of environmental sensing and adaptation in plants: Key roles of calcium ion / K. Kuchitsu and T. Kurusu -- NetzCope: A tool for displaying and analyzing complex networks / M. J. Barber, L. Streit and O. Strogan -- Study of HIV-1 evolution by coding theory and entropic chaos degree / K. Sato -- The prediction of botulinum toxin structure based on in silico and in vitro analysis / T. Suzuki and S. Miyazaki -- On the mechanism of D-wave high T[symbol] superconductivity by the interplay of Jahn-Teller physics and Mott physics / H. Ushio, S. Matsuno and H. Kamimura.

  10. Predictor sort sampling and one-sided confidence bounds on quantiles

    Treesearch

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...

  11. Student Growth Percentiles Based on MIRT: Implications of Calibrated Projection. CRESST Report 842

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li; Choi, Kilchan

    2014-01-01

    This research concerns a new proposal for calculating student growth percentiles (SGP; Betebenner, 2009). In Betebenner (2009), quantile regression (QR) is used to estimate the SGPs. However, measurement error in the score estimates, which always exists in practice, leads to bias in the QR-based estimates (Shang, 2012). One way to address this…

  12. Extreme Quantile Estimation in Binary Response Models

    DTIC Science & Technology

    1990-03-01

    in Cancer Research," Biometrika, Vol. 66, pp. 307-316. Hsi, B.P. [1969], "The Multiple Sample Up-and-Down Method in Bioassay," Journal of the American...New Method of Estimation," Biometrika, Vol. 53, pp. 439-454. Wetherill, G.B. [1976], Sequential Methods in Statistics, London: Chapman and Hall. Wu, C.F.J.

  13. Establishing Normative Reference Values for Handgrip among Hungarian Youth

    ERIC Educational Resources Information Center

    Saint-Maurice, Pedro F.; Laurson, Kelly R.; Karsai, István; Kaj, Mónika; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to examine age- and sex-related variation in handgrip strength and to determine reference values for the Hungarian population. Method: A sample of 1,086 Hungary youth (aged 11-18 years old; 654 boys and 432 girls) completed a handgrip strength assessment using a handheld dynamometer. Quantile regression was…

  14. Covariate Measurement Error Correction for Student Growth Percentiles Using the SIMEX Method

    ERIC Educational Resources Information Center

    Shang, Yi; VanIwaarden, Adam; Betebenner, Damian W.

    2015-01-01

    In this study, we examined the impact of covariate measurement error (ME) on the estimation of quantile regression and student growth percentiles (SGPs), and found that SGPs tend to be overestimated among students with higher prior achievement and underestimated among those with lower prior achievement, a problem we describe as ME endogeneity in…

  15. Estimation of Return Values of Wave Height: Consequences of Missing Observations

    ERIC Educational Resources Information Center

    Ryden, Jesper

    2008-01-01

    Extreme-value statistics is often used to estimate so-called return values (actually related to quantiles) for environmental quantities like wind speed or wave height. A basic method for estimation is the method of block maxima which consists in partitioning observations in blocks, where maxima from each block could be considered independent.…

  16. The Effect of Marital Breakup on the Income Distribution of Women with Children

    ERIC Educational Resources Information Center

    Ananat, Elizabeth O.; Michaels, Guy

    2008-01-01

    Having a female first-born child significantly increases the probability that a woman's first marriage breaks up. Using this exogenous variation, recent work finds that divorce has little effect on women's mean household income. We further investigate the effect of divorce using Quantile Treatment Effect methodology and find that it increases…

  17. A hydroclimatic model of global fire patterns

    NASA Astrophysics Data System (ADS)

    Boer, Matthias

    2015-04-01

    Satellite-based earth observation is providing an increasingly accurate picture of global fire patterns. The highest fire activity is observed in seasonally dry (sub-)tropical environments of South America, Africa and Australia, but fires occur with varying frequency, intensity and seasonality in almost all biomes on Earth. The particular combination of these fire characteristics, or fire regime, is known to emerge from the combined influences of climate, vegetation, terrain and land use, but has so far proven difficult to reproduce by global models. Uncertainty about the biophysical drivers and constraints that underlie current global fire patterns is propagated in model predictions of how ecosystems, fire regimes and biogeochemical cycles may respond to projected future climates. Here, I present a hydroclimatic model of global fire patterns that predicts the mean annual burned area fraction (F) of 0.25° x 0.25° grid cells as a function of the climatic water balance. Following Bradstock's four-switch model, long-term fire activity levels were assumed to be controlled by fuel productivity rates and the likelihood that the extant fuel is dry enough to burn. The frequency of ignitions and favourable fire weather were assumed to be non-limiting at long time scales. Fundamentally, fuel productivity and fuel dryness are a function of the local water and energy budgets available for the production and desiccation of plant biomass. The climatic water balance summarizes the simultaneous availability of biologically usable energy and water at a site, and may therefore be expected to explain a significant proportion of global variation in F. To capture the effect of the climatic water balance on fire activity I focused on the upper quantiles of F, i.e. the maximum level of fire activity for a given climatic water balance. 
    Analysing GFED4 data for annual burned area together with gridded climate data, I found that nearly 80% of the global variation in the 0.99 quantile of F (i.e. F_0.99) was explained by two terms of the climatic water balance: i) mean annual actual evapotranspiration (AET), which is a proxy for fuel productivity, and ii) mean annual water deficit (D=PET-AET, where PET is mean annual potential evapotranspiration), which is a measure of fuel drying potential. As expected, F_0.99 was close to zero in environments of low AET (e.g. deserts) or low D (e.g. wet forests), due to strong fuel productivity or fuel dryness constraints, and maximum for environments of intermediate AET and D (e.g. tropical savannas). The topography of the F_0.99 response surface was analysed to explore how the relative importance of fuel productivity and fuel dryness constraints varied with the climatic water balance, and geographically across the continents. Consistent with current understanding of global pyrogeography, the hydroclimatic fire model predicted that fire activity is mostly constrained by fuel productivity in arid environments with grassy fuels and by fuel dryness in humid environments with litter fuels derived from woody shrubs and trees. The model provides a simple, yet biophysically-based, approach to evaluating potential for incremental change in fire activity or transformational change in fire types under future climate conditions.
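The upper-quantile response surface described above can be sketched with empirical quantiles in climate-space bins. This is a minimal illustration on fully synthetic data (all values and the envelope shape are invented; `np.quantile` per bin stands in for the paper's quantile analysis of GFED4 data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the gridded data (the paper uses GFED4 burned area
# with gridded climate): AET (mm/yr), water deficit D = PET - AET (mm/yr),
# and burned area fraction F. All numbers here are illustrative only.
n = 50_000
aet = rng.uniform(0, 1500, n)
d = rng.uniform(0, 1500, n)
# Upper envelope of fire activity: high at intermediate-to-high AET and D,
# suppressed where productivity (low AET) or dryness (low D) is limiting.
envelope = (aet / 1500) * (d / 1500)
f = envelope * rng.uniform(0, 1, n)  # most cells burn far below the envelope

# Estimate F_0.99, the climatically constrained maximum fire activity,
# within AET x D bins.
edges = np.linspace(0, 1500, 6)
ai = np.digitize(aet, edges) - 1
di = np.digitize(d, edges) - 1
f99 = np.full((5, 5), np.nan)
for i in range(5):
    for j in range(5):
        cell = f[(ai == i) & (di == j)]
        if cell.size > 100:
            f99[i, j] = np.quantile(cell, 0.99)

# The high-AET/high-D corner should carry the largest F_0.99; the low-AET
# and low-D corners the smallest.
print(f99.round(2))
```

With real data the envelope is of course estimated, not known; the binned upper quantile is simply the most direct way to trace "maximum fire activity for a given climatic water balance".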

  18. Gender differences in the impact of mental disorders and chronic physical conditions on health-related quality of life among non-demented primary care elderly patients.

    PubMed

    Baladón, Luisa; Rubio-Valera, Maria; Serrano-Blanco, Antoni; Palao, Diego J; Fernández, Ana

    2016-06-01

    This paper aims to estimate the comorbidity of mental disorders and chronic physical conditions and to describe the impact of these conditions on health-related quality of life (HRQoL) in a sample of older primary care (PC) attendees, by gender. Cross-sectional survey, conducted in 77 PC centres in Catalonia (Spain) on 1192 patients over 65 years old. Using face-to-face interviews, we assessed HRQoL (SF-12), mental disorders (SCID and MINI structured clinical interviews), chronic physical conditions (checklist), and disability (Sheehan disability scale). We used multivariate quantile regressions to model which factors were associated with the physical component summary (PCS) and mental component summary (MCS) of the SF-12. The most frequent comorbidity in both men and women was mood disorder with chronic pain and arthrosis. Mental disorders mainly affected 'mental' QoL, while physical disorders affected 'physical' QoL. Mental disorders had a greater impact on HRQoL than chronic physical conditions, with mood and adjustment disorders being the most disabling conditions. There were some gender differences in the impact of mental and chronic physical conditions on HRQoL. Anxiety disorders and pain had an impact on HRQoL, but only in women. Respiratory diseases affected the MCS in women, but only the PCS in men. Mood and adjustment disorders had the greatest impact on HRQoL. The impact profile of mental and chronic physical conditions differs between genders. Our results reinforce the need to screen for mental disorders (mainly depression) in older PC patients.

  19. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
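The scaled quantile residual diagnostic mentioned above can be sketched via the probability integral transform. This is my own simplified construction of the idea, not the authors' exact definition:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def scaled_quantile_residuals(sample, cdf):
    """Residuals of the probability transform u = cdf(x): under a correct
    cdf the sorted u's track the uniform order-statistic means k/(n+1),
    and sqrt(n) scaling keeps deviations comparable across sample sizes."""
    u = cdf(np.sort(sample))
    n = len(u)
    return np.sqrt(n) * (u - np.arange(1, n + 1) / (n + 1))

x = rng.normal(2.0, 1.5, 1000)
good = scaled_quantile_residuals(x, lambda v: stats.norm.cdf(v, 2.0, 1.5))
bad = scaled_quantile_residuals(x, lambda v: stats.norm.cdf(v, 0.0, 1.5))

# A mis-specified CDF produces far larger residuals than the true one.
print(round(np.abs(good).max(), 2), round(np.abs(bad).max(), 2))
```

Plotting such residuals against k/(n+1) gives a visual check that, like the paper's scoring function, flags atypical fluctuations of a candidate distribution fit.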

  1. Reference charts for young stands — a quantitative methodology for assessing tree performance

    Treesearch

    Lance A. Vickers; David R. Larsen; Benjamin O. Knapp; John M. Kabrick; Daniel C. Dey

    2017-01-01

    Reference charts have long been used in the medical field for quantitative clinical assessment of juvenile development by plotting distribution quantiles for a selected attribute (e.g., height) against age for specified peer populations. We propose that early stand dynamics is an area of study that could benefit from the descriptions and analyses offered by similar...

  2. Estimating tree crown widths for the primary Acadian species in Maine

    Treesearch

    Matthew B. Russell; Aaron R. Weiskittel

    2012-01-01

    In this analysis, data for seven conifer and eight hardwood species were gathered from across the state of Maine for estimating tree crown widths. Maximum and largest crown width equations were developed using tree diameter at breast height as the primary predicting variable. Quantile regression techniques were used to estimate the maximum crown width and a constrained...

  3. The Economic Returns to Field of Study and Competencies among Higher Education Graduates in Ireland

    ERIC Educational Resources Information Center

    Kelly, Elish; O'Connell, Philip J.; Smyth, Emer

    2010-01-01

    This paper looks at the economic returns to different fields of study in Ireland in 2004 and also the value placed on various job-related competencies, accumulated on completion of higher education, in the Irish labour market. In examining these issues, the paper also analyses, through quantile regression, how the returns vary across the earnings…

  4. The Dynamics of the Evolution of the Black-White Test Score Gap

    ERIC Educational Resources Information Center

    Sohn, Kitae

    2012-01-01

    We apply a quantile version of the Oaxaca-Blinder decomposition to estimate the counterfactual distribution of the test scores of Black students. In the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS-K), we find that the gap initially appears only at the top of the distribution of test scores. As children age, however,…

  5. Has the Bologna Process Been Worthwhile? An Analysis of the Learning Society-Adapted Outcome Index through Quantile Regression

    ERIC Educational Resources Information Center

    Fernandez-Sainz, A.; García-Merino, J. D.; Urionabarrenetxea, S.

    2016-01-01

    This paper seeks to discover whether the performance of university students has improved in the wake of the changes in higher education introduced by the Bologna Declaration of 1999 and the construction of the European Higher Education Area. A principal component analysis is used to construct a multi-dimensional performance variable called the…

  6. The Mean Is Not Enough: Using Quantile Regression to Examine Trends in Asian-White Differences across the Entire Achievement Distribution

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Background: In recent years, Asian Americans have been consistently described as a model minority. The high levels of educational achievement and educational attainment are the main determinants for identifying Asian Americans as a model minority. Nonetheless, only a few studies have examined empirically the accomplishments of Asian Americans, and…

  7. Declining annual streamflow distributions in the Pacific Northwest United States, 1948-2006

    Treesearch

    C. H. Luce; Z. A. Holden

    2009-01-01

    Much of the discussion on climate change and water in the western United States centers on decreased snowpack and earlier spring runoff. Although increasing variability in annual flows has been noted, the nature of those changes is largely unexplored. We tested for trends in the distribution of annual runoff using quantile regression at 43 gages in the Pacific...

  8. Leader Perceptions and Student Achievement: An Examination of Reading and Mathematics International Test Results in Korea and the USA

    ERIC Educational Resources Information Center

    Shin, Seon-Hi; Slater, Charles L.; Ortiz, Steve

    2017-01-01

    Purpose: The purpose of this paper is to examine what factors affect student achievement in reading and mathematics. The research questions addressed the perceptions of school principals and background characteristics related to student achievement in Korea and the USA with respect to differences among students in low, middle and high quantiles.…

  9. The Effect of Family Background, University Quality and Educational Mismatch on Wage: An Analysis Using a Young Cohort of Italian Graduates

    ERIC Educational Resources Information Center

    Ordine, Patrizia; Rose, Giuseppe

    2015-01-01

    This paper analyzes the impact of university quality, family background and mismatch on the wages of young Italian graduates. An empirical analysis is undertaken using a representative sample of graduates merged with a dataset containing information on the characteristics of universities. By utilizing quantile regression techniques, some evidence…

  10. Obesity inequality in Malaysia: decomposing differences by gender and ethnicity using quantile regression.

    PubMed

    Dunn, Richard A; Tan, Andrew K G; Nayga, Rodolfo M

    2012-01-01

    Obesity prevalence is unequally distributed across gender and ethnic group in Malaysia. In this paper, we examine the role of socioeconomic inequality in explaining these disparities. The body mass index (BMI) distributions of Malays and Chinese, the two largest ethnic groups in Malaysia, are estimated through the use of quantile regression. The differences in the BMI distributions are then decomposed into two parts: one attributable to differences in socioeconomic endowments and one attributable to differences in responses to endowments. For both males and females, the BMI distribution of Malays is shifted toward the right of the distribution of Chinese, i.e., Malays exhibit higher obesity rates. In the lower 75% of the distribution, differences in socioeconomic endowments explain none of this difference. At the 90th percentile, differences in socioeconomic endowments account for no more than 30% of the difference in BMI between ethnic groups. Our results demonstrate that the higher levels of income and education that accrue with economic development will likely not eliminate obesity inequality. This leads us to conclude that reduction of obesity inequality, as well as the overall level of obesity, requires increased efforts to alter the lifestyle behaviors of Malaysians.

  11. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
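The peaks-over-threshold step of this workflow can be sketched with `scipy.stats.genpareto`. The data below are synthetic (a gamma "precipitation" series, not the Berlin or Bangalore data), and the quantile-based threshold and return-level formula are the standard POT recipe rather than the paper's Bayesian scaling model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic daily precipitation amounts (illustrative only).
precip = stats.gamma.rvs(0.8, scale=8.0, size=20_000, random_state=rng)

# Threshold taken from a high quantile of the series, echoing the
# quantile-based threshold selection the abstract describes.
u = np.quantile(precip, 0.95)
excess = precip[precip > u] - u

# Fit the GPD to the excesses (location fixed at 0 by construction).
shape, loc, scale = stats.genpareto.fit(excess, floc=0)

# Return level exceeded on average once per m observations, via the GPD
# quantile of the exceedance distribution and the exceedance rate zeta.
zeta = (precip > u).mean()
m = 100
return_level = u + stats.genpareto.ppf(1 - 1 / (m * zeta), shape, loc=0, scale=scale)
print(round(u, 1), round(shape, 3), round(return_level, 1))
```

The difficulty the abstract addresses is precisely that `u`, `zeta`, `shape`, and `scale` all change with the aggregation duration, which is what their scaling relationships parameterize.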

  12. The effect of fetal sex on customized fetal growth charts.

    PubMed

    Rizzo, Giuseppe; Prefumo, Federico; Ferrazzi, Enrico; Zanardini, Cristina; Di Martino, Daniela; Boito, Simona; Aiello, Elisa; Ghi, Tullio

    2016-12-01

    To evaluate the effect of fetal sex on singleton pregnancy growth charts customized for parental characteristics, race, and parity. Methods: In a multicentric cross-sectional study, 8070 ultrasonographic examinations from low-risk singleton pregnancies between 16 and 40 weeks of gestation were considered. The fetal measurements obtained were biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), and femur length (FL). Quantile regression was used to examine the impact of fetal sex across the biometric percentiles of the fetal measurements considered, together with parents' height, weight, parity, and race. Fetal sex was a significant covariate for BPD, HC, and AC, with higher values for male fetuses (p ≤ 0.0009). Minimal differences were found between sexes for FL. Parity, maternal race, paternal height, and maternal height and weight were significantly related to the fetal biometric parameters considered, independently of fetal sex. In this study, we constructed biometric growth charts customized for fetal sex, parental, and obstetrical characteristics using quantile regression. The use of sex-specific charts offers the advantage of defining individualized normal ranges of fetal biometric parameters at each specific centile. This approach may improve the antenatal identification of abnormal fetal growth.

  13. Can adherence to dietary guidelines address excess caloric intake? An empirical assessment for the UK.

    PubMed

    Srinivasan, C S

    2013-12-01

    The facilitation of healthier dietary choices by consumers is a key element of government strategies to combat the rising incidence of obesity in developed and developing countries. Public health campaigns to promote healthier eating often target compliance with recommended dietary guidelines for consumption of individual nutrients such as fats and added sugars. This paper examines the association between improved compliance with dietary guidelines for individual nutrients and excess calorie intake, the most proximate determinant of obesity risk. We apply quantile regressions and counterfactual decompositions to cross-sectional data from the National Diet and Nutrition Survey (2000-01) to assess how excess calorie consumption patterns in the UK are likely to change with improved compliance with dietary guidelines. We find that the effects of compliance vary significantly across different quantiles of calorie consumption. Our results show that compliance with dietary guidelines for individual nutrients, even if successfully achieved, is likely to be associated with only modest shifts in excess calorie consumption patterns. Consequently, public health campaigns that target compliance with dietary guidelines for specific nutrients in isolation are unlikely to have a significant effect on the obesity risk faced by the population. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Evaluation of three statistical prediction models for forensic age prediction based on DNA methylation.

    PubMed

    Smeers, Inge; Decorte, Ronny; Van de Voorde, Wim; Bekaert, Bram

    2018-05-01

    DNA methylation is a promising biomarker for forensic age prediction. A challenge that has emerged in recent studies is the fact that prediction errors become larger with increasing age due to interindividual differences in epigenetic ageing rates. This phenomenon of non-constant variance or heteroscedasticity violates an assumption of the often used method of ordinary least squares (OLS) regression. The aim of this study was to evaluate alternative statistical methods that do take heteroscedasticity into account in order to provide more accurate, age-dependent prediction intervals. A weighted least squares (WLS) regression is proposed as well as a quantile regression model. Their performances were compared against an OLS regression model based on the same dataset. Both models provided age-dependent prediction intervals which account for the increasing variance with age, but WLS regression performed better in terms of success rate in the current dataset. However, quantile regression might be a preferred method when dealing with a variance that is not only non-constant, but also not normally distributed. Ultimately the choice of which model to use should depend on the observed characteristics of the data. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
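A generic version of quantile-based rescaling is empirical quantile mapping. The sketch below uses invented Gaussian "temperatures" and is not the authors' exact adjustment, but it shows the mechanics of matching model quantiles to observed ones:

```python
import numpy as np

rng = np.random.default_rng(5)

def quantile_map(x, model_sample, obs_sample, n_q=99):
    """Map values x from the model's distribution onto the observed one by
    matching empirical quantiles (generic quantile mapping; a sketch of the
    rescaling idea, not the authors' exact procedure)."""
    qs = np.linspace(0.01, 0.99, n_q)
    return np.interp(x, np.quantile(model_sample, qs), np.quantile(obs_sample, qs))

obs = rng.normal(25.0, 4.0, 5000)    # e.g. observed summer temperatures
model = rng.normal(22.0, 6.0, 5000)  # the model runs cold and too variable

corrected = quantile_map(model, model, obs)

# After mapping, mean and spread of the corrected sample match observations
# (apart from clipping beyond the 1st/99th mapped percentiles).
print(round(corrected.mean() - obs.mean(), 2), round(corrected.std() / obs.std(), 2))
```

Applying the same mapping to a counterfactual model run then allows event probabilities, and hence risk ratios, to be compared on the observational scale.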

  16. Modeling Longitudinal Data Containing Non-Normal Within Subject Errors

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan; Glenn, Nancy L.

    2013-01-01

    The mission of the National Aeronautics and Space Administration’s (NASA) human research program is to advance safe human spaceflight. This involves conducting experiments, collecting data, and analyzing data. The data are longitudinal and come from a relatively small number of subjects, typically 10-20. A longitudinal study refers to an investigation where participant outcomes, and possibly treatments, are collected at multiple follow-up times. Standard statistical designs such as mean regression with random effects and mixed-effects regression are inadequate for such data because the population is typically not approximately normally distributed. Hence, more advanced data analysis methods are necessary. This research focuses on four such methods for longitudinal data analysis: the recently proposed linear quantile mixed models (lqmm) of Geraci and Bottai (2013), quantile regression, multilevel mixed-effects linear regression, and robust regression. This research also provides computational algorithms for longitudinal data that scientists can use directly for human spaceflight and other longitudinal data applications, and presents statistical evidence on which method is best for specific situations. This advances the study of longitudinal data in a broad range of applications, including the sciences, technology, engineering, and mathematics fields.

  17. Hospital ownership and drug utilization under a global budget: a quantile regression analysis.

    PubMed

    Zhang, Jing Hua; Chou, Shin-Yi; Deily, Mary E; Lien, Hsien-Ming

    2014-03-01

    A global budgeting system helps control the growth of healthcare spending by setting expenditure ceilings. However, the hospital global budget implemented in Taiwan in 2002 included a special provision: drug expenditures are reimbursed at face value, while other expenditures are subject to discounting. That gives hospitals, particularly those that are for-profit, an incentive to increase drug expenditures in treating patients. We calculated monthly drug expenditures by hospital departments from January 1997 to June 2006, using a sample of 348 193 patient claims to Taiwan National Health Insurance. To allow for variation among responses by departments with differing reliance on drugs and among hospitals of different ownerships, we used quantile regression to identify the effect of the hospital global budget on drug expenditures. Although drug expenditure increased in all hospital departments after the enactment of the hospital global budget, departments in for-profit hospitals that rely more heavily on drug treatments increased drug spending more, relative to public hospitals. Our findings suggest that a global budgeting system with special reimbursement provisions for certain treatment categories may alter treatment decisions and may undermine cost-containment goals, particularly among for-profit hospitals.

  18. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
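One member of this rule family can be simulated in a few lines. The dose-toxicity probabilities below are invented, and the up-move probability b = G/(1-G) is the standard biased-coin choice for targeting a quantile G ≤ 0.5 (a sketch, not the specific rule analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(9)

# Biased-coin random walk: step down after a toxic response; otherwise step
# up only with probability b = G / (1 - G), which centers allocations on the
# dose whose toxicity probability is the target quantile G (here G = 0.30).
p_tox = np.array([0.05, 0.15, 0.30, 0.50, 0.70])  # hypothetical dose-toxicity curve
target = 0.30
b = target / (1 - target)

dose = 0
visits = np.zeros(len(p_tox), dtype=int)
for _ in range(50_000):
    visits[dose] += 1
    if rng.random() < p_tox[dose]:            # toxicity: step down
        dose = max(dose - 1, 0)
    elif rng.random() < b:                    # no toxicity: step up w.p. b
        dose = min(dose + 1, len(p_tox) - 1)
    # otherwise stay at the current dose

print(visits, "mode at dose", visits.argmax())
```

Because this is a birth-death chain, its stationary distribution is available in closed form, which is why, as the abstract notes, the design's properties can be analyzed exactly rather than by simulation.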

  19. Nuclear morphology for the detection of alterations in bronchial cells from lung cancer: an attempt to improve sensitivity and specificity.

    PubMed

    Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc

    2011-08-01

    To identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer, based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content, and to develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from occupationally exposed patients presenting with bronchopulmonary cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei via image cytometry. A method was developed for analyzing distribution quantiles, rather than simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of parameters enabled us to identify 13 of 18 parameters that showed significant differences between controls and cancer cases. Used alone, these parameters enabled us to distinguish the two population types with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher values were observed for at least one of the corresponding quantiles. Analysis of modifications in morphologic parameters via distribution analysis proved promising for screening bronchopulmonary cancer from sputum.
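The advantage of comparing quantiles rather than means is easy to demonstrate. In this toy version (all numbers invented), two nucleus populations have nearly identical means while an upper quantile separates them, because only a subpopulation of the cancer sample is abnormal:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical nuclear shape feature for two samples.
controls = rng.normal(1.0, 0.15, 400)
cancer = np.concatenate([
    rng.normal(1.0, 0.15, 320),   # most nuclei look normal
    rng.normal(1.8, 0.25, 80),    # abnormal subpopulation
])

mean_sep = cancer.mean() - controls.mean()
q90_sep = np.quantile(cancer, 0.9) - np.quantile(controls, 0.9)

# The 0.9 quantile separates the groups far better than the mean.
print(round(mean_sep, 2), round(q90_sep, 2))
```

This mirrors the paper's finding that, where means discriminated well, at least one quantile did as well or better.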

  20. Evaluation of uncertainties in mean and extreme precipitation under climate change for northwestern Mediterranean watersheds from high-resolution Med and Euro-CORDEX ensembles

    NASA Astrophysics Data System (ADS)

    Colmet-Daage, Antoine; Sanchez-Gomez, Emilia; Ricci, Sophie; Llovel, Cécile; Borrell Estupina, Valérie; Quintana-Seguí, Pere; Llasat, Maria Carmen; Servat, Eric

    2018-01-01

    The climate change impact on mean and extreme precipitation events in the northern Mediterranean region is assessed using high-resolution EuroCORDEX and MedCORDEX simulations. The focus is on three regions, Lez and Aude in France and Muga in northeastern Spain, and eight pairs of global and regional climate models are analyzed with respect to the SAFRAN product. First, model skill is evaluated in terms of bias in the annual precipitation cycle over the historical period. Then future changes in extreme precipitation under two emission scenarios are estimated through the computation of past/future change coefficients of quantile-ranked model precipitation outputs. Over the 1981-2010 period, cumulative precipitation is overestimated for most models over the mountainous regions and underestimated over the coastal regions in autumn and at higher-order quantiles. The ensemble mean and spread for the future period remain unchanged under the RCP4.5 scenario and decrease under the RCP8.5 scenario. Extreme precipitation events are intensified over the three catchments, with a smaller ensemble spread under RCP8.5 revealing more evident changes, especially in the later part of the 21st century.
