Sample records for probit log skew-normal

  1. Categorical Data Analysis Using a Skewed Weibull Regression Model

    NASA Astrophysics Data System (ADS)

    Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano

    2018-03-01

    In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. Two data sets are analyzed to show the efficiency of the proposed model.

  2. Observed, unknown distributions of clinical chemical quantities should be considered to be log-normal: a proposal.

    PubMed

    Haeckel, Rainer; Wosniok, Werner

    2010-10-01

    The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.

  3. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    NASA Astrophysics Data System (ADS)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.

  4. Log-Normal Turbulence Dissipation in Global Ocean Models

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor

    2018-03-01

    Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.

  5. Experimental and statistical study on fracture boundary of non-irradiated Zircaloy-4 cladding tube under LOCA conditions

    NASA Astrophysics Data System (ADS)

    Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki

    2018-02-01

    For estimating the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions of light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The obtained binary data on fracture or non-fracture of the cladding tube specimens were then analyzed statistically. A method to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) was proposed using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Model selection was then performed in terms of physical characteristics and information criteria, namely the widely applicable information criterion and the widely applicable Bayesian information criterion. As a result, the log-probit model proved the best of the three models for estimating the fracture probability, in terms of prediction accuracy both for future data and for the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability of the cladding tube specimens at the 95% confidence level.
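
    A minimal sketch of such a log-probit fit, assuming a simple maximum-likelihood formulation p(ECR) = Φ(a + b·log10 ECR) rather than the paper's Bayesian machinery; the ECR values and fracture outcomes below are invented for illustration:

    ```python
    # Sketch: maximum-likelihood fit of a log-probit fracture-probability curve,
    # p(ECR) = Phi(a + b*log10(ECR)), to binary fracture/non-fracture data.
    # Variable names and data are illustrative, not the paper's.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    ecr = np.array([5, 8, 10, 12, 15, 18, 20, 25, 30, 35], dtype=float)  # % ECR
    fractured = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])                 # 1 = fracture

    def neg_log_lik(theta):
        a, b = theta
        p = norm.cdf(a + b * np.log10(ecr))
        p = np.clip(p, 1e-12, 1 - 1e-12)          # guard against log(0)
        return -np.sum(fractured * np.log(p) + (1 - fractured) * np.log(1 - p))

    fit = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
    a_hat, b_hat = fit.x
    ecr50 = 10 ** (-a_hat / b_hat)                # ECR at 50% fracture probability
    print(f"a={a_hat:.2f}, b={b_hat:.2f}, ECR50={ecr50:.1f}%")
    ```

    The 50% point is 10^(-a/b) on the ECR scale; the paper's 5% probability level with 95% confidence would come from the posterior distribution rather than a point fit like this.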

  6. A log-sinh transformation for data normalization and variance stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors as normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors first increases rapidly, then slowly, and eventually approaches a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
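
    For concreteness, here is the transformation in the form commonly quoted for this paper, z = (1/b)·ln(sinh(a + b·y)), together with its inverse; the parameter values are arbitrary illustrations:

    ```python
    # Sketch of the log-sinh transformation and its inverse, in the form
    # commonly quoted for Wang et al. (2012); a and b values are arbitrary.
    import numpy as np

    def log_sinh(y, a, b):
        return np.log(np.sinh(a + b * y)) / b

    def log_sinh_inverse(z, a, b):
        return (np.arcsinh(np.exp(b * z)) - a) / b

    y = np.array([0.5, 1.0, 5.0, 20.0, 100.0])   # positively skewed flows
    z = log_sinh(y, a=0.1, b=0.02)
    assert np.allclose(log_sinh_inverse(z, 0.1, 0.02), y)  # round-trip check
    print(z)
    ```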

  7. Parametric modelling of cost data in medical studies.

    PubMed

    Nixon, R M; Thompson, S G

    2004-04-30

    The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
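
    A rough sketch of the model-comparison step, fitting four of the candidate families with scipy and ranking them by AIC; the cost vector is simulated, not one of the paper's data sets (scipy's `fisk` is its name for the log-logistic):

    ```python
    # Sketch: fit candidate right-skewed distributions to cost data and compare
    # them by AIC; the cost vector is simulated, not one of the paper's data sets.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    costs = rng.lognormal(mean=7.0, sigma=1.0, size=200)   # synthetic costs

    candidates = {
        "normal":       stats.norm,
        "gamma":        stats.gamma,
        "log-normal":   stats.lognorm,
        "log-logistic": stats.fisk,   # scipy's name for the log-logistic
    }
    for name, dist in candidates.items():
        params = dist.fit(costs)
        aic = 2 * len(params) - 2 * np.sum(dist.logpdf(costs, *params))
        print(f"{name:12s} AIC = {aic:.1f}, mean = {dist.mean(*params):.0f}")
    ```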

  8. Scoring in genetically modified organism proficiency tests based on log-transformed results.

    PubMed

    Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P

    2006-01-01

    The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
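
    A toy illustration of the scoring step, assuming z-scores are formed on the log scale against an assigned value and a target standard deviation (both hypothetical here):

    ```python
    # Sketch: z-scoring one round of GMO proficiency results on the log scale.
    # The assigned value and target standard deviation are hypothetical.
    import numpy as np

    results = np.array([0.8, 1.1, 0.9, 1.6, 2.4, 1.0, 0.7, 1.3])  # reported % GMO
    assigned = 1.0      # assigned value for the round (%), illustrative
    sigma_p = 0.25      # target standard deviation on the natural-log scale

    z = (np.log(results) - np.log(assigned)) / sigma_p
    print(np.round(z, 2))   # interpret on the usual |z| <= 2 satisfactory basis
    ```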

  9. Frequency distribution of lithium in leaves of Lycium andersonii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romney, E.M.; Wallace, A.; Kinnear, J.

    1977-01-01

    Lycium andersonii A. Gray is an accumulator of Li. Assays were made of 200 samples of it collected from six different locations within the Northern Mojave Desert. Mean concentrations of Li varied from location to location and tended not to follow a logₑ-normal distribution, and to follow a normal distribution only poorly. There was some negative skewness to the logₑ distribution which did exist. The results imply that the variation in accumulation of Li depends upon native supply of Li. Possibly the Li supply and the ability of L. andersonii plants to accumulate it are both logₑ-normally distributed. The mean leaf concentration of Li in all locations was 29 μg/g, but the maximum was 166 μg/g.

  10. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values.

    PubMed

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-01-30

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
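
    The λ-search is easy to sketch with scipy: scan the Box-Cox parameter over a grid and keep the value that maximizes the Shapiro-Wilk P-value. The SUV sample below is simulated, not the study's data:

    ```python
    # Sketch of the search idea: scan Box-Cox lambda and keep the value that
    # maximizes the Shapiro-Wilk P-value of the transformed SUVs (simulated here).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    suv = rng.lognormal(mean=1.2, sigma=0.6, size=57)   # synthetic SUVmax values

    lambdas = np.linspace(-2, 2, 401)
    pvals = [stats.shapiro(stats.boxcox(suv, lmbda=lam)).pvalue for lam in lambdas]
    best = lambdas[int(np.argmax(pvals))]
    print(f"optimal lambda = {best:.2f}, Shapiro-Wilk P = {max(pvals):.3f}")
    ```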

  11. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    NASA Astrophysics Data System (ADS)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.

  12. Time-dependent dose-response relation for absence of vaginal elasticity after gynecological radiation therapy.

    PubMed

    Alevronta, Eleftheria; Åvall-Lundqvist, Elisabeth; Al-Abany, Massoud; Nyberg, Tommy; Lind, Helena; Waldenström, Ann-Charlotte; Olsson, Caroline; Dunberger, Gail; Bergmark, Karin; Steineck, Gunnar; Lind, Bengt K

    2016-09-01

    To investigate the dose-response relation between the dose to the vagina and the patient-reported symptom 'absence of vaginal elasticity' and how time to follow-up influences this relation. The study included 78 long-term gynecological cancer survivors treated between 1991 and 2003 with external beam radiation therapy. Of those, 24 experienced absence of vaginal elasticity. A normal tissue complication model is introduced that takes into account the influence of time to follow-up on the dose-response relation and the patient's age. The best estimates of the dose-response parameters were calculated using Probit, Probit-Relative Seriality (RS) and Probit-time models. Log likelihood (LL) values and the Akaike Information Criterion (AIC) were used to evaluate the model fit. The dose-response parameters for 'absence of vaginal elasticity' according to the Probit and Probit-time models with the 68% Confidence Intervals (CI) were: LL = -39.8, D50 = 49.7 (47.2-52.4) Gy, γ50 = 1.40 (1.12-1.70) and LL = -37.4, D50 = 46.9 (43.5-50.9) Gy, γ50 = 1.81 (1.17-2.51), respectively. The proposed model, which describes the influence of time to follow-up on the dose-response relation, fits our data best. Our data indicate that the steepness of the dose-response curve of the dose to the vagina and the symptom 'absence of vaginal elasticity' increases with time to follow-up, while D50 decreases. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models.

    PubMed

    Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V

    2016-08-12

    Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of the response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
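
    A simplified version of the simulation idea, assuming a one-compartment oral-absorption model with log-normal parameter variability (the paper's two-stage models and error structures are richer); all parameter values are illustrative:

    ```python
    # Sketch: simulate concentration-time profiles from a one-compartment oral
    # model with log-normal between-subject variability, then inspect log(AUC)
    # and log(Cmax) for departures from normality. Values are illustrative only.
    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    rng = np.random.default_rng(3)
    t = np.linspace(0.25, 24, 20)                      # sampling times (h)
    n = 500
    ka = rng.lognormal(np.log(1.0), 0.4, n)            # absorption rate (1/h)
    ke = rng.lognormal(np.log(0.15), 0.3, n)           # elimination rate (1/h)
    dose_vf = rng.lognormal(np.log(10.0), 0.2, n)      # dose / (V*F)

    conc = (dose_vf * ka / (ka - ke))[:, None] * (
        np.exp(-ke[:, None] * t) - np.exp(-ka[:, None] * t)
    )
    conc *= np.exp(rng.normal(0, 0.1, conc.shape))     # multiplicative assay error

    log_auc = np.log(trapezoid(conc, t, axis=1))
    log_cmax = np.log(conc.max(axis=1))
    print("log(AUC):  skew=%.2f, kurt=%.2f" % (stats.skew(log_auc), stats.kurtosis(log_auc)))
    print("log(Cmax): skew=%.2f, kurt=%.2f" % (stats.skew(log_cmax), stats.kurtosis(log_cmax)))
    ```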

  14. Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.

    PubMed

    Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M

    2016-02-01

    Two-part random effects models (Olsen and Schafer [1]; Tooze et al. [2]) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong [3]). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.

  15. A review of statistical estimators for risk-adjusted length of stay: analysis of the Australian and New Zealand Intensive Care Adult Patient Data-Base, 2008-2009.

    PubMed

    Moran, John L; Solomon, Patricia J

    2012-05-16

    For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of firstly: log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and secondly: unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using Bayesian information criterion [BIC: lower values preferred] and residual analysis as well as predictive performance (R2, concordance correlation coefficient (CCC), mean absolute error [MAE]) were established for each estimator. The data-set consisted of 111663 patients from 131 ICUs; with mean(SD) age 60.6(18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length-of-stay was 3.4(5.1) (median 1.8, range (0.17-60)) days and demonstrated marked kurtosis and right skew (29.4 and 4.4 respectively). BIC showed considerable spread, from a maximum of 509801 (OLS-raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17 and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators. There was a general tendency for over-prediction (negative residuals) and for over-fitting, the exception being the GLM negative binomial estimator. The mean-variance function was best approximated by a quadratic function, consistent with log-scale estimation; the link function was estimated (EEE) as 0.152(0.019, 0.285), consistent with a fractional-root function. For ICU length of stay, log-scale estimation, in particular the LMM, appeared to be the most consistently performing estimator(s). Neither the GLM variants nor the skew-regression estimators dominated.
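
    Two of the estimator families compared in the review, sketched on synthetic LOS data: OLS on log(LOS) and a gamma GLM with log link. The statsmodels interface is assumed here, and the covariates and coefficients are invented:

    ```python
    # Sketch: OLS on log(LOS) versus a gamma GLM with log link on raw LOS,
    # two of the review's estimator families, applied to synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 2000
    ventilated = rng.integers(0, 2, n)
    age = rng.normal(60, 18, n)
    mu = np.exp(0.4 + 0.6 * ventilated + 0.005 * age)   # true mean LOS (days)
    los = rng.gamma(shape=1.2, scale=mu / 1.2)          # right-skewed LOS

    X = sm.add_constant(np.column_stack([ventilated, age]))
    ols_log = sm.OLS(np.log(los), X).fit()
    glm_gamma = sm.GLM(los, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
    print(ols_log.params, glm_gamma.params, sep="\n")
    ```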

  16. Mechanism-based model for tumor drug resistance.

    PubMed

    Kuczek, T; Chan, T C

    1992-01-01

    The development of tumor resistance to cytotoxic agents has important implications in the treatment of cancer. If supported by experimental data, mathematical models of resistance can provide useful information on the underlying mechanisms and aid in the design of therapeutic regimens. We report on the development of a model of tumor-growth kinetics based on the assumption that the rates of cell growth in a tumor are normally distributed. We further assumed that the growth rate of each cell is proportional to its rate of total pyrimidine synthesis (de novo plus salvage). Using an ovarian carcinoma cell line (2008) and resistant variants selected by chronic exposure to a pyrimidine antimetabolite, N-phosphonacetyl-L-aspartate (PALA), we derived a simple and specific analytical form describing the growth curves generated in 72 h growth assays. The model assumes that the rate of de novo pyrimidine synthesis, denoted α, is shifted down by an amount proportional to the log₁₀ PALA concentration and that cells whose rate of pyrimidine synthesis falls below a critical level, denoted α₀, can no longer grow. This is described by the equation P(growth) = P(α₀ < α - c·log₁₀[PALA]), where c is a constant. This model predicts that when growth curves are plotted on probit paper, they will produce straight lines. This prediction is in agreement with the data we obtained for the 2008 cells. Another prediction of this model is that the same probit plots for the resistant variants should shift to the right in a parallel fashion. Probit plots of the dose-response data obtained for each resistant 2008 line following chronic exposure to PALA again confirmed this prediction. Correlation of the rightward shift of dose responses to uridine transport (r = 0.99) also suggests that salvage metabolism plays a key role in tumor-cell resistance to PALA. Furthermore, the slope of the regression lines enables the detection of synergy such as that observed between dipyridamole and PALA. Although the rate-normal model was used to study the rate of salvage metabolism in PALA resistance in the present study, it may be widely applicable to modeling of other resistance mechanisms such as gene amplification of target enzymes.
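
    The model's linearity prediction can be checked by regressing probit-transformed growth fractions on log₁₀[PALA]; the dose-response numbers here are invented for illustration:

    ```python
    # Sketch: the model's testable prediction -- growth fraction on a probit
    # scale should be linear in log10[PALA], with resistant lines shifted
    # right in parallel. Data values are invented for illustration.
    import numpy as np
    from scipy import stats

    pala = np.array([3, 10, 30, 100, 300])                # uM PALA, illustrative
    frac_growth = np.array([0.95, 0.80, 0.50, 0.20, 0.05])

    probit = stats.norm.ppf(frac_growth)                  # probit transform
    slope, intercept, r, *_ = stats.linregress(np.log10(pala), probit)
    print(f"probit vs log10[PALA]: r^2 = {r**2:.3f} (linearity check)")
    print(f"IC50 ~ {10 ** (-intercept / slope):.0f} uM")  # dose where probit = 0
    ```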

  17. Algae Tile Data: 2004-2007, BPA-51; Preliminary Report, October 28, 2008.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holderman, Charles

    Multiple files containing 2004 through 2007 Tile Chlorophyll data for the Kootenai River sites designated as KR1, KR2, KR3, KR4 (Downriver) and KR6, KR7, KR9, KR9.1, KR10, KR11, KR12, KR13, KR14 (Upriver) were received by SCS. For a complete description of the sites covered, please refer to http://ktoi.scsnetw.com. To maintain consistency with the previous SCS algae reports, all analyses were carried out separately for the Upriver and Downriver categories, as defined in the aforementioned paragraph. The Upriver designation, however, now includes three additional sites, KR11, KR12, and the nutrient addition site, KR9.1. Summary statistics and information on the four responses, chlorophyll a, chlorophyll a Accrual Rate, Total Chlorophyll, and Total Chlorophyll Accrual Rate, are presented in Print Out 2. Computations were carried out separately for each river position (Upriver and Downriver) and year. For example, the Downriver position in 2004 showed an average Chlorophyll a level of 25.5 mg with a standard deviation of 21.4 and minimum and maximum values of 3.1 and 196 mg, respectively. The Upriver data in 2004 showed a lower overall average chlorophyll a level at 2.23 mg with a lower standard deviation (3.6) and minimum and maximum values of 0.13 and 28.7, respectively. A more comprehensive summary of each variable and position is given in Print Out 3. This lists the information above as well as other summary information such as the variance, standard error, various percentiles and extreme values. Using the 2004 Downriver Chlorophyll a as an example again, the variance of this data was 459.3 and the standard error of the mean was 1.55. The median value or 50th percentile was 21.3, meaning 50% of the data fell above and below this value. It should be noted that this value is somewhat different from the mean of 25.5. This is an indication that the frequency distribution of the data is not symmetrical (skewed). The skewness statistic, listed as part of the first section of each analysis, quantifies this. In a symmetric distribution, such as a Normal distribution, the skewness value would be 0. The tile chlorophyll data, however, shows larger values. Chlorophyll a, in the 2004 Downriver example, has a skewness statistic of 3.54, which is quite high. In the last section of the summary analysis, the stem and leaf plot graphically demonstrates the asymmetry, showing most of the data centered around 25 with a large value at 196. The final plot is referred to as a normal probability plot and graphically compares the data to a theoretical normal distribution. For chlorophyll a, the data (asterisks) deviate substantially from the theoretical normal distribution (diagonal reference line of pluses), indicating that the data is non-normal. Other response variables in both the Downriver and Upriver categories also indicated skewed distributions. Because the sample size and mean comparison procedures below require symmetrical, normally distributed data, each response in the data set was logarithmically transformed. The logarithmic transformation, in this case, can help mitigate skewness problems. The summary statistics for the four transformed responses (log-ChlorA, log-TotChlor, and log-accrual) are given in Print Out 4. For the 2004 Downriver Chlorophyll a data, the logarithmic transformation reduced the skewness value to -0.36 and produced a more bell-shaped symmetric frequency distribution. Similar improvements are shown for the remaining variables and river categories. Hence, all subsequent analyses given below are based on logarithmic transformations of the original responses.

  18. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    PubMed Central

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

    Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  19. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student’s t-distribution

    PubMed Central

    Leão, William L.; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GH-ST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210

  20. Bayesian analysis of stochastic volatility-in-mean model with leverage and asymmetrically heavy-tailed error using generalized hyperbolic skew Student's t-distribution.

    PubMed

    Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui

    2017-01-01

    A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GH-ST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.

  1. A Bayesian Surrogate for Regional Skew in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Kuczera, George

    1983-06-01

    The problem of how to best utilize site and regional flood data to infer the shape parameter of a flood distribution is considered. One approach to this problem is given in Bulletin 17B of the U.S. Water Resources Council (1981) for the log-Pearson distribution. Here a lesser known distribution is considered, namely, the power normal which fits flood data as well as the log-Pearson and has a shape parameter denoted by λ derived from a Box-Cox power transformation. The problem of regionalizing λ is considered from an empirical Bayes perspective where site and regional flood data are used to infer λ. The distortive effects of spatial correlation and heterogeneity of site sampling variance of λ are explicitly studied with spatial correlation being found to be of secondary importance. The end product of this analysis is the posterior distribution of the power normal parameters expressing, in probabilistic terms, what is known about the parameters given site flood data and regional information on λ. This distribution can be used to provide the designer with several types of information. The posterior distribution of the T-year flood is derived. The effect of nonlinearity in λ on inference is illustrated. Because uncertainty in λ is explicitly allowed for, the understatement in confidence limits due to fixing λ (analogous to fixing log skew) is avoided. Finally, it is shown how to obtain the marginal flood distribution which can be used to select a design flood with specified exceedance probability.

  2. Log-Normal Distribution of Cosmic Voids in Simulations and Mocks

    NASA Astrophysics Data System (ADS)

    Russell, E.; Pycke, J.-R.

    2017-01-01

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.

  3. Method of estimating flood-frequency parameters for streams in Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.; Moffatt, R.L.

    1981-01-01

    Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
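
    The final computation the report describes can be sketched with scipy's standardized Pearson type III distribution: combine the three estimated statistics into a log-Pearson type III quantile (the numbers below are hypothetical):

    ```python
    # Sketch: a log-Pearson type III flood quantile from the three statistics
    # the report estimates (mean and standard deviation of log10 annual peaks
    # plus a generalized skew); all values are hypothetical.
    from scipy import stats

    mean_log, sd_log, gen_skew = 3.2, 0.25, -0.1   # hypothetical site statistics
    p_exceed = 0.02                                 # 2% exceedance (50-year) flood

    # Standardized Pearson III deviate (frequency factor) for this skew:
    k = stats.pearson3.ppf(1 - p_exceed, skew=gen_skew)
    q50yr = 10 ** (mean_log + k * sd_log)
    print(f"50-year flood ~ {q50yr:.0f} cfs")
    ```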

  4. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)

  5. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.

  6. Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments

    NASA Astrophysics Data System (ADS)

    Griffis, V. W.; Stedinger, J. R.; Cohn, T. A.

    2004-07-01

    The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimators at estimating log-Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample, with and without regional skew, against estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.

  7. An estimate of field size distributions for selected sites in the major grain producing countries

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1977-01-01

    The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was then analyzed by computer for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching that of either a Poisson or log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution which has moments readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.

  8. Explorations in statistics: the log transformation.

    PubMed

    Curran-Everett, Douglas

    2018-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
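
    A minimal version of the bootstrap check described, comparing the sampling distribution of the mean before and after the log transformation on a simulated skewed sample:

    ```python
    # Sketch: bootstrap the sample mean of raw and log-transformed observations
    # and compare how close each sampling distribution is to normal (skew ~ 0).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    y = rng.lognormal(0.0, 1.0, size=30)       # skewed observations

    def boot_means(x, reps=5000):
        idx = rng.integers(0, len(x), size=(reps, len(x)))
        return x[idx].mean(axis=1)

    print("skewness of bootstrapped mean, raw :", round(stats.skew(boot_means(y)), 2))
    print("skewness of bootstrapped mean, log :", round(stats.skew(boot_means(np.log(y))), 2))
    ```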

  9. The MDI Method as a Generalization of Logit, Probit and Hendry Analyses in Marketing.

    DTIC Science & Technology

    1980-04-01

    model involves nothing more than fitting a normal distribution function (Hanushek and Jackson (1977)). For a given value of x, the probit model ... preference shifts within the soft drink category. ... For applications of probit models relevant for marketing, see Hausman and Wise (1978) and Hanushek and ... Marketing Research," JMR XIV, Feb. (1977). Hanushek, E.A., and J.E. Jackson, Statistical Methods for Social Scientists. Academic Press, New York (1977

  10. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J²) statistics can be applied directly. In a simulation study, TG, HL, and J² were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J² were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J². © 2015 John Wiley & Sons Ltd/London School of Economics.

  11. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

    In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
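
    As a stand-in for the EM fitting described, scipy's built-in skew-normal estimator illustrates recovering the shape, location, and scale parameters from skewed trait data (simulated here):

    ```python
    # Sketch: fitting a skew-normal distribution to a trait sample with scipy,
    # an off-the-shelf stand-in for the paper's EM estimation; data simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    trait = stats.skewnorm.rvs(a=4.0, loc=10.0, scale=2.0, size=300, random_state=rng)

    a_hat, loc_hat, scale_hat = stats.skewnorm.fit(trait)
    print(f"shape={a_hat:.2f}, location={loc_hat:.2f}, scale={scale_hat:.2f}")
    # a_hat near 0 would indicate the symmetric-normal special case
    ```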

  12. Plasma Electrolyte Distributions in Humans-Normal or Skewed?

    PubMed

    Feldman, Mark; Dickson, Beverly

    2017-11-01

    It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na+), potassium (K+), chloride (Cl-) and bicarbonate (HCO3-) distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na+ distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl- distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. In contrast, both the plasma K+ and HCO3- distributions were significantly skewed to the right (P < 0.01 vs. zero skew). There was also a suggestion from examining frequency distribution curves that the K+ and HCO3- distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods used to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
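
    A sketch of the comparison to zero skew, using the D'Agostino skewness test as one reasonable choice (the paper's exact test isn't specified in this record); the electrolyte values are synthetic:

    ```python
    # Sketch: testing whether a plasma electrolyte sample's skewness differs
    # from the zero skew of a normal distribution (D'Agostino skewness test).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    # synthetic right-skewed potassium values (mmol/L), n = 237 as in the study
    potassium = rng.normal(4.1, 0.35, 237) + rng.exponential(0.1, 237)

    print("sample skewness:", round(stats.skew(potassium), 2))
    stat, p = stats.skewtest(potassium)
    print(f"skewtest: z = {stat:.2f}, P = {p:.4f}")   # small P => skew != 0
    ```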

  13. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
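
    Evaluating the three link-specific formulas for a hypothetical fitted group effect β:

    ```python
    # Sketch: ordinal superiority measures computed from a hypothetical group
    # effect beta, using the link-specific formulas quoted in the abstract.
    import numpy as np
    from scipy.stats import norm

    beta = 0.8                                          # illustrative group effect

    probit = norm.cdf(beta / np.sqrt(2))                # probit link, exact
    loglog = np.exp(beta) / (1 + np.exp(beta))          # log-log link, exact
    logit = np.exp(beta / np.sqrt(2)) / (1 + np.exp(beta / np.sqrt(2)))  # approx.
    print(f"probit {probit:.3f}, log-log {loglog:.3f}, logit ~ {logit:.3f}")
    ```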

  14. An analytical approach to reduce between-plate variation in multiplex assays that measure antibodies to Plasmodium falciparum antigens.

    PubMed

    Fang, Rui; Wey, Andrew; Bobbili, Naveen K; Leke, Rose F G; Taylor, Diane Wallace; Chen, John J

    2017-07-17

    Antibodies play an important role in immunity to malaria. Recent studies show that antibodies to multiple antigens, as well as the overall breadth of the response, are associated with protection from malaria. Yet, the variability and reliability of antibody measurements against a combination of malarial antigens using multiplex assays have not been well characterized. A normalization procedure for reducing between-plate variation using replicates of pooled positive and negative controls was investigated. Sixty test samples (30 from malaria-positive and 30 malaria-negative individuals), together with five pooled positive controls and two pooled negative controls, were screened for antibody levels to 9 malarial antigens, including merozoite antigens (AMA1, EBA175, MSP1, MSP2, MSP3, MSP11, Pf41), sporozoite CSP, and pregnancy-associated VAR2CSA. The antibody levels were measured in triplicate on each of 3 plates, and the experiments were replicated on two different days by the same technician. The performance of the proposed normalization procedure was evaluated with the pooled controls for the test samples on both the linear and natural-log scales. Compared with data on the linear scale, the natural-log transformed data were less skewed and showed a weaker mean-variance relationship. The proposed normalization procedure using pooled controls on the natural-log scale significantly reduced between-plate variation. For malaria-related research that measures antibodies to multiple antigens with multiplex assays, the natural-log transformation is recommended for data analysis, and use of the normalization procedure with multiple pooled controls can improve the precision of antibody measurements.
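
    A simplified sketch of the idea (not the paper's exact procedure): log-transform readings, estimate a per-plate offset from the replicated pooled controls, and subtract it from the test samples. Array shapes and values are illustrative:

    ```python
    # Sketch: per-plate offset correction on the natural-log scale using pooled
    # controls; a simplified stand-in for the paper's normalization procedure.
    import numpy as np

    rng = np.random.default_rng(6)
    true_controls = np.log(np.array([500.0, 2000.0, 8000.0]))   # 3 pooled controls
    plate_bias = np.array([0.00, 0.25, -0.15])                  # per-plate shift

    # controls[i, j]: log reading of control j on plate i (with noise)
    controls = true_controls + plate_bias[:, None] + rng.normal(0, 0.05, (3, 3))

    offset = (controls - controls.mean(axis=0)).mean(axis=1)    # estimated plate offsets
    samples = rng.normal(7.0, 1.0, (3, 20)) + plate_bias[:, None]
    normalized = samples - offset[:, None]                      # remove plate effect
    print(np.round(offset, 3))
    ```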

  15. Statistical considerations in the analysis of data from replicated bioassays

    USDA-ARS?s Scientific Manuscript database

    Multiple-dose bioassay is generally the preferred method for characterizing virulence of insect pathogens. Linear regression of probit mortality on log dose enables estimation of LD50/LC50 and slope, the latter having substantial effect on LD90/95s (doses of considerable interest in pest management)...
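
    A small numeric illustration of the snippet's point that the slope drives LD90/LD95 even at a fixed LD50, assuming probit(p) = a + b·log10(dose) with invented coefficients:

    ```python
    # Sketch: how the fitted probit slope drives LD90/LD95 even when LD50 is
    # held fixed. Intercept and slope values are invented for illustration.
    import numpy as np
    from scipy.stats import norm

    def ld(p, intercept, slope):
        """Dose giving mortality p under probit(p) = intercept + slope*log10(dose)."""
        return 10 ** ((norm.ppf(p) - intercept) / slope)

    for slope in (1.5, 3.0):                      # two bioassays, same LD50
        intercept = -slope * np.log10(50.0)       # anchor LD50 at 50 dose units
        print(f"slope={slope}: LD50={ld(0.5, intercept, slope):.0f}, "
              f"LD90={ld(0.9, intercept, slope):.0f}, LD95={ld(0.95, intercept, slope):.0f}")
    ```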

  16. The formulation and estimation of a spatial skew-normal generalized ordered-response model.

    DOT National Transportation Integrated Search

    2016-06-01

    This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an : associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal : distribut...

  17. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites: Observed cloud variability at ARM sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-17

    Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy’s Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the log normal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.

  18. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    PubMed Central

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175

  19. A comparison of methods to handle skew distributed cost variables in the analysis of the resource consumption in schizophrenia treatment.

    PubMed

    Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C

    2002-03-01

    Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator of White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R² and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSEs were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R² of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE when the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction; the GLM again showed the weakest fit. None of the differences between the RMSEs resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSEs were not significant. Owing to the small number of cases in the study, the lack of significance does not prove that the differences between the RMSEs for the different models are zero, and the superiority of the linear OLS model cannot be generalized; the lack of significant differences among the alternative estimators may simply reflect a sample size inadequate to detect important differences. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by non-parametric methods that are robust against deviations from normality and homoscedasticity of residuals is a suitable alternative to transformation of the skewed dependent variable. Further studies with larger case numbers are needed to confirm the results.
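
    The three competing specifications are straightforward to fit with statsmodels. A sketch with hypothetical covariates X and annual costs y, including one smearing-type retransformation (one of several possible bias corrections, not necessarily the ones used in the paper):

    ```python
    import numpy as np
    import statsmodels.api as sm

    def fit_cost_models(X, y):
        X = sm.add_constant(X)
        ols = sm.OLS(y, X).fit(cov_type="HC0")               # White robust SEs
        log_ols = sm.OLS(np.log(y), X).fit(cov_type="HC0")
        glm = sm.GLM(y, X, family=sm.families.Gamma(
            link=sm.families.links.Log())).fit()
        # Naive retransformation exp(Xb) is biased downward; Duan's smearing
        # factor rescales by the mean of the exponentiated log-scale residuals.
        pred_log_ols = np.exp(log_ols.fittedvalues) * np.exp(log_ols.resid).mean()
        return ols, log_ols, glm, pred_log_ols
    ```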

  20. Stochastic epigenetic mutations (DNA methylation) increase exponentially in human aging and correlate with X chromosome inactivation skewing in females.

    PubMed

    Gentilini, Davide; Garagnani, Paolo; Pisoni, Serena; Bacalini, Maria Giulia; Calzari, Luciano; Mari, Daniela; Vitale, Giovanni; Franceschi, Claudio; Di Blasio, Anna Maria

    2015-08-01

    In this study we applied a new analytical strategy to investigate the relations between stochastic epigenetic mutations (SEMs) and aging. We analysed methylation levels through the Infinium HumanMethylation27 and HumanMethylation450 BeadChips in a population of 178 subjects ranging from 3 to 106 years of age. For each CpG probe, epimutated subjects were identified as the extreme outliers whose methylation level fell more than three interquartile ranges below the first quartile (Q1 - 3 x IQR) or above the third quartile (Q3 + 3 x IQR). We demonstrated that the number of SEMs was low in childhood and increased exponentially during aging. Using the HUMARA method, skewing of X chromosome inactivation (XCI) was evaluated in heterozygous women. Multivariate analysis indicated a significant correlation between log(SEMs) and degree of XCI skewing after adjustment for age (β = 0.41; confidence interval: 0.14, 0.68; p-value = 0.0053). The PATH analysis tested the complete model containing the variables skewing of XCI, age, log(SEMs) and overall CpG methylation. After adjusting for the number of epimutations, we failed to confirm the widely reported correlation between skewing of XCI and aging. This evidence suggests that the known correlation between XCI skewing and aging may not be a direct association but one mediated by the number of SEMs.
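
    The outlier rule translates directly into code; a sketch assuming a probes-by-subjects matrix of methylation levels:

    ```python
    import numpy as np

    def count_sems(meth):
        """Count stochastic epigenetic mutations per subject: for each probe
        (row), flag subjects lying more than 3 IQRs outside the quartiles."""
        q1 = np.percentile(meth, 25, axis=1, keepdims=True)
        q3 = np.percentile(meth, 75, axis=1, keepdims=True)
        iqr = q3 - q1
        outlier = (meth < q1 - 3 * iqr) | (meth > q3 + 3 * iqr)
        return outlier.sum(axis=0)   # the study then models log(SEM count)
    ```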

  1. Estimating generalized skew of the log-Pearson Type III distribution for annual peak floods in Illinois

    USGS Publications Warehouse

    Oberg, Kevin A.; Mades, Dean M.

    1987-01-01

    Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)

  2. A novel gamma-fitting statistical method for anti-drug antibody assays to establish assay cut points for data with non-normal distribution.

    PubMed

    Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena

    2010-01-31

    In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) than normal or log-normal methods and more precise (smaller standard errors of cut point estimators) than the nonparametric percentile method. Under a gamma regime, normal theory based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3% and absolute bias decreasing with the shape parameter. These results are consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent (confirmatory) assay. On the other hand, deflated false positive rates in the case of screening immunogenicity assays will not meet the minimum 5% false positive target proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
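
    A sketch of the gamma cut point alongside the normal and log-normal alternatives discussed above, on simulated drug-naive screening signals (the gamma parameters are illustrative):

    ```python
    import numpy as np
    from scipy import stats

    signals = stats.gamma.rvs(a=2.5, loc=0.05, scale=0.4, size=200,
                              random_state=np.random.default_rng(1))

    # 3-parameter gamma fit (shape, location, scale); the cut point targeting
    # a 5% false-positive rate is the fitted 95th percentile.
    a, loc, scale = stats.gamma.fit(signals)
    cut_gamma = stats.gamma.ppf(0.95, a, loc=loc, scale=scale)

    cut_normal = signals.mean() + 1.645 * signals.std(ddof=1)
    mu, sd = np.log(signals).mean(), np.log(signals).std(ddof=1)
    cut_lognormal = np.exp(mu + 1.645 * sd)
    print(cut_gamma, cut_normal, cut_lognormal)
    ```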

  3. Enhancing tumor apparent diffusion coefficient histogram skewness stratifies the postoperative survival in recurrent glioblastoma multiforme patients undergoing salvage surgery.

    PubMed

    Zolal, Amir; Juratli, Tareq A; Linn, Jennifer; Podlesek, Dino; Sitoci Ficici, Kerim Hakan; Kitzler, Hagen H; Schackert, Gabriele; Sobottka, Stephan B; Rieger, Bernhard; Krex, Dietmar

    2016-05-01

    Objective To determine the value of apparent diffusion coefficient (ADC) histogram parameters for the prediction of individual survival in patients undergoing surgery for recurrent glioblastoma (GBM) in a retrospective cohort study. Methods Thirty-one patients who underwent surgery for first recurrence of a known GBM between 2008 and 2012 were included. The following parameters were collected: age, sex, enhancing tumor size, mean ADC, median ADC, ADC skewness, ADC kurtosis and fifth percentile of the ADC histogram, initial progression-free survival (PFS), extent of second resection and further adjuvant treatment. The association of these parameters with survival and PFS after second surgery was analyzed using the log-rank test and Cox regression. Results Using the log-rank test, ADC histogram skewness of the enhancing tumor was significantly associated with both survival (p = 0.001) and PFS after second surgery (p = 0.005). Further parameters associated with prolonged survival after second surgery were: gross total resection at second surgery (p = 0.026), tumor size (p = 0.040) and third surgery (p = 0.003). In the multivariate Cox analysis, ADC histogram skewness was shown to be an independent prognostic factor for survival after second surgery. Conclusion ADC histogram skewness of the enhancing lesion, enhancing lesion size, third surgery, as well as gross total resection have been shown to be associated with survival following the second surgery. ADC histogram skewness was an independent prognostic factor for survival in the multivariate analysis.

  4. Gluten-containing grains skew gluten assessment in oats due to sample grind non-homogeneity.

    PubMed

    Fritz, Ronald D; Chen, Yumin; Contreras, Veronica

    2017-02-01

    Oats are easily contaminated with gluten-rich kernels of wheat, rye and barley. These contaminants act like gluten 'pills', shown here to skew gluten analysis results. Using the R-Biopharm R5 ELISA, we quantified gluten in gluten-free oatmeal servings from an in-market survey. For samples with a 5-20 ppm reading on a first test, replicate analyses provided results ranging from <5 ppm to >160 ppm. This suggests sample grinding may inadequately disperse gluten to allow a single accurate gluten assessment. To ascertain this, and to characterize the distribution of 0.25 g gluten test results for kernel-contaminated oats, twelve 50 g samples of pure oats, each spiked with a wheat kernel, were analysed; the 0.25 g test results followed log-normal-like distributions. With this, we estimate probabilities of mis-assessment for a 'single measure/sample' relative to the <20 ppm regulatory threshold, and derive an equation relating the probability of mis-assessment to sample average gluten content. Copyright © 2016 Elsevier Ltd. All rights reserved.
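
    A sketch of the kind of mis-assessment probability the paper derives, assuming log-normal 0.25 g test results; sigma_log is an assumed dispersion, not the paper's fitted value:

    ```python
    import numpy as np
    from scipy.stats import norm

    def p_single_test_below(threshold, mean_ppm, sigma_log):
        """P(one 0.25 g test reads < threshold) for log-normal test results
        with the given arithmetic mean and log-scale SD."""
        mu = np.log(mean_ppm) - 0.5 * sigma_log**2   # matches the arithmetic mean
        return norm.cdf((np.log(threshold) - mu) / sigma_log)

    # A lot averaging 40 ppm can still pass a single <20 ppm test fairly often:
    print(p_single_test_below(20, mean_ppm=40, sigma_log=1.2))
    ```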

  5. A model for the flux-r.m.s. correlation in blazar variability or the minijets-in-a-jet statistical model

    NASA Astrophysics Data System (ADS)

    Biteau, J.; Giebels, B.

    2012-12-01

    Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that the variability originates in the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favours an origin of the variability in multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
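
    The kinematic mechanism is easy to illustrate by simulation: zones with random orientations are boosted by δ = 1/[Γ(1 − β cos θ)], with intrinsic flux set to 1 and a δ⁴ boost assumed here:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    gamma_bulk = 10.0                        # assumed bulk Lorentz factor
    beta = np.sqrt(1 - 1 / gamma_bulk**2)

    def total_flux(n_zones, n_samples=100_000):
        """Sum of Doppler-boosted fluxes of randomly oriented zones."""
        cos_theta = rng.uniform(-1, 1, size=(n_samples, n_zones))
        delta = 1.0 / (gamma_bulk * (1 - beta * cos_theta))
        return (delta**4).sum(axis=1)

    # One zone gives a power-law (Pareto-like) flux distribution; summing many
    # zones yields a strongly skewed total flux close to a log-normal.
    for n in (1, 10, 100):
        f = total_flux(n)
        print(n, np.mean(np.log(f)), np.std(np.log(f)))
    ```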

  6. Methane, Black Carbon, and Ethane Emissions from Natural Gas Flares in the Bakken Shale, North Dakota.

    PubMed

    Gvakharia, Alexander; Kort, Eric A; Brandt, Adam; Peischl, Jeff; Ryerson, Thomas B; Schwarz, Joshua P; Smith, Mackenzie L; Sweeney, Colm

    2017-05-02

    Incomplete combustion during flaring can lead to production of black carbon (BC) and loss of methane and other pollutants to the atmosphere, impacting climate and air quality. However, few studies have measured flare efficiency in a real-world setting. We use airborne data of plume samples from 37 unique flares in the Bakken region of North Dakota in May 2014 to calculate emission factors for BC, methane and ethane, and combustion efficiency for methane and ethane. We find no clear relationship between emission factors and aircraft-level wind speed or between methane and BC emission factors. Observed median combustion efficiencies for methane and ethane are close to expected values for typical flares according to the US EPA (98%). However, we find that the efficiency distribution is skewed, exhibiting log-normal behavior. This suggests incomplete combustion from flares contributes almost one fifth of the total field emissions of methane and ethane measured in the Bakken shale, more than double the expected value if 98% efficiency were representative. BC emission factors also have a skewed distribution, but we find lower emission values than previous studies. The direct observation for the first time of a heavy-tailed emissions distribution from flares suggests the need to consider skewed distributions when assessing flare impacts globally.

  7. The Use of the Skew T, Log P Diagram in Analysis and Forecasting. Revision

    DTIC Science & Technology

    1990-03-01

    A revised 28 x 30 inch version of the Skew T, Log P diagram has been produced, with additions that further enhance its value, accompanied by a detailed description of the diagram and its use in analysis and forecasting.

  8. Nonparametric Bayesian models through probit stick-breaking processes

    PubMed Central

    Rodríguez, Abel; Dunson, David B.

    2013-01-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology. PMID:24358072
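
    A sketch of drawing truncated probit stick-breaking weights, w_k = Φ(z_k) Π_{j<k}[1 − Φ(z_j)] with z_k ~ N(μ, σ²); with μ = 0 and σ = 1 the breaks are Uniform(0,1), the Dirichlet-process (α = 1) special case:

    ```python
    import numpy as np
    from scipy.stats import norm

    def probit_stick_breaking_weights(mu, sigma, n_atoms, rng):
        z = rng.normal(mu, sigma, size=n_atoms)
        v = norm.cdf(z)                                    # probit transform
        # w_k = v_k * product of remaining stick lengths (1 - v_j), j < k
        return v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))

    w = probit_stick_breaking_weights(0.0, 1.0, 25, np.random.default_rng(42))
    print(w[:5], w.sum())   # weights decay; the truncated sum is close to 1
    ```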

  10. Modeling absolute differences in life expectancy with a censored skew-normal regression approach

    PubMed Central

    Clough-Gorr, Kerri; Zwahlen, Marcel

    2015-01-01

    Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. With this approach we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
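
    A sketch of the censored skew-normal likelihood (right censoring only; the paper additionally handles left truncation, omitted here):

    ```python
    import numpy as np
    from scipy import optimize, stats

    def fit_censored_skewnormal(t, delta, X):
        """ML fit for survival time t, censoring indicator delta
        (1 = observed death, 0 = censored), covariate matrix X."""
        X = np.column_stack([np.ones_like(t), X])

        def nll(par):
            beta, log_scale, shape = par[:-2], par[-2], par[-1]
            loc, scale = X @ beta, np.exp(log_scale)
            # Observed times contribute the density; censored times the survival.
            ll = np.where(delta == 1,
                          stats.skewnorm.logpdf(t, shape, loc, scale),
                          stats.skewnorm.logsf(t, shape, loc, scale))
            return -ll.sum()

        p0 = np.r_[t.mean(), np.zeros(X.shape[1] - 1), np.log(t.std()), 0.0]
        return optimize.minimize(nll, p0, method="Nelder-Mead")
    ```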

  11. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviation from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, analysed with the skew-normal method. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
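
    The distributional estimate of a proportion uses only fitted parameters, never counts; a sketch covering both the normal and the skew-normal variants:

    ```python
    import numpy as np
    from scipy.stats import norm, skewnorm

    def distributional_proportion(x, cutoff, skewed=False):
        """Estimate P(X < cutoff) from distribution parameters."""
        if not skewed:
            return norm.cdf((cutoff - x.mean()) / x.std(ddof=1))
        shape, loc, scale = skewnorm.fit(x)      # skew-normal method
        return skewnorm.cdf(cutoff, shape, loc, scale)

    # e.g. the proportion of gestational ages below 37 weeks in each group;
    # the difference of proportions inherits the precision of the means.
    ```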

  12. Skewness, long-time memory, and non-stationarity: Application to leverage effect in financial time series

    NASA Astrophysics Data System (ADS)

    Roman, H. E.; Porto, M.; Dose, C.

    2008-10-01

    We analyze daily log-returns data for a set of 1200 stocks, taken from US stock markets, over a period of 2481 trading days (January 1996-November 2005). We estimate the degree of non-stationarity in daily market volatility employing a polynomial fit, used as a detrending function. We find that the autocorrelation function of absolute detrended log-returns departs strongly from the corresponding autocorrelation function of the original data, while the observed leverage effect depends only weakly on trends. Such an effect is shown to occur when both skewness and long-time memory are simultaneously present. A fractional derivative random walk model is discussed, yielding quantitative agreement with the empirical results.

  13. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using Higher Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, the estimation of these parameters is confronted with small sample sizes, which induce errors in the estimated HOS parameters and restrain real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.

  14. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5 Ēd, where Ēd is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.

  15. Diameter distribution in a Brazilian tropical dry forest domain: predictions for the stand and species.

    PubMed

    Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted to the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fitting was obtained with the log-normal function.
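
    A sketch of the fit-and-rank step with scipy, assuming "Burr" means the Burr XII family (scipy's burr12) and fixing the location at zero:

    ```python
    from scipy import stats

    def rank_diameter_models(diameters):
        candidates = {"log-normal": stats.lognorm, "gamma": stats.gamma,
                      "Weibull 2P": stats.weibull_min, "Burr": stats.burr12}
        aic = {}
        for name, dist in candidates.items():
            params = dist.fit(diameters, floc=0)   # location fixed at 0
            k = len(params) - 1                    # free parameters
            aic[name] = 2 * k - 2 * dist.logpdf(diameters, *params).sum()
        return dict(sorted(aic.items(), key=lambda kv: kv[1]))  # smaller is better
    ```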

  16. Acoustic evaluation of wood quality in standing trees. Part I, Acoustic wave behavior

    Treesearch

    Xiping Wang; Robert J. Ross; Peter Carter

    2007-01-01

    Acoustic wave velocities in standing trees or live softwood species were measured by the time-of-flight (TOF) method. Tree velocities were compared with acoustic velocities measured in corresponding butt logs through a resonance acoustic method. The experimental data showed a skewed relationship between tree and log acoustic measurements. For most trees tested,...

  17. Defining the cause of skewed X-chromosome inactivation in X-linked mental retardation by use of a mouse model.

    PubMed

    Muers, Mary R; Sharpe, Jacqueline A; Garrick, David; Sloane-Stanley, Jacqueline; Nolan, Patrick M; Hacker, Terry; Wood, William G; Higgs, Douglas R; Gibbons, Richard J

    2007-06-01

    Extreme skewing of X-chromosome inactivation (XCI) is rare in the normal female population but is observed frequently in carriers of some X-linked mutations. Recently, it has been shown that various forms of X-linked mental retardation (XLMR) have a strong association with skewed XCI in female carriers, but the mechanisms underlying this skewing are unknown. ATR-X syndrome, caused by mutations in a ubiquitously expressed, chromatin-associated protein, provides a clear example of XLMR in which phenotypically normal female carriers virtually all have highly skewed XCI biased against the X chromosome that harbors the mutant allele. Here, we have used a mouse model to understand the processes causing skewed XCI. In female mice heterozygous for a null Atrx allele, we found that XCI is balanced early in embryogenesis but becomes skewed over the course of development, because of selection favoring cells expressing the wild-type Atrx allele. Unexpectedly, selection does not appear to be the result of general cellular-viability defects in Atrx-deficient cells, since it is restricted to specific stages of development and is not ongoing throughout the life of the animal. Instead, there is evidence that selection results from independent tissue-specific effects. This illustrates an important mechanism by which skewed XCI may occur in carriers of XLMR and provides insight into the normal role of ATRX in regulating cell fate.

  18. Differential models of twin correlations in skew for body-mass index (BMI).

    PubMed

    Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric

    2018-01-01

    Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.

  19. A novel, efficient method for estimating the prevalence of acute malnutrition in resource-constrained and crisis-affected settings: A simulation study.

    PubMed

    Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer

    2017-01-01

    The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT methods, which convert parameters (the mean and standard deviation (SD)) of a normally distributed variable to a cumulative probability below any cut-off, to estimate acute malnutrition in children under five using Mid-Upper Arm Circumference (MUAC). We assessed the performance of: PROBIT Method I, with mean MUAC from the survey sample and MUAC SD from a database of previous surveys; and PROBIT Method II, with the mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for each of eight sample sizes. Overall, the methods were tested on 681 600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 performed better than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT methods have a clear advantage in the assessment of acute malnutrition prevalence based on MUAC, compared to the classic method. Their use would require much lower sample sizes, thus enabling great time and resource savings and permitting timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
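
    The PROBIT calculation itself is a single line; a sketch with an illustrative 125 mm MUAC cutoff (the paper's exact case definitions may differ):

    ```python
    import numpy as np
    from scipy.stats import norm

    def probit_prevalence(muac_mm, cutoff=125.0, sd_external=None):
        """PROBIT estimate of prevalence: the normal probability below the
        cutoff. Method I plugs in an external SD from past surveys;
        Method II uses the sample SD."""
        sd = sd_external if sd_external is not None else np.std(muac_mm, ddof=1)
        return norm.cdf((cutoff - np.mean(muac_mm)) / sd)

    # Only the mean (and possibly the SD) is estimated from the sample, which
    # is why samples as small as n = 50 can still give usable estimates.
    ```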

  20. Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments

    USGS Publications Warehouse

    Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.

    2004-01-01

    The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimators at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include the use of a regional skewness estimator and its precision, to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample, with and without regional skew, with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
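
    A sketch of the weighted-skew LP3 quantile in the spirit of Bulletin 17B; EMA's treatment of censored low outliers and historical data is omitted, and the MSE weights are user-supplied:

    ```python
    import numpy as np
    from scipy.stats import pearson3, skew

    def lp3_quantile(peaks, prob, g_regional, mse_regional, mse_station):
        x = np.log10(peaks)
        g_station = skew(x, bias=False)
        g_w = (mse_regional * g_station + mse_station * g_regional) \
              / (mse_regional + mse_station)          # MSE-weighted skew
        k = pearson3.ppf(prob, g_w)                   # frequency factor
        return 10 ** (x.mean() + k * x.std(ddof=1))

    # lp3_quantile(annual_peaks, 0.99, g_regional=-0.1,
    #              mse_regional=0.3, mse_station=0.5)  -> 100-year flood
    ```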

  1. Generalised Extreme Value Distributions Provide a Natural Hypothesis for the Shape of Seed Mass Distributions

    PubMed Central

    2015-01-01

    Among co-occurring species, values for functionally important plant traits span orders of magnitude, are uni-modal, and generally positively skewed. Such data are usually log-transformed “for normality” but no convincing mechanistic explanation for a log-normal expectation exists. Here we propose a hypothesis for the distribution of seed masses based on generalised extreme value distributions (GEVs), a class of probability distributions used in climatology to characterise the impact of event magnitudes and frequencies; events that impose strong directional selection on biological traits. In tests involving datasets from 34 locations across the globe, GEVs described log10 seed mass distributions as well or better than conventional normalising statistics in 79% of cases, and revealed a systematic tendency for an overabundance of small seed sizes associated with low latitudes. GEVs characterise disturbance events experienced in a location to which individual species’ life histories could respond, providing a natural, biological explanation for trait expression that is lacking from all previous hypotheses attempting to describe trait distributions in multispecies assemblages. We suggest that GEVs could provide a mechanistic explanation for plant trait distributions and potentially link biology and climatology under a single paradigm. PMID:25830773
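
    The comparison reduces to maximum-likelihood fits of both families followed by AIC; a sketch with scipy:

    ```python
    import numpy as np
    from scipy import stats

    def compare_gev_normal(seed_mass):
        x = np.log10(seed_mass)
        aic = {}
        for name, dist in {"GEV": stats.genextreme, "normal": stats.norm}.items():
            params = dist.fit(x)
            aic[name] = 2 * len(params) - 2 * dist.logpdf(x, *params).sum()
        return aic   # smaller is better
    ```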

  2. No evidence that skewing of X chromosome inactivation patterns is transmitted to offspring in humans

    PubMed Central

    Bolduc, Véronique; Chagnon, Pierre; Provost, Sylvie; Dubé, Marie-Pierre; Belisle, Claude; Gingras, Marianne; Mollica, Luigina; Busque, Lambert

    2007-01-01

    Skewing of X chromosome inactivation (XCI) can occur in normal females and increases in tissues with age. The mechanisms underlying skewing in normal females, however, remain controversial. To better understand the phenomenon of XCI in nondisease states, we evaluated XCI patterns in epithelial and hematopoietic cells of over 500 healthy female mother-neonate pairs. The incidence of skewing observed in mothers was twice that observed in neonates, and in both cohorts, the incidence of XCI was lower in epithelial cells than hematopoietic cells. These results suggest that XCI incidence varies by tissue type and that age-dependent mechanisms can influence skewing in both epithelial and hematopoietic cells. In both cohorts, a correlation was identified in the direction of skewing in epithelial and hematopoietic cells, suggesting common underlying skewing mechanisms across tissues. However, there was no correlation between the XCI patterns of mothers and their respective neonates, and skewed mothers gave birth to skewed neonates at the same frequency as nonskewed mothers. Taken together, our data suggest that in humans, the XCI pattern observed at birth does not reflect a single heritable genetic locus, but rather corresponds to a complex trait determined, at least in part, by selection biases occurring after XCI. PMID:18097474

  3. Flow-covariate prediction of stream pesticide concentrations.

    PubMed

    Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin

    2018-01-01

    Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC.

  4. A simple method for optimising transformation of non-parametric data: an illustration by reference to cortisol assays.

    PubMed

    Clark, James E; Osborne, Jason W; Gallagher, Peter; Watson, Stuart

    2016-07-01

    Neuroendocrine data are typically positively skewed and rarely conform to the expectations of a Gaussian distribution. This can be a problem when attempting to analyse results within the framework of the general linear model, which relies on the assumption that residuals in the data are normally distributed. One frequently used method for handling violations of this assumption is to transform variables to bring residuals into closer alignment with assumptions (as residuals are not directly manipulated). This is often attempted through ad hoc traditional transformations such as square root, log and inverse. However, Box and Cox (Box & Cox, 1964) observed that these are all special cases of power transformations and proposed a more flexible method of transformation for researchers to optimise alignment with assumptions. The goal of this paper is to demonstrate the benefits of the infinitely flexible Box-Cox transformation on neuroendocrine data using syntax in SPSS. When applied to positively skewed data typical of neuroendocrine data, the majority (~2/3) of cases were brought into strict alignment with a Gaussian distribution (i.e. a non-significant Shapiro-Wilk test). Those unable to meet this challenge showed substantial improvement in distributional properties. The biggest challenge was distributions with a high ratio of kurtosis to skewness. We discuss how these cases might be handled, and we highlight some of the broader issues associated with transformation. Copyright © 2016 John Wiley & Sons, Ltd.
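
    The equivalent of the paper's SPSS syntax takes a few lines in Python, with scipy choosing the power λ by maximizing the Box-Cox log-likelihood (the cortisol values here are simulated):

    ```python
    import numpy as np
    from scipy import stats

    cortisol = np.random.default_rng(7).lognormal(mean=2.0, sigma=0.6, size=120)

    transformed, lam = stats.boxcox(cortisol)   # lambda chosen by maximum likelihood
    w, p = stats.shapiro(transformed)
    print(f"lambda = {lam:.2f}, Shapiro-Wilk p = {p:.3f}")  # p > 0.05: no detectable departure
    ```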

  5. Phylogenetic analyses suggest that diversification and body size evolution are independent in insects.

    PubMed

    Rainford, James L; Hofreiter, Michael; Mayhew, Peter J

    2016-01-08

    Skewed body size distributions and the high relative richness of small-bodied taxa are a fundamental property of a wide range of animal clades. The evolutionary processes responsible for generating these distributions are well described in vertebrate model systems but have yet to be explored in detail for other major terrestrial clades. In this study, we explore the macro-evolutionary patterns of body size variation across families of Hexapoda (insects and their close relatives), using recent advances in phylogenetic understanding, with an aim to investigate the link between size and diversity within this ancient and highly diverse lineage. The maximum, minimum and mean-log body lengths of hexapod families are all approximately log-normally distributed, consistent with previous studies at lower taxonomic levels, and contrasting with skewed distributions typical of vertebrate groups. After taking phylogeny and within-tip variation into account, we find no evidence for a negative relationship between diversification rate and body size, suggesting decoupling of the forces controlling these two traits. Likelihood-based modeling of the log-mean body size identifies distinct processes operating within Holometabola and Diptera compared with other hexapod groups, consistent with accelerating rates of size evolution within these clades, while as a whole, hexapod body size evolution is found to be dominated by neutral processes including significant phylogenetic conservatism. Based on our findings we suggest that the use of models derived from well-studied but atypical clades, such as vertebrates may lead to misleading conclusions when applied to other major terrestrial lineages. Our results indicate that within hexapods, and within the limits of current systematic and phylogenetic knowledge, insect diversification is generally unfettered by size-biased macro-evolutionary processes, and that these processes over large timescales tend to converge on apparently neutral evolutionary processes. We also identify limitations on available data within the clade and modeling approaches for the resolution of trees of higher taxa, the resolution of which may collectively enhance our understanding of this key component of terrestrial ecosystems.

  6. Bayesian WLS/GLS regression for regional skewness analysis for regions with large crest stage gage networks

    USGS Publications Warehouse

    Veilleux, Andrea G.; Stedinger, Jery R.; Eash, David A.

    2012-01-01

    This paper summarizes methodological advances in regional log-space skewness analyses that support flood-frequency analysis with the log Pearson Type III (LP3) distribution. A Bayesian Weighted Least Squares/Generalized Least Squares (B-WLS/B-GLS) methodology that relates observed skewness coefficient estimators to basin characteristics in conjunction with diagnostic statistics represents an extension of the previously developed B-GLS methodology. B-WLS/B-GLS has been shown to be effective in two California studies. B-WLS/B-GLS uses B-WLS to generate stable estimators of model parameters and B-GLS to estimate the precision of those B-WLS regression parameters, as well as the precision of the model. The study described here employs this methodology to develop a regional skewness model for the State of Iowa. To provide cost-effective peak-flow data for smaller drainage basins in Iowa, the U.S. Geological Survey operates a large network of crest stage gages (CSGs) that only record flow values above an identified recording threshold (thus producing a censored data record). CSGs are different from continuous-record gages, which record almost all flow values and have been used in previous B-GLS and B-WLS/B-GLS regional skewness studies. The complexity of analyzing a large CSG network is addressed by using the B-WLS/B-GLS framework along with the Expected Moments Algorithm (EMA). Because EMA allows for the censoring of low outliers, as well as the use of estimated interval discharges for missing, censored, and historic data, it complicates the calculations of effective record length (and effective concurrent record length) used to describe the precision of sample estimators, because the peak discharges are no longer solely represented by single values. Thus, new record length calculations were developed. The regional skewness analysis for the State of Iowa illustrates the value of the new B-WLS/B-GLS methodology with these new extensions.

  7. Universal statistics of selected values

    NASA Astrophysics Data System (ADS)

    Smerlak, Matteo; Youssef, Ahmed

    2017-03-01

    Selection, the tendency of some traits to become more frequent than others under the influence of some (natural or artificial) agency, is a key component of Darwinian evolution and countless other natural and social phenomena. Yet a general theory of selection, analogous to the Fisher-Tippett-Gnedenko theory of extreme events, is lacking. Here we introduce a probabilistic definition of selection and show that selected values are attracted to a universal family of limiting distributions which generalize the log-normal distribution. The universality classes and scaling exponents are determined by the tail thickness of the random variable under selection. Our results provide a possible explanation for skewed distributions observed in diverse contexts where selection plays a key role, from molecular biology to agriculture and sport.

  8. Geometric mean IELT and premature ejaculation: appropriate statistics to avoid overestimation of treatment efficacy.

    PubMed

    Waldinger, Marcel D; Zwinderman, Aeilko H; Olivier, Berend; Schweitzer, Dave H

    2008-02-01

    The intravaginal ejaculation latency time (IELT) behaves in a skewed manner and needs appropriate statistics for correct interpretation of treatment results. To explain the appropriate use of geometric mean IELT values and the fold increase of the geometric mean IELT given the positively skewed IELT distribution. Theoretical arguments are linked to the outcomes of several selective serotonin reuptake inhibitor and modern antidepressant studies. Geometric mean IELT and fold increase of geometric mean IELT. Log-transforming each separate IELT measurement of each individual man is the basis for the calculation of the geometric mean IELT. A drug-induced positively skewed IELT distribution necessitates the calculation of the geometric mean IELTs at baseline and during drug treatment. In a positively skewed IELT distribution, the use of the "arithmetic" mean IELT risks an overestimation of the drug-induced ejaculation delay, as the arithmetic mean IELT is always higher than the geometric mean IELT. Strong ejaculation-delaying drugs give rise to a strongly positively skewed IELT distribution, whereas weak ejaculation-delaying drugs give rise to (much) less skewed IELT distributions. Ejaculation delay is expressed as the fold increase of the geometric mean IELT. Drug-induced ejaculatory performance discloses a positively skewed IELT distribution, requiring the use of the geometric mean IELT and the fold increase of the geometric mean IELT.
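
    The recommended summaries are straightforward to compute; a minimal sketch:

    ```python
    import numpy as np

    def geometric_mean(ielt):
        """exp of the mean of the log-transformed measurements."""
        return np.exp(np.mean(np.log(ielt)))

    def fold_increase(ielt_baseline, ielt_treatment):
        """Treatment effect as the ratio of geometric mean IELTs."""
        return geometric_mean(ielt_treatment) / geometric_mean(ielt_baseline)

    # With right-skewed IELTs the arithmetic mean exceeds the geometric mean,
    # so arithmetic-mean fold increases overstate the ejaculation delay.
    ```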

  9. Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.

    PubMed

    Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J

    2017-06-01

    Continuous outcomes with a preponderance of zero values are ubiquitous in data arising from biomedical studies, for example studies of addictive disorders. This is known to lead to violations of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero values are simulated from Weibull or similar distributions such as the gamma or truncated Gaussian.
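
    A covariate-free sketch of the two-part idea (a point mass at zero plus a Weibull for the positives); the paper's marginalized Weibull mixture regression is considerably more general:

    ```python
    import numpy as np
    from scipy import stats

    def two_part_weibull(y):
        y = np.asarray(y)
        p_zero = np.mean(y == 0)
        c, _, scale = stats.weibull_min.fit(y[y > 0], floc=0)   # shape, scale
        marginal_mean = (1 - p_zero) * stats.weibull_min.mean(c, scale=scale)
        return p_zero, c, scale, marginal_mean   # marginal mean of the mixture
    ```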

  10. Nonpoint Source Solute Transport Normal to Aquifer Bedding in Heterogeneous, Markov Chain Random Fields

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Harter, T.; Sivakumar, B.

    2005-12-01

    Facies-based geostatistical models have become important tools for the stochastic analysis of flow and transport processes in heterogeneous aquifers. However, little is known about the dependency of these processes on the parameters of facies-based geostatistical models. This study examines nonpoint source solute transport normal to the major bedding plane in the presence of interconnected high-conductivity (coarse-textured) facies in the aquifer medium, and the dependence of the transport behavior upon the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute travel time probability distribution functions (pdfs) for solute flux from the water table to the bottom boundary (production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models with horizontal to vertical facies mean length anisotropy ratios, ek, from 25:1 to 300:1, and with a wide range of facies volume proportions (e.g., from 5% to 95% coarse-textured facies). Predictions of travel time pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer, the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and - to a lesser degree - the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, travel time pdfs are not log-normally distributed as is often assumed. Also, macrodispersive behavior (variance of the travel time pdf) was found not to be a unique function of the conductivity variance. The skewness of the travel time pdf varied from negatively skewed to strongly positively skewed within the parameter range examined. We also show that the Markov chain approach may give significantly different travel time pdfs when compared to the more commonly used Gaussian random field approach, even though the first and second order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport.

  11. The stochastic distribution of available coefficient of friction for human locomotion of five different floor surfaces.

    PubMed

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2014-05-01

    The maximum coefficient of friction that can be supported at the shoe and floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip could be estimated using a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions had a slightly better match with the normal and log-normal distributions than with the Weibull, with statistical significance in only three out of 15 cases. The results are far more complex than previously published findings, and different scenarios could emerge. Since the ACOF is compared with the RCOF for the estimate of slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons, based on their skewness and kurtosis values, without statistical significance. No representation could be found in three cases out of 15. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
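
    A sketch of the goodness-of-fit comparison the record describes: fit normal, log-normal and Weibull models to friction measurements and apply the Kolmogorov-Smirnov test. The data here are simulated stand-ins, and note that reusing fitted parameters in the KS test makes the p-values optimistic.

        # Fit three candidate distributions to (simulated) ACOF data and
        # compare Kolmogorov-Smirnov statistics; caveat: p-values are
        # optimistic because the parameters are estimated from the same data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        acof = stats.lognorm.rvs(s=0.25, scale=0.5, size=100, random_state=rng)

        for name, dist in [("normal", stats.norm),
                           ("log-normal", stats.lognorm),
                           ("Weibull", stats.weibull_min)]:
            params = dist.fit(acof)
            d, p = stats.kstest(acof, dist.cdf, args=params)
            print(f"{name:>10s}: D={d:.3f}, p={p:.3f}")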

  12. Problems with Using the Normal Distribution – and Ways to Improve Quality and Efficiency of Data Analysis

    PubMed Central

    Limpert, Eckhard; Stahel, Werner A.

    2011-01-01

    Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/ (times-divide), and corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* x/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325

  13. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    PubMed

    Limpert, Eckhard; Stahel, Werner A

    2011-01-01

    The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/ (times-divide), and corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* x/ s*, which is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
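
    Both versions of this record advocate the x̄* x/ s* summary. A minimal numeric sketch of the computation, assuming a roughly log-normal sample of toy values:

        # Multiplicative summary: geometric mean x* and multiplicative
        # standard deviation s*; [x*/s*, x* times s*] covers ~68% of a
        # log-normal population, the analogue of mean +/- SD.
        import numpy as np

        x = np.array([1.2, 0.8, 2.5, 1.6, 4.1, 0.9, 1.3, 2.2, 3.0, 1.1])
        log_x = np.log(x)

        gmean = np.exp(log_x.mean())          # multiplicative (geometric) mean
        s_star = np.exp(log_x.std(ddof=1))    # multiplicative standard deviation

        print(f"x* = {gmean:.2f}, s* = {s_star:.2f}")
        print(f"68% range: [{gmean / s_star:.2f}, {gmean * s_star:.2f}]")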

  14. Defining surfaces for skewed, highly variable data

    USGS Publications Warehouse

    Helsel, D.R.; Ryker, S.J.

    2002-01-01

    Skewness of environmental data is often caused by more than simply a handful of outliers in an otherwise normal distribution. Statistical procedures for such datasets must be sufficiently robust to deal with distributions that are strongly non-normal, containing both a large proportion of outliers and a skewed main body of data. In the field of water quality, skewness is commonly associated with large variation over short distances. Spatial analysis of such data generally requires either considerable effort at modeling or the use of robust procedures not strongly affected by skewness and local variability. Using a skewed dataset of 675 nitrate measurements in ground water, commonly used methods for defining a surface (least-squares regression and kriging) are compared to a more robust method (loess). Three choices are critical in defining a surface: (i) is the surface to be a central mean or median surface? (ii) is either a well-fitting transformation or a robust and scale-independent measure of center used? (iii) does local spatial autocorrelation assist in or detract from addressing objectives? Published in 2002 by John Wiley & Sons, Ltd.

  15. Portfolio optimization with skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model becomes inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk-averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return is reduced. The objective of this study is to compare the portfolio compositions as well as performances of the mean-variance model and the mean-variance-skewness-kurtosis model using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis changes the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because it takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for investors of Malaysia in portfolio optimization.
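
    The higher moments that enter such a model can be illustrated directly on a portfolio return series. A sketch with toy returns and a hypothetical weight vector w; a polynomial goal programming model would trade these moments off against mean and variance:

        # Sample skewness and excess kurtosis of a portfolio's returns for a
        # given weight vector (toy fat-tailed returns, hypothetical weights).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(15)
        returns = rng.standard_t(df=5, size=(500, 3)) * 0.02   # 3 assets

        w = np.array([0.5, 0.3, 0.2])
        rp = returns @ w                                       # portfolio returns

        print(f"mean={rp.mean():.4f}, var={rp.var(ddof=1):.5f}, "
              f"skew={stats.skew(rp):+.3f}, ex.kurt={stats.kurtosis(rp):+.3f}")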

  16. Genomic-Enabled Prediction of Ordinal Data with Bayesian Logistic Ordinal Regression.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Burgueño, Juan; Eskridge, Kent

    2015-08-18

    Most genomic-enabled prediction models developed so far assume that the response variable is continuous and normally distributed. The exception is the probit model, developed for ordered categorical phenotypes. In statistical applications, because of the easy implementation of the Bayesian probit ordinal regression (BPOR) model, Bayesian logistic ordinal regression (BLOR) is rarely implemented in the context of genomic-enabled prediction [where sample size (n) is much smaller than the number of parameters (p)]. For this reason, in this paper we propose a BLOR model using the Pólya-Gamma data augmentation approach that produces a Gibbs sampler with full conditional distributions similar to those of the BPOR model, and with the advantage that the BPOR model is a particular case of the BLOR model. We evaluated the proposed model by using simulation and two real data sets. Results indicate that our BLOR model is a good alternative for analyzing ordinal data in the context of genomic-enabled prediction with the probit or logit link. Copyright © 2015 Montesinos-López et al.
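
    For context, the identity underlying Pólya-Gamma data augmentation (in the standard form of Polson, Scott and Windle, 2013) is what makes the logistic likelihood conditionally Gaussian and yields the Gibbs sampler mentioned above:

        % Polya-Gamma integral identity (Polson, Scott & Windle, 2013):
        \[
          \frac{(e^{\psi})^{a}}{(1 + e^{\psi})^{b}}
          \;=\; 2^{-b}\, e^{\kappa \psi}
          \int_{0}^{\infty} e^{-\omega \psi^{2}/2}\, p(\omega)\, d\omega,
          \qquad \kappa = a - \tfrac{b}{2},\quad \omega \sim \mathrm{PG}(b, 0).
        \]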

  17. [Data distribution and transformation in population based sampling survey of viral load in HIV positive men who have sex with men in China].

    PubMed

    Dou, Z; Chen, J; Jiang, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To understand the distribution of population viral load (PVL) data in HIV-infected men who have sex with men (MSM), to fit a distribution function, and to explore appropriate summary parameters for PVL. Methods: The detection limit of viral load (VL) was ≤ 50 copies/ml. Box-Cox transformation and normal distribution tests were used to describe the general distribution characteristics of the original and transformed PVL data, and a stable distribution function was then fitted and assessed with a goodness-of-fit test. Results: The original PVL data followed a skewed distribution with a coefficient of variation of 622.24%, and had a multimodal distribution after Box-Cox transformation with an optimal parameter (λ) of −0.11. The distribution of PVL data over the detection limit was skewed and heavy-tailed when transformed by Box-Cox with optimal λ = 0. By fitting the distribution function of the transformed data over the detection limit, it matched the stable distribution (SD) function (α = 1.70, β = −1.00, γ = 0.78, δ = 4.03). Conclusions: The original PVL data contained censored values below the detection limit, and the data over the detection limit had an abnormal distribution with a large degree of variation. When the proportion of censored data was large, it was inappropriate to replace the censored values with half the detection limit. The log-transformed data over the detection limit fitted the SD. The median (M) and inter-quartile range (IQR) of the log-transformed data can be used to describe the central tendency and dispersion of the data over the detection limit.
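
    A sketch of the transformation step described above: Box-Cox applied only to viral-load values above the detection limit, then median/IQR as summaries. The data are simulated; the record reports an optimal λ near 0 (a log transform) for values above the limit.

        # Box-Cox on values above a detection limit, with median/IQR summaries
        # of the transformed data (simulated stand-in for viral load data).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        vl = stats.lognorm.rvs(s=2.0, scale=500.0, size=500, random_state=rng)

        limit = 50.0                          # detection limit, copies/ml
        above = vl[vl > limit]                # censored values excluded
        transformed, lam = stats.boxcox(above)

        print(f"{(vl <= limit).mean():.1%} censored; optimal lambda = {lam:.2f}")
        print(f"median = {np.median(transformed):.2f}, "
              f"IQR = {stats.iqr(transformed):.2f}")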

  18. Application of survival analysis methodology to the quantitative analysis of LC-MS proteomics data.

    PubMed

    Tekwe, Carmen D; Carroll, Raymond J; Dabney, Alan R

    2012-08-01

    Protein abundance in quantitative proteomics is often based on observed spectral features derived from liquid chromatography mass spectrometry (LC-MS) or LC-MS/MS experiments. Peak intensities are largely non-normal in distribution. Furthermore, LC-MS-based proteomics data frequently have large proportions of missing peak intensities due to censoring mechanisms on low-abundance spectral features. Recognizing that the observed peak intensities detected with the LC-MS method are all positive, skewed and often left-censored, we propose using survival methodology to carry out differential expression analysis of proteins. Various standard statistical techniques, including non-parametric tests such as the Kolmogorov-Smirnov and Wilcoxon-Mann-Whitney rank sum tests, and the parametric survival and accelerated failure time (AFT) models with log-normal, log-logistic and Weibull distributions, were used to detect differentially expressed proteins. The statistical operating characteristics of each method are explored using both real and simulated datasets. Survival methods generally have greater statistical power than standard differential expression methods when the proportion of missing protein level data is 5% or more. In particular, the AFT models we consider consistently achieve greater statistical power than standard testing procedures, with the discrepancy widening as the proportion of missing data increases. The testing procedures discussed in this article can all be performed using readily available software such as R. The R codes are provided as supplemental materials. ctekwe@stat.tamu.edu.
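
    The shared idea behind the survival treatment of left-censored intensities is that censored observations contribute a CDF term to the likelihood rather than a density term. A Tobit-style sketch of that idea for a log-normal model; this is illustrative and not the paper's AFT implementation:

        # Left-censored log-normal MLE: observed points contribute log-pdf
        # terms, censored points contribute the log-CDF at the detection limit.
        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(3)
        log_true = rng.normal(loc=5.0, scale=1.0, size=300)
        limit = 4.0                               # log-scale detection limit
        observed = np.maximum(log_true, limit)
        censored = log_true < limit               # True where only "< limit" known

        def neg_loglik(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)             # keep sigma positive
            ll = stats.norm.logpdf(observed[~censored], mu, sigma).sum()
            ll += stats.norm.logcdf(limit, mu, sigma) * censored.sum()
            return -ll

        res = optimize.minimize(neg_loglik, x0=[observed.mean(), 0.0])
        mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
        print(f"mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}, "
              f"{censored.mean():.1%} censored")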

  19. Causal Mediation Analysis of Survival Outcome with Multiple Mediators.

    PubMed

    Huang, Yen-Tsung; Yang, Hwai-I

    2017-05-01

    Mediation analyses have been a popular approach to investigate the effect of an exposure on an outcome through a mediator. Mediation models with multiple mediators have been proposed for continuous and dichotomous outcomes. However, development of multimediator models for survival outcomes is still limited. We present methods for multimediator analyses using three survival models: Aalen additive hazard models, Cox proportional hazard models, and semiparametric probit models. Effects through mediators can be characterized by path-specific effects, for which definitions and identifiability assumptions are provided. We derive closed-form expressions for path-specific effects for the three models, which are intuitively interpreted using a causal diagram. Mediation analyses using Cox models under the rare-outcome assumption and Aalen additive hazard models consider effects on log hazard ratio and hazard difference, respectively; analyses using semiparametric probit models consider effects on difference in transformed survival time and survival probability. The three models were applied to a hepatitis study where we investigated effects of hepatitis C on liver cancer incidence mediated through baseline and/or follow-up hepatitis B viral load. The three methods show consistent results on respective effect scales, which suggest an adverse estimated effect of hepatitis C on liver cancer not mediated through hepatitis B, and a protective estimated effect mediated through the baseline (and possibly follow-up) of hepatitis B viral load. Causal mediation analyses of survival outcomes with multiple mediators are developed for additive hazard, proportional hazard, and probit models, with utility demonstrated in a hepatitis study.

  20. Economic values under inappropriate normal distribution assumptions.

    PubMed

    Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R

    2012-08-01

    The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal, and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between −4 and 4 standard deviations from the mean. To evaluate the impacts of skewness and of positive and negative excess kurtosis, the standard skew-normal, Pearson, and raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or to consider alternative methods that are less sensitive to non-normality.

  1. Combining Deterministic structures and stochastic heterogeneity for transport modeling

    NASA Astrophysics Data System (ADS)

    Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg

    2017-04-01

    Contaminant transport in highly heterogeneous aquifers is extremely challenging and a subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed plume shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network. This is in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information with a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity, instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests and can differ by orders of magnitude. Sub-scale heterogeneity is introduced within every block and can be modeled as bimodal or log-normally distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.

  2. SAHARA: A package of PC computer programs for estimating both log-hyperbolic grain-size parameters and standard moments

    NASA Astrophysics Data System (ADS)

    Christiansen, Christian; Hartmann, Daniel

    This paper documents a package of menu-driven POLYPASCAL87 computer programs for handling grouped observation data from both sieving (increment data) and settling tube procedures (cumulative data). The package is designed deliberately for use on IBM-compatible personal computers. Two of the programs solve the numerical problem of determining estimates of the four (main) parameters of the log-hyperbolic distribution and their derivatives. The package also contains a program for determining the mean, sorting, skewness, and kurtosis according to the standard moments. Moreover, the package contains procedures for smoothing and grouping of settling tube data. A graphic part of the package plots the data in a log-log plot together with the estimated log-hyperbolic curve, and all estimated parameters are listed along with the plot. Another graphic option is a plot of the log-hyperbolic shape triangle with the (χ,ζ) position of the sample.

  3. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    PubMed

    Currie, L A

    2001-07-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events in a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test, for very interesting reasons linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and by bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and by the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.
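
    The first test mentioned above is easy to reproduce in outline: for a Poisson counting process, inter-arrival times should be exponential. A simulated sketch (a real counter can fail this test for physical reasons, as the record notes; the fitted-scale KS p-value is approximate):

        # Simulate Poisson arrivals and check the exponential inter-arrival
        # hypothesis with a Kolmogorov-Smirnov test (illustrative rate).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        rate = 0.5                                  # counts per second
        arrivals = np.cumsum(rng.exponential(1 / rate, size=2000))

        inter = np.diff(arrivals)
        d, p = stats.kstest(inter, "expon", args=(0, inter.mean()))
        print(f"KS D = {d:.3f}, p = {p:.3f} (exponential inter-arrivals)")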

  4. Menarcheal age of girls from dysfunctional families.

    PubMed

    Toromanović, Alma; Tahirović, Husref

    2004-07-01

    The objective of the present study was to determine median age at menarche and the influence of familial instability on maturation. The sample included 7047 girls between the ages of 9 and 17 years from Tuzla Canton. The girls were divided into two groups. Group A (N=5230) comprised girls who lived in families free of strong traumatic events. Group B (N=1817) included girls whose family dysfunction exposed them to prolonged distress. Probit analysis was performed to estimate mean menarcheal age using the Probit procedure of the SAS package. The mean menarcheal age calculated by probit analysis for all the girls studied was 13.07 years. In girls from dysfunctional families a very clear shift toward earlier maturation was observed. The mean age at menarche for group B was 13.0 years, which was significantly lower than that for group A, 13.11 years (t=2.92, P<0.01). The results surveyed here lead to the conclusion that girls from dysfunctional families mature not later but even earlier than girls from normal families. This supports the hypothesis that stressful childhood life events accelerate maturation of girls.
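
    A sketch of probit estimation of median menarcheal age from status-quo data of this kind: regress menarche status (0/1) on age with a probit link; the age at which the fitted probability crosses 0.5 is -intercept/slope. Data and parameter values below are simulated, not the study's.

        # Probit estimate of median age at menarche from simulated
        # status-quo data (reached menarche yes/no at the age of interview).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        age = rng.uniform(9, 17, size=2000)
        true_median, true_sd = 13.1, 1.1
        menarche = (rng.normal(true_median, true_sd, size=2000) < age).astype(int)

        X = sm.add_constant(age)
        fit = sm.Probit(menarche, X).fit(disp=0)
        b0, b1 = fit.params
        print(f"estimated median age at menarche: {-b0 / b1:.2f} years")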

  5. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.

  6. Flexible Link Functions in Nonparametric Binary Regression with Gaussian Process Priors

    PubMed Central

    Li, Dan; Lin, Lizhen; Dey, Dipak K.

    2015-01-01

    Summary In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction in the future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model allowing nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333
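
    The flexibility of the GEV link described in the two records above comes from its shape parameter. A sketch following Wang and Dey-type GEV links, where P(y = 1) = 1 - F_GEV(-eta; xi); this is an illustrative form, not the paper's code. As xi approaches 0 the link approaches the complementary log-log:

        # GEV link for binary regression: the shape parameter xi controls the
        # skewness of the response curve, unlike the fixed probit/logit links.
        import numpy as np

        def gev_link_prob(eta, xi):
            """P(y = 1) = 1 - F_GEV(-eta; xi), F the GEV cdf with shape xi."""
            z = 1.0 + xi * (-eta)
            z = np.maximum(z, 1e-12)      # support constraint 1 + xi*(-eta) > 0
            return 1.0 - np.exp(-z ** (-1.0 / xi))

        eta = np.linspace(-3, 3, 7)
        for xi in (-0.5, 0.1, 0.5):       # xi -> 0 recovers cloglog behavior
            print(f"xi={xi:+.1f}:", np.round(gev_link_prob(eta, xi), 3))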

  7. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    PubMed

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distributions are usually severely skewed by the presence of hot spots at contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three typical normal distribution transformation methods, termed the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation with normal-distribution-transformed data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. All three transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had the minimum root-mean-square error (1.17) and a mean error of 0.19, making it more accurate than the other two models. The areas with fewer sampling points and those with high levels of contamination showed the largest prediction standard errors in the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy with which remediation boundaries are determined.
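
    Of the three transformations compared above, the normal score transform is the simplest to sketch: replace each value by the standard normal quantile of its rank, forcing a marginal Gaussian shape before kriging (Johnson and Box-Cox are parametric alternatives). Simulated data below:

        # Normal score (rank-based inverse normal) transform of skewed data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(19)
        bbf = rng.lognormal(0.0, 1.5, size=200)        # skewed "hot spot" data

        ranks = stats.rankdata(bbf)
        nscore = stats.norm.ppf((ranks - 0.5) / len(bbf))

        print(f"raw skewness:     {stats.skew(bbf):+.2f}")
        print(f"n-score skewness: {stats.skew(nscore):+.2f}")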

  8. Diurnal patterns and associations among salivary cortisol, DHEA and alpha-amylase in older adults.

    PubMed

    Wilcox, Rand R; Granger, Douglas A; Szanton, Sarah; Clark, Florence

    2014-04-22

    Cortisol and dehydroepiandrosterone (DHEA) are considered to be valuable markers of the hypothalamus-pituitary-adrenal (HPA) axis, while salivary alpha-amylase (sAA) reflects the autonomic nervous system. Past studies have found certain diurnal patterns among these biomarkers, with some studies reporting results that differ from others. Also, some past studies have found an association among these three biomarkers while other studies have not. This study investigates these patterns and associations in older adults by taking advantage of modern statistical methods for dealing with non-normality, outliers and curvature. Basic characteristics of the data are reported as well, which are relevant to understanding the nature of any patterns and associations. Boxplots were used to check on the skewness and presence of outliers, including the impact of using simple transformations for dealing with non-normality. Diurnal patterns were investigated using recent advances aimed at comparing medians. When studying associations, the initial step was to check for curvature using a non-parametric regression estimator. Based on the resulting fit, a robust regression estimator was used that is designed to deal with skewed distributions and outliers. Boxplots indicated highly skewed distributions with outliers. Simple transformations (such as taking logs) did not deal with this issue in an effective manner. Consequently, diurnal patterns were investigated using medians and found to be consistent with some previous studies but not others. A positive association between awakening cortisol levels and DHEA was found when DHEA is relatively low; otherwise no association was found. The nature of the association between cortisol and DHEA was found to change during the course of the day. Upon awakening, cortisol was found to have no association with sAA when DHEA levels are relatively low, but otherwise there is a negative association. DHEA was found to have a positive association with sAA upon awakening. Shortly after awakening and for the remainder of the day, no association was found between DHEA and sAA ignoring cortisol. For DHEA and cortisol (taken as the independent variables) versus sAA (the dependent variable), again an association is found only upon awakening. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter λ = −1 leads to the best results.
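
    A sketch of the best-performing procedure reported above: sample Ex-Gaussian "reaction times", apply a Box-Cox transformation with λ = −1 (a reciprocal-type transform), and compare normality before and after. Parameter values are illustrative:

        # Ex-Gaussian sample = normal + exponential; Box-Cox with lambda = -1,
        # i.e. (x^-1 - 1)/(-1); normality compared via Shapiro-Wilk.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        rt = rng.normal(400, 40, 500) + rng.exponential(150, 500)   # ms

        transformed = stats.boxcox(rt, lmbda=-1)

        w_raw, p_raw = stats.shapiro(rt)
        w_tr, p_tr = stats.shapiro(transformed)
        print(f"raw:        W={w_raw:.3f}, p={p_raw:.4f}")
        print(f"lambda=-1:  W={w_tr:.3f}, p={p_tr:.4f}")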

  10. MIXOR: a computer program for mixed-effects ordinal regression analysis.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-03-01

    MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.

  11. New S control chart using skewness correction method for monitoring process dispersion of skewed distributions

    NASA Astrophysics Data System (ADS)

    Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha

    2017-11-01

    The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industry. Conventional control charts rely on the normality assumption, which is not always the case for industrial data. This paper proposes a new S control chart for monitoring process dispersion using a skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S); skewness correction R chart (SC-R); weighted variance R chart (WV-R); weighted variance S chart (WV-S); and standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detections is also made. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. A simulation study shows that the proposed SC-S control chart provides good in-control performance (Type I error) at almost all skewness levels and sample sizes, n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than all the existing control charts for monitoring process dispersion in terms of both Type I error and probability of detecting a shift.

  12. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

    ERIC Educational Resources Information Center

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-01-01

    Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…

  13. Internet Data Analysis for the Undergraduate Statistics Curriculum

    ERIC Educational Resources Information Center

    Sanchez, Juana; He, Yan

    2005-01-01

    Statistics textbooks for undergraduates have not caught up with the enormous amount of analysis of Internet data that is taking place these days. Case studies that use Web server log data or Internet network traffic data are rare in undergraduate Statistics education. And yet these data provide numerous examples of skewed and bimodal…

  14. A note on `Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions'

    NASA Astrophysics Data System (ADS)

    Kwong, Hok Shing; Nadarajah, Saralees

    2018-01-01

    Tarnopolski [Monthly Notices of the Royal Astronomical Society, 458 (2016) 2024-2031] analysed data sets on gamma-ray burst durations using skew distributions. He showed that the best fits are provided by two skew normal and three Gaussian distributions. Here, we suggest other distributions, including some that are heavy tailed. At least one of these distributions is shown to provide better fits than those considered in Tarnopolski. Five criteria are used to assess best fits.

  15. Increased skewing of X chromosome inactivation in Rett syndrome patients and their mothers.

    PubMed

    Knudsen, Gun Peggy S; Neilson, Tracey C S; Pedersen, June; Kerr, Alison; Schwartz, Marianne; Hulten, Maj; Bailey, Mark E S; Orstavik, Karen Helene

    2006-11-01

    Rett syndrome is a largely sporadic, X-linked neurological disorder with a characteristic phenotype, but which exhibits substantial phenotypic variability. This variability has been partly attributed to an effect of X chromosome inactivation (XCI). There have been conflicting reports regarding incidence of skewed X inactivation in Rett syndrome. In rare familial cases of Rett syndrome, favourably skewed X inactivation has been found in phenotypically normal carrier mothers. We have investigated the X inactivation pattern in DNA from blood and buccal cells of sporadic Rett patients (n=96) and their mothers (n=84). The mean degree of skewing in blood was higher in patients (70.7%) than controls (64.9%). Unexpectedly, the mothers of these patients also had a higher mean degree of skewing in blood (70.8%) than controls. In accordance with these findings, the frequency of skewed (XCI ≥ 80%) X inactivation in blood was also higher in both patients (25%) and mothers (30%) than in controls (11%). To test whether the Rett patients with skewed X inactivation were daughters of skewed mothers, 49 mother-daughter pairs were analysed. Of 14 patients with skewed X inactivation, only three had a mother with skewed X inactivation. Among patients, mildly affected cases were shown to be more skewed than more severely affected cases, and there was a trend towards preferential inactivation of the paternally inherited X chromosome in skewed cases. These findings, particularly the greater degree of X inactivation skewing in Rett syndrome patients, are of potential significance in the analysis of genotype-phenotype correlations in Rett syndrome.

  16. Statistical hypothesis tests of some micrometeorological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SethuRaman, S.; Tichler, J.

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g₁ has a good correlation with the chi-square values. Events with |g₁| < 0.21 were normal to begin with, and those with 0.21…

  17. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.

  18. Location tests for biomarker studies: a comparison using simulations for the two-sample case.

    PubMed

    Scheinhardt, M O; Ziegler, A

    2013-01-01

    Gene, protein, or metabolite expression levels are often non-normally distributed, heavy-tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k distribution family to systematically vary tail length and skewness, with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in the case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy-tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy-tailed distributions.
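
    The kind of Monte-Carlo check run above is straightforward to sketch. A simplified version estimating the empirical type I error of the t-test and the U-test under a skewed distribution with equal variability between groups (the paper itself uses the g-and-k family, not the log-normal):

        # Empirical type I error under H0 for the t-test and the U-test with
        # skewed (log-normal) data; nominal level alpha = 0.05.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(13)
        n, reps, alpha = 25, 2000, 0.05
        rej_t = rej_u = 0
        for _ in range(reps):
            a = rng.lognormal(0.0, 1.0, n)
            b = rng.lognormal(0.0, 1.0, n)      # same distribution: H0 true
            rej_t += stats.ttest_ind(a, b).pvalue < alpha
            rej_u += stats.mannwhitneyu(a, b).pvalue < alpha

        print(f"type I error, t-test: {rej_t / reps:.3f}")
        print(f"type I error, U-test: {rej_u / reps:.3f}")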

  19. Metric adjusted skew information

    PubMed Central

    Hansen, Frank

    2008-01-01

    We extend the concept of Wigner–Yanase–Dyson skew information to something we call “metric adjusted skew information” (of a state with respect to a conserved observable). This “skew information” is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova–Chentsov functions describing the possible quantum statistics is a Bauer simplex and determine its extreme points. We determine a particularly simple skew information, the “λ-skew information,” parametrized by a λ ∈ (0, 1], and show that the convex cone this family generates coincides with the set of all metric adjusted skew informations. PMID:18635683
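
    For reference, the Wigner-Yanase-Dyson quantity that the record above generalizes can be written in its standard form (p = 1/2 recovers the original Wigner-Yanase skew information):

        % Wigner-Yanase-Dyson skew information of a state rho with respect to
        % a conserved observable A:
        \[
          I_p(\rho, A) \;=\; -\tfrac{1}{2}\,
          \operatorname{Tr}\!\bigl([\rho^{p}, A]\,[\rho^{1-p}, A]\bigr),
          \qquad 0 < p < 1 .
        \]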

  20. Estimation of transformation parameters for microarray data.

    PubMed

    Durbin, Blythe; Rocke, David M

    2003-07-22

    Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
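
    The generalized-log family referenced above has the closed form glog(y) = ln(y + sqrt(y^2 + c)) in the Durbin/Huber/Munson formulations. A sketch; the constant c is chosen arbitrarily here, whereas in practice it would be estimated from the data (e.g., by the Box-Cox-style procedure the record describes):

        # glog behaves like ln(2y) for large y but remains defined and nearly
        # linear around zero, stabilizing variance for low intensities.
        import numpy as np

        def glog(y, c):
            return np.log(y + np.sqrt(y ** 2 + c))

        y = np.array([0.0, 5.0, 50.0, 500.0, 5000.0])
        for c in (1.0, 100.0):
            print(f"c={c:6.1f}:", np.round(glog(y, c), 3))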

  1. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter λ = −1 leads to the best results. PMID:25709588

  2. Adaptive Neural Mechanism for Listing’s Law Revealed in Patients with Skew Deviation Caused by Brainstem or Cerebellar Lesion

    PubMed Central

    Fesharaki, Maryam; Karagiannis, Peter; Tweed, Douglas; Sharpe, James A.; Wong, Agnes M. F.

    2016-01-01

    Purpose Skew deviation is a vertical strabismus caused by damage to the otolithic–ocular reflex pathway and is associated with abnormal ocular torsion. This study was conducted to determine whether patients with skew deviation show the normal pattern of three-dimensional eye control called Listing’s law, which specifies the eye’s torsional angle as a function of its horizontal and vertical position. Methods Ten patients with skew deviation caused by brain stem or cerebellar lesions and nine normal control subjects were studied. Patients with diplopia and neurologic symptoms less than 1 month in duration were designated as acute (n = 4) and those with longer duration were classified as chronic (n = 10). Serial recordings were made in the four patients with acute skew deviation. With the head immobile, subjects made saccades to a target that moved between straight ahead and eight eccentric positions, while wearing search coils. At each target position, fixation was maintained for 3 seconds before the next saccade. From the eye position data, the plane of best fit, referred to as Listing’s plane, was fitted. Violations of Listing’s law were quantified by computing the “thickness” of this plane, defined as the SD of the distances to the plane from the data points. Results Both the hypertropic and hypotropic eyes in patients with acute skew deviation violated Listing’s and Donders’ laws—that is, the eyes did not show one consistent angle of torsion in any given gaze direction, but rather an abnormally wide range of torsional angles. In contrast, each eye in patients with chronic skew deviation obeyed the laws. However, in chronic skew deviation, Listing’s planes in both eyes had abnormal orientations. Conclusions Patients with acute skew deviation violated Listing’s law, whereas those with chronic skew deviation obeyed it, indicating that despite brain lesions, neural adaptation can restore Listing’s law so that the neural linkage between horizontal, vertical, and torsional eye position remains intact. Violation of Listing’s and Donders’ laws during fixation arises primarily from torsional drifts, indicating that patients with acute skew deviation have unstable torsional gaze holding that is independent of their horizontal–vertical eye positions. PMID:18172094

  3. Analyzing coastal environments by means of functional data analysis

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.

    2017-07-01

    Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension-reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transform to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
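
    A sketch of the second vector approach described above: centered log-ratio (clr) transform of compositional grain-size data, followed by PCA on the scores. The compositions below are toy Dirichlet samples, and a small offset replaces zeros before taking logs:

        # clr transform of compositional PSD-like data, then PCA.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(21)
        psd = rng.dirichlet(alpha=[4, 8, 2, 1], size=50)   # 50 samples, 4 bins

        z = np.log(psd + 1e-9)
        clr = z - z.mean(axis=1, keepdims=True)            # centered log-ratio

        pca = PCA(n_components=2).fit(clr)
        print("explained variance ratios:",
              np.round(pca.explained_variance_ratio_, 3))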

  4. Inferring climate variability from skewed proxy records

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Tingley, M.

    2013-12-01

    Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of drawing conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted and compared to other proxy records. (2) a multiproxy reconstruction of temperature over the Common Era (Mann et al., 2009), where we find that about one third of the records display significant departures from normality. Accordingly, accounting for skewness in proxy predictors has a notable influence on both the reconstructed global mean and spatial patterns of temperature change. Inferring climate variability from skewed proxy records thus requires care, but can be done with relatively simple tools. References - Mann, M. E., Z. Zhang, S. Rutherford, R. S. Bradley, M. K. Hughes, D. Shindell, C. Ammann, G. Faluvegi, and F. Ni (2009), Global signatures and dynamical origins of the little ice age and medieval climate anomaly, Science, 326(5957), 1256-1260, doi:10.1126/science.1177303. - Moy, C., G. Seltzer, D. Rodbell, and D. Anderson (2002), Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch, Nature, 420(6912), 162-165.
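
    The idealized runoff-type proxy described above is easy to illustrate: when the proxy records climate only above a threshold, a symmetric climate signal produces a heavily skewed record. A sketch with purely illustrative parameter values:

        # Thresholded, nonlinear proxy of a normally distributed climate
        # variable: the proxy is strongly right-skewed even though the
        # underlying climate is symmetric.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(17)
        climate = rng.normal(0.0, 1.0, 5000)
        proxy = np.maximum(climate - 1.0, 0.0) ** 1.5   # threshold + nonlinearity

        print(f"climate skewness: {stats.skew(climate):+.2f}")
        print(f"proxy skewness:   {stats.skew(proxy):+.2f}")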

  5. A novel generalized normal distribution for human longevity and other negatively skewed data.

    PubMed

    Robertson, Henry T; Allison, David B

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by 1. An intuitively straightforward genesis; 2. Closed forms for the pdf, cdf, mode, quantile, and hazard functions; and 3. Accessibility to non-statisticians, based on its close relationship to the normal distribution.

  6. A Novel Generalized Normal Distribution for Human Longevity and other Negatively Skewed Data

    PubMed Central

    Robertson, Henry T.; Allison, David B.

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by 1. An intuitively straightforward genesis; 2. Closed forms for the pdf, cdf, mode, quantile, and hazard functions; and 3. Accessibility to non-statisticians, based on its close relationship to the normal distribution. PMID:22623974

  7. Estimation of Item Parameters and the GEM Algorithm.

    ERIC Educational Resources Information Center

    Tsutakawa, Robert K.

    The models and procedures discussed in this paper are related to those presented in Bock and Aitkin (1981), where they considered the 2-parameter probit model and approximated a normally distributed prior distribution of abilities by a finite and discrete distribution. One purpose of this paper is to clarify the nature of the general EM (GEM)…

  8. Parameter estimation method and updating of regional prediction equations for ungaged sites in the desert region of California

    USGS Publications Warehouse

    Barth, Nancy A.; Veilleux, Andrea G.

    2012-01-01

    The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and a MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
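
    A sketch of the basic log Pearson Type 3 step underlying the analysis above: fit LP3 to the common logarithms of annual peaks via sample moments and read off the flow with a 1-percent annual exceedance probability. The peaks are synthetic, and the regional skew weighting and multiple Grubbs-Beck screening described in the record are omitted:

        # Bulletin 17B-style LP3 fit by sample moments on log10 annual peaks.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(23)
        peaks = 10 ** rng.normal(2.5, 0.35, size=40)   # 40 years of peaks

        logq = np.log10(peaks)
        skew = stats.skew(logq, bias=False)            # station skew only

        # 1% AEP = 0.99 non-exceedance quantile of Pearson III on log10 flows
        q99 = stats.pearson3.ppf(0.99, skew, loc=logq.mean(),
                                 scale=logq.std(ddof=1))
        print(f"1% AEP discharge estimate: {10 ** q99:,.0f} (synthetic units)")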

  9. Regeneration patterns of a long-lived dominant conifer and the effects of logging in southern South America

    NASA Astrophysics Data System (ADS)

    Souza, Alexandre F.; Forgiarini, Cristiane; Longhi, Solon Jonas; Brena, Doádi Antônio

    2008-09-01

    The regeneration ecology of the long-lived conifer Araucaria angustifolia was studied in São Francisco de Paula, southern Brazil. We evaluated the expectations that: (i) size distribution of populations of Araucaria angustifolia, a large conifer that dominates southern Brazil's mixed forests, is left-skewed in old-growth forests but right-skewed in logged forests, indicating chronic recruitment failure in the first kind of habitat as well as a recruitment pulse in the second; (ii) seedlings and juveniles are found under more open-canopy microsites than would be expected by chance; and (iii) reproductive females would be aggregated at the coarse spatial scales in which past massive recruitment events are expected to have occurred, and young plants would be spatially associated with females due to the prevalence of vertebrate and large-bird seed dispersers. Data were collected in the threatened mixed conifer-hardwood forests in southern Brazil in ten 1-ha plots and one 0.25-ha plot that was hit by a small tornado in 2003. Five of these plots corresponded to unlogged old-growth forests, three to forests where A. angustifolia was selectively logged ca. 60 years ago and two to forests selectively logged ca. 20 years ago. For the first time, ontogenetic life stages of this important conifer are identified and described. The first and second expectations were fulfilled, and the third was partially fulfilled, since seedlings and juveniles were hardly ever associated with reproductive females. These results confirm the generalization of the current conceptual model of emergent long-lived pioneer regeneration to Araucaria angustifolia and associate its regeneration niche to the occupation of large-scale disturbances with long return times.

  10. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    PubMed

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
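
    A sketch of the recommended practice: meta-analyze the C-statistic on the logit scale with a DerSimonian-Laird random-effects model, then back-transform. The study values and standard errors below are hypothetical, and the delta-method SE conversion is an assumption of this sketch, not something the abstract prescribes.

        import numpy as np

        def dl_random_effects(y, se):
            """DerSimonian-Laird random-effects pooling of effects y with SEs se."""
            w = 1.0 / se**2
            y_fixed = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - y_fixed) ** 2)
            tau2 = max(0.0, (Q - (len(y) - 1)) /
                       (np.sum(w) - np.sum(w**2) / np.sum(w)))
            w_star = 1.0 / (se**2 + tau2)
            return np.sum(w_star * y) / np.sum(w_star), tau2

        # Hypothetical C-statistics and standard errors from 5 validation studies.
        c = np.array([0.72, 0.68, 0.75, 0.70, 0.66])
        se_c = np.array([0.02, 0.03, 0.025, 0.02, 0.03])

        logit_c = np.log(c / (1 - c))
        se_logit = se_c / (c * (1 - c))          # delta-method SE on the logit scale
        pooled_logit, tau2 = dl_random_effects(logit_c, se_logit)
        pooled_c = 1 / (1 + np.exp(-pooled_logit))   # back-transform to C-statistic
        print(pooled_c, tau2)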

  11. T helper cell 2 immune skewing in pregnancy/early life: chemical exposure and the development of atopic disease and allergy.

    PubMed

    McFadden, J P; Thyssen, J P; Basketter, D A; Puangpet, P; Kimber, I

    2015-03-01

    During the last 50 years there has been a significant increase in Western societies of atopic disease and associated allergy. The balance between functional subpopulations of T helper cells (Th) determines the quality of the immune response provoked by antigen. One such subpopulation - Th2 cells - is associated with the production of IgE antibody and atopic allergy, whereas Th1 cells antagonize IgE responses and the development of allergic disease. In seeking to provide a mechanistic basis for this increased prevalence of allergic disease, one proposal has been the 'hygiene hypothesis', which argues that in Westernized societies reduced exposure during early childhood to pathogenic microorganisms favours the development of atopic allergy. Pregnancy is normally associated with Th2 skewing, which persists for some months in the neonate before Th1/Th2 realignment occurs. In this review, we consider the immunophysiology of Th2 immune skewing during pregnancy. In particular, we explore the possibility that altered and increased patterns of exposure to certain chemicals have served to accentuate this normal Th2 skewing and therefore further promote the persistence of a Th2 bias in neonates. Furthermore, we propose that the more marked Th2 skewing observed in first pregnancy may, at least in part, explain the higher prevalence of atopic disease and allergy in the firstborn. © 2014 British Association of Dermatologists.

  12. Timing of Puberty in Overweight Versus Obese Boys.

    PubMed

    Lee, Joyce M; Wasserman, Richard; Kaciroti, Niko; Gebremariam, Achamyeleh; Steffes, Jennifer; Dowshen, Steven; Harris, Donna; Serwint, Janet; Abney, Dianna; Smitherman, Lynn; Reiter, Edward; Herman-Giddens, Marcia E

    2016-02-01

    Studies of the relationship of weight status with timing of puberty in boys have been mixed. This study examined whether overweight and obesity are associated with differences in the timing of puberty in US boys. We reanalyzed recent community-based pubertal data from the American Academy of Pediatrics' Pediatric Research in Office Settings study in which trained clinicians assessed boys 6 to 16 years for height, weight, Tanner stages, testicular volume (TV), and other pubertal variables. We classified children based on BMI as normal weight, overweight, or obese and compared median age at a given Tanner stage or greater by weight class using probit and ordinal probit models and a Bayesian approach. Half of boys (49.9%, n = 1931) were white, 25.8% (n = 1000) were African American, and 24.3% (n = 941) were Hispanic. For genital development in white and African American boys across a variety of Tanner stages, we found earlier puberty in overweight compared with normal weight boys, and later puberty in obese compared with overweight, but no significant differences for Hispanics. For TV (≥3 mL or ≥4 mL), our findings support earlier puberty for overweight compared with normal weight white boys. In a large, racially diverse, community-based sample of US boys, we found evidence of earlier puberty for overweight compared with normal or obese, and later puberty for obese boys compared with normal and overweight boys. Additional studies are needed to understand the possible relationships among race/ethnicity, gender, BMI, and the timing of pubertal development. Copyright © 2016 by the American Academy of Pediatrics.

  13. Response to treatment of myasthenia gravis according to clinical subtype.

    PubMed

    Akaishi, Tetsuya; Suzuki, Yasushi; Imai, Tomihiro; Tsuda, Emiko; Minami, Naoya; Nagane, Yuriko; Uzawa, Akiyuki; Kawaguchi, Naoki; Masuda, Masayuki; Konno, Shingo; Suzuki, Hidekazu; Murai, Hiroyuki; Aoki, Masashi; Utsugisawa, Kimiaki

    2016-11-17

    We have previously reported using two-step cluster analysis to classify myasthenia gravis (MG) patients into the following five subtypes: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; anti-acetylcholine receptor antibody (AChR-Ab)-negative MG; and AChR-Ab-positive MG without thymic abnormalities. The objectives of the present study were to examine the reproducibility of this five-subtype classification using a new data set of MG patients and to identify additional characteristics of these subtypes, particularly in regard to response to treatment. A total of 923 consecutive MG patients underwent two-step cluster analysis for the classification of subtypes. The variables used for classification were sex, age of onset, disease duration, presence of thymoma or thymic hyperplasia, positivity for AChR-Ab or anti-muscle-specific tyrosine kinase antibody, positivity for other concurrent autoantibodies, and disease condition at worst and currently. The period from the start of treatment until the achievement of minimal manifestation status (early-stage response) was determined and compared between subtypes using Kaplan-Meier analysis and the log-rank test. In addition, the ratio of the number of patients who maintained minimal manifestations during the study period to the number who achieved that status only once (stability of improved status) was compared between subtypes. As a result of two-step cluster analysis, the 923 MG patients were classified into five subtypes as follows: ocular MG (AChR-Ab positivity, 77%; histogram of onset age skewed to older age); thymoma-associated MG (100%; normal distribution); MG with thymic hyperplasia (89%; skewed to younger age); AChR-Ab-negative MG (0%; normal distribution); and AChR-Ab-positive MG without thymic abnormalities (100%; skewed to older age). Furthermore, patients classified as ocular MG showed the best early-stage response to treatment and stability of improved status, followed by those classified as thymoma-associated MG and AChR-Ab-positive MG without thymic abnormalities; by contrast, those classified as AChR-Ab-negative MG showed the worst early-stage response to treatment and stability of improved status. Differences were seen between the five subtypes in demographic characteristics, clinical severity, and therapeutic response. Our five-subtype classification approach would be beneficial not only for elucidating disease subtypes, but also for planning treatment strategies for individual MG patients.

  14. European Multicenter Study on Analytical Performance of DxN Veris System HCV Assay.

    PubMed

    Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Gismondo, Maria Rita; Hofmann, Jörg; Izopet, Jacques; Kühn, Sebastian; Lombardi, Alessandra; Marcos, Maria Angeles; Sauné, Karine; O'Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel W

    2017-04-01

    The analytical performance of the Veris HCV Assay for use on the new and fully automated Beckman Coulter DxN Veris Molecular Diagnostics System (DxN Veris System) was evaluated at 10 European virology laboratories. Precision, analytical sensitivity, specificity, performance with negative samples, linearity, and performance with hepatitis C virus (HCV) genotypes were evaluated. Precision for all sites showed a standard deviation (SD) of 0.22 log10 IU/ml or lower for each level tested. Analytical sensitivity determined by probit analysis was between 6.2 and 9.0 IU/ml. Specificity on 94 unique patient samples was 100%, and performance with 1,089 negative samples demonstrated 100% not-detected results. Linearity using patient samples was shown from 1.34 to 6.94 log10 IU/ml. The assay demonstrated linearity upon dilution with all HCV genotypes. The Veris HCV Assay demonstrated an analytical performance comparable to that of currently marketed HCV assays when tested across multiple European sites. Copyright © 2017 American Society for Microbiology.
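
    The sensitivity figures above come from probit analysis of hit rates across dilution levels. A minimal maximum-likelihood sketch of that kind of calculation, solving for the concentration detected 95% of the time; the dilution panel below is hypothetical, not the study's data.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        # Hypothetical dilution panel: log10 concentration (IU/ml), replicates, hits.
        log_c = np.log10(np.array([2.0, 5.0, 10.0, 20.0, 50.0]))
        n_rep = np.array([24, 24, 24, 24, 24])
        n_hit = np.array([10, 17, 22, 24, 24])

        def neg_loglik(params):
            a, b = params
            p = norm.cdf(a + b * log_c).clip(1e-10, 1 - 1e-10)
            return -np.sum(n_hit * np.log(p) + (n_rep - n_hit) * np.log(1 - p))

        a, b = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead").x
        lod95 = 10 ** ((norm.ppf(0.95) - a) / b)   # concentration with 95% detection
        print(lod95)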

  15. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion brings more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the multivariate normal probability density function requires multidimensional integration, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Markov chain Monte Carlo approach based on the Metropolis-Hastings sampling algorithm is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188

  16. Inheritance of Properties of Normal and Non-Normal Distributions after Transformation of Scores to Ranks

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.

    2011-01-01

    This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…

  17. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375

  18. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  19. Statistical analysis of the 70 meter antenna surface distortions

    NASA Technical Reports Server (NTRS)

    Kiedron, K.; Chian, C. T.; Chuang, K. L.

    1987-01-01

    Statistical analysis of surface distortions of the 70 meter NASA/JPL antenna, located at Goldstone, was performed. The purpose of this analysis is to verify whether deviations due to gravity loading can be treated as quasi-random variables with normal distribution. Histograms of the RF pathlength error distribution for several antenna elevation positions were generated. The results indicate that the deviations from the ideal antenna surface are not normally distributed. The observed density distribution for all antenna elevation angles is taller and narrower than the normal density, which results in large positive values of kurtosis and a significant amount of skewness. The skewness of the distribution changes from positive to negative as the antenna elevation changes from zenith to horizon.

  20. Studies of the Ability to Hold the Eye in Eccentric Gaze: Measurements in Normal Subjects with the Head Erect

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F.; Somers, Jeffrey T.; Feiveson, Alan H.; Leigh, R. John; Wood, Scott J.; Paloski, William H.; Kornilova, Ludmila

    2006-01-01

    We studied the ability to hold the eyes in eccentric horizontal or vertical gaze angles in 68 normal humans, age range 19-56. Subjects attempted to sustain visual fixation of a briefly flashed target located 30 degrees in the horizontal plane and 15 degrees in the vertical plane in a dark environment. Conventionally, the ability to hold eccentric gaze is estimated by fitting centripetal eye drifts with exponential curves and calculating the time constant (t_c) of these slow phases of gaze-evoked nystagmus. Although the distribution of time-constant measurements in our normal subjects was extremely skewed due to occasional test runs that exhibited near-perfect stability (large t_c values), we found that log10(t_c) was approximately normally distributed within classes of target direction. Therefore, statistical estimation and inference on the effect of target direction was performed on values of z = log10(t_c). Subjects showed considerable variation in their eye-drift performance over repeated trials; nonetheless, statistically significant differences emerged: values of t_c were significantly higher for gaze elicited to targets in the horizontal plane than for the vertical plane (P < 10^-5), suggesting eccentric gaze holding is more stable in the horizontal than in the vertical plane. Furthermore, centrifugal eye drifts were observed in 13.3, 16.0 and 55.6% of cases for horizontal, upgaze and downgaze tests, respectively. Fifth percentile values of the time constant were estimated to be 10.2 sec, 3.3 sec and 3.8 sec for horizontal, upward and downward gaze, respectively. The difference between horizontal and vertical gaze holding may be ascribed to separate components of the velocity-to-position neural integrator for eye movements, and to differences in orbital mechanics. Our statistical method for representing the range of normal eccentric gaze stability can be readily applied in a clinical setting to patients who were exposed to environments that may have modified their central integrators and thus require monitoring. Patients with gaze-evoked nystagmus can be flagged by comparing to the above established normative criteria.

  1. Radiation exposure assessment for portsmouth naval shipyard health studies.

    PubMed

    Daniels, R D; Taulbee, T D; Chen, P

    2004-01-01

    Occupational radiation exposures of 13,475 civilian nuclear shipyard workers were investigated as part of a retrospective mortality study. Estimates of annual, cumulative and collective doses were tabulated for future dose-response analysis. Record sets were assembled and amended through range checks, examination of distributions and inspection. Methods were developed to adjust for administrative overestimates and dose from previous employment. Uncertainties from doses below the recording threshold were estimated. Low-dose protracted radiation exposures from submarine overhaul and repair predominated. Cumulative doses are best approximated by a hybrid log-normal distribution with arithmetic mean and median values of 20.59 and 3.24 mSv, respectively. The distribution is highly skewed with more than half the workers having cumulative doses <10 mSv and >95% having doses <100 mSv. The maximum cumulative dose is estimated at 649.39 mSv from 15 person-years of exposure. The collective dose was 277.42 person-Sv with 96.8% attributed to employment at Portsmouth Naval Shipyard.
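
    As a rough check, treating the cumulative doses as a plain log-normal (the study fits a hybrid log-normal, so this is only an approximation), the reported mean and median pin down both parameters and imply a below-10-mSv fraction consistent with the abstract.

        import numpy as np
        from scipy.stats import norm

        mean, median = 20.59, 3.24          # mSv, from the study
        mu = np.log(median)                 # log-normal: median = exp(mu)
        sigma = np.sqrt(2 * np.log(mean / median))  # mean = exp(mu + sigma**2 / 2)

        frac_below_10 = norm.cdf((np.log(10) - mu) / sigma)
        print(sigma, frac_below_10)  # ~1.92 and ~0.72: "more than half below 10 mSv"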

  2. Skewness in large-scale structure and non-Gaussian initial conditions

    NASA Technical Reports Server (NTRS)

    Fry, J. N.; Scherrer, Robert J.

    1994-01-01

    We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X; Yang, F

    Purpose: Knowing MLC leaf positioning error over the course of treatment would be valuable for treatment planning, QA design, and patient safety. The objective of the current study was to quantify the MLC positioning accuracy for VMAT delivery of head and neck treatment plans. Methods: A total of 837 MLC log files were collected from 14 head and neck cancer patients undergoing full-arc VMAT treatment on one Varian Trilogy machine. The actual and planned leaf gaps were extracted from the retrieved MLC log files. For a given patient, the leaf gap error percentage (LGEP), defined as the ratio of the actual leaf gap over the planned, was evaluated for each leaf pair at all the gantry angles recorded over the course of the treatment. Statistics describing the distribution of the largest LGEP (LLGEP) of the 60 leaf pairs, including the maximum, minimum, mean, kurtosis, and skewness, were evaluated. Results: For the 14 studied patients, the PTVs were located at the tonsil, base of tongue, larynx, supraglottis, nasal cavity, and thyroid gland, with volumes ranging from 72.0 cm^3 to 602.0 cm^3. The identified LLGEP differed between patients, ranging from 183.9% to 457.7% with a mean of 368.6%. For the majority of the patients, the LLGEP distributions peaked at non-zero positions and showed no obvious dependence on gantry rotation. Kurtosis and skewness, with minimum/maximum of 66.6/217.9 and 6.5/12.6, respectively, suggested a strongly peaked and right-skewed leaf error distribution pattern. Conclusion: The results indicate that the pattern of MLC leaf gap error differs between patients with lesions located at similar anatomic sites. Understanding the systemic mechanisms underlying these observed error patterns necessitates examining more patient-specific plan parameters in a large patient cohort setting.
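
    A sketch of the per-patient summary described above, assuming the actual and planned leaf gaps have already been parsed out of the log files; the array shapes and synthetic data are hypothetical.

        import numpy as np
        from scipy.stats import kurtosis, skew

        def llgep_stats(actual, planned):
            """Largest leaf-gap error percentage (LLGEP) summary for one patient.

            actual, planned: arrays of shape (n_control_points, 60 leaf pairs),
            assumed already extracted from the MLC log files.
            """
            lgep = 100.0 * actual / planned   # leaf gap error percentage
            llgep = lgep.max(axis=0)          # largest LGEP per leaf pair
            return {
                "max": llgep.max(),
                "min": llgep.min(),
                "mean": llgep.mean(),
                "kurtosis": kurtosis(llgep),
                "skewness": skew(llgep),
            }

        rng = np.random.default_rng(0)
        planned = rng.uniform(5, 50, size=(500, 60))              # mm, synthetic
        actual = planned * rng.lognormal(0.0, 0.1, size=planned.shape)
        print(llgep_stats(actual, planned))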

  4. Determining the role of skewed X-chromosome inactivation in developing muscle symptoms in carriers of Duchenne muscular dystrophy.

    PubMed

    Viggiano, Emanuela; Ergoli, Manuela; Picillo, Esther; Politano, Luisa

    2016-07-01

    Duchenne and Becker dystrophinopathies (DMD and BMD) are X-linked recessive disorders caused by mutations in the dystrophin gene that lead to absent or reduced expression of dystrophin in both skeletal and heart muscles. DMD/BMD female carriers are usually asymptomatic, although about 8% may exhibit muscle or cardiac symptoms. Several mechanisms leading to a reduced dystrophin have been hypothesized to explain the clinical manifestations and, in particular, the role of skewed XCI is questioned. In this review, we analyze the mechanism of XCI and its involvement in the phenotype of BMD/DMD carriers with either a normal karyotype or X;autosome translocations with breakpoints at Xp21 (the locus of the DMD gene). We have previously observed that DMD carriers with moderate/severe muscle involvement exhibit a moderate or extremely skewed XCI, in particular if presenting with an early onset of symptoms, while DMD carriers with mild muscle involvement present a random XCI. Moreover, we found that among the 87.1% of carriers with X;autosome translocations involving the locus Xp21 who developed signs and symptoms of dystrophinopathy such as proximal muscle weakness and difficulty to run, jump and climb stairs, 95.2% had a skewed XCI pattern in lymphocytes. These data support the hypothesis that skewed XCI is involved in the onset of phenotype in DMD carriers, the X chromosome carrying the normal DMD gene being preferentially inactivated and leading to a moderate-severe muscle involvement.

  5. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
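
    For orientation, the default Fisher z' interval that the paper's methods adjust; the 1/(n - 3) variance below is the piece the skewness/kurtosis-based adjustments replace.

        import numpy as np
        from scipy.stats import norm

        def fisher_z_ci(r, n, level=0.95):
            """Default confidence interval for a Pearson correlation via Fisher's z'.

            Assumes bivariate normality; the paper's adjustment replaces the
            1/(n - 3) variance with one based on sample skewness and kurtosis.
            """
            z = np.arctanh(r)
            half = norm.ppf(0.5 + level / 2) / np.sqrt(n - 3)
            return np.tanh(z - half), np.tanh(z + half)

        print(fisher_z_ci(r=0.45, n=50))   # hypothetical sample values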

  6. POLO: a user's guide to Probit Or LOgit analysis.

    Treesearch

    Jacqueline L. Robertson; Robert M. Russell; N.E. Savin

    1980-01-01

    This user's guide provides detailed instructions for the use of POLO (Probit Or LOgit), a computer program for the analysis of quantal response data such as that obtained from insecticide bioassays by the techniques of probit or logit analysis. Dosage-response lines may be compared for parallelism or...

  7. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including the skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.

  8. A Skew-Normal Mixture Regression Model

    ERIC Educational Resources Information Center

    Liu, Min; Lin, Tsung-I

    2014-01-01

    A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…

  9. Choosing the Best Correction Formula for the Pearson r[superscript 2] Effect Size

    ERIC Educational Resources Information Center

    Skidmore, Susan Troncoso; Thompson, Bruce

    2011-01-01

    In the present Monte Carlo simulation study, the authors compared bias and precision of 7 sampling error corrections to the Pearson r[superscript 2] under 6 x 3 x 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9, respectively; population shapes normal, skewness = kurtosis = 1, and skewness = -1.5 with kurtosis =…

  10. Suppression of the Near Wall Burst Process of a Fully Developed Turbulent Pipe Flow

    DTIC Science & Technology

    1993-05-01

    [Fragmentary record: the retrieved text consists of figure-list captions, including wind-tunnel turbulent boundary layer velocity fluctuation skewness and kurtosis levels; quadrant (1-2 and 3-4) contributions normalized by the undisturbed total uv level and u*; and spanwise development of uw and radial velocity skewness levels, normalized with reference u*.]

  11. Transforming wealth: using the inverse hyperbolic sine (IHS) and splines to predict youth's math achievement.

    PubMed

    Friedline, Terri; Masa, Rainier D; Chowa, Gina A N

    2015-01-01

    The natural log and categorical transformations commonly applied to wealth for meeting the statistical assumptions of research may not always be appropriate for adjusting for skewness given wealth's unique properties. Finding and applying appropriate transformations is becoming increasingly important as researchers consider wealth as a predictor of well-being. We present an alternative transformation, the inverse hyperbolic sine (IHS), for simultaneously dealing with skewness and accounting for wealth's unique properties. Using the relationship between household wealth and youth's math achievement as an example, we apply the IHS transformation to wealth data from US and Ghanaian households. We also explore non-linearity and accumulation thresholds by combining IHS-transformed wealth with splines. IHS-transformed wealth relates to youth's math achievement similarly when compared to categorical and natural log transformations, indicating that it is a viable alternative to other transformations commonly used in research. Non-linear relationships and accumulation thresholds emerge that predict youth's math achievement when splines are incorporated. In US households, accumulating debt relates to decreases in math achievement whereas accumulating assets relates to increases in math achievement. In Ghanaian households, accumulating assets between the 25th and 50th percentiles relates to increases in youth's math achievement. Copyright © 2014 Elsevier Inc. All rights reserved.
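
    The IHS transform itself is a one-liner via numpy's arcsinh; unlike log(x), it is defined at zero and for negative values (debt), which is what makes it attractive for wealth. The scale parameter theta is an optional knob some applications add, not something the abstract specifies.

        import numpy as np

        def ihs(x, theta=1.0):
            """Inverse hyperbolic sine: log(theta*x + sqrt((theta*x)**2 + 1)) / theta.

            Defined for zero and negative values (e.g., net debt), unlike log(x).
            theta rescales the units; theta = 1 is the plain arcsinh.
            """
            return np.arcsinh(theta * x) / theta

        wealth = np.array([-5000.0, 0.0, 250.0, 12000.0])  # hypothetical net worth
        print(ihs(wealth))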

  12. Adjusting annual maximum peak discharges at selected stations in northeastern Illinois for changes in land-use conditions

    USGS Publications Warehouse

    Over, Thomas M.; Saito, Riki J.; Soong, David T.

    2016-06-30

    The observed and adjusted values for each streamgage are tabulated. To illustrate the overall effect of the adjustments, differences in the mean, standard deviation, and skewness of the log-transformed observed and urbanization-adjusted peak discharge series by streamgage are computed. For almost every streamgage where an adjustment was applied (no increase in urbanization was reported for a few streamgages), the mean increased and the standard deviation decreased; the effect on skewness values was more variable but usually they increased. Significant positive peak discharge trends were common in the observed values, occurring at 27.3 percent of streamgages at a p-value of 0.05 according to a Kendall’s tau correlation test; in the adjusted values, the incidence of such trends was reduced to 7.0 percent.

  13. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

    When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust chi-square statistic and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.

  14. Estimation of Rank Correlation for Clustered Data

    PubMed Central

    Rosner, Bernard; Glynn, Robert

    2017-01-01

    It is well known that the sample correlation coefficient (Rxy) is the maximum likelihood estimator (MLE) of the Pearson correlation (ρxy) for i.i.d. bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the MLE of ρxy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (a) converting ranks of both X and Y to the probit scale, (b) estimating the Pearson correlation between probit scores for X and Y, and (c) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. PMID:28399615
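
    A sketch of steps (a)-(c) for the simple independent-data case: ranks are mapped to the probit scale, the Pearson correlation of the scores is computed, and the bivariate-normal identity rho_s = (6/pi) * arcsin(rho/2) converts it to a rank correlation. The clustered-data method in the paper replaces the middle step with a mixed-effects estimate.

        import numpy as np
        from scipy.stats import norm, rankdata

        def rank_corr_via_probit(x, y):
            n = len(x)
            # (a) ranks -> probit scores
            zx = norm.ppf(rankdata(x) / (n + 1))
            zy = norm.ppf(rankdata(y) / (n + 1))
            # (b) Pearson correlation of the probit scores
            rho = np.corrcoef(zx, zy)[0, 1]
            # (c) Pearson -> rank correlation under bivariate normality
            return (6.0 / np.pi) * np.arcsin(rho / 2.0)

        rng = np.random.default_rng(1)
        x = rng.normal(size=200)
        y = 0.6 * x + rng.normal(size=200)     # hypothetical correlated data
        print(rank_corr_via_probit(x, y))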

  15. A flexible model for multivariate interval-censored survival times with complex correlation structure.

    PubMed

    Falcaro, Milena; Pickles, Andrew

    2007-02-10

    We focus on the analysis of multivariate survival times with highly structured interdependency and subject to interval censoring. Such data are common in developmental genetics and genetic epidemiology. We propose a flexible mixed probit model that deals naturally with complex but uninformative censoring. The recorded ages of onset are treated as possibly censored ordinal outcomes with the interval censoring mechanism seen as arising from a coarsened measurement of a continuous variable observed as falling between subject-specific thresholds. This bypasses the requirement for the failure times to be observed as falling into non-overlapping intervals. The assumption of a normal age-of-onset distribution of the standard probit model is relaxed by embedding within it a multivariate Box-Cox transformation whose parameters are jointly estimated with the other parameters of the model. Complex decompositions of the underlying multivariate normal covariance matrix of the transformed ages of onset become possible. The new methodology is here applied to a multivariate study of the ages of first use of tobacco and first consumption of alcohol without parental permission in twins. The proposed model allows estimation of the genetic and environmental effects that are shared by both of these risk behaviours as well as those that are specific. 2006 John Wiley & Sons, Ltd.

  16. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values; an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.

  17. Multiple imputation in the presence of non-normal data.

    PubMed

    Lee, Katherine J; Carlin, John B

    2017-02-20

    Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
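
    A sketch of the zero-skewness log transform mentioned above: choose the shift k so that log(x + k) has zero sample skewness (the idea behind Stata's lnskew0). The root-finding bracket below is a hypothetical choice, not taken from the paper.

        import numpy as np
        from scipy.stats import skew
        from scipy.optimize import brentq

        def zero_skew_log(x):
            """Shifted log transform log(x + k), with k chosen so skewness is 0."""
            def skew_of_shift(k):
                return skew(np.log(x + k))
            lo = 1e-6 - x.min()            # smallest shift keeping x + k positive
            k = brentq(skew_of_shift, lo, lo + 100 * x.std())  # hypothetical bracket
            return np.log(x + k), k

        rng = np.random.default_rng(2)
        x = rng.lognormal(0.0, 1.0, size=500)   # hypothetical right-skewed data
        y, k = zero_skew_log(x)
        print(k, skew(y))                       # skewness of y is ~0 by construction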

  18. WikiLeaks and Iraq Body Count: the sum of parts may not add up to the whole-a comparison of two tallies of Iraqi civilian deaths.

    PubMed

    Carpenter, Dustin; Fuller, Tova; Roberts, Les

    2013-06-01

    Introduction: The number of civilians killed in Iraq following the 2003 invasion has proven difficult to measure and contentious in recent years. The release of the WikiLeaks War Logs (WL) has created the potential to conduct a sensitivity analysis of the commonly-cited Iraq Body Count's (IBC's) tally, which is based on press, government, and other public sources. Hypothesis: The 66,000 deaths reported in the WikiLeaks War Logs are mostly the same events as those previously reported in the press and elsewhere as tallied by iraqbodycount.org. A systematic random sample of 2500 violent fatal War Log incidents was selected and evaluated to determine whether these incidents were also found in IBC's press-based listing. Each selected event was ranked on a scale of 0 (no match present) to 3 (almost certainly matched) with regard to the likelihood it was listed in the IBC database. Of the two thousand four hundred and nine War Log records, 488 (23.8%) were found to have likely matches in IBC records. Events that killed more people were far more likely to appear in both datasets, with 94.1% of events in which ≥20 people were killed being likely matches, as compared with 17.4% of singleton killings. Because of this skew towards the recording of large events in both datasets, it is estimated that 2035 (46.3%) of the 4394 deaths reported in the WikiLeaks War Logs had been previously reported in IBC. Passive surveillance systems, widely seen as incomplete, may also be selective in the types of events detected in times of armed conflict. Bombings and other events during which many people are killed, and events in less violent areas, appear to be detected far more often, creating a skewed image of the mortality profile in Iraq. Members of the press and researchers should be hesitant to draw conclusions about the nature or extent of violence from passive surveillance systems of low or unknown sensitivity.

  19. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects

    ERIC Educational Resources Information Center

    Ho, Andrew D.; Yu, Carol C.

    2015-01-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…

  20. Muscle categorization using PDF estimation and Naive Bayes classification.

    PubMed

    Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W

    2012-01-01

    The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e., it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques that are based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs, and Bayes rule is then used to aggregate the MUPT characterization probabilities into muscle-level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than that of discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Using Gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD, and further improvements resulted from using feature value histograms to choose more representative PDFs; for instance, using a log-normal distribution to represent skewed histograms.
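
    A sketch of the PDF-based approach described above, under simplifying assumptions: one Gaussian feature density per class (class parameters and feature values are hypothetical), posteriors per motor unit potential train, and naive Bayes aggregation of train-level log odds into a muscle-level probability.

        import numpy as np
        from scipy.stats import norm

        # Hypothetical per-class Gaussian parameters for one MUP feature (e.g., ms).
        params = {"normal": (10.0, 2.0), "myopathic": (7.0, 2.5)}
        prior = {"normal": 0.5, "myopathic": 0.5}

        def mupt_posteriors(x):
            """Posterior P(class | feature) for one motor unit potential train."""
            like = {c: norm.pdf(x, mu, sd) for c, (mu, sd) in params.items()}
            z = sum(prior[c] * like[c] for c in params)
            return {c: prior[c] * like[c] / z for c in params}

        def muscle_posterior(features):
            """Aggregate MUPT posteriors to a muscle-level posterior via Bayes rule
            (naive independence across trains; equal class priors assumed)."""
            log_odds = sum(np.log(p["myopathic"] / p["normal"])
                           for p in map(mupt_posteriors, features))
            return 1.0 / (1.0 + np.exp(-log_odds))  # P(myopathic | all trains)

        print(muscle_posterior([8.1, 7.4, 9.0, 6.8]))  # hypothetical MUPT features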

  1. Univariate and multivariate skewness and kurtosis for measuring nonnormality: Prevalence, influence and estimation.

    PubMed

    Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai

    2017-10-01

    Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known of the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis in SAS, SPSS, R and a newly developed Web application.
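
    The paper's tutorial covers SAS, SPSS, R and a Web application; a rough Python equivalent for Mardia's multivariate skewness and kurtosis (one common choice of multivariate measures, assumed here) might look like:

        import numpy as np

        def mardia(X):
            """Mardia's multivariate skewness (b1,p) and kurtosis (b2,p)
            for an n x p data matrix X."""
            n, p = X.shape
            Xc = X - X.mean(axis=0)
            S = (Xc.T @ Xc) / n                  # biased (MLE) covariance
            D = Xc @ np.linalg.inv(S) @ Xc.T     # Mahalanobis cross-products
            b1p = np.sum(D ** 3) / n**2          # skewness; ~0 under normality
            b2p = np.mean(np.diag(D) ** 2)       # kurtosis; ~p*(p+2) under normality
            return b1p, b2p

        rng = np.random.default_rng(3)
        X = rng.normal(size=(500, 3))            # hypothetical multivariate sample
        print(mardia(X))                         # expect b1p near 0, b2p near 15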

  2. Evaluation of waste mushroom logs as a potential biomass resource for the production of bioethanol.

    PubMed

    Lee, Jae-Won; Koo, Bon-Wook; Choi, Joon-Weon; Choi, Don-Ha; Choi, In-Gyu

    2008-05-01

    In order to investigate the possibility of using waste mushroom logs as a biomass resource for alternative energy production, the chemical and physical characteristics of normal wood and waste mushroom logs were examined. Size reduction of normal wood (145 kWh/tonne) required significantly higher energy consumption than that of waste mushroom logs (70 kWh/tonne). The crystallinity value of waste mushroom logs was dramatically lower (33%) than that of normal wood (49%) after cultivation with Lentinus edodes as spawn. Lignin, an inhibitor of enzymatic hydrolysis in sugar production, decreased from 21.07% to 18.78% after inoculation with L. edodes. Total sugar yields obtained by enzymatic and acid hydrolysis were higher in waste mushroom logs than in normal wood. After 24 h of fermentation, 12 g/L of ethanol was produced from waste mushroom logs, while normal wood produced 8 g/L. These results indicate that waste mushroom logs are an economically suitable lignocellulosic material for the production of fermentable sugars for bioethanol production.

  3. Anorexia Nervosa: Analysis of Trabecular Texture with CT

    PubMed Central

    Tabari, Azadeh; Torriani, Martin; Miller, Karen K.; Klibanski, Anne; Kalra, Mannudeep K.

    2017-01-01

    Purpose To determine indexes of skeletal integrity by using computed tomographic (CT) trabecular texture analysis of the lumbar spine in patients with anorexia nervosa and normal-weight control subjects and to determine body composition predictors of trabecular texture. Materials and Methods This cross-sectional study was approved by the institutional review board and compliant with HIPAA. Written informed consent was obtained. The study included 30 women with anorexia nervosa (mean age ± standard deviation, 26 years ± 6) and 30 normal-weight age-matched women (control group). All participants underwent low-dose single-section quantitative CT of the L4 vertebral body with use of a calibration phantom. Trabecular texture analysis was performed by using software. Skewness (asymmetry of gray-level pixel distribution), kurtosis (pointiness of pixel distribution), entropy (inhomogeneity of pixel distribution), and mean value of positive pixels (MPP) were assessed. Bone mineral density and abdominal fat and paraspinal muscle areas were quantified with quantitative CT. Women with anorexia nervosa and normal-weight control subjects were compared by using the Student t test. Linear regression analyses were performed to determine associations between trabecular texture and body composition. Results Women with anorexia nervosa had higher skewness and kurtosis, lower MPP (P < .001), and a trend toward lower entropy (P = .07) compared with control subjects. Bone mineral density, abdominal fat area, and paraspinal muscle area were inversely associated with skewness and kurtosis and positively associated with MPP and entropy. Texture parameters, but not bone mineral density, were associated with lowest lifetime weight and duration of amenorrhea in anorexia nervosa. Conclusion Patients with anorexia nervosa had increased skewness and kurtosis and decreased entropy and MPP compared with normal-weight control subjects. These parameters were associated with lowest lifetime weight and duration of amenorrhea, but there were no such associations with bone mineral density. These findings suggest that trabecular texture analysis might contribute information about bone health in anorexia nervosa that is independent of that provided with bone mineral density. © RSNA, 2016 PMID:27797678

  4. Anorexia Nervosa: Analysis of Trabecular Texture with CT.

    PubMed

    Tabari, Azadeh; Torriani, Martin; Miller, Karen K; Klibanski, Anne; Kalra, Mannudeep K; Bredella, Miriam A

    2017-04-01

    Purpose To determine indexes of skeletal integrity by using computed tomographic (CT) trabecular texture analysis of the lumbar spine in patients with anorexia nervosa and normal-weight control subjects and to determine body composition predictors of trabecular texture. Materials and Methods This cross-sectional study was approved by the institutional review board and compliant with HIPAA. Written informed consent was obtained. The study included 30 women with anorexia nervosa (mean age ± standard deviation, 26 years ± 6) and 30 normal-weight age-matched women (control group). All participants underwent low-dose single-section quantitative CT of the L4 vertebral body with use of a calibration phantom. Trabecular texture analysis was performed by using software. Skewness (asymmetry of gray-level pixel distribution), kurtosis (pointiness of pixel distribution), entropy (inhomogeneity of pixel distribution), and mean value of positive pixels (MPP) were assessed. Bone mineral density and abdominal fat and paraspinal muscle areas were quantified with quantitative CT. Women with anorexia nervosa and normal-weight control subjects were compared by using the Student t test. Linear regression analyses were performed to determine associations between trabecular texture and body composition. Results Women with anorexia nervosa had higher skewness and kurtosis, lower MPP (P < .001), and a trend toward lower entropy (P = .07) compared with control subjects. Bone mineral density, abdominal fat area, and paraspinal muscle area were inversely associated with skewness and kurtosis and positively associated with MPP and entropy. Texture parameters, but not bone mineral density, were associated with lowest lifetime weight and duration of amenorrhea in anorexia nervosa. Conclusion Patients with anorexia nervosa had increased skewness and kurtosis and decreased entropy and MPP compared with normal-weight control subjects. These parameters were associated with lowest lifetime weight and duration of amenorrhea, but there were no such associations with bone mineral density. These findings suggest that trabecular texture analysis might contribute information about bone health in anorexia nervosa that is independent of that provided with bone mineral density. © RSNA, 2016.

  5. Two brothers with skewed thiopurine metabolism in ulcerative colitis treated successfully with allopurinol and mercaptopurine dose reduction.

    PubMed

    Hoentjen, Frank; Hanauer, Stephen B; de Boer, Nanne K; Rubin, David T

    2012-01-01

    Thiopurine therapy effectively maintains remission in inflammatory bowel disease. However, many patients are unable to achieve optimum benefits from azathioprine or 6-mercaptopurine because of undesirable metabolism related to high thiopurine methyltransferase (TPMT) activity, characterized by hepatic transaminitis secondary to increased 6-methylmercaptopurine (6-MMP) production and reduced levels of therapeutic 6-thioguanine nucleotide (6-TGN). Allopurinol can optimize this skewed metabolism. We discuss two brothers who were both diagnosed with ulcerative colitis (UC). Their disease remained active despite oral and topical mesalamines. Steroids followed by 6-mercaptopurine (MP) were unsuccessfully introduced for both patients, and both were found to have high 6-MMP and low 6-TGN levels, despite normal TPMT enzyme activity, accompanied by transaminitis. Allopurinol was introduced in combination with MP dose reduction. For both brothers, the addition of allopurinol was associated with successful remission and optimized MP metabolites. These siblings with active UC illustrate that skewed thiopurine metabolism may occur despite normal TPMT enzyme activity and can lead to adverse events in the absence of disease control. We confirm previous data showing that addition of allopurinol can reverse this skewed metabolism, and reduce both hepatotoxicity and disease activity, but we now also introduce the concept of a family history of preferential MP metabolism as a clue to effective management for other family members.

  6. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    ERIC Educational Resources Information Center

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data which do not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…

  7. Considerations on the mechanisms of alternating skew deviation in patients with cerebellar lesions.

    PubMed

    Zee, D S

    1996-01-01

    Alternating skew deviation, in which the side of the higher eye changes depending upon whether gaze is directed to the left or the right, is a frequent sign in patients with posterior fossa lesions, including those restricted to the cerebellum. Here we propose a mechanism for alternating skews related to the otolith-ocular responses to fore and aft pitch of the head in lateral-eyed animals. In lateral-eyed animals the expected response to a static head pitch is cyclorotation of the eyes. But if the eyes are rotated horizontally in the orbit, away from the primary position, a compensatory skew deviation should also appear. The direction of the skew would depend upon whether the eyes were directed to the right (left eye forward, right eye backward) or to the left (left eye backward, right eye forward). In contrast, for frontal-eyed animals, skew deviations are counterproductive because they create diplopia and interfere with binocular vision. We attribute the emergence of skew deviations in frontal-eyed animals in pathological conditions to 1) an imbalance in otolith-ocular pathways and 2) a loss of the component of ocular motor innervation that normally corrects for the differences in pulling directions and strengths of the various ocular muscles as the eyes change position in the orbit. Such a compensatory mechanism is necessary to ensure optimal binocular visual function during and after head motion. This compensatory mechanism may depend upon the cerebellum.

  8. Determining the Relationship Between Moral Waivers and Marine Corps Unsuitability Attrition

    DTIC Science & Technology

    2008-03-01

    observed characteristics. However, econometric research indicates that the magnitude of interaction effects estimated via probit or logit models may... Using files from fiscal years 1997 to 2005, multivariate probit models were used to analyze the effects of moral waivers on unsatisfactory service separations.

  9. A Single HIV-1 Cluster and a Skewed Immune Homeostasis Drive the Early Spread of HIV among Resting CD4+ Cell Subsets within One Month Post-Infection

    PubMed Central

    Avettand-Fenoël, Véronique; Nembot, Georges; Mélard, Adeline; Blanc, Catherine; Lascoux-Combe, Caroline; Slama, Laurence; Allegre, Thierry; Allavena, Clotilde; Yazdanpanah, Yazdan; Duvivier, Claudine; Katlama, Christine; Goujard, Cécile; Seksik, Bao Chau Phung; Leplatois, Anne; Molina, Jean-Michel; Meyer, Laurence; Autran, Brigitte; Rouzioux, Christine

    2013-01-01

    Optimizing therapeutic strategies for an HIV cure requires better understanding the characteristics of early HIV-1 spread among resting CD4+ cells within the first month of primary HIV-1 infection (PHI). We studied the immune distribution, diversity, and inducibility of total HIV-DNA among the following cell subsets: monocytes, peripheral blood activated and resting CD4 T cells, long-lived (naive [TN] and central-memory [TCM]) and short-lived (transitional-memory [TTM] and effector-memory cells [TEM]) resting CD4+T cells from 12 acutely-infected individuals recruited at a median 36 days from infection. Cells were sorted for total HIV-DNA quantification, phylogenetic analysis and inducibility, all studied in relation to activation status and cell signaling. One month post-infection, a single CCR5-restricted viral cluster was massively distributed in all resting CD4+ subsets from 88% subjects, while one subject showed a slight diversity. High levels of total HIV-DNA were measured among TN (median 3.4 log copies/million cells), although 10-fold less (p = 0.0005) than in equally infected TCM (4.5), TTM (4.7) and TEM (4.6) cells. CD3−CD4+ monocytes harbored a low viral burden (median 2.3 log copies/million cells), unlike equally infected resting and activated CD4+ T cells (4.5 log copies/million cells). The skewed repartition of resting CD4 subsets influenced their contribution to the pool of resting infected CD4+T cells, two thirds of which consisted of short-lived TTM and TEM subsets, whereas long-lived TN and TCM subsets contributed the balance. Each resting CD4 subset produced HIV in vitro after stimulation with anti-CD3/anti-CD28+IL-2 with kinetics and magnitude varying according to subset differentiation, while IL-7 preferentially induced virus production from long-lived resting TN cells. In conclusion, within a month of infection, a clonal HIV-1 cluster is massively distributed among resting CD4 T-cell subsets with a flexible inducibility, suggesting that subset activation and skewed immune homeostasis determine the conditions of viral dissemination and early establishment of the HIV reservoir. PMID:23691172

  10. HINTS to diagnose stroke in the acute vestibular syndrome: three-step bedside oculomotor examination more sensitive than early MRI diffusion-weighted imaging.

    PubMed

    Kattah, Jorge C; Talkad, Arun V; Wang, David Z; Hsieh, Yu-Hsiang; Newman-Toker, David E

    2009-11-01

    Acute vestibular syndrome (AVS) is often due to vestibular neuritis but can result from vertebrobasilar strokes. Misdiagnosis of posterior fossa infarcts in emergency care settings is frequent. Bedside oculomotor findings may reliably identify stroke in AVS, but prospective studies have been lacking. The authors conducted a prospective, cross-sectional study at an academic hospital. Consecutive patients with AVS (vertigo, nystagmus, nausea/vomiting, head-motion intolerance, unsteady gait) with ≥1 stroke risk factor underwent structured examination, including horizontal head impulse test of vestibulo-ocular reflex function, observation of nystagmus in different gaze positions, and prism cross-cover test of ocular alignment. All underwent neuroimaging and admission (generally <72 hours after symptom onset). Strokes were diagnosed by MRI or CT. Peripheral lesions were diagnosed by normal MRI and clinical follow-up. One hundred one high-risk patients with AVS included 25 peripheral and 76 central lesions (69 ischemic strokes, 4 hemorrhages, 3 other). The presence of normal horizontal head impulse test, direction-changing nystagmus in eccentric gaze, or skew deviation (vertical ocular misalignment) was 100% sensitive and 96% specific for stroke. Skew was present in 17% and associated with brainstem lesions (4% peripheral, 4% pure cerebellar, 30% brainstem involvement; χ², P=0.003). Skew correctly predicted lateral pontine stroke in 2 of 3 cases in which an abnormal horizontal head impulse test erroneously suggested peripheral localization. Initial MRI diffusion-weighted imaging was falsely negative in 12% (all <48 hours after symptom onset). Skew predicts brainstem involvement in AVS and can identify stroke when an abnormal horizontal head impulse test falsely suggests a peripheral lesion. A 3-step bedside oculomotor examination (HINTS: Head-Impulse-Nystagmus-Test-of-Skew) appears more sensitive for stroke than early MRI in AVS.

  11. The spatial Probit model-An application to the study of banking crises at the end of the 1990’s

    NASA Astrophysics Data System (ADS)

    Amaral, Andrea; Abreu, Margarida; Mendes, Victor

    2014-12-01

    We use a spatial Probit model to study the effect of contagion between the banking systems of different countries. Applied to the late-1990s banking crisis in Asia, we show that the phenomenon of contagion is better captured by a spatial than by a traditional Probit model. Unlike the latter, the spatial Probit model allows one to consider the cascade of cross and feedback effects of contagion that result from the outbreak of one initial crisis in one country or system. These contagion effects may result either from business connections between institutions of different countries or from institutional similarities between banking systems.

  12. Using the range to calculate the coefficient of variation.

    PubMed

    Rhiel, G Steven

    2004-12-01

    In this research a coefficient of variation (CVhigh-low) is calculated from the highest and lowest values in a set of data. Use of CVhigh-low when the population is normal, leptokurtic, and skewed is discussed. The statistic is the most effective when sampling from the normal distribution. With the leptokurtic distributions, CVhigh-low works well for comparing the relative variability between two or more distributions but does not provide a very "good" point estimate of the population coefficient of variation. With skewed distributions CVhigh-low works well in identifying which data set has the more relative variation but does not specify how much difference there is in the variation. It also does not provide a "good" point estimate.

  13. Robust Bayesian Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Yuan, Ke-Hai

    2003-01-01

    Bayesian factor analysis (BFA) assumes the normal distribution of the current sample conditional on the parameters. Practical data in social and behavioral sciences typically have significant skewness and kurtosis. If the normality assumption is not attainable, the posterior analysis will be inaccurate, although the BFA depends less on the current…

  14. Cohort profile: The promotion of breastfeeding intervention trial (PROBIT).

    PubMed

    Patel, Rita; Oken, Emily; Bogdanovich, Natalia; Matush, Lidia; Sevkovskaya, Zinaida; Chalmers, Beverley; Hodnett, Ellen D; Vilchuck, Konstantin; Kramer, Michael S; Martin, Richard M

    2014-06-01

    The PROmotion of Breastfeeding Intervention Trial (PROBIT) is a multicentre, cluster-randomized controlled trial conducted in the Republic of Belarus, in which the experimental intervention was the promotion of increased breastfeeding duration and exclusivity, modelled on the Baby-friendly hospital initiative. Between June 1996 and December 1997, 17,046 mother-infant pairs were recruited during their postpartum hospital stay from 31 maternity hospitals, of which 16 hospitals and their affiliated polyclinics had been randomly assigned to the arm of PROBIT investigating the promotion of breastfeeding and 15 had been assigned to the control arm, in which the breastfeeding practices and policies in effect at the time of randomization were continued. Of the mother-infant pairs originally recruited for the study, 16,492 (96.7%) were followed at regular intervals until the infants were 12 months of age (PROBIT I) for the outcomes of breastfeeding duration and exclusivity; gastrointestinal and respiratory infections; and atopic eczema. Subsequently, 13,889 (81.5%) of the children from these mother-infant pairs were followed up at age 6.5 years (PROBIT II) for anthropometry, blood pressure (BP), behaviour, dental health, cognitive function, asthma and atopy outcomes, and 13,879 (81.4%) children were followed to the age of 11.5 years (PROBIT III) for anthropometry, body composition, BP, and the measurement of fasted glucose, insulin, adiponectin, insulin-like growth factor-I, and apolipoproteins. The trial registration number for Current Controlled Trials is ISRCTN37687716 and that for ClinicalTrials.gov is NCT01561612. Proposals for collaboration are welcome, and enquiries about PROBIT should be made to an executive group of the study steering committee (M.S.K., R.M.M., and E.O.). More information, including information about how to access the trial data, data collection documents, and bibliography, is available at the trial website (http://www.bristol.ac.uk/social-community-medicine/projects/probit/). Published by Oxford University Press on behalf of the International Epidemiological Association © The Author 2013; all rights reserved.

  15. Skew scattering dominated anomalous Hall effect in Co_x(MgO)_(100-x) granular thin films

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Wen, Yan; Zhao, Yuelei; Li, Peng; He, Xin; Zhang, Junli; He, Yao; Peng, Yong; Yu, Ronghai; Zhang, Xixiang

    2017-10-01

    We investigated the mechanism(s) of the anomalous Hall effect (AHE) in magnetic granular materials by fabricating 100 nm-thick thin films of Co_x(MgO)_(100-x) with a Co volume fraction of 34 ⩽ x ⩽ 100 using co-sputtering at room temperature. We measured the temperature dependence of the longitudinal resistivity (ρ_xx) and the anomalous Hall resistivity (ρ_AHE) from 5 K to 300 K in all samples. We found that when x decreases from 100 to 34, the values of ρ_xx and ρ_AHE increase by about four and three orders of magnitude, respectively. By linearly fitting the 5 K data for the anomalous Hall coefficient (R_s) and ρ_xx to log(R_s) ~ γ log(ρ_xx), we found that our results fell on a straight line with a slope of γ = 0.97 ± 0.02. This fitted value of γ in R_s ∝ ρ_xx^γ clearly suggests that skew scattering dominated the AHE in this granular system. To explore the effect of the scattering on the AHE, we performed the same measurements on annealed samples. We found that although both ρ_xx and ρ_AHE were significantly reduced after annealing, the correlation between them was almost the same, as confirmed by the fitted value γ = 0.99 ± 0.03. These data strongly suggest that the AHE originates from skew scattering in Co-MgO granular thin films no matter how strong the scattering of electrons by interfaces and defects is. This observation may be of importance to the development of spintronic devices based on MgO.

  16. Health Insurance: The Trade-Off Between Risk Pooling and Moral Hazard.

    DTIC Science & Technology

    1989-12-01

    bias comes about because we suppress the intercept term in estimating V. For the power, the test is against 1, - 1. With this transform, the risk...dealing with the same utility function. As one test of whether families behave in the way economic theory suggests, we have also fitted a probit model of...nonparametric alternative to test our results' sensitivity to the assumption of a normal error in both the theoretical and empirical models of the

  17. Probabilistic Model for Laser Damage to the Human Retina

    DTIC Science & Technology

    2012-03-01

    the beam. Power density may be measured as radiant exposure (J/cm²) or as irradiance (W/cm²). In the experimental database used in this study and...to quantify a binary response, either lethal or non-lethal, within a population such as insects or rats. In directed energy research, probit...value of the normalized Arrhenius damage integral. In a one-dimensional simulation, the source term is determined as a spatially averaged irradiance (W/cm²)

  18. Estimation of rank correlation for clustered data.

    PubMed

    Rosner, Bernard; Glynn, Robert J

    2017-06-30

    It is well known that the sample correlation coefficient (R_xy) is the maximum likelihood estimator of the Pearson correlation (ρ_xy) for independent and identically distributed (i.i.d.) bivariate normal data. However, this is not true for ophthalmologic data where X (e.g., visual acuity) and Y (e.g., visual field) are available for each eye and there is positive intraclass correlation for both X and Y in fellow eyes. In this paper, we provide a regression-based approach for obtaining the maximum likelihood estimator of ρ_xy for clustered data, which can be implemented using standard mixed effects model software. This method is also extended to allow for estimation of partial correlation by controlling both X and Y for a vector U of other covariates. In addition, these methods can be extended to allow for estimation of rank correlation for clustered data by (i) converting ranks of both X and Y to the probit scale, (ii) estimating the Pearson correlation between probit scores for X and Y, and (iii) using the relationship between Pearson and rank correlation for bivariate normally distributed data. The validity of the methods in finite-sized samples is supported by simulation studies. Finally, two examples from ophthalmology and analgesic abuse are used to illustrate the methods. Copyright © 2017 John Wiley & Sons, Ltd.
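
    The three-step rank-correlation recipe can be sketched for the simple, non-clustered case (the paper's clustered estimator uses mixed-effects software, which this toy version omits); the rank-to-probit convention r/(n+1) and the test data are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def rank_corr_via_probit(x, y):
    """Rank correlation via probit scores (non-clustered sketch)."""
    n = len(x)
    # (i) convert ranks of X and Y to the probit scale
    zx = stats.norm.ppf(stats.rankdata(x) / (n + 1))
    zy = stats.norm.ppf(stats.rankdata(y) / (n + 1))
    # (ii) Pearson correlation between the probit scores
    r = np.corrcoef(zx, zy)[0, 1]
    # (iii) map back with the bivariate-normal identity
    #       rho_spearman = (6 / pi) * arcsin(rho_pearson / 2)
    return (6 / np.pi) * np.arcsin(r / 2)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.6 * x + rng.normal(size=200)
print(round(rank_corr_via_probit(x, y), 3))
```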

  19. Toxicity of certain new compounds to insecticide-resistant houseflies*

    PubMed Central

    Georghiou, G. P.; Metcalf, R. L.; von Zboray, E. P.

    1965-01-01

    Houseflies in poultry ranches in certain areas of California are now resistant to most insecticides licensed for use in these establishments, and this resistance problem appears likely to spread to other areas in the future. The authors have therefore studied the contact and oral toxicity of 19 new compounds that have shown interesting properties against resistant flies. These compounds were selected from among several hundred submitted by various laboratories for evaluation under a co-operative programme sponsored by the World Health Organization. Five compounds were found to be as toxic to three insecticide-resistant strains as to a susceptible strain, and showed strikingly steep log-dosage/probit mortality lines against the resistant strains. The authors suggest that these compounds be further studied for fly control in field trials. PMID:5294994

  20. Methods for peak-flow frequency analysis and reporting for streamgages in or near Montana based on data through water year 2015

    USGS Publications Warehouse

    Sando, Steven K.; McCarthy, Peter M.

    2018-05-10

    This report documents the methods for peak-flow frequency (hereinafter “frequency”) analysis and reporting for streamgages in and near Montana following implementation of the Bulletin 17C guidelines. The methods are used to provide estimates of peak-flow quantiles for 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for selected streamgages operated by the U.S. Geological Survey Wyoming-Montana Water Science Center (WY–MT WSC). These annual exceedance probabilities correspond to 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Standard procedures specific to the WY–MT WSC for implementing the Bulletin 17C guidelines include (1) the use of the Expected Moments Algorithm analysis for fitting the log-Pearson Type III distribution, incorporating historical information where applicable; (2) the use of weighted skew coefficients (based on weighting at-site station skew coefficients with generalized skew coefficients from the Bulletin 17B national skew map); and (3) the use of the Multiple Grubbs-Beck Test for identifying potentially influential low flows. For some streamgages, the peak-flow records are not well represented by the standard procedures and require user-specified adjustments informed by hydrologic judgement. The specific characteristics of peak-flow records addressed by the informed-user adjustments include (1) regulated peak-flow records, (2) atypical upper-tail peak-flow records, and (3) atypical lower-tail peak-flow records. In all cases, the informed-user adjustments use the Expected Moments Algorithm fit of the log-Pearson Type III distribution using the at-site station skew coefficient, a manual potentially influential low flow threshold, or both. Appropriate methods can be applied to at-site frequency estimates to provide improved representation of long-term hydroclimatic conditions. The methods for improving at-site frequency estimates by weighting with regional regression equations and by Maintenance of Variance Extension Type III record extension are described. Frequency analyses were conducted for 99 example streamgages to indicate various aspects of the frequency-analysis methods described in this report. The frequency analyses and results for the example streamgages are presented in a separate data release associated with this report consisting of tables and graphical plots that are structured to include information concerning the interpretive decisions involved in the frequency analyses. Further, the separate data release includes the input files to the PeakFQ program, version 7.1, including the peak-flow data file and the analysis specification file that were used in the peak-flow frequency analyses. Peak-flow frequencies are also reported in separate data releases for selected streamgages in the Beaverhead River and Clark Fork Basins and also for selected streamgages in the Ruby, Jefferson, and Madison River Basins.
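
    Outside the Expected Moments Algorithm and the low-outlier screening that the report actually prescribes, the core computation is a log-Pearson Type III quantile with an MSE-weighted skew. A minimal moment-based sketch, with synthetic peaks and invented map-skew and MSE values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
peaks = np.exp(rng.normal(8.5, 0.6, size=60))     # synthetic annual peaks (cfs)

logq = np.log10(peaks)
m, s = logq.mean(), logq.std(ddof=1)
g_station = stats.skew(logq, bias=False)

# Weight the at-site and generalized (map) skews inversely to their MSEs
g_map, mse_map, mse_station = -0.10, 0.302, 0.12  # invented values
g_w = (mse_map * g_station + mse_station * g_map) / (mse_map + mse_station)

# 1% AEP (100-year) quantile: frequency factor from a standardized Pearson III
K = stats.pearson3.ppf(0.99, g_w)
print(f"Q100 ≈ {10 ** (m + K * s):,.0f} cfs")
```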

  1. Examining dental expenditure and dental insurance accounting for probability of incurring expenses.

    PubMed

    Teusner, Dana; Smith, Valerie; Gnanamanickam, Emmanuel; Brennan, David

    2017-04-01

    There are few studies of dental service expenditure in Australia. Although dental insurance status is strongly associated with a higher probability of dental visiting, some studies indicate that there is little variation in expenditure by insurance status among those who attend for care. Our objective was to assess the overall impact of insurance on expenditures by modelling the association between insurance and expenditure accounting for variation in the probability of incurring expenses, that is dental visiting. A sample of 3000 adults (aged 30-61 years) was randomly selected from the Australian electoral roll. Dental service expenditures were collected prospectively over 2 years by client-held log books. Questionnaires collecting participant characteristics were administered at baseline, 12 months and 24 months. Unadjusted and adjusted ratios of expenditure were estimated using marginalized two-part log-skew-normal models. Such models accommodate highly skewed data and estimate effects of covariates on the overall marginal mean while accounting for the probability of incurring expenses. Baseline response was 39%; of these, 40% (n = 438) were retained over the 2-year period. Only participants providing complete data were included in the analysis (n = 378). Of these, 68.5% were insured, and 70.9% accessed dental services of which nearly all (97.7%) incurred individual dental expenses. The mean dental service expenditure for the total sample (those who did and did not attend) for dental care was AUS$788. Model-adjusted ratios of mean expenditures were higher for the insured (1.61; 95% CI 1.18, 2.20), females (1.38; 95% CI 1.06, 1.81), major city residents (1.43; 95% CI 1.10, 1.84) and those who brushed their teeth twice or more a day (1.50; 95% CI 1.15, 1.96) than their respective counterparts. Accounting for the probability of incurring dental expenses, and other explanatory factors, insured working-aged adults had (on average) approximately 60% higher individual dental service expenditures than uninsured adults. The analytical approach adopted in this study is useful for estimating effects on dental expenditure when a variable is associated with both the probability of visiting for care, and with the types of services received. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
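
    The two-part structure can be illustrated in miniature: one part for the probability of incurring any expense, plus a skew-normal on the log scale for the positive amounts, with the overall mean combining both. This is a simplified sketch on synthetic data (intercept-only first part, plug-in Monte Carlo mean), not the authors' marginalized estimator:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic expenditures: ~30% zeros (non-attenders) plus a right-skewed
# positive part, loosely mimicking dental-cost data.
n = 1000
attended = rng.random(n) < 0.7
log_cost = stats.skewnorm.rvs(4, loc=5.0, scale=1.0,
                              size=attended.sum(), random_state=rng)
cost = np.zeros(n)
cost[attended] = np.exp(log_cost)

# Part 1: probability of incurring any expense (intercept-only here;
# the study's first part adds covariates such as insurance status).
p_any = (cost > 0).mean()

# Part 2: skew-normal fitted to log expenditures among spenders.
a, loc, scale = stats.skewnorm.fit(np.log(cost[cost > 0]))

# Overall (marginal) mean = P(expense) * E[expense | expense > 0],
# the conditional mean evaluated by Monte Carlo for simplicity.
draws = stats.skewnorm.rvs(a, loc=loc, scale=scale, size=200_000,
                           random_state=rng)
print(round(p_any * np.exp(draws).mean(), 1))
```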

  2. Non-Gaussianity and cross-scale coupling in interplanetary magnetic field turbulence during a rope-rope magnetic reconnection event

    NASA Astrophysics Data System (ADS)

    Miranda, Rodrigo A.; Schelin, Adriane B.; Chian, Abraham C.-L.; Ferreira, José L.

    2018-03-01

    In a recent paper (Chian et al., 2016) it was shown that magnetic reconnection at the interface region between two magnetic flux ropes is responsible for the genesis of interplanetary intermittent turbulence. The normalized third-order moment (skewness) and the normalized fourth-order moment (kurtosis) display a quadratic relation with a parabolic shape that is commonly observed in observational data from turbulence in fluids and plasmas, and is linked to non-Gaussian fluctuations due to coherent structures. In this paper we perform a detailed study of the relation between the skewness and the kurtosis of the modulus of the magnetic field |B| during a triple interplanetary magnetic flux rope event. In addition, we investigate the skewness-kurtosis relation of two-point differences of |B| for the same event. The parabolic relation displays scale dependence and is found to be enhanced during magnetic reconnection, rendering support for the generation of non-Gaussian coherent structures via rope-rope magnetic reconnection. Our results also indicate that a direct coupling between the scales of magnetic flux ropes and the scales within the inertial subrange occurs in the solar wind.
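
    The scale-dependent parabolic relation can be reproduced on a surrogate series: take two-point differences at several lags and regress kurtosis on squared skewness. The gamma-increment surrogate below merely stands in for |B|; it is illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Surrogate signal with skewed, heavy-tailed increments standing in for |B|
incr = rng.gamma(shape=0.8, scale=1.0, size=40_000)
B = np.cumsum(incr - incr.mean())

S, K = [], []
for tau in (1, 2, 4, 8, 16, 32, 64):
    dB = B[tau:] - B[:-tau]                  # two-point differences at lag tau
    S.append(stats.skew(dB))
    K.append(stats.kurtosis(dB, fisher=False))

# Parabolic skewness-kurtosis relation K ~ a*S^2 + b (b near 3, the
# Gaussian value, with coherent structures bending the curve upward)
a, b = np.polyfit(np.square(S), K, 1)
print(f"K ≈ {a:.2f}·S² + {b:.2f}")
```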

  3. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.

  4. Demonstration of a novel Xp22.2 microdeletion as the cause of familial extreme skewing of X-inactivation utilizing case-parent trio SNP microarray analysis.

    PubMed

    Mason, Jane A; Aung, Hnin T; Nandini, Adayapalam; Woods, Rickie G; Fairbairn, David J; Rowell, John A; Young, David; Susman, Rachel D; Brown, Simon A; Hyland, Valentine J; Robertson, Jeremy D

    2018-05-01

    We report a kindred referred for molecular investigation of severe hemophilia A in a young female in which extremely skewed X-inactivation was observed in both the proband and her clinically normal mother. Bidirectional Sanger sequencing of all F8 gene coding regions and exon/intron boundaries was undertaken. Methylation-sensitive restriction enzymes were utilized to investigate skewed X-inactivation using both a classical human androgen receptor (HUMARA) assay, and a novel method targeting differential methylation patterns in multiple informative X-chromosome SNPs. Illumina Whole-Genome Infinium microarray analysis was performed in the case-parent trio (proband and both parents), and the proband's maternal grandmother. The proband was a cytogenetically normal female with severe hemophilia A resulting from a heterozygous F8 pathogenic variant inherited from her similarly affected father. No F8 mutation was identified in the proband's mother, however, both the proband and her mother both demonstrated completely skewed X-chromosome inactivation (100%) in association with a previously unreported 2.3 Mb deletion at Xp22.2. At least three disease-associated genes (FANCB, AP1S2, and PIGA) were contained within the deleted region. We hypothesize that true "extreme" skewing of X-inactivation (≥95%) is a rare occurrence, but when defined correctly there is a high probability of finding an X-chromosome disease-causing variant or larger deletion resulting in X-inactivation through a survival disadvantage or cell lethal mechanism. We postulate that the 2.3 Mb Xp22.2 deletion identified in our kindred arose de novo in the proband's mother (on the grandfather's homolog), and produced extreme skewing of X-inactivation via a "cell lethal" mechanism. We introduce a novel multitarget approach for X-inactivation analysis using multiple informative differentially methylated SNPs, as an alternative to the classical single locus (HUMARA) method. We propose that for females with unexplained severe phenotypic expression of an X-linked recessive disorder trio-SNP microarray should be undertaken in combination with X-inactivation analysis. © 2018 The Authors. Molecular Genetics & Genomic Medicine published by Wiley Periodicals, Inc.

  5. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  6. Social and economic costs and health-related quality of life in stroke survivors in the Canary Islands, Spain

    PubMed Central

    2012-01-01

    Background Cost-of-illness analysis is the main method of providing an overall vision of the economic impact of a disease. Such studies have been used to set priorities for healthcare policies and inform resource allocation. The aim of this study was to determine the economic burden and health-related quality of life (HRQOL) in the first, second and third years after surviving a stroke in the Canary Islands, Spain. Methods Cross-sectional, retrospective study of 448 patients with stroke based on ICD 9 discharge codes, who received outpatient care at five hospitals. The study was approved by the Research Ethics Committee of Nuestra Señora de la Candelaria University Hospital. Data on demographic characteristics, health resource utilization, informal care, labor productivity losses and HRQOL were collected from the hospital admissions databases and questionnaires completed by stroke patients or their caregivers. Labor productivity losses were calculated from physical units and converted into monetary units with a human capital-based method. HRQOL was measured with the EuroQol EQ-5D questionnaire. Healthcare costs, productivity losses and informal care costs were analyzed with log-normal, probit and ordered probit multivariate models. Results The average cost for each stroke survivor was €17 618 in the first, €14 453 in the second and €12 924 in the third year after the stroke; the reference year for unit prices was 2004. The largest expenditures in the first year were informal care and hospitalizations; in the second and third years the main costs were for informal care, productivity losses and medication. Mean EQ-5D index scores for stroke survivors were 0.50 for the first, 0.47 for the second and 0.46 for the third year, and mean EQ-5D visual analog scale scores were 56, 52 and 55, respectively. Conclusions The main strengths of this study lie in our bottom-up-approach to costing, and in the evaluation of stroke survivors from a broad perspective (societal costs) in the first, second and third years after surviving the stroke. This type of analysis is rare in the Spanish context. We conclude that stroke incurs considerable societal costs among survivors to three years and there is substantial deterioration in HRQOL. PMID:22970797

  7. A European multicenter study on the analytical performance of the VERIS HBV assay.

    PubMed

    Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Izopet, Jacques; Lombardi, Alessandra; Mancon, Alessandro; Marcos, Maria Angeles; Sauné, Karine; O Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel

    Hepatitis B viral load monitoring is an essential part of managing patients with chronic hepatitis B infection. Beckman Coulter has developed the VERIS HBV Assay for use on the fully automated Beckman Coulter DxN VERIS Molecular Diagnostics System. OBJECTIVES: To evaluate the analytical performance of the VERIS HBV Assay at multiple European virology laboratories. Precision, analytical sensitivity, negative sample performance, linearity and performance with major HBV genotypes/subtypes for the VERIS HBV Assay were evaluated. Precision showed an SD of 0.15 log10 IU/mL or less for each level tested. Analytical sensitivity determined by probit analysis was between 6.8-8.0 IU/mL. Clinical specificity on 90 unique patient samples was 100.0%. Performance with 754 negative samples demonstrated 100.0% not-detected results, and a carryover study showed no cross contamination. Linearity using clinical samples was shown from 1.23-8.23 log10 IU/mL and the assay detected and showed linearity with major HBV genotypes/subtypes. The VERIS HBV Assay demonstrated comparable analytical performance to other currently marketed assays for HBV DNA monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.
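
    Probit-based analytical sensitivity fits the hit rate against log concentration and reads off the concentration detected with 95% probability. A generic sketch with invented dilution-series counts (not the study's data), assuming a recent statsmodels is available:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Invented dilution series: hits out of 24 replicates per concentration level
conc = np.array([2.0, 4.0, 8.0, 16.0, 32.0])      # IU/mL
hits = np.array([6, 14, 20, 23, 24])
n_rep = np.full_like(hits, 24)

# Binomial GLM with a probit link on log10 concentration
X = sm.add_constant(np.log10(conc))
fam = sm.families.Binomial(link=sm.families.links.Probit())
res = sm.GLM(np.column_stack([hits, n_rep - hits]), X, family=fam).fit()

b0, b1 = res.params
lod95 = 10 ** ((norm.ppf(0.95) - b0) / b1)        # 95% detection concentration
print(f"LoD(95%) ≈ {lod95:.1f} IU/mL")
```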

  8. Year-Long Vertical Velocity Statistics Derived from Doppler Lidar Data for the Continental Convective Boundary Layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, Larry K.; Newsom, Rob K.; Turner, David D.

    One year of Coherent Doppler Lidar (CDL) data collected at the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) site in Oklahoma is analyzed to provide profiles of vertical velocity variance, skewness, and kurtosis for cases of cloud-free convective boundary layers. The variance was scaled by the Deardorff convective velocity scale, which was successful when the boundary layer depth was stationary but failed in situations when the layer was changing rapidly. In this study the data are sorted according to time of day, season, wind direction, surface shear stress, degree of instability, and wind shear across the boundary-layer top. The normalized variance was found to have its peak value near a normalized height of 0.25. The magnitude of the variance changes with season, shear stress, and degree of instability, but was not impacted by wind shear across the boundary-layer top. The skewness was largest in the top half of the boundary layer (with the exception of wintertime conditions). The skewness was found to be a function of the season, shear stress, and wind shear across the boundary-layer top, with larger amounts of shear leading to smaller values. Like skewness, the vertical profile of kurtosis followed a consistent pattern, with peak values near the boundary-layer top (also with the exception of wintertime data). The altitude of the peak values of kurtosis was found to be lower when there was a large amount of wind shear at the boundary-layer top.

  9. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a log-normal random variable. The analytical and simulation results corroborate that an increase in the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity performs better in resisting the channel fading caused by spatial correlation.
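
    Wilkinson's method matches the first two moments of a sum of correlated log-normal terms to a single log-normal. A compact sketch, assuming each term is exp(X_i) with X_i ~ N(mu_i, sigma_i²) and correlation matrix corr:

```python
import numpy as np

def wilkinson_lognormal(mu, sigma, corr):
    """Match E[S] and E[S^2] of S = sum_i exp(X_i), X ~ N(mu, Sigma),
    to one log-normal; returns its (mu_S, sigma_S)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m1 = np.exp(mu + sigma**2 / 2).sum()                       # E[S]
    v = (mu[:, None] + mu[None, :]
         + (sigma[:, None]**2 + sigma[None, :]**2
            + 2 * corr * np.outer(sigma, sigma)) / 2)
    m2 = np.exp(v).sum()                                       # E[S^2]
    s2 = np.log(m2 / m1**2)            # sigma^2 of the matched log-normal
    return np.log(m1) - s2 / 2, np.sqrt(s2)

corr = np.array([[1.0, 0.5], [0.5, 1.0]])   # two correlated sub-channels
print(wilkinson_lognormal([0.0, 0.2], [0.8, 0.6], corr))
```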

  10. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR). It is the ratio of the observed to the expected number of cases in an area, and it has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. This study starts with a brief review of these models, starting with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method. The log-normal model can overcome the SMR problem when there is no observed bladder cancer in an area.

  11. Median infectious dose (ID₅₀) of porcine reproductive and respiratory syndrome virus isolate MN-184 via aerosol exposure.

    PubMed

    Cutler, Timothy D; Wang, Chong; Hoff, Steven J; Kittawornrat, Apisit; Zimmerman, Jeffrey J

    2011-08-05

    The median infectious dose (ID50) of porcine reproductive and respiratory syndrome (PRRS) virus isolate MN-184 was determined for aerosol exposure. In 7 replicates, 3-week-old pigs (n=58) respired 10 L of airborne PRRS virus from a dynamic aerosol toroid (DAT) maintained at -4°C. Thereafter, pigs were housed in isolation and monitored for evidence of infection. Infection occurred at virus concentrations too low to quantify by microinfectivity assays. Therefore, exposure dose was determined using two indirect methods ("calculated" and "theoretical"). "Calculated" virus dose was derived from the concentration of rhodamine B monitored over the exposure sequence. "Theoretical" virus dose was based on the continuous stirred-tank reactor model. The ID50 estimate was modeled on the proportion of pigs that became infected using the probit and logit link functions for both "calculated" and "theoretical" exposure doses. Based on "calculated" doses, the probit and logit ID50 estimates were 1 × 10^(-0.13) TCID50 and 1 × 10^(-0.14) TCID50, respectively. Based on "theoretical" doses, the probit and logit ID50 were 1 × 10^(0.26) TCID50 and 1 × 10^(0.24) TCID50, respectively. For each point estimate, the 95% confidence interval included the other three point estimates. The results indicated that MN-184 was far more infectious than PRRS virus isolate VR-2332, the only other PRRS virus isolate for which ID50 has been estimated for airborne exposure. Since aerosol ID50 estimates are available for only these two isolates, it is uncertain whether one or both of these isolates represent the normal range of PRRS virus infectivity by this route. Copyright © 2011 Elsevier B.V. All rights reserved.
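
    With a linear predictor b0 + b1·log10(dose), both the probit and logit fits place the 50% point where the predictor is zero, so ID50 = 10^(-b0/b1). A sketch on invented exposure data (assuming statsmodels; not the study's dataset):

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit

rng = np.random.default_rng(3)
log_dose = np.repeat([-1.0, -0.5, 0.0, 0.5, 1.0], 12)
# Synthetic binary infection outcomes with a true ID50 near 10^-0.1
infected = (rng.random(log_dose.size)
            < expit(2.0 * (log_dose + 0.1))).astype(float)

X = sm.add_constant(log_dose)
for link in (sm.families.links.Probit(), sm.families.links.Logit()):
    res = sm.GLM(infected, X, family=sm.families.Binomial(link=link)).fit()
    b0, b1 = res.params
    # p = 0.5 exactly where the linear predictor is zero, for either link
    print(f"{type(link).__name__}: ID50 = 10^{-b0 / b1:+.2f} TCID50")
```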

  12. Scale Mixture Models with Applications to Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Qin, Zhaohui S.; Damien, Paul; Walker, Stephen

    2003-11-01

    Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixture of uniform distributions.

  13. LETTER TO THE EDITOR: Exact energy distribution function in a time-dependent harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Robnik, Marko; Romanovski, Valery G.; Stöckmann, Hans-Jürgen

    2006-09-01

    Following a recent work by Robnik and Romanovski (2006 J. Phys. A: Math. Gen. 39 L35, 2006 Open Syst. Inf. Dyn. 13 197-222), we derive an explicit formula for the universal distribution function of the final energies in a time-dependent 1D harmonic oscillator, whose functional form does not depend on the details of the frequency ω(t) and is closely related to the conservation of the adiabatic invariant. The normalized distribution function is P(x) = π⁻¹ (2μ² − x²)^(−1/2), where x = E₁ − Ē₁; E₁ is the final energy, Ē₁ is its average value, and μ² is the variance of E₁. Ē₁ and μ² can be calculated exactly using the WKB approach to all orders.

  14. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.

  15. Gradually truncated log-normal in USA publicly traded firm size distribution

    NASA Astrophysics Data System (ADS)

    Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.

    2007-03-01

    We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sale size is used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. The log-normal distribution has to be gradually truncated after a certain critical value for USA firms. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid with some modification for very large firms. We also consider the possible mechanisms behind this distribution.

  16. Thorium normalization as a hydrocarbon accumulation indicator for Lower Miocene rocks in Ras Ghara area, Gulf of Suez, Egypt

    NASA Astrophysics Data System (ADS)

    El-Khadragy, A. A.; Shazly, T. F.; AlAlfy, I. M.; Ramadan, M.; El-Sawy, M. Z.

    2018-06-01

    An exploration method has been developed that uses surface and aerial gamma-ray spectral measurements to prospect for petroleum in stratigraphic and structural traps. The Gulf of Suez is an important region for studying hydrocarbon potential in Egypt. The thorium normalization technique was applied to the sandstone reservoirs in the region to delineate zones of hydrocarbon potential using the three spectrometric gamma-ray logs (eU, eTh and K% logs). This method was applied to the recorded gamma-ray spectrometric logs for the Rudeis and Kareem Formations in the Ras Ghara oil field, Gulf of Suez, Egypt. The conventional well logs (gamma-ray, resistivity, neutron, density and sonic logs) were analyzed to determine the net pay zones in the study area. The agreement ratios between the thorium normalization technique and the results of the well log analyses are high, so the thorium normalization technique can be used as a guide to hydrocarbon accumulation in the studied reservoir rocks.

  17. Impaired imprinted X chromosome inactivation is responsible for the skewed sex ratio following in vitro fertilization

    PubMed Central

    Tan, Kun; An, Lei; Miao, Kai; Ren, Likun; Hou, Zhuocheng; Tao, Li; Zhang, Zhenni; Wang, Xiaodong; Xia, Wei; Liu, Jinghao; Wang, Zhuqing; Xi, Guangyin; Gao, Shuai; Sui, Linlin; Zhu, De-Sheng; Wang, Shumin; Wu, Zhonghong; Bach, Ingolf; Chen, Dong-bao; Tian, Jianhui

    2016-01-01

    Dynamic epigenetic reprogramming occurs during normal embryonic development at the preimplantation stage. Erroneous epigenetic modifications due to environmental perturbations such as manipulation and culture of embryos during in vitro fertilization (IVF) are linked to various short- or long-term consequences. Among these, the skewed sex ratio, an indicator of reproductive hazards, was reported in bovine and porcine embryos and even human IVF newborns. However, since the first case of sex skewing reported in 1991, the underlying mechanisms remain unclear. We reported herein that sex ratio is skewed in mouse IVF offspring, and this was a result of female-biased peri-implantation developmental defects that were originated from impaired imprinted X chromosome inactivation (iXCI) through reduced ring finger protein 12 (Rnf12)/X-inactive specific transcript (Xist) expression. Compensation of impaired iXCI by overexpression of Rnf12 to up-regulate Xist significantly rescued female-biased developmental defects and corrected sex ratio in IVF offspring. Moreover, supplementation of an epigenetic modulator retinoic acid in embryo culture medium up-regulated Rnf12/Xist expression, improved iXCI, and successfully redeemed the skewed sex ratio to nearly 50% in mouse IVF offspring. Thus, our data show that iXCI is one of the major epigenetic barriers for the developmental competence of female embryos during preimplantation stage, and targeting erroneous epigenetic modifications may provide a potential approach for preventing IVF-associated complications. PMID:26951653

  18. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
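
    The claim is easy to probe numerically: draw sums of positive, right-skewed summands and compare Kolmogorov-Smirnov distances to a fitted normal versus a fitted log-normal. A minimal experiment with arbitrary parameter choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_summands, reps = 20, 5000
# Sums of positive, right-skewed summands (log-normal summands here)
sums = np.exp(rng.normal(0.0, 1.5, size=(reps, n_summands))).sum(axis=1)

for name, dist, params in (
        ("normal", stats.norm, stats.norm.fit(sums)),
        ("log-normal", stats.lognorm, stats.lognorm.fit(sums, floc=0))):
    ks = stats.kstest(sums, dist.cdf, args=params).statistic
    print(f"{name:10s} KS distance = {ks:.3f}")   # log-normal fits far better
```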

  19. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
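
    The underlying methodology can be shown without reproducing powerbox's own API (which is not quoted here): colour Gaussian noise to a target spectrum, then exponentiate with a mean-preserving shift. A bare-bones 2D version with an illustrative P(k) ~ k^-2:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 128
kx = np.fft.fftfreq(N)[:, None]
ky = np.fft.fftfreq(N)[None, :]
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = np.inf                         # suppress the k = 0 (mean) mode

amp = k**-1.0                            # sqrt of an illustrative P(k) ~ k^-2
noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
g = np.fft.ifft2(amp * noise).real
g /= g.std()                             # unit-variance Gaussian field

# Log-normal transform with the mean preserved: delta = exp(g - var/2) - 1
delta = np.exp(g - 0.5) - 1.0
print(round(delta.mean(), 3), round(delta.min(), 3))   # ~0, and always > -1
```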

  20. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    PubMed

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
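
    The contrast is direct to demonstrate: the product of two normal estimates is skewed and kurtotic, so its percentile interval is asymmetric, while the normal-theory (Sobel) interval is forced to be symmetric. A sketch with made-up path coefficients and standard errors:

```python
import numpy as np

a, se_a = 0.30, 0.10     # hypothetical path coefficients and standard errors
b, se_b = 0.25, 0.12

# Normal-theory (Sobel) interval treats a*b as if it were normal
se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
sobel = (a * b - 1.96 * se_ab, a * b + 1.96 * se_ab)

# Distribution-of-the-product interval via Monte Carlo percentiles
rng = np.random.default_rng(6)
prod = rng.normal(a, se_a, 1_000_000) * rng.normal(b, se_b, 1_000_000)
mc = tuple(np.percentile(prod, [2.5, 97.5]))
print("Sobel:", sobel, " product:", mc)   # product interval is asymmetric
```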

  1. A Bayesian estimate of the concordance correlation coefficient with skewed data.

    PubMed

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

    Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that data is normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods has been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
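
    For reference, the sample version of the CCC that is being generalized is a one-liner; the paper's Bayesian estimator replaces the normal likelihood behind this quantity to handle skewed data:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient from sample moments."""
    mx, my = x.mean(), y.mean()
    s_xy = np.mean((x - mx) * (y - my))
    return 2 * s_xy / (x.var() + y.var() + (mx - my) ** 2)

rng = np.random.default_rng(7)
x = rng.normal(10, 2, 100)
y = x + rng.normal(0.5, 1, 100)          # biased, noisy re-measurement of x
print(round(ccc(x, y), 3))
```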

  2. Familial skewed X inactivation: a molecular trait associated with high spontaneous-abortion rate maps to Xq28.

    PubMed Central

    Pegoraro, E; Whitaker, J; Mowery-Rushton, P; Surti, U; Lanasa, M; Hoffman, E P

    1997-01-01

    We report a family ascertained for molecular diagnosis of muscular dystrophy in a young girl, in which preferential activation (≥95% of cells) of the paternal X chromosome was seen in both the proband and her mother. To determine the molecular basis for skewed X inactivation, we studied X-inactivation patterns in peripheral blood and/or oral mucosal cells from 50 members of this family and from a cohort of normal females. We found excellent concordance between X-inactivation patterns in blood and oral mucosal cell nuclei in all females. Of the 50 female pedigree members studied, 16 showed preferential use (≥95% of cells) of the paternal X chromosome; none of 62 randomly selected females showed similarly skewed X inactivation. Skewed X inactivation was maternally inherited in this family. A linkage study using the molecular trait of skewed X inactivation as the scored phenotype localized this trait to Xq28 (DXS1108; maximum LOD score [Zmax] = 4.34, recombination fraction [theta] = 0). Both genotyping of additional markers and FISH of a YAC probe in Xq28 showed a deletion spanning from intron 22 of the factor VIII gene to DXS115-3. This deletion completely cosegregated with the trait (Zmax = 6.92, theta = 0). Comparison of clinical findings between affected and unaffected females in the 50-member pedigree showed a statistically significant increase in spontaneous-abortion rate in the females carrying the trait (P < .02). To our knowledge, this is the first gene-mapping study of abnormalities of X-inactivation patterns and is the first association of a specific locus with recurrent spontaneous abortion in a cytogenetically normal family. The involvement of this locus in cell lethality, cell-growth disadvantage, developmental abnormalities, or the X-inactivation process is discussed. PMID:9245997

  3. Natural thermodynamics

    NASA Astrophysics Data System (ADS)

    Annila, Arto

    2016-02-01

    The principle of increasing entropy is derived from the statistical physics of open systems, assuming that quanta of action, as indivisible basic building blocks, embody everything. According to this tenet, all systems evolve from one state to another either by acquiring quanta from their surroundings or by discarding quanta to the surroundings in order to attain energetic balance in least time. These natural processes result in ubiquitous scale-free patterns: skewed distributions that accumulate in a sigmoid manner and hence span log-log scales mostly as straight lines. Moreover, the equation for least-time motions reveals that evolution is by nature a non-deterministic process. Although the insight into thermodynamics obtained from the notion of quanta in motion yields nothing new, it accentuates that contemporary comprehension is impaired when modeling evolution as a computable process by imposing conservation of energy and thereby ignoring that quanta of action are the carriers of energy from the system to its surroundings.

  4. Probit vs. semi-nonparametric estimation: examining the role of disability on institutional entry for older adults.

    PubMed

    Sharma, Andy

    2017-06-01

    The purpose of this study was to showcase an advanced methodological approach to model disability and institutional entry. Both of these are important areas to investigate given the on-going aging of the United States population. By 2020, approximately 15% of the population will be 65 years and older. Many of these older adults will experience disability and require formal care. A probit analysis was employed to determine which disabilities were associated with admission into an institution (i.e. long-term care). Since this framework imposes strong distributional assumptions, misspecification leads to inconsistent estimators. To overcome such a shortcoming, this analysis extended the probit framework by employing an advanced semi-nonparametric maximum likelihood estimation utilizing Hermite polynomial expansions. Specification tests show semi-nonparametric estimation is preferred over probit. In terms of the estimates, semi-nonparametric ratios equal 42 for cognitive difficulty, 64 for independent living, and 111 for self-care disability, while probit yields much smaller estimates of 19, 30, and 44, respectively. Public health professionals can use these results to better understand why certain interventions have not shown promise. Equally important, healthcare workers can use this research to evaluate which type of treatment plans may delay institutionalization and improve the quality of life for older adults. Implications for rehabilitation: With on-going global aging, understanding the association between disability and institutional entry is important in devising successful rehabilitation interventions. Semi-nonparametric estimation is preferred to probit and shows that ambulatory and cognitive impairments present a high risk of institutional entry (long-term care). Informal caregiving and home-based care require further examination as forms of rehabilitation/therapy for certain types of disabilities.

  5. An Ensemble System Based on Hybrid EGARCH-ANN with Different Distributional Assumptions to Predict S&P 500 Intraday Volatility

    NASA Astrophysics Data System (ADS)

    Lahmiri, S.; Boukadoum, M.

    2015-10-01

    Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under normal, Student's t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, to achieve complementarity. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on mean absolute error and mean squared error, the experimental results show that the proposed ensemble model, used to capture normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intra-day volatility based on one- and five-minute time horizons.
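
    The volatility half of such a hybrid can be sketched with the arch package (assumed installed; the neural-network stage and the ensemble logic are omitted), fitting the same EGARCH specification under the three distributional assumptions named in the abstract:

```python
import numpy as np
from arch import arch_model   # assumes the 'arch' package is available

rng = np.random.default_rng(8)
returns = rng.standard_t(df=5, size=2000) * 0.8   # synthetic percent returns

for dist in ("normal", "t", "ged"):               # the three assumptions
    am = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist=dist)
    res = am.fit(disp="off")
    print(f"{dist:6s} log-likelihood = {res.loglikelihood:.1f}")
```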

  6. Accumulation risk assessment for the flooding hazard

    NASA Astrophysics Data System (ADS)

    Roth, Giorgio; Ghizzoni, Tatiana; Rudari, Roberto

    2010-05-01

    One of the main consequences of demographic and economic development and of the globalization of markets and trade is the accumulation of risks. In most cases, the cumulus of risks arises intuitively from the geographic concentration of a number of vulnerable elements in a single place. For natural events, risk accumulation can be associated not only with intensity but also with an event's extension. In this case, the magnitude can be such that large areas, which may include many regions or even large portions of different countries, are struck by single catastrophic events. Among natural risks, the impact of the flooding hazard cannot be understated. To cope with it, a variety of mitigation actions can be put in place: from the improvement of monitoring and alert systems to the development of hydraulic structures, through land-use restrictions, civil protection, and financial and insurance plans. All of these viable options have social and economic impacts, either positive or negative, whose proper estimation should rely on the assumption of appropriate - present and future - flood risk scenarios. It is therefore necessary to identify proper statistical methodologies, able to describe the multivariate aspects of the involved physical processes and their spatial dependence. In hydrology and meteorology, but also in finance and insurance practice, it was recognized early that the distributions of classical statistical theory (e.g., the normal and gamma families) are of restricted use for modeling multivariate spatial data. Recent research efforts have therefore been directed towards developing statistical models capable of describing the forms of asymmetry manifest in data sets. This holds in particular for the quite frequent case of phenomena whose empirical outcome behaves in a non-normal fashion but still maintains some broad similarity with the multivariate normal distribution. Fruitful approaches were recognized in the use of flexible models, which include the normal distribution as a special or limiting case (e.g., the skew-normal or skew-t distributions). The present contribution constitutes an attempt to provide a better estimation of the joint probability distribution able to describe flood events in a multi-site, multi-basin fashion. This goal will be pursued through the multivariate skew-t distribution, which allows the joint probability distribution to be defined analytically. The performance of the skew-t distribution will be discussed with reference to the Tanaro River in Northwestern Italy. To enhance the characteristics of the correlation structure, both nested and non-nested gauging stations will be selected, with significantly different contributing areas.
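
    Within the flexible family the authors describe, the skew-normal member is available directly in scipy (the skew-t is not); fitting it to skewed annual maxima and reading off a 1% exceedance level gives a minimal univariate illustration on synthetic data, far simpler than the paper's multivariate, multi-site skew-t analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
# Synthetic right-skewed annual flood peaks (m³/s)
peaks = stats.skewnorm.rvs(5, loc=100, scale=40, size=60, random_state=rng)

a, loc, scale = stats.skewnorm.fit(peaks)
q99 = stats.skewnorm.ppf(0.99, a, loc=loc, scale=scale)  # 1% AEP level
print(f"fitted shape a = {a:.2f}, 1% AEP peak ≈ {q99:.0f} m³/s")
```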

  7. Measuring skew in average surface roughness as a function of surface preparation

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.

    2015-08-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money, and allows the science requirements to be better defined. This study characterized the statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to normal and Largest Extreme Value (LEV) distributions and then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
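
    A hedged sketch of the fitting step described above: compare normal and largest extreme value (Gumbel) fits to roughness measurements; the data below are simulated stand-ins for the 81 interferometer sites.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    roughness = stats.gumbel_r.rvs(loc=1.0, scale=0.15, size=81, random_state=rng)

    for name, dist in [("normal", stats.norm), ("LEV (Gumbel)", stats.gumbel_r)]:
        params = dist.fit(roughness)                      # maximum likelihood fit
        ks = stats.kstest(roughness, dist.cdf, args=params)
        print(f"{name}: params={np.round(params, 3)}, KS p-value={ks.pvalue:.3f}")
    ```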

  8. Logit and probit model in toll sensitivity analysis of Solo-Ngawi, Kartasura-Palang Joglo segment based on Willingness to Pay (WTP)

    NASA Astrophysics Data System (ADS)

    Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH

    2017-12-01

    The Solo-Ngawi toll road project is part of the mega-project of Trans Java toll road development initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), the Solo-Ngawi toll management company, needs to determine a toll fare that is in accordance with the business plan. The determination of appropriate toll rates will affect regional economic sustainability and reduce traffic congestion; such policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to determine the toll fare sensitivity of the Solo-Ngawi toll road based on Willingness To Pay (WTP). Primary data were obtained by distributing stated-preference questionnaires to four-wheeled vehicle users on the Kartasura-Palang Joglo arterial road segment. The data were then analysed with logit and probit models. Based on the analysis, it is found that the effect of fare changes on WTP in the binomial logit model is more sensitive than in the probit model under the same travel conditions: the range of tariff changes against WTP values in the binomial logit model is 20% greater than the corresponding range in the probit model. On the other hand, the probability results of the binomial logit model and the binary probit model show no significant difference (less than 1%).
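
    A minimal sketch of the logit-versus-probit comparison on hypothetical stated-preference data (fare levels and a binary accept/reject WTP indicator); the variable names and numbers are assumptions, not the study's records.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    fare = rng.uniform(5, 50, size=300)                       # offered toll fare
    accept = (rng.normal(30, 10, size=300) > fare).astype(int)  # WTP indicator

    X = sm.add_constant(fare)
    logit = sm.Logit(accept, X).fit(disp=0)
    probit = sm.Probit(accept, X).fit(disp=0)

    grid = sm.add_constant(np.linspace(5, 50, 10))
    print("logit  P(accept):", np.round(logit.predict(grid), 2))
    print("probit P(accept):", np.round(probit.predict(grid), 2))
    ```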

  9. Tennis Elbow Diagnosis Using Equivalent Uniform Voltage to Fit the Logistic and the Probit Diseased Probability Models

    PubMed Central

    Lin, Wei-Chun; Lin, Shu-Yuan; Wu, Li-Fu; Guo, Shih-Sian; Huang, Hsiang-Jui; Chao, Pei-Ju

    2015-01-01

    To develop logistic and probit models to analyse the electromyographic (EMG) equivalent uniform voltage (EUV) response for the tenderness of tennis elbow. In total, 78 hands from 39 subjects were enrolled. In this study, the surface EMG (sEMG) signal was obtained by an innovative device with electrodes over the forearm region. The analytical endpoint was defined as Visual Analog Score (VAS) 3+ tenderness of tennis elbow. The logistic and probit diseased probability (DP) models were established for the VAS score and EMG absolute voltage-time histograms (AVTH). TV50 is the threshold equivalent uniform voltage predicting a 50% risk of disease. Twenty-one of 78 samples (27%) developed VAS 3+ tenderness of tennis elbow, reported by the subject and confirmed by the physician. The fitted DP parameters were TV50 = 153.0 mV (CI: 136.3-169.7 mV), γ50 = 0.84 (CI: 0.78-0.90) for the logistic model and TV50 = 155.6 mV (CI: 138.9-172.4 mV), m = 0.54 (CI: 0.49-0.59) for the probit model. When the EUV ≥ 153 mV, the DP of the patient is greater than 50%, and vice versa. The logistic and probit models are valuable tools to predict the DP of VAS 3+ tenderness of tennis elbow. PMID:26380281
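
    A hedged sketch of the underlying idea: fit a logistic disease-probability curve to (EUV, tenderness) pairs and read off the voltage giving 50% probability; the data and names below are illustrative, not the study's, and the study's specific DP parameterization is not reproduced.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    euv = rng.uniform(80, 220, size=78)                      # mV
    p_true = 1 / (1 + np.exp(-(euv - 153.0) / 12.0))
    tender = rng.binomial(1, p_true)                          # VAS 3+ indicator

    res = sm.Logit(tender, sm.add_constant(euv)).fit(disp=0)
    b0, b1 = res.params
    tv50 = -b0 / b1        # logistic curve crosses 0.5 where b0 + b1*EUV = 0
    print(f"TV50 ≈ {tv50:.1f} mV")
    ```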

  10. Bayesian multinomial probit modeling of daily windows of susceptibility for maternal PM2.5 exposure and congenital heart defects

    EPA Science Inventory

    Past epidemiologic studies suggest maternal ambient air pollution exposure during critical periods of the pregnancy is associated with fetal development. We introduce a multinomial probit model that allows for the joint identification of susceptible daily periods during the pregn...

  11. Is coverage a factor in non-Gaussianity of IMF parameters?

    NASA Technical Reports Server (NTRS)

    Ahluwalia, H. S.; Fikani, M. M.

    1995-01-01

    Recently, Feynman and Ruzmaikin (1994) showed that IMF parameters for the 1973 to 1990 period are not log-normally distributed, as previously suggested by Burlaga and King (1979) for data obtained over a shorter time period (1963-75). They studied the first four moments, namely the mean, variance, skewness, and kurtosis. For a Gaussian distribution, moments higher than the variance should vanish. In particular, Feynman and Ruzmaikin obtained very high values of kurtosis during some periods of their analysis. We note that the coverage for IMF parameters is very uneven for the period they analyzed, ranging from less than 40% to greater than 80%. So a question arises as to whether the amount of coverage is a factor in their analysis. We decided to test this for the B(sub z) component of the IMF, since it is an effective geoactive parameter for short-term disturbances. Like them, we used 1-hour averaged data available on the Omnitape. We studied scatter plots of the annual mean values of B(sub z)(nT) and of its kurtosis versus the percent coverage for the year, obtaining correlation coefficients of 0.48 and 0.42, respectively, for the 1973-90 period. The probability of a chance occurrence of these correlation coefficients for 18 pairs of points is less than 8%. As a rough measure of skewness, we determined the percent asymmetry between the areas of the histograms representing the distributions of the positive and negative values of B(sub z) and studied its correlation with the coverage for the year. This analysis yields a correlation coefficient of 0.41. When we extended the analysis to the whole period for which IMF data are available (1963-93), the corresponding correlation coefficients are 0.59, 0.14, and 0.42. Our findings will be presented and discussed.

  12. MRI-guided brachytherapy in locally advanced cervical cancer: Small bowel [Formula: see text] and [Formula: see text] are not predictive of late morbidity.

    PubMed

    Petit, Claire; Dumas, Isabelle; Chargari, Cyrus; Martinetti, Florent; Maroun, Pierre; Doyeux, Kaya; Tailleur, Anne; Haie-Meder, Christine; Mazeron, Renaud

    2016-01-01

    To establish dose-volume effect correlations for late small bowel (SB) toxicities in patients treated for locally advanced cervical cancer with concomitant chemoradiation followed by pulsed-dose-rate MRI-guided adaptive brachytherapy. Patients treated with curative intent and followed prospectively were included. The SB loops close to the CTV were delineated, but no specific dose constraint was applied. The dosimetric data, converted into 2-Gy equivalents, were compared with the occurrence of late morbidity assessed using the CTC-AE 3.0. Dose-effect relationships were assessed using mean-dose comparisons, log-rank tests on event-free periods, and probit analyses. A total of 115 patients with a median followup of 35.5 months were included. The highest grade per patient was Grade 0 for 17, 1 for 75, 2 for 20, and 3 for 3. The mean [Formula: see text] and [Formula: see text] were, respectively, 68.7 ± 13.6 Gy and 85.8 ± 33.1 Gy and did not differ according to event severity (p = 0.47 and p = 0.52), even when comparing Grades 0-1 vs. 2-4 events (68.0 ± 12.4 vs. 71.4 ± 17.7 Gy; p = 0.38 and 83.7 ± 26.4 vs. 94.5 ± 51.9 Gy; p = 0.33). Log-rank tests were performed after splitting the cohort according to four [Formula: see text] levels: >80 Gy, 70-79 Gy, 60-70 Gy, and <60 Gy. No difference was observed for Grades 1-4, Grades 2-4, or Grades 3-4 (p = 0.21-0.52). Probit analyses showed no correlation between the dosimetric parameters and the probability of Grades 1-4, 2-4, or 3-4 events (p = 0.19-0.48). No significant dose-volume effect relationships were demonstrated between the [Formula: see text] and [Formula: see text] and the probability of late SB morbidity. These parameters should not limit the pulsed-dose-rate brachytherapy optimization process. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  13. Testing models of parental investment strategy and offspring size in ants.

    PubMed

    Gilboa, Smadar; Nonacs, Peter

    2006-01-01

    Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions; dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time during which the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and to categorize distribution types. Data from three species of ants strongly support the predicted patterns of dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.

  14. Pan-European comparison of candidate distributions for climatological drought indices, SPI and SPEI

    NASA Astrophysics Data System (ADS)

    Stagge, James; Tallaksen, Lena; Gudmundsson, Lukas; Van Loon, Anne; Stahl, Kerstin

    2013-04-01

    Drought indices are vital to objectively quantify and compare drought severity, duration, and extent across regions with varied climatic and hydrologic regimes. The Standardized Precipitation Index (SPI), a well-reviewed meteorological drought index recommended by the WMO, and its more recent water balance variant, the Standardized Precipitation-Evapotranspiration Index (SPEI), both rely on the selection of univariate probability distributions to normalize the index, allowing comparisons across climates. The SPI, considered a universal meteorological drought index, measures anomalies in precipitation, whereas the SPEI measures anomalies in the climatic water balance (precipitation minus potential evapotranspiration), a more comprehensive measure of water availability that incorporates temperature. Many reviewers recommend the gamma (Pearson Type III) distribution for SPI normalization, while the developers of the SPEI recommend the three-parameter log-logistic distribution, based on point observation validation. Before the SPEI can be implemented at the pan-European scale, it is necessary to further validate the index using a range of candidate distributions to determine sensitivity to distribution selection, identify recommended distributions, and highlight instances where a given distribution may not be valid. This study rigorously compares a suite of candidate probability distributions using the WATCH Forcing Data, a global historical (1958-2001) climate dataset based on the ERA40 reanalysis with 0.5 x 0.5 degree resolution and bias correction based on CRU-TS2.1 observations. Using maximum likelihood estimation, alternative candidate distributions are fit for the SPI and SPEI across the range of European climate zones. When evaluated at this scale, the gamma distribution for the SPI results in negatively skewed values, exaggerating the index severity of extreme dry conditions while decreasing the index severity of extreme high precipitation. This bias is particularly notable for shorter aggregation periods (1-6 months) during the summer months in southern Europe (below 45° latitude), and can partially be attributed to distribution-fitting difficulties in semi-arid regions where monthly precipitation totals cluster near zero. By contrast, the SPEI has the potential to avoid this fitting difficulty because it is not bounded by zero. However, the recommended log-logistic distribution produces index values with less variation than the standard normal distribution. Among the alternative candidate distributions, the best-fit distribution and the distribution parameters vary in space and time, suggesting regional commonalities within hydroclimatic regimes, as discussed further in the presentation.
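
    A compact sketch of the SPI normalization step discussed above: fit a gamma distribution to monthly precipitation totals, then map the fitted CDF through the standard normal quantile function. Real SPI implementations also handle zero-precipitation months with a mixed distribution, which this sketch omits; the data are simulated.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    precip = stats.gamma.rvs(a=2.0, scale=30.0, size=44, random_state=rng)  # mm/month

    shape, loc, scale = stats.gamma.fit(precip, floc=0)   # fix location at zero
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    spi = stats.norm.ppf(cdf)                             # standardized index
    print("SPI range:", spi.min().round(2), "to", spi.max().round(2))
    ```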

  15. Mean velocities and Reynolds stresses upstream of a simulated wing-fuselage juncture

    NASA Technical Reports Server (NTRS)

    Mcmahon, H.; Hubbartt, J.; Kubendran, L. R.

    1983-01-01

    Values of three mean velocity components and six turbulence stresses, measured in a turbulent shear layer upstream of a simulated wing-fuselage juncture and immediately downstream of the start of the juncture, are presented and discussed. Two single-sensor hot-wire probes were used in the measurements. The separated region just upstream of the wing contains an area of reversed flow near the fuselage surface where the turbulence level is high. Outside of this area the flow skews as it passes around the body, and in this skewed region the magnitude and distribution of the turbulent normal and shear stresses within the shear layer are modified slightly by the skewing and deceleration of the flow. A short distance downstream of the wing leading edge the secondary flow vortex is tightly rolled up and redistributes both the mean flow and the turbulence in the juncture. The data acquisition technique employed here allows a hot wire to be used in a reversed flow region to indicate flow direction.

  16. Detection of warfare agents in liquid foods using the brine shrimp lethality assay.

    PubMed

    Lumor, Stephen E; Diez-Gonzalez, Francisco; Labuza, Theodore P

    2011-01-01

    The brine shrimp lethality assay (BSLA) was used for rapid and non-specific detection of biological and chemical warfare agents at concentrations considerably below those that would cause harm to humans. Warfare agents detected include T-2 toxin, trimethylsilyl cyanide, and commercially available pesticides such as dichlorvos, diazinon, dursban, malathion, and parathion. The assay was performed by introducing 50 μL of milk or orange juice contaminated with each analyte into vials containing 10 freshly hatched brine shrimp nauplii in seawater. This was incubated at 28 °C for 24 h, after which mortality was determined. Mortality was converted to probits, and the LC(50) was determined for each analyte by plotting probits of mortality against the log(10) of analyte concentration. Our findings were as follows: (1) the lethal effects of toxins dissolved in milk were observed, with T-2 toxin being the most lethal and malathion the least; (2) except for parathion, the dosage (based on LC(50)) of analyte in a cup of milk (200 mL) consumed by a 6-y-old (20 kg) was less than the respective published rat LD(50) values; and (3) the BSLA was only suitable for detecting toxins dissolved in orange juice if the incubation time was reduced to 6 h. Our results support the application of the BSLA for routine, rapid, and non-specific prescreening of liquid foods for possible sabotage by an employee or an intentional bioterrorist act. Practical Application: The findings of this study strongly indicate that the brine shrimp lethality assay can be adapted for non-specific detection of warfare agents or toxins in food at any point during food production and distribution.
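
    A hedged illustration of the classical probit LC(50) calculation the entry describes: regress probits of mortality on log10 concentration and invert the fitted line at probit 5 (50% mortality); the concentrations and mortalities below are toy numbers, not assay data.

    ```python
    import numpy as np
    from scipy import stats

    conc = np.array([1, 3, 10, 30, 100.0])        # analyte concentration, e.g. mg/L
    mortality = np.array([0.10, 0.25, 0.50, 0.80, 0.95])

    probits = stats.norm.ppf(mortality) + 5       # classical probit scale
    slope, intercept, *_ = stats.linregress(np.log10(conc), probits)
    lc50 = 10 ** ((5 - intercept) / slope)        # probit 5 corresponds to 50% mortality
    print(f"LC50 ≈ {lc50:.1f} mg/L")
    ```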

  17. Modeling Early Postnatal Brain Growth and Development with CT: Changes in the Brain Radiodensity Histogram from Birth to 2 Years.

    PubMed

    Cauley, K A; Hu, Y; Och, J; Yorks, P J; Fielden, S W

    2018-04-01

    The majority of brain growth and development occurs in the first 2 years of life. This study investigated these changes by analysis of the brain radiodensity histogram of head CT scans from the clinical population, 0-2 years of age. One hundred twenty consecutive head CTs with normal findings meeting the inclusion criteria, from children from birth to 2 years, were retrospectively identified from 3 different CT scan platforms. Histogram analysis was performed on brain-extracted images, and the histogram mean, mode, full width at half maximum, skewness, kurtosis, and SD were correlated with subject age. The effects of scan platform were investigated. Normative curves were fitted by polynomial regression analysis. Average total brain volume was 360 cm3 at birth, 948 cm3 at 1 year, and 1072 cm3 at 2 years. Total brain tissue density showed an 11% increase in mean density at 1 year and 19% at 2 years. Brain radiodensity histogram skewness was positive at birth, declining logarithmically in the first 200 days of life. The histogram kurtosis also decreased in the first 200 days to approach a normal distribution. Direct segmentation of the CT images showed that the changes in brain radiodensity histogram skewness correlated with, and can be explained by, a relative increase in gray matter volume and an increase in gray and white matter tissue density that occur during this period of brain maturation. Normative metrics of the brain radiodensity histogram derived from routine clinical head CT images can be used to develop a model of normal brain development. © 2018 by American Journal of Neuroradiology.

  18. Skewness and kurtosis of net baryon-number distributions at small values of the baryon chemical potential

    DOE PAGES

    Bazavov, A.; Ding, H. -T.; Hegde, P.; ...

    2017-10-27

    In this paper, we present results for the ratios of the mean ($M_B$), variance ($\sigma_B^2$), skewness ($S_B$) and kurtosis ($\kappa_B$) of net baryon-number fluctuations obtained in lattice QCD calculations with physical values of the light and strange quark masses. Using next-to-leading order Taylor expansions in the baryon chemical potential we find that qualitative features of these ratios closely resemble the corresponding experimentally measured cumulant ratios of net proton-number fluctuations for beam energies down to $\sqrt{s_{NN}} \geq 19.6$ GeV. We show that the difference in the cumulant ratios for the mean net baryon number, $M_B/\sigma_B^2 = \chi_1^B(T,\mu_B)/\chi_2^B(T,\mu_B)$, and the normalized skewness, $S_B\sigma_B = \chi_3^B(T,\mu_B)/\chi_2^B(T,\mu_B)$, naturally arises in QCD thermodynamics. Moreover, we establish a close relation between the skewness and kurtosis ratios, $S_B\sigma_B^3/M_B = \chi_3^B(T,\mu_B)/\chi_1^B(T,\mu_B)$ and $\kappa_B\sigma_B^2 = \chi_4^B(T,\mu_B)/\chi_2^B(T,\mu_B)$, valid at small values of the baryon chemical potential.

  20. Using rank-order geostatistics for spatial interpolation of highly skewed data in a heavy-metal contaminated site.

    PubMed

    Juang, K W; Lee, D Y; Ellsworth, T R

    2001-01-01

    The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart, and geostatistical interpolation is hindered. In this study, rank-order geostatistics with a standardized rank transformation was used for the spatial interpolation of pollutants with highly skewed distributions in contaminated soils, for cases in which commonly used nonlinear methods, such as logarithmic and normal-score transformations, are not suitable. A real data set of soil Cd concentrations, with great variation and high skewness, from a contaminated site in Taiwan was used for illustration. The spatial dependence of the ranks transformed from Cd concentrations was identified, and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle-point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of the Cd concentration being higher than a given cutoff value can also be estimated using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and of the probabilities of Cd concentrations being higher than the cutoff value can be used simultaneously for the delineation of hazardous areas of contaminated soils.
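
    A simplified sketch of the rank-transform and back-transform steps (the kriging itself would use a geostatistics package such as pykrige, and the paper's middle-point model is approximated here by linear EDF interpolation); the concentrations are simulated.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    cd = stats.lognorm.rvs(s=1.5, scale=1.0, size=200, random_state=rng)  # mg/kg

    std_rank = (stats.rankdata(cd) - 0.5) / cd.size    # standardized ranks in (0, 1)
    # ... krige std_rank in space, yielding an estimate r_hat at a new location ...
    r_hat = 0.90
    sorted_cd = np.sort(cd)
    grid = (np.arange(cd.size) + 0.5) / cd.size
    cd_hat = np.interp(r_hat, grid, sorted_cd)         # back-transform through the EDF
    rank_cut = np.interp(5.0, sorted_cd, grid)         # standardized rank of the cutoff
    print(f"estimated Cd: {cd_hat:.2f} mg/kg; exceeds 5 mg/kg cutoff: {r_hat > rank_cut}")
    ```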

  1. Appropriateness of Probit-9 in development of quarantine treatments for timber and timber commodities

    Treesearch

    Marcus Schortemeyer; Ken Thomas; Robert A. Haack; Adnan Uzunovic; Kelli Hoover; Jack A. Simpson; Cheryl A. Grgurinovic

    2011-01-01

    Following the increasing international phasing out of methyl bromide for quarantine purposes, the development of alternative treatments for timber pests becomes imperative. The international accreditation of new quarantine treatments requires verification standards that give confidence in the effectiveness of a treatment. Probit-9 mortality is a standard for treatment...

  2. Log Normal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of Alpha Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2008-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson-log normal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316

  3. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate for analyzing such data, because durations are nonnegative and skew-distributed. We therefore recommend a statistical methodology specific to duration data and propose one based on Cox mixed models, implemented in the R language. This semiparametric model is flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness of fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions that illustrate the limitations of linear and log-linear mixed models compared with Cox models. The linear models are not validated on our data, whereas the Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
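
    A hedged sketch only: the paper's Cox mixed models are fitted in R. As a rough Python analogue, lifelines' CoxPHFitter with a cluster term gives a marginal Cox fit with robust (clustered) standard errors for the repeated measures, without the random-effect structure proper; the data and column names are invented.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(15)
    n = 120
    speaker = np.repeat(np.arange(12), 10)                # repeated measures per speaker
    condition = rng.integers(0, 2, size=n)
    duration = rng.exponential(scale=np.where(condition == 1, 0.6, 0.4))
    df = pd.DataFrame({"duration": duration, "event": 1,   # all durations fully observed
                       "condition": condition, "speaker": speaker})

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="event",
            cluster_col="speaker", formula="condition")
    print(cph.summary[["coef", "se(coef)", "p"]])
    ```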

  4. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for addressing the non-point source pollution caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed with a log-normal distribution rather than a general normal distribution, avoiding possible deviations in solutions caused by unrealistic parameter assumptions. Agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show the characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results show that the proposed model can help decision makers design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
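
    A worked sketch of the key chance-constraint step under a log-normal assumption (the pollutant-load numbers are illustrative, not from the study): a constraint a·x ≤ B with B ~ LogNormal(μ, σ) and required satisfaction probability p reduces to the deterministic bound a·x ≤ exp(μ + σ·Φ⁻¹(1 − p)).

    ```python
    import numpy as np
    from scipy.stats import norm

    mu, sigma = np.log(120.0), 0.4   # log-scale parameters of the allowable load B
    a = 2.5                          # load generated per unit of activity x
    for p in (0.80, 0.90, 0.95):
        rhs = np.exp(mu + sigma * norm.ppf(1 - p))   # deterministic equivalent of B
        print(f"p={p:.2f}: x <= {rhs / a:.1f}")      # tighter bound as p grows
    ```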

  5. Local correction of quadrupole errors at LHC interaction regions using action and phase jump analysis on turn-by-turn beam position data

    NASA Astrophysics Data System (ADS)

    Cardona, Javier Fernando; García Bonilla, Alba Carolina; Tomás García, Rogelio

    2017-11-01

    This article shows that the effect of all quadrupole errors present in an interaction region with low β* can be modeled by an equivalent magnetic kick, which can be estimated from action and phase jumps found in beam position data. This equivalent kick is used to find the strengths that certain normal and skew quadrupoles located in the IR must have to make an effective correction in that region. Additionally, averaging techniques to reduce noise in beam position data, which allow precise estimates of the equivalent kicks, are presented and mathematically justified. The complete procedure is tested with simulated data obtained from madx and with 2015 LHC experimental data. The analyses performed on the experimental data indicate that the strengths of the IR skew quadrupole correctors and normal quadrupole correctors can be estimated within a 10% uncertainty. Finally, the effect of IR corrections on β* is studied, and a correction scheme that returns this parameter to its design value is proposed.

  6. Robustness of S1 statistic with Hodges-Lehmann for skewed distributions

    NASA Astrophysics Data System (ADS)

    Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping

    2016-10-01

    Analysis of variance (ANOVA) is a commonly used parametric method to test for differences in means across more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the Hodges-Lehmann estimator for the median, and the variance of Hodges-Lehmann or MADn for the default scale estimator, to produce two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
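
    A small sketch of the location estimator swapped into the S1 statistic: the one-sample Hodges-Lehmann estimate is the median of all pairwise means (Walsh averages). The grouped bootstrap test itself is omitted for brevity, and the sample is simulated.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def hodges_lehmann(x):
        """Median of the Walsh averages (x_i + x_j)/2 over i <= j."""
        walsh = [(a + b) / 2 for a, b in combinations_with_replacement(x, 2)]
        return float(np.median(walsh))

    rng = np.random.default_rng(7)
    skewed = rng.chisquare(df=3, size=50)          # a skewed sample
    print("mean:", round(skewed.mean(), 2),
          "median:", round(float(np.median(skewed)), 2),
          "Hodges-Lehmann:", round(hodges_lehmann(skewed), 2))
    ```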

  7. Global-scale analysis of vegetation indices for moderate resolution monitoring of terrestrial vegetation

    NASA Astrophysics Data System (ADS)

    Huete, Alfredo R.; Didan, Kamel; van Leeuwen, Willem J. D.; Vermote, Eric F.

    1999-12-01

    Vegetation indices have emerged as important tools in the seasonal and inter-annual monitoring of the Earth's vegetation. They are radiometric measures of the amount and condition of vegetation. In this study, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) is used to investigate coarse-resolution monitoring of vegetation with multiple indices. A 30-day series of SeaWiFS data, corrected for molecular scattering and absorption, was composited into cloud-free, single-channel reflectance images. The normalized difference vegetation index (NDVI) and an optimized index, the enhanced vegetation index (EVI), were computed over various 'continental' regions. The EVI had a normal distribution of values over the continental set of biomes, while the NDVI was skewed toward higher values and saturated over forested regions. The NDVI resembled the skewed distributions found in the red band, while the EVI resembled the normal distributions found in the NIR band. The EVI minimized smoke contamination over extensive portions of the tropics. As a result, major biome types within continental regions were discriminable in both the EVI imagery and histograms, whereas smoke and saturation considerably degraded the NDVI histogram structure, preventing reliable discrimination of biome types.
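
    The two indices compared above, written out. The EVI coefficients (G=2.5, C1=6, C2=7.5, L=1) are the standard published values rather than anything specific to this study, and the reflectances below are made-up examples.

    ```python
    def ndvi(nir, red):
        return (nir - red) / (nir + red)

    def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
        return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

    nir, red, blue = 0.45, 0.08, 0.04          # dense-canopy surface reflectances
    print("NDVI:", round(ndvi(nir, red), 3))   # tends to saturate over forest
    print("EVI: ", round(evi(nir, red, blue), 3))
    ```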

  8. Functional form and risk adjustment of hospital costs: Bayesian analysis of a Box-Cox random coefficients model.

    PubMed

    Hollenbeak, Christopher S

    2005-10-15

    While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk-adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are most often used to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the Midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to compare alternative model specifications. Rankings of hospital performance were created from the simulation output, and the rankings produced by the Bayesian estimates were compared to the rankings produced by standard models fit using classical methods. The results suggest that, for these data, the most appropriate functional form is not logarithmic but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model did not differ from the ranking produced by maximum likelihood estimates of either the linear or the semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
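
    A short sketch of the transformation family at issue, on simulated cost-like data: scipy's boxcox picks the λ maximizing the normal log-likelihood (not the paper's Bayesian machinery); λ = 0 is the log transform and λ = -1, the paper's best value, is the reciprocal.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    # Data whose reciprocal is roughly normal, so the estimated lambda should be near -1.
    costs = 1.0 / rng.normal(5e-5, 1e-5, size=500).clip(min=1e-5)

    transformed, lam = stats.boxcox(costs)
    print(f"estimated lambda: {lam:.2f}")
    print("skewness before:", round(stats.skew(costs), 2),
          "after:", round(stats.skew(transformed), 2))
    ```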

  9. Using Heteroskedastic Ordered Probit Models to Recover Moments of Continuous Test Score Distributions from Coarsened Data

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Shear, Benjamin R.; Castellano, Katherine E.; Ho, Andrew D.

    2017-01-01

    Test score distributions of schools or demographic groups are often summarized by frequencies of students scoring in a small number of ordered proficiency categories. We show that heteroskedastic ordered probit (HETOP) models can be used to estimate means and standard deviations of multiple groups' test score distributions from such data. Because…

  10. Dynamics of analyst forecasts and emergence of complexity: Role of information disparity

    PubMed Central

    Ahn, Kwangwon

    2017-01-01

    We report complex phenomena arising among financial analysts, who gather information and generate investment advice, and elucidate them with the help of a theoretical model. Understanding how analysts form their forecasts is important for better understanding the financial market. Carrying out a big-data analysis of analyst forecast data from I/B/E/S spanning nearly thirty years, we find skew distributions as evidence for the emergence of complexity, and we show how information asymmetry or disparity affects the way financial analysts form their forecasts. Here regulations, information dissemination throughout a fiscal year, and interactions among financial analysts are regarded as proxies for a lower level of information disparity. We find that financial analysts with better access to information display contrasting behaviors: a few analysts become bolder and issue forecasts independent of other forecasts, while the majority issue more accurate forecasts and flock to each other. The main body of our sample of optimistic forecasts fits a log-normal distribution, with the tail displaying a power law. Based on the Yule process, we propose a model for the dynamics of issuing forecasts that incorporates interactions between analysts. Explaining the empirical data on analyst forecasts well, this provides an appealing instance of understanding social phenomena from the perspective of complex systems. PMID:28498831
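
    A hedged sketch of the distributional claim (log-normal body, power-law tail), assuming the `powerlaw` package; synthetic values stand in for the I/B/E/S forecast data, which are proprietary.

    ```python
    import numpy as np
    import powerlaw

    rng = np.random.default_rng(9)
    data = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

    fit = powerlaw.Fit(data)                     # estimates xmin and the tail exponent
    R, p = fit.distribution_compare("power_law", "lognormal")
    print(f"tail alpha: {fit.power_law.alpha:.2f}, xmin: {fit.power_law.xmin:.2f}")
    print(f"log-likelihood ratio R={R:.2f} (R<0 favors log-normal), p={p:.3f}")
    ```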

  11. WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarpelli, M; Eickhoff, J; Perlman, S

    Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient, giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One-parameter Box-Cox transformations were applied to each of the six gSUVmean distributions, and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: -0.4 to 1.6). Given that the optimal parameter was close to zero (which corresponds to the log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and the underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
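
    A sketch of the optimization described above: scan Box-Cox parameters and keep the one maximizing the Shapiro-Wilk W statistic (scipy's built-in boxcox maximizes the log-likelihood instead, so the scan is done by hand); the SUVs are simulated stand-ins.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    suv = rng.lognormal(mean=0.8, sigma=0.5, size=40)   # stand-in gSUVmean values

    def boxcox_transform(x, lam):
        return np.log(x) if lam == 0 else (x**lam - 1) / lam

    lambdas = np.round(np.arange(-1.0, 2.01, 0.1), 2)
    w_stats = [stats.shapiro(boxcox_transform(suv, l)).statistic for l in lambdas]
    best = lambdas[int(np.argmax(w_stats))]
    print(f"lambda maximizing Shapiro-Wilk W: {best}")   # near 0 implies log transform
    ```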

  12. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    PubMed

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and for the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of the observed empirical type I error to the set alpha level (5, 1 and 0.1%). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by the Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a χ2 distribution with 2 degrees of freedom), the rates of empirical type I error with respect to a set alpha level of 0.01 were 0.80, 4.35 and 7.33 for the original H-E test, the LRT and the Wald test, respectively. For the same alpha level, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (χ2), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel

  13. Disability weights for infectious diseases in four European countries: comparison between countries and across respondent characteristics

    PubMed Central

    Maertens de Noordhout, Charline; Devleesschauwer, Brecht; Salomon, Joshua A; Turner, Heather; Cassini, Alessandro; Colzani, Edoardo; Speybroeck, Niko; Polinder, Suzanne; Kretzschmar, Mirjam E; Havelaar, Arie H; Haagsma, Juanita A

    2018-01-01

    Abstract Background: In 2015, new disability weights (DWs) for infectious diseases were constructed based on data from four European countries. In this paper, we evaluated whether country, age, sex, disease experience status, income and educational level have an impact on these DWs. Methods: We analyzed the paired comparison responses of the European DW study by participants' characteristics with separate probit regression models. To evaluate the effect of participants' characteristics, we performed correlation analyses between countries and within countries by respondent characteristics, and constructed seven probit regression models: a null model and six models containing participants' characteristics. We compared these seven models using the Akaike Information Criterion (AIC). Results: According to AIC, the probit model including country as a covariate was the best model. We found a lower correlation of the probit coefficients between countries and income levels (range rs: 0.97–0.99, P < 0.01) than between age groups (range rs: 0.98–0.99, P < 0.01), educational levels (range rs: 0.98–0.99, P < 0.01), sexes (rs = 0.99, P < 0.01) and disease statuses (rs = 0.99, P < 0.01). Within countries, the lowest correlations of the probit coefficients were between low and high income levels (range rs = 0.89–0.94, P < 0.01). Conclusions: We observed variations in health valuation across countries and within countries between income levels. These observations should be further explored in a systematic way, also in non-European countries. We recommend that future research study the effect of other respondent characteristics on health assessment. PMID:29020343

  14. Seeking alternatives to probit 9 when developing treatments for wood packaging materials under ISPM No. 15

    Treesearch

    R.A. Haack; A. Uzunovic; K. Hoover; J.A. Cook

    2011-01-01

    ISPM No. 15 presents guidelines for treating wood packaging material used in international trade. There are currently two approved phytosanitary treatments: heat treatment and methyl bromide fumigation. New treatments are under development, and are needed given that methyl bromide is being phased out. Probit 9 efficacy (100% mortality of at least 93 613 test organisms...

  15. POLO2: a user's guide to multiple Probit Or LOgit analysis

    Treesearch

    Robert M. Russell; N. E. Savin; Jacqueline L. Robertson

    1981-01-01

    This guide provides instructions for the use of POLO2, a computer program for multivariate probit or logit analysis of quantal response data. As many as 3000 test subjects may be included in a single analysis. Including the constant term, up to nine explanatory variables may be used. Examples illustrate input, output, and uses of the program's special features...

  16. Directional Dependence in Developmental Research

    ERIC Educational Resources Information Center

    von Eye, Alexander; DeShon, Richard P.

    2012-01-01

    In this article, we discuss and propose methods that may be of use to determine direction of dependence in non-normally distributed variables. First, it is shown that standard regression analysis is unable to distinguish between explanatory and response variables. Then, skewness and kurtosis are discussed as tools to assess deviation from…

  17. Box–Cox Transformation and Random Regression Models for Fecal egg Count Data

    PubMed Central

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P.; Sonstegard, Tad S.; Cobuci, Jaime Araujo; Gasbarre, Louis C.

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when the data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. The original data were transformed using an extension of the Box–Cox transformation to approach normality and to estimate (co)variance components. We also propose using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) fit the FEC data best. The results indicated that transformation of FEC data using the Box–Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between weeks 12 and 26 of a 26-week experimental challenge period are genetically correlated. PMID:22303406
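
    A brief sketch of the random-regression ingredient mentioned above: a Legendre-polynomial design matrix over a standardized time axis, of the order (4) the entry found to fit best; the week grid is an illustrative assumption.

    ```python
    import numpy as np

    weeks = np.arange(0, 27)                                          # week of measurement
    t = 2 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1   # map onto [-1, 1]
    Phi = np.polynomial.legendre.legvander(t, 4)                      # columns P0..P4 at t
    print(Phi.shape)                                                  # (27, 5) design matrix
    ```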

  19. The statistical properties and possible causes of polar motion prediction errors

    NASA Astrophysics Data System (ADS)

    Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria

    2015-08-01

    The pole coordinate data predictions from different contributors to the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts, by examining the time series of differences between the predictions and the future IERS pole coordinates data. The mean absolute errors and standard deviations, as well as the skewness and kurtosis, of these differences were computed together with their error bars as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences follow a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Nonzero skewness values result from the oscillating character of these differences at particular prediction lengths, which can be attributed to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase, computed by a combination of the Fourier transform band-pass filter and the Hilbert transform from pole coordinates data as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.

  20. European Multicenter Study on Analytical Performance of Veris HIV-1 Assay.

    PubMed

    Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Hofmann, Jörg; Izopet, Jacques; Kalus, Ulrich; Lombardi, Alessandra; Marcos, Maria Angeles; Mileto, Davide; Sauné, Karine; O'Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel W

    2017-07-01

    The analytical performance of the Veris HIV-1 assay for use on the new, fully automated Beckman Coulter DxN Veris molecular diagnostics system was evaluated at 10 European virology laboratories. The precision, analytical sensitivity, performance with negative samples, linearity, and performance with HIV-1 groups/subtypes were evaluated. The precision of the 1-ml assay showed a standard deviation (SD) of 0.14 log10 copies/ml or less and a coefficient of variation (CV) of ≤6.1% for each level tested. The 0.175-ml assay showed an SD of 0.17 log10 copies/ml or less and a CV of ≤5.2% for each level tested. The analytical sensitivities determined by probit analysis were 19.3 copies/ml for the 1-ml assay and 126 copies/ml for the 0.175-ml assay. The performance with 1,357 negative samples demonstrated 99.2% not-detected results. Linearity using patient samples was shown from 1.54 to 6.93 log10 copies/ml. The assay performed well, detecting and showing linearity with all HIV-1 genotypes tested. The Veris HIV-1 assay demonstrated analytical performance comparable to that of currently marketed HIV-1 assays. (DxN Veris products are Conformité Européenne [CE]-marked in vitro diagnostic products. The DxN Veris product line has not been submitted to the U.S. FDA and is not available in the U.S. market. The DxN Veris molecular diagnostics system is also known as the Veris MDx molecular diagnostics system and the Veris MDx system.) Copyright © 2017 American Society for Microbiology.

  1. Exploration of the psychophysics of a motion displacement hyperacuity stimulus.

    PubMed

    Verdon-Roe, Gay Mary; Westcott, Mark C; Viswanathan, Ananth C; Fitzke, Frederick W; Garway-Heath, David F

    2006-11-01

    To explore the summation properties of a motion-displacement hyperacuity stimulus with respect to stimulus area and luminance, with the goal of applying the results to the development of a motion-displacement test (MDT) for the detection of early glaucoma. A computer-generated line stimulus was presented with displacements randomized between 0 and 40 minutes of arc (min arc). Displacement thresholds (50% seen) were compared for stimuli of equal area but different edge length (orthogonal to the direction of motion) at four retinal locations. MDT thresholds were also recorded at five values of Michelson contrast (25%-84%) for each of five line lengths (11-128 min arc) at a single nasal location (-27,3). Frequency-of-seeing (FOS) curves were generated, and displacement thresholds and interquartile ranges (IQR, 25%-75% seen) were determined by probit analysis. Equivalent displacement thresholds were found for stimuli of equal area but half the edge length. Elevations of thresholds and IQR were demonstrated as line length and contrast were reduced. Equivalent displacement thresholds were also found for stimuli of equivalent energy (stimulus area x [stimulus luminance - background luminance]), in accordance with Ricco's law. There was a linear relationship (slope -0.5) between log MDT threshold and log stimulus energy. Stimulus area, rather than edge length, determined displacement thresholds within the experimental conditions tested. MDT thresholds are linearly related to the square root of the total energy of the stimulus. A new law, the threshold energy-displacement (TED) law, is proposed to apply to MDT summation properties, giving the relationship T = K log E, where T is the MDT threshold, K is a constant, and E is the stimulus energy.

  2. Skew-t fits to mortality data--can a Gaussian-related distribution replace the Gompertz-Makeham as the basis for mortality studies?

    PubMed

    Clark, Jeremy S C; Kaczmarczyk, Mariusz; Mongiało, Zbigniew; Ignaczak, Paweł; Czajkowski, Andrzej A; Klęsk, Przemysław; Ciechanowicz, Andrzej

    2013-08-01

    Gompertz-related distributions have dominated mortality studies for 187 years. However, unrelated distributions also fit mortality data well, competing with the Gompertz and Gompertz-Makeham distributions when applied to data with varying extents of truncation, with no consensus as to preference. In contrast, Gaussian-related distributions are rarely applied, despite the fact that Lexis in 1879 suggested that the normal distribution itself fits well to the right of the mode. The study aims were therefore to compare skew-t fits to Human Mortality Database data with Gompertz-nested distributions, by implementing maximum likelihood estimation functions (mle2, R package bbmle; coding given). Results showed that skew-t fits obtained lower Bayesian information criterion values than Gompertz-nested distributions when applied to low-mortality country data, including the 1711 and 1810 cohorts. As Gaussian-related distributions have now been found to have almost universal application in error theory, one conclusion could be that a Gaussian-related distribution might replace Gompertz-related distributions as the basis for mortality studies.
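
    A hedged sketch of the model comparison (the paper's fits use R's bbmle): fit a Gompertz and a skew-normal to ages at death and compare BIC values. The sample is simulated, and the skew-normal stands in for the paper's skew-t, which scipy does not provide in Azzalini's form.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    ages = stats.skewnorm.rvs(a=-5, loc=88, scale=12, size=2000, random_state=rng)

    def bic(dist, data):
        params = dist.fit(data)                     # maximum likelihood fit
        loglik = np.sum(dist.logpdf(data, *params))
        return len(params) * np.log(len(data)) - 2 * loglik

    print("Gompertz BIC:   ", round(bic(stats.gompertz, ages), 1))
    print("skew-normal BIC:", round(bic(stats.skewnorm, ages), 1))  # lower is better
    ```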

  3. How log-normal is your country? An analysis of the statistical distribution of the exported volumes of products

    NASA Astrophysics Data System (ADS)

    Annunziata, Mario Alberto; Petri, Alberto; Pontuale, Giorgio; Zaccaria, Andrea

    2016-10-01

    We have considered the statistical distributions of the volumes of 1131 products exported by 148 countries. We have found that the form of these distributions is not unique but depends heavily on the level of development of the nation, as expressed by macroeconomic indicators such as GDP, GDP per capita, total export and a recently introduced measure of countries' economic complexity called fitness. We have identified three major classes: a) an incomplete log-normal shape, truncated on the left side, for the less developed countries; b) a complete log-normal, with a wider range of volumes, for nations characterized by intermediate economies; and c) a strongly asymmetric shape for countries with a high degree of development. Finally, the log-normality hypothesis has been checked for the distributions of all 148 countries through different tests, Kolmogorov-Smirnov and Cramér-von Mises, confirming that it cannot be rejected only for the countries of intermediate economy.
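
    A sketch of that final check on synthetic export volumes: fit a log-normal and apply the same two goodness-of-fit tests. Note that p-values are only approximate when the tested parameters are themselves estimated from the data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12)
    volumes = rng.lognormal(mean=10.0, sigma=2.0, size=1131)  # one country's products

    s, loc, scale = stats.lognorm.fit(volumes, floc=0)
    args = (s, loc, scale)
    print("KS  p-value:", round(stats.kstest(volumes, "lognorm", args=args).pvalue, 3))
    print("CvM p-value:", round(stats.cramervonmises(volumes, "lognorm", args=args).pvalue, 3))
    ```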

  4. Scaling laws and properties of compositional data

    NASA Astrophysics Data System (ADS)

    Buccianti, Antonella; Albanese, Stefano; Lima, AnnaMaria; Minolfi, Giulia; De Vivo, Benedetto

    2016-04-01

    Many random processes occur in geochemistry. Accurate predictions of the manner in which elements or chemical species interact with each other are needed to construct models able to treat the presence of random components. The geochemical variables actually observed are the consequence of several events, some of which may be poorly defined or imperfectly understood. Variables tend to change with time and space but, despite their complexity, may share specific common traits, and it is possible to model them stochastically. Description of the frequency distribution of geochemical abundances has been an important target of research, attracting attention for at least 100 years, starting with CLARKE (1889) and continuing with GOLDSCHMIDT (1933) and WEDEPOHL (1955). However, it was AHRENS (1954a,b) who focused on the effect of skewed distributions, for example the log-normal distribution, regarded by him as a fundamental law of geochemistry. Although the modeling of frequency distributions with probabilistic models (for example Gaussian, log-normal, Pareto) has been well discussed in several fields of application, little attention has been devoted to the features of compositional data. When the compositional nature of the data is taken into account, the most typical distribution models for compositions are the Dirichlet and the additive logistic normal (or normal on the simplex) (AITCHISON et al. 2003; MATEU-FIGUERAS et al. 2005; MATEU-FIGUERAS and PAWLOWSKY-GLAHN 2008; MATEU-FIGUERAS et al. 2013). As an alternative, because compositional data have to be transformed from simplex space to real space, coordinates obtained by the ilr transformation or by application of the concept of balance can be analyzed by classical methods (EGOZCUE et al. 2003). In this contribution an approach coherent with the properties of compositional information is proposed and used to investigate the shape of the frequency distribution of compositional data. The purpose is to understand data-generation processes from the perspective of compositional theory. The approach is based on the use of the isometric log-ratio transformation, characterized by theoretical and practical advantages but requiring a more complex geochemical interpretation compared with the investigation of single variables. The proposed methodology directs attention to modeling the frequency distributions of more complex indices, linking all the terms of the composition to better represent the dynamics of geochemical processes. An example of its application is presented and discussed, considering the topsoil geochemistry of the Campania Region (southern Italy). The investigated multi-element data archive contains, among others, Al, As, B, Ba, Ca, Co, Cr, Cu, Fe, K, La, Mg, Mn, Mo, Na, Ni, P, Pb, Sr, Th, Ti, V and Zn (mg/kg) contents determined in 3535 new topsoils, as well as information on coordinates, geology and land cover (BUCCIANTI et al., 2015). AHRENS, L., 1954a. Geochim. Cosmochim. Acta 6, 121-131. AHRENS, L., 1954b. Geochim. Cosmochim. Acta 5, 49-73. AITCHISON, J., et al., 2003. Math. Geol. 35(6), 667-680. BUCCIANTI et al., 2015. Jour. Geoch. Explor. 159, 302-316. CLARKE, F., 1889. Phil. Society of Washington Bull. 11, 131-142. EGOZCUE, J.J., et al., 2003. Math. Geol. 35(3), 279-300. MATEU-FIGUERAS, G., et al., 2005. Stoch. Environ. Res. Risk Ass. 19(3), 205-214.
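
    A compact sketch of the isometric log-ratio (ilr) transform mentioned above, built from the clr transform and a Helmert-style orthonormal basis; the 4-part composition is a toy example, not the Campania data, and many equivalent bases exist.

    ```python
    import numpy as np

    def ilr(x):
        """Map compositions (rows with positive parts) to R^(D-1)."""
        x = np.asarray(x, dtype=float)
        clr = np.log(x) - np.log(x).mean(axis=-1, keepdims=True)  # centered log-ratio
        D = x.shape[-1]
        V = np.zeros((D - 1, D))            # orthonormal basis of the clr hyperplane
        for i in range(1, D):
            V[i - 1, :i] = 1.0 / i
            V[i - 1, i] = -1.0
            V[i - 1] /= np.linalg.norm(V[i - 1])
        return clr @ V.T

    comp = np.array([[60.0, 25.0, 10.0, 5.0]])   # e.g., percentages of four components
    print(ilr(comp))                              # coordinates in R^3
    ```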

  5. The importance of normalisation in the construction of deprivation indices.

    PubMed

    Gilthorpe, M S

    1995-12-01

    Measuring socio-economic deprivation is a major challenge usually addressed through the use of composite indices. This paper aims to clarify the technical details regarding composite index construction. The distribution of some variables, for example unemployment, varies over time, and these variations must be considered when composite indices are periodically re-evaluated. The process of normalisation is examined in detail and particular attention is paid to the importance of symmetry and skewness of the composite variable distributions. Four different solutions of the Townsend index of socioeconomic deprivation are compared to reveal the effects that differing transformation processes have on the meaning or interpretation of the final index values. Differences in the rank order and the relative separation between values are investigated. Constituent variables which have been transformed to yield a more symmetric distribution provide indices that behave similarly, irrespective of the actual transformation methods adopted. Normalisation is seen to be of less importance than the removal of variable skewness. Furthermore, the degree of success of the transformation in removing skewness has a major effect in determining the variation between the individual electoral ward scores. Constituent variables undergoing no transformation produce an index that is distorted by the inherent variable skewness, and this index is not consistent between re-evaluations, either temporally or spatially. Effective transformation of constituent variables should always be undertaken when generating a composite index. The most important aspect is the removal of variable skewness. There is no need for the transformed variables to be normally distributed, only symmetrically distributed, before standardisation. Even where additional parameter weights are to be applied, which significantly alter the final index, appropriate transformation procedures should be adopted for the purpose of consistency over time and between different geographical areas.
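
    As a concrete illustration of the recommended workflow (transform skewed constituent variables toward symmetry, then standardise and combine), here is a minimal Townsend-style sketch; the ward-level variables and their distributions are invented:

    ```python
    # Build a composite deprivation index from transformed, standardised variables.
    import numpy as np

    rng = np.random.default_rng(1)
    n_wards = 200
    unemployment = rng.lognormal(1.0, 0.6, n_wards)   # % unemployed: right-skewed
    overcrowding = rng.lognormal(0.5, 0.5, n_wards)   # % overcrowded households: right-skewed
    no_car = rng.uniform(5, 60, n_wards)              # % with no car: roughly symmetric
    not_owner = rng.uniform(10, 70, n_wards)          # % not owner-occupied: roughly symmetric

    def z(v):
        return (v - v.mean()) / v.std(ddof=1)

    # Log-transform only the skewed variables: symmetry, not normality, is the goal.
    index = z(np.log(unemployment)) + z(np.log(overcrowding)) + z(no_car) + z(not_owner)
    print(index[:5])  # higher scores = more deprived wards
    ```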

  6. Study of Personnel Attrition and Revocation within U.S. Marine Corps Air Traffic Control Specialties

    DTIC Science & Technology

    2012-03-01

    [Fragmentary DTIC record.] The study examines non-cognitive testing administered at Military Entrance Processing Stations (MEPS) and recruit depots, such as the Navy Computer Adaptive Personality Scales (NCAPS), during recruitment, and recommends that an economic analysis be conducted comparing the… Subject terms: Attrition, Revocation, Selection, MOS, Regression, Probit, dProbit, STATA, Statistics, Marginal Effects, ASVAB, AFQT, Composite Scores, Screening, NCAPS.

  7. Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients

    PubMed

    Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil

    2018-03-27

    Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit with available data. Methods: Cox regression and parametric models (Exponential, Weibull, Gompertz, Log normal, Log logistic and Generalized Gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) by using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The Log normal, Log logistic and Generalized Gamma provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The log normal model provides the best fit and is a good substitute for Cox regression. Creative Commons Attribution License
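
    An illustrative AIC comparison of parametric lifetime models with SciPy is sketched below. For simplicity it ignores censoring, which a real survival analysis like the one above must handle (e.g., with the lifelines package); the survival times are simulated:

    ```python
    # Compare candidate parametric models for (uncensored) survival times by AIC.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    t = rng.lognormal(mean=3.0, sigma=0.8, size=120)  # hypothetical survival times (months)

    candidates = {
        "exponential": stats.expon,
        "weibull": stats.weibull_min,
        "log-normal": stats.lognorm,
        "log-logistic": stats.fisk,   # scipy's name for the log-logistic
    }
    for name, dist in candidates.items():
        params = dist.fit(t, floc=0)        # fix location at 0 for lifetime data
        ll = dist.logpdf(t, *params).sum()
        k = len(params) - 1                 # loc was fixed, not estimated
        print(f"{name:12s} AIC = {2 * k - 2 * ll:.1f}")
    ```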

  8. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  9. Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows

    NASA Technical Reports Server (NTRS)

    McKenzie, D.; Savage, S.

    2011-01-01

    The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with either a power-law or a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
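
    The closing remark, that a simple growth scenario can produce a log-normal, can be illustrated with a Gibrat-style multiplicative toy model; the step count and growth variance below are invented, not the paper's scenario:

    ```python
    # Repeated small random fractional growth steps yield ~log-normal sizes.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_sads, n_steps = 5000, 50
    sizes = np.ones(n_sads)
    for _ in range(n_steps):
        sizes *= 1.0 + rng.normal(0.0, 0.05, n_sads)  # multiplicative random growth

    log_sizes = np.log(sizes)
    print("skewness of log(size):", stats.skew(log_sizes))            # ~0 if log-normal
    print("normality test p for log(size):", stats.normaltest(log_sizes).pvalue)
    ```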

  10. A Bayesian Multinomial Probit MODEL FOR THE ANALYSIS OF PANEL CHOICE DATA.

    PubMed

    Fong, Duncan K H; Kim, Sunghoon; Chen, Zhe; DeSarbo, Wayne S

    2016-03-01

    A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we are able to devise a Markov Chain Monte Carlo algorithm to compute our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply our new procedure to consumer purchase data and reanalyze a well-known scanner panel dataset that reveals new substantive insights. In addition, we delineate a number of advantageous features of our proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from our proposed model are quite robust with respect to differing factors across various conditions.
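
    The paper's parameter-expanded panel sampler is beyond a short sketch, but the data-augmentation idea that such MCMC schemes build on can be conveyed by the classic Albert-Chib Gibbs sampler for *binary* probit regression, shown here under a flat prior with simulated data:

    ```python
    # Albert-Chib Gibbs sampler: alternate latent-utility and coefficient draws.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n, p = 300, 2
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    beta_true = np.array([0.5, -1.0])
    y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

    V = np.linalg.inv(X.T @ X)          # posterior covariance under a flat prior
    beta = np.zeros(p)
    draws = []
    for it in range(2000):
        # 1) Draw latent z | beta, y from truncated normals (sign fixed by y).
        mu = X @ beta
        lo = np.where(y == 1, 0.0, -np.inf)
        hi = np.where(y == 1, np.inf, 0.0)
        z = stats.truncnorm.rvs(lo - mu, hi - mu, loc=mu, scale=1.0, random_state=rng)
        # 2) Draw beta | z from a multivariate normal.
        beta = rng.multivariate_normal(V @ X.T @ z, V)
        if it >= 500:                   # discard burn-in
            draws.append(beta)
    print("posterior mean:", np.mean(draws, axis=0), "truth:", beta_true)
    ```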

  11. Semi-nonparametric VaR forecasts for hedge funds during the recent crisis

    NASA Astrophysics Data System (ADS)

    Del Brio, Esther B.; Mora-Valencia, Andrés; Perote, Javier

    2014-05-01

    The need to provide accurate value-at-risk (VaR) forecasting measures has triggered an important literature in econophysics. Although accurate VaR models and methodologies are in particular demand among hedge fund managers, few articles are specifically devoted to implementing new techniques in hedge fund return VaR forecasting. This article advances these issues by comparing the performance of risk measures based on parametric distributions (the normal, Student’s t and skewed-t), semi-nonparametric (SNP) methodologies based on Gram-Charlier (GC) series and the extreme value theory (EVT) approach. Our results show that the normal-, Student’s t- and skewed t-based methodologies fail to forecast hedge fund VaR, whilst the SNP and EVT approaches succeed. We extend these results to the multivariate framework by providing an explicit formula for the GC copula and its density that encompasses the Gaussian copula and accounts for non-linear dependences. We show that the VaR obtained by the meta GC accurately captures portfolio risk and outperforms regulatory VaR estimates obtained through the meta Gaussian and Student’s t distributions.
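
    A sketch of one-day 99% VaR under the normal, Student-t and a moment-based expansion is given below. The returns are simulated, and the Cornish-Fisher quantile is used as a simple moment-based cousin of the GC series discussed above, not the authors' SNP estimator:

    ```python
    # Parametric and moment-adjusted VaR at the 99% level.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    r = stats.t.rvs(df=4, size=2500, random_state=rng) * 0.01   # fat-tailed daily returns

    alpha = 0.01
    mu, sigma = r.mean(), r.std(ddof=1)
    s, k = stats.skew(r), stats.kurtosis(r)                     # k = excess kurtosis

    z = stats.norm.ppf(alpha)
    z_cf = (z + (z**2 - 1) * s / 6 + (z**3 - 3 * z) * k / 24
            - (2 * z**3 - 5 * z) * s**2 / 36)                   # Cornish-Fisher quantile

    var_normal = -(mu + sigma * z)
    var_cf = -(mu + sigma * z_cf)
    nu, loc, sc = stats.t.fit(r)
    var_t = -stats.t.ppf(alpha, nu, loc, sc)
    print(f"VaR(99%): normal={var_normal:.4f}, Cornish-Fisher={var_cf:.4f}, Student-t={var_t:.4f}")
    ```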

  12. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    NASA Astrophysics Data System (ADS)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated by the Maximum Likelihood Estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly predict the categories of the binary response. As a result, the MLE estimators fail to converge, so they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to compare the chance of separation occurring in a binary probit regression model between the MLE method and Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are performed using simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. For larger sample sizes, the probability decreased and was nearly identical between the two methods. Meanwhile, Firth's estimators have smaller RMSEs than the MLEs, especially for smaller sample sizes; for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
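
    A tiny demonstration of separation breaking probit MLE: one covariate perfectly splits the response, so the likelihood has no finite maximiser. Depending on the statsmodels version, fitting either raises a perfect-separation error or returns huge, unstable coefficients (Firth-type penalisation, as studied above, avoids this):

    ```python
    # Probit MLE under perfect separation.
    import numpy as np
    import statsmodels.api as sm

    x = np.linspace(-2, 2, 40)
    y = (x > 0).astype(int)              # perfectly separated at x = 0
    X = sm.add_constant(x)

    try:
        res = sm.Probit(y, X).fit(disp=0, maxiter=200)
        print(res.params)                # slope blows up toward +infinity
    except Exception as e:               # e.g. PerfectSeparationError
        print("MLE failed:", e)
    ```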

  13. Estimating sales and sales market share from sales rank data for consumer appliances

    NASA Astrophysics Data System (ADS)

    Touzani, Samir; Van Buskirk, Robert

    2016-06-01

    Our motivation in this work is to find an adequate probability distribution to fit the sales volumes of different appliances. Such a distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution, and specifically its truncated version, is well suited for this purpose. We demonstrate that sales proxies derived from a calibrated truncated log-normal distribution function can produce realistic estimates of market-average product prices and product attributes. We show that the market averages calculated with sales proxies derived from the calibrated, truncated log-normal distribution provide better market-average estimates than sales proxies estimated with simpler distribution functions.
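
    One plausible way to implement the rank-to-volume translation, sketched under assumed (invented) calibration parameters: map rank r out of N products to the corresponding upper quantile of a left-truncated log-normal:

    ```python
    # Translate sales rank into a sales-volume proxy via truncated log-normal quantiles.
    import numpy as np
    from scipy import stats

    mu, sigma = 6.0, 1.5          # log-scale parameters of the calibrated log-normal (invented)
    lower = 50.0                  # truncation: products selling < 50 units are unlisted (invented)

    def sales_from_rank(rank, n_products):
        p_above = (rank - 0.5) / n_products          # fraction of products selling more
        dist = stats.lognorm(s=sigma, scale=np.exp(mu))
        p_lower = dist.cdf(lower)                    # mass cut off by the truncation
        q = p_lower + (1.0 - p_lower) * (1.0 - p_above)
        return dist.ppf(q)                           # quantile of the truncated distribution

    for r in (1, 10, 100, 1000):
        print(f"rank {r:5d} -> ~{sales_from_rank(r, 5000):,.0f} units")
    ```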

  14. Distribution pattern of urine albumin creatinine ratio and the prevalence of high-normal levels in untreated asymptomatic non-diabetic hypertensive patients.

    PubMed

    Ohmaru, Natsuki; Nakatsu, Takaaki; Izumi, Reishi; Mashima, Keiichi; Toki, Misako; Kobayashi, Asako; Ogawa, Hiroko; Hirohata, Satoshi; Ikeda, Satoru; Kusachi, Shozo

    2011-01-01

    Even high-normal albuminuria is reportedly associated with cardiovascular events. We determined the urine albumin creatinine ratio (UACR) in spot urine samples and analyzed the UACR distribution and the prevalence of high-normal levels. The UACR was determined using immunoturbidimetry in 332 untreated asymptomatic non-diabetic Japanese patients with hypertension and in 69 control subjects. The microalbuminuria and macroalbuminuria levels were defined as a UACR ≥30 and <300 µg/mg·creatinine and a UACR ≥300 µg/mg·creatinine, respectively. The distribution patterns showed a highly skewed distribution for the lower levels, and a common logarithmic transformation produced a close fit to a Gaussian distribution with median, 25th and 75th percentile values of 22.6, 13.5 and 48.2 µg/mg·creatinine, respectively. When a high-normal UACR was set at >20 to <30 µg/mg·creatinine, 19.9% (66/332) of the hypertensive patients exhibited a high-normal UACR. Microalbuminuria and macroalbuminuria were observed in 36.1% (120/332) and 2.1% (7/332) of the patients, respectively. UACR was significantly correlated with the systolic and diastolic blood pressures and the pulse pressure. A stepwise multivariate analysis revealed that these pressures as well as age were independent factors that increased UACR. The UACR distribution exhibited a highly skewed pattern, with approximately 60% of untreated, non-diabetic hypertensive patients exhibiting a high-normal or larger UACR. Both hypertension and age are independent risk factors that increase the UACR. The present study indicated that a considerable percentage of patients require anti-hypertensive drugs with antiproteinuric effects at the start of treatment.

  15. Assessment of the hygienic performances of hamburger patty production processes.

    PubMed

    Gill, C O; Rahn, K; Sloan, K; McMullen, L M

    1997-05-20

    The hygienic conditions of the hamburger patties collected from three patty manufacturing plants and six retail outlets were examined. At each manufacturing plant a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three, two or one retail outlet, respectively, 25 samples from frozen, chilled or both frozen and chilled patties were collected at random. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean log (x) and standard deviation (s) were calculated for the log10 values of each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi2 statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi2 statistic was calculable for 32 of the 39 sets. Four of the sets gave chi2 values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6 and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively. The findings indicate that the general hygienic condition of hamburger patties could be improved by producing them only from manufacturing beef of superior hygienic quality, and by better management of chilled patties at retail outlets.
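
    For a log-normal count distribution, the log of the arithmetic mean (log A) follows directly from the mean and standard deviation of the log10 counts: log10 A = x + (ln 10 / 2)·s², i.e., approximately x + 1.1513·s². A one-line check, with illustrative values:

    ```python
    # Recover log A of a log-normal from the mean (x) and SD (s) of log10 counts.
    import numpy as np

    def log_arithmetic_mean(x_bar, s):
        return x_bar + (np.log(10) / 2.0) * s**2

    # Hypothetical set of 25 log10 total counts with mean 4.5 and SD 0.6:
    print(f"log A = {log_arithmetic_mean(4.5, 0.6):.2f}")  # ~4.91, above the mean log
    ```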

  16. Measuring Resistance to Change at the Within-Session Level

    ERIC Educational Resources Information Center

    Tonneau, Francois; Rios, Americo; Cabrera, Felipe

    2006-01-01

    Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases…

  17. The Effects of Designated Pollutants on Plants

    DTIC Science & Technology

    1978-11-01

    [Fragmentary DTIC record; only figure captions and a species list are recoverable.] Probit analysis of five plant species (petunia, bean, radish, salvia and tomato) exposed to HCl. Species studied include French dwarf double marigold (Tagetes patula L., 'Goldie'), American marigold (Tagetes erecta L., 'Senator Dirksen'), petunia (Petunia hybrida Vilm., 'White Cascade'), and radish. Figure 21 plots probit analysis against HCl concentration (mg/m³) for 16-day petunia and 25-day… [remainder of caption lost].

  18. Development and Implementation of a Telecommuting Evaluation Framework, and Modeling the Executive Telecommuting Adoption Process

    NASA Astrophysics Data System (ADS)

    Vora, V. P.; Mahmassani, H. S.

    2002-02-01

    This work proposes and implements a comprehensive evaluation framework to document the telecommuter, organizational, and societal impacts of telecommuting through telecommuting programs. Evaluation processes and materials within the outlined framework are also proposed and implemented. As the first component of the evaluation process, the executive survey is administered within a public sector agency. The survey data is examined through exploratory analysis and is compared to a previous survey of private sector executives. The ordinal probit, dynamic probit, and dynamic generalized ordinal probit (DGOP) models of telecommuting adoption are calibrated to identify factors which significantly influence executive adoption preferences and to test the robustness of such factors. The public sector DGOP model of executive willingness to support telecommuting under different program scenarios is compared with an equivalent private sector DGOP model. Through the telecommuting program, a case study of telecommuting travel impacts is performed to further substantiate research.
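
    As a flavor of the simplest of these specifications, here is a minimal ordered-probit fit using statsmodels' OrderedModel (available in recent releases); the three-level willingness outcome and single covariate are simulated, not the executive survey data:

    ```python
    # Ordered probit: latent index plus estimated thresholds.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(6)
    n = 500
    x = rng.normal(size=n)
    latent = 0.8 * x + rng.normal(size=n)
    y = pd.Series(pd.cut(latent, [-np.inf, -0.5, 0.5, np.inf],
                         labels=["unwilling", "neutral", "willing"], ordered=True))

    res = OrderedModel(y, x[:, None], distr="probit").fit(method="bfgs", disp=0)
    print(res.summary())   # slope near 0.8 plus two estimated thresholds
    ```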

  19. Empirical analysis on the runners' velocity distribution in city marathons

    NASA Astrophysics Data System (ADS)

    Lin, Zhenquan; Meng, Fan

    2018-01-01

    In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility. In this paper, we investigated empirically the velocity distributions of finishers in the New York City, Chicago, Berlin and London marathons. By statistical analyses of the finish-time records, we captured some statistical features of human behaviors in marathons: (1) the velocity distributions of all finishers, and of the subset of finishers in the fastest age group, both follow a log-normal distribution; (2) in the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) the intensity of the competition, described by the root-mean-square value of the rank changes of all runners, weakens from the initial to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition strengthens again in the last course of the middle stage, a transition from Gaussian back to log-normal follows at the last stage. This study may enrich research on human mobility patterns and draw attention to the velocity features of human mobility.

  20. Far-infrared properties of cluster galaxies

    NASA Technical Reports Server (NTRS)

    Bicay, M. D.; Giovanelli, R.

    1987-01-01

    Far-infrared properties are derived for a sample of over 200 galaxies in seven clusters: A262, Cancer, A1367, A1656 (Coma), A2147, A2151 (Hercules), and Pegasus. The IR-selected sample consists almost entirely of IR normal galaxies, with Log of L(FIR) = 9.79 solar luminosities, Log of L(FIR)/L(B) = 0.79, and Log of S(100 microns)/S(60 microns) = 0.42. None of the sample galaxies has Log of L(FIR) greater than 11.0 solar luminosities, and only one has a FIR-to-blue luminosity ratio greater than 10. No significant differences are found in the FIR properties of HI-deficient and HI-normal cluster galaxies.

  1. A Comparison of Limited-Information and Full-Information Methods in M"plus" for Estimating Item Response Theory Parameters for Nonnormal Populations

    ERIC Educational Resources Information Center

    DeMars, Christine E.

    2012-01-01

    In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…

  2. 77 FR 24845 - Approval and Promulgation of Implementation Plans; South Dakota; Regional Haze State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-26

    ... various pollution controls in its BART analysis for Big Stone I, its cost impact analysis is skewed in... took into account the State's consideration of environmental impacts when reviewing the Big Stone I SO... shutdown are part of normal operations at facilities like Big Stone, and because these emissions impact...

  3. Evaluation of a New Mean Scaled and Moment Adjusted Test Statistic for SEM

    ERIC Educational Resources Information Center

    Tong, Xiaoxiao; Bentler, Peter M.

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and 2 well-known robust test…

  4. Examination of Polytomous Items' Psychometric Properties According to Nonparametric Item Response Theory Models in Different Test Conditions

    ERIC Educational Resources Information Center

    Sengul Avsar, Asiye; Tavsancil, Ezel

    2017-01-01

    This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets--three different test lengths (10, 20 and 30 items), three sample distributions (normal, right and left skewed) and three samples sizes (100, 250 and 500)--were generated by conducting 20…

  5. The Pattern of Visual Fixation Eccentricity and Instability in Optic Neuropathy and Its Spatial Relationship to Retinal Ganglion Cell Layer Thickness.

    PubMed

    Mallery, Robert M; Poolman, Pieter; Thurtell, Matthew J; Wang, Jui-Kai; Garvin, Mona K; Ledolter, Johannes; Kardon, Randy H

    2016-07-01

    The purpose of this study was to assess whether clinically useful measures of fixation instability and eccentricity can be derived from retinal tracking data obtained during optical coherence tomography (OCT) in patients with optic neuropathy (ON) and to develop a method for relating fixation to the retinal ganglion cell complex (GCC) thickness. Twenty-nine patients with ON underwent macular volume OCT with 30 seconds of confocal scanning laser ophthalmoscope (cSLO)-based eye tracking during fixation. Kernel density estimation quantified fixation instability and fixation eccentricity from the distribution of fixation points on the retina. Preferred ganglion cell layer loci (PGCL) and their relationship to the GCC thickness map were derived, accounting for radial displacement of retinal ganglion cell soma from their corresponding cones. Fixation instability was increased in ON eyes (0.21 deg2) compared with normal eyes (0.06982 deg2; P < 0.001), and fixation eccentricity was increased in ON eyes (0.48°) compared with normal eyes (0.24°; P = 0.03). Fixation instability and eccentricity each correlated moderately with logMAR acuity and were highly predictive of central visual field loss. Twenty-six of 35 ON eyes had PGCL skewed toward local maxima of the GCC thickness map. Patients with bilateral dense central scotomas had PGCL in homonymous retinal locations with respect to the fovea. Fixation instability and eccentricity measures obtained during cSLO-OCT assess the function of perifoveal retinal elements and predict central visual field loss in patients with ON. A model relating fixation to the GCC thickness map offers a method to assess the structure-function relationship between fixation and areas of preserved GCC in patients with ON.

  6. The Pattern of Visual Fixation Eccentricity and Instability in Optic Neuropathy and Its Spatial Relationship to Retinal Ganglion Cell Layer Thickness

    PubMed Central

    M. Mallery, Robert; Poolman, Pieter; J. Thurtell, Matthew; Wang, Jui-Kai; K. Garvin, Mona; Ledolter, Johannes; Kardon, Randy H.

    2016-01-01

    Purpose The purpose of this study was to assess whether clinically useful measures of fixation instability and eccentricity can be derived from retinal tracking data obtained during optical coherence tomography (OCT) in patients with optic neuropathy (ON) and to develop a method for relating fixation to the retinal ganglion cell complex (GCC) thickness. Methods Twenty-nine patients with ON underwent macular volume OCT with 30 seconds of confocal scanning laser ophthalmoscope (cSLO)-based eye tracking during fixation. Kernel density estimation quantified fixation instability and fixation eccentricity from the distribution of fixation points on the retina. Preferred ganglion cell layer loci (PGCL) and their relationship to the GCC thickness map were derived, accounting for radial displacement of retinal ganglion cell soma from their corresponding cones. Results Fixation instability was increased in ON eyes (0.21 deg2) compared with normal eyes (0.06982 deg2; P < 0.001), and fixation eccentricity was increased in ON eyes (0.48°) compared with normal eyes (0.24°; P = 0.03). Fixation instability and eccentricity each correlated moderately with logMAR acuity and were highly predictive of central visual field loss. Twenty-six of 35 ON eyes had PGCL skewed toward local maxima of the GCC thickness map. Patients with bilateral dense central scotomas had PGCL in homonymous retinal locations with respect to the fovea. Conclusions Fixation instability and eccentricity measures obtained during cSLO-OCT assess the function of perifoveal retinal elements and predict central visual field loss in patients with ON. A model relating fixation to the GCC thickness map offers a method to assess the structure–function relationship between fixation and areas of preserved GCC in patients with ON. PMID:27409502

  7. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which combines the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model can be inappropriate summaries, and the proposed model helps reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
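
    A bare-bones frequentist illustration of the idea (not the authors' Bayesian model): Box-Cox-transform skewed study estimates, summarise on the transformed scale, and back-transform the median and a crude prediction interval. A real analysis would also carry the within-study variances through the transformation, which this sketch omits:

    ```python
    # Box-Cox transform, summarise, back-transform.
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(7)
    effects = rng.lognormal(mean=0.5, sigma=0.6, size=20)   # skewed study estimates (> 0)

    z, lam = stats.boxcox(effects)                 # transformed values and fitted lambda
    m, sd = z.mean(), z.std(ddof=1)
    pi_z = np.array([m - 1.96 * sd, m + 1.96 * sd])  # prediction interval on the z-scale

    print("lambda:", round(lam, 2))
    print("back-transformed median:", inv_boxcox(m, lam))
    print("back-transformed 95% PI:", inv_boxcox(pi_z, lam))  # asymmetric, as expected
    ```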

  8. Higher order moments, structure functions and spectral ratios in near- and far-wakes of a wind turbine array

    NASA Astrophysics Data System (ADS)

    Ali, Naseem; Aseyev, A.; McCraney, J.; Vuppuluri, V.; Abbass, O.; Al Jubaree, T.; Melius, M.; Cal, R. B.

    2014-11-01

    Hot-wire measurements obtained in a 3 × 3 wind turbine array boundary layer are utilized to analyze higher-order statistics, including skewness and kurtosis, as well as ratios of structure functions and spectra. The ratios consist of wall-normal to streamwise components for both quantities. The aim is to understand the degree of anisotropy in the near- and far-wakes, with profiles considered at one and five rotor diameters downstream, respectively. The skewness at the top tip is negative for both wakes, while below the turbine canopy these terms are positive. The kurtosis shows a Gaussian behavior in the near-wake immediately at hub-height. In addition, the passage of the rotor, in tandem with the shear layer at the top tip, produces relatively large differences in the fourth-order moment. The second-order structure function and spectral ratios are found to exhibit anisotropic behavior at the top and bottom tips at large scales. Mixed structure functions and co-spectra are also considered in the context of isotropy.
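
    A sketch of the one-point moments and a second-order structure function from a velocity series; the signals below are synthetic stand-ins for the hot-wire u (streamwise) and v (wall-normal) components, and the lags are in samples rather than physical separations:

    ```python
    # Skewness/kurtosis and a second-order structure-function ratio.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    u = np.cumsum(rng.normal(size=20000)) * 0.01 + rng.normal(size=20000)  # correlated + noise
    v = rng.normal(size=20000)                                             # uncorrelated noise

    print("skewness(u):", stats.skew(u), " kurtosis(u):", stats.kurtosis(u, fisher=False))

    def S2(x, r):
        """Second-order structure function <(x(t+r) - x(t))^2> at lag r (samples)."""
        d = x[r:] - x[:-r]
        return np.mean(d * d)

    for r in (1, 10, 100):
        # Isotropy predicts a transverse-to-longitudinal ratio of 4/3 in an inertial range.
        print(f"r={r:4d}  S2_v/S2_u = {S2(v, r) / S2(u, r):.3f}")
    ```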

  9. Differences in Online Consumer Ratings of Health Care Providers Across Medical, Surgical, and Allied Health Specialties: Observational Study of 212,933 Providers.

    PubMed

    Daskivich, Timothy; Luu, Michael; Noah, Benjamin; Fuller, Garth; Anger, Jennifer; Spiegel, Brennan

    2018-05-09

    Health care consumers are increasingly using online ratings to select providers, but differences in the distribution of scores across specialties and the skew of the data have the potential to mislead consumers about the interpretation of ratings. The objective of our study was to determine whether distributions of consumer ratings differ across specialties and to provide specialty-specific data to assist consumers and clinicians in interpreting ratings. We sampled 212,933 health care providers rated on the Healthgrades consumer ratings website, representing 29 medical specialties (n=128,678), 15 surgical specialties (n=72,531), and 6 allied health (nonmedical, nonnursing) professions (n=11,724) in the United States. We created boxplots depicting distributions and tested the normality of overall patient satisfaction scores. We then determined the specialty-specific percentile rank for scores across groupings of specialties and individual specialties. Allied health providers had higher median overall satisfaction scores (4.5, interquartile range [IQR] 4.0-5.0) than physicians in medical specialties (4.0, IQR 3.3-4.5) and surgical specialties (4.2, IQR 3.6-4.6, P<.001). Overall satisfaction scores were highly left skewed for all specialties (skewness between -0.5 and 0.5 is conventionally considered near-symmetric), but skewness was greatest among allied health providers (-1.23, 95% CI -1.280 to -1.181), followed by surgical (-0.77, 95% CI -0.787 to -0.755) and medical specialties (-0.64, 95% CI -0.648 to -0.628). As a result of the skewness, the percentages of overall satisfaction scores less than 4 were only 23% for allied health, 37% for surgical specialties, and 50% for medical specialties. Percentile ranks for overall satisfaction scores varied across specialties; percentile ranks for scores of 2 (0.7%, 2.9%, 0.8%), 3 (5.8%, 16.6%, 8.1%), 4 (23.0%, 50.3%, 37.3%), and 5 (63.9%, 89.5%, 86.8%) differed for allied health, medical specialties, and surgical specialties, respectively. Online consumer ratings of health care providers are highly left skewed, fall within narrow ranges, and differ by specialty, which precludes meaningful interpretation by health care consumers. Specialty-specific percentile ranks may help consumers to more meaningfully assess online physician ratings. ©Timothy Daskivich, Michael Luu, Benjamin Noah, Garth Fuller, Jennifer Anger, Brennan Spiegel. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 09.05.2018.

  10. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis

    PubMed Central

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Background: Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. Methods: In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by the stratified and cluster sampling; relevant variables were measured and for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models and SPSS software version 21 were used. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. Results: The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Conclusion: Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended. PMID:26793655

  11. Prediction of unwanted pregnancies using logistic regression, probit regression and discriminant analysis.

    PubMed

    Ebrahimzadeh, Farzad; Hajizadeh, Ebrahim; Vahabi, Nasim; Almasian, Mohammad; Bakhteyar, Katayoon

    2015-01-01

    Unwanted pregnancy not intended by at least one of the parents has undesirable consequences for the family and the society. In the present study, three classification models were used and compared to predict unwanted pregnancies in an urban population. In this cross-sectional study, 887 pregnant mothers referring to health centers in Khorramabad, Iran, in 2012 were selected by the stratified and cluster sampling; relevant variables were measured and for prediction of unwanted pregnancy, logistic regression, discriminant analysis, and probit regression models and SPSS software version 21 were used. To compare these models, indicators such as sensitivity, specificity, the area under the ROC curve, and the percentage of correct predictions were used. The prevalence of unwanted pregnancies was 25.3%. The logistic and probit regression models indicated that parity and pregnancy spacing, contraceptive methods, household income and number of living male children were related to unwanted pregnancy. The performance of the models based on the area under the ROC curve was 0.735, 0.733, and 0.680 for logistic regression, probit regression, and linear discriminant analysis, respectively. Given the relatively high prevalence of unwanted pregnancies in Khorramabad, it seems necessary to revise family planning programs. Despite the similar accuracy of the models, if the researcher is interested in the interpretability of the results, the use of the logistic regression model is recommended.
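
    Under the same design as the study above (binary outcome, several covariates), a minimal sketch of the three-model comparison by area under the ROC curve might look as follows; the data are simulated, not the Khorramabad survey:

    ```python
    # Compare logistic regression, probit regression and LDA by ROC AUC.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(9)
    n = 800
    X = rng.normal(size=(n, 3))
    p = 1.0 / (1.0 + np.exp(-(0.4 + X @ np.array([1.0, -0.7, 0.3]))))
    y = rng.binomial(1, p)

    Xc = sm.add_constant(X)
    for name, model in [("logit", sm.Logit(y, Xc)), ("probit", sm.Probit(y, Xc))]:
        scores = model.fit(disp=0).predict(Xc)
        print(f"{name:6s} AUC = {roc_auc_score(y, scores):.3f}")

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(f"LDA    AUC = {roc_auc_score(y, lda.predict_proba(X)[:, 1]):.3f}")
    ```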

  12. Using fixed-parameter and random-parameter ordered regression models to identify significant factors that affect the severity of drivers' injuries in vehicle-train collisions.

    PubMed

    Dabbour, Essam; Easa, Said; Haider, Murtaza

    2017-10-01

    This study attempts to identify significant factors that affect the severity of drivers' injuries when colliding with trains at railroad grade crossings by analyzing the individual-specific heterogeneity related to those factors over a period of 15 years. Both fixed-parameter and random-parameter ordered regression models were used to analyze records of all vehicle-train collisions that occurred in the United States from January 1, 2001 to December 31, 2015. For the fixed-parameter ordered models, both probit and negative log-log link functions were used. The latter function accounts for the fact that lower injury severity levels are more probable than higher ones. Separate models were developed for heavy and light-duty vehicles. Higher train and vehicle speeds, female drivers, and young drivers (below the age of 21 years) were found to be consistently associated with higher severity of drivers' injuries for both heavy and light-duty vehicles. Furthermore, favorable weather, light-duty trucks (including pickup trucks, panel trucks, mini-vans, vans, and sports-utility vehicles), and senior drivers (above the age of 65 years) were found to be consistently associated with higher severity of drivers' injuries for light-duty vehicles only. All other factors (e.g., air temperature, the type of warning devices, darkness conditions, and highway pavement type) were found to be temporally unstable, which may explain the conflicting findings of previous studies on those factors. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A Nanoflare-Based Cellular Automaton Model and the Observed Properties of the Coronal Plasma

    NASA Technical Reports Server (NTRS)

    Lopez-Fuentes, Marcelo; Klimchuk, James Andrew

    2016-01-01

    We use the cellular automaton model described in Lopez Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%-15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  14. A NANOFLARE-BASED CELLULAR AUTOMATON MODEL AND THE OBSERVED PROPERTIES OF THE CORONAL PLASMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuentes, Marcelo López; Klimchuk, James A., E-mail: lopezf@iafe.uba.ar

    2016-09-10

    We use the cellular automaton model described in López Fuentes and Klimchuk to study the evolution of coronal loop plasmas. The model, based on the idea of a critical misalignment angle in tangled magnetic fields, produces nanoflares of varying frequency with respect to the plasma cooling time. We compare the results of the model with active region (AR) observations obtained with the Hinode/XRT and SDO/AIA instruments. The comparison is based on the statistical properties of synthetic and observed loop light curves. Our results show that the model reproduces the main observational characteristics of the evolution of the plasma in AR coronal loops. The typical intensity fluctuations have amplitudes of 10%–15% both for the model and the observations. The sign of the skewness of the intensity distributions indicates the presence of cooling plasma in the loops. We also study the emission measure (EM) distribution predicted by the model and obtain slopes in log(EM) versus log(T) between 2.7 and 4.3, in agreement with published observational values.

  15. Dose-volume effects in pathologic lymph nodes in locally advanced cervical cancer.

    PubMed

    Bacorro, Warren; Dumas, Isabelle; Escande, Alexandre; Gouy, Sebastien; Bentivegna, Enrica; Morice, Philippe; Haie-Meder, Christine; Chargari, Cyrus

    2018-03-01

    In cervical cancer patients, dose-volume relationships have been demonstrated for the tumor and organs-at-risk, but not for pathologic nodes. The nodal control probability (NCP) according to dose/volume parameters was investigated. Patients with node-positive cervical cancer treated curatively with external beam radiotherapy (EBRT) and image-guided brachytherapy (IGABT) were identified. Nodal doses during EBRT, IGABT and boost were converted to 2-Gy equivalents (α/β = 10 Gy) and summed. Pathologic nodes were followed individually from diagnosis to relapse. Statistical analyses comprised log-rank tests (univariate analyses), the Cox proportional hazards model (factors with p ≤ 0.1 in univariate analyses) and probit analyses. A total of 108 patients with 254 unresected pathologic nodes were identified. The mean nodal volume at diagnosis was 3.4 ± 5.8 cm³. The mean total nodal EQD2 dose was 55.3 ± 5.6 Gy. Concurrent chemotherapy was given in 96%. With a median follow-up of 33.5 months, 20 patients (18.5%) experienced relapse in nodes considered pathologic at diagnosis. The overall nodal recurrence rate was 9.1% (23/254). On univariate analyses, nodal volume (threshold: 3 cm³, p < .0001) and lymph node dose (≥57.5 Gy, α/β = 10, p = .039) were significant for nodal control. The use of a simultaneous boost was borderline significant (p = .07). On multivariate analysis, volume (HR = 8.2, 4.0-16.6, p < .0001) and dose (HR = 2, 1.05-3.9, p = .034) remained independent factors. Probit analysis combining dose and volume showed significant relationships with NCP, with an increasing gap between the curves for higher nodal volumes. A nodal dose-volume effect on NCP is demonstrated for the first time, with increasing NCP benefit of additional dose to higher-volume nodes. Copyright © 2018 Elsevier Inc. All rights reserved.
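
    A sketch of a probit dose-response fit for nodal control, in the spirit of the probit analyses above; the doses and outcomes are simulated, and the link class is named Probit() in recent statsmodels releases:

    ```python
    # Probit GLM of nodal control vs. dose, with a TCD50-style summary.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(10)
    dose = rng.uniform(45, 65, 250)                   # EQD2 to each node (Gy), invented
    controlled = rng.binomial(1, norm.cdf((dose - 50.0) / 5.0))   # 1 = node controlled

    X = sm.add_constant(dose)
    fit = sm.GLM(controlled, X,
                 family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
    print(fit.params)                                 # intercept, slope on the probit scale
    d50 = -fit.params[0] / fit.params[1]              # dose giving NCP = 50%
    print(f"dose for 50% nodal control: {d50:.1f} Gy")
    ```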

  16. Cocoa Farmers’ Compliance with Safety Precautions in Spraying Agrochemicals and Use of Personal Protective Equipment (PPE) in Cameroon

    PubMed Central

    2018-01-01

    The inability of farmers to comply with essential precautions in the course of spraying agrochemicals remains a policy dilemma, especially in developing countries. The objectives of this paper were to assess the compliance of cocoa farmers with agrochemical safety measures and to analyse the factors explaining cocoa farmers’ involvement in the practice of reusing agrochemical containers and their wearing of personal protective equipment (PPE). Data were collected with structured questionnaires from 667 cocoa farmers from the Centre and South West regions in Cameroon. Data analyses were carried out with Probit regression and Negative Binomial regression models. The results showed that average cocoa farm sizes were 3.55 ha and 2.82 ha in the South West and Centre regions, respectively, and 89.80% and 42.64% complied with manufacturers’ instructions in the use of insecticides. Eating or drinking while spraying insecticides and fungicides was reported by 4.20% and 5.10% of all farmers in the two regions, respectively. However, 37.78% and 57.57% of all farmers wore hand gloves and safety boots while spraying insecticides in the South West and Centre regions of Cameroon, respectively. In addition, 7.80% of all the farmers would wash agrochemical containers and use them at home, while 42.43% would wash and use them on their farms. Probit regression results showed that the probability of reusing agrochemical containers was significantly influenced (p < 0.05) by the region of residence of cocoa farmers, gender, possession of formal education and farming as primary occupation. The Negative Binomial regression results showed that the log of the number of PPE items worn was significantly influenced (p < 0.10) by region, marital status, attainment of formal education, good health, awareness of manufacturers’ instructions, land area and contact index. It was concluded, among other things, that efforts to train farmers on the need to be familiar with manufacturers’ instructions and to use PPE would enhance their safety in the course of spraying agrochemicals. PMID:29438333

  17. Cocoa Farmers' Compliance with Safety Precautions in Spraying Agrochemicals and Use of Personal Protective Equipment (PPE) in Cameroon.

    PubMed

    Oyekale, Abayomi Samuel

    2018-02-13

    The inability of farmers to comply with essential precautions in the course of spraying agrochemicals remains a policy dilemma, especially in developing countries. The objectives of this paper were to assess the compliance of cocoa farmers with agrochemical safety measures and to analyse the factors explaining cocoa farmers' involvement in the practice of reusing agrochemical containers and their wearing of personal protective equipment (PPE). Data were collected with structured questionnaires from 667 cocoa farmers from the Centre and South West regions in Cameroon. Data analyses were carried out with Probit regression and Negative Binomial regression models. The results showed that average cocoa farm sizes were 3.55 ha and 2.82 ha in the South West and Centre regions, respectively, and 89.80% and 42.64% complied with manufacturers' instructions in the use of insecticides. Eating or drinking while spraying insecticides and fungicides was reported by 4.20% and 5.10% of all farmers in the two regions, respectively. However, 37.78% and 57.57% of all farmers wore hand gloves and safety boots while spraying insecticides in the South West and Centre regions of Cameroon, respectively. In addition, 7.80% of all the farmers would wash agrochemical containers and use them at home, while 42.43% would wash and use them on their farms. Probit regression results showed that the probability of reusing agrochemical containers was significantly influenced (p < 0.05) by the region of residence of cocoa farmers, gender, possession of formal education and farming as primary occupation. The Negative Binomial regression results showed that the log of the number of PPE items worn was significantly influenced (p < 0.10) by region, marital status, attainment of formal education, good health, awareness of manufacturers' instructions, land area and contact index. It was concluded, among other things, that efforts to train farmers on the need to be familiar with manufacturers' instructions and to use PPE would enhance their safety in the course of spraying agrochemicals.
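
    A minimal sketch of a negative binomial regression for such a count outcome (number of PPE items worn); the covariates and over-dispersed counts are invented, not the Cameroon survey data:

    ```python
    # Negative binomial regression: coefficients act on the log-count scale.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n = 400
    educ = rng.binomial(1, 0.5, n)               # formal education (0/1)
    aware = rng.binomial(1, 0.6, n)              # aware of manufacturer instructions (0/1)
    lam = np.exp(0.2 + 0.5 * educ + 0.4 * aware)
    ppe_count = rng.poisson(lam * rng.gamma(2.0, 0.5, n))   # gamma mixing => over-dispersion

    X = sm.add_constant(np.column_stack([educ, aware]))
    res = sm.NegativeBinomial(ppe_count, X).fit(disp=0)
    print(res.params)      # intercept, educ, aware (log scale), plus dispersion alpha
    ```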

  18. A modified weighted function method for parameter estimation of Pearson type three distribution

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called the Modified Weighted Function (MWF) is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to estimate the coefficient of variation (CV) and coefficient of skewness (CS) by reducing the original higher-moment computations to first-order moment calculations. The estimators for the CV and CS of the Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate the weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of the weight function and (2) to allocate more weight to data close to medium-tail positions in a sample series ranked in ascending order. A Monte-Carlo experiment was conducted to simulate a large number of samples upon which the statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M in terms of statistical unbiasedness and effectiveness. In addition, the robustness of MWF, WF, and L-M was compared in a Monte-Carlo experiment in which samples were drawn from the Log-Pearson type three distribution (LPE3), the three-parameter Log-Normal distribution (LN3), and the Generalized Extreme Value distribution (GEV), respectively, but all were treated as samples from the PE3 distribution. The results show that, in terms of statistical unbiasedness, no single method possesses an overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, MWF is superior to WF and L-M.
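
    A small Monte-Carlo in the spirit of the experiment above, using SciPy's pearson3 for the PE3 parent and the *conventional* moment estimators (not the MWF itself, whose weight functions the abstract does not fully specify); the true values and sample size are arbitrary:

    ```python
    # Bias of conventional moment estimators of CV and CS under a PE3 parent.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12)
    mean, cv, cs = 100.0, 0.5, 1.5
    n, reps = 50, 2000

    cvs, css = [], []
    for _ in range(reps):
        # pearson3 is parameterised by skew, with loc = mean and scale = std.
        x = stats.pearson3.rvs(cs, loc=mean, scale=mean * cv, size=n, random_state=rng)
        cvs.append(x.std(ddof=1) / x.mean())
        css.append(stats.skew(x, bias=False))
    print(f"true CV={cv}, mean estimate={np.mean(cvs):.3f}")
    print(f"true CS={cs}, mean estimate={np.mean(css):.3f}  (downward bias is typical)")
    ```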

  19. Methods for assessing long-term mean pathogen count in drinking water and risk management implications.

    PubMed

    Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y

    2012-06-01

    Recently pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated versus nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work the methods, and data record length, required to assess long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean ±10%, though 50-100 samples produced an estimate within one log (45%) below. A simple correlated first order model was shown to produce count series with 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely-monitored water quality indicators.
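
    The sample-size question can be explored with a small simulation: draw from a discrete Weibull by inverse transform and check how often the sample mean lands within ±10% of the long-term mean. The q and beta values below are illustrative, not fitted to any water-quality dataset:

    ```python
    # How many samples pin down the mean of a skewed discrete Weibull?
    import numpy as np

    rng = np.random.default_rng(13)
    q, beta = 0.9, 0.35            # heavy right tail (beta < 1 => very skewed)

    def rdw(size):
        """Inverse-transform sampling: P(X >= x) = q**(x**beta)."""
        u = 1.0 - rng.random(size)                       # uniform on (0, 1]
        x = np.ceil((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(int) - 1
        return np.maximum(x, 0)

    true_mean = rdw(200000).mean()   # stand-in for the population mean
    for n in (50, 100, 500, 1000, 5000):
        est = np.array([rdw(n).mean() for _ in range(500)])
        frac_ok = np.mean(np.abs(est - true_mean) / true_mean < 0.10)
        print(f"n={n:5d}: P(|error| < 10%) ~ {frac_ok:.2f}")
    ```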

  20. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and Satorra Bentler’s mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler’s statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  1. Is Coefficient Alpha Robust to Non-Normal Data?

    PubMed Central

    Sheng, Yanyan; Sheng, Zhaohui

    2011-01-01

    Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
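
    A minimal coefficient-alpha computation, plus a single-replicate toy contrast between normal and heavily skewed error scores (a full study, like the one above, would repeat this many times to see differences in bias and precision); item counts and variances are arbitrary:

    ```python
    # Cronbach's alpha from an items matrix, under two error distributions.
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_subjects, k_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(14)
    n, k = 300, 8
    true = rng.normal(size=(n, 1))                           # tau-equivalent true scores
    normal_err = rng.normal(size=(n, k))
    skewed_err = rng.exponential(1.0, size=(n, k)) - 1.0     # mean 0, variance 1, skewness 2

    print("alpha, normal errors:", cronbach_alpha(true + normal_err))
    print("alpha, skewed errors:", cronbach_alpha(true + skewed_err))
    ```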

  2. A New Bond Albedo for Performing Orbital Debris Brightness to Size Transformations

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Matney, Mark J.

    2008-01-01

    We have developed a technique for estimating the intrinsic size distribution of orbital debris objects via optical measurements alone. The process is predicated on the empirically observed power-law size distribution of debris (as indicated by radar RCS measurements) and the log-normal probability distribution of optical albedos as ascertained from phase (Lambertian) and range-corrected telescopic brightness measurements. Since the observed distribution of optical brightness is the product integral of the size distribution of the parent [debris] population with the albedo probability distribution, it is a straightforward matter to transform a given distribution of optical brightness back to a size distribution by the appropriate choice of a single albedo value. This is true because the integration of a power-law with a log-normal distribution (Fredholm Integral of the First Kind) yields a Gaussian-blurred power-law distribution with identical power-law exponent. Application of a single albedo to this distribution recovers a simple power-law [in size] which is linearly offset from the original distribution by a constant whose value depends on the choice of the albedo. Significantly, there exists a unique Bond albedo which, when applied to an observed brightness distribution, yields zero offset and therefore recovers the original size distribution. For physically realistic power-laws of negative slope, the proper choice of albedo recovers the parent size distribution by compensating for the observational bias caused by the large number of small objects that appear anomalously large (bright) - and thereby skew the small population upward by rising above the detection threshold - and the lower number of large objects that appear anomalously small (dim). Based on this comprehensive analysis, a value of 0.13 should be applied to all orbital debris albedo-based brightness-to-size transformations regardless of data source. Its prima facie genesis, derived and constructed from the current RCS-to-size conversion methodology (SiBAM, Size-Based Estimation Model) and optical data reduction standards, assures consistency in application with the prior canonical value of 0.1. Herein we present the empirical and mathematical arguments for this approach and apply it, by way of example, to a comprehensive set of photometric data acquired via NASA's Liquid Mirror Telescopes during the 2000-2001 observing season.
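
    The slope-preservation argument can be checked with a quick Monte-Carlo: power-law sizes multiplied through a log-normal albedo give a brightness tail with the same power-law exponent, once mapped back through brightness ∝ albedo × size². All parameters below are illustrative, not the paper's calibration:

    ```python
    # Power-law sizes x log-normal albedos => power-law brightness, same slope.
    import numpy as np

    rng = np.random.default_rng(15)
    slope = -2.5                                                 # size density exponent
    d = (1.0 - rng.random(200000)) ** (1.0 / (slope + 1.0))      # Pareto sizes, d >= 1
    rho = rng.lognormal(mean=np.log(0.13), sigma=0.5, size=d.size)  # log-normal albedos
    b = rho * d**2                                               # optical-brightness proxy

    def fit_tail_slope(x, xmin):
        """MLE of a (negative) Pareto tail exponent above xmin (continuous case)."""
        x = x[x >= xmin]
        return -(1.0 + x.size / np.log(x / xmin).sum())

    print("size slope:      ", fit_tail_slope(d, 2.0))           # ~ -2.5
    # Brightness density slope is -(a+1)/2 when b ∝ d²; map it back:
    print("brightness slope:", 2 * fit_tail_slope(b, 5.0) + 1)   # ~ -2.5 again
    ```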

  3. Tolerance of ciliated protozoan Paramecium bursaria (Protozoa, Ciliophora) to ammonia and nitrites

    NASA Astrophysics Data System (ADS)

    Xu, Henglong; Song, Weibo; Lu, Lu; Alan, Warren

    2005-09-01

    The tolerance to ammonia and nitrites of the freshwater ciliate Paramecium bursaria was measured in a conventional open system. The ciliate was exposed to different concentrations of ammonia and nitrites for 2 h and 12 h in order to determine the lethal concentrations. Linear regression analysis using the probit scale method (with 95% confidence intervals) revealed that the 2 h-LC50 value was 95.94 mg/L for ammonia and 27.35 mg/L for nitrite. The mortality probit scale was linearly correlated with the logarithmic concentration of ammonia, fitting the regression equation y = 7.32x - 9.51 (R² = 0.98; y, mortality probit scale; x, logarithmic concentration of ammonia), by which the 2 h-LC50 value for ammonia was found to be 95.50 mg/L. The linear correlation between the mortality probit scale and the logarithmic concentration of nitrite followed the regression equation y = 2.86x + 0.89 (R² = 0.95; y, mortality probit scale; x, logarithmic concentration of nitrite). Regression analysis of the toxicity curves showed that exposure time and the ammonia-N LC50 value followed the regression equation y = 2862.85e^(-0.08x) (R² = 0.95; y, duration of exposure at the LC50 value; x, LC50 value), and that exposure time and the nitrite-N LC50 value followed y = 127.15e^(-0.13x) (R² = 0.91; y, exposure time at the LC50 value; x, LC50 value). The results demonstrate that the tolerance to ammonia of P. bursaria is considerably higher than that of the larvae or juveniles of some metazoa, e.g., cultured prawns and oysters. In addition, ciliates, as bacterial predators, are likely to play a positive role in maintaining and improving water quality in aquatic environments with high-level ammonium, such as sewage treatment systems.
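
    For readers unfamiliar with the probit scale method, the following sketch recovers an LC50 from hypothetical dose-mortality data (the classical probit is the normal quantile plus 5, so 50% mortality corresponds to a probit of 5):

        import numpy as np
        from scipy.stats import norm

        # Hypothetical dose-mortality data (mg/L, proportion dead), for illustration
        conc = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
        p_dead = np.array([0.05, 0.20, 0.40, 0.55, 0.70])

        x = np.log10(conc)
        y = norm.ppf(p_dead) + 5.0          # classical probit scale (probit = z + 5)

        slope, intercept = np.polyfit(x, y, 1)
        lc50 = 10 ** ((5.0 - intercept) / slope)   # probit 5 <=> 50% mortality
        print(f"y = {slope:.2f}x + {intercept:.2f}, LC50 = {lc50:.1f} mg/L")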

  4. An integrative approach to assess X-chromosome inactivation using allele-specific expression with applications to epithelial ovarian cancer.

    PubMed

    Larson, Nicholas B; Fogarty, Zachary C; Larson, Melissa C; Kalli, Kimberly R; Lawrenson, Kate; Gayther, Simon; Fridley, Brooke L; Goode, Ellen L; Winham, Stacey J

    2017-12-01

    X-chromosome inactivation (XCI) epigenetically silences transcription of an X chromosome in females; patterns of XCI are thought to be aberrant in women's cancers, but are understudied due to statistical challenges. We develop a two-stage statistical framework to assess skewed XCI and evaluate gene-level patterns of XCI for an individual sample by integration of RNA sequence, copy number alteration, and genotype data. Our method relies on allele-specific expression (ASE) to directly measure XCI and does not rely on male samples or paired normal tissue for comparison. We model ASE using a two-component mixture of beta distributions, allowing estimation for a given sample of the degree of skewness (based on a composite likelihood ratio test) and the posterior probability that a given gene escapes XCI (using a Bayesian beta-binomial mixture model). To illustrate the utility of our approach, we applied these methods to data from tumors of ovarian cancer patients. Among 99 patients, 45 tumors were informative for analysis and showed evidence of XCI skewed toward a particular parental chromosome. For 397 X-linked genes, we observed tumor XCI patterns largely consistent with previously identified consensus states based on multiple normal tissue types. However, 37 genes differed in XCI state between ovarian tumors and the consensus state; 17 genes aberrantly escaped XCI in ovarian tumors (including many oncogenes), whereas 20 genes were unexpectedly inactivated in ovarian tumors (including many tumor suppressor genes). These results provide evidence of the importance of XCI in ovarian cancer and demonstrate the utility of our two-stage analysis. © 2017 WILEY PERIODICALS, INC.
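
    A toy version of the second-stage calculation, with all mixture parameters invented for illustration (the paper estimates them from data), could be written as:

        from scipy.stats import betabinom

        # One gene in one tumor: k reads from the (putatively) silenced allele out of n.
        # All parameters below are assumptions for illustration, not the paper's fits.
        n, k = 100, 42
        pi_escape = 0.25                         # assumed prior probability of escape

        f_escape = betabinom(n, 40, 40).pmf(k)   # balanced ASE (~0.5) if gene escapes XCI
        f_silent = betabinom(n, 2, 30).pmf(k)    # strongly skewed ASE if gene is inactivated

        post = pi_escape * f_escape / (pi_escape * f_escape + (1 - pi_escape) * f_silent)
        print(f"posterior P(escape | data) = {post:.3f}")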

  5. Correlation of histogram analysis of apparent diffusion coefficient with uterine cervical pathologic finding.

    PubMed

    Lin, Yuning; Li, Hui; Chen, Ziqian; Ni, Ping; Zhong, Qun; Huang, Huijuan; Sandrasegaran, Kumar

    2015-05-01

    The purpose of this study was to investigate the application of histogram analysis of the apparent diffusion coefficient (ADC) in characterizing pathologic features of cervical cancer and benign cervical lesions. This prospective study was approved by the institutional review board, and written informed consent was obtained. Seventy-three patients with cervical cancer (33-69 years old; 35 patients with International Federation of Gynecology and Obstetrics stage IB cervical cancer) and 38 patients (38-61 years old) with normal cervix or benign cervical lesions (control group) were enrolled. All patients underwent 3-T diffusion-weighted imaging (DWI) with b values of 0 and 800 s/mm². ADC values of the entire tumor in the patient group and of the whole cervix volume in the control group were assessed. Mean ADC, median ADC, 25th and 75th percentiles of ADC, skewness, and kurtosis were calculated. Histogram parameters were compared between different pathologic features, as well as between the stage IB cervical cancer and control groups. Mean ADC, median ADC, and 25th percentile of ADC were significantly higher for adenocarcinoma (p = 0.021, 0.006, and 0.004, respectively), and skewness was significantly higher for squamous cell carcinoma (p = 0.011). Median ADC was significantly higher for well or moderately differentiated tumors (p = 0.044), and skewness was significantly higher for poorly differentiated tumors (p = 0.004). No statistically significant difference in ADC histogram parameters was observed between lymphovascular space invasion subgroups. All histogram parameters differed significantly between the stage IB cervical cancer and control groups (p < 0.05). The distribution of ADCs characterized by histogram analysis may help to distinguish early-stage cervical cancer from normal cervix or benign cervical lesions and may be useful for evaluating the different pathologic features of cervical cancer.

  6. Geostatistical interpolation of available copper in orchard soil as influenced by planting duration.

    PubMed

    Fu, Chuancheng; Zhang, Haibo; Tu, Chen; Li, Lianzhen; Luo, Yongming

    2018-01-01

    Mapping the spatial distribution of available copper (A-Cu) in orchard soils is important in agriculture and environmental management. However, data on the distribution of A-Cu in orchard soils are usually highly variable and severely skewed due to the continuous input of fungicides. In this study, ordinary kriging combined with planting duration (OK_PD) is proposed as a method for improving the interpolation of soil A-Cu. Four normal distribution transformation methods, namely, the Box-Cox, Johnson, rank order, and normal score methods, were utilized prior to interpolation. A total of 317 soil samples were collected in the orchards of the Northeast Jiaodong Peninsula. Moreover, 1472 orchards were investigated to obtain a map of planting duration using Voronoi tessellations. The soil A-Cu content ranged from 0.09 to 106.05 mg/kg with a mean of 18.10 mg/kg, reflecting the high availability of Cu in the soils. Soil A-Cu concentrations exhibited a moderate spatial dependency and increased significantly with increasing planting duration. All the normal transformation methods successfully decreased the skewness and kurtosis of the soil A-Cu data and of the associated residuals, and also produced more robust variograms. OK_PD generated better spatial prediction accuracy than ordinary kriging (OK) for all transformation methods tested, and it also provided a more detailed map of soil A-Cu. Normal score transformation produced satisfactory accuracy and showed an advantage in ameliorating the smoothing effect of the interpolation methods. Thus, normal score transformation prior to kriging combined with planting duration (NSOK_PD) is recommended for the interpolation of soil A-Cu in this area.
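
    The rank-based normal score transformation used here is straightforward to sketch; the data below are a stand-in log-normal sample, not the survey measurements:

        import numpy as np
        from scipy.stats import norm, rankdata, skew, kurtosis

        rng = np.random.default_rng(3)
        a_cu = rng.lognormal(2.5, 0.9, 317)      # stand-in for skewed soil A-Cu data

        def normal_score(x):
            # rank-based transform to standard normal scores; kriged values are
            # later back-transformed through the empirical quantiles of x
            r = rankdata(x)
            return norm.ppf((r - 0.5) / len(x))

        z = normal_score(a_cu)
        print(f"before: skew={skew(a_cu):.2f}, kurt={kurtosis(a_cu):.2f}")
        print(f"after:  skew={skew(z):.2f}, kurt={kurtosis(z):.2f}")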

  7. Differences in Connection Strength between Mental Symptoms Might Be Explained by Differences in Variance: Reanalysis of Network Data Did Not Confirm Staging.

    PubMed

    Terluin, Berend; de Boer, Michiel R; de Vet, Henrica C W

    2016-01-01

    The network approach to psychopathology conceives mental disorders as sets of symptoms causally impacting on each other. The strengths of the connections between symptoms are key elements in the description of those symptom networks. Typically, the connections are analysed as linear associations (i.e., correlations or regression coefficients). However, there is insufficient awareness of the fact that differences in variance may account for differences in connection strength. Differences in variance frequently occur when subgroups are based on skewed data. An illustrative example is a study published in PLoS One (2013;8(3):e59559) that aimed to test the hypothesis that the development of psychopathology through "staging" was characterized by increasing connection strength between mental states. Three mental states (negative affect, positive affect, and paranoia) were studied in severity subgroups of a general population sample. The connection strength was found to increase with increasing severity in six of nine models. However, the method used (linear mixed modelling) is not suitable for skewed data. We reanalysed the data using inverse Gaussian generalized linear mixed modelling, a method suited for positively skewed data (such as symptoms in the general population). The distribution of positive affect was normal, but the distributions of negative affect and paranoia were heavily skewed. The variance of the skewed variables increased with increasing severity. Reanalysis of the data did not confirm increasing connection strength, except for one of nine models. Reanalysis of the data did not provide convincing evidence in support of staging as characterized by increasing connection strength between mental states. Network researchers should be aware that differences in connection strength between symptoms may be caused by differences in variances, in which case they should not be interpreted as differences in impact of one symptom on another symptom.
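
    The variance effect the authors describe can be reproduced with a toy simulation: a single fixed coupling between two skewed "symptoms" yields subgroup correlations that grow with subgroup variance (all parameters invented):

        import numpy as np

        rng = np.random.default_rng(4)

        # One fixed coupling: symptom_b = symptom_a + noise, identical in all subgroups
        a = rng.lognormal(0.0, 1.0, 30_000)           # positively skewed symptom scores
        b = a + rng.normal(0.0, 1.0, a.size)

        # Split into "severity" subgroups by tertiles of symptom a
        q1, q2 = np.quantile(a, [1/3, 2/3])
        groups = [("low", a < q1), ("mid", (a >= q1) & (a < q2)), ("high", a >= q2)]
        for name, mask in groups:
            r = np.corrcoef(a[mask], b[mask])[0, 1]
            print(f"{name}: var(a)={a[mask].var():8.2f}  corr={r:.2f}")

    The apparent "connection strength" rises with severity even though the data-generating coupling never changes, which is exactly the confound the reanalysis addresses.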

  8. Sperm Retrieval in Patients with Klinefelter Syndrome: A Skewed Regression Model Analysis.

    PubMed

    Chehrazi, Mohammad; Rahimiforoushani, Abbas; Sabbaghian, Marjan; Nourijelyani, Keramat; Sadighi Gilani, Mohammad Ali; Hoseini, Mostafa; Vesali, Samira; Yaseri, Mehdi; Alizadeh, Ahad; Mohammad, Kazem; Samani, Reza Omani

    2017-01-01

    The most common chromosomal abnormality associated with non-obstructive azoospermia (NOA) is Klinefelter syndrome (KS), which occurs in 1 to 1.72 out of every 500 to 1000 male infants. The probability of retrieving sperm as the outcome could be asymmetrically different between patients with and without KS; therefore, logistic regression analysis is not a well-qualified test for this type of data. This study was designed to evaluate skewed regression model analysis for data collected from microsurgical testicular sperm extraction (micro-TESE) among azoospermic patients with and without non-mosaic KS. This cohort study compared the micro-TESE outcome between 134 men with classic KS and 537 men with NOA and normal karyotype who were referred to Royan Institute between 2009 and 2011. In addition to our main outcome, which was sperm retrieval, we also used logistic and skewed regression analyses to compare the following demographic and hormonal factors: age and levels of follicle stimulating hormone (FSH), luteinizing hormone (LH), and testosterone between the two groups. A comparison of the micro-TESE between the KS and control groups showed a success rate of 28.4% (38/134) for the KS group and 22.2% (119/537) for the control group. In the KS group, testosterone levels differed significantly (P<0.001) between the successful (3.4 ± 0.48 mg/mL) and unsuccessful (2.33 ± 0.23 mg/mL) sperm retrieval groups. The quasi Akaike information criterion (QAIC) indicated a better goodness of fit for the skewed model (QAIC=74) than for logistic regression (QAIC=85). According to the results, skewed regression is more efficient in estimating sperm retrieval success when the data from patients with KS are analyzed. This finding should be investigated by conducting additional studies with different data structures.

  9. Reliability of provocative tests of motion sickness susceptibility

    NASA Technical Reports Server (NTRS)

    Calkins, D. S.; Reschke, M. F.; Kennedy, R. S.; Dunlop, W. P.

    1987-01-01

    Test-retest reliability values were derived from motion sickness susceptibility scores obtained from two successive exposures to each of three tests: (1) Coriolis sickness sensitivity test; (2) staircase velocity movement test; and (3) parabolic flight static chair test. The reliability of the three tests ranged from 0.70 to 0.88. Normalizing values from predictors with skewed distributions improved the reliability.

  10. Simulating Univariate and Multivariate Burr Type III and Type XII Distributions through the Method of L-Moments

    ERIC Educational Resources Information Center

    Pant, Mohan Dev

    2011-01-01

    The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…

  11. Handling Bias from Individual Differences in between-Subject Holistic Experimental Designs.

    DTIC Science & Technology

    1985-10-30

    leptokurtic) or flat (platykurtic) in the neighborhood of the mode. For normal distributions, B1 = 0 and B2 = 3. When B1 deviates from 0, the data is skewed. For B2... were obtained. This data is symmetrical, though peaked. The platykurtic data was obtained by taking the cube root of the discrete values on the table of

  12. Convergence of leaf display and photosynthetic characteristics of understory Abies amabilis and Tsuga heterophylla in an old-growth forest in southwestern Washington State, USA

    Treesearch

    Hiroaki Ishii; Ken-Ichi Yoshimura; Akira Mori

    2009-01-01

    The branching pattern of A. amabilis was regular (normal shoot-length distribution, less variable branching angle and bifurcation ratio), whereas that of T. heterophylla was more plastic (positively skewed shoot-length distribution, more variable branching angle and bifurcation ratio). The two species had similar shoot...

  13. Effects of rotation on coolant passage heat transfer. Volume 2: Coolant passages with trips normal and skewed to the flow

    NASA Technical Reports Server (NTRS)

    Johnson, B. V.; Wagner, J. H.; Steuber, G. D.

    1993-01-01

    An experimental program was conducted to investigate heat transfer and pressure loss characteristics of rotating multipass passages, for configurations and dimensions typical of modern turbine blades. This experimental program is one part of the NASA Hot Section Technology (HOST) Initiative, which has as its overall objective the development and verification of improved analysis methods that will form the basis for a design system that will produce turbine components with improved durability. The objective of this program was the generation of a database of heat transfer and pressure loss data required to develop heat transfer correlations and to assess computational fluid dynamic techniques for rotating coolant passages. The experimental work was broken down into two phases. Phase 1 consisted of experiments conducted in a smooth-wall large-scale heat transfer model. A detailed discussion of these results was presented in Volume 1 of a NASA report. In Phase 2, the large-scale model was modified to investigate the effects of skewed and normal passage turbulators. The results of Phase 2, along with comparisons to Phase 1, are the subject of this Volume 2 NASA report.

  14. Growth pattern and age at menarche of obese girls in a transitional society.

    PubMed

    Jaruratanasirikul, S; Mo-suwan, L; Lebel, L

    1997-01-01

    Childhood obesity is an increasing problem in a transitional society such as Thailand. To study physical growth and puberty in obese children, a cross-sectional survey of growth and age at menarche was carried out in schoolgirls aged between 8 and 16 years. The 3,120 girls were divided into two groups based on weight-for-height criteria. Girls with weight-for-height between 80 and 120% were classified as normal stature (2,625; 84.1%) and those above 120% as obese (495; 15.9%). Using probit analysis, age at menarche in obese girls was found to be 0.9 year earlier than in normal stature girls (11.5 years vs 12.4 years). At age 12, obese girls were 2.8 times more likely to have reached menarche than normal stature girls. In terms of growth pattern, obese girls were taller and grew faster during the prepubertal period, and reached their final height earlier than normal stature girls (13 years vs 15 years). The final height of obese girls was significantly shorter (153.0 cm vs 155.0 cm, p = 0.01). We conclude that: 1) obese girls grow faster, have earlier menarche, and stop growing earlier; and 2) obese girls tend to be shorter as adults compared with normal stature girls.
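
    Probit estimation of the median age at menarche from cross-sectional status-quo data can be sketched as follows (simulated data with an assumed true median of 12.4 years; not the study's dataset):

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)

        # Hypothetical cross-sectional data: age and menarche status (1 = reached)
        age = rng.uniform(8, 16, 2000)
        status = (rng.normal(12.4, 1.1, age.size) < age).astype(float)

        def negloglik(theta):
            # probit model: P(menarche | age) = Phi(b0 + b1*age)
            b0, b1 = theta
            p = norm.cdf(b0 + b1 * age).clip(1e-9, 1 - 1e-9)
            return -(status * np.log(p) + (1 - status) * np.log(1 - p)).sum()

        b0, b1 = minimize(negloglik, x0=[-10.0, 1.0], method="Nelder-Mead").x
        # the median age is where the fitted probability crosses 0.5
        print(f"estimated median age at menarche: {-b0 / b1:.2f} years")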

  15. Adaptation to Skew Distortions of Natural Scenes and Retinal Specificity of Its Aftereffects

    PubMed Central

    Habtegiorgis, Selam W.; Rifai, Katharina; Lappe, Markus; Wahl, Siegfried

    2017-01-01

    Image skew is one of the prominent distortions that exist in optical elements, such as in spectacle lenses. The present study evaluates adaptation to image skew in dynamic natural images. Moreover, the cortical levels involved in skew coding were probed using retinal specificity of skew adaptation aftereffects. Left and right skewed natural image sequences were shown to observers as adapting stimuli. The point of subjective equality (PSE), i.e., the skew amplitude in simple geometrical patterns that is perceived to be unskewed, was used to quantify the aftereffect of each adapting skew direction. The PSE, in a two-alternative forced choice paradigm, shifted toward the adapting skew direction. Moreover, significant adaptation aftereffects were obtained not only at adapted, but also at non-adapted retinal locations during fixation. Skew adaptation information was transferred partially to non-adapted retinal locations. Thus, adaptation to skewed natural scenes induces coordinated plasticity in lower and higher cortical areas of the visual pathway. PMID:28751870

  16. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^nu having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE(mu, sigma, nu, tau). The parameters mu, sigma, nu and tau may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. © 2004 John Wiley & Sons, Ltd.
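
    Under this parameterization, centiles follow by back-transforming quantiles of the standard power exponential distribution through the Box-Cox step; a sketch with illustrative (not fitted) parameters, ignoring the truncation adjustment for Y > 0, is:

        import numpy as np
        from scipy.special import gammaincinv, gamma

        def pe_quantile(alpha, tau):
            # Quantile of the standard power exponential distribution used by BCPE
            # (normalizing constant per Rigby and Stasinopoulos, 2004)
            c = np.sqrt(2.0 ** (-2.0 / tau) * gamma(1.0 / tau) / gamma(3.0 / tau))
            u = np.atleast_1d(np.asarray(alpha, dtype=float))
            s = gammaincinv(1.0 / tau, np.abs(2.0 * u - 1.0))
            z = np.sign(u - 0.5) * c * (2.0 * s) ** (1.0 / tau)
            return z if z.size > 1 else z[0]

        def bcpe_centile(alpha, mu, sigma, nu, tau):
            # back-transform z_alpha through the Box-Cox step:
            # Y_alpha = mu * (1 + nu*sigma*z_alpha)^(1/nu)   (nu != 0)
            z = pe_quantile(alpha, tau)
            return mu * (1.0 + nu * sigma * z) ** (1.0 / nu)

        # e.g. BMI centiles at one age, with invented parameters
        for a in (0.03, 0.5, 0.97):
            print(a, round(bcpe_centile(a, mu=21.0, sigma=0.12, nu=-1.3, tau=1.8), 2))

    As a sanity check, tau = 2 makes pe_quantile coincide with the standard normal quantile, recovering the LMS special case.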

  17. A Feasibility Study of Expanding the F404 Aircraft Engine Repair Capability at the Aircraft Intermediate Maintenance Department

    DTIC Science & Technology

    1993-06-01

    A. OBJECTIVES ... B. HISTORY ... C. ... utilization, and any additional manpower requirements at the "selected" AIMDs. B. HISTORY: Until late 1991 both NADEP JAX and NADEP North Island (NORIS) ... [table: ALL TRIANGULAR OR ALL LOG-NORMAL DISTRIBUTIONS FOR SERVICE TIMES AT AIMD CECIL FIELD; columns: maintenance/supply, Triangular, Log-Normal, Difference]

  18. What are the implications of rapid global warming for landslide-triggered turbidity current activity?

    NASA Astrophysics Data System (ADS)

    Clare, Michael; Peter, Talling; James, Hunt

    2014-05-01

    A geologically short-lived (~170 kyr) episode of global warming occurred at ~55 Ma, termed the Initial Eocene Thermal Maximum (IETM). Global temperatures rose by up to 8°C over only ~10 kyr and a massive perturbation of the global carbon cycle occurred, creating a negative carbon isotopic (~-4‰ δ13C) excursion in sedimentary records. This interval has relevance to the study of future climate change and its influence on geohazards, including submarine landslides and turbidity currents. We analyse the recurrence frequency of turbidity currents, potentially initiated from large-volume slope failures. The study focuses on two sedimentary intervals that straddle the IETM, and we discuss implications for turbidity current triggering. We present the results of statistical analyses (regression, generalised linear model, and proportional hazards model) for extensive turbidite records from an outcrop at Zumaia in NE Spain (N=285; 54.0 to 56.5 Ma) and from ODP Site 1068 on the Iberian Margin (N=1571; 48.2 to 67.6 Ma). The sedimentary sequences provide clear differentiation between hemipelagic and turbiditic mud with only negligible evidence of erosion. We infer dates for turbidites by converting hemipelagic bed thicknesses to time using interval-averaged accumulation rates. Multi-proxy dating techniques provide good age constraint. The background trend for the Zumaia record shows a near-exponential distribution of turbidite recurrence intervals, while the Iberian Margin shows a log-normal response. This is interpreted to be related to regional time-independence (exponential) and the effects of additive processes (log-normal). We discuss how a log-normal response may actually be generated over geological timescales from multiple shorter periods of random turbidite recurrence. The IETM interval shows a dramatic departure from both these background trends, however. This is marked by prolonged hiatuses (0.1 and 0.6 Myr duration) in turbidity current activity, in contrast to the arithmetic mean recurrence, λ, for the full records (λ=0.007 and 0.0125 Myr). This period of inactivity is coincident with a dramatic carbon isotopic excursion (i.e. the warmest part of the IETM) and heavily skews statistical analyses for both records. Dramatic global warming appears to exert a strong control on inhibiting turbidity current activity, whereas the effects of sea level change are not shown to be statistically significant. Rapid global warming is often implicated as a potential landslide trigger, due to dissociation of gas hydrates in response to elevated ocean temperatures. Other studies have suggested that intense global warming may actually be attributed to the atmospheric release of gas hydrates following catastrophic failure of large parts of a continental slope. Either way, a greater intensity of landslide and resultant turbidity current activity would be expected during the IETM; however, our findings are to the contrary. We offer some explanations in relation to potential triggers. Our work suggests that previous rapid global warming at the IETM did not trigger more frequent turbidity currents. This has direct relevance to future assessments relating to landslide-triggered tsunami hazard and breakage of subsea cables by turbidity currents.
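
    Comparing exponential and log-normal recurrence models of the kind discussed here is a routine fitting exercise; the sketch below uses a synthetic stand-in for the turbidite interval data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        intervals = rng.lognormal(-5.0, 1.0, 1500)   # stand-in recurrence times (Myr)

        # Exponential (time-independent) vs log-normal (additive-process) recurrence
        loc_e, scale_e = stats.expon.fit(intervals, floc=0)
        s, loc_l, scale_l = stats.lognorm.fit(intervals, floc=0)

        ll_exp = stats.expon.logpdf(intervals, loc_e, scale_e).sum()
        ll_log = stats.lognorm.logpdf(intervals, s, loc_l, scale_l).sum()
        print(f"log-likelihood  exponential: {ll_exp:.1f}   log-normal: {ll_log:.1f}")
        print("KS p-values:",
              stats.kstest(intervals, "expon", args=(loc_e, scale_e)).pvalue,
              stats.kstest(intervals, "lognorm", args=(s, loc_l, scale_l)).pvalue)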

  19. Erosion associated with cable and tractor logging in northwestern California

    Treesearch

    R. M. Rice; P. A. Datzman

    1981-01-01

    Abstract - Erosion and site conditions were measured at 102 logged plots in northwestern California. Erosion averaged 26.8 m³/ha. A log-normal distribution was a better fit to the data. The antilog of the mean of the logarithms of erosion was 3.2 m³/ha. The Coast District Erosion Hazard Rating was a poor predictor of erosion related to logging. In a new equation...

  20. The Affective Impact of Financial Skewness on Neural Activity and Choice

    PubMed Central

    Wu, Charlene C.; Bossaerts, Peter; Knutson, Brian

    2011-01-01

    Few finance theories consider the influence of “skewness” (or large and asymmetric but unlikely outcomes) on financial choice. We investigated the impact of skewed gambles on subjects' neural activity, self-reported affective responses, and subsequent preferences using functional magnetic resonance imaging (FMRI). Neurally, skewed gambles elicited more anterior insula activation than symmetric gambles equated for expected value and variance, and positively skewed gambles also specifically elicited more nucleus accumbens (NAcc) activation than negatively skewed gambles. Affectively, positively skewed gambles elicited more positive arousal and negatively skewed gambles elicited more negative arousal than symmetric gambles equated for expected value and variance. Subjects also preferred positively skewed gambles more, but negatively skewed gambles less than symmetric gambles of equal expected value. Individual differences in both NAcc activity and positive arousal predicted preferences for positively skewed gambles. These findings support an anticipatory affect account in which statistical properties of gambles—including skewness—can influence neural activity, affective responses, and ultimately, choice. PMID:21347239

  1. Study on probability distribution of prices in electricity market: A case study of zhejiang province, china

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    The study of the probability density function and distribution function of electricity prices helps power suppliers and purchasers assess their operations accurately, and helps the regulator monitor periods deviating from the normal distribution. Based on the assumptions of normally distributed load and a non-linear aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey a normal distribution approximately only when the supply-demand relationship is loose, whereas the prices deviate from the normal distribution and present a strong right-skewness characteristic otherwise. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
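
    The qualitative result (near-normal prices under loose conditions, right-skewed prices under tight conditions) can be illustrated by pushing a normal load through a hypothetical supply curve that steepens near capacity; all numbers below are invented:

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(7)

        def price_skewness(load_mean, load_sd=400.0, cap=9500.0, n=200_000):
            # Normally distributed load pushed through a nonlinear aggregate supply
            # curve that steepens sharply as load approaches system capacity `cap`
            load = rng.normal(load_mean, load_sd, n)
            price = 20.0 + 0.005 * load + np.exp((load - cap) / 300.0)
            return skew(price)

        for m in (5000.0, 8500.0, 9500.0):   # loose -> tight supply-demand balance
            print(f"mean load {m:>6.0f} MW: price skewness = {price_skewness(m):+.2f}")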

  2. A nonparametric spatial scan statistic for continuous data.

    PubMed

    Jung, Inkyung; Cho, Ho Jin

    2015-10-20

    Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare its performance with that of parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
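
    A bare-bones version of such a rank-based scan (circular windows, standardized Wilcoxon rank-sum statistic; window shapes and the inference step are simplified relative to the paper) might be:

        import numpy as np
        from scipy.stats import rankdata

        rng = np.random.default_rng(8)

        # Hypothetical continuous observations at random 2-D locations
        xy = rng.uniform(0, 10, (300, 2))
        val = rng.lognormal(0, 1, 300)
        val[np.linalg.norm(xy - [7, 7], axis=1) < 1.5] *= 3     # planted cluster

        r = rankdata(val)
        n = len(val)

        def rank_sum_z(inside):
            # standardized Wilcoxon rank-sum statistic for the window `inside`
            m = inside.sum()
            w = r[inside].sum()
            mean = m * (n + 1) / 2
            var = m * (n - m) * (n + 1) / 12
            return (w - mean) / np.sqrt(var)

        # Scan circular windows centred on data points over a range of radii
        best = max(((i, rad, rank_sum_z(np.linalg.norm(xy - xy[i], axis=1) < rad))
                    for i in range(n) for rad in (0.5, 1.0, 1.5, 2.0)),
                   key=lambda t: t[2])
        print(f"most likely cluster: centre {xy[best[0]].round(2)}, "
              f"radius {best[1]}, z = {best[2]:.2f}")
        # Significance would be assessed by Monte Carlo permutation of `val`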

  3. Vertical Scaling with the Rasch Model Utilizing Default and Tight Convergence Settings with WINSTEPS and BILOG-MG

    ERIC Educational Resources Information Center

    Custer, Michael; Omar, Md Hafidz; Pomplun, Mark

    2006-01-01

    This study compared vertical scaling results for the Rasch model from BILOG-MG and WINSTEPS. The item and ability parameters for the simulated vocabulary tests were scaled across 11 grades, kindergarten through 10th. The simulated data were based on real data and were generated under normal and skewed distribution assumptions. WINSTEPS and BILOG-MG were each…

  4. Skewed steel bridges, part ii : cross-frame and connection design to ensure brace effectiveness : technical summary.

    DOT National Transportation Integrated Search

    2017-08-01

    Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...

  5. Skewed steel bridges, part ii : cross-frame and connection design to ensure brace effectiveness : final report.

    DOT National Transportation Integrated Search

    2017-08-01

    Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...

  6. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    PubMed

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data, nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
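
    The log-transform strategy mentioned above, contrasted with a nonparametric alternative, can be sketched in a few lines (synthetic skewed data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        group_a = rng.lognormal(1.0, 0.8, 25)      # skewed biological measurements
        group_b = rng.lognormal(1.4, 0.8, 25)

        # Option 1: log-transform, then a parametric test on the (near-normal) logs
        t, p_t = stats.ttest_ind(np.log(group_a), np.log(group_b))
        # Option 2: a nonparametric rank test on the raw skewed data
        u, p_u = stats.mannwhitneyu(group_a, group_b)
        print(f"t test on logs: p = {p_t:.4f}   Mann-Whitney U: p = {p_u:.4f}")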

  7. Using color histogram normalization for recovering chromatic illumination-changed images.

    PubMed

    Pei, S C; Tseng, C L; Wu, C C

    2001-11-01

    We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.

  8. Effect of Resonator Axis Skew on Normal Incidence Impedance

    NASA Technical Reports Server (NTRS)

    Parrott, Tony L.; Jones, Michael G.; Homeijer, Brian

    2003-01-01

    High by-pass turbofan engines have fewer fan blades and lower rotation speeds than their predecessors. Consequently, noise suppression at the low frequency end of the noise spectra has become an increasing concern. This has led to a renewed emphasis on improving the noise suppression efficiency of passive duct liner treatments at the lower frequencies. For a variety of reasons, passive liners are composed of locally-reacting, resonant absorbers. One reason for this design choice is to satisfy operational and economic requirements. The simplest liner design consists of a single layer of honeycomb core sandwiched between a porous facesheet and an impervious backing plate. These resonant absorbing structures are integrated into the nacelle wall and are very efficient over a limited bandwidth centered on their resonance frequency. Increased noise suppression bandwidth and greater suppression at lower frequencies are typically achieved for conventional liners by increasing the liner depth and incorporating thin porous septa into the honeycomb core. However, constraints on liner depth in modern high by-pass engine nacelles severely limit the suppression bandwidth extension to lower frequencies. Also, current honeycomb core liners may not be suitable for irregular geometric volumes heretofore not considered. It is of interest, therefore, to find ways to circumvent liner depth restrictions and resonator cavity shape constraints. One way to increase effective liner depth is to skew the honeycomb core axis relative to the porous facesheet surface. Other possibilities are to alter resonator cavity shape, e.g. high-aspect-ratio, narrow channels that possibly include right angle bends, 180° channel fold-backs, and splayed channel walls to conform to irregular geometric constraints. These possibilities constitute the practical motivation for expanding impedance modeling capability to include unconventional resonator orientations and shapes. The work reported in this paper is in the nature of a progress report and is limited to examining the implications of resonator axis skew on the composite normal incidence impedance of an array of resonator channels. Specifically, experimental results are compared with a modified impedance prediction model for high-aspect-ratio, rectangular resonator channels with varying amounts of skew relative to the incident particle velocity. It is shown that for resonator channel widths of 1 to 2 mm, aspect ratios of 25 to 50, and skew angles of zero to sixty degrees, the surface impedance of test models can be predicted with good accuracy. Predicted resistances and reactances are particularly well correlated through the first resonance and first anti-resonance for all six test models investigated. Beyond the first anti-resonance, the impedance prediction model loses the ability to predict details of resistance and reactance but still predicts the mean trends very well.

  9. Dynamic Modeling from Flight Data with Unknown Time Skews

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
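
    As a much-simplified stand-in for the approach described (which works in the frequency domain with nonlinear optimization), the relative time skew between two sampled signals can be estimated from the peak of their cross-correlation; all signals and parameters below are invented:

        import numpy as np

        rng = np.random.default_rng(12)

        # Two measurements of the same maneuver, the second recorded with a time skew
        dt = 0.02                                  # s, sample interval
        t = np.arange(0, 20, dt)
        signal = np.sin(0.7 * t) + 0.3 * np.sin(2.3 * t)
        skew_true = 0.14                           # s
        a = signal + 0.02 * rng.standard_normal(t.size)
        b = np.interp(t - skew_true, t, signal) + 0.02 * rng.standard_normal(t.size)

        # Estimate the relative skew from the peak of the cross-correlation
        xc = np.correlate(a - a.mean(), b - b.mean(), mode="full")
        lag = np.argmax(xc) - (t.size - 1)
        print(f"estimated skew: {-lag * dt:.3f} s (true {skew_true} s)")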

  10. Sex differences in the drivers of reproductive skew in a cooperative breeder.

    PubMed

    Nelson-Flower, Martha J; Flower, Tom P; Ridley, Amanda R

    2018-04-16

    Many cooperatively breeding societies are characterized by high reproductive skew, such that some socially dominant individuals breed, while socially subordinate individuals provide help. Inbreeding avoidance serves as a source of reproductive skew in many high-skew societies, but few empirical studies have examined sources of skew operating alongside inbreeding avoidance or compared individual attempts to reproduce (reproductive competition) with individual reproductive success. Here, we use long-term genetic and observational data to examine factors affecting reproductive skew in the high-skew cooperatively breeding southern pied babbler (Turdoides bicolor). When subordinates can breed, skew remains high, suggesting factors additional to inbreeding avoidance drive skew. Subordinate females are more likely to compete to breed when older or when ecological constraints on dispersal are high, but heavy subordinate females are more likely to successfully breed. Subordinate males are more likely to compete when they are older, during high ecological constraints, or when they are related to the dominant male, but only the presence of within-group unrelated subordinate females predicts subordinate male breeding success. Reproductive skew is not driven by reproductive effort, but by forces such as intrinsic physical limitations and intrasexual conflict (for females) or female mate choice, male mate-guarding and potentially reproductive restraint (for males). Ecological conditions or "outside options" affect the occurrence of reproductive conflict, supporting predictions of recent synthetic skew models. Inbreeding avoidance together with competition for access to reproduction may generate high skew in animal societies, and disparate processes may be operating to maintain male vs. female reproductive skew in the same species. © 2018 John Wiley & Sons Ltd.

  11. Model-based Bayesian inference for ROC data analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    This paper presents a study of model-based Bayesian inference for Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a covariate variable taking zero-one values to express binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by a Markov Chain Monte Carlo (MCMC) method carried out by Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts an Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
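
    The binormal structure underlying the probit-link formulation can be sketched directly (illustrative parameters; the paper's contribution is the MCMC machinery around this model, which is not reproduced here):

        import numpy as np
        from scipy.stats import norm

        # Binormal ROC: healthy scores ~ N(0,1), diseased ~ N(mu, sd^2). On the
        # probit scale, TPF = Phi(a + b * Phi^-1(FPF)) with a = mu/sd and b = 1/sd,
        # i.e. a single probit-link formula with disease status as a 0-1 covariate.
        mu, sd = 1.5, 1.2          # illustrative binormal parameters
        a, b = mu / sd, 1.0 / sd

        fpf = np.linspace(1e-4, 1 - 1e-4, 200)
        tpf = norm.cdf(a + b * norm.ppf(fpf))
        auc = norm.cdf(a / np.sqrt(1.0 + b ** 2))   # closed-form binormal AUC
        print(f"a = {a:.2f}, b = {b:.2f}, AUC = {auc:.3f}")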

  12. Effects of a primordial magnetic field with log-normal distribution on the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Yamazaki, Dai G.; Ichiki, Kiyotomo; Takahashi, Keitaro

    2011-12-01

    We study the effect of primordial magnetic fields (PMFs) on the anisotropies of the cosmic microwave background (CMB). We assume the spectrum of PMFs is described by a log-normal distribution which has a characteristic scale, rather than by a power-law spectrum. This scale is expected to reflect the generation mechanisms, and our analysis is complementary to previous studies with power-law spectra. We calculate power spectra of the energy density and Lorentz force of the log-normal PMFs, and then calculate CMB temperature and polarization angular power spectra from the scalar, vector, and tensor modes of perturbations generated by such PMFs. By comparing these spectra with the WMAP7, QUaD, CBI, Boomerang, and ACBAR data sets, we find that the current CMB data place the strongest constraint at k ≃ 10^-2.5 Mpc^-1, with the upper limit B ≲ 3 nG.

  13. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q, a generalisation of the log-normal distribution is obtained. Namely, the distribution has an enhanced tail for small (when q<1) or large (when q>1) values of the variable under analysis. The usual log-normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution, as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance, are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.
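
    A sketch of the modified Kapteyn process, using Borges' q-product, x (x)_q y = [x^(1-q) + y^(1-q) - 1]^(1/(1-q)) wherever the bracket is positive, with invented step parameters:

        import numpy as np

        rng = np.random.default_rng(10)

        def q_product(x, y, q):
            # Borges' q-product with cut-off; reduces to the ordinary product as q -> 1
            if abs(q - 1.0) < 1e-12:
                return x * y
            base = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
            out = np.zeros_like(base)
            pos = base > 0.0
            out[pos] = base[pos] ** (1.0 / (1.0 - q))
            return out

        def kapteyn(q, steps=50, n=100_000):
            # Kapteyn multiplicative process with the ordinary product replaced by
            # the q-product; q = 1 recovers the log-normal limit. The mild factors
            # keep the cut-off from being reached for these parameters.
            z = np.ones(n)
            for _ in range(steps):
                z = q_product(z, rng.lognormal(0.0, 0.05, n), q)
            return z

        for q in (0.9, 1.0, 1.1):
            z = kapteyn(q)
            print(f"q={q}: mean={z.mean():.3f}  99.9th pct={np.quantile(z, 0.999):.3f}")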

  14. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  15. Assessment of Poisson, probit and linear models for genetic analysis of presence and number of black spots in Corriedale sheep.

    PubMed

    Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D

    2011-04-01

    Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of the presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS, and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental effects as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted the data slightly better than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from the BINBS models than were those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.

  16. Fatigue shifts and scatters heart rate variability in elite endurance athletes.

    PubMed

    Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire

    2013-01-01

    This longitudinal study aimed at comparing heart rate variability (HRV) in elite athletes identified either in 'fatigue' or in 'no-fatigue' state in 'real life' conditions. 57 elite Nordic skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was rated with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges, expressed in ms² and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). 172 trials were identified as in a 'fatigue' and 891 as in a 'no-fatigue' state. All supine HR and HRV parameters (β ± SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27±0.61 bpm), logTPSU (-0.36±0.04), logLFSU (-0.27±0.04), logHFSU (-0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (-9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (-0.28±0.03), logLFST (-0.29±0.03), logHFST (-0.32±0.04). Also, the intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but accompanied by larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from the no-fatigue state, possibly reflecting different fatigue-induced alterations of the HRV pattern.

  17. Hybrid excited claw pole generator with skewed and non-skewed permanent magnets

    NASA Astrophysics Data System (ADS)

    Wardach, Marcin

    2017-12-01

    This article contains simulation results for a hybrid excited claw pole generator with skewed and non-skewed permanent magnets on the rotor. The experimental machine has claw poles on two rotor sections, between which an excitation control coil is located. The novelty of this machine is the existence of non-skewed permanent magnets on the claws of one part of the rotor and skewed permanent magnets on the second one. The paper presents the construction of the machine and an analysis of the influence of PM skewing on the cogging torque and back-emf. Simulation studies enabled the determination of the cogging torque and the back-emf rms for both the strengthening and the weakening of the magnetic field. The influence of the magnet skewing on the cogging torque and the back-emf rms has also been analyzed.

  18. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a_3 = mu_3/sigma^3 and the kurtosis a_4 = mu_4/sigma^4 - 3, where mu_i is the i-th order central moment and sigma is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.

  19. Software Design Document MCC CSCI (1). Volume 1 Sections 1.0-2.18

    DTIC Science & Technology

    1991-06-01

    AssociationUserProtocol: /simnet/common/include/protocol/p_assoc.h. Primitive: long (standard C type) ... Information. 2.2.1.4.2 ProcessMessage: ProcessMessage processes a message from another process; type describes the message as either one-way, asynchronous, or ... Macintosh Consoles. This is sometimes necessary due to normal clock skew so that operations among the MCC components will remain synchronized.

  20. Effect of cross grain on stress waves in lumber

    Treesearch

    C.C. Gerhards

    1980-01-01

    An evaluation is made of the effect of cross grain on the transit time of longitudinal compression stress waves in Douglas-fir 2 by 8 lumber. Cross grain causes the stress wave to advance with a front or contour skewed in the direction of the grain angle, rather than to advance with a front normal to the long axis of lumber. Thus, the timing of the stress wave in...

  1. Topics in Statistical Calibration

    DTIC Science & Technology

    2014-03-27

    on a parametric bootstrap where, instead of sampling directly from the residuals, samples are drawn from a normal distribution. This procedure will... in addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals, the bootstrap distribution of x̂0 can become skewed or... based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either
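
    Consistent with the fragment above, a parametric calibration bootstrap for a simple straight-line model with normal errors (illustrative only; the report works with linear mixed-effects models) can be sketched as:

        import numpy as np

        rng = np.random.default_rng(11)

        # Calibration data: y = b0 + b1*x + error (hypothetical instrument calibration)
        x = np.linspace(0, 10, 25)
        y = 2.0 + 1.5 * x + rng.normal(0, 0.5, x.size)
        y0 = 9.5                                   # new response whose x0 we want

        b1, b0 = np.polyfit(x, y, 1)
        sigma = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (x.size - 2))
        x0_hat = (y0 - b0) / b1

        # Parametric bootstrap: simulate new responses from the fitted normal model
        # (rather than resampling residuals), refit, and re-invert each time
        boot = []
        for _ in range(5000):
            y_star = b0 + b1 * x + rng.normal(0, sigma, x.size)
            y0_star = y0 + rng.normal(0, sigma)
            b1s, b0s = np.polyfit(x, y_star, 1)
            boot.append((y0_star - b0s) / b1s)

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"x0_hat = {x0_hat:.2f}, 95% bootstrap interval: ({lo:.2f}, {hi:.2f})")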

  2. The Arabidopsis SKU5 gene encodes an extracellular glycosyl phosphatidylinositol-anchored glycoprotein involved in directional root growth

    NASA Technical Reports Server (NTRS)

    Sedbrook, John C.; Carroll, Kathleen L.; Hung, Kai F.; Masson, Patrick H.; Somerville, Chris R.

    2002-01-01

    To investigate how roots respond to directional cues, we characterized a T-DNA-tagged Arabidopsis mutant named sku5 in which the roots skewed and looped away from the normal downward direction of growth on inclined agar surfaces. sku5 roots and etiolated hypocotyls were slightly shorter than normal and exhibited a counterclockwise (left-handed) axial rotation bias. The surface-dependent skewing phenotype disappeared when the roots penetrated the agar surface, but the axial rotation defect persisted, revealing that these two directional growth processes are separable. The SKU5 gene belongs to a 19-member gene family designated SKS (SKU5 Similar) that is related structurally to the multiple-copper oxidases ascorbate oxidase and laccase. However, the SKS proteins lack several of the conserved copper binding motifs characteristic of copper oxidases, and no enzymatic function could be assigned to the SKU5 protein. Analysis of plants expressing SKU5 reporter constructs and protein gel blot analysis showed that SKU5 was expressed most strongly in expanding tissues. SKU5 was glycosylated and modified by glycosyl phosphatidylinositol and localized to both the plasma membrane and the cell wall. Our observations suggest that SKU5 affects two directional growth processes, possibly by participating in cell wall expansion.

  3. Blood pressure in head‐injured patients

    PubMed Central

    Mitchell, Patrick; Gregson, Barbara A; Piper, Ian; Citerio, Giuseppe; Mendelow, A David; Chambers, Iain R

    2007-01-01

    Objective To determine the statistical characteristics of blood pressure (BP) readings from a large number of head‐injured patients. Methods The BrainIT group has collected high time-resolution physiological and clinical data from head‐injured patients who require intracranial pressure (ICP) monitoring. The statistical features of this dataset of BP measurements, with a time resolution of 1 min, from 200 patients are examined. The distributions of BP measurements and their relationship with simultaneous ICP measurements are described. Results The distributions of mean, systolic and diastolic readings are close to normal with modest skewing towards higher values. There is a trend towards an increase in blood pressure with advancing age, but this is not significant. Simultaneous blood pressure and ICP values suggest a triphasic relationship, with BP rising at 0.28 mm Hg per mm Hg of ICP for ICP up to 32 mm Hg, at 0.9 mm Hg per mm Hg of ICP for ICP from 33 to 55 mm Hg, and falling sharply with rising ICP for ICP >55 mm Hg. Conclusions Patients with head injury appear to have a near normal distribution of blood pressure readings that is skewed towards higher values. The relationship between BP and ICP may be triphasic. PMID:17138594

  4. On the Effects of Wind Turbine Wake Skew Caused by Wind Veer: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, Matthew J; Sirnivas, Senu

    Because of Coriolis forces caused by the Earth's rotation, the structure of the atmospheric boundary layer often contains wind-direction change with height, also known as wind-direction veer. Under low turbulence conditions, such as in stably stratified atmospheric conditions, this veer can be significant, even across the vertical extent of a wind turbine's rotor disk. The veer then causes the wind turbine wake to skew as it advects downstream. This wake skew has been observed both experimentally and numerically. In this work, we attempt to examine the wake skewing process in some detail, and quantify how differently a skewed wake versus a nonskewed wake affects a downstream turbine. We do this by performing atmospheric large-eddy simulations to create turbulent inflow winds with and without veer. In the veer case, there is a roughly 8 degree wind direction change across the turbine rotor. We then perform subsequent large-eddy simulations using these inflow data with an actuator line rotor model to create wakes. The turbine modeled is a large, modern, offshore, multimegawatt turbine. We examine the unsteady wake data in detail and show that the skewed wake recovers faster than the nonskewed wake. We also show that the wake deficit does not skew to the same degree that a passive tracer would if subject to veered inflow. Last, we use the wake data to place a hypothetical turbine 9 rotor diameters downstream by running aeroelastic simulations with the simulated wake data. We see differences in power and loads if this downstream turbine is subject to a skewed or nonskewed wake. We feel that the differences observed between the skewed and nonskewed wake are important enough that the skewing effect should be included in engineering wake models.

  5. On the Effects of Wind Turbine Wake Skew Caused by Wind Veer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchfield, Matthew J; Sirnivas, Senu

    Because of Coriolis forces caused by the Earth's rotation, the structure of the atmospheric boundary layer often contains wind-direction change with height, also known as wind-direction veer. Under low-turbulence conditions, such as in stably stratified atmospheric conditions, this veer can be significant, even across the vertical extent of a wind turbine's rotor disk. The veer then causes the wind turbine wake to skew as it advects downstream. This wake skew has been observed both experimentally and numerically. In this work, we attempt to examine the wake skewing process in some detail, and quantify how differently a skewed wake versus a non-skewed wake affects a downstream turbine. We do this by performing atmospheric large-eddy simulations to create turbulent inflow winds with and without veer. In the veer case, there is a roughly 8 degree wind direction change across the turbine rotor. We then perform subsequent large-eddy simulations using these inflow data with an actuator line rotor model to create wakes. The turbine modeled is a large, modern, offshore, multimegawatt turbine. We examine the unsteady wake data in detail and show that the skewed wake recovers faster than the non-skewed wake. We also show that the wake deficit does not skew to the same degree that a passive tracer would if subject to veered inflow. Last, we use the wake data to place a hypothetical turbine 9 rotor diameters downstream by running aeroelastic simulations with the simulated wake data. We see differences in power and loads if this downstream turbine is subject to a skewed or non-skewed wake. We feel that the differences observed between the skewed and non-skewed wake are important enough that the skewing effect should be included in engineering wake models.

  6. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    PubMed

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  7. Assessment of pubertal development in Egyptian girls.

    PubMed

    Hosny, Laila A; El-Ruby, Mona O; Zaki, Moushira E; Aglan, Mona S; Zaki, Maha S; El Gammal, Mona A; Mazen, Inas M

    2005-06-01

    Puberty is a significant event of human growth and maturation associated with marked physiological and psychological changes. The aim of this study was to assess normal pubertal development in Egyptian girls in order to define normal, precocious and delayed puberty. The present study included a cross-sectional sample of 1,550 normal Egyptian girls of high and middle socioeconomic class living in Cairo. Their ages ranged from 6.5 to 18.5 years. Pubertal assessment was made according to Tanner staging. The mean menarcheal age (MMA) was estimated using probit analysis. Weight and height were measured and body mass index (BMI) was calculated. The mean age at the breast bud stage (B2) was 10.71±1.6 years, at the pubic hair stage (PH2) 10.46±1.36 years, and at the axillary hair stage (A2) 11.65±1.62 years, while the MMA was 12.44 years. The mean age at attainment of puberty was compared with those of other Egyptian studies and other populations. Girls of the present study started pubertal development and achieved menarche earlier than those of previous Egyptian studies, confirming a secular trend. Differences between the present study and other worldwide studies can be attributed to various genetic, racial, geographical, nutritional, and secular-trend factors.
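
    In probit analysis of cross-sectional menarche data, the MMA is the age at which the fitted probability of having reached menarche crosses 0.5. A minimal sketch of that calculation on simulated data (statsmodels assumed available; the median and spread used to simulate are illustrative):

        import numpy as np
        from scipy.stats import norm
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # Simulated cross-sectional sample: ages 6.5-18.5, menarche status
        # follows a probit curve with an assumed median of 12.44 years.
        age = rng.uniform(6.5, 18.5, 1550)
        reached = (rng.random(1550) < norm.cdf((age - 12.44) / 1.1)).astype(int)

        # Probit regression of menarche status on age.
        fit = sm.Probit(reached, sm.add_constant(age)).fit(disp=0)
        b0, b1 = fit.params

        # MMA = age where b0 + b1 * age = 0, i.e. fitted probability = 0.5.
        print("estimated MMA:", -b0 / b1)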

  8. [Distribution of individuals by spontaneous frequencies of lymphocytes with micronuclei. Particularity and consequences].

    PubMed

    Serebrianyĭ, A M; Akleev, A V; Aleshchenko, A V; Antoshchina, M M; Kudriashova, O V; Riabchenko, N I; Semenova, L P; Pelevina, I I

    2011-01-01

    Using the micronucleus (MN) assay with cytochalasin B cytokinesis block, the mean frequency of blood lymphocytes with MN was determined in 76 Moscow inhabitants, 35 people from Obninsk, and 122 from the Chelyabinsk region. In contrast to the distribution of individuals by spontaneous frequency of cells with aberrations, which has been shown to be binomial (Kusnetzov et al., 1980), the distribution of individuals by spontaneous frequency of cells with MN in all three cohorts can be regarded as log-normal (χ² test). The distributions in the combined Moscow and Obninsk cohorts, and in the single pooled cohort of all subjects, must likewise be regarded as log-normal with high reliability (0.70 and 0.86, respectively), whereas they cannot be regarded as Poisson, binomial or normal. Given that a log-normal distribution of children by spontaneous frequency of lymphocytes with MN was also observed in a survey of 473 children from different kindergartens in Moscow, we conclude that log-normality is a regularity inherent in this type of damage to the lymphocyte genome. By contrast, the distribution of individuals by the frequency of lymphocytes with MN induced by irradiation in vitro must in most cases be regarded as normal. This difference in distribution suggests that the appearance of damage (genomic instability) in a single lymphocyte of an individual increases the probability of damage appearing in other lymphocytes. We propose that damaged stem cells, the lymphocyte progenitors, exchange information with undamaged cells, a process of the bystander-effect type; transmission of damage to daughter cells during stem cell division may also occur.
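
    The log-normal claim amounts to saying that the log-transformed MN frequencies pass a normality test while the raw frequencies do not. A minimal sketch of that check, with hypothetical frequencies standing in for the survey data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Hypothetical per-individual frequencies of lymphocytes with MN
        # (233 subjects, matching the three cohorts pooled).
        mn_freq = rng.lognormal(mean=np.log(12.0), sigma=0.5, size=233)

        # D'Agostino-Pearson normality test: the logs should pass ...
        print(stats.normaltest(np.log(mn_freq)))
        # ... while the raw, right-skewed frequencies should fail.
        print(stats.normaltest(mn_freq))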

  9. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    PubMed

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results under varied settings are presented, and the method is applied to the KIRBY21 test-retest dataset.

  10. Fluctuations in email size

    NASA Astrophysics Data System (ADS)

    Matsubara, Yoshitsugu; Musashi, Yasuo

    2017-12-01

    The purpose of this study is to explain fluctuations in email size. We have previously investigated the long-term correlations between email send requests and data flow in the system log of the primary staff email server at a university campus, finding that email size frequency follows a power-law distribution with two inflection points, and that the power-law property weakens the correlation of the data flow. However, the mechanism underlying this fluctuation is not completely understood. We collected new log data from both staff and students over six academic years and analyzed the frequency distribution thereof, focusing on the type of content contained in the emails. Furthermore, we obtained permission to collect "Content-Type" log data from the email headers. We therefore collected the staff log data from May 1, 2015 to July 31, 2015, creating two subdistributions. In this paper, we propose a model to explain these subdistributions, which follow log-normal-like distributions. In the log-normal-like model, email senders, consciously or unconsciously, regulate the size of new email sentences according to a normal distribution. The fit of the model is acceptable for these subdistributions, and the model demonstrates power-law properties for large email sizes. An analysis of the length of new email sentences would be required for further discussion of our model; however, to protect user privacy at the participating organization, we left this analysis for future work. This study provides new knowledge on the properties of email sizes, and our model is expected to contribute to the decision on whether to establish upper size limits in the design of email services.

  11. Network Skewness Measures Resilience in Lake Ecosystems

    NASA Astrophysics Data System (ADS)

    Langdon, P. G.; Wang, R.; Dearing, J.; Zhang, E.; Doncaster, P.; Yang, X.; Yang, H.; Dong, X.; Hu, Z.; Xu, M.; Yanjie, Z.; Shen, J.

    2017-12-01

    Changes in ecosystem resilience defy straightforward quantification from biodiversity metrics, which ignore influences of community structure. Naturally self-organized network structures show positive skewness in the distribution of node connections. Here we test for skewness reduction in lake diatom communities facing anthropogenic stressors, across a network of 273 lakes in China containing 452 diatom species. Species connections show positively skewed distributions in little-impacted lakes, switching to negative skewness in lakes associated with human settlement, surrounding land-use change, and higher phosphorus concentration. Dated sediment cores reveal a down-shifting of network skewness as human impacts intensify, and reversal with recovery from disturbance. The appearance and degree of negative skew presents a new diagnostic for quantifying system resilience and impacts from exogenous forcing on ecosystem communities.
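
    The diagnostic itself is just the third standardized moment of the node-connection counts. A minimal sketch under assumed degree distributions (geometric counts serve as a stand-in for a self-organized, positively skewed network; the numbers are illustrative, not data from the study):

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(2)

        # Stand-in for a little-impacted lake: a few highly connected species
        # give a long right tail, hence positive skewness.
        healthy = rng.geometric(p=0.2, size=452)
        print("healthy network skewness:", skew(healthy))

        # Stand-in for a stressed lake: connections pile up near an upper
        # limit, the tail points left, and the skewness goes negative.
        stressed = 16 - np.minimum(rng.geometric(p=0.2, size=452), 15)
        print("stressed network skewness:", skew(stressed))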

  12. Intracellular activity of antibiotics in a model of human THP-1 macrophages infected by a Staphylococcus aureus small-colony variant strain isolated from a cystic fibrosis patient: pharmacodynamic evaluation and comparison with isogenic normal-phenotype and revertant strains.

    PubMed

    Nguyen, Hoang Anh; Denis, Olivier; Vergison, Anne; Theunis, Anne; Tulkens, Paul M; Struelens, Marc J; Van Bambeke, Françoise

    2009-04-01

    Small-colony variant (SCV) strains of Staphylococcus aureus show reduced antibiotic susceptibility and intracellular persistence, potentially explaining therapeutic failures. The activities of oxacillin, fusidic acid, clindamycin, gentamicin, rifampin, vancomycin, linezolid, quinupristin-dalfopristin, daptomycin, tigecycline, moxifloxacin, telavancin, and oritavancin have been examined in THP-1 macrophages infected by a stable thymidine-dependent SCV strain in comparison with normal-phenotype and revertant isogenic strains isolated from the same cystic fibrosis patient. The SCV strain grew slowly extracellularly and intracellularly (1- and 0.2-log CFU increase in 24 h, respectively). In confocal and electron microscopy, SCV and normal-phenotype bacteria remained confined in acid vacuoles. All antibiotics tested, except tigecycline, caused a net reduction in bacterial counts that was both time and concentration dependent. At an extracellular concentration corresponding to the maximum concentration in human serum (total drug), oritavancin caused a 2-log CFU reduction at 24 h; rifampin, moxifloxacin, and quinupristin-dalfopristin caused a similar reduction at 72 h; and all other antibiotics had only a static effect at 24 h and a 1-log CFU reduction at 72 h. In concentration-dependence experiments, the response to oritavancin was bimodal (two successive plateaus of -0.4 and -3.1 log CFU); tigecycline, moxifloxacin, and rifampin showed maximal effects of -1.1 to -1.7 log CFU; and the other antibiotics produced reductions of -0.6 log CFU or less. Addition of thymidine restored intracellular growth of the SCV strain but did not modify the activity of antibiotics (except quinupristin-dalfopristin). All drugs (except tigecycline and oritavancin) showed higher intracellular activity against the normal or revertant phenotypes than against the SCV strain. The data may help rationalize the design of further studies with intracellular SCV strains.

  13. Portable acuity screening for any school: validation of patched HOTV with amblyopic patients and Bangerter normals.

    PubMed

    Tsao Wu, Maya; Armitage, M Diane; Trujillo, Claire; Trujillo, Anna; Arnold, Laura E; Tsao Wu, Lauren; Arnold, Robert W

    2017-12-04

    We needed to validate and calibrate our portable acuity screening tools so amblyopia could be detected quickly and effectively at school entry. Spiral-bound flip cards and a downloadable PDF surround-HOTV acuity test box with critical lines were combined with a matching card. Amblyopic patients performed critical-line and then threshold acuity testing, which was compared to patched E-ETDRS acuity. Five normal subjects wore Bangerter foil goggles to simulate blur for comparative validation. The 31 treated amblyopic eyes showed logMAR HOTV = 0.97 (logMAR E-ETDRS) - 0.04, r² = 0.88. All but two (6%) differed by less than 2 lines. The five blurred normal subjects showed logMAR HOTV = 1.09 (logMAR E-ETDRS) + 0.15, r² = 0.63. The critical-line test box was 98% efficient at screening within one line of 20/40. These tools reliably detected acuity in treated amblyopic patients and Bangerter-blurred normal subjects. These free and affordable tools provide sensitive screening for amblyopia in children from public, private and home schools. Changing the "pass" criterion to 4 out of 5 would improve sensitivity with somewhat slower testing for all students.

  14. Continuity diaphragm for skewed continuous span precast prestressed concrete girder bridges.

    DOT National Transportation Integrated Search

    2004-10-01

    Continuity diaphragms used on skewed bents in prestressed girder bridges cause difficulties in detailing and construction. Details for bridges with large diaphragm skew angles (>30°) have not been a problem for LA DOTD. However, as the skew angl...

  15. Investigating uplift in the South-Western Barents Sea using sonic and density well log measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Ellis, M.

    2014-12-01

    Sediments in the Barents Sea have undergone large amounts of uplift due to Plio-Pleistocene deglaciation as well as Palaeocene-Eocene Atlantic rifting. Uplift affects reservoir quality, seal capacity and fluid migration, so reliable uplift estimates are important for evaluating petroleum prospectivity properly. To this end, a number of quantification methods have been proposed, such as Apatite Fission Track Analysis (AFTA) and the integration of seismic surveys with well log data. AFTA usually provides accurate uplift estimates, but data are limited because of its high cost. Seismic surveys can provide good uplift estimates when well data are available for calibration, but the uncertainty can be large in areas with little to no well data. We estimated South-Western Barents Sea uplift based on well data from the Norwegian Petroleum Directorate. The primary assumptions are time-irreversible shale compaction trends and a universal normal compaction trend for a specified formation. Sonic and density logs from two Cenozoic shale formation intervals, Kolmule and Kolje, were used for the study. For each formation, we studied the logs of all released wells and established an exponential normal compaction trend based on a single well. That well was then deemed the reference well, and relative uplift was calculated at other well locations from the offset from the normal compaction trend. We found that the amount of uplift increases along the SW to NE direction, with a maximum difference of 1,447 m from the Kolje FM estimate and 699 m from the Kolmule FM estimate. The average standard deviation of the estimated uplift is 130 m for the Kolje FM and 160 m for the Kolmule FM using the density log. While results from density logs and sonic logs agree well in general, the density log provides slightly better results in terms of higher consistency and lower standard deviation. Our results agree qualitatively with published work, with some differences in the actual amounts of uplift; they are considered more accurate because of the higher resolution of the log-scale data used.

  16. Dose-time relationships for post-irradiation cutaneous telangiectasia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, L.; Ubaldi, S.E.

    1977-01-01

    Seventy-five patients who had received electron beam radiation a year or more previously were studied. The irradiated skin portals were photographed and late reactions graded in terms of the number and severity of telangiectatic lesions observed. The skin dose, number of fractions, overall treatment time and irradiated volume were recorded in each case. A Strandqvist-type iso-effect line was derived for this response. A multi-probit search program was also used to derive best-fitting cell population kinetic parameters for the same data. From these parameters a comprehensive iso-effect table could be computed for a wide range of treatment schedules, including daily treatment as well as fractionation at shorter and longer intervals; this provided a useful set of normal tissue tolerance limits for late effects.

  17. Effects of supplementary private health insurance on physician visits in Korea.

    PubMed

    Kang, Sungwook; You, Chang Hoon; Kwon, Young Dae; Oh, Eun-Hwan

    2009-12-01

    The coverage of social health insurance has remained limited, despite it being compulsory in Korea. Accordingly, Koreans have come to rely upon supplementary private health insurance (PHI) to cover their medical costs. We examined the effects of supplementary PHI on physician visits in Korea. This study used individual data from 11,043 respondents who participated in the Korean Labor and Income Panel Survey in 2001. We estimated a single probit model to identify the relationship between PHI and physician visits, with adjustment for the following covariates: demographic characteristics, socioeconomic status, health status, and health-related behavior. Finally, we estimated a bivariate probit model to examine the true effect of PHI on physician visits, with adjustment for the above covariates plus unobservable covariates that might affect not only physician visits but also the purchase of PHI. We found that about 38% of all respondents had one or more private health plans. Forty-five percent of all respondents visited one or more physicians; 49% of those who were privately insured had physician visits, compared with 42% of the uninsured. The single probit model showed that those with PHI were about 14 percentage points more likely to visit physicians than those without PHI. However, this distinction disappears in the bivariate probit model. This result might have been a consequence of the nature of private health plans in Korea. Private insurance companies pay a fixed amount directly to their enrollees in case of illness/injury, and the individuals are subsequently responsible for purchasing their own healthcare services. This study demonstrated the potential of Korean PHI to address the problem of moral hazard. These results serve as a reference for policy makers when considering how to finance healthcare services, as well as how to contain healthcare expenditure.
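
    The bivariate probit estimates the two probit equations jointly with correlated normal errors, so unobservables driving both PHI purchase and physician visits are absorbed into the error correlation rho. A self-contained maximum-likelihood sketch on simulated data follows; the covariate, coefficients and sample are hypothetical, and only the modelling idea mirrors the study. In practice, starting values from two separate probit fits would speed up convergence.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(3)
        n = 400

        # One observed covariate plus correlated latent errors: the error
        # correlation is what a pair of separate probits would miss.
        x = rng.normal(size=n)
        e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
        phi = (0.3 + 0.5 * x + e[:, 0] > 0).astype(int)         # has PHI
        visit = (0.2 + 0.4 * x + 0.3 * phi + e[:, 1] > 0).astype(int)

        def negloglik(theta):
            a0, a1, b0, b1, b2, z = theta
            rho = np.tanh(z)                      # keeps rho in (-1, 1)
            q1, q2 = 2 * phi - 1, 2 * visit - 1   # sign flips for 0/1 outcomes
            pts = np.column_stack([q1 * (a0 + a1 * x),
                                   q2 * (b0 + b1 * x + b2 * phi)])
            p = np.empty(n)
            # P(y1, y2) = Phi2(q1*xb1, q2*xb2; q1*q2*rho): evaluate by the
            # sign of q1*q2 so each group shares one correlation matrix.
            for s in (-1, 1):
                m = q1 * q2 == s
                cov = [[1.0, s * rho], [s * rho, 1.0]]
                p[m] = multivariate_normal.cdf(pts[m], mean=[0, 0], cov=cov)
            return -np.log(np.clip(p, 1e-300, None)).sum()

        fit = minimize(negloglik, np.zeros(6), method="Nelder-Mead",
                       options={"maxiter": 4000})
        print("estimated rho:", np.tanh(fit.x[-1]))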

  18. The effect of forward skewed rotor blades on aerodynamic and aeroacoustic performance of axial-flow fan

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Zhong, Fangyuan

    Based on a comparative experiment, this paper deals with the use of tangentially skewed rotor blades in an axial-flow fan. Comparison of the overall performance of the fan with a skewed-blade rotor and with a radial-blade rotor shows that the skewed blades operate more efficiently than the radial blades, especially at low volume flows. Meanwhile, a decrease in the pressure rise and flow rate of the axial-flow fan with skewed rotor blades is found. The rotor-stator interaction noise and broadband noise of the axial-flow fan are reduced with skewed rotor blades. Forward-skewed blades tend to reduce the accumulation of the blade boundary layer in the tip region resulting from the effect of centrifugal forces. The turning of streamlines from the outer radius region into the inner radius region in the blade passages, due to the radial component of the blade forces of skewed blades, is the main reason for the decrease in pressure rise and flow rate.

  19. Lactobacilli Activate Human Dendritic Cells that Skew T Cells Toward T Helper 1 Polarization

    DTIC Science & Technology

    2005-01-06

    Species Modulate the Phenotype and Function of MDCs. Previous studies have shown that Lactobacillus plantarum and Lactobacillus rhamnosus can induce... cell immune responses at both systemic and mucosal sites. Many Lactobacillus species are normal members of the human gut microflora and most are regarded... several well-defined strains, representing three species of Lactobacillus, on human myeloid DCs (MDCs) and found that they modulated the phenotype and

  20. The Shock and Vibration Digest, Volume 16, Number 10

    DTIC Science & Technology

    1984-10-01

    shaped, and a general polygonal-shaped membrane without symmetry... Fourier expansion-collocation method and the finite... They also derived, with the help... geometry is not applicable; therefore, a Fourier sine series expansion technique. The method was applied... not much work on the dynamic behavior of skew... particular mode are obtained. This normal mode expansion approach has recently been used in a series of... form of deflection surface. The stability of motion...

  1. DNA Asymmetric Strand Bias Affects the Amino Acid Composition of Mitochondrial Proteins

    PubMed Central

    Min, Xiang Jia; Hickey, Donal A.

    2007-01-01

    Variations in GC content between genomes have been extensively documented. Genomes with comparable GC contents can, however, still differ in the apportionment of the G and C nucleotides between the two DNA strands. This asymmetric strand bias is known as GC skew. Here, we have investigated the impact of differences in nucleotide skew on the amino acid composition of the encoded proteins. We compared orthologous genes between animal mitochondrial genomes that show large differences in GC and AT skews. Specifically, we compared the mitochondrial genomes of mammals, which are characterized by a negative GC skew and a positive AT skew, to those of flatworms, which show the opposite skews for both GC and AT base pairs. We found that the mammalian proteins are highly enriched in amino acids encoded by CA-rich codons (as predicted by their negative GC and positive AT skews), whereas their flatworm orthologs were enriched in amino acids encoded by GT-rich codons (also as predicted from their skews). We found that these differences in mitochondrial strand asymmetry (measured as GC and AT skews) can have very large, predictable effects on the composition of the encoded proteins. PMID:17974594
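
    GC and AT skew are simple per-strand ratios, and the amino-acid prediction follows from which codons a skewed strand can supply. A minimal sketch of the two skew statistics (the sequence is a toy example, not data from the paper):

        def strand_skews(seq):
            """Return (GC skew, AT skew) of one DNA strand.

            GC skew = (G - C) / (G + C); AT skew = (A - T) / (A + T).
            Negative GC skew with positive AT skew is the mammalian
            mitochondrial pattern described above; flatworms are reversed.
            """
            seq = seq.upper()
            g, c, a, t = (seq.count(b) for b in "GCAT")
            return (g - c) / (g + c), (a - t) / (a + t)

        # Toy strand: C-rich and A-rich, so GC skew < 0 and AT skew > 0.
        print(strand_skews("ATTACAGCATAAACCT"))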

  2. Selection on skewed characters and the paradox of stasis

    PubMed Central

    Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel

    2018-01-01

    Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modelling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date - repeatedly described as more evolutionarily stable than expected - so this skewness should be accounted for when investigating evolutionary dynamics in the wild. PMID:28921508

  3. A multicenter examination and strategic revisions of the Yale Global Tic Severity Scale.

    PubMed

    McGuire, Joseph F; Piacentini, John; Storch, Eric A; Murphy, Tanya K; Ricketts, Emily J; Woods, Douglas W; Walkup, John W; Peterson, Alan L; Wilhelm, Sabine; Lewin, Adam B; McCracken, James T; Leckman, James F; Scahill, Lawrence

    2018-05-08

    To examine the internal consistency and distribution of the Yale Global Tic Severity Scale (YGTSS) scores to inform modification of the measure. This cross-sectional study included 617 participants with a tic disorder (516 children and 101 adults), who completed an age-appropriate diagnostic interview and the YGTSS to evaluate tic symptom severity. The distributions of scores on YGTSS dimensions were evaluated for normality and skewness. For dimensions that were skewed across motor and phonic tics, a modified Delphi consensus process was used to revise selected anchor points. Children and adults had similar clinical characteristics, including tic symptom severity. All participants were examined together. Strong internal consistency was identified for the YGTSS Motor Tic score (α = 0.80), YGTSS Phonic Tic score (α = 0.87), and YGTSS Total Tic score (α = 0.82). The YGTSS Total Tic and Impairment scores exhibited relatively normal distributions. Several subscales and individual item scales departed from a normal distribution. Higher scores were more often used on the Motor Tic Number, Frequency, and Intensity dimensions and the Phonic Tic Frequency dimension. By contrast, lower scores were more often used on Motor Tic Complexity and Interference, and Phonic Tic Number, Intensity, Complexity, and Interference. The YGTSS exhibits good internal consistency across children and adults. The parallel findings across Motor and Phonic Frequency, Complexity, and Interference dimensions prompted minor revisions to the anchor point description to promote use of the full range of scores in each dimension. Specific minor revisions to the YGTSS Phonic Tic Symptom Checklist were also proposed. © 2018 American Academy of Neurology.

  4. Measurement of the permeability, perfusion, and histogram characteristics in relapsing-remitting multiple sclerosis using dynamic contrast-enhanced MRI with extended Tofts linear model.

    PubMed

    Yin, Ping; Xiong, Hua; Liu, Yi; Sah, Shambhu K; Zeng, Chun; Wang, Jingjie; Li, Yongmei; Hong, Nan

    2018-01-01

    To investigate the application value of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) with the extended Tofts linear model for relapsing-remitting multiple sclerosis (RRMS) and its correlation with expanded disability status scale (EDSS) scores and disease duration. Thirty patients with multiple sclerosis (MS) underwent conventional magnetic resonance imaging (MRI) and DCE-MRI with a 3.0 Tesla MR scanner. An extended Tofts linear model was used to quantitatively measure MR imaging biomarkers. The histogram parameters and the correlations among imaging biomarkers, EDSS scores, and disease duration were also analyzed. The MR imaging biomarkers volume transfer constant (Ktrans), volume of the extravascular extracellular space per unit volume of tissue (Ve), fractional plasma volume (Vp), cerebral blood flow (CBF), and cerebral blood volume (CBV) of contrast-enhancing (CE) lesions were significantly higher (P < 0.05) than those of nonenhancing (NE) lesions and normal-appearing white matter (NAWM) regions. The skewness of the Ve value in CE lesions was closer to a normal distribution. There was no significant correlation of the biomarkers with the EDSS scores or disease duration (P > 0.05). Our study demonstrates that DCE-MRI with the extended Tofts linear model can measure the permeability and perfusion characteristics in MS lesions and in NAWM regions. The Ktrans, Ve, Vp, CBF, and CBV of CE lesions were significantly higher than those of NE lesions. The skewness of the Ve value in CE lesions was closer to a normal distribution, indicating that the histogram can be helpful in distinguishing the pathology of MS lesions.

  5. A Measure of the Auditory-perceptual Quality of Strain from Electroglottographic Analysis of Continuous Dysphonic Speech: Application to Adductor Spasmodic Dysphonia.

    PubMed

    Somanath, Keerthan; Mau, Ted

    2016-11-01

    (1) To develop an automated algorithm to analyze electroglottographic (EGG) signal in continuous dysphonic speech, and (2) to identify EGG waveform parameters that correlate with the auditory-perceptual quality of strain in the speech of patients with adductor spasmodic dysphonia (ADSD). Software development with application in a prospective controlled study. EGG was recorded from 12 normal speakers and 12 subjects with ADSD reading excerpts from the Rainbow Passage. Data were processed by a new algorithm developed with the specific goal of analyzing continuous dysphonic speech. The contact quotient, pulse width, a new parameter peak skew, and various contact closing slope quotient and contact opening slope quotient measures were extracted. EGG parameters were compared between normal and ADSD speech. Within the ADSD group, intra-subject comparison was also made between perceptually strained syllables and unstrained syllables. The opening slope quotient SO7525 distinguished strained syllables from unstrained syllables in continuous speech within individual subjects with ADSD. The standard deviations, but not the means, of contact quotient, EGGW50, peak skew, and SO7525 were different between normal and ADSD speakers. The strain-stress pattern in continuous speech can be visualized as color gradients based on the variation of EGG parameter values. EGG parameters may provide a within-subject measure of vocal strain and serve as a marker for treatment response. The addition of EGG to multidimensional assessment may lead to improved characterization of the voice disturbance in ADSD. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  6. A measure of the auditory-perceptual quality of strain from electroglottographic analysis of continuous dysphonic speech: Application to adductor spasmodic dysphonia

    PubMed Central

    Somanath, Keerthan; Mau, Ted

    2016-01-01

    Objectives (1) To develop an automated algorithm to analyze electroglottographic (EGG) signal in continuous, dysphonic speech, and (2) to identify EGG waveform parameters that correlate with the auditory-perceptual quality of strain in the speech of patients with adductor spasmodic dysphonia (ADSD). Study Design Software development with application in a prospective controlled study. Methods EGG was recorded from 12 normal speakers and 12 subjects with ADSD reading excerpts from the Rainbow Passage. Data were processed by a new algorithm developed with the specific goal of analyzing continuous dysphonic speech. The contact quotient (CQ), pulse width (EGGW), a new parameter peak skew, and various contact closing slope quotient (SC) and contact opening slope quotient (SO) measures were extracted. EGG parameters were compared between normal and ADSD speech. Within the ADSD group, intra-subject comparison was also made between perceptually strained syllables and unstrained syllables. Results The opening slope quotient SO7525 distinguished strained syllables from unstrained syllables in continuous speech within individual ADSD subjects. The standard deviations, but not the means, of CQ, EGGW50, peak skew, and SO7525 were different between normal and ADSD speakers. The strain-stress pattern in continuous speech can be visualized as color gradients based on the variation of EGG parameter values. Conclusions EGG parameters may provide a within-subject measure of vocal strain and serve as a marker for treatment response. The addition of EGG to multi-dimensional assessment may lead to improved characterization of the voice disturbance in ADSD. PMID:26739857

  7. Log-normal distribution of the trace element data results from a mixture of stochastic input and deterministic internal dynamics.

    PubMed

    Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya

    2002-04-01

    In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, a one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of the element present; that is, the same percentage of the existing element is eliminated per unit time, and the element concentration in the elimination time-course is a deterministic negative exponential function of time. Sampling is stochastic in nature, so the set of times in the elimination phase at which samples were obtained is expected to be normally distributed. The time variable appears in the exponent, so the concentration histogram is that of an exponential transformation of normally distributed time. This is why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that enters the pharmacokinetic equation.
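
    The argument can be replayed numerically: under first-order elimination, concentration is an exponential function of time, so normally distributed sampling times yield log-normally distributed concentrations. A minimal sketch with illustrative parameter values:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # First-order (one-compartment) elimination: C(t) = C0 * exp(-k*t).
        C0, k = 100.0, 0.3                 # illustrative dose and rate constant
        t = rng.normal(8.0, 2.0, 10_000)   # stochastic sampling times

        conc = C0 * np.exp(-k * t)

        # log C = log C0 - k*t is linear in the normal time variable, so it
        # is normal, and C itself is log-normal, as the abstract argues.
        print(stats.normaltest(np.log(conc)))   # should pass
        print(stats.normaltest(conc))           # should fail: right-skewed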

  8. Predicting clicks of PubMed articles.

    PubMed

    Mao, Yuqing; Lu, Zhiyong

    2013-01-01

    Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed.

  9. Predicting clicks of PubMed articles

    PubMed Central

    Mao, Yuqing; Lu, Zhiyong

    2013-01-01

    Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed. PMID:24551386

  10. Fatigue Shifts and Scatters Heart Rate Variability in Elite Endurance Athletes

    PubMed Central

    Schmitt, Laurent; Regnard, Jacques; Desmarets, Maxime; Mauny, Fréderic; Mourot, Laurent; Fouillot, Jean-Pierre; Coulmy, Nicolas; Millet, Grégoire

    2013-01-01

    Purpose This longitudinal study aimed to compare heart rate variability (HRV) in elite athletes identified either in a 'fatigue' or in a 'no-fatigue' state in 'real life' conditions. Methods 57 elite Nordic skiers were surveyed over 4 years. R-R intervals were recorded supine (SU) and standing (ST). A fatigue state was quoted with a validated questionnaire. A multilevel linear regression model was used to analyze relationships between heart rate (HR) and HRV descriptors [total spectral power (TP), power in low (LF) and high frequency (HF) ranges expressed in ms2 and normalized units (nu)] and the status without and with fatigue. The variables not distributed normally were transformed by taking their common logarithm (log10). Results 172 trials were identified as in a 'fatigue' and 891 as in a 'no-fatigue' state. All supine HR and HRV parameters (Beta±SE) were significantly different (P<0.0001) between 'fatigue' and 'no-fatigue': HRSU (+6.27±0.61 bpm), logTPSU (−0.36±0.04), logLFSU (−0.27±0.04), logHFSU (−0.46±0.05), logLF/HFSU (+0.19±0.03), HFSU(nu) (−9.55±1.33). Differences were also significant (P<0.0001) in standing: HRST (+8.83±0.89), logTPST (−0.28±0.03), logLFST (−0.29±0.03), logHFST (−0.32±0.04). Also, the intra-individual variance of HRV parameters was larger (P<0.05) in the 'fatigue' state (logTPSU: 0.26 vs. 0.07, logLFSU: 0.28 vs. 0.11, logHFSU: 0.32 vs. 0.08, logTPST: 0.13 vs. 0.07, logLFST: 0.16 vs. 0.07, logHFST: 0.25 vs. 0.14). Conclusion HRV was significantly lower in 'fatigue' vs. 'no-fatigue' but was accompanied by larger intra-individual variance of HRV parameters in 'fatigue'. The broader intra-individual variance of HRV parameters might encompass different changes from the no-fatigue state, possibly reflecting different fatigue-induced alterations of the HRV pattern. PMID:23951198

  11. Sociodemographic, lifestyle and health determinants of suicidal behaviour in Malaysia.

    PubMed

    Cheah, Yong Kang; Azahadi, Mohd; Phang, Siew Nooi; Abd Manaf, Noor Hazilah

    2018-03-01

    Suicide has become a serious matter in both developed and developing countries. The objective of the present study is to examine the factors affecting suicidal behaviour among adults in Malaysia. A nationally representative dataset consisting of 10,141 respondents is used for the analysis. A trivariate probit model is utilised to identify the probability of having suicide ideation, a suicide plan and a suicide attempt. Results of the regression analysis show that, to ensure unbiased estimates, a trivariate probit model should be used instead of three separate probit models. The determining factors of suicidal behaviour are income, age, gender, ethnicity, education, marital status, self-rated health and being diagnosed with diabetes and hypercholesterolemia. The likelihood of adopting suicidal behaviour is lower among higher income earners and older individuals. Being male and married significantly reduces the propensity to engage in suicidal behaviour. Of all the ethnic groups, Indian/others display the highest likelihood of adopting suicidal behaviour. There is a positive relationship between poor health condition and suicide. Policies targeted at individuals who are likely to adopt suicidal behaviour may be effective in lowering the prevalence of suicide. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Investigation of factors affecting the injury severity of single-vehicle rollover crashes: A random-effects generalized ordered probit model.

    PubMed

    Anarkooli, Alireza Jafari; Hosseinpour, Mehdi; Kardar, Adele

    2017-09-01

    Rollover crashes are responsible for a notable number of serious injuries and fatalities; hence, they are of great concern to transportation officials and safety researchers. However, only a few published studies have analyzed the factors associated with the severity outcomes of rollover crashes. This research has two objectives. The first is to investigate the effects of various factors, some of which have rarely been reported in existing studies, on the injury severities of single-vehicle (SV) rollover crashes, based on six years of crash data collected on Malaysian federal roads. A random-effects generalized ordered probit (REGOP) model is employed to analyze the injury severity patterns caused by rollover crashes. The second objective is to examine the performance of the proposed approach, REGOP, for modeling rollover injury severity outcomes. To this end, a mixed logit (MXL) model is also fitted because of its popularity in injury severity modeling. Regarding the effects of the explanatory variables on the injury severity of rollover crashes, the results reveal that factors including darkness without supplemental lighting, rainy weather, light truck vehicles (e.g., sport utility vehicles, vans), heavy vehicles (e.g., buses, trucks), improper overtaking, vehicle age, traffic volume and composition, number of travel lanes, speed limit, undulating terrain, presence of a central median, and unsafe roadside conditions are positively associated with more severe SV rollover crashes. On the other hand, unpaved shoulder width, area type, driver occupation, and number of access points are found to be significant variables decreasing the probability of being killed or severely injured (KSI) in rollover crashes. Land use and side friction are significant and positively associated only with the slight injury category. These findings provide valuable insights into the causes and factors affecting the injury severity patterns of rollover crashes, and thus can help develop effective countermeasures to reduce the severity of rollover crashes. The model comparison results show that the REGOP model outperforms the MXL model in terms of goodness-of-fit measures, and is also significantly superior to other extensions of ordered probit models, including the generalized ordered probit and random-effects ordered probit (REOP) models. As a result, this research introduces REGOP as a promising tool for future research focusing on crash injury severity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Higher aluminum concentration in Alzheimer's disease after Box-Cox data transformation.

    PubMed

    Rusina, Robert; Matěj, Radoslav; Kašparová, Lucie; Kukal, Jaromír; Urban, Pavel

    2011-11-01

    Evidence regarding the role of mercury and aluminum in the pathogenesis of Alzheimer's disease (AD) remains controversial. The aims of our project were to investigate the content of the selected metals in brain tissue samples and the use of a specific mathematical transform to eliminate the disadvantage of a strong positive skew in the original data distribution. In this study, we used atomic absorption spectrophotometry to determine mercury and aluminum concentrations in the hippocampus and associative visual cortex of 29 neuropathologically confirmed AD and 27 age-matched controls. The Box-Cox data transformation was used for statistical evaluation. AD brains had higher mean aluminum concentrations in the hippocampus than controls (0.357 vs. 0.090 μg/g; P = 0.039) after data transformation. Results for mercury were not significant. Original data regarding microelement concentrations are heavily skewed and do not pass the normality test in general. A Box-Cox transformation can eliminate this disadvantage and allow parametric testing.
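
    The analysis pipeline is short enough to sketch: estimate one Box-Cox lambda on the pooled positive concentrations, transform both groups with it, then apply a parametric test. The group sizes below match the paper; the concentration values are hypothetical.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)

        # Hypothetical hippocampal aluminum concentrations (ug/g), positively
        # skewed as reported, with the AD group shifted upward.
        ad = rng.lognormal(np.log(0.30), 0.9, size=29)
        ctrl = rng.lognormal(np.log(0.09), 0.9, size=27)

        # Box-Cox needs strictly positive data; fit lambda on the pooled
        # sample so both groups share the same transformation.
        _, lam = stats.boxcox(np.concatenate([ad, ctrl]))
        ad_t = stats.boxcox(ad, lmbda=lam)
        ctrl_t = stats.boxcox(ctrl, lmbda=lam)

        # With the skew removed, a parametric comparison is defensible.
        print(stats.ttest_ind(ad_t, ctrl_t))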

  14. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
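
    The two interval constructions being compared can be sketched in a few lines: a t-based 90% CI on the mean log difference versus a bootstrap percentile CI that drops the normality assumption. The subject-level log(AUC) differences below are simulated, not from the archived datasets.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Hypothetical within-subject log(AUC) differences, test - reference.
        d = rng.normal(0.05, 0.15, size=24)

        # Parametric 90% CI for the geometric mean ratio.
        m, se = d.mean(), d.std(ddof=1) / np.sqrt(d.size)
        t = stats.t.ppf(0.95, d.size - 1)
        print("parametric:", np.exp([m - t * se, m + t * se]))

        # Bootstrap percentile 90% CI: resample subjects with replacement.
        boot = [rng.choice(d, d.size, replace=True).mean() for _ in range(2000)]
        print("bootstrap: ", np.exp(np.percentile(boot, [5, 95])))

        # Bioequivalence is concluded if the chosen CI lies within 0.80-1.25.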

  15. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from the nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of a normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699

  16. PHAGE FORMATION IN STAPHYLOCOCCUS MUSCAE CULTURES

    PubMed Central

    Price, Winston H.

    1949-01-01

    1. The total nucleic acid synthesized by normal and by infected S. muscae suspensions is approximately the same. This is true for either lag phase cells or log phase cells. 2. The amount of nucleic acid synthesized per cell in normal cultures increases during the lag period and remains fairly constant during log growth. 3. The amount of nucleic acid synthesized per cell by infected cells increases during the whole course of the infection. 4. Infected cells synthesize less RNA and more DNA than normal cells. The ratio of RNA/DNA is larger in lag phase cells than in log phase cells. 5. Normal cells release neither ribonucleic acid nor desoxyribonucleic acid into the medium. 6. Infected cells release both ribonucleic acid and desoxyribonucleic acid into the medium. The time and extent of release depend upon the physiological state of the cells. 7. Infected lag phase cells may or may not show an increased RNA content. They release RNA, but not DNA, into the medium well before observable cellular lysis and before any virus is liberated. At virus liberation, the cell RNA content falls to a value below that initially present, while DNA, which increased during infection falls to approximately the original value. 8. Infected log cells show a continuous loss of cell RNA and a loss of DNA a short time after infection. At the time of virus liberation the cell RNA value is well below that initially present and the cells begin to lyse. PMID:18139006

  17. Generating log-normal mock catalog of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Makiya, Ryu; Chiang, Chi-Ting; Jeong, Donghui; Saito, Shun; Komatsu, Eiichiro

    2017-10-01

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check the fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree precisely with the input. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
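
    The core of the construction is compact: exponentiate a Gaussian field so the density contrast stays above -1, then Poisson-sample galaxies cell by cell. The sketch below omits what the public code adds (a target power spectrum imposed on the Gaussian field, and velocities from the linearised continuity equation); the grid size and mean count are arbitrary.

        import numpy as np

        rng = np.random.default_rng(7)

        n, sigma, nbar = 64, 1.0, 0.5     # grid size, field rms, mean count/cell
        g = rng.normal(0.0, sigma, size=(n, n, n))

        # delta = exp(g - sigma^2/2) - 1 has mean ~0 and is bounded below by
        # -1, unlike a Gaussian density contrast, which can dip below -1.
        delta = np.exp(g - sigma**2 / 2) - 1
        print("min delta:", delta.min(), " mean delta:", delta.mean())

        # Poisson-sample the galaxy counts around nbar * (1 + delta).
        counts = rng.poisson(nbar * (1 + delta))
        print("galaxies drawn:", counts.sum())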

  18. Contrast sensitivity measured by two different test methods in healthy, young adults with normal visual acuity.

    PubMed

    Koefoed, Vilhelm F; Baste, Valborg; Roumes, Corinne; Høvding, Gunnar

    2015-03-01

    This study reports contrast sensitivity (CS) reference values obtained by two different test methods in a strictly selected population of healthy, young adults with normal uncorrected visual acuity. Based on these results, the index of contrast sensitivity (ICS) is calculated, aiming to establish ICS reference values for this population and to evaluate the possible usefulness of ICS as a tool to compare the degree of agreement between different CS test methods. Military recruits with best-eye uncorrected visual acuity 0.00 LogMAR or better, normal colour vision and age 18-25 years were included in a study to record contrast sensitivity using the Optec 6500 (FACT) at spatial frequencies of 1.5, 3, 6, 12 and 18 cpd in photopic and mesopic light and the CSV-1000E at spatial frequencies of 3, 6, 12 and 18 cpd in photopic light. The index of contrast sensitivity was calculated from the three tests, and the Bland-Altman technique was used to analyse the agreement between ICS obtained by the different test methods. A total of 180 recruits were included. Contrast sensitivity frequency data for all tests were highly skewed, with a marked ceiling effect for the photopic tests. The median ICS for the Optec 6500 at 85 cd/m2 was -0.15 (95% percentile 0.45), compared with -0.00 (95% percentile 1.62) for the Optec at 3 cd/m2 and 0.30 (95% percentile 1.20) for the CSV-1000E. The mean difference between ICSFACT 85 and ICSCSV was -0.43 (95% CI -0.56 to -0.30, p<0.00), with limits of agreement (LoA) within -2.10 and 1.22. The regression line of the differences on the averages was close to zero (R2=0.03). The results provide reference CS and ICS values in a young, adult population with normal visual acuity. The agreement between the photopic tests indicated that they may be used interchangeably. There was little agreement between the mesopic and photopic tests. The mesopic test seemed best suited to differentiate between candidates and may therefore be useful for medical selection purposes. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  19. Mixture toxicity of the anti-inflammatory drugs diclofenac, ibuprofen, naproxen, and acetylsalicylic acid.

    PubMed

    Cleuvers, Michael

    2004-11-01

    The ecotoxicity of the nonsteroidal anti-inflammatory drugs (NSAIDs) diclofenac, ibuprofen, naproxen, and acetylsalicylic acid (ASA) has been evaluated using acute Daphnia and algal tests. Toxicities were relatively low, with half-maximal effective concentration (EC50) values in the range from 68 to 166 mg/L in the Daphnia test and from 72 to 626 mg/L in the algal test. Acute effects of these substances therefore seem quite improbable. The quantitative structure-activity relationship (QSAR) approach showed that all substances act by nonpolar narcosis; thus, the higher the n-octanol/water partitioning coefficient (log Kow) of a substance, the higher its toxicity. Mixture toxicity of the compounds could be accurately predicted using the concept of concentration addition. Toxicity of the mixture was considerable, even at concentrations at which the single substances showed no or only very slight effects, with some deviations in the Daphnia test, which could be explained by the incompatibility between the very steep dose-response curves and the probit analysis of the data. Because pharmaceuticals in the aquatic environment usually occur as mixtures, an accurate prediction of the mixture toxicity is indispensable for environmental risk assessment.
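
    Concentration addition predicts the mixture EC50 as a harmonic mean of the components' EC50s weighted by their fractions in the mixture. A minimal sketch follows; the two interior Daphnia EC50 values are made up to fill the reported 68-166 mg/L range, and the equal fractions are an assumption.

        import numpy as np

        def ec50_mixture(fractions, ec50s):
            """Concentration addition: 1 / EC50_mix = sum(p_i / EC50_i)."""
            fractions, ec50s = np.asarray(fractions), np.asarray(ec50s)
            assert np.isclose(fractions.sum(), 1.0)
            return 1.0 / np.sum(fractions / ec50s)

        # Equal-fraction mixture of the four NSAIDs (Daphnia EC50s, mg/L;
        # 68 and 166 from the abstract, the middle two illustrative).
        print(ec50_mixture([0.25] * 4, [68.0, 95.0, 120.0, 166.0]))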

  20. Chapter 9:Red maple lumber resources for glued-laminated timber beams

    Treesearch

    John J. Janowiak; Harvey B. Manbeck; Roland Hernandez; Russell C. Moody

    2005-01-01

    This chapter evaluates the performance of red maple glulam beams made from two distinctly different lumber resources: 1. logs sawn using practices normally used for hardwood appearance lumber recovery; and 2. lower-grade, smaller-dimension lumber primarily obtained from residual log cants.

  1. Distribution of transvascular pathway sizes through the pulmonary microvascular barrier.

    PubMed

    McNamee, J E

    1987-01-01

    Mathematical models of solute and water exchange in the lung have been helpful in understanding factors governing the volume flow rate and composition of pulmonary lymph. As experimental data and models become more encompassing, parameter identification becomes more difficult. Pore sizes in these models should approach and eventually become equivalent to actual physiological pathway sizes as more complex and accurate models are tried. However, pore sizes and numbers vary from model to model as new pathway sizes are added. This apparent inconsistency of pore sizes can be explained if it is assumed that the pulmonary blood-lymph barrier is widely heteroporous, for example, being composed of a continuous distribution of pathway sizes. The sieving characteristics of the pulmonary barrier are reproduced by a log normal distribution of pathway sizes (log mean = -0.20, log s.d. = 1.05). A log normal distribution of pathways in the microvascular barrier is shown to follow from a rather general assumption about the nature of the pulmonary endothelial junction.
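
    For illustration, the quoted log-normal pathway-size distribution can be sampled directly. The base of the logarithm and the units are not stated in the abstract, so base 10 and arbitrary units are assumed here:

    ```python
    # Sampling the log-normal pathway-size distribution quoted above.
    # Assumption: "log mean" and "log s.d." are the mean and s.d. of the
    # log10 of pathway size; units are left arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    log_mean, log_sd = -0.20, 1.05
    sizes = 10 ** rng.normal(log_mean, log_sd, size=100_000)

    print(np.median(sizes))            # equals 10**log_mean for a log-normal
    print(np.percentile(sizes, 97.5))  # heavy right tail: a few very large pathways
    ```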

  2. Selection on skewed characters and the paradox of stasis.

    PubMed

    Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel

    2017-11-01

    Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modeling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date - repeatedly described as more evolutionarily stable than expected - so this skewness should be accounted for when investigating evolutionary dynamics in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
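
    The Robertson-Price identity mentioned above states that the selection differential equals the covariance between relative fitness and the trait, S = cov(w, z). A toy simulation, under assumed gamma-distributed (right-skewed) phenotypes and Gaussian stabilizing fitness, shows a nonzero directional differential arising purely from skew:

    ```python
    # Robertson-Price identity on a skewed trait under stabilizing selection.
    # All distributions here are illustrative assumptions, not the paper's model.
    import numpy as np

    rng = np.random.default_rng(1)
    z = rng.gamma(shape=2.0, scale=1.0, size=50_000)   # right-skewed trait
    w_abs = np.exp(-0.5 * (z - z.mean())**2)           # Gaussian fitness, optimum at mean
    w = w_abs / w_abs.mean()                           # relative fitness

    S = np.cov(w, z)[0, 1]   # selection differential via Robertson-Price
    print(S)   # nonzero even with the optimum at the mean: the skew at work
    ```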

  3. Reward skewness coding in the insula independent of probability and loss

    PubMed Central

    Tobler, Philippe N.

    2011-01-01

    Rewards in the natural environment are rarely predicted with complete certainty. Uncertainty relating to future rewards has typically been defined as the variance of the potential outcomes. However, the asymmetry of predicted reward distributions, known as skewness, constitutes a distinct but neuroscientifically underexplored risk term that may also have an impact on preference. By changing only reward magnitudes, we study skewness processing in equiprobable ternary lotteries involving only gains and constant probabilities, thus excluding probability distortion or loss aversion as mechanisms for skewness preference formation. We show that individual preferences are sensitive to not only the mean and variance but also to the skewness of predicted reward distributions. Using neuroimaging, we show that the insula, a structure previously implicated in the processing of reward-related uncertainty, responds to the skewness of predicted reward distributions. Some insula responses increased in a monotonic fashion with skewness (irrespective of individual skewness preferences), whereas others were similarly elevated to both negative and positive as opposed to no reward skew. These data support the notion that the asymmetry of reward distributions is processed in the brain and, taken together with replicated findings of mean coding in the striatum and variance coding in the cingulate, suggest that the brain codes distinct aspects of reward distributions in a distributed fashion. PMID:21849610

  4. Nightly biting cycles of malaria vectors in a heterogeneous transmission area of eastern Amazonian Brazil

    PubMed Central

    2013-01-01

    Background The biting cycle of anopheline mosquitoes is an important component in the transmission of malaria. Inter- and intraspecific biting patterns of anophelines have been investigated using the number of mosquitoes caught over time to compare general tendencies in host-seeking activity and cumulative catch. In this study, all-night biting catch data from 32 consecutive months of collections in three riverine villages were used to compare biting cycles of the five most abundant vector species using common statistics to quantify variability and deviations of nightly catches from a normal distribution. Methods Three communities were selected for study. All-night human landing catches of mosquitoes were made each month in the peridomestic environment of four houses (sites) for nine consecutive days from April 2003 to November 2005. Host-seeking activities of the five most abundant species that were previously captured infected with Plasmodium falciparum, Plasmodium malariae or Plasmodium vivax were analysed and compared by measuring the amount of variation in numbers biting per unit time (coefficient of variation, V), the degree to which the numbers of individuals per unit time were asymmetrical (skewness = g1) and the relative peakedness or flatness of the distribution (kurtosis = g2). To analyse variation in V, g1, and g2 within species and villages, we used mixed model nested ANOVAs (PROC GLM in SAS) with independent variables (sources of variation): year, month (year), night (year × month) and collection site (year × month). Results The biting cycles of the most abundant species, Anopheles darlingi, had the least pronounced biting peaks, the lowest mean V values, and typically non-significant departures from normality in g1 and g2. By contrast, the species with the most sharply defined crepuscular biting peaks, Anopheles marajoara, Anopheles nuneztovari and Anopheles triannulatus, showed high to moderate mean V values and, most commonly, significantly positive skewness (g1) and kurtosis (g2) moments. Anopheles intermedius was usually, but not always, crepuscular in host seeking, and showed moderate mean V values and typically positive skewness and kurtosis. Among sites within villages, significant differences in frequencies of departures from normality (g1 and g2) were detected for An. marajoara and An. darlingi, suggesting that local environments, such as host availability, may affect the shape of biting pattern curves of these two species. Conclusions Analyses of coefficients of variation, skewness and kurtosis facilitated quantitative comparisons of host-seeking activity patterns that differ among species, sites, villages, and dates. The variable and heterogeneous nightly host-seeking behaviours of the five exophilic vector species contribute to the maintenance of stable malaria transmission in these Amazonian villages. The abundances of An. darlingi and An. marajoara, their propensities to seek hosts throughout the night, and their ability to adapt host-seeking behaviour to local environments contribute to their impact as the most important of these vector species. PMID:23890413
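
    The three statistics used in this analysis are straightforward to compute for one night of hourly catch counts. A sketch with hypothetical counts; the study's data are not reproduced here:

    ```python
    # Coefficient of variation (V), skewness (g1) and kurtosis (g2) of a
    # nightly biting cycle. Hypothetical hourly landing catches, 18:00-06:00.
    import numpy as np
    from scipy import stats

    catch = np.array([5, 42, 88, 30, 12, 9, 7, 6, 8, 10, 15, 35, 20])  # hypothetical

    V = catch.std(ddof=1) / catch.mean()          # coefficient of variation
    g1 = stats.skew(catch, bias=False)            # sample skewness
    g2 = stats.kurtosis(catch, bias=False)        # excess kurtosis (0 for normal)
    print(f"V={V:.2f}, g1={g1:.2f}, g2={g2:.2f}") # sharp crepuscular peak -> g1, g2 > 0
    ```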

  5. Baseline MNREAD Measures for Normally Sighted Subjects From Childhood to Old Age

    PubMed Central

    Calabrèse, Aurélie; Cheong, Allen M. Y.; Cheung, Sing-Hang; He, Yingchen; Kwon, MiYoung; Mansfield, J. Stephen; Subramanian, Ahalya; Yu, Deyue; Legge, Gordon E.

    2016-01-01

    Purpose The continuous-text reading-acuity test MNREAD is designed to measure the reading performance of people with normal and low vision. This test is used to estimate maximum reading speed (MRS), critical print size (CPS), reading acuity (RA), and the reading accessibility index (ACC). Here we report the age dependence of these measures for normally sighted individuals, providing baseline data for MNREAD testing. Methods We analyzed MNREAD data from 645 normally sighted participants ranging in age from 8 to 81 years. The data were collected in several studies conducted by different testers and at different sites in our research program, enabling evaluation of robustness of the test. Results Maximum reading speed and reading accessibility index showed a trilinear dependence on age: first increasing from 8 to 16 years (MRS: 140–200 words per minute [wpm]; ACC: 0.7–1.0); then stabilizing in the range of 16 to 40 years (MRS: 200 ± 25 wpm; ACC: 1.0 ± 0.14); and decreasing to 175 wpm and 0.88 by 81 years. Critical print size was constant from 8 to 23 years (0.08 logMAR), increased slowly until 68 years (0.21 logMAR), and then more rapidly until 81 years (0.34 logMAR). logMAR reading acuity improved from −0.1 at 8 years to −0.18 at 16 years, then gradually worsened to −0.05 at 81 years. Conclusions We found a weak dependence of the MNREAD parameters on age in normal vision. In broad terms, MNREAD performance exhibits differences between three age groups: children 8 to 16 years, young adults 16 to 40 years, and middle-aged to older adults >40 years. PMID:27442222

  6. Hospital ownership and financial performance: what explains the different findings in the empirical literature?

    PubMed

    Shen, Yu-Chu; Eggleston, Karen; Lau, Joseph; Schmid, Christopher H

    2007-01-01

    This study applies meta-analytic methods to conduct a quantitative review of the empirical literature on hospital ownership since 1990. We examine four financial outcomes across 40 studies: cost, revenue, profit margin, and efficiency. We find that variation in the magnitudes of ownership effects can be explained by a study's research focus and methodology. Studies using empirical methods that control for few confounding factors tend to find larger differences between for-profit and not-for-profit hospitals than studies that control for a wider range of confounding factors. Functional form and sample size also matter. Failure to apply log transformation to highly skewed expenditure data yields misleadingly large estimated differences between for-profits and not-for-profits. Studies with fewer than 200 observations also produce larger point estimates and wide confidence intervals.

  7. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
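
    As an example of category (IV), skewed parametric distributions can be fitted to cost data and compared by likelihood. A sketch on simulated costs; a real analysis would also address zeros (category VI) and covariates:

    ```python
    # Fitting two skewed candidate distributions to cost data and comparing fits.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    costs = rng.lognormal(mean=7.0, sigma=1.2, size=500)   # simulated skewed costs

    ln_shape, ln_loc, ln_scale = stats.lognorm.fit(costs, floc=0)
    g_shape, g_loc, g_scale = stats.gamma.fit(costs, floc=0)

    # Compare by total log-likelihood (higher is better).
    ll_ln = stats.lognorm.logpdf(costs, ln_shape, ln_loc, ln_scale).sum()
    ll_g = stats.gamma.logpdf(costs, g_shape, g_loc, g_scale).sum()
    print(f"log-normal LL={ll_ln:.1f}, gamma LL={ll_g:.1f}")
    ```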

  8. Classical fragile-X phenotype in a female infant disclosed by comprehensive genomic studies.

    PubMed

    Jorge, Paula; Garcia, Elsa; Gonçalves, Ana; Marques, Isabel; Maia, Nuno; Rodrigues, Bárbara; Santos, Helena; Fonseca, Jacinta; Soares, Gabriela; Correia, Cecília; Reis-Lima, Margarida; Cirigliano, Vincenzo; Santos, Rosário

    2018-05-10

    We describe a female infant with Fragile-X syndrome, with a fully expanded FMR1 allele and preferential inactivation of the homologous X-chromosome carrying a de novo deletion. This unusual and rare case demonstrates the importance of a detailed genomic approach, the absence of which could be misguiding, and calls for reflection on the current clinical and diagnostic workup for developmental disabilities. We present a female infant, referred for genetic testing due to psychomotor developmental delay without specific dysmorphic features or relevant family history. FMR1 mutation screening revealed a methylated full mutation and a normal but inactive FMR1 allele, which led to further investigation. Complete skewing of X-chromosome inactivation towards the paternally-inherited normal-sized FMR1 allele was found. No pathogenic variants were identified in the XIST promoter. Microarray analysis revealed a 439 kb deletion at Xq28, in a region known to be associated with extreme skewing of X-chromosome inactivation. Overall results enable us to conclude that the developmental delay is the cumulative result of a methylated FMR1 full mutation on the active X-chromosome and the inactivation of the other homologue carrying the de novo 439 kb deletion. Our findings should be taken into consideration in future guidelines for the diagnostic workup on the diagnosis of intellectual disabilities, particularly in female infant cases.

  9. Activities and summary statistics of radon-222 in stream- and ground-water samples, Owl Creek basin, north-central Wyoming, September 1991 through March 1992

    USGS Publications Warehouse

    Ogle, K.M.; Lee, R.W.

    1994-01-01

    Radon-222 activity was measured for 27 water samples from streams, an alluvial aquifer, bedrock aquifers, and a geothermal system, in and near the 510-square-mile area of Owl Creek Basin, north-central Wyoming. Summary statistics of the radon-222 activities are compiled. For 16 stream-water samples, the arithmetic mean radon-222 activity was 20 pCi/L (picocuries per liter), the geometric mean activity was 7 pCi/L, the harmonic mean activity was 2 pCi/L and the median activity was 8 pCi/L. The standard deviation about the arithmetic mean is 29 pCi/L. The activities in the stream-water samples ranged from 0.4 to 97 pCi/L. The histogram of stream-water samples is right-skewed (positively skewed) when compared to a normal distribution, as the ordering harmonic mean < geometric mean < median < arithmetic mean indicates. For 11 ground-water samples, the arithmetic mean radon-222 activity was 486 pCi/L, the geometric mean activity was 280 pCi/L, the harmonic mean activity was 130 pCi/L and the median activity was 373 pCi/L. The standard deviation about the arithmetic mean is 500 pCi/L. The activity in the ground-water samples ranged from 25 to 1,704 pCi/L. The histogram of ground-water samples is likewise right-skewed when compared to a normal distribution. (USGS)
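
    For positive right-skewed data such as these, harmonic mean ≤ geometric mean ≤ arithmetic mean, which is exactly the ordering reported for both sample sets. A sketch of the four summary statistics on hypothetical activities:

    ```python
    # Arithmetic, geometric and harmonic means plus median for skewed data.
    import numpy as np
    from scipy import stats

    radon = np.array([0.4, 2, 5, 7, 8, 9, 12, 15, 20, 25,
                      30, 38, 45, 60, 80, 97])        # hypothetical pCi/L values

    print(radon.mean())        # arithmetic mean, pulled up by the right tail
    print(stats.gmean(radon))  # geometric mean
    print(stats.hmean(radon))  # harmonic mean, dominated by the small values
    print(np.median(radon))    # median
    ```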

  10. How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.

    PubMed

    Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J

    2014-09-01

    Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
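
    Of the selectors compared above, the two rules of thumb are available directly in scipy's gaussian_kde; the plug-in and cross-validation selectors require other packages. A sketch comparing recovery of a known lognormal density:

    ```python
    # Comparing KDE bandwidth rules against a known true density.
    import numpy as np
    from scipy.stats import gaussian_kde, lognorm

    rng = np.random.default_rng(3)
    x = rng.lognormal(0.0, 0.8, size=100)       # skewed sample
    grid = np.linspace(0.01, 8, 400)
    true_pdf = lognorm.pdf(grid, 0.8)           # matching lognormal density

    for bw in ("scott", "silverman", 0.1):      # 0.1: deliberately undersmoothed
        kde = gaussian_kde(x, bw_method=bw)
        # Integrated squared error via a simple Riemann sum on the grid.
        ise = ((kde(grid) - true_pdf) ** 2).sum() * (grid[1] - grid[0])
        print(bw, round(ise, 4))
    ```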

  11. Choriocapillaris Flow Features Follow a Power Law Distribution: Implications for Characterization and Mechanisms of Disease Progression.

    PubMed

    Spaide, Richard F

    2016-10-01

    To investigate flow characteristics of the choriocapillaris using optical coherence tomography angiography. Retrospective observational case series. Visualization of flow in individual choriocapillary vessels is below the current resolution limit of optical coherence tomography angiography instruments, but areas of absent flow signal, called flow voids, are resolvable. The central macula was imaged with the Optovue RTVue XR Avanti using a 10-μm slab thickness in 104 eyes of 80 patients ranging in age from 24 to 99 years. The raw data were thresholded automatically and locally with the Phansalkar method and analyzed with generalized estimating equations. The distribution of flow-void sizes was highly skewed. The data showed a linear log-log plot, and goodness-of-fit methods showed the data followed a power law distribution over the relevant range. A slope-intercept relationship was also evaluated for the log transform, and significant predictors included age, hypertension, pseudodrusen, and the presence of late age-related macular degeneration (AMD) in the fellow eye. The pattern of flow voids forms a scale-invariant pattern in the choriocapillaris starting at a size much smaller than a choroidal lobule. Age and hypertension affect the choriocapillaris, a flat layer of capillaries that may serve as an observable surrogate for the neural or systemic microvasculature. Significant alterations detectable in the flow pattern in eyes with pseudodrusen and in eyes with late AMD in the fellow eye offer diagnostic possibilities and impact theories of disease pathogenesis. Copyright © 2016 Elsevier Inc. All rights reserved.
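
    The power-law check described above can be approximated with a log-log histogram regression. A sketch on synthetic sizes; rigorous power-law fitting would use maximum-likelihood estimators rather than a least-squares slope:

    ```python
    # Log-log regression as a quick power-law check on size data.
    import numpy as np

    rng = np.random.default_rng(4)
    sizes = (rng.pareto(a=1.8, size=20_000) + 1) * 100.0   # synthetic void sizes

    counts, edges = np.histogram(sizes, bins=np.logspace(2, 5, 30))
    centers = np.sqrt(edges[:-1] * edges[1:])              # geometric bin centres
    keep = counts > 0

    slope, intercept = np.polyfit(np.log10(centers[keep]),
                                  np.log10(counts[keep]), 1)
    # A straight line in log-log space is the power-law signature; with
    # log-spaced bins the slope here is approximately -a (about -1.8).
    print(slope)
    ```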

  12. Estimating magnitude and frequency of floods using the PeakFQ 7.0 program

    USGS Publications Warehouse

    Veilleux, Andrea G.; Cohn, Timothy A.; Flynn, Kathleen M.; Mason, Jr., Robert R.; Hummel, Paul R.

    2014-01-01

    Flood-frequency analysis provides information about the magnitude and frequency of flood discharges based on records of annual maximum instantaneous peak discharges collected at streamgages. The information is essential for defining flood-hazard areas, for managing floodplains, and for designing bridges, culverts, dams, levees, and other flood-control structures. Bulletin 17B (B17B) of the Interagency Advisory Committee on Water Data (IACWD, 1982) codifies the standard methodology for conducting flood-frequency studies in the United States. B17B specifies that annual peak-flow data are to be fit to a log-Pearson Type III distribution. Specific methods are also prescribed for improving skew estimates using regional skew information, tests for high and low outliers, adjustments for low outliers and zero flows, and procedures for incorporating historical flood information. The authors of B17B identified various needs for methodological improvement and recommended additional study. In response to these needs, the Advisory Committee on Water Information (ACWI, successor to IACWD; http://acwi.gov/) Subcommittee on Hydrology (SOH), Hydrologic Frequency Analysis Work Group (HFAWG), has recommended modest changes to B17B. These changes include adoption of a generalized method-of-moments estimator denoted the Expected Moments Algorithm (EMA) (Cohn and others, 1997) and a generalized version of the Grubbs-Beck test for low outliers (Cohn and others, 2013). The SOH requested that the USGS implement these changes in a user-friendly, publicly accessible program.
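
    The B17B baseline that PeakFQ implements, before EMA, regional-skew weighting, and low-outlier handling, is a method-of-moments fit of the Pearson Type III distribution to the logarithms of the annual peaks. A sketch with hypothetical peak flows:

    ```python
    # Classical log-Pearson Type III fit by the method of moments
    # (no EMA, regional skew, or outlier screening). Peaks are hypothetical.
    import numpy as np
    from scipy import stats

    peaks = np.array([3200, 1800, 4500, 2100, 9800, 1500, 2700, 6100,
                      3900, 2300, 5200, 1900, 3400, 7600, 2800])  # cfs, hypothetical

    logq = np.log10(peaks)
    m, s = logq.mean(), logq.std(ddof=1)
    g = stats.skew(logq, bias=False)                 # station skew of the logs

    # 100-year flood: the 0.99 quantile of Pearson III on the log scale.
    q100 = 10 ** stats.pearson3.ppf(0.99, g, loc=m, scale=s)
    print(round(q100))
    ```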

  13. Gradual multifractal reconstruction of time-series: Formulation of the method and an application to the coupling between stock market indices and their Hölder exponents

    NASA Astrophysics Data System (ADS)

    Keylock, Christopher J.

    2018-04-01

    A technique termed gradual multifractal reconstruction (GMR) is formulated. A continuum is defined from a surrogate that preserves the pointwise Hölder exponent (multifractal) structure of a signal but randomises the locations of the original data values with respect to this structure (φ = 0), to the original signal itself (φ = 1). We demonstrate that this continuum may be populated with synthetic time series by undertaking selective randomisation of wavelet phases using a dual-tree complex wavelet transform. That is, the φ = 0 end of the continuum is realised using the recently proposed iterated, amplitude-adjusted wavelet transform algorithm (Keylock, 2017) that fully randomises the wavelet phases. This is extended to the GMR formulation by selective phase randomisation depending on whether or not the wavelet coefficient amplitudes exceed a threshold criterion. An econophysics application of the technique is presented. The relations between the normalised log-returns and their Hölder exponents are compared for the daily returns of eight financial indices. One particularly noticeable result is the change for the two American indices (NASDAQ 100 and S&P 500) from a non-significant to a strongly significant (as determined using GMR) cross-correlation between the returns and their Hölder exponents from before the 2008 crash to afterwards. This is also reflected in the skewness of the phase difference distributions, which exhibit a geographical structure, with Asian markets not exhibiting significant skewness in contrast to those from elsewhere globally.
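
    The thresholded phase-randomisation idea can be illustrated with a plain FFT in place of the paper's dual-tree complex wavelet transform; this toy analogue randomises phases only where coefficient amplitudes fall below a quantile threshold:

    ```python
    # Toy analogue of GMR-style selective phase randomisation using an FFT.
    # The paper operates on dual-tree complex wavelet coefficients; a plain
    # Fourier basis is substituted here purely to illustrate the idea.
    import numpy as np

    def selective_phase_randomise(x, amp_quantile=0.9, seed=0):
        rng = np.random.default_rng(seed)
        X = np.fft.rfft(x)
        amp = np.abs(X)
        low = amp < np.quantile(amp, amp_quantile)   # weak coefficients only
        low[0] = False                               # keep the mean (DC) untouched
        phase = np.angle(X)
        phase[low] = rng.uniform(-np.pi, np.pi, low.sum())
        return np.fft.irfft(amp * np.exp(1j * phase), n=len(x))

    x = np.cumsum(np.random.default_rng(1).standard_normal(1024))  # toy log-price path
    surrogate = selective_phase_randomise(x, amp_quantile=0.9)
    ```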

  14. A comparison of per sample global scaling and per gene normalization methods for differential expression analysis of RNA-seq data.

    PubMed

    Li, Xiaohong; Brock, Guy N; Rouchka, Eric C; Cooper, Nigel G F; Wu, Dongfeng; O'Toole, Timothy E; Gill, Ryan S; Eteleeb, Abdallah M; O'Brien, Liz; Rai, Shesh N

    2017-01-01

    Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic Curve (AUC), a specificity rate > 85%, the detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with a control of the nominal FDR level.
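
    The global-scaling stage of the proposed methods is simple to sketch. Below, each sample's counts are divided by its upper quartile of nonzero counts; the subsequent per-gene (pgQ2) step is only summarised in the abstract, so it is not reproduced:

    ```python
    # Per-sample upper-quartile (UQ) global scaling of an RNA-seq count matrix.
    # A minimal sketch; the per-gene second stage of UQ-pgQ2 is omitted.
    import numpy as np

    counts = np.array([[10, 200, 0,  50],      # rows: genes, columns: samples
                       [ 5, 150, 2,  40],      # (hypothetical counts)
                       [ 0,  90, 1,  20],
                       [25, 500, 8, 120]])

    uq = np.array([np.percentile(c[c > 0], 75) for c in counts.T])  # per-sample UQ
    scaled = counts / uq                        # broadcast division across samples
    print(scaled.round(3))
    ```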

  15. A comparison of per sample global scaling and per gene normalization methods for differential expression analysis of RNA-seq data

    PubMed Central

    Li, Xiaohong; Brock, Guy N.; Rouchka, Eric C.; Cooper, Nigel G. F.; Wu, Dongfeng; O’Toole, Timothy E.; Gill, Ryan S.; Eteleeb, Abdallah M.; O’Brien, Liz

    2017-01-01

    Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic Curve (AUC), a specificity rate > 85%, the detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with a control of the nominal FDR level. PMID:28459823

  16. Body fat assessed from body density and estimated from skinfold thickness in normal children and children with cystic fibrosis.

    PubMed

    Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A

    1988-12-01

    Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for estimating the body-fat content of the normal boys and girls from skinfold measurements were derived from linear regression of body density vs the log of the sum of the skinfold thicknesses. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density, even after their large residual volume was corrected for. However, the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus the body fat percentage of these children with cystic fibrosis was underestimated when calculated from body density and invalid when calculated from skinfold thickness.
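
    Percent body fat is conventionally obtained from body density with Siri's (1961) equation, the usual companion to density-vs-log-skinfold regressions of this kind. A sketch with hypothetical regression coefficients, since the study's fitted equations are not given in the abstract:

    ```python
    # Body fat from density via Siri's equation, with a hypothetical
    # density-vs-log10(sum of skinfolds) regression feeding it.
    import math

    def siri_percent_fat(density_g_per_ml: float) -> float:
        """%fat = 495 / density - 450 (Siri, 1961)."""
        return 495.0 / density_g_per_ml - 450.0

    def density_from_skinfolds(sum_sf_mm: float,
                               a: float = 1.16, b: float = 0.063) -> float:
        """D = a - b * log10(sum of skinfolds); a and b are hypothetical."""
        return a - b * math.log10(sum_sf_mm)

    d = density_from_skinfolds(40.0)      # 40 mm total skinfold, hypothetical
    print(round(siri_percent_fat(d), 1))  # ~17% body fat for these inputs
    ```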

  17. Guidelines for determining flood flow frequency—Bulletin 17C

    USGS Publications Warehouse

    England, John F.; Cohn, Timothy A.; Faber, Beth A.; Stedinger, Jery R.; Thomas, Wilbert O.; Veilleux, Andrea G.; Kiang, Julie E.; Mason, Robert R.

    2018-03-29

    Accurate estimates of flood frequency and magnitude are a key component of any effective nationwide flood risk management and flood damage abatement program. In addition to accuracy, methods for estimating flood risk must be uniformly and consistently applied because management of the Nation’s water and related land resources is a collaborative effort involving multiple actors including most levels of government and the private sector. Flood frequency guidelines have been published in the United States since 1967 and have undergone periodic revisions. In 1967, the U.S. Water Resources Council presented a coherent approach to flood frequency with Bulletin 15, “A Uniform Technique for Determining Flood Flow Frequencies.” The method it recommended involved fitting the log-Pearson Type III distribution to annual peak flow data by the method of moments. The first extension and update of Bulletin 15 was published in 1976 as Bulletin 17, “Guidelines for Determining Flood Flow Frequency” (Guidelines). It extended the Bulletin 15 procedures by introducing methods for dealing with outliers, historical flood information, and regional skew. Bulletin 17A was published the following year to clarify the computation of weighted skew. The next revision, Bulletin 17B, provided a host of improvements and new techniques designed to address situations that often arise in practice, including better methods for estimating and using regional skew, weighting station and regional skew, detection of outliers, and use of the conditional probability adjustment. The current version of these Guidelines is presented in this document, denoted Bulletin 17C. It incorporates changes motivated by four of the items listed as “Future Work” in Bulletin 17B and 30 years of post-17B research on flood processes and statistical methods. The updates include: adoption of a generalized representation of flood data that allows for interval and censored data types; a new method, called the Expected Moments Algorithm, which extends the method of moments so that it can accommodate interval data; a generalized approach to identification of low outliers in flood data; and an improved method for computing confidence intervals. Federal agencies are requested to use these Guidelines in all planning activities involving water and related land resources. State, local, and private organizations are encouraged to use these Guidelines to assure uniformity in the flood frequency estimates that all agencies concerned with flood risk should use for Federal planning decisions. This revision is adopted with the knowledge and understanding that review of these procedures will be ongoing. Updated methods will be adopted when warranted by experience and by examination and testing of new techniques.

  18. Measurement of the distribution of ventilation-perfusion ratios in the human lung with proton MRI: comparison with the multiple inert-gas elimination technique.

    PubMed

    Sá, Rui Carlos; Henderson, A Cortney; Simonson, Tatum; Arai, Tatsuya J; Wagner, Harrieth; Theilmann, Rebecca J; Wagner, Peter D; Prisk, G Kim; Hopkins, Susan R

    2017-07-01

    We have developed a novel functional proton magnetic resonance imaging (MRI) technique to measure regional ventilation-perfusion (V̇A/Q̇) ratio in the lung. We conducted a comparison study of this technique in healthy subjects (n = 7, age = 42 ± 16 yr, forced expiratory volume in 1 s = 94% predicted), by comparing data measured using MRI to that obtained from the multiple inert gas elimination technique (MIGET). Regional ventilation measured in a sagittal lung slice using Specific Ventilation Imaging was combined with proton density measured using a fast gradient-echo sequence to calculate regional alveolar ventilation, registered with perfusion images acquired using arterial spin labeling, and divided on a voxel-by-voxel basis to obtain regional V̇A/Q̇ ratio. LogSDV̇ and LogSDQ̇, measures of heterogeneity derived from the standard deviation (log scale) of the ventilation and perfusion vs. V̇A/Q̇ ratio histograms respectively, were calculated. On a separate day, subjects underwent study with MIGET and LogSDV̇ and LogSDQ̇ were calculated from MIGET data using the 50-compartment model. MIGET LogSDV̇ and LogSDQ̇ were normal in all subjects. LogSDQ̇ was highly correlated between MRI and MIGET (R = 0.89, P = 0.007); the intercept was not significantly different from zero (-0.062, P = 0.65) and the slope did not significantly differ from identity (1.29, P = 0.34). MIGET and MRI measures of LogSDV̇ were well correlated (R = 0.83, P = 0.02); the intercept differed from zero (0.20, P = 0.04) and the slope deviated from the line of identity (0.52, P = 0.01). We conclude that in normal subjects, there is a reasonable agreement between MIGET measures of heterogeneity and those from proton MRI measured in a single slice of lung. NEW & NOTEWORTHY We report a comparison of a new proton MRI technique to measure regional V̇A/Q̇ ratio against the multiple inert gas elimination technique (MIGET). The study reports good relationships between measures of heterogeneity derived from MIGET and those derived from MRI. Although currently limited to a single slice acquisition, these data suggest that single sagittal slice measures of V̇A/Q̇ ratio provide an adequate means to assess heterogeneity in the normal lung. Copyright © 2017 the American Physiological Society.

  19. Utility functions predict variance and skewness risk preferences in monkeys

    PubMed Central

    Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram

    2016-01-01

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743

  20. Utility functions predict variance and skewness risk preferences in monkeys.

    PubMed

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences.

  1. Measurement and Analysis of Failures in Computer Systems

    NASA Technical Reports Server (NTRS)

    Thakur, Anshuman

    1997-01-01

    This thesis presents a study of software failures spanning several different releases of Tandem's NonStop-UX operating system running on Tandem Integrity S2(TMR) systems. NonStop-UX is based on UNIX System V and is fully compliant with industry standards, such as the X/Open Portability Guide, the IEEE POSIX standards, and the System V Interface Definition (SVID) extensions. In addition to providing a general UNIX interface to the hardware, the operating system has built-in recovery mechanisms and audit routines that check the consistency of the kernel data structures. The analysis is based on data on software failures and repairs collected from Tandem's product report (TPR) logs for a period exceeding three years. A TPR log is created when a customer or an internal developer observes a failure in a Tandem Integrity system. This study concentrates primarily on those TPRs that report a UNIX panic that subsequently crashes the system. Approximately 200 of the TPRs fall into this category. Approximately 50% of the failures reported are from field systems, and the rest are from the testing and development sites. It has been observed by Tandem developers that fewer cases are encountered from the field than from the test centers. Thus, the data selection mechanism has introduced a slight skew.

  2. Investigating the Investigative Task: Testing for Skewness--An Investigation of Different Test Statistics and Their Power to Detect Skewness

    ERIC Educational Resources Information Center

    Tabor, Josh

    2010-01-01

    On the 2009 AP Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
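
    Two natural candidate statistics for such a task are the moment-based coefficient g1 and Pearson's median-based coefficient; whether these match the popular student responses is not stated in the abstract. A quick comparison on a right-skewed sample:

    ```python
    # Two skewness statistics compared on a right-skewed sample.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    sample = rng.exponential(scale=2.0, size=200)    # right-skewed population

    g1 = stats.skew(sample, bias=False)                       # moment-based
    pearson2 = 3 * (sample.mean() - np.median(sample)) \
               / sample.std(ddof=1)                           # median-based
    print(f"g1={g1:.2f}, Pearson={pearson2:.2f}")  # both positive for right skew
    ```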

  3. Social and genetic structure of paper wasp cofoundress associations: tests of reproductive skew models.

    PubMed

    Field, J; Solís, C R; Queller, D C; Strassmann, J E

    1998-06-01

    Recent models postulate that the members of a social group assess their ecological and social environments and agree a "social contract" of reproductive partitioning (skew). We tested social contracts theory by using DNA microsatellites to measure skew in 24 cofoundress associations of paper wasps, Polistes bellicosus. In contrast to theoretical predictions, there was little variation in cofoundress relatedness, and relatedness either did not predict skew or was negatively correlated with it; the dominant/subordinate size ratio, assumed to reflect relative fighting ability, did not predict skew; and high skew was associated with decreased aggression by the rank 2 subordinate toward the dominant. High skew was associated with increased group size. A difficulty with measuring skew in real systems is the frequent changes in group composition that commonly occur in social animals. In P. bellicosus, 61% of egg layers and an unknown number of non-egg layers were absent by the time nests were collected. The social contracts models provide an attractive general framework linking genetics, ecology, and behavior, but there have been few direct tests of their predictions. We question assumptions underlying the models and suggest directions for future research.

  4. A Language-Independent Approach to Automatic Text Difficulty Assessment for Second-Language Learners

    DTIC Science & Technology

    2013-08-01

    best-suited for regression. Our baseline uses z-normalized shallow length features and TF-LOG weighted vectors on bag-of-words for Arabic, Dari, English and Pashto. We compare Support Vector Machines and the Margin… …football, whereas they are much less common in documents about opera). We used TF-LOG weighted word frequencies on bag-of-words for each document

  5. The effect of different intensity measures and earthquake directions on the seismic assessment of skewed highway bridges

    NASA Astrophysics Data System (ADS)

    Bayat, M.; Daneshjoo, F.; Nisticò, N.

    2017-01-01

    In this study the probable seismic behavior of skewed bridges with continuous decks under earthquake excitations from different directions is investigated. A 45° skewed bridge is studied. A suite of 20 records is used to perform an Incremental Dynamic Analysis (IDA) for fragility curves. Four different earthquake directions have been considered: -45°, 0°, 22.5°, 45°. A sensitivity analysis on different spectral intensity measures is presented; the efficiency and practicality of different intensity measures have been studied. The fragility curves obtained indicate that the critical directions for skewed bridges are the skew direction and the longitudinal direction. The study shows the importance of finding the most critical earthquake direction in understanding and predicting the behavior of skewed bridges.

  6. Ventilation-perfusion distribution in normal subjects.

    PubMed

    Beck, Kenneth C; Johnson, Bruce D; Olson, Thomas P; Wilson, Theodore A

    2012-09-01

    Functional values of LogSD of the ventilation distribution (σ(V)) have been reported previously, but functional values of LogSD of the perfusion distribution (σ(q)) and the coefficient of correlation between ventilation and perfusion (ρ) have not been measured in humans. Here, we report values for σ(V), σ(q), and ρ obtained from wash-in data for three gases, helium and two soluble gases, acetylene and dimethyl ether. Normal subjects inspired gas containing the test gases, and the concentrations of the gases at end-expiration during the first 10 breaths were measured with the subjects at rest and at increasing levels of exercise. The regional distribution of ventilation and perfusion was described by a bivariate log-normal distribution with parameters σ(V), σ(q), and ρ, and these parameters were evaluated by matching the values of expired gas concentrations calculated for this distribution to the measured values. Values of cardiac output and LogSD ventilation/perfusion (Va/Q) were obtained. At rest, σ(q) is high (1.08 ± 0.12). With the onset of ventilation, σ(q) decreases to 0.85 ± 0.09 but remains higher than σ(V) (0.43 ± 0.09) at all exercise levels. Rho increases to 0.87 ± 0.07, and the value of LogSD Va/Q for light and moderate exercise is primarily the result of the difference between the magnitudes of σ(q) and σ(V). With known values for the parameters, the bivariate distribution describes the comprehensive distribution of ventilation and perfusion that underlies the distribution of the Va/Q ratio.
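
    Given the bivariate log-normal model, LogSD V̇a/Q̇ follows directly from σ(V), σ(q), and ρ, since log(Va/Q) is a difference of correlated normals. A sketch using the exercise values reported above:

    ```python
    # LogSD Va/Q from the bivariate log-normal parameters:
    # Var(log V - log Q) = sigma_V**2 + sigma_q**2 - 2*rho*sigma_V*sigma_q.
    import math

    def logsd_va_q(sigma_v: float, sigma_q: float, rho: float) -> float:
        """SD of a difference of correlated normals on the log scale."""
        return math.sqrt(sigma_v**2 + sigma_q**2 - 2 * rho * sigma_v * sigma_q)

    # Exercise values reported above: sigma_V = 0.43, sigma_q = 0.85, rho = 0.87.
    print(round(logsd_va_q(0.43, 0.85, 0.87), 2))
    ```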

  7. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    PubMed

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    Electroencephalography, commonly termed EEG, is considered fundamental for assessing neural activity in the brain. In the cognitive neuroscience domain, EEG-based assessment is found to be superior due to its non-invasive ability to detect deep brain structure while exhibiting superior spatial resolutions. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. The proposed study makes use of wavelet packet based log energy and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions, normal, pre-ictal and epileptic EEG recordings, were considered. An adaptive Wiener filter was initially applied to remove the 50 Hz power-line noise from the raw EEG recordings. The raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. A wavelet packet decomposition using the Haar wavelet with five levels was then introduced, and the two entropies, log energy and norm, were estimated and applied to the REN classifier to perform binary classification. The non-parametric Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. The simulation results showed that wavelet packet log energy entropy with the REN classifier yielded a classification accuracy of 99.70% for normal vs. pre-ictal, 99.70% for normal vs. epileptic and 99.85% for pre-ictal vs. epileptic EEG.
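
    The two entropies can be computed from a wavelet-packet decomposition in a few lines, using the common definitions (log-energy: Σ log s_i²; norm: Σ|s_i|^p). A sketch with PyWavelets, not the authors' code; the Haar wavelet and five levels follow the abstract:

    ```python
    # Wavelet-packet log-energy and norm entropies for a 1 s signal segment.
    import numpy as np
    import pywt

    def wp_entropies(signal, wavelet="haar", level=5, p=1.1):
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        coeffs = np.concatenate([node.data for node in wp.get_level(level)])
        eps = 1e-12                                   # avoid log(0)
        log_energy = np.sum(np.log(coeffs**2 + eps))  # log-energy entropy
        norm_ent = np.sum(np.abs(coeffs) ** p)        # norm entropy, 1 <= p < 2
        return log_energy, norm_ent

    x = np.random.default_rng(6).standard_normal(256)  # stand-in for a 1 s EEG pattern
    print(wp_entropies(x))
    ```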

  8. Somatic cell mutations at the glycophorin A locus in erythrocytes of atomic bomb survivors: Implications for radiation carcinogenesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyoizumi, Seishi; Akiyama, Mitoshi; Tanabe, Kazumi

    To clarify the relationship between somatic cell mutations and radiation exposure, the frequency of hemizygous mutant erythrocytes at the glycophorin A (GPA) locus was measured by flow cytometry for 1,226 heterozygous atomic bomb (A-bomb) survivors in Hiroshima and Nagasaki. For statistical analysis, both GPA mutant frequency and radiation dose were log-transformed to normalize the skewed distributions of these variables. The GPA mutant frequency increased slightly but significantly with age at testing and with the number of cigarettes smoked. Also, mutant frequency was significantly higher in males than in females, even with adjustment for smoking, and was higher in Hiroshima than in Nagasaki. These characteristics of background GPA mutant frequency are qualitatively similar to those of background solid cancer incidence or mortality obtained from previous epidemiological studies of survivors. An analysis of the mutant frequency dose response using a descriptive model showed that the doubling dose is about 1.20 Sv [95% confidence interval (CI): 0.95-1.56], whereas the minimum dose for detecting a significant increase in mutant frequency is about 0.24 Sv (95% CI: 0.041-0.51). No significant effects of sex, city or age at the time of exposure on the dose response were detected. Interestingly, the doubling dose of the GPA mutant frequency was similar to that of solid cancer incidence in A-bomb survivors. This observation is in line with the hypothesis that radiation-induced somatic cell mutations are the major cause of excess cancer risk after radiation. 49 refs., 6 figs., 2 tabs.

  9. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord; Cornett, Frank N.

    2006-04-18

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. Each of the plurality of delayed signals is compared to a reference signal to detect changes in the skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in the detected skew.

  10. Thermal niche for in situ seed germination by Mediterranean mountain streams: model prediction and validation for Rhamnus persicifolia seeds

    PubMed Central

    Porceddu, Marco; Mattana, Efisio; Pritchard, Hugh W.; Bacchetta, Gianluigi

    2013-01-01

    Background and Aims Mediterranean mountain species face exacting ecological conditions of rainy, cold winters and arid, hot summers, which affect seed germination phenology. In this study, a soil heat sum model was used to predict field emergence of Rhamnus persicifolia, an endemic tree species living at the edge of mountain streams of central eastern Sardinia. Methods Seeds were incubated in the light at a range of temperatures (10-25 and 25/10 °C) after different periods (up to 3 months) of cold stratification at 5 °C. Base temperatures (Tb) and thermal times for 50% germination (θ50) were calculated. Seeds were also buried in the soil in two natural populations (Rio Correboi and Rio Olai), both underneath and outside the tree canopy, and exhumed at regular intervals. Soil temperatures were recorded using data loggers and soil heat sum (°Cd) was calculated on the basis of the estimated Tb and soil temperatures. Key Results Cold stratification released physiological dormancy (PD), increasing final germination and widening the range of germination temperatures, indicative of a Type 2 non-deep PD. Tb was reduced from 10.5 °C for non-stratified seeds to 2.7 °C for seeds cold stratified for 3 months. The best thermal time model was obtained by fitting probit germination against log °Cd. θ50 was 2.6 log °Cd for untreated seeds and 2.17-2.19 log °Cd for stratified seeds. When θ50 values were integrated with soil heat sum estimates, field emergence was predicted from March to April and confirmed through field observations. Conclusions Tb and θ50 values facilitated model development of the thermal niche for in situ germination of R. persicifolia. These experimental approaches may be applied to model the natural regeneration patterns of other species growing on Mediterranean mountain waterways and of physiologically dormant species, with overwintering cold stratification requirement and spring germination. PMID:24201139
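
    The fit described above, probit of cumulative germination against log thermal time, reduces to a linear regression; θ50 is the log °Cd at which the probit equals zero. A sketch with hypothetical germination fractions:

    ```python
    # Thermal-time model: probit(germination) regressed on log10(heat sum).
    import numpy as np
    from scipy.stats import norm

    heat_sum = np.array([50, 100, 200, 400, 800, 1600])        # °Cd, hypothetical
    germ_frac = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95]) # hypothetical

    x = np.log10(heat_sum)
    y = norm.ppf(germ_frac)            # probit transform

    slope, intercept = np.polyfit(x, y, 1)
    theta50 = -intercept / slope       # log °Cd giving 50% germination (probit 0)
    print(round(theta50, 2))
    ```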

  11. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
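
    Under binormality with equal variances, the result above gives c = Φ(σβ/√2), equivalently Φ(δ/(σ√2)) for a mean difference δ. A quick Monte Carlo check of the formula against the empirical AUC:

    ```python
    # Binormal c-statistic formula vs. empirical AUC by simulation.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    delta, sigma, n = 1.0, 1.0, 200_000
    x0 = rng.normal(0.0, sigma, n)        # explanatory variable, no condition
    x1 = rng.normal(delta, sigma, n)      # explanatory variable, condition

    predicted_c = norm.cdf(delta / (sigma * np.sqrt(2)))   # binormal formula
    empirical_c = np.mean(x1 > x0)        # AUC = P(case value > control value)
    print(round(predicted_c, 4), round(empirical_c, 4))    # both ~0.76 here
    ```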

  12. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    PubMed

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
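
    A Monte Carlo companion to the derivation above: for i.i.d. log-normal peak heights, the height ratio is itself log-normal with a √2-times-larger log-scale parameter, and it is from the ratio distribution that the minimum-resolution distribution is obtained. Parameters here are illustrative:

    ```python
    # Distribution of the peak-height ratio under log-normal peak heights.
    import numpy as np

    rng = np.random.default_rng(8)
    mu, s = 0.0, 0.8                          # log-scale parameters, illustrative
    h1 = rng.lognormal(mu, s, 100_000)
    h2 = rng.lognormal(mu, s, 100_000)

    log_ratio = np.log(h1 / h2)
    print(log_ratio.std())    # ~ sqrt(2)*0.8: the ratio is log-normal, wider spread
    ```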

  13. Generating log-normal mock catalog of galaxies in redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Aniket; Makiya, Ryu; Saito, Shun

    We present a public code to generate a mock galaxy catalog in redshift space assuming a log-normal probability density function (PDF) of galaxy and matter density fields. We draw galaxies by Poisson-sampling the log-normal field, and calculate the velocity field from the linearised continuity equation of matter fields, assuming zero vorticity. This procedure yields a PDF of the pairwise velocity fields that is qualitatively similar to that of N-body simulations. We check fidelity of the catalog, showing that the measured two-point correlation function and power spectrum in real space agree with the input precisely. We find that a linear bias relation in the power spectrum does not guarantee a linear bias relation in the density contrasts, leading to a cross-correlation coefficient of matter and galaxies deviating from unity on small scales. We also find that linearising the Jacobian of the real-to-redshift space mapping provides a poor model for the two-point statistics in redshift space. That is, non-linear redshift-space distortion is dominated by non-linearity in the Jacobian. The power spectrum in redshift space shows a damping on small scales that is qualitatively similar to that of the well-known Fingers-of-God (FoG) effect due to random velocities, except that the log-normal mock does not include random velocities. This damping is a consequence of non-linearity in the Jacobian, and thus attributing the damping of the power spectrum solely to FoG, as commonly done in the literature, is misleading.
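
    The core log-normal step of such mocks is compact: exponentiate a Gaussian field to get a positive density contrast, then Poisson-sample galaxies. A 1D toy sketch, not the public code itself (no velocity field or redshift-space mapping):

    ```python
    # Log-normal density field plus Poisson sampling, the backbone of a
    # log-normal mock. Field amplitude and mean count are illustrative.
    import numpy as np

    rng = np.random.default_rng(9)
    sigma_g = 0.6                                  # Gaussian field amplitude
    g = rng.normal(0.0, sigma_g, size=512)         # Gaussian density field on a grid

    delta = np.exp(g - sigma_g**2 / 2) - 1.0       # log-normal contrast: mean ~0, > -1
    nbar = 10.0                                    # mean galaxies per cell
    galaxies = rng.poisson(nbar * (1.0 + delta))   # Poisson sampling of the field

    print(delta.mean(), galaxies.mean())           # ~0 and ~nbar, respectively
    ```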

  14. Assessment of visual disability using visual evoked potentials.

    PubMed

    Jeon, Jihoon; Oh, Seiyul; Kyung, Sungeun

    2012-08-06

    The purpose of this study is to validate the use of visual evoked potentials (VEP) to objectively quantify visual acuity in normal and amblyopic patients, and to determine whether it is possible to predict visual acuity in disability assessments for registering visual pathway lesions. A retrospective chart review was conducted of patients diagnosed with normal vision, unilateral amblyopia, optic neuritis, and visual disability who visited the university medical center for registration from March 2007 to October 2009. The study included 20 normal subjects (20 right eyes: 10 females, 10 males, ages 9-42 years), 18 unilateral amblyopic patients (18 amblyopic eyes, ages 19-36 years), 19 optic neuritis patients (19 eyes, ages 9-71 years), and 10 patients with visual disability having visual pathway lesions. Amplitudes and latencies were analyzed, and correlations with visual acuity (logMAR) were derived from the 20 normal and 18 amblyopic subjects. Correlation of VEP amplitude with visual acuity (logMAR) in the 19 optic neuritis patients confirmed the relationship between visual acuity and amplitude. We then calculated the objective visual acuity (logMAR) of 16 eyes from the 10 patients, to diagnose the presence or absence of visual disability, using the relation derived from the 20 normal and 18 amblyopic eyes. Linear regression between the amplitude of pattern visual evoked potentials and visual acuity (logMAR) for the 38 eyes from normal (right eyes) and amblyopic (amblyopic eyes) subjects was significant [y = -0.072x + 1.22, x: VEP amplitude, y: visual acuity (logMAR)]. There were no significant differences between measured visual acuity and the prediction values obtained by substituting the amplitude values of the 19 optic neuritis eyes into this function. Applying the relation y = -0.072x + 1.22 to the 16 eyes of the 10 patients yielded a prediction reference for distinguishing malingering from real disability at amplitudes >5.77 μV. The results could be useful, especially in cases of trauma with no obvious pale disc. Visual acuity quantification using the absolute amplitude of pattern visual evoked potentials was useful for confirming subjective visual acuity, with a cutoff value of >5.77 μV, in disability evaluations to discriminate malingering from real disability.

  15. Assessment of visual disability using visual evoked potentials

    PubMed Central

    2012-01-01

    Background The purpose of this study is to validate the use of visual evoked potentials (VEP) to objectively quantify visual acuity in normal and amblyopic patients, and to determine whether it is possible to predict visual acuity in disability assessments for registering visual pathway lesions. Methods A retrospective chart review was conducted of patients diagnosed with normal vision, unilateral amblyopia, optic neuritis, and visual disability who visited the university medical center for registration from March 2007 to October 2009. The study included 20 normal subjects (20 right eyes: 10 females, 10 males, ages 9–42 years), 18 unilateral amblyopic patients (18 amblyopic eyes, ages 19–36 years), 19 optic neuritis patients (19 eyes, ages 9–71 years), and 10 patients with visual disability having visual pathway lesions. Amplitudes and latencies were analyzed, and correlations with visual acuity (logMAR) were derived from the 20 normal and 18 amblyopic subjects. Correlation of VEP amplitude with visual acuity (logMAR) in the 19 optic neuritis patients confirmed the relationship between visual acuity and amplitude. We then calculated the objective visual acuity (logMAR) of 16 eyes from the 10 patients, to diagnose the presence or absence of visual disability, using the relation derived from the 20 normal and 18 amblyopic eyes. Results Linear regression between the amplitude of pattern visual evoked potentials and visual acuity (logMAR) for the 38 eyes from normal (right eyes) and amblyopic (amblyopic eyes) subjects was significant [y = −0.072x + 1.22, x: VEP amplitude, y: visual acuity (logMAR)]. There were no significant differences between measured visual acuity and the prediction values obtained by substituting the amplitude values of the 19 optic neuritis eyes into this function. Applying the relation y = −0.072x + 1.22 to the 16 eyes of the 10 patients yielded a prediction reference for distinguishing malingering from real disability at amplitudes >5.77 μV. The results could be useful, especially in cases of trauma with no obvious pale disc. Conclusions Visual acuity quantification using the absolute amplitude of pattern visual evoked potentials was useful for confirming subjective visual acuity, with a cutoff value of >5.77 μV, in disability evaluations to discriminate malingering from real disability. PMID:22866948
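
    A minimal sketch of how the reported regression would be applied, taking the fitted relation y = -0.072x + 1.22 and the 5.77 μV cutoff directly from the abstract; the example amplitudes are hypothetical:

    ```python
    def predicted_logmar(amplitude_uv: float) -> float:
        """Predict visual acuity (logMAR) from pattern-VEP amplitude (uV)
        using the regression reported above: y = -0.072x + 1.22."""
        return -0.072 * amplitude_uv + 1.22

    for amp in (3.0, 5.77, 10.0):  # hypothetical amplitudes
        note = "above cutoff: claimed poor acuity is suspect" if amp > 5.77 else "below cutoff"
        print(f"{amp:5.2f} uV -> predicted logMAR {predicted_logmar(amp):+.2f} ({note})")
    ```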

  16. Investigating the detection of multi-homed devices independent of operating systems

    DTIC Science & Technology

    2017-09-01

    Timestamp data were used to estimate clock skews using linear regression and linear optimization methods. Analysis revealed that detection depends on the consistency of the estimated clock skew. Through vertical testing, it was also shown that clock skew consistency depends on the installed operating system.
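
    A sketch of the linear-regression estimator described here, under the common formulation in which clock skew is the slope of the remote-minus-local timestamp offset against local time (the data are synthetic; the report's linear-optimization variant is not shown):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t_local = np.linspace(0.0, 600.0, 300)     # local capture times (s)
    true_skew = 50e-6                          # 50 ppm, hypothetical device skew
    t_remote = t_local * (1 + true_skew) + 0.002 * rng.standard_normal(t_local.size)

    offset = t_remote - t_local
    slope, intercept = np.polyfit(t_local, offset, 1)  # slope estimates the clock skew
    print(f"estimated skew: {slope * 1e6:.1f} ppm")
    ```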

  17. Onset of normal and inverse homoclinic bifurcation in a double plasma system near a plasma fireball

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitra, Vramori; Sarma, Bornali; Sarma, Arun

    Plasma fireballs are generated due to a localized discharge and appear as a luminous glow with a sharp boundary, which suggests the presence of a localized electric field such as an electrical sheath or double layer structure. The present work reports the observation of normal and inverse homoclinic bifurcation phenomena in plasma oscillations that are excited in the presence of a fireball in a double plasma device. The controlling parameters for these observations are the ratio of target to source chamber densities (n_T/n_S) and the applied electrode voltage. Homoclinic bifurcation is noticed in the plasma potential fluctuations as the system evolves from narrow to long time period oscillations and vice versa with the change of control parameter. The dynamical transition in the plasma fireball is demonstrated by spectral analysis, recurrence quantification analysis (RQA), and statistical measures, viz., skewness and kurtosis. The increasing trend of normalized variance reflects that enhancing n_T/n_S induces irregularity in the plasma dynamics. The exponential growth of the time period is strongly indicative of homoclinic bifurcation in the system. The gradual decrease of skewness and increase of kurtosis with the increase of n_T/n_S also reflect growing complexity in the system. The visual change of the recurrence plot and the gradual enhancement of the RQA variables DET, L_max, and ENT reflect the bifurcation behavior in the dynamics. The combination of RQA and spectral analysis is clear evidence that homoclinic bifurcation occurs due to the presence of the plasma fireball with different density ratios. However, inverse bifurcation takes place due to the change of fireball voltage. Some of the features observed in the experiment are consistent with a model that describes the dynamics of ionization instabilities.

  18. Spin Transparent Siberian Snake And Spin Rotator With Solenoids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koop, I. A.; Otboyev, A. V.; Shatunov, P. Yu.

    2007-06-13

    For intermediate energies of electrons and protons, it is often more convenient to construct Siberian snakes and spin rotators using solenoidal fields. The strong coupling caused by the solenoids is suppressed by a number of skew and normal quadrupole magnets. The more complicated problem of the spin transparency of such devices can also be solved. This paper gives two examples: a spin rotator for the electron ring in the eRHIC project, and a Siberian snake for the proton (antiproton) storage ring HESR, which cover the whole working energy region of each machine.

  19. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; and a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
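
    A sketch of the maximum-likelihood step with synthetic storm maxima, assuming a two-parameter log-normal (location fixed at zero); the paper's bootstrap confidence limits are omitted:

    ```python
    import numpy as np
    from scipy.stats import lognorm

    rng = np.random.default_rng(1)
    dst_max = rng.lognormal(4.7, 0.55, 300)              # synthetic -Dst maxima (nT)
    span_years = 56.0                                    # 1957-2012

    shape, loc, scale = lognorm.fit(dst_max, floc=0)     # ML fit of the log-normal
    p_carrington = lognorm.sf(850.0, shape, loc, scale)  # P(-Dst >= 850 nT) per storm
    rate_per_century = p_carrington * (dst_max.size / span_years) * 100.0
    ```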

  20. Evaluation of regulatory variation and theoretical health risk for pesticide maximum residue limits in food.

    PubMed

    Li, Zijian

    2018-08-01

    To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the total nations in the world) and two international organizations, including the European Union (EU) and Codex (WHO), have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs in many commonly consumed foods, and other human exposure pathways, such as soil, water, and air, were not considered. Normality tests of the TMDI value sets indicated that all distributions had right skewness due to large TMDI clusters at the low end of the distribution, caused by the strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data set indicated that the power-transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs across nations worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticides, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
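
    A sketch of the Box-Cox step with SciPy, which selects λ by maximizing the normal log-likelihood (a slightly different criterion from running normality tests over candidate λ values); the right-skewed sample below stands in for a TMDI set:

    ```python
    import numpy as np
    from scipy.stats import boxcox, shapiro

    tmdi = np.random.default_rng(2).lognormal(-2.0, 1.2, 500)  # synthetic right-skewed values

    transformed, lam = boxcox(tmdi)  # optimal lambda by maximum likelihood
    print(f"lambda = {lam:.3f}, Shapiro-Wilk p = {shapiro(transformed).pvalue:.3f}")
    ```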

  1. Image statistics and the perception of surface gloss and lightness.

    PubMed

    Kim, Juno; Anderson, Barton L

    2010-07-01

    Despite previous data demonstrating the critical importance of 3D surface geometry in the perception of gloss and lightness, I. Motoyoshi, S. Nishida, L. Sharan, and E. H. Adelson (2007) recently proposed that a simple image statistic--histogram or sub-band skew--is computed by the visual system to infer the gloss and albedo of surfaces. One key source of evidence used to support this claim was an experiment in which adaptation to skewed image statistics resulted in opponent aftereffects in observers' judgments of gloss and lightness. We report a series of adaptation experiments that were designed to assess the cause of these aftereffects. We replicated their original aftereffects in gloss but found no consistent aftereffect in lightness. We report that adaptation to zero-skew adaptors produced similar aftereffects as positively skewed adaptors, and that negatively skewed adaptors induced no reliable aftereffects. We further find that the adaptation effect observed with positively skewed adaptors is not robust to changes in mean luminance that diminish the intensity of the luminance extrema. Finally, we show that adaptation to positive skew reduces (rather than increases) the apparent lightness of light pigmentation on non-uniform albedo surfaces. These results challenge the view that the adaptation results reported by Motoyoshi et al. (2007) provide evidence that skew is explicitly computed by the visual system.

  2. Individual differences in loss aversion and preferences for skewed risks across adulthood.

    PubMed

    Seaman, Kendra L; Green, Mikella A; Shu, Stephen; Samanez-Larkin, Gregory R

    2018-06-01

    In a previous study, we found adult age differences in the tendency to accept more positively skewed gambles (with a small chance of a large win) than other equivalent risks, or an age-related positive-skew bias. In the present study, we examined whether loss aversion explained this bias. A total of 508 healthy participants (ages 21-82) completed measures of loss aversion and skew preference. Age was not related to loss aversion. Although loss aversion was a significant predictor of gamble acceptance, it did not influence the age-related positive-skew bias. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Lethal Temperature for Pinewood Nematode, Bursaphelenchus xylophilus, in Infested Wood Using Microwave Energy

    PubMed Central

    Hoover, Kelli; Uzunovic, Adnan; Gething, Brad; Dale, Angela; Leung, Karen; Ostiguy, Nancy; Janowiak, John J.

    2010-01-01

    To reduce the risks associated with global transport of wood infested with pinewood nematode Bursaphelenchus xylophilus, microwave irradiation was tested at 14 temperatures in replicated wood samples to determine the temperature that would kill 99.9968% of nematodes in a sample of ≥ 100,000 organisms, meeting a level of efficacy of Probit 9. Treatment of these heavily infested wood samples (mean of > 1,000 nematodes/g of sapwood) produced 100% mortality at 56 °C and above, held for 1 min. Because this “brute force” approach to Probit 9 treats individual nematodes as the observational unit regardless of the number of wood samples it takes to treat this number of organisms, we also used a modeling approach. The best fit was to a Probit function, which estimated lethal temperature at 62.2 (95% confidence interval 59.0-70.0) °C. This discrepancy between the observed and predicted temperature to achieve Probit 9 efficacy may have been the result of an inherently limited sample size when predicting the true mean from the total population. The rate of temperature increase in the small wood samples (rise time) did not affect final nematode mortality at 56 °C. In addition, microwave treatment of industrial size, infested wood blocks killed 100% of > 200,000 nematodes at ≥ 56 °C held for 1 min in replicated wood samples. The 3rd-stage juvenile (J3) of the nematode, that is resistant to cold temperatures and desiccation, was abundant in our wood samples and did not show any resistance to microwave treatment. Regression analysis of internal wood temperatures as a function of surface temperature produced a regression equation that could be used with a relatively high degree of accuracy to predict internal wood temperatures, under the conditions of this study. These results provide strong evidence of the ability of microwave treatment to successfully eradicate B. xylophilus in infested wood at or above 56 °C held for 1 min. PMID:22736846
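
    A sketch of the modeling approach: fit a probit dose-response curve to grouped mortality data and invert it at Probit 9 (99.9968% mortality, a standard normal quantile of about 4). The counts below are made up for illustration, not the study's data:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    temp   = np.array([50, 52, 54, 56, 58, 60, 62])           # °C, hypothetical
    killed = np.array([120, 410, 830, 985, 998, 1000, 1000])  # out of 1000 nematodes
    total  = np.full_like(killed, 1000)

    X = sm.add_constant(temp)
    fit = sm.GLM(np.column_stack([killed, total - killed]), X,
                 family=sm.families.Binomial(link=sm.families.links.Probit())).fit()

    b0, b1 = fit.params
    lt = (norm.ppf(0.999968) - b0) / b1  # temperature reaching Probit 9 mortality
    print(f"estimated Probit-9 temperature: {lt:.1f} °C")
    ```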

  4. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    PubMed Central

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
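
    A minimal generative sketch of the Poisson log-normal layer underlying the model (the paper's hierarchical structure and Lasso-penalized inference are not shown); a sparse precision matrix plays the role of the gene network:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 5, 200                  # genes, samples
    mu = rng.normal(2.0, 0.5, p)   # latent log-means

    prec = np.eye(p)               # sparse precision matrix = network edges
    prec[0, 1] = prec[1, 0] = 0.4  # one edge between genes 0 and 1

    z = rng.multivariate_normal(mu, np.linalg.inv(prec), size=n)  # latent Gaussian layer
    counts = rng.poisson(np.exp(z))  # RNA-seq-like overdispersed counts
    ```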

  5. Flow in Rotating Serpentine Coolant Passages With Skewed Trip Strips

    NASA Technical Reports Server (NTRS)

    Tse, David G.N.; Steuber, Gary

    1996-01-01

    Laser velocimetry was utilized to map the velocity field in serpentine turbine blade cooling passages with skewed trip strips. The measurements were obtained at Reynolds and Rotation numbers of 25,000 and 0.24, respectively, to assess the influence of trips, passage curvature, and Coriolis force on the flow field. The interaction of the secondary flows induced by skewed trips with the passage rotation produces a swirling vortex and a corner recirculation zone. With trips skewed at +45 deg, the secondary flows remain unaltered as the cross-flow proceeds from the passage to the turn. However, the flow characteristics at these locations differ when trips are skewed at -45 deg. Changes in the flow structure are expected to augment heat transfer, in agreement with the heat transfer measurements of Johnson et al. The present results show that trips skewed at -45 deg in the outward-flow passage and at +45 deg in the inward-flow passage maximize heat transfer. Details of the present measurements were related to the heat transfer measurements of Johnson et al. to connect the fluid flow and heat transfer behavior.

  6. Stochastic modelling of non-stationary financial assets

    NASA Astrophysics Data System (ADS)

    Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.

    2017-11-01

    We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of volume-price distributions which fit well the empirical data. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely, biology, medicine, and geology.
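
    A sketch of the first step, collecting the two log-normal parameter series from rolling windows of a (synthetic) volume-price series; deriving the Langevin equations for those series is not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    v = rng.lognormal(10.0, 0.5, 2000) * (1 + 0.2 * np.sin(np.arange(2000) / 200.0))
    logv = np.log(v)

    w = 250  # window length (observations), an arbitrary choice
    mu_t  = np.array([logv[i:i + w].mean() for i in range(logv.size - w)])
    sig_t = np.array([logv[i:i + w].std(ddof=1) for i in range(logv.size - w)])
    ```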

  7. Migration plans of the rural populations of the Third World countries: a probit analysis of micro-level data from Asia, Africa, and Latin America.

    PubMed

    Mcdevitt, T M; Hawley, A H; Udry, J R; Gadalla, S; Leoprapai, B; Cardona, R

    1986-07-01

    This study 1) examines the extent to which a given set of microlevel factors has predictive value in different socioeconomic settings and 2) demonstrates the utility of a probit estimation technique in examining plans of rural populations to migrate. Data were collected in 1977-1979 in Thailand, Egypt, and Colombia, 3 countries which differ in culture, extent of urbanization, and proportion of labor force engaged in nonextractive industries. The researchers used identical questionnaires and obtained interviews in 4 rural villages within the "migration shed" of each country's capital city. There were 1088 rural-resident men and women interviewed in Thailand, 1088 in Colombia, and 1376 in Egypt. The researchers gathered information about year-to-year changes in residence, marital status, fertility, housing, employment status, occupation, and industry. While in all 3 countries return moves are relatively frequent, especially among males, the proportions of migrants who have moved 3 or more times do not rise above 10%. The model used portrays the formation of migration intentions of the individual as the outcome of a decision process involving the subjective weighing of perceived differentials in well-being associated with current residence and 1 or more potential destinations, taking into account the direct relocation costs and ability to finance a move. The researchers used dichotomous probit and ordinal probit techniques and 4 variations on the dependent variable to generate some of the results. The only expectancy variable significant in all countries is age. Education is also positively and significantly associated with intentions to move for both sexes in Colombia and Egypt. Marital status is a deterrent to migration plans for males in Colombia and both sexes in Egypt. Previous migration experience fails to show any significant relationship to propensity to move. Conclusions drawn from the data include: 1) the effects of age and economic status appear to increase, both in strength and significance, for males in countries as the likelihood of a move increases; and 2) the effect of the kin and friend contact variable in Colombia appears to be related to its usefulness in explaining the initial consideration of a move rather than the plans that carry a probability or certainty of implementation. The careful measurement of strength of migration intentions and the application of ordinal probit estimation methods to the analysis of prospective migration may contribute to the refinement of our understanding of the process of migration decision making across a range of geographical, cultural, and developmental contexts.

  8. Regional skew for California, and flood frequency for selected sites in the Sacramento-San Joaquin River Basin, based on data through water year 2006

    USGS Publications Warehouse

    Parrett, Charles; Veilleux, Andrea; Stedinger, J.R.; Barth, N.A.; Knifong, Donna L.; Ferris, J.C.

    2011-01-01

    Improved flood-frequency information is important throughout California in general and in the Sacramento-San Joaquin River Basin in particular, because of an extensive network of flood-control levees and the risk of catastrophic flooding. A key first step in updating flood-frequency information is determining regional skew. A Bayesian generalized least squares (GLS) regression method was used to derive a regional-skew model based on annual peak-discharge data for 158 long-term (30 or more years of record) stations throughout most of California. The desert areas in southeastern California had too few long-term stations to reliably determine regional skew for that hydrologically distinct region; therefore, the desert areas were excluded from the regional skew analysis for California. Of the 158 long-term stations used to determine regional skew, 145 have minimally regulated annual-peak discharges, and 13 stations are dam sites for which unregulated peak discharges were estimated from unregulated daily maximum discharge data furnished by the U.S. Army Corps of Engineers. Station skew was determined by using an expected moments algorithm (EMA) program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual peak-discharge data. The Bayesian GLS regression method previously developed was modified because of the large cross correlations among concurrent recorded peak discharges in California and the use of censored data and historical flood information with the new expected moments algorithm. In particular, to properly account for these cross-correlation problems and develop a suitable regression model and regression diagnostics, a combination of Bayesian weighted least squares and generalized least squares regression was adopted. This new methodology identified a nonlinear function relating regional skew to mean basin elevation. The regional skew values ranged from -0.62 for a mean basin elevation of zero to 0.61 for a mean basin elevation of 11,000 feet. This relation between skew and elevation reflects the interaction of snow with rain, which increases with increased elevation. The equivalent record length for the new regional skew ranges from 52 to 65 years of record, depending upon mean basin elevation. The old regional skew map in Bulletin 17B, published by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data (1982), reported an equivalent record length of only 17 years. The newly developed regional skew relation for California was used to update flood frequency for the 158 sites used in the regional skew analysis as well as 206 selected sites in the Sacramento-San Joaquin River Basin. For these sites, annual-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years were determined on the basis of data through water year 2006. The expected moments algorithm was used for determining the magnitude and frequency of floods at gaged sites by using regional skew values and using the basic approach outlined in Bulletin 17B.
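
    For context, a sketch of the inverse-MSE weighting that Bulletin 17B uses to combine a station skew with a regional skew; the input values below are hypothetical:

    ```python
    def weighted_skew(g_station, mse_station, g_regional, mse_regional):
        """Bulletin 17B-style weighted skew: inverse-MSE average of the two estimates."""
        return (mse_regional * g_station + mse_station * g_regional) / \
               (mse_regional + mse_station)

    # hypothetical values: station skew from EMA, regional skew from the elevation model
    print(weighted_skew(g_station=-0.10, mse_station=0.30,
                        g_regional=0.25, mse_regional=0.10))
    ```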

  9. Box-Cox transformation of firm size data in statistical analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-03-01

    Firm size data usually do not show the normality that is often assumed in statistical analysis such as regression analysis. In this study we focus on two firm size data: the number of employees and sale. Those data deviate considerably from a normal distribution. To improve the normality of those data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm size data transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sale have the similar property as a firm size indicator. The Box-Cox parameters obtained for the firm size data are found to be very close to zero. In this case the Box-Cox transformations are approximately a log-transformation. This suggests that the firm size data we used are approximately log-normal distributions.

  10. Probing star formation relations of mergers and normal galaxies across the CO ladder

    NASA Astrophysics Data System (ADS)

    Greve, Thomas R.

    We examine integrated luminosity relations between the IR continuum and the CO rotational ladder observed for local (ultra) luminous infrared galaxies ((U)LIRGs, L_IR ≥ 10^11 L_⊙) and normal star-forming galaxies in the context of the radiation-pressure-regulated star formation proposed by Andrews & Thompson (2011). This can account for the normalization and linear slopes of the luminosity relations (log L_IR = α log L'_CO + β) of both low- and high-J CO lines observed for normal galaxies. Super-linear slopes occur for galaxy samples with significantly different dense gas fractions. Local (U)LIRGs are observed to have sub-linear high-J (J_up > 6) slopes or, equivalently, increasing L'_CO(high-J)/L_IR with L_IR. In the extreme ISM conditions of local (U)LIRGs, the high-J CO lines no longer trace individual hot spots of star formation (which gave rise to the linear slopes for normal galaxies) but a more widespread warm and dense gas phase mechanically heated by powerful supernova-driven turbulence and shocks.

  11. Contributions of Optical and Non-Optical Blur to Variation in Visual Acuity

    PubMed Central

    McAnany, J. Jason; Shahidi, Mahnaz; Applegate, Raymond A.; Zelkha, Ruth; Alexander, Kenneth R.

    2011-01-01

    Purpose To determine the relative contributions of optical and non-optical sources of intrinsic blur to variations in visual acuity (VA) among normally sighted subjects. Methods Best-corrected VA of sixteen normally sighted subjects was measured using briefly presented (59 ms) tumbling E optotypes that were either unblurred or blurred through convolution with Gaussian functions of different widths. A standard model of intrinsic blur was used to estimate each subject’s equivalent intrinsic blur (σint) and VA for the unblurred tumbling E (MAR0). For 14 subjects, a radially averaged optical point spread function due to higher-order aberrations was derived by Shack-Hartmann aberrometry and fit with a Gaussian function. The standard deviation of the best-fit Gaussian function defined optical blur (σopt). An index of non-optical blur (η) was defined as: 1-σopt/σint. A control experiment was conducted on 5 subjects to evaluate the effect of stimulus duration on MAR0 and σint. Results Log MAR0 for the briefly presented E was correlated significantly with log σint (r = 0.95, p < 0.01), consistent with previous work. However, log MAR0 was not correlated significantly with log σopt (r = 0.46, p = 0.11). For subjects with log MAR0 equivalent to approximately 20/20 or better, log MAR0 was independent of log η, whereas for subjects with larger log MAR0 values, log MAR0 was proportional to log η. The control experiment showed a statistically significant effect of stimulus duration on log MAR0 (p < 0.01) but a non-significant effect on σint (p = 0.13). Conclusions The relative contributions of optical and non-optical blur to VA varied among the subjects, and were related to the subject’s VA. Evaluating optical and non-optical blur may be useful for predicting changes in VA following procedures that improve the optics of the eye in patients with both optical and non-optical sources of VA loss. PMID:21460756

  12. A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.

    PubMed

    Rhiel, G Steven

    2007-02-01

    This study provides proof that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.

  13. Effects of modification of in vitro fertilization techniques on the sex ratio of the resultant bovine embryos.

    PubMed

    Iwata, H; Shiono, H; Kon, Y; Matsubara, K; Kimura, K; Kuwayama, T; Monji, Y

    2008-05-01

    The duration of sperm-oocyte co-incubation has been observed to affect the sex ratio of in vitro produced bovine embryos. The purpose of this study was to investigate some factors that may be responsible for the skewed sex ratio. The factors studied were selected combinations of the duration of co-incubation, the presence or absence of cumulus cells, and the level of hyaluronic acid (HA) in the culture medium. Experiment 1 examined the effect of selected combinations of different factors during the fertilization phase of in vitro oocyte culture. The factors were the nature of the sperm or its treatment, the duration of the sperm-oocyte co-incubation, and the level of hyaluronic acid in the culture medium. In experiment 2, the capacitation of frozen-thawed-Percoll-washed sperm (control), pre-incubated, and non-binding sperm was evaluated by the zona pellucida (ZP) binding assay and the hypo-osmotic swelling test (HOST). The purpose of experiment 3 was to determine the oocyte cleavage rate and sex ratio of the embryos (>5 cells) produced as a consequence of the 10 treatments used in experiment 1. In treatments 1-3 (experiments 1 and 3) COC were co-cultured with sperm for 1, 5 or 18 h. Polyspermic fertilization rose as the co-incubation period increased (1 h 6.5%, 5 h 15.9%, 18 h 41.8%; P<0.05), and the highest rate of normal fertilization was observed for 5h culture (73.4%; P<0.05). The sex ratio was significantly (P<0.05) skewed from the expected 50:50 towards males following 1 h (64.4%) and 5 h (67.3%) co-incubation, but was not affected by 18 h incubation (52.3%). In treatment 4, sperm was pre-incubated for 1h and cultured with COC for 5 h. Relative to control sperm, pre-incubation of sperm increased ZP binding (116 versus 180 per ZP; P<0.05) and decreased the proportion of HOST positive sperm (65.8-48.6%; P<0.05; experiment 2). Pre-incubation did not affect the rates of polyspermy, normal fertilization or the sex ratio of the embryos (experiments 1 and 3). The oocytes used in treatments 5-10 of experiments 1 and 3 were denuded prior to fertilization. Co-incubation of denuded oocytes for 1h (treatment 5) or 5h (treatment 6) resulted in levels of polyspermic fertilization similar to that for treatment 2 with significantly lower levels of normal fertilization (41.7% and 52.6%, respectively; P<0.05), and the 1h co-incubation significantly skewed (P<0.05) the proportion of male embryos to 70.0%. Denuded oocytes were fertilized for 5h with sperm unable to bind to cumulus cells (NB sperm) in treatment 7 or those that bound to cumulus cells (B) in treatment 8. These two treatments had similar rates of polyspermic, normal and non-fertilization. However, the B sperm caused the sex ratio of the embryos to be significantly skewed to males (63.9%; P<0.05). Fertilization of denuded oocytes in medium containing hyaluronic acid (0.1 mg/ml, treatment 9; 1.0 mg/ml treatment 10) significantly (P<0.05) reduced the incidence of polyspermic fertilization relative to treatments 2 and 6, and normal fertilization relative to treatment 2, but did not affect the sex ratio of the embryos. It was concluded that exposure of sperm to cumulus cells, either before fertilization of denuded oocytes or during the process of fertilization of complete COC, increased the proportion of male embryos produced by in vitro culture. It was hypothesized that this may be due to the capacitation state of the sperm, the cumulus-sperm interaction, and/or the ability of the sperm to bind to cumulus cells or oocytes.

  14. Accuracy of the Thermo Fisher Scientific (Sensititre™) dry-form broth microdilution MIC product when testing ceftaroline.

    PubMed

    Jones, Ronald N; Holliday, Nicole M; Critchley, Ian A

    2015-04-01

    Ceftaroline, the active metabolite of the ceftaroline fosamil pro-drug, was the first advanced-spectrum cephalosporin with potent activity against methicillin-resistant Staphylococcus aureus to be approved by the US Food and Drug Administration for acute bacterial skin and skin structure infections. After 4 years of clinical use, few ceftaroline commercial susceptibility testing devices other than agar diffusion methods (disks and stable gradient) are available. Here, we validate a broth microdilution product (Sensititre™; Thermo Fisher Scientific, Cleveland, OH, USA) that achieved 99.2% essential agreement (manual and automated reading) and 95.3-100.0% categorical agreement, with high reproducibility (98.0-100.0%). Sensititre™ MIC values for ceftaroline, however, were slightly skewed toward an elevated value (0.5 × log2 dilution step), greatest when testing for streptococci and Enterobacteriaceae. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Parsimonious nonstationary flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Serago, Jake M.; Vogel, Richard M.

    2018-02-01

    There is now widespread awareness of the impact of anthropogenic influences on extreme floods (and droughts) and thus an increasing need for methods to account for such influences when estimating a frequency distribution. We introduce a parsimonious approach to nonstationary flood frequency analysis (NFFA) based on a bivariate regression equation which describes the relationship between annual maximum floods, x, and an exogenous variable which may explain the nonstationary behavior of x. The conditional mean, variance and skewness of both x and y = ln (x) are derived, and combined with numerous common probability distributions including the lognormal, generalized extreme value and log Pearson type III models, resulting in a very simple and general approach to NFFA. Our approach offers several advantages over existing approaches including: parsimony, ease of use, graphical display, prediction intervals, and opportunities for uncertainty analysis. We introduce nonstationary probability plots and document how such plots can be used to assess the improved goodness of fit associated with a NFFA.
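
    A minimal sketch of the simplest (log-normal) case of this framework: regress y = ln(x) on the exogenous variable, then form a conditional quantile on the original scale. The flood series and covariate below are synthetic:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    years = np.arange(1950, 2020)
    cov = 0.01 * (years - years[0])  # exogenous variable, e.g. an urbanization index
    logq = 6.0 + 0.8 * cov + rng.normal(0, 0.4, years.size)  # synthetic ln(annual max flood)

    b1, b0 = np.polyfit(cov, logq, 1)         # conditional mean of y = ln(x)
    s = (logq - (b0 + b1 * cov)).std(ddof=2)  # residual standard deviation

    z = cov[-1]                               # condition on the latest covariate value
    q100 = np.exp(b0 + b1 * z + s * norm.ppf(0.99))  # conditional "100-year" flood
    ```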

  16. Regional regression equations to estimate peak-flow frequency at sites in North Dakota using data through 2009

    USGS Publications Warehouse

    Williams-Sether, Tara

    2015-08-06

    Annual peak-flow frequency data from 231 U.S. Geological Survey streamflow-gaging stations in North Dakota and parts of Montana, South Dakota, and Minnesota, with 10 or more years of unregulated peak-flow record, were used to develop regional regression equations for exceedance probabilities of 0.5, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 using generalized least-squares techniques. Updated peak-flow frequency estimates for 262 streamflow-gaging stations were developed using data through 2009 and log-Pearson Type III procedures outlined by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data. An average generalized skew coefficient was determined for three hydrologic zones in North Dakota. A StreamStats web application was developed to estimate basin characteristics for the regional regression equation analysis. Methods for estimating a weighted peak-flow frequency for gaged sites and ungaged sites are presented.

  17. Improvement of Reynolds-Stress and Triple-Product Lag Models

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.; Lillard, Randolph P.

    2017-01-01

    The Reynolds-stress and triple-product Lag models were created with a normal stress distribution defined by a 4:3:2 ratio of streamwise, spanwise, and wall-normal stresses, and a ratio of r_w = 0.3k in the log-layer region of high-Reynolds-number flat plate flow, which implies R11+ = 4/((9/2) × 0.3) ≈ 2.96. More recent measurements show a more complex picture of the log-layer region at high Reynolds numbers. A first cut at improving these models, along with directions for future refinements, is described. Comparison with recent high-Reynolds-number data shows areas where further work is needed, but also shows that inclusion of the modeled turbulent transport terms improves the prediction where they influence the solution. Additional work is needed to make the model better match experiment, but there is significant improvement in many of the details of the log-layer behavior.

  18. 210Po Log-normal distribution in human urines: Survey from Central Italy people

    PubMed Central

    Sisti, D.; Rocchi, M. B. L.; Meli, M. A.; Desideri, D.

    2009-01-01

    The death in London of the former secret service agent Alexander Livtinenko on 23 November 2006 generally attracted the attention of the public to the rather unknown radionuclide 210Po. This paper presents the results of a monitoring programme of 210Po background levels in the urines of noncontaminated people living in Central Italy (near the Republic of S. Marino). The relationship between age, sex, years of smoking, number of cigarettes per day, and 210Po concentration was also studied. The results indicated that the urinary 210Po concentration follows a surprisingly perfect Log-normal distribution. Log 210Po concentrations were positively correlated to age (p < 0.0001), number of daily smoked cigarettes (p = 0.006), and years of smoking (p = 0.021), and associated to sex (p = 0.019). Consequently, this study provides upper reference limits for each sub-group identified by significantly predictive variables. PMID:19750019

  19. Household income and preschool attendance in China.

    PubMed

    Gong, Xin; Xu, Di; Han, Wen-Jui

    2015-01-01

    This article draws upon the literature showing the benefits of high-quality preschools on child well-being to explore the role of household income on preschool attendance for a cohort of 3- to 6-year-olds in China using data from the China Health and Nutrition Survey, 1991-2006. Analyses are conducted separately for rural (N = 1,791) and urban (N = 633) settings. Estimates from a probit model with rich controls suggest a positive association between household income per capita and preschool attendance in both settings. A household fixed-effects model, conducted only on the rural sample, finds results similar to, although smaller than, those from the probit estimates. Policy recommendations are discussed. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.

  20. An improved probit method for assessment of domino effect to chemical process equipment caused by overpressure.

    PubMed

    Mingguang, Zhang; Juncheng, Jiang

    2008-10-30

    Overpressure is one important cause of domino effects in accidents involving chemical process equipment. Damage probability and the corresponding threshold value are two necessary parameters in QRA of this phenomenon. Some simple models had been proposed based on scarce data or oversimplified assumptions. Hence, more data on damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and damage degree of equipment was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements of the present models were demonstrated through comparison with other models in the literature, taking into account consistency between models and data and depth of quantitative treatment in QRA.
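
    The general form of such probit vulnerability models, sketched with illustrative coefficients (not the paper's fitted values): the probit variable Y is linear in the logarithm of peak static overpressure, and the damage probability is the standard normal CDF of Y - 5.

    ```python
    from math import log

    from scipy.stats import norm

    def damage_probability(overpressure_pa: float, k1: float, k2: float) -> float:
        """Generic probit damage model: Y = k1 + k2 * ln(P), probability = Phi(Y - 5)."""
        y = k1 + k2 * log(overpressure_pa)
        return norm.cdf(y - 5.0)

    # coefficients of a plausible order of magnitude, for illustration only
    print(damage_probability(30_000, k1=-18.96, k2=2.44))
    ```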

  1. Weak interactions, omnivory and emergent food-web properties.

    PubMed

    Emmerson, Mark; Yearsley, Jon M

    2004-02-22

    Empirical studies have shown that, in real ecosystems, species-interaction strengths are generally skewed in their distribution towards weak interactions. Some theoretical work also suggests that weak interactions, especially in omnivorous links, are important for the local stability of a community at equilibrium. However, the majority of theoretical studies use uniform distributions of interaction strengths to generate artificial communities for study. We investigate the effects of the underlying interaction-strength distribution upon the return time, permanence and feasibility of simple Lotka-Volterra equilibrium communities. We show that a skew towards weak interactions promotes local and global stability only when omnivory is present. It is found that skewed interaction strengths are an emergent property of stable omnivorous communities, and that this skew towards weak interactions creates a dynamic constraint maintaining omnivory. Omnivory is more likely to occur when omnivorous interactions are skewed towards weak interactions. However, a skew towards weak interactions increases the return time to equilibrium, delays the recovery of ecosystems and hence decreases the stability of a community. When no skew is imposed, the set of stable omnivorous communities shows an emergent distribution of skewed interaction strengths. Our results apply to both local and global concepts of stability and are robust to the definition of a feasible community. These results are discussed in the light of empirical data and other theoretical studies, in conjunction with their broader implications for community assembly.
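
    A May-style sketch of the kind of numerical experiment involved: build random community (Jacobian) matrices whose interaction strengths are drawn from either a skewed (exponential) or a uniform distribution with the same mean, and compare the fraction that are locally stable. This deliberately ignores the paper's omnivory modules and its permanence and feasibility criteria:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def locally_stable(skewed: bool, s: int = 20, c: float = 0.3) -> bool:
        """Random community matrix with connectance c; stable if all Re(eigenvalues) < 0."""
        a = np.zeros((s, s))
        mask = rng.random((s, s)) < c
        n = int(mask.sum())
        strengths = rng.exponential(0.5, n) if skewed else rng.uniform(0.0, 1.0, n)
        a[mask] = strengths * rng.choice([-1.0, 1.0], n)
        np.fill_diagonal(a, -1.0)  # self-regulation on the diagonal
        return np.linalg.eigvals(a).real.max() < 0

    for skewed in (True, False):
        frac = np.mean([locally_stable(skewed) for _ in range(200)])
        print(f"skewed={skewed}: stable fraction {frac:.2f}")
    ```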

  2. Sociality, mating system and reproductive skew in marmots: evidence and hypotheses.

    PubMed

    Allainé

    2000-10-05

    Marmot species exhibit a great diversity of social structure, mating systems and reproductive skew. In particular, among the social species (i.e. all except Marmota monax), the yellow-bellied marmot appears quite different from the others. The yellow-bellied marmot is primarily polygynous with an intermediate level of sociality and low reproductive skew among females. In contrast, all other social marmot species are mainly monogamous, highly social and with marked reproductive skew among females. To understand the evolution of this difference in reproductive skew, I examined four possible explanations identified from reproductive skew theory. From the literature, I then reviewed evidence to investigate if marmot species differ in: (1) the ability of dominants to control the reproduction of subordinates; (2) the degree of relatedness between group members; (3) the benefit for subordinates of remaining in the social group; and (4) the benefit for dominants of retaining subordinates. I found that the optimal skew hypothesis may apply for both sets of species. I suggest that yellow-bellied marmot females may benefit from retaining subordinate females and in return have to concede them reproduction. On the contrary, monogamous marmot species may gain by suppressing the reproduction of subordinate females to maximise the efficiency of social thermoregulation, even at the risk of departure of subordinate females from the family group. Finally, I discuss scenarios for the simultaneous evolution of sociality, monogamy and reproductive skew in marmots.

  3. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.

  4. New approach application of data transformation in mean centering of ratio spectra method

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.

    2015-05-01

    Most mean centering of ratio spectra (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one frequently used transformation. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
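
    The transformation in question, sketched on made-up positive data: the geometric mean is the back-transformed arithmetic mean of the logged values.

    ```python
    import numpy as np

    x = np.array([0.8, 1.1, 1.5, 2.3, 4.9, 9.6])  # made-up positively skewed ratios
    geo_mean = np.exp(np.log(x).mean())           # geometric mean via the log transform
    print(geo_mean, x.mean())                     # geometric mean < arithmetic mean here
    ```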

  5. G6PD deficiency from lyonization after hematopoietic stem cell transplantation from female heterozygous donors.

    PubMed

    Au, W-Y; Pang, A; Lam, K K Y; Song, Y-Q; Lee, W-M; So, J C C; Kwong, Y-L

    2007-10-01

    To determine whether during hematopoietic stem cell transplantation (HSCT), X-chromosome inactivation (lyonization) of donor HSC might change after engraftment in recipients, the glucose-6-phosphate dehydrogenase (G6PD) gene of 180 female donors was genotyped by PCR/allele-specific primer extension, and MALDI-TOF mass spectrometry/Sequenom MassARRAY analysis. X-inactivation was determined by semiquantitative PCR for the HUMARA gene before/after HpaII digestion. X-inactivation was preserved in most cases post-HSCT, although altered skewing of lyonization might occur to either of the X-chromosomes. Among pre-HSCT clinicopathologic parameters analyzed, only recipient gender significantly affected skewing. Seven donors with normal G6PD biochemically but heterozygous for G6PD mutants were identified. Owing to lyonization changes, some donor-recipient pairs showed significantly different G6PD levels. In one donor-recipient pair, extreme lyonization affecting the wild-type G6PD allele occurred, causing biochemical G6PD deficiency in the recipient. In HSCT from asymptomatic female donors heterozygous for X-linked recessive disorders, altered lyonization might cause clinical diseases in the recipients.

  6. Inference of median difference based on the Box-Cox model in randomized clinical trials.

    PubMed

    Maruo, K; Isogawa, N; Gosho, M

    2015-05-10

    In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Harvesting of males delays female breeding in a socially monogamous mammal; the beaver.

    PubMed

    Parker, Howard; Rosell, Frank; Mysterud, Atle

    2007-02-22

    Human exploitation may skew adult sex ratios in vertebrate populations to the extent that males become limiting for normal reproduction. In polygynous ungulates, females delay breeding in heavily harvested populations, but effects are often fairly small. We would expect a stronger effect of male harvesting in species with a monogamous mating system, but no such study has been performed. We analysed the effect of harvesting males on the timing of reproduction in the obligate monogamous beaver (Castor fiber). We found a negative impact of harvesting of adult males on the timing of parturition in female beavers. The proportion of normal breeders sank from over 80%, when no males had been shot in the territories of pregnant females, to under 20%, when three males had been shot. Harvesting of males in monogamous mammals can apparently affect their normal reproductive cycle.

  8. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    NASA Astrophysics Data System (ADS)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
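
    A sketch of the two transformations on stand-in data, with the Box-Cox λ chosen to minimize absolute skewness as the paper's criterion requires (SciPy's default boxcox instead maximizes likelihood); the curvature scale constant is arbitrary here, whereas the paper tunes it toward Gaussian kurtosis:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import boxcox, skew

    rng = np.random.default_rng(5)
    slope = rng.gamma(1.5, 4.0, 10_000)       # stand-in for long-tailed slope gradients (>0)
    curv  = rng.standard_t(3, 10_000) * 0.02  # stand-in for heavy-tailed plan curvature

    res = minimize_scalar(lambda lam: abs(skew(boxcox(slope, lam))),
                          bounds=(-2.0, 2.0), method="bounded")
    slope_t = boxcox(slope, res.x)            # skew-minimizing Box-Cox transform

    curv_t = np.arctan(curv / 0.01)           # arctangent transform, arbitrary scale 0.01
    ```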

  9. Determining prescription durations based on the parametric waiting time distribution.

    PubMed

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-12-01

    The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Technical note: An improved approach to determining background aerosol concentrations with PILS sampling on aircraft

    NASA Astrophysics Data System (ADS)

    Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.

    2016-07-01

    Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but also that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an overestimation of the background - and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m⁻³ for sub-micron aerosol nitrate (NO3⁻), nitrite (NO2⁻), ammonium (NH4⁺), sulfate (SO4²⁻), potassium (K⁺) and calcium (Ca²⁺), respectively. The difference between backgrounds calculated assuming a Gaussian distribution and those from a log-normal distribution was most extreme for NH4⁺, for which the Gaussian background was 1.58× that determined from fitting a log-normal distribution.
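
    The core of the background correction can be sketched as follows: fit a log-normal to the background samples and take its mode, exp(μ − σ²), as the background estimate instead of the arithmetic mean, which sits above the mode for a right-skewed distribution. The data below are synthetic, and fixing the location at zero is an assumption:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical background IC signals for one ion (arbitrary units)
    rng = np.random.default_rng(0)
    background = rng.lognormal(mean=-1.0, sigma=0.6, size=137)

    # Fit a two-parameter log-normal (location fixed at zero)
    sigma, _, scale = stats.lognorm.fit(background, floc=0)
    mu = np.log(scale)
    mode = np.exp(mu - sigma**2)   # peak of the log-normal density
    print(f"arithmetic mean = {background.mean():.3f}, "
          f"log-normal mode = {mode:.3f}")
    ```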

  11. Human respiratory syncytial virus load normalized by cell quantification as predictor of acute respiratory tract infection.

    PubMed

    Gómez-Novo, Miriam; Boga, José A; Álvarez-Argüelles, Marta E; Rojo-Alba, Susana; Fernández, Ana; Menéndez, María J; de Oña, María; Melón, Santiago

    2018-05-01

    Human respiratory syncytial virus (HRSV) is a common cause of respiratory infections. The main objective is to analyze the ability of HRSV viral load, normalized by cell number, to predict respiratory symptoms. A prospective, descriptive, and analytical study was performed. From 7307 respiratory samples processed between December 2014 and April 2016, 1019 HRSV-positive samples were included in this study. Lower respiratory tract infection (LRTI) was present in 729 patients (71.54%). Normalized HRSV load was calculated by quantification of the HRSV genome and the human β-globin gene, and expressed as log10 copies/1000 cells. HRSV mean loads were 4.09 ± 2.08 and 4.82 ± 2.09 log10 copies/1000 cells in the 549 pharyngeal and 470 nasopharyngeal samples, respectively (P < 0.001). The mean viral load was 4.81 ± 1.98 log10 copies/1000 cells for patients under the age of 4 years (P < 0.001). The mean viral loads were 4.51 ± 2.04 log10 copies/1000 cells in patients with LRTI and 4.22 ± 2.28 log10 copies/1000 cells in patients with upper respiratory tract infection or febrile syndrome (P < 0.05). A possible cut-off value to predict LRTI evolution was tentatively established. Normalization of viral load by cell number in the samples is essential for an optimal virological molecular diagnosis, preventing sample quality from affecting the results. A high viral load can be a useful marker to predict disease progression. © 2018 Wiley Periodicals, Inc.

  12. Earthquake fragility assessment of curved and skewed bridges in Mountain West region.

    DOT National Transportation Integrated Search

    2016-09-01

    Reinforced concrete (RC) bridges with both skew and curvature are common in areas with : complex terrains. Skewed and/or curved bridges were found in existing studies to exhibit more : complicated seismic performance than straight bridges, however th...

  13. Earthquake fragility assessment of curved and skewed bridges in Mountain West region : research brief.

    DOT National Transportation Integrated Search

    2016-09-01

    the ISSUE : the RESEARCH : Earthquake Fragility : Assessment of Curved : and Skewed Bridges in : Mountain West Region : Reinforced concrete bridges with both skew and curvature are common in areas with complex terrains. : These bridges are irregular ...

  14. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of poor precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
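
    A minimal sketch of one of the Pareto optimal schemes, the Box-Cox transformation with fixed λ = 0.2, applied to observed and simulated flows (the small offset and the function signature are illustrative assumptions, not the study's exact formulation):

    ```python
    import numpy as np

    def boxcox_residuals(q_obs, q_sim, lam=0.2):
        """Residuals after a Box-Cox transform, z = ((q + eps)^lam - 1) / lam.

        The transform compresses high flows so that residuals are closer to
        homoscedastic; lam = 0 (handled separately) is the log scheme.
        eps keeps zero flows finite in ephemeral catchments (an assumption).
        """
        eps = 0.01 * float(np.mean(q_obs))
        def z(q):
            return ((np.asarray(q, dtype=float) + eps) ** lam - 1.0) / lam
        return z(q_obs) - z(q_sim)
    ```

    A probabilistic prediction would then model these transformed residuals as Gaussian and back-transform sampled replicates to the flow scale.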

  15. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.

  16. Arc voltage distribution skewness as an indicator of electrode gap during vacuum arc remelting

    DOEpatents

    Williamson, Rodney L.; Zanner, Frank J.; Grose, Stephen M.

    1998-01-01

    The electrode gap of a VAR is monitored by determining the skewness of a distribution of gap voltage measurements. A decrease in skewness indicates an increase in gap and may be used to control the gap.
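
    A monitoring step of the kind the patent describes reduces to tracking the sample skewness of recent gap-voltage readings (the window length and any control rule built on it are assumptions):

    ```python
    import numpy as np
    from scipy.stats import skew

    def gap_skewness(voltages, window=2048):
        """Skewness of the most recent window of gap-voltage samples.

        In the patent's scheme a falling skewness signals a widening
        electrode gap, so a controller would feed the electrode downward.
        """
        return skew(np.asarray(voltages, dtype=float)[-window:])
    ```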

  17. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord [Eau Claire, WI; Cornett, Frank N [Chippewa Falls, WI

    2008-10-07

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.

  18. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord [Redwood Shores, CA; Cornett, Frank N [Chippewa Falls, WI

    2011-10-04

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.

  19. Geological entropy and solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Bianchi, Marco; Pedretti, Daniele

    2017-06-01

    We propose a novel approach to link solute transport behavior to the physical heterogeneity of the aquifer, which we fully characterize with two measurable parameters: the variance of the log K values (σY²), and a new indicator (HR) that integrates multiple properties of the K field into a global measure of spatial disorder or geological entropy. From the results of a detailed numerical experiment considering solute transport in K fields representing realistic distributions of hydrofacies in alluvial aquifers, we identify empirical relationships between the two parameters and the first three central moments of the distributions of arrival times of solute particles at a selected control plane. The analysis of the experimental data indicates that the mean and the variance of the solute arrival times tend to increase with spatial disorder (i.e., increasing HR), while highly skewed distributions are observed in more orderly structures (i.e., decreasing HR) or at higher σY². We found that simple closed-form empirical expressions of the bivariate dependency of skewness on HR and σY² can be used to predict the emergence of non-Fickian transport in K fields covering a range of structures and heterogeneity levels, some of which are based on documented real aquifers. The accuracy of these predictions, and in general the results from this study, indicate that a description of the global variability and structure of the K field in terms of variance and geological entropy offers a valid and broadly applicable approach for the interpretation and prediction of transport in heterogeneous porous media.

  20. Estimating the Magnitude and Frequency of Peak Streamflows for Ungaged Sites on Streams in Alaska and Conterminous Basins in Canada

    USGS Publications Warehouse

    Curran, Janet H.; Meyer, David F.; Tasker, Gary D.

    2003-01-01

    Estimates of the magnitude and frequency of peak streamflow are needed across Alaska for floodplain management, cost-effective design of floodway structures such as bridges and culverts, and other water-resource management issues. Peak-streamflow magnitudes for the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were computed for 301 streamflow-gaging and partial-record stations in Alaska and 60 stations in conterminous basins of Canada. Flows were analyzed from data through the 1999 water year using a log-Pearson Type III analysis. The State was divided into seven hydrologically distinct streamflow analysis regions for this analysis, in conjunction with a concurrent study of low and high flows. New generalized skew coefficients were developed for each region using station skew coefficients for stations with at least 25 years of systematic peak-streamflow data. Equations for estimating peak streamflows at ungaged locations were developed for Alaska and conterminous basins in Canada using a generalized least-squares regression model. A set of predictive equations for estimating the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year peak streamflows was developed for each streamflow analysis region from peak-streamflow magnitudes and physical and climatic basin characteristics. These equations may be used for unregulated streams without flow diversions, dams, periodically releasing glacial impoundments, or other streamflow conditions not correlated to basin characteristics. Basin characteristics should be obtained using methods similar to those used in this report to preserve the statistical integrity of the equations.
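
    For illustration, a station-skew-only log-Pearson Type III quantile estimate is sketched below; the report's procedure additionally weights the station skew with the regional generalized skew, which is omitted here, and the peak series is synthetic:

    ```python
    import numpy as np
    from scipy import stats

    def lp3_quantile(peaks, aep):
        """Peak flow with annual exceedance probability `aep` from an LP3 fit."""
        logq = np.log10(np.asarray(peaks, dtype=float))
        g = stats.skew(logq, bias=False)            # station skew coefficient
        lp3 = stats.pearson3(g, loc=logq.mean(), scale=logq.std(ddof=1))
        return 10.0 ** lp3.ppf(1.0 - aep)

    # Hypothetical 40-year annual peak series (cfs); 100-year flow has AEP 0.01
    rng = np.random.default_rng(3)
    peaks = 10.0 ** rng.normal(3.0, 0.25, size=40)
    print(f"Q100 ≈ {lp3_quantile(peaks, 0.01):,.0f} cfs")
    ```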

  1. Analysis of geophysical logs from six boreholes at Lariat Gulch, former U.S. Air Force site PJKS, Jefferson County, Colorado

    USGS Publications Warehouse

    Paillet, Frederick L.; Hodges, Richard E.; Corland, Barbara S.

    2002-01-01

    This report presents and describes geophysical logs for six boreholes in Lariat Gulch, a topographic gulch at the former U.S. Air Force site PJKS in Jefferson County near Denver, Colorado. Geophysical logs include gamma, normal resistivity, fluid-column temperature and resistivity, caliper, televiewer, and heat-pulse flowmeter. These logs were run in two boreholes penetrating only the Fountain Formation of Pennsylvanian and Permian age (logged to depths of about 65 and 570 feet) and in four boreholes (logged to depths of about 342 to 742 feet) penetrating mostly the Fountain Formation and terminating in Precambrian crystalline rock, which underlies the Fountain Formation. Data from the logs were used to identify fractures and bedding planes and to locate the contact between the two formations. The logs indicated few fractures in the boreholes and gave no indication of higher transmissivity in the contact zone between the two formations. Transmissivities for all fractures in each borehole were estimated to be less than 2 feet squared per day.

  2. New approaches to probing Minkowski functionals

    NASA Astrophysics Data System (ADS)

    Munshi, D.; Smidt, J.; Cooray, A.; Renzi, A.; Heavens, A.; Coles, P.

    2013-10-01

    We generalize the concept of the ordinary skew-spectrum to probe the effect of non-Gaussianity on the morphology of cosmic microwave background (CMB) maps in several domains: in real space (where they are commonly known as cumulant-correlators), and in harmonic and needlet bases. The essential aim is to retain more information than is normally contained in these statistics, in order to assist in determining the source of any measured non-Gaussianity, in the same spirit in which the Munshi & Heavens skew-spectra were used to identify foreground contaminants to the CMB bispectrum in Planck data. Using a perturbative series to construct the Minkowski functionals (MFs), we provide a pseudo-Cℓ based approach in both harmonic and needlet representations to estimate these spectra in the presence of a mask and inhomogeneous noise. Assuming homogeneous noise, we present approximate expressions for the error covariance for the purpose of joint estimation of these spectra. We present specific results for four different models of primordial non-Gaussianity (local, equilateral, orthogonal and enfolded), as well as non-Gaussianity caused by unsubtracted point sources. Closed-form results for next-order corrections to the MFs are also obtained in terms of a quadruplet of kurt-spectra. We also use the method of modal decomposition of the bispectrum and trispectrum to reconstruct the MFs as an alternative method of reconstructing the morphological properties of CMB maps. Finally, we introduce the odd-parity skew-spectra to probe the odd-parity bispectrum and its impact on the morphology of the CMB sky. Although developed for the CMB, the generic results obtained here can be useful in other areas of cosmology.

  3. Study on compensation algorithm of head skew in hard disk drives

    NASA Astrophysics Data System (ADS)

    Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan

    2011-10-01

    In hard disk drives (HDDs), head skew among multiple heads is pre-calibrated during the manufacturing process. In real-world, high-capacity applications, the head stack may tilt due to environmental changes, introducing additional head-skew errors from the outer diameter (OD) to the inner diameter (ID). When these errors remain below the preset threshold for power-on recalibration, the current strategy does not detect them, and drive performance degrades under severe environments. In this paper, in-the-field compensation of small DC head-skew variation across the stroke is proposed, implemented with a zone table. Test results demonstrate its effectiveness in reducing observer error and enhancing drive performance through accurate prediction of DC head skew.

  4. Asymmetric skew Bessel processes and their applications to finance

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; Goovaerts, Marc; Schoutens, Wim

    2006-02-01

    In this paper, we extend Harrison and Shepp's (1981) construction of the skew Brownian motion and obtain a diffusion similar to the two-dimensional Bessel process with speed and scale densities discontinuous at one point. Natural generalizations to multi-dimensional and fractional-order Bessel processes are then discussed, as well as invariance properties. We call this family of diffusions asymmetric skew Bessel processes, in contrast to the skew Bessel processes defined in Barlow et al. [On Walsh's Brownian motions, Seminaire de Probabilities XXIII, Lecture Notes in Mathematics, vol. 1372, Springer, Berlin, New York, 1989, pp. 275-293]. We present factorizations involving (asymmetric skew) Bessel processes with random time. Finally, applications to the valuation of perpetuities and Asian options are proposed.

  5. Optical clock distribution in supercomputers using polyimide-based waveguides

    NASA Astrophysics Data System (ADS)

    Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.

    1999-04-01

    Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimal clock skew. Si-CMOS-compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020, for high-speed massive clock-signal distribution on a Cray T-90 supercomputer board, has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. Surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides, which facilitates the integration of photodetectors to convert optical signals to electrical signals. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation; the theoretical result is in good agreement with experiment. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.

  6. Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock

    NASA Technical Reports Server (NTRS)

    Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.

    2001-01-01

    Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution P̄(log E) of the wave field E is power-law, with the bar denoting averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P̄(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the log-normal statistics predicted by SGT at each location.

  7. Arc voltage distribution skewness as an indicator of electrode gap during vacuum arc remelting

    DOEpatents

    Williamson, R.L.; Zanner, F.J.; Grose, S.M.

    1998-01-13

    The electrode gap of a VAR is monitored by determining the skewness of a distribution of gap voltage measurements. A decrease in skewness indicates an increase in gap and may be used to control the gap. 4 figs.

  8. Steel framing strategies for highly skewed bridges to reduce/eliminate distortion near skewed supports.

    DOT National Transportation Integrated Search

    2014-05-01

    Different problems in straight skewed steel I-girder bridges are often associated with the methods used for detailing the cross-frames. Use of theoretical terms to describe these detailing methods and absence of complete and simplified design approac...

  9. An "ASYMPTOTIC FRACTAL" Approach to the Morphology of Malignant Cell Nuclei

    NASA Astrophysics Data System (ADS)

    Landini, Gabriel; Rippin, John W.

    To investigate nuclear membrane irregularity quantitatively, 672 nuclei from 10 cases of oral cancer (squamous cell carcinoma) and normal cells from oral mucosa were studied in transmission electron micrographs. The nuclei were photographed at ×1400 magnification and transferred to computer memory (1 pixel = 35 nm). The perimeter of the profiles was analysed using the "yardstick method" of fractal dimension estimation, and the log-log plot of ruler size vs. boundary length demonstrated that there is a significant effect of resolution on length measurement. However, this effect seems to disappear at higher resolutions. As this observation is compatible with the concept of an asymptotic fractal, we estimated the parameters c, L and Bm from the asymptotic fractal formula Br = Bm [1 + (r/L)^c]^(-1), where Br is the boundary length measured with a ruler of size r, Bm is the maximum boundary length for r → 0, L is a constant, and c is the asymptotic fractal dimension minus the topological dimension (D − Dt) for r → ∞. Analyses of variance showed c to be significantly higher in the normal than in the malignant cases (P < 0.001), but log(L) and Bm to be significantly higher in the malignant cases (P < 0.001). A multivariate linear discriminant analysis on c, log(L) and Bm re-classified 76.6% of the cells correctly (84.8% of the normal and 67.5% of the tumor cells). This shows that asymptotic fractal analysis applied to nuclear profiles has great potential for shape quantification in the diagnosis of oral cancer.
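
    Fitting the asymptotic fractal formula to yardstick data is a small nonlinear least-squares problem (synthetic data; the starting values are assumptions):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def asymptotic_fractal(r, Bm, L, c):
        """Asymptotic fractal boundary length: B(r) = Bm / (1 + (r/L)**c)."""
        return Bm / (1.0 + (r / L) ** c)

    # Synthetic yardstick measurements: ruler size r (nm) vs boundary length
    rng = np.random.default_rng(7)
    r = np.geomspace(35.0, 2240.0, 8)
    B = (asymptotic_fractal(r, 100.0, 500.0, 0.35)
         * (1.0 + 0.01 * rng.standard_normal(r.size)))

    (Bm, L, c), _ = curve_fit(asymptotic_fractal, r, B, p0=[90.0, 400.0, 0.5])
    print(f"Bm = {Bm:.1f}, L = {L:.0f}, c = {c:.2f}  (so D ≈ Dt + c)")
    ```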

  10. A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.

    2018-04-01

    Weak atmospheric turbulence conditions in optical wireless communication (OWC) are captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of the BER for various modulation schemes is carried out using the proposed expression. The results obtained using the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
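
    For reference, the integral that such closed forms approximate can be evaluated numerically by Gauss-Hermite quadrature: the sketch below averages the Q-function over unit-mean log-normal irradiance for an intensity-modulated on-off-keying link (the SNR parameterization is an illustrative assumption, not the paper's proposed expression):

    ```python
    import numpy as np
    from scipy.special import erfc

    def q_func(x):
        """Gaussian Q-function."""
        return 0.5 * erfc(x / np.sqrt(2.0))

    def avg_ber_lognormal(snr, sigma_x, n=20):
        """E[Q(sqrt(snr) * I)] with ln I ~ N(-2*sigma_x**2, (2*sigma_x)**2),
        i.e. unit-mean irradiance, via n-point Gauss-Hermite quadrature."""
        t, w = np.polynomial.hermite.hermgauss(n)
        s = 2.0 * sigma_x                      # standard deviation of ln I
        irradiance = np.exp(-0.5 * s**2 + np.sqrt(2.0) * s * t)
        return np.sum(w * q_func(np.sqrt(snr) * irradiance)) / np.sqrt(np.pi)

    print(avg_ber_lognormal(snr=100.0, sigma_x=0.25))
    ```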

  11. Optical and physical properties of stratospheric aerosols from balloon measurements in the visible and near-infrared domains. I. Analysis of aerosol extinction spectra from the AMON and SALOMON balloonborne spectrometers

    NASA Astrophysics Data System (ADS)

    Berthet, Gwenaël; Renard, Jean-Baptiste; Brogniez, Colette; Robert, Claude; Chartier, Michel; Pirre, Michel

    2002-12-01

    Aerosol extinction coefficients have been derived in the 375-700-nm spectral domain from measurements in the stratosphere since 1992, at night, at mid- and high latitudes from 15 to 40 km, by two balloonborne spectrometers, Absorption par les Minoritaires Ozone et NOx (AMON) and Spectroscopie d'Absorption Lunaire pour l'Observation des Minoritaires Ozone et NOx (SALOMON). Log-normal size distributions associated with the Mie-computed extinction spectra that best fit the measurements permit calculation of integrated properties of the distributions. Although measured extinction spectra that correspond to background aerosols can be reproduced by the Mie scattering model by use of monomodal log-normal size distributions, each flight reveals some large discrepancies between measurement and theory at several altitudes. The agreement between measured and Mie-calculated extinction spectra is significantly improved by use of bimodal log-normal distributions. Nevertheless, neither monomodal nor bimodal distributions permit correct reproduction of some of the measured extinction shapes, especially for the 26 February 1997 AMON flight, which exhibited spectral behavior attributed to particles from a polar stratospheric cloud event.

  12. Log-Normality and Multifractal Analysis of Flame Surface Statistics

    NASA Astrophysics Data System (ADS)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2013-11-01

    The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are experimentally obtained from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at corresponding area-ratio pdfs. Both pdfs are found to be nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame-length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis. Currently at Indian Institute of Science, India.

  13. [Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].

    PubMed

    Zhu, Chun; Zhang, Xu

    2010-10-01

    Vehicle emission is one of the main sources of fine/ultra-fine particles in many cities. This study first presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, from four days of consecutive real-world measurements in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses are obtained by MLR methods; the particle distributions of diesel buses and CNG buses are observed as a single accumulation mode and a nuclei mode, respectively. Particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30 min interval mean scan; the degrees of fit between the combined fitting curves and the corresponding in-situ scans, for 90 fitted scans in total, range from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified by statistical whisker-box charts. For the log-normal particle size distribution of diesel buses, accumulation-mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.30.
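
    The two-mode decomposition can be sketched with nonlinear least squares on a measured dN/dlogDp scan (synthetic data; all parameter values are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lognormal_mode(d, n_tot, d_g, gsd):
        """One log-normal mode of a number size distribution dN/dlogDp."""
        return (n_tot / (np.log10(gsd) * np.sqrt(2.0 * np.pi))
                * np.exp(-(np.log10(d) - np.log10(d_g)) ** 2
                         / (2.0 * np.log10(gsd) ** 2)))

    def two_modes(d, n1, d1, g1, n2, d2, g2):
        """Nuclei mode (CNG buses) plus accumulation mode (diesel buses)."""
        return lognormal_mode(d, n1, d1, g1) + lognormal_mode(d, n2, d2, g2)

    # Synthetic scan: diameters (nm) and dN/dlogDp (cm^-3)
    d = np.geomspace(10.0, 400.0, 60)
    y = two_modes(d, 8e3, 21.0, 1.28, 5e3, 80.0, 1.95)
    popt, _ = curve_fit(two_modes, d, y, p0=[5e3, 20.0, 1.3, 5e3, 75.0, 1.9])
    print(popt)   # recovers the nuclei- and accumulation-mode parameters
    ```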

  14. A Spatial Probit Econometric Model of Land Change: The Case of Infrastructure Development in Western Amazonia, Peru

    PubMed Central

    Arima, E. Y.

    2016-01-01

    Tropical forests are now at the center stage of climate mitigation policies worldwide given their roles as sources of carbon emissions resulting from deforestation and forest degradation. Although the international community has created mechanisms such as REDD+ to reduce those emissions, developing tropical countries continue to invest in infrastructure development in an effort to spur economic growth. Construction of roads in particular is known to be an important driver of deforestation. This article simulates the impact of road construction on deforestation in Western Amazonia, Peru, and quantifies the amount of carbon emissions associated with projected deforestation. To accomplish this objective, the article adopts a Bayesian probit land change model in which spatial dependencies are defined between regions or groups of pixels instead of between individual pixels, thereby reducing computational requirements. It also compares and contrasts the patterns of deforestation predicted by both spatial and non-spatial probit models. The spatial model replicates complex patterns of deforestation whereas the non-spatial model fails to do so. In terms of policy, both models suggest that road construction will increase deforestation by a modest amount, between 200 and 300 km². This translates into aboveground carbon emissions of 1.36 and 1.85 × 10⁶ tons. However, recent introduction of palm oil in the region serves as a cautionary example that the models may be underestimating the impact of roads. PMID:27010739

  15. A Spatial Probit Econometric Model of Land Change: The Case of Infrastructure Development in Western Amazonia, Peru.

    PubMed

    Arima, E Y

    2016-01-01

    Tropical forests are now at the center stage of climate mitigation policies worldwide given their roles as sources of carbon emissions resulting from deforestation and forest degradation. Although the international community has created mechanisms such as REDD+ to reduce those emissions, developing tropical countries continue to invest in infrastructure development in an effort to spur economic growth. Construction of roads in particular is known to be an important driver of deforestation. This article simulates the impact of road construction on deforestation in Western Amazonia, Peru, and quantifies the amount of carbon emissions associated with projected deforestation. To accomplish this objective, the article adopts a Bayesian probit land change model in which spatial dependencies are defined between regions or groups of pixels instead of between individual pixels, thereby reducing computational requirements. It also compares and contrasts the patterns of deforestation predicted by both spatial and non-spatial probit models. The spatial model replicates complex patterns of deforestation whereas the non-spatial model fails to do so. In terms of policy, both models suggest that road construction will increase deforestation by a modest amount, between 200 and 300 km². This translates into aboveground carbon emissions of 1.36 and 1.85 × 10⁶ tons. However, recent introduction of palm oil in the region serves as a cautionary example that the models may be underestimating the impact of roads.

  16. Determining inert content in coal dust/rock dust mixture

    DOEpatents

    Sapko, Michael J.; Ward, Jr., Jack A.

    1989-01-01

    A method and apparatus for determining the inert content of a coal dust and rock dust mixture uses a transparent window pressed against the mixture. An infrared light beam is directed through the window such that a portion of the infrared light beam is reflected from the mixture. The concentration of the reflected light is detected and a signal indicative of the reflected light is generated. A normalized value for the generated signal is determined according to the relationship φ = (log i_c − log i_c0) / (log i_c100 − log i_c0), where i_c0 is the measured signal at 0% rock dust, i_c100 is the measured signal at 100% rock dust, and i_c is the measured signal of the mixture. This normalized value is then correlated to a predetermined relationship of φ to rock dust percentage to determine the rock dust content of the mixture. The rock dust content is displayed where the percentage is between 30 and 100%, and an indication of out-of-range is displayed where the rock dust percentage is less than 30%. Preferably, the rock dust percentage (RD%) is calculated from the predetermined relationship RD% = 100 + 30 log φ. Where the dust mixture initially includes moisture, the dust mixture is dried before measuring by use of 8 to 12 mesh molecular sieves, which are shaken with the dust mixture and subsequently screened from it.
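
    In code, the patent's normalization and correlation steps reduce to a few lines (the signal values below are hypothetical):

    ```python
    import math

    def rock_dust_percent(i_c, i_c0, i_c100):
        """Rock-dust content from reflected-IR signals, per the patent:
        phi = (log i_c - log i_c0) / (log i_c100 - log i_c0)
        RD% = 100 + 30 log phi, reported only in the 30-100% range."""
        phi = ((math.log10(i_c) - math.log10(i_c0))
               / (math.log10(i_c100) - math.log10(i_c0)))
        rd = 100.0 + 30.0 * math.log10(phi)
        return rd if rd >= 30.0 else None    # None signals out-of-range

    # Hypothetical calibration signals (0% and 100% rock dust) and a mixture
    print(rock_dust_percent(i_c=52.0, i_c0=10.0, i_c100=80.0))  # ≈ 97%
    ```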

  17. Conditional estimates of the number of podiform chromite deposits

    USGS Publications Warehouse

    Singer, D.A.

    1994-01-01

    A desirable guide for estimating the number of undiscovered mineral deposits is the number of known deposits per unit area from another well-explored permissive terrain. An analysis of the distribution of 805 podiform chromite deposits among ultramafic rocks in 12 subareas of Oregon and 27 counties of California is used to examine and extend this guide. The average number of deposits in this sample of 39 areas is 0.225 deposits per km² of ultramafic rock; the frequency distribution is significantly skewed to the right. Probabilistic estimates can be made by using the observation that the lognormal distribution fits the distribution of deposits per unit area. A further improvement in the estimates is available by using the relationship between the area of ultramafic rock and the number of deposits. The number (N) of exposed podiform chromite deposits can be estimated by the following relationship: log10(N) = -0.194 + 0.577 log10(area of ultramafic rock). The slope is significantly different from both 0.0 and 1.0. Because the slope is less than 1.0, the ratio of deposits to area of permissive rock is a biased estimator when the area of ultramafic rock is different from the median 93 km². Unbiased estimates of the number of podiform chromite deposits can be made with the regression equation and 80 percent confidence limits presented herein. © 1994 Oxford University Press.
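
    The regression and its scale dependence are easy to apply directly (a sketch; the report's 80 percent confidence limits are omitted):

    ```python
    import math

    def expected_chromite_deposits(area_km2):
        """Expected number of exposed podiform chromite deposits:
        log10(N) = -0.194 + 0.577 * log10(area of ultramafic rock, km^2)."""
        return 10.0 ** (-0.194 + 0.577 * math.log10(area_km2))

    # Because the slope is < 1, deposits per km^2 fall as the area grows
    for area in (10.0, 93.0, 1000.0):
        n = expected_chromite_deposits(area)
        print(f"{area:7.0f} km^2 -> {n:5.1f} deposits ({n / area:.3f} per km^2)")
    ```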

  18. Labeling Defects in CT Images of Hardwood Logs with Species-Dependent and Species-Independent Classifiers

    Treesearch

    Pei Li; Jing He; A. Lynn Abbott; Daniel L. Schmoldt

    1996-01-01

    This paper analyses computed tomography (CT) images of hardwood logs, with the goal of locating internal defects. The ability to detect and identify defects automatically is a critical component of efficiency improvements for future sawmills and veneer mills. This paper describes an approach in which 1) histogram equalization is used during preprocessing to normalize...

  19. Mean estimation in highly skewed samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, S P

    The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.

  20. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
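
    Under the equal-variance binormal case described in the abstract, the predicted c-statistic is a one-line function of the explanatory variable's standard deviation and its log-odds ratio (a sketch of the stated relationship; the notation is ours):

    ```python
    from math import sqrt
    from scipy.stats import norm

    def predicted_c(sd_x, log_or):
        """Predicted c-statistic: Phi(sd_x * log_or / sqrt(2))."""
        return norm.cdf(sd_x * log_or / sqrt(2.0))

    print(predicted_c(sd_x=1.0, log_or=1.0))   # ≈ 0.76
    ```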

  1. Monocular oral reading after treatment of dense congenital unilateral cataract

    PubMed Central

    Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.

    2010-01-01

    Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057

  2. Measuring Skewness: A Forgotten Statistic?

    ERIC Educational Resources Information Center

    Doane, David P.; Seward, Lori E.

    2011-01-01

    This paper discusses common approaches to presenting the topic of skewness in the classroom, and explains why students need to know how to measure it. Two skewness statistics are examined: the Fisher-Pearson standardized third moment coefficient, and the Pearson 2 coefficient that compares the mean and median. The former is reported in statistical…
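
    The two coefficients are straightforward to compute side by side (a sketch; the exponential sample is arbitrary, and using the population standard deviation in the third-moment version is a convention choice):

    ```python
    import numpy as np

    def fisher_pearson_g1(x):
        """Fisher-Pearson standardized third-moment skewness coefficient."""
        x = np.asarray(x, dtype=float)
        return np.mean((x - x.mean()) ** 3) / x.std() ** 3

    def pearson_2(x):
        """Pearson 2 coefficient: 3 * (mean - median) / standard deviation."""
        x = np.asarray(x, dtype=float)
        return 3.0 * (x.mean() - np.median(x)) / x.std(ddof=1)

    data = np.random.default_rng(5).exponential(scale=2.0, size=1000)
    print(fisher_pearson_g1(data), pearson_2(data))
    ```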

  3. Learning a Novel Pattern through Balanced and Skewed Input

    ERIC Educational Resources Information Center

    McDonough, Kim; Trofimovich, Pavel

    2013-01-01

    This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…

  4. Numerical solution for the velocity-derivative skewness of a low-Reynolds-number decaying Navier-Stokes flow

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1990-01-01

    The variation of the velocity-derivative skewness of a Navier-Stokes flow as the Reynolds number goes toward zero is calculated numerically. The value of the skewness, which has been somewhat controversial, is shown to become small at low Reynolds numbers.

  5. Investigation of free vibration characteristics for skew multiphase magneto-electro-elastic plate

    NASA Astrophysics Data System (ADS)

    Kiran, M. C.; Kattimani, S.

    2018-04-01

    This article presents an investigation of skew multiphase magneto-electro-elastic (MMEE) plates to assess their free vibration characteristics. A finite element (FE) model is formulated considering the different couplings involved via coupled constitutive equations. Transformation matrices are derived to transform local degrees of freedom into global degrees of freedom for the nodes lying on the skew edges. The effect of different volume fractions (Vf) on the free vibration behavior is explicitly studied. In addition, the influence of the width-to-thickness ratio, the aspect ratio, and the stacking arrangement on the natural frequencies of the skew multiphase MEE plate is investigated. Particular attention is paid to the effect of the skew angle on the non-dimensional eigenfrequencies of a multiphase MEE plate with simply supported edges.

  6. Skew information in the XY model with staggered Dzyaloshinskii-Moriya interaction

    NASA Astrophysics Data System (ADS)

    Qiu, Liang; Quan, Dongxiao; Pan, Fei; Liu, Zhi

    2017-06-01

    We study the performance of the lower bound of skew information in the vicinity of the transition point for the anisotropic spin-1/2 XY chain with staggered Dzyaloshinskii-Moriya interaction, using the quantum renormalization-group method. For a fixed value of the Dzyaloshinskii-Moriya interaction, there are two saturated values of the lower bound of skew information, corresponding to the spin-fluid and Néel phases, respectively. The scaling exponent of the lower bound of skew information is closely related to the correlation length of the model, and the Dzyaloshinskii-Moriya interaction shifts the factorization point. Our results show that the lower bound of skew information can be a good candidate for detecting the critical point of the XY spin chain with staggered Dzyaloshinskii-Moriya interaction.

  7. Diaper area skin microflora of normal children and children with atopic dermatitis.

    PubMed Central

    Keswick, B H; Seymour, J L; Milligan, M C

    1987-01-01

    In vitro studies established that neither cloth nor disposable diapers demonstrably contributed to the growth of Escherichia coli, Proteus vulgaris, Staphylococcus aureus, or Candida albicans when urine was present as a growth medium. In a clinical study of 166 children, the microbial skin flora of children with atopic dermatitis was compared with the flora of children with normal skin to determine the influence of diaper type. No biologically significant differences were detected between groups wearing disposable or cloth diapers in terms of frequency of isolation or log mean recovery of selected skin flora. Repeated isolation of S. aureus correlated with atopic dermatitis. The log mean recovery of S. aureus was higher in the atopic groups. The effects of each diaper type on skin microflora were equivalent in the normal and atopic populations. PMID:3546360

  8. Stick-slip behavior in a continuum-granular experiment.

    PubMed

    Geller, Drew A; Ecke, Robert E; Dahmen, Karin A; Backhaus, Scott

    2015-12-01

    We report moment distribution results from a laboratory experiment, similar in character to an isolated strike-slip earthquake fault, consisting of sheared elastic plates separated by a narrow gap filled with a two-dimensional granular medium. Local measurement of strain displacements of the plates at 203 spatial points located adjacent to the gap allows direct determination of the event moments and their spatial and temporal distributions. We show that events consist of spatially coherent, larger motions and spatially extended (noncoherent), smaller events. The noncoherent events have a probability distribution of event moment consistent with an M^(-3/2) power-law scaling with Poisson-distributed recurrence times. Coherent events have a log-normal moment distribution and mean temporal recurrence. As the applied normal pressure increases, there are more coherent events and their log-normal distribution broadens and shifts to larger average moment.

  9. PROBIT: A Probit Analysis Program for the DRES (Defence Research Establishment Suffield) Computer Facility,

    DTIC Science & Technology

    1986-07-01

    ... are also discussed. When iteration is terminated, the effective dose at the X percentile level, or EDX - that is, the dose at which X percent of subjects respond - can be determined, along with its lower and upper limits. The fiducial (Fieller) limits on x are computed on the log-dose scale and exponentiated to give the corresponding limits on the EDX.
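
    A modern sketch of the same kind of probit dose-response analysis (hypothetical data; the report's iterative fitting details and Fieller limits are replaced here by a GLM fit and point estimates only):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    # Hypothetical dose-mortality data: dose, number exposed, number responding
    dose = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    n = np.full(6, 50)
    dead = np.array([2, 8, 19, 33, 44, 48])

    # Probit regression of the response proportion on log10(dose)
    endog = np.column_stack([dead, n - dead])
    exog = sm.add_constant(np.log10(dose))
    fit = sm.GLM(endog, exog,
                 family=sm.families.Binomial(sm.families.links.Probit())).fit()
    b0, b1 = fit.params

    def ed(x_percent):
        """ED_x on the dose scale: solve b0 + b1*log10(dose) = probit(x/100)."""
        return 10.0 ** ((norm.ppf(x_percent / 100.0) - b0) / b1)

    print(f"ED50 ≈ {ed(50):.2f}, ED90 ≈ {ed(90):.2f}")
    ```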

  10. An Instrumental Variable Probit (IVP) analysis on depressed mood in Korea: the impact of gender differences and other socio-economic factors.

    PubMed

    Gitto, Lara; Noh, Yong-Hwan; Andrés, Antonio Rodríguez

    2015-04-16

    Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden, because of its strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reversed. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In fact, in spite of the great number of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea, and they were either limited to samples of employed women or did not control for individual health status. Moreover, as likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity; here, a depressed mood due to health factors that, in turn, might be caused by depression) might bias the results, the present study proposes an empirical approach, based on instrumental variables, to deal with this problem. Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751, of whom 43% were male and 57% female), aged from 19 to 75 years, were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio-economic factors (such as education, residence in metropolitan areas, and so on). As the results of the Wald test carried out after the estimations did not allow rejection of the null hypothesis of endogeneity, a probit model was run too. Overall, women tend to develop depression more frequently than men. There is an inverse effect of education on depressed mood (a 24.6% lower probability of reporting a depressed mood with high school education, as emerges from the probit model marginal effects), while marital status and the number of family members may act as protective factors (a 1.0% lower probability of reporting a depressed mood for each additional family member). Depression is significantly associated with socio-economic conditions, such as work and income. Living in metropolitan areas is inversely correlated with depression (a 4.1% lower probability of reporting a depressed mood, estimated through the probit model): this could be explained considering that, in rural areas, people rarely have immediate access to high-quality health services. This study outlines the factors that are most likely to impact depression, and applies an IVP model to take into account the potential endogeneity of some of the predictors of depressive mood, such as female participation in the workforce and health status. A probit model was estimated too. Depression is associated with a wide range of socio-economic factors, although the strength and direction of the association can differ by gender.
Prevention approaches to counter depressive symptoms might take into consideration the evidence offered by the present study. © 2015 by Kerman University of Medical Sciences.

  11. An Instrumental Variable Probit (IVP) analysis on depressed mood in Korea: the impact of gender differences and other socio-economic factors

    PubMed Central

    Gitto, Lara; Noh, Yong-Hwan; Andrés, Antonio Rodríguez

    2015-01-01

    Background: Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden, because of its strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reversed. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In fact, in spite of the great number of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea, and they were either limited to samples of employed women or did not control for individual health status. Moreover, as likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity; here, a depressed mood due to health factors that, in turn, might be caused by depression) might bias the results, the present study proposes an empirical approach, based on instrumental variables, to deal with this problem. Methods: Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751, of whom 43% were male and 57% female), aged from 19 to 75 years, were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio-economic factors (such as education, residence in metropolitan areas, and so on). As the results of the Wald test carried out after the estimations did not allow rejection of the null hypothesis of endogeneity, a probit model was run too. Results: Overall, women tend to develop depression more frequently than men. There is an inverse effect of education on depressed mood (a 24.6% lower probability of reporting a depressed mood with high school education, as emerges from the probit model marginal effects), while marital status and the number of family members may act as protective factors (a 1.0% lower probability of reporting a depressed mood for each additional family member). Depression is significantly associated with socio-economic conditions, such as work and income. Living in metropolitan areas is inversely correlated with depression (a 4.1% lower probability of reporting a depressed mood, estimated through the probit model): this could be explained considering that, in rural areas, people rarely have immediate access to high-quality health services. Conclusion: This study outlines the factors that are most likely to impact depression, and applies an IVP model to take into account the potential endogeneity of some of the predictors of depressive mood, such as female participation in the workforce and health status. A probit model was estimated too. Depression is associated with a wide range of socio-economic factors, although the strength and direction of the association can differ by gender.
Prevention approaches to counter depressive symptoms might take into consideration the evidence offered by the present study. PMID:26340392

  12. Statistical analysis of variability properties of the Kepler blazar W2R 1926+42

    NASA Astrophysics Data System (ADS)

    Li, Yutong; Hu, Shaoming; Wiita, Paul J.; Gupta, Alok C.

    2018-04-01

    We analyzed Kepler light curves of the blazar W2R 1926+42 that provided nearly continuous coverage from quarter 11 through quarter 17 (589 days between 2011 and 2013) and examined some of their flux variability properties. We investigate the possibility that the light curve is dominated by a large number of individual flares and adopt exponential rise and decay models to investigate the symmetry properties of the flares. We find that the variations of W2R 1926+42 are predominantly asymmetric, with a weak tendency toward positive asymmetry (rapid rise and slow decay). The durations (D) and amplitudes (F0) of the flares can be fit with log-normal distributions. The energy (E) of each flare is also estimated for the first time. There are positive correlations between log D and log E, with a slope of 1.36, and between log F0 and log E, with a slope of 1.12. Lomb-Scargle periodograms are used to estimate the power spectral density (PSD) shape. It is well described by a power law with an index ranging between -1.1 and -1.5. The sizes of the emission regions, R, are estimated to be in the range 1.1 × 10¹⁵ cm to 6.6 × 10¹⁶ cm. The flare asymmetry is difficult to explain by a light-travel-time effect but may be caused by differences between the timescales for acceleration and dissipation of high-energy particles in the relativistic jet. A jet-in-jet model could also produce the observed log-normal distributions.
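
    The PSD-slope step can be sketched with astropy's Lomb-Scargle implementation (the light curve below is a synthetic random walk, whose periodogram index is near -2; all parameters are illustrative):

    ```python
    import numpy as np
    from astropy.timeseries import LombScargle

    def psd_power_law_index(t, flux):
        """Least-squares power-law index of the Lomb-Scargle periodogram."""
        freq, power = LombScargle(t, flux).autopower(nyquist_factor=0.5)
        keep = (freq > 0) & (power > 0)
        slope, _ = np.polyfit(np.log10(freq[keep]), np.log10(power[keep]), 1)
        return slope

    # Synthetic unevenly sampled light curve over 589 days
    rng = np.random.default_rng(11)
    t = np.sort(rng.uniform(0.0, 589.0, 4000))
    flux = np.cumsum(rng.standard_normal(t.size))   # random-walk red noise
    print(psd_power_law_index(t, flux))
    ```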

  13. Performance-based seismic assessment of skewed bridges with and without considering soil-foundation interaction effects for various site classes

    NASA Astrophysics Data System (ADS)

    Ghotbi, Abdoul R.

    2014-09-01

    The seismic behavior of skewed bridges has not been well studied compared to that of straight bridges. Skewed bridges have shown extensive damage, especially due to deck rotation, shear-key failure, abutment unseating and column-bent drift. This research therefore aims to study the behavior of skewed and straight highway overpass bridges, both with and without taking into account the effects of Soil-Structure Interaction (SSI), under near-fault ground motions. Because of several sources of uncertainty associated with the ground motions, soil and structure, a probabilistic approach is needed. Thus, a probabilistic methodology similar to the one developed by the Pacific Earthquake Engineering Research Center (PEER) has been utilized to assess the probability of damage at various levels of shaking, using appropriate intensity measures with minimum dispersion. The probabilistic analyses were performed for various bridge configurations and site conditions, including sand ranging from loose to dense and clay ranging from soft to stiff, in order to evaluate the effects. The results showed a considerable susceptibility of skewed bridges to deck rotation and shear-key displacement. It was also found that SSI decreased the damage probability for various demands compared to the fixed-base model without SSI. However, deck rotation for all soil types, and abutment unseating for very loose sand and soft clay, showed an increase in damage probability compared to the fixed-base model. The damage probability for various demands was also found to decrease with increasing soil strength for both sandy and clayey sites. With respect to variations in the skew angle, an increase in skew angle amplified the seismic response for various demands. Deck rotation was very sensitive to the skew angle, increasing as the skew angle increased. Furthermore, abutment unseating showed an increasing trend with skew angle for both fixed-base and SSI models.

  14. Characterization of double diffusive convection step and heat budget in the deep Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Lu, Y.

    2013-12-01

    In this paper, we explore the hydrographic structure and heat budget of the deep Canada Basin using data measured with McLane Moored Profilers (MMPs), bottom pressure recorders (BPRs), and conductivity-temperature-depth (CTD) profilers. From the bottom upward, a homogeneous bottom layer and the double diffusive convection (DDC) steps overlying it are well identified at Mooring A (75°N, 150°W). We find that diapycnal mixing in the deep water is weak: the effective diffusivity of the bottom layer is ~1.8×10^-5 m^2 s^-1, while that of the other steps is ~10^-6 m^2 s^-1. The vertical heat flux through the DDC steps is evaluated with different methods. We find that the heat flux (0.1-11 mW m^-2) is much smaller than the geothermal heating (~50 mW m^-2), which suggests that the stack of DDC steps acts as a thermal barrier in the deep basin. Moreover, the temporal distributions of temperature and salinity differences across the interface are exponential, while those of heat flux and effective diffusivity are approximately log-normal; both are the result of strong intermittency. Between 2003 and 2011, temperature fluctuations close to the sea floor were asymmetrically distributed and skewed toward positive values, providing a direct indication that geothermal heat is transferred into the ocean. Both BPR and CTD data suggest that geothermal heating, not the warming of the upper ocean, is the dominant mechanism responsible for the warming of the deep water. As the DDC steps prevent vertical heat transfer, geothermal heating is unlikely to have a significant effect on the middle and upper ocean.
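
    The effective diffusivities quoted above can be backed out of the flux-gradient relation F = −ρ·c_p·K·∂T/∂z; a minimal sketch with illustrative numbers (not the mooring data):

        # Effective diffusivity from the flux-gradient relation F = -rho*cp*K*dT/dz.
        rho = 1028.0      # seawater density (kg m^-3), illustrative
        cp = 3985.0       # specific heat (J kg^-1 K^-1), illustrative
        F = 5e-3          # vertical heat flux (W m^-2), within the 0.1-11 mW m^-2 range
        dT_dz = -6e-5     # temperature gradient (K m^-1), illustrative

        K = -F / (rho * cp * dT_dz)
        print(f"effective diffusivity K ~ {K:.2e} m^2 s^-1")   # ~2e-5, same order as above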

  15. Differential release of manure-borne bioactive phosphorus forms to runoff and leachate under simulated rain.

    PubMed

    Blaustein, R A; Dao, Thanh H; Pachepsky, Y A; Shelton, D R

    2017-05-01

    Limited information exists on the unhindered release of bioactive phosphorus (P) from a manure layer, needed to model the partitioning and transport of component P forms before they reach the underlying soil. Rain simulations were conducted to quantify the effects of intensity (30, 60, and 90 mm h^-1) on P release from an application of 60 Mg ha^-1 of dairy manure. Runoff contained water-extractable P (WEP) as well as exchangeable and enzyme-labile bioactive P (TBIOP), in contrast to the operationally defined "dissolved-reactive P" form. The released P concentrations and flow-weighted mass loads were described by the log-normal probability density function. At a reference condition of 30 mm h^-1 with the surface maintained at a 5% incline, runoff was minimal, and WEP accounted for 20.9% of leached total P (TP) concentrations, with an additional 25-30% as exchangeable and enzyme-labile bioactive P over the 1-h simulation. On a 20% incline, increased intensity accelerated the occurrence of the concentration maximum and shifted the skewed P concentration distribution further to the left. Differences between intensities in the trends of WEP, TBIOP, or net enzyme-labile P (PHPo) cumulative mass released per unit mass of manure were attributable to the higher frequency of raindrops striking the manure layer, which increased detachment and the colloidal PHPo load of the water phases. Thus, detailed knowledge of manure physical characteristics, the distribution of bioactive P in relation to rain intensity, and the attainment of steady-state water fluxes are critical factors for improved prediction of the partitioning and movement of manure-borne P under rainfall. Published by Elsevier Ltd.

  16. Measuring Resistance to Change at the Within-Session Level

    PubMed Central

    Tonneau, François; Ríos, Américo; Cabrera, Felipe

    2006-01-01

    Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases more slowly than in another component. A problem with normalization, however, is that it can produce artifactual results if the relation between baseline level and disruption is not multiplicative. One way to address this issue is to fit specific models of disruption to untransformed response rates and evaluate whether or not a multiplicative model accounts for the data. Here we present such a test of resistance to change, using within-session response patterns in rats as a database for fitting models of disruption. By analyzing response rate at a within-session level, we were able to confirm a central prediction of the resistance-to-change framework while discarding normalization artifacts as a plausible explanation of our results. PMID:16903495

  17. Measuring resistance to change at the within-session level.

    PubMed

    Tonneau, François; Ríos, Américo; Cabrera, Felipe

    2006-07-01

    Resistance to change is often studied by measuring response rate in various components of a multiple schedule. Response rate in each component is normalized (that is, divided by its baseline level) and then log-transformed. Differential resistance to change is demonstrated if the normalized, log-transformed response rate in one component decreases more slowly than in another component. A problem with normalization, however, is that it can produce artifactual results if the relation between baseline level and disruption is not multiplicative. One way to address this issue is to fit specific models of disruption to untransformed response rates and evaluate whether or not a multiplicative model accounts for the data. Here we present such a test of resistance to change, using within-session response patterns in rats as a database for fitting models of disruption. By analyzing response rate at a within-session level, we were able to confirm a central prediction of the resistance-to-change framework while discarding normalization artifacts as a plausible explanation of our results.
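
    A minimal sketch of the model-comparison idea in this abstract: fit a multiplicative and an additive model of disruption to untransformed response rates and compare residuals; the one-parameter model forms and the rate values are assumptions for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        # Untransformed response rates: baseline (per component) and disrupted rates.
        baseline = np.array([20.0, 35.0, 55.0, 80.0])    # illustrative
        disrupted = np.array([9.5, 17.0, 28.0, 41.0])    # illustrative

        mult = lambda b, k: k * b      # multiplicative: disruption scales baseline
        addi = lambda b, c: b - c      # additive: disruption subtracts a constant

        for name, model in [("multiplicative", mult), ("additive", addi)]:
            p, _ = curve_fit(model, baseline, disrupted)
            sse = np.sum((disrupted - model(baseline, *p)) ** 2)
            print(f"{name}: parameter = {p[0]:.2f}, SSE = {sse:.1f}")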

  18. Hydrodynamic impeller stiffness, damping, and inertia in the rotordynamics of centrifugal flow pumps

    NASA Technical Reports Server (NTRS)

    Jery, S.; Acosta, A. J.; Brennen, C. E.; Caughey, T. K.

    1984-01-01

    The lateral hydrodynamic forces experienced by a centrifugal pump impeller performing circular whirl motions within several volute geometries were measured. The lateral forces were decomposed into: (1) time-averaged lateral forces and (2) hydrodynamic force matrices representing the variation of the lateral forces with position of the impeller center. It is found that these force matrices essentially consist of equal diagonal terms and skew-symmetric off-diagonal terms. One consequence of this is that during its whirl motion the impeller experiences forces acting normal and tangential to the locus of whirl. Data on these normal and tangential forces are presented; it is shown that there exists a region of positive reduced whirl frequencies within which the hydrodynamic forces can be destabilizing with respect to whirl.
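
    For a force matrix with equal diagonal terms k and skew-symmetric off-diagonal terms ±κ, a circular whirl orbit of radius ε yields a constant normal force kε and a constant tangential force −κε; a small numerical check (all values illustrative):

        import numpy as np

        k, kappa, eps = -2.0, 0.9, 1.0           # illustrative stiffness terms, orbit radius
        K = np.array([[k, kappa], [-kappa, k]])  # equal diagonal, skew-symmetric off-diagonal

        for wt in np.linspace(0.0, 2 * np.pi, 5):
            x = eps * np.array([np.cos(wt), np.sin(wt)])   # impeller center on whirl orbit
            F = K @ x
            r_hat = x / eps                                # radial (normal) direction
            t_hat = np.array([-r_hat[1], r_hat[0]])        # tangential direction
            print(f"F_normal = {F @ r_hat:+.2f}   F_tangential = {F @ t_hat:+.2f}")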

  19. Evaluation of the expected moments algorithm and a multiple low-outlier test for flood frequency analysis at streamgaging stations in Arizona

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.

    2014-01-01

    Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers; the choice changes the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors them. B17B and EMA estimates are identical if no historical information or censored or low outliers are present in the peak-flow data. The EMA with MGB (EMA-MGB) test was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B17B-GB and EMA-MGB RPD-boxplot results showed that the median RPDs across all streamgaging stations for the 10-, 1-, and 0.2-percent AEPs, computed using station skew, were approximately zero. As the AEP flow estimates decreased (that is, from 10 to 0.2 percent AEP), the variability in the RPDs increased, indicating that the AEP flow estimate was greater for EMA-MGB when compared to B17B-GB. There was only one RPD greater than 100 percent for the 10- and 1-percent AEP estimates, whereas 19 RPDs exceeded 100 percent for the 0.2-percent AEP. At streamgaging stations with low-outlier data, historical peak-flow data, or both, RPDs ranged from −84 to 262 percent for the 0.2-percent AEP flow estimate. When streamgaging stations were separated by the presence of historical peak-flow data (that is, no low outliers or censored peaks) or by low-outlier peak-flow data (no historical data), the results showed that RPD variability was greatest for the 0.2-percent AEP flow estimates, indicating that the treatment of historical and (or) low-outlier data differed between methods and that method differences were most influential when estimating the less probable AEP flows (1, 0.5, and 0.2 percent). When regional skew information was weighted with the station skew, B17B-GB estimates were generally higher than the EMA-MGB estimates for any given AEP. This was related to the different regional skews and mean square errors used in the weighting procedure for each flood frequency analysis.
    The B17B-GB weighted skew analysis used a more positive regional skew determined in USGS Water Supply Paper 2433 (Thomas and others, 1997), while the EMA-MGB analysis used a more negative regional skew, with a lower mean square error, determined from a Bayesian generalized least squares analysis. Regional groupings of streamgaging stations reflected differences in physiographic and climatic characteristics. Potentially influential low flows (PILFs) were more prevalent in arid regions of the State, and AEP flows were generally larger with EMA-MGB than with B17B-GB for gaging stations with PILFs. In most cases EMA-MGB curves fit the largest floods more accurately than B17B-GB. In areas of the State with more baseflow, such as along the Mogollon Rim and the White Mountains, streamgaging stations generally had fewer PILFs and more positive skews, causing estimated AEP flows to be larger with B17B-GB than with EMA-MGB. The effect of including regional skew was similar for all regions, and the observed pattern was increasingly greater B17B-GB flows (more negative RPDs) with each decreasing AEP quantile. A variation on a goodness-of-fit test statistic was used to describe each method’s ability to fit the largest floods. The mean absolute percent difference between the measured peak flows and the log-Pearson Type 3 (LP3)-estimated flows, for each method, was averaged over the 90th, 75th, and 50th percentiles of peak-flow data at each site. In most percentile subsets, EMA-MGB on average had smaller differences (1 to 3 percent) between the observed and fitted values, suggesting that the EMA-MGB LP3 distribution fits the observed peak-flow data more precisely than B17B-GB. The smallest EMA-MGB percent differences occurred for the greatest 10 percent (90th percentile) of the peak-flow data. When stations were analyzed by USGS NWIS peak-flow qualification code groups, the stations with historical peak flows and no low outliers had average percent differences as much as 11 percent greater for B17B-GB, indicating that EMA-MGB utilized the historical information to fit the largest observed floods more accurately. A resampling procedure was used in which 1,000 random subsamples were drawn, each comprising one-half of the observed data. An LP3 distribution was fit to each subsample using the B17B-GB and EMA-MGB methods, and the predicted 1-percent AEP flows were compared to those generated from distributions fit to the entire dataset. With station skew, the two methods had similar median percent differences, but with weighted skew the EMA-MGB estimates were generally better. At two gages where B17B-GB appeared to perform better, a large number of peak flows were deemed to be PILFs by the MGB test, although they did not appear to depart significantly from the trend of the data (a step or dogleg appearance). At two gages where EMA-MGB performed better, the MGB test identified several PILFs that were affecting the fitted distribution of the B17B-GB method. Monte Carlo simulations were run for the LP3 distribution using different skews and different assumptions about the expected number of historical peaks. The primary benefit of running Monte Carlo simulations is that the underlying distribution statistics are known, meaning that the true 1-percent AEP is known. The results showed that EMA-MGB performed as well as or better than B17B-GB in situations where the LP3 distribution had a zero or positive skew and historical information. When the skew for the LP3 distribution was negative, EMA-MGB performed significantly better than B17B-GB, and its estimates were less biased, more closely matching the true 1-percent AEP under the 1-, 2-, and 10-historical-flood scenarios.
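
    A minimal sketch of fitting a log-Pearson Type III distribution to annual peaks and reading off the 1-percent AEP flow, using scipy's pearson3 on the base-10 logs of flow; the synthetic record stands in for a real gaging station, and no EMA/MGB censoring or regional-skew weighting is applied.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        peaks = rng.lognormal(mean=6.0, sigma=0.8, size=60)   # synthetic annual peaks (cfs)

        logq = np.log10(peaks)
        skew, loc, scale = stats.pearson3.fit(logq)           # LP3 = Pearson III on log10 flows

        # 1-percent AEP flow = 0.99 quantile of the fitted LP3 distribution.
        q99 = stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
        print(f"station skew = {skew:.2f}, 1% AEP flow ~ {10**q99:,.0f} cfs")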

  20. Rapid pupil-based assessment of glaucomatous damage.

    PubMed

    Chen, Yanjun; Wyatt, Harry J; Swanson, William H; Dul, Mitchell W

    2008-06-01

    To investigate the ability of a technique employing pupillometry and functionally-shaped stimuli to assess loss of visual function due to glaucomatous optic neuropathy. Pairs of large stimuli, mirror images about the horizontal meridian, were displayed alternately in the upper and lower visual field. Pupil diameter was recorded and analyzed in terms of the "contrast balance" (relative sensitivity to the upper and lower stimuli), and the pupil constriction amplitude to upper and lower stimuli separately. A group of 40 patients with glaucoma was tested twice in a first session, and twice more in a second session, 1 to 3 weeks later. A group of 40 normal subjects was tested with the same protocol. Results for the normal subjects indicated functional symmetry in upper/lower retina, on average. Contrast balance results for the patients with glaucoma differed from normal: half the normal subjects had contrast balance within 0.06 log unit of equality and 80% had contrast balance within 0.1 log unit. Half the patients had contrast balances more than 0.1 log unit from equality. Patient contrast balances were moderately correlated with predictions from perimetric data (r = 0.37, p < 0.00001). Contrast balances correctly classified visual field damage in 28 patients (70%), and response amplitudes correctly classified 24 patients (60%). When contrast balance and response amplitude were combined, receiver operating characteristic area for discriminating glaucoma from normal was 0.83. Pupillary evaluation of retinal asymmetry provides a rapid method for detecting and classifying visual field defects. In this patient population, classification agreed with perimetry in 70% of eyes.

  1. Rapid Pupil-Based Assessment of Glaucomatous Damage

    PubMed Central

    Chen, Yanjun; Wyatt, Harry J.; Swanson, William H.; Dul, Mitchell W.

    2010-01-01

    Purpose To investigate the ability of a technique employing pupillometry and functionally-shaped stimuli to assess loss of visual function due to glaucomatous optic neuropathy. Methods Pairs of large stimuli, mirror images about the horizontal meridian, were displayed alternately in the upper and lower visual field. Pupil diameter was recorded and analyzed in terms of the “contrast balance” (relative sensitivity to the upper and lower stimuli), and the pupil constriction amplitude to upper and lower stimuli separately. A group of 40 patients with glaucoma was tested twice in a first session, and twice more in a second session, 1 to 3 weeks later. A group of 40 normal subjects was tested with the same protocol. Results Results for the normal subjects indicated functional symmetry in upper/lower retina, on average. Contrast balance results for the patients with glaucoma differed from normal: half the normal subjects had contrast balance within 0.06 log unit of equality and 80% had contrast balance within 0.1 log unit. Half the patients had contrast balances more than 0.1 log unit from equality. Patient contrast balances were moderately correlated with predictions from perimetric data (r = 0.37, p < 0.00001). Contrast balances correctly classified visual field damage in 28 patients (70%), and response amplitudes correctly classified 24 patients (60%). When contrast balance and response amplitude were combined, receiver operating characteristic area for discriminating glaucoma from normal was 0.83. Conclusions Pupillary evaluation of retinal asymmetry provides a rapid method for detecting and classifying visual field defects. In this patient population, classification agreed with perimetry in 70% of eyes. PMID:18521026

  2. Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness

    PubMed Central

    Samonds, Jason M.; Potetz, Brian R.; Lee, Tai Sing

    2014-01-01

    We propose using the statistical measurement of the sample skewness of the distribution of mean firing rates of a tuning curve to quantify sharpness of tuning. For some features, like binocular disparity, tuning curves are best described by relatively complex and sometimes diverse functions, making it difficult to quantify sharpness with a single function and parameter. Skewness provides a robust nonparametric measure of tuning curve sharpness that is invariant with respect to the mean and variance of the tuning curve and is straightforward to apply to a wide range of tuning, including simple orientation tuning curves and complex object tuning curves that often cannot even be described parametrically. Because skewness does not depend on a specific model or function of tuning, it is especially appealing for cases of sharpening where recurrent interactions among neurons produce sharper tuning curves that deviate in a complex manner from the feedforward function of tuning. Since tuning curves for all neurons are not typically well described by a single parametric function, this model independence additionally allows skewness to be applied to all recorded neurons, maximizing the statistical power of a set of data. We also compare skewness with other nonparametric measures of tuning curve sharpness and selectivity. Compared to the other nonparametric measures tested, skewness is best for capturing the sharpness of multimodal tuning curves defined by narrow peaks (maxima) and broad valleys (minima). Finally, we provide a more formal definition of sharpness using a shape-based information gain measure and show that skewness is correlated with this definition. PMID:24555451
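
    A minimal sketch of the proposed measure — the sample skewness of the mean firing rates across tuning-curve conditions, which is large for a narrow peak over a broad valley; the toy tuning curves are assumptions for illustration.

        import numpy as np
        from scipy.stats import skew

        theta = np.linspace(-90, 90, 37)                  # stimulus values (e.g., degrees)
        sharp = 5 + 60 * np.exp(-theta**2 / (2 * 10**2))  # narrow peak, broad valley
        broad = 5 + 60 * np.exp(-theta**2 / (2 * 45**2))  # shallow, wide tuning

        # Sharpness = sample skewness of the distribution of mean firing rates.
        print(f"sharp curve skewness: {skew(sharp):.2f}")   # strongly positive
        print(f"broad curve skewness: {skew(broad):.2f}")   # nearer zero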

  3. New Families of Skewed Higher-Order Kernel Estimators to Solve the BSS/ICA Problem for Multimodal Sources Mixtures.

    PubMed

    Jabbar, Ahmed Najah

    2018-04-13

    This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.

  4. X Chromosome Inactivation in Women with Alcoholism

    PubMed Central

    Manzardo, Ann M.; Henkhaus, Rebecca; Hidaka, Brandon; Penick, Elizabeth C.; Poje, Albert B.; Butler, Merlin G.

    2012-01-01

    Background All female mammals with two X chromosomes balance gene expression with males having only one X by inactivating one of their Xs (X chromosome inactivation, XCI). Analysis of XCI in females offers the opportunity to investigate both X-linked genetic factors and early embryonic development that may contribute to alcoholism. An increased prevalence of skewed XCI in women with alcoholism could implicate biological risk factors. Methods The pattern of XCI was examined in DNA isolated from blood from 44 adult females meeting DSM-IV criteria for an Alcohol Use Disorder and 45 control females with no known history of alcohol abuse or dependence. XCI status was determined by analyzing digested and undigested polymerase chain reaction (PCR) products of the polymorphic androgen receptor (AR) gene located on the X chromosome. Subjects were categorized into 3 groups based upon the degree of XCI skewness: random (50:50–64:36), moderately skewed (65:35–80:20) and highly skewed (>80:20). Results XCI status from informative females with alcoholism was found to be random in 59% (n=26), moderately skewed in 27% (n=12) or highly skewed in 14% (n=6). Control subjects showed 60%, 29% and 11%, respectively. The distribution of skewed XCI observed among women with alcoholism did not differ statistically from that of control subjects (χ² = 0.14, 2 df, p = 0.93). Conclusions Our data did not support an increase in XCI skewness among women with alcoholism or implicate early developmental events associated with embryonic cell loss, unequal (non-random) expression of X-linked gene(s), or defects in alcoholism among females. PMID:22375556
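
    A minimal sketch of the categorization and group comparison described above; the alcoholism counts come from the abstract, while the control counts are approximated from the reported percentages, so the test statistic will only roughly match.

        from scipy.stats import chi2_contingency

        def xci_category(ratio):
            """Classify an XCI ratio (major-allele %) per the cutoffs in the abstract."""
            if ratio <= 64:
                return "random"
            if ratio <= 80:
                return "moderately skewed"
            return "highly skewed"

        print(xci_category(58), xci_category(72), xci_category(91))

        # 2 x 3 table: rows = alcoholism / control; columns = random / moderate / high.
        table = [[26, 12, 6],   # counts from the abstract (alcoholism group)
                 [27, 13, 5]]   # approximate control counts from reported percentages
        chi2, p, df, _ = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.2f}")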

  5. Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia

    USGS Publications Warehouse

    Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.

    2009-01-01

    Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations meeting the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered the one with the smallest mean square error (MSE), defined as the mean of the squared deviations of the individual base-10 logarithms of peak flow from the mean of all such logarithms. Contouring of station skews was the best method for determining generalized skew for West Virginia, with an MSE of about 0.2174. This is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
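
    A minimal sketch of the MSE defined in this abstract — the mean squared deviation of the base-10 logarithms of peak flow from their mean — with synthetic peaks in place of station records:

        import numpy as np

        rng = np.random.default_rng(2)
        peaks = rng.lognormal(mean=5.5, sigma=0.6, size=50)   # synthetic annual peak flows

        logs = np.log10(peaks)
        mse = np.mean((logs - logs.mean()) ** 2)              # MSE as defined in the abstract
        print(f"MSE = {mse:.4f}")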

  6. Halo Pressure Profile through the Skew Cross-power Spectrum of the Sunyaev-Zel’dovich Effect and CMB Lensing in Planck

    NASA Astrophysics Data System (ADS)

    Timmons, Nicholas; Cooray, Asantha; Feng, Chang; Keating, Brian

    2017-11-01

    We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev-Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model fit the SZ power spectrum and CMB lensing-SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature including a direct estimate based on the stacking of SZ clusters in Planck.

  7. Assessing Impacts of Selective Logging on Water, Energy, and Carbon Fluxes in Amazon Forests Using the Functionally Assembled Terrestrial Ecosystem Simulator (FATES)

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Huang, M.; Keller, M. M.; Longo, M.; Knox, R. G.; Koven, C.; Fisher, R.

    2016-12-01

    As a key component of the climate system, old-growth tropical forests act as carbon sinks that remove CO2 from the atmosphere. However, these forests can easily turn into carbon sources when disturbed. In fact, over half of tropical forests have been cleared or logged, and almost half of standing primary tropical forests are designated for timber production. Existing literature suggests that timber harvests alone could contribute up to 25% as much carbon loss as deforestation in the Amazon. Yet the spatial extent and recovery trajectory of disturbed forests in a changing climate are highly uncertain. This study constitutes our first attempt to quantify the impacts of selective logging on the water, energy, and carbon budgets of Amazon forests using the Functionally Assembled Terrestrial Ecosystem Simulator (FATES). The Community Land Model version 4.5 (CLM4.5), with and without FATES turned on, is configured to run at two flux towers established in the Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA). One tower is located in an old-growth forest (KM67) and the other in a selectively logged site (KM83). The three CLM4.5 options, (1) Satellite Phenology (CLM4.5-SP), (2) Century-based biogeochemical cycling with prognostic phenology (CLM4.5-BGC), and (3) CLM4.5-FATES, are spun up to equilibrium by recycling the observed meteorology at the respective towers. The simulated fluxes (sensible heat, latent heat, and net ecosystem exchange) are then compared to observations at KM67 to evaluate the capability of the models in capturing water and carbon dynamics in old-growth tropical forests. Our results suggest that all three models perform reasonably well in capturing the fluxes, but demographic features simulated by FATES, such as the distributions of diameter at breast height (DBH) and stem density (SD), are skewed heavily toward extremely large trees (e.g., > 100 cm in DBH) when compared to site surveys at the forest plots. Efforts are underway to evaluate parametric sensitivity in FATES to improve simulations in old-growth forests, and to implement parameterizations representing the pulse disturbance to carbon pools created by logging events at different intensities and the follow-up recovery closely tied to gap-phase regeneration and competition for light within the gaps.

  8. Fishery stock assessment of Kiddi shrimp ( Parapenaeopsis stylifera) in the Northern Arabian Sea Coast of Pakistan by using surplus production models

    NASA Astrophysics Data System (ADS)

    Mohsin, Muhammad; Mu, Yongtong; Memon, Aamir Mahmood; Kalhoro, Muhammad Talib; Shah, Syed Baber Hussain

    2017-07-01

    Pakistani marine waters are under an open access regime. Due to poor management and policy implications, blind fishing continues, which may result in ecological as well as economic losses. Thus, it is of utmost importance to estimate fishery resources before harvesting. In this study, catch and effort data, 1996-2009, of the Kiddi shrimp Parapenaeopsis stylifera fishery from Pakistani marine waters were analyzed using specialized fishery software in order to assess the stock status of this commercially important shrimp. Maximum, minimum, and average capture production of P. stylifera were 15 912 metric tons (mt) (1997), 9 438 mt (2009), and 11 667 mt/a, respectively. Two stock assessment tools, CEDA (catch and effort data analysis) and ASPIC (a stock production model incorporating covariates), were used to compute the MSY (maximum sustainable yield) of this organism. In CEDA, three surplus production models, Fox, Schaefer, and Pella-Tomlinson, along with three error assumptions, log, log-normal, and gamma, were used. For initial proportion (IP) 0.8, the Fox model computed MSY as 6 858 mt (CV = 0.204, R² = 0.709) and 7 384 mt (CV = 0.149, R² = 0.72) for the log and log-normal error assumptions, respectively; the gamma error assumption produced a minimization failure. MSY estimated using the Schaefer and Pella-Tomlinson models was the same for both: 7 083 mt, 8 209 mt, and 7 242 mt under the log, log-normal, and gamma error assumptions, respectively. The Schaefer results showed the highest goodness-of-fit R² (0.712). ASPIC computed the MSY, CV, R², F_MSY, and B_MSY parameters for the Fox model as 7 219 mt, 0.142, 0.872, 0.111, and 65 280, while for the Logistic model the corresponding values were 7 720 mt, 0.148, 0.868, 0.107, and 72 110. The results show that P. stylifera has been overexploited. Immediate steps are needed to conserve this fishery resource for the future, and research on other species of commercial importance is urgently needed.
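
    For the Schaefer (logistic) surplus production model used above, equilibrium yield peaks at MSY = rK/4, reached at effort E_MSY = r/(2q); a minimal sketch with illustrative parameter values, not the fitted CEDA/ASPIC estimates:

        # Schaefer surplus production: MSY = r*K/4 at effort E_MSY = r/(2*q).
        r = 0.60      # intrinsic growth rate (illustrative)
        K = 50_000.0  # carrying capacity, mt (illustrative)
        q = 1e-4      # catchability coefficient (illustrative)

        msy = r * K / 4.0
        e_msy = r / (2.0 * q)
        print(f"MSY ~ {msy:,.0f} mt at effort E_MSY ~ {e_msy:,.0f} units")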

  9. Opposite GC skews at the 5' and 3' ends of genes in unicellular fungi

    PubMed Central

    2011-01-01

    Background GC-skews have previously been linked to transcription in some eukaryotes. They have been associated with transcription start sites, with the coding strand G-biased in mammals and C-biased in fungi and invertebrates. Results We show a consistent and highly significant pattern of GC-skew within the genes of almost all unicellular fungi. The pattern of GC-skew is asymmetrical: the coding strand of genes is typically C-biased at the 5' ends but G-biased at the 3' ends, with intermediate skews in the middle of genes. Thus, the initiation, elongation, and termination phases of transcription are associated with different skews. This pattern influences the encoded proteins by generating differential usage of amino acids at the 5' and 3' ends of genes. These biases also affect fourfold-degenerate positions and extend into promoters and 3' UTRs, indicating that the skews cannot be accounted for by selection for protein function or translation. Conclusions We propose two explanations: the mutational pressure hypothesis and the adaptive hypothesis. The mutational pressure hypothesis is that different co-factors bind to RNA pol II at different phases of transcription, producing different mutational regimes. The adaptive hypothesis is that cytidine triphosphate deficiency may lead to C-avoidance at the 3' ends of transcripts to control the flow of RNA pol II molecules and reduce their frequency of collisions. PMID:22208287
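
    A minimal sketch of the GC-skew statistic, (G − C)/(G + C), computed in sliding windows along the coding strand; the toy sequence (C-rich at the 5' end, G-rich at the 3' end) is an assumption for illustration.

        def gc_skew(seq, window=50, step=25):
            """(G - C) / (G + C) in sliding windows; positive = G-biased strand."""
            out = []
            for i in range(0, len(seq) - window + 1, step):
                w = seq[i:i + window]
                g, c = w.count("G"), w.count("C")
                out.append((g - c) / (g + c) if g + c else 0.0)
            return out

        # Toy coding strand: C-rich near the 5' end, G-rich near the 3' end.
        seq = "ATGCCCTCCCGA" * 20 + "ATGGGAGGGTGA" * 20
        print([round(s, 2) for s in gc_skew(seq)])   # negative values, then positive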

  10. Effect of skew angle on second harmonic guided wave measurement in composite plates

    NASA Astrophysics Data System (ADS)

    Cho, Hwanjeong; Choi, Sungho; Lissenden, Cliff J.

    2017-02-01

    Waves propagating in anisotropic media are subject to skewing effects due to the media having directional wave speed dependence, which is characterized by slowness curves. Likewise, the generation of second harmonics is sensitive to micro-scale damage that is generally not detectable from linear features of ultrasonic waves. Here, the effect of skew angle on second harmonic guided wave measurement in a transversely isotropic lamina and a quasi-isotropic laminate are numerically studied. The strain energy density function for a nonlinear transversely isotropic material is formulated in terms of the Green-Lagrange strain invariants. The guided wave mode pairs for cumulative second harmonic generation in the plate are selected in accordance with the internal resonance criteria - i.e., phase matching and non-zero power flux. Moreover, the skew angle dispersion curves for the mode pairs are obtained from the semi-analytical finite element method using the derivative of the slowness curve. The skew angles of the primary and secondary wave modes are calculated and wave propagation simulations are carried out using COMSOL. Numerical simulations revealed that the effect of skew angle mismatch can be significant for second harmonic generation in anisotropic media. The importance of skew angle matching on cumulative second harmonic generation is emphasized and the accompanying issue of the selection of internally resonant mode pairs for both a unidirectional transversely isotropic lamina and a quasi-isotropic laminate is demonstrated.
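
    A minimal sketch of the skew-angle computation named above, using the common construction tan Ψ = (1/c_p)·dc_p/dθ on a numerically differentiated phase-velocity curve; the elliptical velocity profile is an assumed toy model, not the paper's lamina.

        import numpy as np

        theta = np.radians(np.arange(0.0, 91.0, 1.0))   # propagation angle
        # Toy anisotropic phase velocity c_p(theta), m/s (illustrative).
        c = 6000.0 * np.sqrt(np.cos(theta)**2 + 0.6 * np.sin(theta)**2)

        dc_dtheta = np.gradient(c, theta)               # numerical derivative
        skew_deg = np.degrees(np.arctan(dc_dtheta / c)) # tan(psi) = (1/c) dc/dtheta

        i = np.argmax(np.abs(skew_deg))
        print(f"max |skew| ~ {abs(skew_deg[i]):.1f} deg at theta = {np.degrees(theta[i]):.0f} deg")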

  11. Estimation of Renyi exponents in random cascades

    USGS Publications Warehouse

    Troutman, Brent M.; Vecchia, Aldo V.

    1999-01-01

    We consider statistical estimation of the Rényi exponent τ(h), which characterizes the scaling behaviour of a singular measure μ defined on a subset of R^d. The Rényi exponent is defined to be lim_{δ→0} [log M_δ(h)/(−log δ)], assuming that this limit exists, where M_δ(h) = Σ_i μ^h(Δ_i) and, for δ > 0, {Δ_i} are the cubes of a δ-coordinate mesh that intersect the support of μ. In particular, we demonstrate asymptotic normality of the least-squares estimator of τ(h) when the measure μ is generated by a particular class of multiplicative random cascades, a result which allows construction of interval estimates and application of hypothesis tests for this scaling exponent. Simulation results illustrating this asymptotic normality are presented. © 1999 ISI/BS.
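
    A minimal sketch of the least-squares estimator: build a binomial multiplicative cascade, form M_δ(h) = Σ_i μ(Δ_i)^h over successively coarser dyadic meshes, and regress log M_δ(h) on −log δ; the cascade weight p is illustrative, and the deterministic cascade admits a closed-form check.

        import numpy as np

        # Deterministic binomial cascade on [0, 1] down to 2^12 bins (weight p illustrative).
        p, levels = 0.3, 12
        mu = np.array([1.0])
        for _ in range(levels):
            mu = np.concatenate([p * mu, (1 - p) * mu])   # split each half's mass p : (1-p)

        h = 2.0
        xs, ys = [], []
        m = mu.copy()
        for lev in range(levels, 0, -1):
            delta = 2.0 ** -lev
            xs.append(-np.log(delta))
            ys.append(np.log(np.sum(m ** h)))
            m = m[0::2] + m[1::2]                         # coarse-grain to the next mesh

        tau_hat = np.polyfit(xs, ys, 1)[0]                # least-squares slope = tau(h)
        exact = np.log2(p**h + (1 - p)**h)                # closed form for this cascade
        print(f"estimated tau({h}) = {tau_hat:.3f}, exact = {exact:.3f}")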

  12. Medium Access Control for Opportunistic Concurrent Transmissions under Shadowing Channels

    PubMed Central

    Son, In Keun; Mao, Shiwen; Hur, Seung Min

    2009-01-01

    We study the problem of how to alleviate the exposed terminal effect in multi-hop wireless networks in the presence of log-normal shadowing channels. Assuming node location information, we propose an extension of the IEEE 802.11 MAC protocol that schedules concurrent transmissions in the presence of log-normal shadowing, thus mitigating the exposed terminal problem and improving network throughput and delay performance. We observe considerable improvements in throughput and delay over the IEEE 802.11 MAC under various network topologies and channel conditions in ns-2 simulations, which justifies the importance of considering channel randomness in MAC protocol design for multi-hop wireless networks. PMID:22408556
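
    A minimal sketch of the log-normal shadowing channel model underlying the protocol: path loss PL(d) = PL(d0) + 10·n·log10(d/d0) + X_σ, with X_σ a zero-mean Gaussian in dB; all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        def path_loss_db(d, d0=1.0, pl0=40.0, n=3.0, sigma=6.0):
            """Log-normal shadowing: log-distance loss plus N(0, sigma) dB."""
            return pl0 + 10.0 * n * np.log10(d / d0) + rng.normal(0.0, sigma, np.shape(d))

        d = np.array([10.0, 50.0, 100.0])   # link distances (m)
        rx_dbm = 20.0 - path_loss_db(d)     # received power at 20 dBm transmit power
        print(np.round(rx_dbm, 1))
        # A MAC could permit a concurrent transmission when the predicted interference,
        # including a shadowing margin, stays below the receiver's tolerance.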

  13. Quantitative analysis of diagnosing pancreatic fibrosis using EUS-elastography (comparison with surgical specimens).

    PubMed

    Itoh, Yuya; Itoh, Akihiro; Kawashima, Hiroki; Ohno, Eizaburo; Nakamura, Yosuke; Hiramatsu, Takeshi; Sugimoto, Hiroyuki; Sumi, Hajime; Hayashi, Daijuro; Kuwahara, Takamichi; Morishima, Tomomasa; Funasaka, Kohei; Nakamura, Masanao; Miyahara, Ryoji; Ohmiya, Naoki; Katano, Yoshiaki; Ishigami, Masatoshi; Goto, Hidemi; Hirooka, Yoshiki

    2014-07-01

    An accurate diagnosis of pancreatic fibrosis is clinically important and may have potential for staging chronic pancreatitis. The aim of this study was to diagnose the grade of pancreatic fibrosis through a quantitative analysis of endoscopic ultrasound elastography (EUS-EG). From September 2004 to October 2010, 58 consecutive patients examined by EUS-EG for both pancreatic tumors and their upstream pancreas before pancreatectomy were enrolled. Preoperative EUS-EG images of the upstream pancreas were statistically quantified, and the results were retrospectively compared with postoperative histological fibrosis in the same area. For the quantification of EUS-EG images, 4 parameters (mean, standard deviation, skewness, and kurtosis) were calculated using novel software. Histological fibrosis was graded into 4 categories (normal, mild fibrosis, marked fibrosis, and severe fibrosis) according to a previously reported scoring system. The fibrosis grade in the upstream pancreas was normal in 24 patients, mild fibrosis in 19, marked fibrosis in 6, and severe fibrosis in 9. Fibrosis grade was significantly correlated with all 4 quantification parameters (mean r = -0.75, standard deviation r = -0.54, skewness r = 0.69, kurtosis r = 0.67). According to the receiver operating characteristic (ROC) analysis, the mean was the most useful parameter for diagnosing pancreatic fibrosis. Using the mean, the areas under the ROC curves for the diagnosis of mild or higher-grade fibrosis, marked or higher-grade fibrosis, and severe fibrosis were 0.90, 0.90, and 0.90, respectively. An accurate diagnosis of pancreatic fibrosis may be possible by analyzing EUS-EG images.

  14. Anomalous Hall effect in semiconductor quantum wells in proximity to chiral p -wave superconductors

    NASA Astrophysics Data System (ADS)

    Yang, F.; Yu, T.; Wu, M. W.

    2018-05-01

    By using the gauge-invariant optical Bloch equation, we perform a microscopic kinetic investigation on the anomalous Hall effect in chiral p -wave superconducting states. Specifically, the intrinsic anomalous Hall conductivity in the absence of the magnetic field is zero as a consequence of Galilean invariance in our description. As for the extrinsic channel, a finite anomalous Hall current is obtained from the impurity scattering with the optically excited normal quasiparticle current even at zero temperature. From our kinetic description, it can be clearly seen that the excited normal quasiparticle current is due to an induced center-of-mass momentum of Cooper pairs through the acceleration driven by ac electric field. For the induced anomalous Hall current, we show that the conventional skew-scattering channel in the linear response makes the dominant contribution in the strong impurity interaction. In this case, our kinetic description as a supplementary viewpoint mostly confirms the results of Kubo formalism in the literature. Nevertheless, in the weak impurity interaction, this skew-scattering channel becomes marginal and we reveal that an induction channel from the Born contribution dominates the anomalous Hall current. This channel, which has long been overlooked in the literature, is due to the particle-hole asymmetry by nonlinear optical excitation. Finally, we study the case in the chiral p -wave superconducting state with a transverse conical magnetization, which breaks the Galilean invariance. In this situation, the intrinsic anomalous Hall conductivity is no longer zero. Comparison of this intrinsic channel with the extrinsic one from impurity scattering is addressed.

  15. Artificial Neural Network Based Fault Diagnostics of Rolling Element Bearings Using Time-Domain Features

    NASA Astrophysics Data System (ADS)

    Samanta, B.; Al-Balushi, K. R.

    2003-03-01

    A procedure is presented for fault diagnosis of rolling element bearings through artificial neural network (ANN). The characteristic features of time-domain vibration signals of the rotating machinery with normal and defective bearings have been used as inputs to the ANN consisting of input, hidden and output layers. The features are obtained from direct processing of the signal segments using very simple preprocessing. The input layer consists of five nodes, one each for root mean square, variance, skewness, kurtosis and normalised sixth central moment of the time-domain vibration signals. The inputs are normalised in the range of 0.0 and 1.0 except for the skewness which is normalised between -1.0 and 1.0. The output layer consists of two binary nodes indicating the status of the machine—normal or defective bearings. Two hidden layers with different number of neurons have been used. The ANN is trained using backpropagation algorithm with a subset of the experimental data for known machine conditions. The ANN is tested using the remaining set of data. The effects of some preprocessing techniques like high-pass, band-pass filtration, envelope detection (demodulation) and wavelet transform of the vibration signals, prior to feature extraction, are also studied. The results show the effectiveness of the ANN in diagnosis of the machine condition. The proposed procedure requires only a few features extracted from the measured vibration data either directly or with simple preprocessing. The reduced number of inputs leads to faster training requiring far less iterations making the procedure suitable for on-line condition monitoring and diagnostics of machines.
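
    A minimal sketch of the five time-domain features used as ANN inputs; the vibration segment is synthetic, and the [0, 1] (or [-1, 1] for skewness) input scaling described above would be applied afterwards against training-set extrema.

        import numpy as np
        from scipy.stats import skew, kurtosis

        def features(x):
            """Five time-domain features of a vibration segment, as in the abstract."""
            m = x - x.mean()
            sigma = x.std()
            return np.array([
                np.sqrt(np.mean(x ** 2)),        # root mean square
                np.var(x),                       # variance
                skew(x),                         # skewness
                kurtosis(x, fisher=False),       # kurtosis
                np.mean(m ** 6) / sigma ** 6,    # normalised sixth central moment
            ])

        rng = np.random.default_rng(4)
        segment = rng.standard_normal(2048)      # synthetic vibration segment
        print(np.round(features(segment), 3))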

  16. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
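
    A minimal sketch of rejection-ABC in the spirit of the proposed method, recovering a mean and SD from a reported (median, min, max) triple; the priors, tolerance, and distance are illustrative choices, not the authors' settings.

        import numpy as np

        rng = np.random.default_rng(5)
        reported = np.array([10.0, 4.2, 17.8])   # (median, min, max) from a publication
        n = 40                                   # study sample size

        accepted = []
        for _ in range(30_000):
            mu = rng.uniform(0.0, 20.0)          # prior draws for mean and SD
            sd = rng.uniform(0.1, 10.0)
            sim = rng.normal(mu, sd, n)          # simulate a study of size n
            summ = np.array([np.median(sim), sim.min(), sim.max()])
            if np.linalg.norm(summ - reported) < 2.0:    # tolerance (illustrative)
                accepted.append((mu, sd))

        if accepted:
            mu_hat, sd_hat = np.mean(accepted, axis=0)
            print(f"ABC estimates: mean ~ {mu_hat:.2f}, SD ~ {sd_hat:.2f} "
                  f"({len(accepted)} accepted draws)")
        else:
            print("no draws accepted; widen the tolerance")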

  17. Monitoring the Groningen gas field by seismic noise interferometry

    NASA Astrophysics Data System (ADS)

    Zhou, Wen; Paulssen, Hanneke

    2017-04-01

    The Groningen gas field in the Netherlands is the world's 7th largest onshore gas field and has been producing since 1963. Since 2013, the year with the highest level of induced seismicity, the reservoir has been monitored by two geophone strings at reservoir level, at about 3 km depth. In borehole SDM, 10 geophones with a natural frequency of 15 Hz are positioned from the top to the bottom of the reservoir with a geophone spacing of 30 m. We used seismic interferometry to determine, as accurately as possible, the inter-geophone P- and S-wave velocities from ambient noise. We applied 1-bit normalization and spectral whitening, together with a bandpass filter from 3 to 400 Hz. Then, for each station pair, the normalized cross-correlation was calculated for 6-second segments with 2/3 overlap. These segmented cross-correlations were stacked for every hour, giving 24 (hours) × 33 (days) segments for each station pair. The cross-correlations show both day-and-night and weekly variations reflecting fluctuations in cultural noise. The apparent P-wave travel time for each geophone pair is measured from the maximum of the vertical-component cross-correlation for each of the hourly stacks. Because the distribution of these 24 × 33 picked travel times is not Gaussian but skewed, we used kernel density estimation to obtain probability density functions of the travel times. The maximum-likelihood travel times of all the geophone pairs were subsequently used to determine inter-geophone P-wave velocities. A good agreement was found between our estimated P-velocity structure and well-logging data, with differences of less than 5%. The S-velocity structure was obtained from the east-component cross-correlations. These show both the direct P- and S-wave arrivals and, because of the interference, the inferred S-velocity structure is less accurate. From the nine (3×3) component cross-correlations for all the geophone pairs, not only the direct P and S waves can be identified, but also, for some of the cross-correlations, reflected waves within the reservoir. It is concluded that noise interferometry can be used to determine the seismic velocity structure from deep borehole data.
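
    A minimal sketch of the preprocessing chain described above — 1-bit normalization, spectral whitening, then frequency-domain cross-correlation of two channels; synthetic noise with a known 25-sample delay stands in for the borehole records.

        import numpy as np

        def whiten(x):
            """Spectral whitening: keep the phase, flatten the amplitude spectrum."""
            X = np.fft.rfft(x)
            return np.fft.irfft(X / (np.abs(X) + 1e-12), n=len(x))

        rng = np.random.default_rng(6)
        common = rng.standard_normal(6000)                           # shared noise source
        a = common + 0.5 * rng.standard_normal(6000)
        b = np.roll(common, 25) + 0.5 * rng.standard_normal(6000)    # 25-sample delay

        a, b = np.sign(a), np.sign(b)        # 1-bit normalization
        a, b = whiten(a), whiten(b)          # spectral whitening

        # Cross-correlation via FFT; the peak lag estimates the inter-channel travel time.
        cc = np.fft.irfft(np.fft.rfft(b) * np.conj(np.fft.rfft(a)))
        lag = int(np.argmax(cc))
        lag = lag - len(cc) if lag > len(cc) // 2 else lag
        print(f"estimated delay: {lag} samples")   # ~25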

  18. Wavefront-Guided Scleral Lens Correction in Keratoconus

    PubMed Central

    Marsack, Jason D.; Ravikumar, Ayeswarya; Nguyen, Chi; Ticak, Anita; Koenig, Darren E.; Elswick, James D.; Applegate, Raymond A.

    2014-01-01

    Purpose To examine the performance of state-of-the-art wavefront-guided scleral contact lenses (wfgSCLs) on a sample of keratoconic eyes, with emphasis on performance quantified with visual quality metrics; and to provide a detailed discussion of the process used to design, manufacture and evaluate wfgSCLs. Methods Fourteen eyes of 7 subjects with keratoconus were enrolled and a wfgSCL was designed for each eye. High-contrast visual acuity and visual quality metrics were used to assess the on-eye performance of the lenses. Results The wfgSCL provided statistically lower levels of both lower-order RMS (p < 0.001) and higher-order RMS (p < 0.02) than an intermediate spherical equivalent scleral contact lens. The wfgSCL provided lower levels of lower-order RMS than a normal group of well-corrected observers (p << 0.001). However, the wfgSCL does not provide less higher-order RMS than the normal group (p = 0.41). Of the 14 eyes studied, 10 successfully reached the exit criteria, achieving residual higher-order root mean square wavefront error (HORMS) less than or within 1 SD of the levels experienced by normal, age-matched subjects. In addition, measures of visual image quality (logVSX, logNS and logLIB) for the 10 eyes were well distributed within the range of values seen in normal eyes. However, visual performance as measured by high-contrast acuity did not reach normal, age-matched levels, which is in agreement with prior results associated with the acute application of wavefront correction to KC eyes. Conclusions Wavefront-guided scleral contact lenses are capable of optically compensating for the deleterious effects of the higher-order aberration concomitant with the disease, and can provide visual image quality equivalent to that seen in normal eyes. Longer duration studies are needed to assess whether the visual system of the highly aberrated eye wearing a wfgSCL is capable of producing visual performance levels typical of the normal population. PMID:24830371

  19. X chromosome inactivation in women with alcoholism.

    PubMed

    Manzardo, Ann M; Henkhaus, Rebecca; Hidaka, Brandon; Penick, Elizabeth C; Poje, Albert B; Butler, Merlin G

    2012-08-01

    All female mammals with 2 X chromosomes balance gene expression with males having only 1 X by inactivating one of their X chromosomes (X chromosome inactivation [XCI]). Analysis of XCI in females offers the opportunity to investigate both X-linked genetic factors and early embryonic development that may contribute to alcoholism. Increases in the prevalence of skewing of XCI in women with alcoholism could implicate biological risk factors. The pattern of XCI was examined in DNA isolated from blood from 44 adult women meeting DSM-IV criteria for an alcohol use disorder and 45 control women with no known history of alcohol abuse or dependence. XCI status was determined by analyzing digested and undigested polymerase chain reaction (PCR) products of the polymorphic androgen receptor (AR) gene located on the X chromosome. Subjects were categorized into 3 groups based upon the degree of XCI skewness: random (50:50 to 64:36%), moderately skewed (65:35 to 80:20%), and highly skewed (>80:20%). XCI status from informative women with alcoholism was found to be random in 59% (n = 26), moderately skewed in 27% (n = 12), or highly skewed in 14% (n = 6). Control subjects showed 60, 29, and 11%, respectively. The distribution of skewed XCI observed among women with alcoholism did not differ statistically from that of control subjects (χ² = 0.14, 2 df, p = 0.93). Our data did not support an increase in XCI skewness among women with alcoholism or implicate early developmental events associated with embryonic cell loss, unequal (nonrandom) expression of X-linked gene(s), or defects in alcoholism among women. Copyright © 2012 by the Research Society on Alcoholism.

  20. Characterization of X Chromosome Inactivation Using Integrated Analysis of Whole-Exome and mRNA Sequencing

    PubMed Central

    Szelinger, Szabolcs; Malenica, Ivana; Corneveaux, Jason J.; Siniard, Ashley L.; Kurdoglu, Ahmet A.; Ramsey, Keri M.; Schrauwen, Isabelle; Trent, Jeffrey M.; Narayanan, Vinodh; Huentelman, Matthew J.; Craig, David W.

    2014-01-01

    In females, X chromosome inactivation (XCI) is an epigenetic, gene dosage compensatory mechanism that inactivates one copy of X in cells. Random XCI of one of the parental chromosomes results in an approximately equal proportion of cells expressing alleles from either the maternally or paternally inherited active X, and is defined by the XCI ratio. A skewed XCI ratio is suggestive of non-random inactivation, which can play an important role in X-linked genetic conditions. Current methods rely on an indirect, semi-quantitative DNA methylation-based assay to estimate the XCI ratio. Here we report a direct approach to estimating the XCI ratio by integrated, family-trio based whole-exome and mRNA sequencing, using phase-by-transmission of alleles coupled with allele-specific expression analysis. We applied this method to in silico data and to a clinical patient with mild cognitive impairment but no clear diagnosis or understanding of the molecular mechanism underlying the phenotype. Simulation showed that phased and unphased heterozygous allele expression can be used to estimate the XCI ratio. Segregation analysis of the patient's exome uncovered a de novo, interstitial, 1.7 Mb deletion on Xp22.31 that originated on the paternally inherited X and had previously been associated with a heterogeneous, neurological phenotype. Phased, allelic expression data suggested an 83:20 moderately skewed XCI that favored the expression of the maternally inherited, cytogenetically normal X and suggested that the deleterious effect of the de novo event on the paternal copy may be offset by skewed XCI favoring expression of the wild-type X. This study shows the utility of an integrated sequencing approach in XCI ratio estimation. PMID:25503791

  1. When is category specific in Alzheimer's disease?

    PubMed

    Laws, Keith R; Gale, Tim M; Leeson, Verity C; Crawford, John R

    2005-08-01

    Mixed findings have emerged concerning whether category-specific disorders occur in Alzheimer's disease. Factors that may contribute to these inconsistencies include: ceiling effects/skewed distributions for control data in some studies; differences in the severity of cognitive deficit in patients; and differences in the type of analysis (in particular, if and how controls are used to analyse single-case data). We examined picture naming in Alzheimer's patients and matched healthy elderly controls in three experiments. These experiments used stimuli that did and did not produce ceiling effects/skewed data in controls. In Experiment 1, we tested for category effects in individual DAT patients using commonly used analyses for single cases (χ² and z-scores). The different techniques produced quite different outcomes. In Experiment 2a, we used the same techniques on a different group of patients, with similar outcomes. Finally, in Experiment 2b, we examined the same patients but (a) used stimuli that did not produce ceiling effects/skewed distributions in healthy controls, and (b) used statistical methods that did not treat the control sample as a population. We found that ceiling effects in controls may markedly inflate the incidence of dissociations in which living things are differentially impaired and seriously underestimate dissociations in the opposite direction. In addition, methods that treat the control sample as a population led to inflation of the overall number of dissociations detected. These findings have implications for the reliability of category effects previously reported both in Alzheimer patients and in other pathologies. In particular, they suggest that the greater proportion of living than nonliving deficits reported in the literature may be an artifact of the methods used.

  2. Skewed X-chromosome inactivation plays a crucial role in the onset of symptoms in carriers of Becker muscular dystrophy.

    PubMed

    Viggiano, Emanuela; Picillo, Esther; Ergoli, Manuela; Cirillo, Alessandra; Del Gaudio, Stefania; Politano, Luisa

    2017-04-01

    Becker muscular dystrophy (BMD) is an X-linked recessive disorder affecting approximately 1:18,000 male births. Female carriers are usually asymptomatic, although 2.5-18% may present muscle or heart symptoms. In the present study, the role of X chromosome inactivation (XCI) in the onset of symptoms in BMD carriers was analysed and compared with the pattern observed in Duchenne muscular dystrophy (DMD) carriers. XCI was determined on the lymphocytes of 36 BMD carriers (both symptomatic and asymptomatic) from 11 families seen for genetic advice at the Cardiomyology and Medical Genetics unit of the Second University of Naples, using the AR methylation-based assay. Carriers were subdivided into two groups, according to age above or below 50 years. Seven females from the same families known to be noncarriers were used as controls. A Student's t-test for unpaired data was performed to evaluate the differences observed in the XCI values between asymptomatic and symptomatic carriers, and between carriers aged above or below 50 years. A Pearson correlation test was used to evaluate the inheritance of the XCI pattern in 19 mother-daughter pairs. The results showed that symptomatic BMD carriers had a skewed XCI with preferential inactivation of the X chromosome carrying the normal allele, whereas the asymptomatic carriers and controls showed a random XCI. No concordance in the XCI pattern was observed between mothers and their daughters. The data obtained in the present study suggest that the onset of symptoms in BMD carriers is related to a skewed XCI, as observed in DMD carriers, and show no concordance in the inheritance of the XCI pattern. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Quantifying the cross-sectional relationship between online sentiment and the skewness of stock returns

    NASA Astrophysics Data System (ADS)

    Shen, Dehua; Liu, Lanbiao; Zhang, Yongjie

    2018-01-01

    The constantly increasing utilization of social media as an alternative information channel, e.g., Twitter, provides a unique opportunity to investigate the dynamics of the financial market. In this paper, we employ the daily happiness sentiment extracted from Twitter as a proxy for online sentiment dynamics and investigate its association with the skewness of the returns of 26 international stock market indices. The empirical results show that: (1) when the daily happiness sentiment is divided into quintiles from the least- to the most-happiness days, the skewness of the Most-happiness subgroup is significantly larger than that of the Least-happiness subgroup, and there are significant differences between every pair of subgroups; (2) using an event study methodology, we further show that the skewness around the highest happiness days is significantly larger than the skewness around the lowest happiness days.
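
    The quintile comparison can be reproduced schematically with pandas and scipy; the data and column names below are synthetic placeholders, not the Twitter series used in the paper.

```python
# Illustrative computation of return skewness conditional on sentiment
# quintiles (synthetic data; column names are hypothetical).
import numpy as np
import pandas as pd
from scipy.stats import skew

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "happiness": rng.normal(6.0, 0.2, 1000),   # daily sentiment proxy
    "ret": rng.standard_t(5, 1000) * 0.01,     # daily index return
})

# Divide days into quintiles from least to most happiness
df["quintile"] = pd.qcut(df["happiness"], 5, labels=False)

# Skewness of returns within each quintile subgroup
print(df.groupby("quintile")["ret"].apply(skew))
```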

  4. Halo Pressure Profile through the Skew Cross-power Spectrum of the Sunyaev–Zel’dovich Effect and CMB Lensing in Planck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timmons, Nicholas; Cooray, Asantha; Feng, Chang

    2017-11-01

    We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev–Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model fit the SZ power spectrum and CMB lensing–SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature, including a direct estimate based on the stacking of SZ clusters in Planck.

  5. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

    In this paper, a new fast frequency domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is implemented for determining the skew angle of a document image. First, image size is reduced using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform; the skew angle error is almost negligible. The proposed method was evaluated on a large number of documents with skew between -90° and +90°, and the results were compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The proposed method is more efficient than the existing methods, and it works with typed and picture documents of different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
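
    The pipeline described (DWT size reduction followed by an FFT peak search) can be sketched as follows with NumPy and PyWavelets; this illustrates the general approach only, not the authors' exact algorithm, and the angle bookkeeping is deliberately simplified.

```python
# Minimal sketch of frequency-domain skew estimation: shrink the image with
# one level of 2-D DWT, then locate the dominant orientation of the FFT
# magnitude spectrum (text lines produce a strong spectral ridge).
import numpy as np
import pywt  # PyWavelets

def estimate_skew(image):                      # image: 2-D grayscale array
    approx, _ = pywt.dwt2(image, "haar")       # size reduction via DWT
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(approx)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    spectrum[cy, cx] = 0.0                     # suppress the DC component
    py, px = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    # The spectral peak lies perpendicular to the text lines; convert the
    # peak direction to a skew angle in degrees
    angle = np.degrees(np.arctan2(py - cy, px - cx))
    return angle - 90.0 if angle > 0 else angle + 90.0
```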

  6. Impact of radius and skew angle on areal density in heat assisted magnetic recording hard disk drives

    NASA Astrophysics Data System (ADS)

    Cordle, Michael; Rea, Chris; Jury, Jason; Rausch, Tim; Hardie, Cal; Gage, Edward; Victora, R. H.

    2018-05-01

    This study aims to investigate the impact that factors such as skew, radius, and transition curvature have on areal density capability in heat-assisted magnetic recording hard disk drives. We explore a "ballistic seek" approach for capturing in-situ scan line images of the magnetization footprint on the recording media, and extract parametric results of recording characteristics such as transition curvature. We take full advantage of the significantly improved cycle time to apply a statistical treatment to relatively large samples of experimental curvature data to evaluate measurement capability. Quantitative analysis of factors that impact transition curvature reveals an asymmetry in the curvature profile that is strongly correlated to skew angle. Another less obvious skew-related effect is an overall decrease in curvature as skew angle increases. Using conventional perpendicular magnetic recording as the reference case, we characterize areal density capability as a function of recording position.

  7. Experimental investigation of the noise emission of axial fans under distorted inflow conditions

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Renz, Andreas; Becher, Marcus; Becker, Stefan

    2016-11-01

    An experimental investigation of the noise emission of axial fans under distorted inflow conditions was conducted. Three fans with forward-skewed blades and three fans with backward-skewed blades, sharing a common operating point, were designed with a 2D blade element method. Two approaches were adopted to modify the inflow conditions: first, the inflow turbulence intensity was increased by two different rectangular grids, and second, the inflow velocity profile was changed to an asymmetric characteristic by two grids with a distinct bar stacking. An increase in the inflow turbulence intensity affects both tonal and broadband noise, whereas a non-uniform velocity profile at the inlet influences mainly tonal components. The magnitude of this effect is not the same for all fans but depends on the blade skew: the impact is greater for the forward-skewed fans than for the backward-skewed fans and is thus directly linked to the fan blade geometry.

  8. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions; the procedure converges to a local maximum of the log-likelihood function. Newton's method, a method of scoring, and modifications of these procedures are also discussed.
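
    The fixed-point iteration for mixture likelihoods of this kind is essentially what is now called the EM algorithm. Below is a minimal sketch for a two-component univariate normal mixture on synthetic data; it renders the general technique, not the paper's exact formulation.

```python
# EM-style fixed-point iteration for a two-component univariate normal
# mixture: alternate posterior responsibilities (E-step) with reweighted
# maximum-likelihood parameter updates (M-step).
import numpy as np
from scipy.stats import norm

def fit_mixture(x, n_iter=200):
    w, mu1, mu2 = 0.5, x.min(), x.max()
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # E-step: posterior probability that each point came from component 1
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted ML updates of mixing weight, means, and sds
        w = r.mean()
        mu1 = np.sum(r * x) / r.sum()
        mu2 = np.sum((1 - r) * x) / (1 - r).sum()
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / r.sum())
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / (1 - r).sum())
    return w, (mu1, s1), (mu2, s2)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])
print(fit_mixture(x))  # recovers the mixing weight and component parameters
```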

  9. Estimation of Microbial Contamination of Food from Prevalence and Concentration Data: Application to Listeria monocytogenes in Fresh Vegetables

    PubMed Central

    Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric

    2007-01-01

    A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacterium Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by counting one contaminated sample in prevalence studies in which all samples were in fact negative; this deliberate overestimation is necessary to complete the calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated at concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. Estimated contamination was significantly lower in the papers and reports of 2000 to 2005 than in those of 1988 to 1999, and lower for leafy salads than for sprouts and other vegetables. The value of the mixture model for the estimation of microbial contamination is discussed. PMID:17098926
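
    Under a normal distribution on log counts, such exceedance probabilities follow directly from the survival function. The sketch below uses the single-normal summary statistics quoted above purely for illustration; it does not reproduce the paper's mixture-model percentages, which also account for prevalence.

```python
# Exceedance probabilities for log10 counts under a fitted normal
# distribution (illustration only, using the single-normal fit above).
from scipy.stats import norm

mean, sd = -2.63, 1.48   # log viable L. monocytogenes organisms/g
for threshold in (1, 2, 3):
    p = norm.sf(threshold, loc=mean, scale=sd)
    print(f"P(log count > {threshold}) = {100 * p:.2f}%")
```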

  10. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of the maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
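
    The maximum-likelihood-plus-bootstrap workflow can be sketched on synthetic storm maxima as follows; the data, the threshold, and the two-parameter (location-zero) log-normal are assumptions for illustration, not the study's Dst record or its exact fitting choices.

```python
# ML log-normal fit to storm maxima with bootstrap confidence limits
# (synthetic -Dst values; illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
dst = rng.lognormal(mean=4.5, sigma=0.6, size=300)   # synthetic -Dst maxima

def exceedance(sample):
    """ML-fit a log-normal (location fixed at 0), return P(-Dst > 850 nT)."""
    shape, _, scale = stats.lognorm.fit(sample, floc=0)
    return stats.lognorm.sf(850, shape, loc=0, scale=scale)

# Bootstrap the exceedance probability at a Carrington-like threshold
boots = [exceedance(rng.choice(dst, size=dst.size, replace=True))
         for _ in range(1000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"P(-Dst > 850 nT): {exceedance(dst):.2e}, 95% CI [{lo:.2e}, {hi:.2e}]")
```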

  11. Effects of reduced-impact logging on fish assemblages in central Amazonia.

    PubMed

    Dias, Murilo S; Magnusson, William E; Zuanon, Jansen

    2010-02-01

    In Amazonia, reduced-impact logging, which is meant to reduce environmental disturbance by controlling stem-fall directions and minimizing construction of access roads, has been applied to large areas containing thousands of streams. We investigated the effects of reduced-impact logging on environmental variables and the composition of fish in forest streams in a commercial logging concession in central Amazonia, Amazonas State, Brazil. To evaluate short-term effects, we sampled 11 streams before and after logging in one harvest area. We evaluated medium-term effects by comparing streams in 11 harvest areas logged 1-8 years before the study with control streams in adjacent areas. Each sampling unit was a 50-m stream section. The tetras Pyrrhulina brevis and Hemigrammus cf. pretoensis had higher abundances in plots logged ≥3 years before compared with plots logged <3 years before. The South American darter (Microcharacidium eleotrioides) was less abundant in logged plots than in control plots. In the short term, the overall fish composition did not differ between two months before and immediately after reduced-impact logging. Temperature and pH varied before and after logging, but those differences were compatible with normal seasonal variation. In the medium term, temperature and cover of logs were lower in logged plots. Differences in ordination scores based on relative fish abundance between streams in control and logged areas changed with time since logging, mainly because some common species increased in abundance after logging. There was no evidence of species loss from the logging concession, but differences in log cover and in ordination scores derived from relative abundance of fish species persisted even after 8 years. For Amazonian streams, reduced-impact logging appears to be a viable alternative to clear-cut practices, which severely affect aquatic communities. Nevertheless, detailed studies are necessary to evaluate subtle long-term effects.

  12. Comparative analysis of background EEG activity in childhood absence epilepsy during valproate treatment: a standardized, low-resolution, brain electromagnetic tomography (sLORETA) study.

    PubMed

    Shin, Jung-Hyun; Eom, Tae-Hoon; Kim, Young-Hoon; Chung, Seung-Yun; Lee, In-Goo; Kim, Jung-Min

    2017-07-01

    Valproate (VPA) is an antiepileptic drug (AED) used for initial monotherapy in treating childhood absence epilepsy (CAE). EEG might be an alternative approach to explore the effects of AEDs on the central nervous system. We performed a comparative analysis of background EEG activity during VPA treatment by using standardized, low-resolution, brain electromagnetic tomography (sLORETA) to explore the effect of VPA in patients with CAE. In 17 children with CAE, non-parametric statistical analyses using sLORETA were performed to compare the current density distribution of four frequency bands (delta, theta, alpha, and beta) between the untreated and treated condition. Maximum differences in current density were found in the left inferior frontal gyrus for the delta frequency band (log-F-ratio = -1.390, P > 0.05), the left medial frontal gyrus for the theta frequency band (log-F-ratio = -0.940, P > 0.05), the left inferior frontal gyrus for the alpha frequency band (log-F-ratio = -0.590, P > 0.05), and the left anterior cingulate for the beta frequency band (log-F-ratio = -1.318, P > 0.05). However, none of these differences were significant (threshold log-F-ratio = ±1.888, P < 0.01; threshold log-F-ratio = ±1.722, P < 0.05). Because EEG background is accepted as normal in CAE, VPA would not be expected to significantly change abnormal thalamocortical oscillations on a normal EEG background. Therefore, our results agree with currently accepted concepts but are not consistent with findings in some previous studies.

  13. Higher incidence of small Y chromosome in humans with trisomy 21 (Down syndrome).

    PubMed

    Verma, R S; Huq, A; Madahar, C; Qazi, Q; Dosik, H

    1982-09-01

    The length of the Y chromosome was measured in 42 black patients with trisomy 21 (47,XY,+21) and a similar number of normal individuals of American black ancestry. The length of the Y was expressed as a function of the Y/F ratio and arbitrarily classified into five groups using subjectively defined criteria: very small, small, average, large, and very large. Thirty-eight percent of the trisomy 21 patients had small or very small Ys, compared to 2.38% of the controls (P < 0.01). In both populations the size of the Y was not normally distributed: in the normal individuals it was skewed to the left, whereas in the Down syndrome patients the distribution was flat (platykurtic). A significantly higher incidence of Y length heteromorphisms was noted in the Down syndrome population as compared to the normal black population. In the light of our current understanding that about one-third of all trisomy 21 cases are due to paternal nondisjunction, it may be tempting to speculate that males with a small Y are at an increased risk for nondisjunction of chromosome 21.

  14. A radiographic study of the mandibular third molar root development in different ethnic groups.

    PubMed

    Liversidge, H M; Peariasamy, K; Folayan, M O; Adeniyi, A O; Ngom, P I; Mikami, Y; Shimada, Y; Kuroe, K; Tvete, I F; Kvaal, S I

    2017-12-01

    The nature of differences in the timing of tooth formation between ethnic groups is important when estimating age. Our aim was to calculate the ages of transition of the mandibular third molar (M3) stages from archived dental radiographs from sub-Saharan Africa, Malaysia, Japan and two groups from London, UK (Whites and Bangladeshi). The number of radiographs was 4555 (2028 males, 2527 females), with an age range of 10-25 years. The left M3 was staged using Moorrees stages. A probit model was fitted to calculate mean ages for transitions between stages for males and females and for each ethnic group separately, and the estimated age distribution given each M3 stage was calculated. To assess differences in the timing of M3 between ethnic groups, three models were proposed: a separate model for each ethnic group, a joint model, and a third model combining some aspects across groups. Model fit was compared using the Bayesian and Akaike information criteria (BIC and AIC) and the log likelihood ratio test. Differences in mean ages of M3 root stages were found between ethnic groups; however, all groups showed large standard deviations. The AIC and log likelihood ratio test indicated that a separate model for each ethnic group was best. Small differences were also noted in the timing of M3 between males and females, with the exception of the Malaysian group. These findings suggest that features of a reference data set (wide age range and uniform age distribution) and a Bayesian statistical approach are more important than population-specific convenience samples when estimating the age of an individual using M3. Some group differences in M3 timing were evident; however, because of the large variation in age, these have some impact on the confidence interval of estimated age in females and little impact in males.
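
    Under a probit transition model, the probability that an individual of age a has reached a given stage is Φ((a − μ)/σ), so a binary probit regression of "stage reached" on age recovers μ = −intercept/slope and σ = 1/slope. The sketch below uses synthetic data and statsmodels to illustrate this single-transition case; it is not the authors' full multi-stage model.

```python
# Probit estimation of a single stage-transition age (synthetic data).
# P(stage reached | age) = Phi(b0 + b1*age), so mu = -b0/b1, sigma = 1/b1.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
age = rng.uniform(10, 25, 500)
mu_true, sd_true = 19.0, 2.0                  # hypothetical transition timing
reached = (rng.normal(0, sd_true, 500) < age - mu_true).astype(int)

fit = sm.Probit(reached, sm.add_constant(age)).fit(disp=0)
b0, b1 = fit.params
print(f"mean transition age = {-b0 / b1:.1f} yr, sd = {1 / b1:.1f} yr")
```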

  15. A unifying theory for genetic epidemiological analysis of binary disease data

    PubMed Central

    2014-01-01

    Background: Genetic selection for host resistance offers a desirable complement to chemical treatment to control infectious disease in livestock. Quantitative genetics disease data frequently originate from field studies and are often binary. However, current methods to analyse binary disease data fail to take infection dynamics into account. Moreover, genetic analyses tend to focus on host susceptibility, ignoring potential variation in infectiousness, i.e. the ability of a host to transmit the infection. This stands in contrast to epidemiological studies, which reveal that variation in infectiousness plays an important role in the progression and severity of epidemics. In this study, we aim at filling this gap by deriving an expression for the probability of becoming infected that incorporates infection dynamics and is an explicit function of both host susceptibility and infectiousness. We then validate this expression according to epidemiological theory and by simulating epidemiological scenarios, and explore implications of integrating this expression into genetic analyses. Results: Our simulations show that the derived expression is valid for a range of stochastic genetic-epidemiological scenarios. In the particular case of variation in susceptibility only, the expression can be incorporated into conventional quantitative genetic analyses using a complementary log-log link function (rather than probit or logit). Similarly, if there is moderate variation in both susceptibility and infectiousness, it is possible to use a logarithmic link function, combined with an indirect genetic effects model. However, in the presence of highly infectious individuals, i.e. super-spreaders, the use of any model that is linear in susceptibility and infectiousness causes biased estimates. Thus, in order to identify super-spreaders, novel analytical methods using our derived expression are required. Conclusions: We have derived a genetic-epidemiological function for quantitative genetic analyses of binary infectious disease data, which, unlike current approaches, takes infection dynamics into account and allows for variation in host susceptibility and infectiousness. PMID:24552188

  16. A unifying theory for genetic epidemiological analysis of binary disease data.

    PubMed

    Lipschutz-Powell, Debby; Woolliams, John A; Doeschl-Wilson, Andrea B

    2014-02-19

    Genetic selection for host resistance offers a desirable complement to chemical treatment to control infectious disease in livestock. Quantitative genetics disease data frequently originate from field studies and are often binary. However, current methods to analyse binary disease data fail to take infection dynamics into account. Moreover, genetic analyses tend to focus on host susceptibility, ignoring potential variation in infectiousness, i.e. the ability of a host to transmit the infection. This stands in contrast to epidemiological studies, which reveal that variation in infectiousness plays an important role in the progression and severity of epidemics. In this study, we aim at filling this gap by deriving an expression for the probability of becoming infected that incorporates infection dynamics and is an explicit function of both host susceptibility and infectiousness. We then validate this expression according to epidemiological theory and by simulating epidemiological scenarios, and explore implications of integrating this expression into genetic analyses. Our simulations show that the derived expression is valid for a range of stochastic genetic-epidemiological scenarios. In the particular case of variation in susceptibility only, the expression can be incorporated into conventional quantitative genetic analyses using a complementary log-log link function (rather than probit or logit). Similarly, if there is moderate variation in both susceptibility and infectiousness, it is possible to use a logarithmic link function, combined with an indirect genetic effects model. However, in the presence of highly infectious individuals, i.e. super-spreaders, the use of any model that is linear in susceptibility and infectiousness causes biased estimates. Thus, in order to identify super-spreaders, novel analytical methods using our derived expression are required. We have derived a genetic-epidemiological function for quantitative genetic analyses of binary infectious disease data, which, unlike current approaches, takes infection dynamics into account and allows for variation in host susceptibility and infectiousness.
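
    The complementary log-log analysis recommended above (for the susceptibility-only case) can be sketched with a binomial GLM, under which log(−log(1 − p)) is linear in the covariates. The sketch below uses simulated data and statsmodels; the covariate is a stand-in for a host genetic effect on susceptibility, not the authors' genetic-epidemiological model.

```python
# Binomial GLM with complementary log-log link on simulated infection data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
susceptibility = rng.normal(0.0, 0.5, n)     # stand-in host genetic covariate
eta = -1.0 + susceptibility                  # linear predictor
p = 1.0 - np.exp(-np.exp(eta))               # inverse cloglog link
infected = rng.binomial(1, p)                # binary disease outcome

X = sm.add_constant(susceptibility)
model = sm.GLM(infected, X,
               family=sm.families.Binomial(link=sm.families.links.CLogLog()))
print(model.fit().summary())
```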

  17. Comparison of tricuspid and bicuspid aortic valve hemodynamics under steady flow conditions

    NASA Astrophysics Data System (ADS)

    Seaman, Clara; Ward, James; Sucosky, Philippe

    2011-11-01

    The bicuspid aortic valve (BAV), a congenital valvular defect consisting of two leaflets instead of three, is associated with a high prevalence of calcific aortic valve disease (CAVD). CAVD also develops in the normal tricuspid aortic valve (TAV), but its progression in the BAV is more severe and rapid. Although hemodynamic abnormalities are increasingly considered a potential pathogenic contributor, the native BAV hemodynamics remain largely unknown. Therefore, this study aims at comparing experimentally the hemodynamic environments of TAV and BAV anatomies. Particle-image velocimetry was used to characterize the flow downstream of a native TAV and a model BAV mounted in a left-heart simulator and subjected to three steady flow rates characterizing different phases of the cardiac cycle. While the TAV developed a jet aligned along the valve axis, the BAV developed a skewed systolic jet, with skewness decreasing with increasing flow rate. Measurement of the transvalvular pressure revealed a valvular resistance up to 50% larger in the BAV than in the TAV. The increase in velocity from the TAV to the BAV leads to an increase in shear stress downstream of the valve. This study reveals strong hemodynamic abnormalities in the BAV, which may contribute to CAVD pathogenesis.

  18. Adaptive linear rank tests for eQTL studies

    PubMed Central

    Szymczak, Silke; Scheinhardt, Markus O.; Zeller, Tanja; Wild, Philipp S.; Blankenberg, Stefan; Ziegler, Andreas

    2013-01-01

    Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal–Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. PMID:22933317

  19. Adaptive linear rank tests for eQTL studies.

    PubMed

    Szymczak, Silke; Scheinhardt, Markus O; Zeller, Tanja; Wild, Philipp S; Blankenberg, Stefan; Ziegler, Andreas

    2013-02-10

    Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. Copyright © 2012 John Wiley & Sons, Ltd.
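
    The two-stage idea above can be illustrated schematically: estimate selector statistics (skewness and a tail-weight measure) from the pooled sample, then choose a test accordingly. The cutoffs and the specific tests below are placeholders rather than the selectors analysed in the paper, and since scipy offers no van der Waerden normal-scores test, a t-test stands in for that branch.

```python
# Schematic two-stage adaptive test: selector statistics, then test choice.
import numpy as np
from scipy import stats

def adaptive_test(x, y):
    pooled = np.concatenate([x, y])
    skewness = stats.skew(pooled)
    tail = stats.kurtosis(pooled)              # excess kurtosis as tail proxy
    if tail > 2.0:                             # long tails -> median scores
        res = stats.median_test(x, y)
        return "median scores", res[0], res[1]
    if abs(skewness) > 1.0:                    # skewed -> Wilcoxon scores
        stat, p = stats.ranksums(x, y)
        return "Wilcoxon scores", stat, p
    stat, p = stats.ttest_ind(x, y)            # near-normal -> parametric
    return "t-test (normal-scores stand-in)", stat, p

rng = np.random.default_rng(5)
x, y = rng.lognormal(0.0, 1.0, 50), rng.lognormal(0.3, 1.0, 50)
print(adaptive_test(x, y))
```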

  20. Percentiles of the run-length distribution of the Exponentially Weighted Moving Average (EWMA) median chart

    NASA Astrophysics Data System (ADS)

    Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.

    2017-09-01

    Quality control is crucial in a wide variety of fields, as it can help to satisfy customers’ needs and requirements by enhancing and improving products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA X̄ chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption, compared to the traditional X̄-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than depending solely on the average run length (ARL) performance measure. Interpretation depending on the ARL alone can be misleading, as the skewness and shape of the run-length distribution change with the process mean shift, varying from almost symmetric when the magnitude of the mean shift is large to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL while retaining the IC ARL at a desired value.
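
    Run-length percentiles for such a chart can be estimated by straightforward Monte Carlo simulation, as sketched below. The smoothing constant λ, limit width L, and subgroup size are assumed illustrative values, not the optimized design parameters from the paper, and the median's standard error uses the usual asymptotic approximation.

```python
# Monte Carlo estimate of run-length percentiles for an EWMA median chart
# with time-varying control limits (illustrative parameters).
import numpy as np

def run_length(shift=0.0, n=5, lam=0.1, L=2.7, max_run=100_000, rng=None):
    rng = rng or np.random.default_rng()
    sigma_med = np.sqrt(np.pi / (2 * n))   # asymptotic sd of a N(0,1) median
    z, t = 0.0, 0
    while t < max_run:
        t += 1
        med = np.median(rng.normal(shift, 1.0, n))   # subgroup median
        z = lam * med + (1 - lam) * z                # EWMA recursion
        limit = L * sigma_med * np.sqrt(lam / (2 - lam)
                                        * (1 - (1 - lam) ** (2 * t)))
        if abs(z) > limit:                           # out-of-control signal
            return t
    return max_run

rng = np.random.default_rng(9)
rls = np.array([run_length(shift=0.5, rng=rng) for _ in range(2000)])
print("ARL:", rls.mean())
print("5th/50th/95th percentiles:", np.percentile(rls, [5, 50, 95]))
```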
