Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array- or sequencing-based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments).
Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175
Defining surfaces for skewed, highly variable data
Helsel, D.R.; Ryker, S.J.
2002-01-01
Skewness of environmental data is often caused by more than simply a handful of outliers in an otherwise normal distribution. Statistical procedures for such datasets must be sufficiently robust to deal with distributions that are strongly non-normal, containing both a large proportion of outliers and a skewed main body of data. In the field of water quality, skewness is commonly associated with large variation over short distances. Spatial analysis of such data generally requires either considerable effort at modeling or the use of robust procedures not strongly affected by skewness and local variability. Using a skewed dataset of 675 nitrate measurements in ground water, commonly used methods for defining a surface (least-squares regression and kriging) are compared to a more robust method (loess). Three choices are critical in defining a surface: (i) is the surface to be a central mean or median surface? (ii) is either a well-fitting transformation or a robust and scale-independent measure of center used? (iii) does local spatial autocorrelation assist in or detract from addressing objectives? Published in 2002 by John Wiley & Sons, Ltd.
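The contrast behind question (i) is easy to demonstrate: on right-skewed data a least-squares fit targets the mean, which sits well above the median. A minimal numpy sketch (the lognormal "nitrate" sample is invented for illustration, not the 675-well dataset):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical skewed nitrate sample: lognormal mimics a skewed main body
# plus a heavy right tail, as described for the ground-water data.
nitrate = rng.lognormal(mean=1.0, sigma=1.0, size=675)

mean_center = nitrate.mean()        # what a least-squares surface estimates
median_center = np.median(nitrate)  # what a robust median surface estimates

# With right skew, the mean is pulled above the median, so a "mean surface"
# and a "median surface" answer different questions about the same data.
print(mean_center, median_center)
```

The gap between the two centers grows with the skewness, which is why the choice of surface type must precede the choice of fitting method.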
Inferring climate variability from skewed proxy records
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Tingley, M.
2013-12-01
Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). 
Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted, and compared to other proxy records. (2) a multiproxy reconstruction of temperature over the Common Era (Mann et al., 2009), where we find that about one third of the records display significant departures from normality. Accordingly, accounting for skewness in proxy predictors has a notable influence on both reconstructed global mean and spatial patterns of temperature change. Inferring climate variability from skewed proxy records thus requires care, but can be done with relatively simple tools. References - Mann, M. E., Z. Zhang, S. Rutherford, R. S. Bradley, M. K. Hughes, D. Shindell, C. Ammann, G. Faluvegi, and F. Ni (2009), Global signatures and dynamical origins of the little ice age and medieval climate anomaly, Science, 326(5957), 1256-1260, doi:10.1126/science.1177303. - Moy, C., G. Seltzer, D. Rodbell, and D. Anderson (2002), Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch, Nature, 420(6912), 162-165.
Dichotomisation using a distributional approach when the outcome is skewed.
Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L
2015-04-24
Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed so that the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviation from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normal so that normal distributional estimates for a comparison of proportions can be obtained or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method with its applicability for common skewed data allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
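The essence of the distributional approach for the normal case can be sketched in a few lines: estimate the proportion beyond a clinical cutpoint from the fitted mean and SD rather than by counting dichotomised cases. The cutpoint and group parameters below are hypothetical, and the sketch omits the skew-normal extension the paper develops:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def distributional_proportion(mean: float, sd: float, cutpoint: float) -> float:
    """Proportion below a cutpoint implied by the fitted normal
    distribution, rather than obtained by counting dichotomised cases."""
    return norm_cdf((cutpoint - mean) / sd)

# Hypothetical birthweight example: proportion below 2500 g in two groups.
p_control = distributional_proportion(mean=3400.0, sd=500.0, cutpoint=2500.0)
p_exposed = distributional_proportion(mean=3200.0, sd=500.0, cutpoint=2500.0)
risk_difference = p_exposed - p_control
```

Because both proportions derive from the fitted means and SDs, the precision of the comparison of proportions inherits the precision of the comparison of means, which is the method's stated advantage.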
Increased skewing of X chromosome inactivation in Rett syndrome patients and their mothers.
Knudsen, Gun Peggy S; Neilson, Tracey C S; Pedersen, June; Kerr, Alison; Schwartz, Marianne; Hulten, Maj; Bailey, Mark E S; Orstavik, Karen Helene
2006-11-01
Rett syndrome is a largely sporadic, X-linked neurological disorder with a characteristic phenotype, but which exhibits substantial phenotypic variability. This variability has been partly attributed to an effect of X chromosome inactivation (XCI). There have been conflicting reports regarding incidence of skewed X inactivation in Rett syndrome. In rare familial cases of Rett syndrome, favourably skewed X inactivation has been found in phenotypically normal carrier mothers. We have investigated the X inactivation pattern in DNA from blood and buccal cells of sporadic Rett patients (n=96) and their mothers (n=84). The mean degree of skewing in blood was higher in patients (70.7%) than controls (64.9%). Unexpectedly, the mothers of these patients also had a higher mean degree of skewing in blood (70.8%) than controls. In accordance with these findings, the frequency of skewed (XCI ≥ 80%) X inactivation in blood was also higher in both patients (25%) and mothers (30%) than in controls (11%). To test whether the Rett patients with skewed X inactivation were daughters of skewed mothers, 49 mother-daughter pairs were analysed. Of 14 patients with skewed X inactivation, only three had a mother with skewed X inactivation. Among patients, mildly affected cases were shown to be more skewed than more severely affected cases, and there was a trend towards preferential inactivation of the paternally inherited X chromosome in skewed cases. These findings, particularly the greater degree of X inactivation skewing in Rett syndrome patients, are of potential significance in the analysis of genotype-phenotype correlations in Rett syndrome.
Location tests for biomarker studies: a comparison using simulations for the two-sample case.
Scheinhardt, M O; Ziegler, A
2013-01-01
Gene, protein, or metabolite expression levels are often non-normally distributed, heavy tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy tailed distributions.
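The g-and-k family used here has no closed-form density but is straightforward to simulate through its quantile function. The sketch below uses a common parameterisation with the conventional c = 0.8; treat the exact form as an assumption rather than the paper's specification:

```python
import numpy as np

def g_and_k_sample(n, a=0.0, b=1.0, g=0.0, k=0.0, c=0.8, rng=None):
    """Push standard-normal draws z through the g-and-k quantile function:
    g controls skewness, k controls tail weight; g = k = 0 gives N(a, b^2)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = rng.standard_normal(n)
    # (1 - exp(-g z)) / (1 + exp(-g z)) rewritten as tanh(g z / 2)
    return a + b * (1.0 + c * np.tanh(g * z / 2.0)) * (1.0 + z**2) ** k * z

def skew(x):
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

rng = np.random.default_rng(1)
symmetric = g_and_k_sample(100_000, g=0.0, k=0.0, rng=rng)    # ~ N(0, 1)
right_skewed = g_and_k_sample(100_000, g=0.8, k=0.0, rng=rng)
```

Varying g and k independently is what lets such simulation studies cross skewness with tail length in a controlled way.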
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter lambda = -1 leads to the best results.
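The lambda = -1 case is the Box-Cox power transform (x^lambda - 1)/lambda with lambda = -1, i.e. a reflected reciprocal. A quick check of its effect on a simulated Ex-Gaussian sample (the RT-like parameters are invented, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
# Ex-Gaussian = normal + exponential; RT-like parameters in ms (invented).
rt = rng.normal(300.0, 20.0, 50_000) + rng.exponential(100.0, 50_000)

def skew(x):
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

# Box-Cox power transform with lambda = -1: (x**lambda - 1) / lambda.
transformed = (rt ** -1.0 - 1.0) / -1.0

print(skew(rt), skew(transformed))
```

The reciprocal of an RT is a speed, which is one intuition for why this particular lambda works well on RT-shaped data.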
The importance of normalisation in the construction of deprivation indices.
Gilthorpe, M S
1995-12-01
Measuring socio-economic deprivation is a major challenge usually addressed through the use of composite indices. This paper aims to clarify the technical details regarding composite index construction. The distribution of some variables, for example unemployment, varies over time, and these variations must be considered when composite indices are periodically re-evaluated. The process of normalisation is examined in detail and particular attention is paid to the importance of symmetry and skewness of the composite variable distributions. Four different solutions of the Townsend index of socioeconomic deprivation are compared to reveal the effects that differing transformation processes have on the meaning or interpretation of the final index values. Differences in the rank order and the relative separation between values are investigated. Constituent variables which have been transformed to yield a more symmetric distribution provide indices that behave similarly, irrespective of the actual transformation methods adopted. Normalisation is seen to be of less importance than the removal of variable skewness. Furthermore, the degree of success of the transformation in removing skewness has a major effect in determining the variation between the individual electoral ward scores. Constituent variables undergoing no transformation produce an index that is distorted by the inherent variable skewness, and this index is not consistent between re-evaluations, either temporally or spatially. Effective transformation of constituent variables should always be undertaken when generating a composite index. The most important aspect is the removal of variable skewness. There is no need for the transformed variables to be normally distributed, only symmetrically distributed, before standardisation. 
Even where additional parameter weights are to be applied, which significantly alter the final index, appropriate transformation procedures should be adopted for the purpose of consistency over time and between different geographical areas.
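A Townsend-style construction along these lines can be sketched as: symmetrise the skewed constituent variables (a plain log here; the paper's point is removing skewness, not achieving normality), standardise, then sum. All ward-level variables below are simulated, not census data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_wards = 200

# Hypothetical ward-level rates (%); unemployment and overcrowding
# are right-skewed, the other two roughly symmetric.
unemployment = rng.lognormal(1.5, 0.6, n_wards)
no_car = rng.normal(30.0, 8.0, n_wards).clip(1, None)
overcrowding = rng.lognormal(0.5, 0.7, n_wards)
not_owner_occupied = rng.normal(35.0, 10.0, n_wards).clip(1, None)

def z(x):
    """Standardise to mean 0, sd 1."""
    return (x - x.mean()) / x.std()

# Composite index: log the skewed constituents first, then z-score and sum.
index = (z(np.log(unemployment)) + z(no_car)
         + z(np.log(overcrowding)) + z(not_owner_occupied))
```

Leaving the skewed constituents untransformed would let their long tails dominate the ward rankings, which is exactly the distortion the paper describes.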
Generating Multivariate Ordinal Data via Entropy Principles.
Lee, Yen; Kaplan, David
2018-03-01
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
On the efficacy of procedures to normalize Ex-Gaussian distributions
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2015-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter lambda = -1 leads to the best results. PMID:25709588
Directional Dependence in Developmental Research
ERIC Educational Resources Information Center
von Eye, Alexander; DeShon, Richard P.
2012-01-01
In this article, we discuss and propose methods that may be of use to determine direction of dependence in non-normally distributed variables. First, it is shown that standard regression analysis is unable to distinguish between explanatory and response variables. Then, skewness and kurtosis are discussed as tools to assess deviation from…
Statistical analysis of the 70 meter antenna surface distortions
NASA Technical Reports Server (NTRS)
Kiedron, K.; Chian, C. T.; Chuang, K. L.
1987-01-01
Statistical analysis of surface distortions of the 70 meter NASA/JPL antenna, located at Goldstone, was performed. The purpose of this analysis is to verify whether deviations due to gravity loading can be treated as quasi-random variables with normal distribution. Histograms of the RF pathlength error distribution for several antenna elevation positions were generated. The results indicate that the deviations from the ideal antenna surface are not normally distributed. The observed density distribution for all antenna elevation angles is taller and narrower than the normal density, which results in large positive values of kurtosis and a significant amount of skewness. The skewness of the distribution changes from positive to negative as the antenna elevation changes from zenith to horizon.
Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
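For readers unfamiliar with the distribution itself: the skew-normal SN(0, 1, alpha) has density 2·phi(x)·Phi(alpha·x), and alpha = 0 recovers the normal. A standard way to sample from it (not the paper's EM estimation machinery) is the convolution representation:

```python
import numpy as np

def rskewnorm(n, alpha, rng=None):
    """Sample SN(0, 1, alpha) via X = delta*|Z0| + sqrt(1 - delta^2)*Z1,
    with delta = alpha / sqrt(1 + alpha^2) (Azzalini's representation)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    delta = alpha / np.sqrt(1.0 + alpha**2)
    z0 = np.abs(rng.standard_normal(n))
    z1 = rng.standard_normal(n)
    return delta * z0 + np.sqrt(1.0 - delta**2) * z1

def skew(x):
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

x_sym = rskewnorm(100_000, alpha=0.0)   # alpha = 0 recovers N(0, 1)
x_skew = rskewnorm(100_000, alpha=5.0)  # strongly right-skewed
```

The single shape parameter alpha is what gives the skew-normal IM its "continuous variation from normality to non-normality": the mixture model simply gains one parameter per component.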
Plasma Electrolyte Distributions in Humans-Normal or Skewed?
Feldman, Mark; Dickson, Beverly
2017-11-01
It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na+), potassium (K+), chloride (Cl-) and bicarbonate [Formula: see text] distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na+ distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl- distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. In contrast, both the plasma K+ and [Formula: see text] distributions were significantly skewed to the right (P < 0.01 vs. zero skew). There was also a suggestion from examining frequency distribution curves that K+ and [Formula: see text] distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods to evaluate these two plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
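A large-sample version of such a comparison can be sketched as follows, using the approximation that under normality the sample skewness has standard error roughly sqrt(6/n). This simple z test is an assumption, not necessarily the exact test the authors used, and the electrolyte-like samples are simulated:

```python
import math
import random

def sample_skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def z_vs_zero_skew(xs):
    """Approximate z statistic for H0: population skewness = 0,
    using SE(skewness) ~ sqrt(6/n) under normality."""
    return sample_skewness(xs) / math.sqrt(6.0 / len(xs))

random.seed(0)
# Simulated stand-ins, n = 237 as in the study: a symmetric "Na+"-like
# sample and a right-skewed "K+"-like sample (units are arbitrary).
na_like = [random.gauss(140.0, 3.0) for _ in range(237)]
k_like = [3.5 + random.expovariate(1.0) for _ in range(237)]

z_na = z_vs_zero_skew(na_like)
z_k = z_vs_zero_skew(k_like)
```

A |z| well beyond 2 flags a significant departure from zero skew, mirroring the paper's finding for K+ and bicarbonate but not for Na+ and Cl-.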
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.
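A minimal integrate-and-fire simulation in the spirit of this design (all parameters invented): accumulate drift plus zero-mean, unit-variance noise each time step, emit a spike and reset at threshold, and swap in noise of different shapes:

```python
import numpy as np

def simulate_intervals(noise_draw, n_spikes=2000, drift=1.0, threshold=20.0):
    """Integrate drift + zero-mean noise each step; on crossing the
    threshold, record the inter-impulse interval (in steps) and reset."""
    intervals, v, t = [], 0.0, 0
    while len(intervals) < n_spikes:
        t += 1
        v += drift + noise_draw()
        if v >= threshold:
            intervals.append(t)
            v, t = 0.0, 0
    return np.array(intervals)

rng = np.random.default_rng(5)
# Three unit-variance, zero-mean noise sources with different shapes.
gauss_isi = simulate_intervals(lambda: rng.normal(0.0, 1.0))            # no skew
gamma_isi = simulate_intervals(lambda: rng.gamma(1.0, 1.0) - 1.0)       # positive skew
uniform_isi = simulate_intervals(lambda: rng.uniform(-3**0.5, 3**0.5))  # platykurtic
```

Because the interval is a first-passage time summing many noise draws, the shape of any single draw is largely washed out, which is one way to see why the three interval distributions end up so similar.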
The formulation and estimation of a spatial skew-normal generalized ordered-response model.
DOT National Transportation Integrated Search
2016-06-01
This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an : associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal : distribut...
Using the range to calculate the coefficient of variation.
Rhiel, G Steven
2004-12-01
In this research a coefficient of variation (CVhigh-low) is calculated from the highest and lowest values in a set of data. Use of CVhigh-low when the population is normal, leptokurtic, and skewed is discussed. The statistic is the most effective when sampling from the normal distribution. With the leptokurtic distributions, CVhigh-low works well for comparing the relative variability between two or more distributions but does not provide a very "good" point estimate of the population coefficient of variation. With skewed distributions CVhigh-low works well in identifying which data set has the more relative variation but does not specify how much difference there is in the variation. It also does not provide a "good" point estimate.
Hiroaki Ishii; Ken-Ichi Yoshimura; Akira Mori
2009-01-01
The branching pattern of A. amabilis was regular (normal shoot-length distribution, less variable branching angle and bifurcation ratio), whereas that of T. heterophylla was more plastic (positively skewed shoot-length distribution, more variable branching angle and bifurcation ratio). The two species had similar shoot...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-17
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy’s Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the log normal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
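For the log-normal in particular, the skewness-dispersion link is exact: skewness = 3d + d^3, where d is the relative dispersion. A Monte-Carlo check with illustrative parameters (not ARM retrieval values):

```python
import numpy as np

rng = np.random.default_rng(11)
sigma = 0.5                              # log-space sd; illustrative only
lwp = rng.lognormal(mean=3.0, sigma=sigma, size=500_000)

d = lwp.std() / lwp.mean()               # relative dispersion
c = lwp - lwp.mean()
skewness = (c**3).mean() / (c**2).mean() ** 1.5

# Exact log-normal relationships: d = sqrt(exp(sigma^2) - 1),
# skewness = 3*d + d^3.
d_theory = np.sqrt(np.exp(sigma**2) - 1.0)
skew_theory = 3.0 * d_theory + d_theory**3
```

A one-parameter curve of this kind is what lets a single family like the log-normal "explain" the observed skewness-dispersion scatter across averaging scales.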
Algae Tile Data: 2004-2007, BPA-51; Preliminary Report, October 28, 2008.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holderman, Charles
Multiple files containing 2004 through 2007 Tile Chlorophyll data for the Kootenai River sites designated as: KR1, KR2, KR3, KR4 (Downriver) and KR6, KR7, KR9, KR9.1, KR10, KR11, KR12, KR13, KR14 (Upriver) were received by SCS. For a complete description of the sites covered, please refer to http://ktoi.scsnetw.com. To maintain consistency with the previous SCS algae reports, all analyses were carried out separately for the Upriver and Downriver categories, as defined in the aforementioned paragraph. The Upriver designation, however, now includes three additional sites, KR11, KR12, and the nutrient addition site, KR9.1. Summary statistics and information on the four responses, chlorophyll a, chlorophyll a Accrual Rate, Total Chlorophyll, and Total Chlorophyll Accrual Rate are presented in Print Out 2. Computations were carried out separately for each river position (Upriver and Downriver) and year. For example, the Downriver position in 2004 showed an average Chlorophyll a level of 25.5 mg with a standard deviation of 21.4 and minimum and maximum values of 3.1 and 196 mg, respectively. The Upriver data in 2004 showed a lower overall average chlorophyll a level at 2.23 mg with a lower standard deviation (3.6) and minimum and maximum values of 0.13 and 28.7, respectively. A more comprehensive summary of each variable and position is given in Print Out 3. This lists the information above as well as other summary information such as the variance, standard error, various percentiles and extreme values. Using the 2004 Downriver Chlorophyll a as an example again, the variance of this data was 459.3 and the standard error of the mean was 1.55. The median value or 50th percentile was 21.3, meaning 50% of the data fell above and below this value. It should be noted that this value is somewhat different than the mean of 25.5. This is an indication that the frequency distribution of the data is not symmetrical (skewed).
The skewness statistic, listed as part of the first section of each analysis, quantifies this. In a symmetric distribution, such as a Normal distribution, the skewness value would be 0. The tile chlorophyll data, however, shows larger values. Chlorophyll a, in the 2004 Downriver example, has a skewness statistic of 3.54, which is quite high. In the last section of the summary analysis, the stem and leaf plot graphically demonstrates the asymmetry, showing most of the data centered around 25 with a large value at 196. The final plot is referred to as a normal probability plot and graphically compares the data to a theoretical normal distribution. For chlorophyll a, the data (asterisks) deviate substantially from the theoretical normal distribution (diagonal reference line of pluses), indicating that the data is non-normal. Other response variables in both the Downriver and Upriver categories also indicated skewed distributions. Because the sample size and mean comparison procedures below require symmetrical, normally distributed data, each response in the data set was logarithmically transformed. The logarithmic transformation, in this case, can help mitigate skewness problems. The summary statistics for the four transformed responses (log-ChlorA, log-TotChlor, and log-accrual) are given in Print Out 4. For the 2004 Downriver Chlorophyll a data, the logarithmic transformation reduced the skewness value to -0.36 and produced a more bell-shaped symmetric frequency distribution. Similar improvements are shown for the remaining variables and river categories. Hence, all subsequent analyses given below are based on logarithmic transformations of the original responses.
Modeling absolute differences in life expectancy with a censored skew-normal regression approach
Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate in modeling life expectancy, because in many situations time to death has a negative skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use the skew-normal regression so that censored and left-truncated observations are accounted for. With this we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
ERIC Educational Resources Information Center
DeMars, Christine E.
2012-01-01
In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…
A log-sinh transformation for data normalization and variance stabilization
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
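The transformation itself is compact: z = (1/b) * log(sinh(a + b*y)), which behaves like a log transform for small y (stabilising rapidly growing errors) and becomes linear for large y (where error spread approaches a constant). The parameter values below are arbitrary illustrations, not fitted hydrological values.

```python
import numpy as np

# Log-sinh transformation (Wang et al., 2012) and its inverse.
# Parameters a and b control where the transform shifts from
# log-like to linear behaviour; the values here are illustrative.
def log_sinh(y, a, b):
    return np.log(np.sinh(a + b * y)) / b

def log_sinh_inverse(z, a, b):
    return (np.arcsinh(np.exp(b * z)) - a) / b

a, b = 0.1, 0.02
y = np.linspace(1.0, 200.0, 50)        # a positively valued prediction variable
z = log_sinh(y, a, b)                  # transformed values
y_back = log_sinh_inverse(z, a, b)     # exact round trip
```

Because sinh(u) ≈ u for small u and sinh(u) ≈ e^u/2 for large u, the transform compresses the lower range like a logarithm and leaves the upper range nearly untouched, matching the error pattern the abstract describes.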
Muers, Mary R; Sharpe, Jacqueline A; Garrick, David; Sloane-Stanley, Jacqueline; Nolan, Patrick M; Hacker, Terry; Wood, William G; Higgs, Douglas R; Gibbons, Richard J
2007-06-01
Extreme skewing of X-chromosome inactivation (XCI) is rare in the normal female population but is observed frequently in carriers of some X-linked mutations. Recently, it has been shown that various forms of X-linked mental retardation (XLMR) have a strong association with skewed XCI in female carriers, but the mechanisms underlying this skewing are unknown. ATR-X syndrome, caused by mutations in a ubiquitously expressed, chromatin-associated protein, provides a clear example of XLMR in which phenotypically normal female carriers virtually all have highly skewed XCI biased against the X chromosome that harbors the mutant allele. Here, we have used a mouse model to understand the processes causing skewed XCI. In female mice heterozygous for a null Atrx allele, we found that XCI is balanced early in embryogenesis but becomes skewed over the course of development, because of selection favoring cells expressing the wild-type Atrx allele. Unexpectedly, selection does not appear to be the result of general cellular-viability defects in Atrx-deficient cells, since it is restricted to specific stages of development and is not ongoing throughout the life of the animal. Instead, there is evidence that selection results from independent tissue-specific effects. This illustrates an important mechanism by which skewed XCI may occur in carriers of XLMR and provides insight into the normal role of ATRX in regulating cell fate.
The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.
Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica
2014-05-01
The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
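A quick Monte Carlo sketch shows why normal-theory intervals struggle: the product of two normal coefficients is itself skewed and heavy-tailed, especially at small critical ratios. The critical ratio of 1 used below is an illustrative choice, not a value from the article.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Product of two independent normals, each with critical ratio mean/sd = 1.
# For this case the product's theoretical skewness is 6/3**1.5 ~ 1.15,
# so a symmetric normal-theory confidence interval is a poor match.
rng = np.random.default_rng(0)
n = 200_000
a = rng.normal(1.0, 1.0, n)
b = rng.normal(1.0, 1.0, n)
ab = a * b

g1 = skew(ab)       # clearly positive: the product distribution is asymmetric
g2 = kurtosis(ab)   # positive excess kurtosis: heavier tails than the normal
```

As the critical ratios grow, both moments shrink toward normal values, which is the coverage pattern the simulation study reports.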
Differential models of twin correlations in skew for body-mass index (BMI).
Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric
2018-01-01
Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.
No evidence that skewing of X chromosome inactivation patterns is transmitted to offspring in humans
Bolduc, Véronique; Chagnon, Pierre; Provost, Sylvie; Dubé, Marie-Pierre; Belisle, Claude; Gingras, Marianne; Mollica, Luigina; Busque, Lambert
2007-01-01
Skewing of X chromosome inactivation (XCI) can occur in normal females and increases in tissues with age. The mechanisms underlying skewing in normal females, however, remain controversial. To better understand the phenomenon of XCI in nondisease states, we evaluated XCI patterns in epithelial and hematopoietic cells of over 500 healthy female mother-neonate pairs. The incidence of skewing observed in mothers was twice that observed in neonates, and in both cohorts, the incidence of XCI was lower in epithelial cells than hematopoietic cells. These results suggest that XCI incidence varies by tissue type and that age-dependent mechanisms can influence skewing in both epithelial and hematopoietic cells. In both cohorts, a correlation was identified in the direction of skewing in epithelial and hematopoietic cells, suggesting common underlying skewing mechanisms across tissues. However, there was no correlation between the XCI patterns of mothers and their respective neonates, and skewed mothers gave birth to skewed neonates at the same frequency as nonskewed mothers. Taken together, our data suggest that in humans, the XCI pattern observed at birth does not reflect a single heritable genetic locus, but rather corresponds to a complex trait determined, at least in part, by selection biases occurring after XCI. PMID:18097474
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distribution of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., log-normal- and χ(2) distribution). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
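The abstract's central claim is easy to quantify: match a log-normal and a Gaussian on mean and CV, and the maximum CDF discrepancy is negligible at sodium-like CVs but substantial at enzyme-like CVs. The specific means and CVs below are illustrative stand-ins for plasma sodium and aspartate aminotransferase.

```python
import numpy as np
from scipy.stats import norm, lognorm

# Maximum CDF distance between a normal and a log-normal that share the
# same arithmetic mean and coefficient of variation (CV).
def max_cdf_distance(mean, cv):
    sd = cv * mean
    sigma = np.sqrt(np.log(1.0 + cv**2))   # log-normal shape for this CV
    mu = np.log(mean) - 0.5 * sigma**2     # matches the arithmetic mean
    x = np.linspace(mean - 4 * sd, mean + 4 * sd, 2001)
    x = x[x > 0]
    return np.max(np.abs(norm.cdf(x, mean, sd)
                         - lognorm.cdf(x, sigma, scale=np.exp(mu))))

d_small = max_cdf_distance(140.0, 0.02)  # sodium-like: tiny biological variation
d_large = max_cdf_distance(25.0, 0.50)   # enzyme-like: large biological variation
```

This is why the authors argue that defaulting to a log-normal is safe: at small CV it is visually and numerically Gaussian, while at large CV the Gaussian assumption is the one that fails.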
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y(nu) having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE (mu,sigma,nu,tau). The parameters, mu, sigma, nu and tau, may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of the centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I 2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining robustness of traditional meta-analysis results against skewness on the observed treatment effect estimates. Further critical evaluation of the method is needed.
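The normalising step at the heart of the proposal can be previewed with a maximum-likelihood Box-Cox fit. This is only the transformation stage, not the paper's Bayesian model, and the skewed "effect estimates" below are simulated, not real meta-analysis data.

```python
import numpy as np
from scipy.stats import boxcox, skew

# Box-Cox normalisation of a skewed set of (positive) treatment-effect
# estimates; lambda is chosen by maximum likelihood. Simulated data only.
rng = np.random.default_rng(7)
effects = rng.gamma(shape=2.0, scale=0.5, size=200)  # right-skewed estimates

transformed, lam = boxcox(effects)   # lam is the ML estimate of lambda
g1_before = skew(effects)
g1_after = skew(transformed)
```

After such a transformation the random-effects machinery sees a far more symmetric distribution, and summaries can be back-transformed to medians and interquartile ranges as the abstract suggests.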
NASA Astrophysics Data System (ADS)
Csillik, O.; Evans, I. S.; Drăguţ, L.
2015-03-01
Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
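The slope-gradient part of the procedure reduces to a one-dimensional search: pick the Box-Cox exponent that minimises the absolute skewness of the transformed values. The sketch below assumes a simulated long-tailed "slope" sample rather than a real DEM, and uses a simple bounded optimiser in place of the paper's Python/ArcGIS tooling.

```python
import numpy as np
from scipy.stats import skew
from scipy.optimize import minimize_scalar

# Box-Cox exponent chosen to minimise |skewness| of the transformed sample,
# mirroring the automated slope-gradient transformation described above.
def boxcox_transform(x, lmb):
    return np.log(x) if abs(lmb) < 1e-12 else (x**lmb - 1.0) / lmb

def best_lambda(x):
    res = minimize_scalar(lambda l: abs(skew(boxcox_transform(x, l))),
                          bounds=(-2.0, 2.0), method="bounded")
    return res.x

rng = np.random.default_rng(3)
slope = rng.gamma(shape=1.5, scale=4.0, size=2000)  # long-tailed, slope-like
lmb = best_lambda(slope)
g1_raw = skew(slope)
g1_opt = skew(boxcox_transform(slope, lmb))
```

Because skewness varies continuously with the exponent, the search typically finds a transform with skewness very close to zero, which is exactly the property the paper exploits before running parametric analyses.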
Testing models of parental investment strategy and offspring size in ants.
Gilboa, Smadar; Nonacs, Peter
2006-01-01
Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.
Portfolio optimization with skewness and kurtosis
NASA Astrophysics Data System (ADS)
Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-04-01
Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model will become inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return will be reduced. The objective of this study is to compare the portfolio compositions as well as performances between the mean-variance model and mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis will change the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because the mean-variance-skewness-kurtosis model takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.
NASA Astrophysics Data System (ADS)
Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.
2009-05-01
Studying the probability density and distribution functions of electricity prices helps power suppliers and purchasers make accurate operational estimates, and helps the regulator monitor periods that deviate from the normal distribution. Assuming normally distributed load and a non-linear aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices obey a normal distribution approximately only when the supply-demand relationship is loose, whereas the prices deviate from the normal distribution and present a strong right-skewness characteristic otherwise. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
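The mechanism is easy to reproduce in a toy form: pass a normal load through a supply curve that steepens near capacity and the resulting price is right-skewed. The hyperbolic curve and all numbers below are made-up illustrations, not the paper's Zhejiang market model.

```python
import numpy as np
from scipy.stats import skew

# Normal load mapped through a convex aggregate supply curve that steepens
# as load approaches capacity: the price distribution becomes right-skewed
# even though the load itself is symmetric. Purely illustrative curve.
rng = np.random.default_rng(11)
capacity = 1000.0
load = rng.normal(800.0, 60.0, 100_000)
load = np.clip(load, 0.0, capacity * 0.999)       # cannot exceed capacity

price = 50.0 / (1.0 - load / capacity)            # steepens near capacity

g1_price = skew(price)   # positive: tight-supply hours stretch the right tail
```

When the mean load sits far below capacity (a "loose" supply-demand relationship), the same mapping is nearly linear and the price stays approximately normal, matching the paper's conclusion.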
Learning a Novel Pattern through Balanced and Skewed Input
ERIC Educational Resources Information Center
McDonough, Kim; Trofimovich, Pavel
2013-01-01
This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution is subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. In order to evaluate the impacts of skewness, positive and negative excess kurtosis, standard skew normal, Pearson and the raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations, the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or consider alternative methods that are less sensitive to non-normality.
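The source of the error is tail probability: a premium or penalty triggered at a threshold depends on P(trait > threshold), and a skewed trait with the same mean and standard deviation can put several times more (or less) mass past a distant threshold than a normal does. The skew-normal shape and two-SD threshold below are illustrative choices, not values from the study.

```python
import numpy as np
from scipy.stats import norm, skewnorm

# Right-skewed trait standardised to mean 0 and variance 1, compared with
# a standard normal at a threshold two SDs above the mean.
a = 5.0                                            # skew-normal shape parameter
delta = a / np.sqrt(1.0 + a**2)
scale = 1.0 / np.sqrt(1.0 - 2.0 * delta**2 / np.pi)   # gives unit variance
loc = -scale * delta * np.sqrt(2.0 / np.pi)           # gives zero mean

threshold = 2.0
p_normal = norm.sf(threshold)                      # under the normal assumption
p_skewed = skewnorm.sf(threshold, a, loc, scale)   # under the actual skewed trait
relative_error = (p_normal - p_skewed) / p_skewed  # large and negative here
```

The normal assumption understates the exceedance probability by roughly half in this configuration, which is the scale of EV error (over 100% in unfavourable threshold positions) that the abstract reports.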
Multiple imputation in the presence of non-normal data.
Lee, Katherine J; Carlin, John B
2017-02-20
Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
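The PMM idea the authors recommend is simple to sketch: regress the incomplete variable on its predictors, then impute each missing case with the *observed* value of the donor whose predicted value is nearest. This minimal single-donor sketch omits the parameter draws and multiple imputations of a full type-1 PMM implementation, and uses invented data.

```python
import numpy as np

# Minimal predictive mean matching sketch: single nearest donor, one
# predictor, one imputed data set (real PMM draws regression parameters
# and produces several imputations).
rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=n)
y = np.exp(1.0 + 0.8 * x + rng.normal(scale=0.5, size=n))  # skewed outcome
miss = rng.random(n) < 0.5                                  # ~50% MCAR

obs = ~miss
beta = np.polyfit(x[obs], y[obs], 1)   # linear fit on complete cases
pred = np.polyval(beta, x)             # predicted values for everyone

y_imp = y.copy()
for i in np.where(miss)[0]:
    donor = np.argmin(np.abs(pred[obs] - pred[i]))  # nearest predicted value
    y_imp[i] = y[obs][donor]                        # borrow donor's observed y
```

Because every imputed value is an actually observed value, PMM preserves the skewed shape of the variable without any transformation, which is why it performs well when no transformation is clearly right.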
Utility functions predict variance and skewness risk preferences in monkeys
Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram
2016-01-01
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
A Better Lemon Squeezer? Maximum-Likelihood Regression with Beta-Distributed Dependent Variables
ERIC Educational Resources Information Center
Smithson, Michael; Verkuilen, Jay
2006-01-01
Uncorrectable skew and heteroscedasticity are among the "lemons" of psychological data, yet many important variables naturally exhibit these properties. For scales with a lower and upper bound, a suitable candidate for models is the beta distribution, which is very flexible and models skew quite well. The authors present…
Currie, L A
2001-07-01
Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test--for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of abnormal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.
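The "fundamentally skewed" first class is simple to reproduce: for a Poisson process, inter-arrival times are exponential, with theoretical skewness 2 no matter what the count rate is. The event times below are simulated, standing in for Geiger-Muller counter arrival times.

```python
import numpy as np
from scipy.stats import skew

# Simulated Poisson-process arrival times: uniform over the observation
# window, so the gaps between successive events are (very nearly)
# exponential, whose theoretical skewness is exactly 2.
rng = np.random.default_rng(19)
arrivals = np.sort(rng.uniform(0.0, 1000.0, 50_000))
gaps = np.diff(arrivals)   # inter-arrival times

g1 = skew(gaps)   # should sit near the exponential value of 2
```

A real counting system that fails this test, as the abstract notes, is telling you something physical (dead time, correlated events) rather than statistical.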
2013-01-01
Background The biting cycle of anopheline mosquitoes is an important component in the transmission of malaria. Inter- and intraspecific biting patterns of anophelines have been investigated using the number of mosquitoes caught over time to compare general tendencies in host-seeking activity and cumulative catch. In this study, all-night biting catch data from 32 consecutive months of collections in three riverine villages were used to compare biting cycles of the five most abundant vector species using common statistics to quantify variability and deviations of nightly catches from a normal distribution. Methods Three communities were selected for study. All-night human landing catches of mosquitoes were made each month in the peridomestic environment of four houses (sites) for nine consecutive days from April 2003 to November 2005. Host-seeking activities of the five most abundant species that were previously captured infected with Plasmodium falciparum, Plasmodium malariae or Plasmodium vivax, were analysed and compared by measuring the amount of variation in numbers biting per unit time (co-efficient of variation, V), the degree to which the numbers of individuals per unit time were asymmetrical (skewness = g1) and the relative peakedness or flatness of the distribution (kurtosis = g2). To analyse variation in V, g1, and g2 within species and villages, we used mixed model nested ANOVAs (PROC GLM in SAS) with independent variables (sources of variation): year, month (year), night (year X month) and collection site (year X month). Results The biting cycles of the most abundant species, Anopheles darlingi, had the least pronounced biting peaks, the lowest mean V values, and typically non-significant departures from normality in g1 and g2. 
By contrast, the species with the most sharply defined crepuscular biting peaks, Anopheles marajoara, Anopheles nuneztovari and Anopheles triannulatus, showed high to moderate mean V values and, most commonly, significantly positive skewness (g1) and kurtosis (g2) moments. Anopheles intermedius was usually, but not always, crepuscular in host seeking, and showed moderate mean V values and typically positive skewness and kurtosis. Among sites within villages, significant differences in frequencies of departures from normality (g1 and g2) were detected for An. marajoara and An. darlingi, suggesting that local environments, such as host availability, may affect the shape of biting pattern curves of these two species. Conclusions Analyses of co-efficients of variation, skewness and kurtosis facilitated quantitative comparisons of host-seeking activity patterns that differ among species, sites, villages, and dates. The variable and heterogeneous nightly host-seeking behaviours of the five exophilic vector species contribute to the maintenance of stable malaria transmission in these Amazonian villages. The abundances of An. darlingi and An. marajoara, their propensities to seek hosts throughout the night, and their ability to adapt host-seeking behaviour to local environments, contribute to their impact as the most important of these vector species. PMID:23890413
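The three statistics the study relies on (V, g1, g2) are standard and quick to compute per night. The hourly catch counts below are invented to contrast the two biting-cycle shapes described above: a flat, all-night feeder versus a sharply crepuscular one.

```python
import numpy as np
from scipy.stats import variation, skew, kurtosis

# Hypothetical catches per hour over one night for two biting-cycle shapes.
flat_biter = np.array([9, 11, 10, 12, 10, 9, 11, 10, 12, 10, 9, 11])
peaked_biter = np.array([2, 3, 45, 60, 8, 3, 2, 1, 2, 3, 20, 5])

V_flat, V_peak = variation(flat_biter), variation(peaked_biter)  # co-efficient of variation
g1_flat, g1_peak = skew(flat_biter), skew(peaked_biter)          # skewness
g2_flat, g2_peak = kurtosis(flat_biter), kurtosis(peaked_biter)  # excess kurtosis
```

The crepuscular profile produces the high V and significantly positive g1 and g2 reported for An. marajoara and An. nuneztovari, while the flat profile mimics the near-normal moments of An. darlingi.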
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three typical normal distribution transformation methods, termed the normal score, Johnson, and Box-Cox transformations, were applied to benzo(b)fluoranthene data from a large-scale coking plant-contaminated site in north China to compare their effects on spatial interpolation. All three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.
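Two of the three transformations named above can be sketched in a few lines; the Johnson family needs a dedicated fitting routine, so only Box-Cox and the rank-based normal score method are shown, on synthetic lognormal "concentrations" (all values assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical severely right-skewed concentrations, as at a hot-spot site
conc = rng.lognormal(mean=1.0, sigma=1.2, size=200)

# Box-Cox: power transform with maximum-likelihood lambda
bc, lam = stats.boxcox(conc)

# Normal score: map ranks onto standard-normal quantiles
ranks = stats.rankdata(conc)
ns = stats.norm.ppf((ranks - 0.5) / len(conc))

for name, x in [("raw", conc), ("Box-Cox", bc), ("normal score", ns)]:
    z = (x - x.mean()) / x.std(ddof=1)
    ks = stats.kstest(z, "norm")
    print(f"{name:12s} skew = {stats.skew(x):6.2f}  KS p = {ks.pvalue:.3f}")
```

One caveat on this quick check: applying the Kolmogorov-Smirnov test to standardized data with estimated parameters is anti-conservative (the Lilliefors correction would be stricter).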
NASA Astrophysics Data System (ADS)
Biteau, J.; Giebels, B.
2012-12-01
Very high energy gamma-ray variability of blazar emission remains of puzzling origin. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that variability originates from the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favour an origin of the variability in multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
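The kinematic argument can be illustrated numerically. The sketch below (all parameters assumed: bulk Lorentz factor 10, flux ∝ δ⁴, isotropically oriented zones) sums Doppler-boosted fluxes from randomly oriented regions and shows that the total remains strongly right-skewed even though it is an additive quantity:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
gamma = 10.0                               # assumed bulk Lorentz factor
beta = np.sqrt(1.0 - 1.0 / gamma**2)

# Randomly oriented zones: cos(theta) uniform on [-1, 1]
n_zones, n_samples = 100, 20000
mu = rng.uniform(-1.0, 1.0, size=(n_samples, n_zones))
delta = 1.0 / (gamma * (1.0 - beta * mu))  # Doppler factor per zone
total = (delta**4).sum(axis=1)             # boosted flux summed over zones
log_total = np.log(total)

# The summed flux stays strongly positively skewed; its logarithm is far
# closer to symmetric, as expected for a near log-normal distribution
print("skew(flux)     =", round(float(skew(total)), 2))
print("skew(log flux) =", round(float(skew(log_total)), 2))
```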
Molenaar, Dylan; Bolsinova, Maria
2017-05-01
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and their vagueness is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership functions and probability density functions are obtained through fuzzy random simulation of past data.
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability for many materials, particularly brittle and quasibrittle materials; the coefficient of variation can reach 100% or even more. Its distribution shape is highly skewed toward zero lifetime, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
NASA Astrophysics Data System (ADS)
Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha
2017-11-01
The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industry. Conventional control charts rely on a normality assumption, which is not always satisfied by industrial data. This paper proposes a new S control chart for monitoring process dispersion using a skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S), skewness correction R chart (SC-R), weighted variance R chart (WV-R), weighted variance S chart (WV-S), and standard S chart (STD-S). Comparison with the exact S control chart with regard to the probability of out-of-control detection is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control performance (Type I error) at almost all skewness levels and sample sizes, n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than that of all the existing control charts for monitoring process dispersion in terms of both Type I error and probability of detecting a shift.
A note on `Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions'
NASA Astrophysics Data System (ADS)
Kwong, Hok Shing; Nadarajah, Saralees
2018-01-01
Tarnopolski [Monthly Notices of the Royal Astronomical Society, 458 (2016) 2024-2031] analysed data sets on gamma-ray burst durations using skew distributions. He showed that the best fits are provided by two skew normal and three Gaussian distributions. Here, we suggest other distributions, including some that are heavy tailed. At least one of these distributions is shown to provide better fits than those considered in Tarnopolski. Five criteria are used to assess best fits.
An Adaptive Method for Reducing Clock Skew in an Accumulative Z-Axis Interconnect System
NASA Technical Reports Server (NTRS)
Bolotin, Gary; Boyce, Lee
1997-01-01
This paper presents several methods for adjusting clock skew variations that occur in an accumulative z-axis interconnect system. In such a system, delay between modules is a function of their distance from one another. Clock distribution in a high-speed system, where clock skew must be kept to a minimum, becomes more challenging when module order is variable before design.
Statistical hypothesis tests of some micrometeorological observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
SethuRaman, S.; Tichler, J.
A chi-square goodness-of-fit test is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.21 were normal to begin with, and those with 0.21 …
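The Gram-Charlier A-series correction referred to above has a compact closed form for standardized data. A sketch using probabilists' Hermite polynomials and synthetic gamma-distributed "fluctuations" (data and parameters assumed for illustration):

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

def gram_charlier_pdf(x, g1, g2):
    """Gram-Charlier A-series density for standardized data with
    skewness g1 and excess kurtosis g2."""
    he3 = x**3 - 3*x             # probabilists' Hermite polynomial He3
    he4 = x**4 - 6*x**2 + 3      # probabilists' Hermite polynomial He4
    return norm.pdf(x) * (1 + g1 / 6 * he3 + g2 / 24 * he4)

# Hypothetical mildly skewed fluctuations, standardized
rng = np.random.default_rng(2)
w = rng.gamma(shape=8.0, scale=1.0, size=5000)
w = (w - w.mean()) / w.std(ddof=1)

g1, g2 = skew(w), kurtosis(w)
print(f"g1 = {g1:.2f}, g2 = {g2:.2f}")
print(f"corrected density at 0: {gram_charlier_pdf(0.0, g1, g2):.3f}")
```

With g1 = g2 = 0 the series collapses to the standard normal density, which is the sense in which it "corrects" the normal fit.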
Geostatistical interpolation of available copper in orchard soil as influenced by planting duration.
Fu, Chuancheng; Zhang, Haibo; Tu, Chen; Li, Lianzhen; Luo, Yongming
2018-01-01
Mapping the spatial distribution of available copper (A-Cu) in orchard soils is important in agriculture and environmental management. However, data on the distribution of A-Cu in orchard soils is usually highly variable and severely skewed due to the continuous input of fungicides. In this study, ordinary kriging combined with planting duration (OK_PD) is proposed as a method for improving the interpolation of soil A-Cu. Four normal distribution transformation methods, namely, the Box-Cox, Johnson, rank order, and normal score methods, were utilized prior to interpolation. A total of 317 soil samples were collected in the orchards of the Northeast Jiaodong Peninsula. Moreover, 1472 orchards were investigated to obtain a map of planting duration using Voronoi tessellations. The soil A-Cu content ranged from 0.09 to 106.05 mg kg-1, with a mean of 18.10 mg kg-1, reflecting the high availability of Cu in the soils. Soil A-Cu concentrations exhibited a moderate spatial dependency and increased significantly with increasing planting duration. All the normal transformation methods successfully decreased the skewness and kurtosis of the soil A-Cu data and the associated residuals, and also produced more robust variograms. OK_PD generated better spatial prediction accuracy than ordinary kriging (OK) for all transformation methods tested, and it also provided a more detailed map of soil A-Cu. Normal score transformation produced satisfactory accuracy and showed an advantage in ameliorating the smoothing effect of the interpolation. Thus, normal score transformation prior to kriging combined with planting duration (NSOK_PD) is recommended for the interpolation of soil A-Cu in this area.
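The normal score transformation recommended above is rank-based and exactly invertible on the data, which is what makes back-transforming kriged predictions workable. A minimal sketch (sample values simulated, not the study's data):

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_score(x):
    """Forward normal-score transform plus a back-transform table."""
    r = rankdata(x)
    z = norm.ppf((r - 0.5) / len(x))
    return z, (np.sort(z), np.sort(x))

def back_transform(z_new, table):
    zs, xs = table
    return np.interp(z_new, zs, xs)    # linear interpolation between scores

rng = np.random.default_rng(3)
a_cu = rng.lognormal(2.5, 0.9, size=317)   # hypothetical A-Cu values, mg/kg
z, table = normal_score(a_cu)
x_hat = back_transform(z, table)           # round trip recovers the data
print("max round-trip error:", float(np.max(np.abs(x_hat - a_cu))))
```

In a kriging workflow the `z` values would be interpolated spatially and the predictions passed through `back_transform`; extrapolating beyond the observed tails needs a separate tail model.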
Metric adjusted skew information
Hansen, Frank
2008-01-01
We extend the concept of Wigner–Yanase–Dyson skew information to something we call “metric adjusted skew information” (of a state with respect to a conserved observable). This “skew information” is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure-of-information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova–Chentsov functions describing the possible quantum statistics is a Bauer simplex and determine its extreme points. We determine a particularly simple skew information, the “λ-skew information,” parametrized by a λ ∈ (0, 1], and show that the convex cone this family generates coincides with the set of all metric adjusted skew informations. PMID:18635683
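The λ = 1/2 member of this family is the original Wigner-Yanase skew information, I(ρ, A) = -1/2 Tr([√ρ, A]²), which makes the bounded-by-variance and commuting-observable properties easy to check numerically. A small sketch with an assumed qubit state:

```python
import numpy as np
from scipy.linalg import sqrtm

def wy_skew_information(rho, A):
    """Wigner-Yanase skew information I(rho, A) = -1/2 Tr([sqrt(rho), A]^2)."""
    s = sqrtm(rho)
    c = s @ A - A @ s               # commutator [sqrt(rho), A]
    return float(np.real(np.trace(c @ c)) * -0.5)

def variance(rho, A):
    m = np.trace(rho @ A).real
    return float(np.trace(rho @ A @ A).real - m**2)

# Assumed qubit density matrix and observable (sigma_x)
rho = np.array([[0.8, 0.1], [0.1, 0.2]])
A = np.array([[0.0, 1.0], [1.0, 0.0]])
I = wy_skew_information(rho, A)
print(f"skew information = {I:.4f}, variance = {variance(rho, A):.4f}")
```

The quantity is non-negative, bounded by the variance, and vanishes when ρ and A commute, matching the requirements listed in the abstract.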
Diurnal patterns and associations among salivary cortisol, DHEA and alpha-amylase in older adults.
Wilcox, Rand R; Granger, Douglas A; Szanton, Sarah; Clark, Florence
2014-04-22
Cortisol and dehydroepiandrosterone (DHEA) are considered to be valuable markers of the hypothalamus-pituitary-adrenal (HPA) axis, while salivary alpha-amylase (sAA) reflects the autonomic nervous system. Past studies have found certain diurnal patterns among these biomarkers, with some studies reporting results that differ from others. Also, some past studies have found an association among these three biomarkers while other studies have not. This study investigates these patterns and associations in older adults by taking advantage of modern statistical methods for dealing with non-normality, outliers and curvature. Basic characteristics of the data are reported as well, which are relevant to understanding the nature of any patterns and associations. Boxplots were used to check on the skewness and presence of outliers, including the impact of using simple transformations for dealing with non-normality. Diurnal patterns were investigated using recent advances aimed at comparing medians. When studying associations, the initial step was to check for curvature using a non-parametric regression estimator. Based on the resulting fit, a robust regression estimator was used that is designed to deal with skewed distributions and outliers. Boxplots indicated highly skewed distributions with outliers. Simple transformations (such as taking logs) did not deal with this issue in an effective manner. Consequently, diurnal patterns were investigated using medians and found to be consistent with some previous studies but not others. A positive association between awakening cortisol levels and DHEA was found when DHEA is relatively low; otherwise no association was found. The nature of the association between cortisol and DHEA was found to change during the course of the day. Upon awakening, cortisol was found to have no association with sAA when DHEA levels are relatively low, but otherwise there is a negative association. 
DHEA was found to have a positive association with sAA upon awakening. Shortly after awakening and for the remainder of the day, no association was found between DHEA and sAA ignoring cortisol. For DHEA and cortisol (taken as the independent variables) versus sAA (the dependent variable), again an association is found only upon awakening. Copyright © 2014 Elsevier Inc. All rights reserved.
Fesharaki, Maryam; Karagiannis, Peter; Tweed, Douglas; Sharpe, James A.; Wong, Agnes M. F.
2016-01-01
Purpose Skew deviation is a vertical strabismus caused by damage to the otolithic–ocular reflex pathway and is associated with abnormal ocular torsion. This study was conducted to determine whether patients with skew deviation show the normal pattern of three-dimensional eye control called Listing’s law, which specifies the eye’s torsional angle as a function of its horizontal and vertical position. Methods Ten patients with skew deviation caused by brain stem or cerebellar lesions and nine normal control subjects were studied. Patients with diplopia and neurologic symptoms less than 1 month in duration were designated as acute (n = 4) and those with longer duration were classified as chronic (n = 10). Serial recordings were made in the four patients with acute skew deviation. With the head immobile, subjects made saccades to a target that moved between straight ahead and eight eccentric positions, while wearing search coils. At each target position, fixation was maintained for 3 seconds before the next saccade. From the eye position data, the plane of best fit, referred to as Listing’s plane, was fitted. Violations of Listing’s law were quantified by computing the “thickness” of this plane, defined as the SD of the distances to the plane from the data points. Results Both the hypertropic and hypotropic eyes in patients with acute skew deviation violated Listing’s and Donders’ laws—that is, the eyes did not show one consistent angle of torsion in any given gaze direction, but rather an abnormally wide range of torsional angles. In contrast, each eye in patients with chronic skew deviation obeyed the laws. However, in chronic skew deviation, Listing’s planes in both eyes had abnormal orientations. 
Conclusions Patients with acute skew deviation violated Listing’s law, whereas those with chronic skew deviation obeyed it, indicating that despite brain lesions, neural adaptation can restore Listing’s law so that the neural linkage between horizontal, vertical, and torsional eye position remains intact. Violation of Listing’s and Donders’ laws during fixation arises primarily from torsional drifts, indicating that patients with acute skew deviation have unstable torsional gaze holding that is independent of their horizontal–vertical eye positions. PMID:18172094
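The "thickness" measure used above to quantify violations of Listing's law can be sketched directly: fit the plane t = a + b·h + c·v to torsional, horizontal, and vertical eye positions, and take the SD of the perpendicular distances to it. All data below are simulated for illustration:

```python
import numpy as np

def listing_plane_thickness(torsion, horizontal, vertical):
    """Least-squares fit of t = a + b*h + c*v; thickness is the SD of the
    perpendicular distances from the data points to the fitted plane."""
    X = np.column_stack([np.ones_like(horizontal), horizontal, vertical])
    coef, *_ = np.linalg.lstsq(X, torsion, rcond=None)
    resid = torsion - X @ coef
    dist = resid / np.sqrt(1.0 + coef[1]**2 + coef[2]**2)
    return float(dist.std(ddof=1))

rng = np.random.default_rng(4)
h = rng.uniform(-20, 20, 300)                        # horizontal position (deg)
v = rng.uniform(-20, 20, 300)                        # vertical position (deg)
t_ctrl = 0.02*h - 0.01*v + rng.normal(0, 0.3, 300)   # thin plane (control-like)
t_acute = 0.02*h - 0.01*v + rng.normal(0, 2.5, 300)  # torsional drift (acute-like)

print("control thickness:", round(listing_plane_thickness(t_ctrl, h, v), 2))
print("acute thickness:  ", round(listing_plane_thickness(t_acute, h, v), 2))
```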
Curran, Janet H.; Barth, Nancy A.; Veilleux, Andrea G.; Ourso, Robert T.
2016-03-16
Estimates of the magnitude and frequency of floods are needed across Alaska for engineering design of transportation and water-conveyance structures, flood-insurance studies, flood-plain management, and other water-resource purposes. This report updates methods for estimating flood magnitude and frequency in Alaska and conterminous basins in Canada. Annual peak-flow data through water year 2012 were compiled from 387 streamgages on unregulated streams with at least 10 years of record. Flood-frequency estimates were computed for each streamgage using the Expected Moments Algorithm to fit a Pearson Type III distribution to the logarithms of annual peak flows. A multiple Grubbs-Beck test was used to identify potentially influential low floods in the time series of peak flows for censoring in the flood frequency analysis. For two new regional skew areas, flood-frequency estimates using station skew were computed for stations with at least 25 years of record for use in a Bayesian least-squares regression analysis to determine a regional skew value. The consideration of basin characteristics as explanatory variables for regional skew resulted in improvements in precision too small to warrant the additional model complexity, and a constant model was adopted. Regional Skew Area 1 in eastern-central Alaska had a regional skew of 0.54 and an average variance of prediction of 0.45, corresponding to an effective record length of 22 years. Regional Skew Area 2, encompassing coastal areas bordering the Gulf of Alaska, had a regional skew of 0.18 and an average variance of prediction of 0.12, corresponding to an effective record length of 59 years. Station flood-frequency estimates for study sites in regional skew areas were then recomputed using a weighted skew incorporating the station skew and regional skew.
In a new regional skew exclusion area outside the regional skew areas, the density of long-record streamgages was too sparse for regional analysis and station skew was used for all estimates. Final station flood-frequency estimates for all study streamgages are presented for the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities. Regional multiple-regression analysis was used to produce equations for estimating flood-frequency statistics from explanatory basin characteristics. Basin characteristics, including physical and climatic variables, were updated for all study streamgages using a geographical information system and geospatial source data. Screening for similar-sized nested basins eliminated hydrologically redundant sites, and screening for eligibility for analysis of explanatory variables eliminated regulated peaks, outburst peaks, and sites with indeterminate basin characteristics. An ordinary least-squares regression used flood-frequency statistics and basin characteristics for 341 streamgages (284 in Alaska and 57 in Canada) to determine the most suitable combination of basin characteristics for a flood-frequency regression model and to explore regional grouping of streamgages for explaining variability in flood-frequency statistics across the study area. The most suitable model for explaining flood frequency used drainage area and mean annual precipitation as explanatory variables for the entire study area as a region. Final regression equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability discharge in Alaska and conterminous basins in Canada were developed using a generalized least-squares regression.
The average standard error of prediction for the regression equations for the various annual exceedance probabilities ranged from 69 to 82 percent, and the pseudo-coefficient of determination (pseudo-R2) ranged from 85 to 91 percent. The regional regression equations from this study were incorporated into the U.S. Geological Survey StreamStats program for a limited area of the State, the Cook Inlet Basin. StreamStats is a national web-based geographic information system application that facilitates retrieval of streamflow statistics and associated information. StreamStats retrieves published data for gaged sites and, for user-selected ungaged sites, delineates drainage areas from topographic and hydrographic data, computes basin characteristics, and computes flood-frequency estimates using the regional regression equations.
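The station-regional weighting step works as in Bulletin 17B: each skew estimate is weighted inversely to its mean square error. A sketch using the Regional Skew Area 1 values quoted above, with an entirely hypothetical station skew and MSE:

```python
def weighted_skew(station_skew, mse_station, regional_skew, mse_regional):
    # Inverse-MSE weighting of station and regional skew (Bulletin 17B form)
    return ((mse_regional * station_skew + mse_station * regional_skew)
            / (mse_regional + mse_station))

g_regional, mse_regional = 0.54, 0.45   # Regional Skew Area 1 (from the report)
g_station, mse_station = -0.10, 0.30    # hypothetical station values
gw = weighted_skew(g_station, mse_station, g_regional, mse_regional)
print(f"weighted skew: {gw:.3f}")
```

The weighted value always lies between the two inputs and leans toward whichever estimate is more precise.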
A novel generalized normal distribution for human longevity and other negatively skewed data.
Robertson, Henry T.; Allison, David B.
2012-01-01
Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by (1) an intuitively straightforward genesis; (2) closed forms for the pdf, cdf, mode, quantile, and hazard functions; and (3) accessibility to non-statisticians, based on its close relationship to the normal distribution. PMID:22623974
Terluin, Berend; de Boer, Michiel R; de Vet, Henrica C W
2016-01-01
The network approach to psychopathology conceives mental disorders as sets of symptoms causally impacting on each other. The strengths of the connections between symptoms are key elements in the description of those symptom networks. Typically, the connections are analysed as linear associations (i.e., correlations or regression coefficients). However, there is insufficient awareness of the fact that differences in variance may account for differences in connection strength. Differences in variance frequently occur when subgroups are based on skewed data. An illustrative example is a study published in PLoS One (2013;8(3):e59559) that aimed to test the hypothesis that the development of psychopathology through "staging" was characterized by increasing connection strength between mental states. Three mental states (negative affect, positive affect, and paranoia) were studied in severity subgroups of a general population sample. The connection strength was found to increase with increasing severity in six of nine models. However, the method used (linear mixed modelling) is not suitable for skewed data. We reanalysed the data using inverse Gaussian generalized linear mixed modelling, a method suited for positively skewed data (such as symptoms in the general population). The distribution of positive affect was normal, but the distributions of negative affect and paranoia were heavily skewed. The variance of the skewed variables increased with increasing severity. Reanalysis of the data did not confirm increasing connection strength, except for one of nine models. Reanalysis of the data did not provide convincing evidence in support of staging as characterized by increasing connection strength between mental states. Network researchers should be aware that differences in connection strength between symptoms may be caused by differences in variances, in which case they should not be interpreted as differences in impact of one symptom on another symptom.
McFadden, J P; Thyssen, J P; Basketter, D A; Puangpet, P; Kimber, I
2015-03-01
During the last 50 years there has been a significant increase in atopic disease and associated allergy in Western societies. The balance between functional subpopulations of T helper cells (Th) determines the quality of the immune response provoked by antigen. One such subpopulation - Th2 cells - is associated with the production of IgE antibody and atopic allergy, whereas Th1 cells antagonize IgE responses and the development of allergic disease. In seeking to provide a mechanistic basis for this increased prevalence of allergic disease, one proposal has been the 'hygiene hypothesis', which argues that in Westernized societies reduced exposure during early childhood to pathogenic microorganisms favours the development of atopic allergy. Pregnancy is normally associated with Th2 skewing, which persists for some months in the neonate before Th1/Th2 realignment occurs. In this review, we consider the immunophysiology of Th2 immune skewing during pregnancy. In particular, we explore the possibility that altered and increased patterns of exposure to certain chemicals have served to accentuate this normal Th2 skewing and therefore further promote the persistence of a Th2 bias in neonates. Furthermore, we propose that the more marked Th2 skewing observed in first pregnancy may, at least in part, explain the higher prevalence of atopic disease and allergy in the first born. © 2014 British Association of Dermatologists.
Determining Directional Dependency in Causal Associations
Pornprasertmanit, Sunthud; Little, Todd D.
2014-01-01
Directional dependency is a method to determine the likely causal direction of effect between two variables. This article aims to critique and improve upon the use of directional dependency as a technique to infer causal associations. We comment on several issues raised by von Eye and DeShon (2012), including: encouraging the use of the signs of skewness and excessive kurtosis of both variables, discouraging the use of D'Agostino's K2, and encouraging the use of directional dependency to compare variables only within time points. We offer improved steps for determining directional dependency that fix the problems we note. Next, we discuss how to integrate directional dependency into longitudinal data analysis with two variables. We also examine the accuracy of directional dependency evaluations when several regression assumptions are violated. Directional dependency can suggest the direction of a relation if (a) the regression error in the population is normal, (b) any unobserved explanatory variable correlates with the observed variables at .2 or less, (c) a curvilinear relation between the two variables is not strong (standardized regression coefficient ≤ .2), (d) there are no bivariate outliers, and (e) both variables are continuous. PMID:24683282
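The skewness-based reasoning behind directional dependency is compact: if y = bx + e with skewed x and normal error, the standardized skewness attenuates by the cube of the correlation, so the putative cause retains the larger |skewness|. A simulated check (model and parameters assumed for illustration):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(6)
# True data-generating model: x causes y, x skewed, error normal
x = rng.exponential(1.0, 5000)
y = 0.8 * x + rng.normal(0.0, 0.6, 5000)

sx, sy = skew(x), skew(y)
r = np.corrcoef(x, y)[0, 1]
# Under x -> y with normal error: skew(y) is approximately r**3 * skew(x)
print(f"skew(x) = {sx:.2f}, skew(y) = {sy:.2f}, r^3*skew(x) = {r**3 * sx:.2f}")
```

In the wrong causal direction the cubed-correlation relation fails, which is what lets the skewness of the two variables hint at the causal ordering, subject to assumptions (a)-(e) above.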
Electronic skewing circuit monitors exact position of object underwater
NASA Technical Reports Server (NTRS)
Roller, R.; Yaroshuk, N.
1967-01-01
A Linear Variable Differential Transformer (LVDT) electronic skewing circuit guides a long cylindrical capsule underwater into a larger tube so that it does not contact the tube wall. This device detects movement of the capsule from a reference point and provides a continuous signal that is monitored on an oscilloscope.
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and the center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
ERIC Educational Resources Information Center
Zimmerman, Donald W.
2011-01-01
This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
Lo, Kenneth; Gottardo, Raphael
2012-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
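The Box-Cox step of the proposal can be illustrated independently of the mixture machinery; a hedged sketch with simulated right-skewed data, using scipy's single-variable `boxcox` rather than the authors' multivariate EM procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.8, size=2000)   # right-skewed positive data

skew_before = stats.skew(x)
z, lam = stats.boxcox(x)      # maximum-likelihood choice of the power parameter
skew_after = stats.skew(z)    # near zero: the transformed data is near-symmetric
```

For log-normal data the estimated power parameter sits near zero, i.e. the transformation approaches the log, and the residual skewness essentially vanishes.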
Inference of median difference based on the Box-Cox model in randomized clinical trials.
Maruo, K; Isogawa, N; Gosho, M
2015-05-10
In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
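One way to sketch the paper's goal, under the simplifying assumption that a single Box-Cox parameter is estimated from the pooled sample: a monotone transformation preserves medians, so group medians estimated on the near-normal transformed scale can be mapped back to the original scale. This is an illustration only, not the authors' covariance-adjusted inference procedure:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
control = rng.lognormal(0.0, 0.6, 500)
treated = rng.lognormal(0.3, 0.6, 500)    # location-shifted on the log scale

pooled = np.concatenate([control, treated])
z, lam = stats.boxcox(pooled)             # one transformation for both groups
z_control, z_treated = z[:500], z[500:]

# Medians are preserved by monotone transformations, so group medians found
# on the transformed scale map back to the original measurement scale.
median_control = inv_boxcox(np.median(z_control), lam)
median_treated = inv_boxcox(np.median(z_treated), lam)
median_diff = median_treated - median_control   # effect on the original scale
```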
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
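A hedged sketch of the kind of two-stage simulation described: a one-compartment model with first-order absorption and log-normal between-subject parameters. All numerical settings here are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t = np.linspace(0.0, 72.0, 721)                  # sampling times (h)
dose, vol = 100.0, 10.0                          # illustrative dose and volume

# Stage 1, between-subject variability: log-normal rate constants.
ka = rng.lognormal(np.log(1.0), 0.3, n)[:, None]   # absorption
ke = rng.lognormal(np.log(0.1), 0.3, n)[:, None]   # elimination

# Stage 2, one-compartment model with first-order absorption.
conc = (dose / vol) * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

dt = t[1] - t[0]                                 # trapezoidal AUC per subject
auc = dt * (conc[:, :-1] + conc[:, 1:]).sum(axis=1) / 2.0
log_auc, log_cmax = np.log(auc), np.log(conc.max(axis=1))
```

The distributions of `log_auc` and `log_cmax` across simulated subjects can then be inspected for tail weight and skewness as the paper does.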
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto
2013-08-01
In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
Jacobson, Lisa A.; Ryan, Matthew; Denckla, Martha B.; Mostofsky, Stewart H.; Mahone, E. Mark
2013-01-01
Children with Attention-Deficit/Hyperactivity Disorder (ADHD) demonstrate increased response variability compared with controls, which is thought to be associated with deficits in attention regulation and response control that subsequently affect performance of more cognitively demanding tasks, such as reading. The present study examined response variability during a computerized simple reaction time (RT) task in 67 children. Ex-Gaussian analyses separated the response time distribution into normal (mu and sigma) and exponential (tau) components; the association of each with reading fluency was examined. Children with ADHD had significantly slower, more variable, and more skewed RTs compared with controls. After controlling for ADHD symptom severity, tau (but not mu or mean RT) was significantly associated with reduced reading fluency, but not with single word reading accuracy. These data support the growing evidence that RT variability, but not simply slower mean response speed, is the characteristic of youth with ADHD and that longer response time latencies (tau) may be implicated in the poorer academic performance associated with ADHD. PMID:23838684
Jacobson, Lisa A; Ryan, Matthew; Denckla, Martha B; Mostofsky, Stewart H; Mahone, E Mark
2013-11-01
Children with Attention-Deficit/Hyperactivity Disorder (ADHD) demonstrate increased response variability compared with controls, which is thought to be associated with deficits in attention regulation and response control that subsequently affect performance of more cognitively demanding tasks, such as reading. The present study examined response variability during a computerized simple reaction time (RT) task in 67 children. Ex-Gaussian analyses separated the response time distribution into normal (mu and sigma) and exponential (tau) components; the association of each with reading fluency was examined. Children with ADHD had significantly slower, more variable, and more skewed RTs compared with controls. After controlling for ADHD symptom severity, tau (but not mu or mean RT) was significantly associated with reduced reading fluency, but not with single word reading accuracy. These data support the growing evidence that RT variability, but not simply slower mean response speed, is the characteristic of youth with ADHD and that longer response time latencies (tau) may be implicated in the poorer academic performance associated with ADHD.
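The ex-Gaussian decomposition into normal (mu, sigma) and exponential (tau) components can be illustrated with a simple method-of-moments recovery from simulated RTs. The parameter values are illustrative, and published analyses typically use maximum-likelihood fitting rather than moments:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, tau = 500.0, 100.0, 200.0      # illustrative RT parameters (ms)
rt = rng.normal(mu, sigma, 50_000) + rng.exponential(tau, 50_000)

# Method-of-moments recovery: for the ex-Gaussian the third central moment
# equals 2*tau**3, so tau can be read off the skewness of the RT distribution.
m, v = rt.mean(), rt.var()
m3 = ((rt - m) ** 3).mean()
tau_hat = (m3 / 2.0) ** (1.0 / 3.0)
mu_hat = m - tau_hat
sigma_hat = np.sqrt(max(v - tau_hat ** 2, 0.0))   # guard small-sample failures
```

The exponential tail carries all of the skewness, which is why tau, rather than mean RT, tracks the long response latencies discussed in the abstract.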
Skewness in large-scale structure and non-Gaussian initial conditions
NASA Technical Reports Server (NTRS)
Fry, J. N.; Scherrer, Robert J.
1994-01-01
We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.
NASA Astrophysics Data System (ADS)
Zhang, H.; Harter, T.; Sivakumar, B.
2005-12-01
Facies-based geostatistical models have become important tools for the stochastic analysis of flow and transport processes in heterogeneous aquifers. However, little is known about the dependency of these processes on the parameters of facies-based geostatistical models. This study examines the nonpoint source solute transport normal to the major bedding plane in the presence of interconnected high conductivity (coarse-textured) facies in the aquifer medium and the dependence of the transport behavior upon the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute travel time probability distribution functions (pdfs) for solute flux from the water table to the bottom boundary (production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models with horizontal to vertical facies mean length anisotropy ratios, ek, from 25:1 to 300:1, and with a wide range of facies volume proportions (e.g., from 5% to 95% coarse-textured facies). Predictions of travel time pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer, the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and - to a lesser degree - the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, travel time pdfs are not log-normally distributed as is often assumed. Also, macrodispersive behavior (variance of the travel time pdf) was found to not be a unique function of the conductivity variance. The skewness of the travel time pdf varied from negatively skewed to strongly positively skewed within the parameter range examined.
We also show that the Markov chain approach may give significantly different travel time pdfs when compared to the more commonly used Gaussian random field approach even though the first and second order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport.
Viggiano, Emanuela; Ergoli, Manuela; Picillo, Esther; Politano, Luisa
2016-07-01
Duchenne and Becker dystrophinopathies (DMD and BMD) are X-linked recessive disorders caused by mutations in the dystrophin gene that lead to absent or reduced expression of dystrophin in both skeletal and heart muscles. DMD/BMD female carriers are usually asymptomatic, although about 8 % may exhibit muscle or cardiac symptoms. Several mechanisms leading to a reduced dystrophin have been hypothesized to explain the clinical manifestations and, in particular, the role of the skewed XCI is questioned. In this review, the mechanism of XCI and its involvement in the phenotype of BMD/DMD carriers with either a normal karyotype or with X;autosome translocations with breakpoints at Xp21 (locus of the DMD gene) will be analyzed. We have previously observed that DMD carriers with moderate/severe muscle involvement exhibit a moderate or extremely skewed XCI, in particular if presenting with an early onset of symptoms, while DMD carriers with mild muscle involvement present a random XCI. Moreover, we found that among 87.1 % of the carriers with X;autosome translocations involving the locus Xp21 who developed signs and symptoms of dystrophinopathy such as proximal muscle weakness, difficulty to run, jump and climb stairs, 95.2 % had a skewed XCI pattern in lymphocytes. These data support the hypothesis that skewed XCI is involved in the onset of phenotype in DMD carriers, the X chromosome carrying the normal DMD gene being preferentially inactivated and leading to a moderate-severe muscle involvement.
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
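For reference, the default Fisher z' interval that the adjusted methods are compared against can be sketched as follows; the paper's skewness/kurtosis adjustment itself is not reproduced here:

```python
import numpy as np

def fisher_ci(r, n, z_crit=1.959964):
    """Default 95% CI for a Pearson correlation via Fisher's z' transform."""
    z = np.arctanh(r)                # Fisher z' of the sample correlation
    se = 1.0 / np.sqrt(n - 3)        # its asymptotic standard error
    lo, hi = z - z_crit * se, z + z_crit * se
    return np.tanh(lo), np.tanh(hi)  # back-transform to the correlation scale

lo, hi = fisher_ci(r=0.5, n=100)
```

Under bivariate normality this interval is well calibrated; the abstract's point is that the standard error `1/sqrt(n-3)` is no longer adequate when normality fails.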
Clark, James E; Osborne, Jason W; Gallagher, Peter; Watson, Stuart
2016-07-01
Neuroendocrine data are typically positively skewed and rarely conform to the expectations of a Gaussian distribution. This can be a problem when attempting to analyse results within the framework of the general linear model, which relies on the assumption that residuals in the data are normally distributed. One frequently used method for handling violations of this assumption is to transform variables to bring residuals into closer alignment with assumptions (as residuals are not directly manipulated). This is often attempted through ad hoc traditional transformations such as square root, log and inverse. However, Box and Cox observed that these are all special cases of power transformations and proposed a more flexible method of transformation for researchers to optimise alignment with assumptions. The goal of this paper is to demonstrate the benefits of the infinitely flexible Box-Cox transformation on neuroendocrine data using syntax in SPSS. When applied to positively skewed data typical of neuroendocrine data, the majority (~2/3) of cases were brought into strict alignment with the Gaussian distribution (i.e. a non-significant Shapiro-Wilks test). Those unable to meet this challenge showed substantial improvement in distributional properties. The biggest challenge was distributions with a high ratio of kurtosis to skewness. We discuss how these cases might be handled, and we highlight some of the broader issues associated with transformation. Copyright © 2016 John Wiley & Sons, Ltd.
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be the substance abuse/dependence symptoms data for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions including skew- t and skew-normal distributions (Part II). The proposed method is illustrated with an alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
Silva, Mauricio Rocha e
2011-01-01
OBJECTIVE: Impact Factors (IF) are widely used surrogates to evaluate single articles, in spite of known shortcomings imposed by cite distribution skewness. We quantify this asymmetry and propose a simple computer-based procedure for evaluating individual articles. METHOD: (a) Analysis of symmetry. Journals clustered around nine Impact Factor points were selected from the medical “Subject Categories” in Journal Citation Reports 2010. Citable items published in 2008 were retrieved and ranked by granted citations over the Jan/2008 - Jun/2011 period. Frequency distribution of cites, normalized cumulative cites and absolute cites/decile were determined for each journal cluster. (b) Positive Predictive Value. Three arbitrarily established evaluation classes were generated: LOW (1.3≤IF<2.6); MID: (2.6≤IF<3.9); HIGH: (IF≥3.9). Positive Predictive Value for journal clusters within each class range was estimated. (c) Continuously Variable Rating. An alternative evaluation procedure is proposed to allow the rating of individually published articles in comparison to all articles published in the same journal within the same year of publication. The general guiding lines for the construction of a totally dedicated software program are delineated. RESULTS AND CONCLUSIONS: Skewness followed the Pareto Distribution for (1
Pouch, Alison M; Vergnat, Mathieu; McGarvey, Jeremy R; Ferrari, Giovanni; Jackson, Benjamin M; Sehgal, Chandra M; Yushkevich, Paul A; Gorman, Robert C; Gorman, Joseph H
2014-01-01
The basis of mitral annuloplasty ring design has progressed from qualitative surgical intuition to experimental and theoretical analysis of annular geometry with quantitative imaging techniques. In this work, we present an automated three-dimensional (3D) echocardiographic image analysis method that can be used to statistically assess variability in normal mitral annular geometry to support advancement in annuloplasty ring design. Three-dimensional patient-specific models of the mitral annulus were automatically generated from 3D echocardiographic images acquired from subjects with normal mitral valve structure and function. Geometric annular measurements including annular circumference, annular height, septolateral diameter, intercommissural width, and the annular height to intercommissural width ratio were automatically calculated. A mean 3D annular contour was computed, and principal component analysis was used to evaluate variability in normal annular shape. The following mean ± standard deviations were obtained from 3D echocardiographic image analysis: annular circumference, 107.0 ± 14.6 mm; annular height, 7.6 ± 2.8 mm; septolateral diameter, 28.5 ± 3.7 mm; intercommissural width, 33.0 ± 5.3 mm; and annular height to intercommissural width ratio, 22.7% ± 6.9%. Principal component analysis indicated that shape variability was primarily related to overall annular size, with more subtle variation in the skewness and height of the anterior annular peak, independent of annular diameter. Patient-specific 3D echocardiographic-based modeling of the human mitral valve enables statistical analysis of physiologically normal mitral annular geometry. The tool can potentially lead to the development of a new generation of annuloplasty rings that restore the diseased mitral valve annulus back to a truly normal geometry. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
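The principal component step can be sketched on synthetic stand-in data: contours flattened to rows, centered, and decomposed with the SVD. The saddle-shaped ring and the size-dominated variation are assumptions for illustration, not the study's echocardiographic data:

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
# Hypothetical stand-in for aligned annular contours: a saddle-shaped ring,
# flattened to one row of (x, y, z) landmark coordinates per subject.
base = np.column_stack([np.cos(theta), np.sin(theta),
                        0.2 * np.sin(2.0 * theta)]).ravel()
scale = rng.normal(1.0, 0.1, 30)          # dominant mode: overall annular size
X = scale[:, None] * base[None, :] + rng.normal(0.0, 0.01, (30, base.size))

Xc = X - X.mean(axis=0)                   # center, then PCA via the SVD
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)       # variance fraction per shape mode
```

As in the study, when overall size drives the variation, the first mode absorbs most of the variance and the remaining modes capture subtler shape changes.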
Marko, Nicholas F.; Weil, Robert J.
2012-01-01
Introduction Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
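A central moments analysis of the kind described reduces to four sample quantities per gene or per experiment; a minimal sketch:

```python
import numpy as np

def central_moments_profile(x):
    """Mean, variance, standardized skewness and excess kurtosis of a sample."""
    m = x.mean()
    d = x - m
    var = (d ** 2).mean()
    skew = (d ** 3).mean() / var ** 1.5
    excess_kurt = (d ** 4).mean() / var ** 2 - 3.0   # zero for a normal law
    return m, var, skew, excess_kurt

rng = np.random.default_rng(5)
gaussian = rng.normal(0.0, 1.0, 100_000)
heavy_tailed = rng.standard_t(df=5, size=100_000)    # symmetric, heavy tails
```

Statistically significant nonzero skewness or excess kurtosis, as reported in the abstract, flags exactly the kind of heavy-tailed departure that `heavy_tailed` exhibits here.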
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Choosing the Best Correction Formula for the Pearson r[superscript 2] Effect Size
ERIC Educational Resources Information Center
Skidmore, Susan Troncoso; Thompson, Bruce
2011-01-01
In the present Monte Carlo simulation study, the authors compared bias and precision of 7 sampling error corrections to the Pearson r[superscript 2] under 6 x 3 x 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9, respectively; population shapes normal, skewness = kurtosis = 1, and skewness = -1.5 with kurtosis =…
Suppression of the Near Wall Burst Process of a Fully Developed Turbulent Pipe Flow
1993-05-01
[Extraction fragment from the report's list of figures: velocity fluctuation skewness and kurtosis levels in a tunnel turbulent boundary layer; quadrant contributions (quadrants 1-2 and 3-4) to the uv level, normalized by the undisturbed total uv level and u*; spanwise development of uw and radial velocity skewness levels, normalized with u*.]
ERIC Educational Resources Information Center
Ho, Andrew D.; Yu, Carol C.
2015-01-01
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…
Onset of normal and inverse homoclinic bifurcation in a double plasma system near a plasma fireball
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitra, Vramori; Sarma, Bornali; Sarma, Arun
Plasma fireballs are generated due to a localized discharge and appear as a luminous glow with a sharp boundary, which suggests the presence of a localized electric field such as an electrical sheath or double layer structure. The present work reports the observation of normal and inverse homoclinic bifurcation phenomena in plasma oscillations that are excited in the presence of a fireball in a double plasma device. The controlling parameters for these observations are the ratio of target to source chamber densities (n{sub T}/n{sub S}) and the applied electrode voltage. Homoclinic bifurcation is noticed in the plasma potential fluctuations as the system evolves from narrow to long time period oscillations and vice versa with the change of control parameter. The dynamical transition in the plasma fireball is demonstrated by spectral analysis, recurrence quantification analysis (RQA), and statistical measures, viz., skewness and kurtosis. The increasing trend of the normalized variance reflects that enhancing n{sub T}/n{sub S} induces irregularity in the plasma dynamics. The exponential growth of the time period is strongly indicative of homoclinic bifurcation in the system. The gradual decrease of skewness and increase of kurtosis with the increase of n{sub T}/n{sub S} also reflect growing complexity in the system. The visual change of the recurrence plot and the gradual enhancement of the RQA variables DET, L{sub max}, and ENT reflect the bifurcation behavior in the dynamics. The combination of RQA and spectral analysis provides clear evidence that homoclinic bifurcation occurs due to the presence of the plasma fireball with different density ratios. However, inverse bifurcation takes place due to the change of fireball voltage. Some of the features observed in the experiment are consistent with a model that describes the dynamics of ionization instabilities.
Analysis of the labor productivity of enterprises via quantile regression
NASA Astrophysics Data System (ADS)
Türkan, Semra
2017-07-01
In this study, we have analyzed the factors that affect the performance of Turkey's Top 500 Industrial Enterprises using quantile regression. The variable describing the labor productivity of enterprises is taken as the dependent variable, and the variable describing assets as the independent variable. The distribution of labor productivity of enterprises is right-skewed. If the dependent distribution is skewed, linear regression cannot capture important aspects of the relationships between the dependent variable and its predictors, because it models only the conditional mean. Hence quantile regression, which allows modeling any quantile of the dependent distribution, including the median, appears to be useful. It examines whether relationships between dependent and independent variables are different for low, medium, and high percentiles. As a result of analyzing the data, the effect of total assets is relatively constant over the entire distribution, except in the upper tail, where it has a moderately stronger effect.
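Quantile regression minimizes the check (pinball) loss rather than squared error; a dependency-light sketch using scipy's optimizer on simulated right-skewed data. The variable names and data-generating process are illustrative, and real analyses would typically use a dedicated implementation such as statsmodels' QuantReg:

```python
import numpy as np
from scipy.optimize import minimize

def pinball(beta, X, y, q):
    """Check (pinball) loss, the objective minimized by quantile regression."""
    r = y - X @ beta
    return np.mean(np.where(r >= 0, q * r, (q - 1.0) * r))

def quantreg(x, y, q):
    X = np.column_stack([np.ones_like(y), x])
    b0 = np.linalg.lstsq(X, y, rcond=None)[0]        # least-squares start
    res = minimize(pinball, b0, args=(X, y, q), method="Nelder-Mead",
                   options={"maxiter": 10_000, "xatol": 1e-8, "fatol": 1e-10})
    return res.x                                     # [intercept, slope]

rng = np.random.default_rng(6)
assets = rng.lognormal(1.0, 0.5, 400)
# Right-skewed productivity: a linear effect plus log-normal noise.
productivity = 1.0 + 0.5 * assets + rng.lognormal(0.0, 0.6, 400)

b10, b50, b90 = (quantreg(assets, productivity, q) for q in (0.1, 0.5, 0.9))
```

Fitting several quantiles side by side is what lets the study compare the effect of assets at low, medium, and high productivity levels.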
Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai
2017-10-01
Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known of the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74 % of univariate distributions and 68 % of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting type I error rates were 17 % in a t-test and 30 % in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis by SAS, SPSS, R and a newly developed Web application.
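The multivariate measures referred to are commonly Mardia's skewness and kurtosis; a minimal numpy sketch (for a p-variate normal sample, Mardia's kurtosis is close to p*(p+2)):

```python
import numpy as np

def mardia(X):
    """Mardia's multivariate skewness (b1p) and kurtosis (b2p)."""
    n, p = X.shape
    d = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    G = d @ S_inv @ d.T                 # matrix of Mahalanobis cross-products
    b1 = (G ** 3).sum() / n ** 2        # near zero under multivariate normality
    b2 = (np.diag(G) ** 2).mean()       # approximately p*(p+2) under normality
    return b1, b2

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 3))          # p = 3, so b2 should be near 15
b1, b2 = mardia(X)
```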
Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D
2017-01-01
If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
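A hedged sketch of random-effects pooling on the recommended logit scale, using the DerSimonian-Laird estimator and made-up per-practice values; the delta method supplies the logit-scale variances:

```python
import numpy as np

def dersimonian_laird(theta, var):
    """Random-effects meta-analysis (DerSimonian-Laird) of study estimates."""
    w = 1.0 / var
    theta_fix = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - theta_fix) ** 2)          # heterogeneity statistic
    k = len(theta)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)
    pooled = np.sum(w_re * theta) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical per-study C-statistics, pooled on the logit scale as recommended.
c = np.array([0.72, 0.78, 0.69, 0.81, 0.75])
se_c = np.array([0.02, 0.03, 0.025, 0.02, 0.03])
logit_c = np.log(c / (1 - c))
var_logit = (se_c / (c * (1 - c))) ** 2   # delta-method variance on logit scale
pooled_logit, se_p, tau2 = dersimonian_laird(logit_c, var_logit)
pooled_c = 1.0 / (1.0 + np.exp(-pooled_logit))        # back-transformed summary
```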
ERIC Educational Resources Information Center
Molenaar, Dylan; Dolan, Conor V.; de Boeck, Paul
2012-01-01
The Graded Response Model (GRM; Samejima, "Estimation of ability using a response pattern of graded scores," Psychometric Monograph No. 17, Richmond, VA: The Psychometric Society, 1969) can be derived by assuming a linear regression of a continuous variable, Z, on the trait, [theta], to underlie the ordinal item scores (Takane & de Leeuw in…
Parametric modelling of cost data in medical studies.
Nixon, R M; Thompson, S G
2004-04-30
The cost of medical resources used is often recorded for each patient in clinical studies in order to inform decision-making. Although cost data are generally skewed to the right, interest is in making inferences about the population mean cost. Common methods for non-normal data, such as data transformation, assuming asymptotic normality of the sample mean or non-parametric bootstrapping, are not ideal. This paper describes possible parametric models for analysing cost data. Four example data sets are considered, which have different sample sizes and degrees of skewness. Normal, gamma, log-normal, and log-logistic distributions are fitted, together with three-parameter versions of the latter three distributions. Maximum likelihood estimates of the population mean are found; confidence intervals are derived by a parametric BC(a) bootstrap and checked by MCMC methods. Differences between model fits and inferences are explored. Skewed parametric distributions fit cost data better than the normal distribution, and should in principle be preferred for estimating the population mean cost. However, for some data sets, we find that models that fit badly can give similar inferences to those that fit well. Conversely, particularly when sample sizes are not large, different parametric models that fit the data equally well can lead to substantially different inferences. We conclude that inferences are sensitive to the choice of statistical model, which itself can remain uncertain unless there is enough data to model the tail of the distribution accurately. Investigating the sensitivity of conclusions to the choice of model should thus be an essential component of analysing cost data in practice. Copyright 2004 John Wiley & Sons, Ltd.
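As a rough illustration of the parametric approach (simulated log-normal costs and a plain ML fit; not the paper's full BC(a) bootstrap or MCMC machinery):

```python
import math
import random

random.seed(1)

# Hypothetical right-skewed cost data; a log-normal is one of the candidate
# parametric models the paper fits (values here are simulated, not the paper's)
costs = [random.lognormvariate(7.0, 1.0) for _ in range(500)]

# Non-parametric estimate: the sample mean
mean_np = sum(costs) / len(costs)

# Parametric log-normal estimate: ML fit of (mu, sigma^2) on the log scale,
# then E[X] = exp(mu + sigma^2 / 2)
logs = [math.log(c) for c in costs]
mu = sum(logs) / len(logs)
sigma2 = sum((v - mu) ** 2 for v in logs) / len(logs)
mean_ln = math.exp(mu + sigma2 / 2.0)
```

When the log-normal model is correct the two estimates agree closely; the paper's point is that under model uncertainty they (and other parametric fits) can diverge.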
Anorexia Nervosa: Analysis of Trabecular Texture with CT
Tabari, Azadeh; Torriani, Martin; Miller, Karen K.; Klibanski, Anne; Kalra, Mannudeep K.
2017-01-01
Purpose To determine indexes of skeletal integrity by using computed tomographic (CT) trabecular texture analysis of the lumbar spine in patients with anorexia nervosa and normal-weight control subjects and to determine body composition predictors of trabecular texture. Materials and Methods This cross-sectional study was approved by the institutional review board and compliant with HIPAA. Written informed consent was obtained. The study included 30 women with anorexia nervosa (mean age ± standard deviation, 26 years ± 6) and 30 normal-weight age-matched women (control group). All participants underwent low-dose single-section quantitative CT of the L4 vertebral body with use of a calibration phantom. Trabecular texture analysis was performed by using software. Skewness (asymmetry of gray-level pixel distribution), kurtosis (pointiness of pixel distribution), entropy (inhomogeneity of pixel distribution), and mean value of positive pixels (MPP) were assessed. Bone mineral density and abdominal fat and paraspinal muscle areas were quantified with quantitative CT. Women with anorexia nervosa and normal-weight control subjects were compared by using the Student t test. Linear regression analyses were performed to determine associations between trabecular texture and body composition. Results Women with anorexia nervosa had higher skewness and kurtosis, lower MPP (P < .001), and a trend toward lower entropy (P = .07) compared with control subjects. Bone mineral density, abdominal fat area, and paraspinal muscle area were inversely associated with skewness and kurtosis and positively associated with MPP and entropy. Texture parameters, but not bone mineral density, were associated with lowest lifetime weight and duration of amenorrhea in anorexia nervosa. Conclusion Patients with anorexia nervosa had increased skewness and kurtosis and decreased entropy and MPP compared with normal-weight control subjects. 
These parameters were associated with lowest lifetime weight and duration of amenorrhea, but there were no such associations with bone mineral density. These findings suggest that trabecular texture analysis might contribute information about bone health in anorexia nervosa that is independent of that provided with bone mineral density. © RSNA, 2016 PMID:27797678
Hoentjen, Frank; Hanauer, Stephen B; de Boer, Nanne K; Rubin, David T
2012-01-01
Thiopurine therapy effectively maintains remission in inflammatory bowel disease. However, many patients are unable to achieve optimum benefits from azathioprine or 6-mercaptopurine because of undesirable metabolism related to high thiopurine methyltransferase (TPMT) activity, characterized by hepatic transaminitis secondary to increased 6-methylmercaptopurine (6-MMP) production and reduced levels of therapeutic 6-thioguanine nucleotide (6-TGN). Allopurinol can optimize this skewed metabolism. We discuss two brothers who were both diagnosed with ulcerative colitis (UC). Their disease remained active despite oral and topical mesalamines. Steroids followed by 6-mercaptopurine (MP) were unsuccessfully introduced for both patients, and both were found to have high 6-MMP and low 6-TGN levels, despite normal TPMT enzyme activity, accompanied by transaminitis. Allopurinol was introduced in combination with MP dose reduction. For both brothers, the addition of allopurinol was associated with successful remission and optimized MP metabolites. These siblings with active UC illustrate that skewed thiopurine metabolism may occur despite normal TPMT enzyme activity and can lead to adverse events in the absence of disease control. We confirm previous data showing that the addition of allopurinol can reverse this skewed metabolism and reduce both hepatotoxicity and disease activity, but we now also introduce the concept of a family history of preferential MP metabolism as a clue to effective management for other family members.
Explorations in statistics: the log transformation.
Curran-Everett, Douglas
2018-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
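The effect described, a log transformation pulling a right-skewed sample toward symmetry, can be checked with a minimal sketch (simulated data; the skewness estimator here is the simple moment ratio):

```python
import math
import random

def skewness(xs):
    """Simple moment-ratio estimate of sample skewness."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / s2 ** 1.5

random.seed(2)
# Simulated right-skewed observations (log-normal), then their logs
raw = [random.lognormvariate(0.0, 0.8) for _ in range(2000)]
logged = [math.log(x) for x in raw]
# skewness(raw) is strongly positive; skewness(logged) is near zero
```

The Box-Cox and bootstrap checks the installment describes are ways of confirming that such a transformation was actually useful for a given data set.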
Off-axis impact of unidirectional composites with cracks: Dynamic stress intensification
NASA Technical Reports Server (NTRS)
Sih, G. C.; Chen, E. P.
1979-01-01
The dynamic response of unidirectional composites under off axis (angle loading) impact is analyzed by assuming that the composite contains an initial flaw in the matrix material. The analytical method utilizes Fourier transform for the space variable and Laplace transform for the time variable. The off axis impact is separated into two parts, one being symmetric and the other skew-symmetric with reference to the crack plane. Transient boundary conditions of normal and shear tractions are applied to a crack embedded in the matrix of the unidirectional composite. The two boundary conditions are solved independently and the results superimposed. Mathematically, these conditions reduce the problem to a system of dual integral equations which are solved in the Laplace transform plane for the transformation of the dynamic stress intensity factor. The time inversion is carried out numerically for various combinations of the material properties of the composite and the results are displayed graphically.
An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis
ERIC Educational Resources Information Center
Diwakar, Rekha
2017-01-01
Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data which does not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…
Considerations on the mechanisms of alternating skew deviation in patients with cerebellar lesions.
Zee, D S
1996-01-01
Alternating skew deviation, in which the side of the higher eye changes depending upon whether gaze is directed to the left or the right, is a frequent sign in patients with posterior fossa lesions, including those restricted to the cerebellum. Here we propose a mechanism for alternating skews related to the otolith-ocular responses to fore and aft pitch of the head in lateral-eyed animals. In lateral-eyed animals the expected response to a static head pitch is cyclorotation of the eyes. But if the eyes are rotated horizontally in the orbit, away from the primary position, a compensatory skew deviation should also appear. The direction of the skew would depend upon whether the eyes were directed to the right (left eye forward, right eye backward) or to the left (left eye backward, right eye forward). In contrast, for frontal-eyed animals, skew deviations are counterproductive because they create diplopia and interfere with binocular vision. We attribute the emergence of skew deviations in frontal-eyed animals in pathological conditions to 1) an imbalance in otolith-ocular pathways and 2) a loss of the component of ocular motor innervation that normally corrects for the differences in pulling directions and strengths of the various ocular muscles as the eyes change position in the orbit. Such a compensatory mechanism is necessary to ensure optimal binocular visual function during and after head motion. This compensatory mechanism may depend upon the cerebellum.
Kattah, Jorge C; Talkad, Arun V; Wang, David Z; Hsieh, Yu-Hsiang; Newman-Toker, David E
2009-11-01
Acute vestibular syndrome (AVS) is often due to vestibular neuritis but can result from vertebrobasilar strokes. Misdiagnosis of posterior fossa infarcts in emergency care settings is frequent. Bedside oculomotor findings may reliably identify stroke in AVS, but prospective studies have been lacking. The authors conducted a prospective, cross-sectional study at an academic hospital. Consecutive patients with AVS (vertigo, nystagmus, nausea/vomiting, head-motion intolerance, unsteady gait) with >or=1 stroke risk factor underwent structured examination, including horizontal head impulse test of vestibulo-ocular reflex function, observation of nystagmus in different gaze positions, and prism cross-cover test of ocular alignment. All underwent neuroimaging and admission (generally <72 hours after symptom onset). Strokes were diagnosed by MRI or CT. Peripheral lesions were diagnosed by normal MRI and clinical follow-up. One hundred one high-risk patients with AVS included 25 peripheral and 76 central lesions (69 ischemic strokes, 4 hemorrhages, 3 other). The presence of normal horizontal head impulse test, direction-changing nystagmus in eccentric gaze, or skew deviation (vertical ocular misalignment) was 100% sensitive and 96% specific for stroke. Skew was present in 17% and associated with brainstem lesions (4% peripheral, 4% pure cerebellar, 30% brainstem involvement; chi(2), P=0.003). Skew correctly predicted lateral pontine stroke in 2 of 3 cases in which an abnormal horizontal head impulse test erroneously suggested peripheral localization. Initial MRI diffusion-weighted imaging was falsely negative in 12% (all <48 hours after symptom onset). Skew predicts brainstem involvement in AVS and can identify stroke when an abnormal horizontal head impulse test falsely suggests a peripheral lesion. A 3-step bedside oculomotor examination (HINTS: Head-Impulse-Nystagmus-Test-of-Skew) appears more sensitive for stroke than early MRI in AVS.
Robust Bayesian Factor Analysis
ERIC Educational Resources Information Center
Hayashi, Kentaro; Yuan, Ke-Hai
2003-01-01
Bayesian factor analysis (BFA) assumes the normal distribution of the current sample conditional on the parameters. Practical data in social and behavioral sciences typically have significant skewness and kurtosis. If the normality assumption is not attainable, the posterior analysis will be inaccurate, although the BFA depends less on the current…
Li, Xiaohong; Brock, Guy N; Rouchka, Eric C; Cooper, Nigel G F; Wu, Dongfeng; O'Toole, Timothy E; Gill, Ryan S; Eteleeb, Abdallah M; O'Brien, Liz; Rai, Shesh N
2017-01-01
Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med, UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic Curve (AUC), a specificity rate > 85%, the detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. 
Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with a control of the nominal FDR level.
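A minimal sketch of the per-sample upper-quartile global scaling step that precedes the proposed per-gene normalization (the pgQ2 step itself is more involved and is not reproduced here; the count matrix and the quartile convention are made up for illustration):

```python
def upper_quartile(values):
    """Upper quartile of the positive values (one simple convention)."""
    nz = sorted(v for v in values if v > 0)
    return nz[int(0.75 * (len(nz) - 1))]

# Made-up read-count matrix: rows are genes, columns are samples
counts = [
    [10, 20, 30],
    [100, 210, 290],
    [5, 9, 16],
    [50, 95, 160],
]
n_genes, n_samples = len(counts), len(counts[0])

# Per-sample upper-quartile global scaling: divide each column by its UQ
uqs = [upper_quartile([counts[g][s] for g in range(n_genes)])
       for s in range(n_samples)]
scaled = [[counts[g][s] / uqs[s] for s in range(n_samples)]
          for g in range(n_genes)]
```

After scaling, every sample's upper quartile sits at 1, which removes gross library-size differences before any per-gene adjustment.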
Universal statistics of selected values
NASA Astrophysics Data System (ADS)
Smerlak, Matteo; Youssef, Ahmed
2017-03-01
Selection, the tendency of some traits to become more frequent than others under the influence of some (natural or artificial) agency, is a key component of Darwinian evolution and countless other natural and social phenomena. Yet a general theory of selection, analogous to the Fisher-Tippett-Gnedenko theory of extreme events, is lacking. Here we introduce a probabilistic definition of selection and show that selected values are attracted to a universal family of limiting distributions which generalize the log-normal distribution. The universality classes and scaling exponents are determined by the tail thickness of the random variable under selection. Our results provide a possible explanation for skewed distributions observed in diverse contexts where selection plays a key role, from molecular biology to agriculture and sport.
NASA Astrophysics Data System (ADS)
Wang, W.
2017-12-01
Theory results: The Wang Wanli left-skew L distribution has a density function defined on the interval (-∞, +1]. Here x denotes the central pressure of a hurricane, xA its long-term mean, and [(x - xA)/x] the standardized random variable, subject to the boundary conditions f(+1) = 0 and f(-∞) = 0. The standardized variable is negative when x < xA, positive when x > xA, and zero when x = xA; it tends to -∞ as x → 0 and to +1 as x → +∞, so it falls in the interval (-∞, +1]. Application: In the table below, a "-" sign indicates that an individual hurricane's central pressure is below the long-term mean and a "+" sign that it is above; the mean xA may also be replaced by another standard or expected value.

Table: multi-level index of hurricane strength

[(X-XA)/X]%    XA/X         Index   Category                 X/XA        Probability  Formula
-∞             +∞           …       …                        → 0         …            …
< -900         > 10.0       < -15   extreme (Ⅵ)              < 0.10
-800, -900     9.0, 10.0    -15     extreme (Ⅵ)              0.11, 0.10
-700, -800     8.0, 9.0     -14     extreme (Ⅴ)              0.13, 0.11
-600, -700     7.0, 8.0     -13     extreme (Ⅳ)              0.14, 0.13
-500, -600     6.0, 7.0     -12     extreme (Ⅲ)              0.17, 0.14  0.05287 %    L(-5.0)-L(-6.0)
-400, -500     5.0, 6.0     -11     extreme (Ⅱ)              0.20, 0.17  0.003 %      L(-4.0)-L(-5.0)
-300, -400     4.0, 5.0     -10     extreme (Ⅰ)              0.25, 0.20  0.132 %      L(-3.0)-L(-4.0)
-267, -300     3.67, 4.00   -9      strongest (Ⅲ)-superior   0.27, 0.25  0.24 %       L(-2.67)-L(-3.00)
-233, -267     3.33, 3.67   -8      strongest (Ⅱ)-medium     0.30, 0.27  0.61 %       L(-2.33)-L(-2.67)
-200, -233     3.00, 3.33   -7      strongest (Ⅰ)-inferior   0.33, 0.30  1.28 %       L(-2.00)-L(-2.33)
-167, -200     2.67, 3.00   -6      strong (Ⅲ)-superior      0.37, 0.33  2.47 %       L(-1.67)-L(-2.00)
-133, -167     2.33, 2.67   -5      strong (Ⅱ)-medium        0.43, 0.37  4.43 %       L(-1.33)-L(-1.67)
-100, -133     2.00, 2.33   -4      strong (Ⅰ)-inferior      0.50, 0.43  6.69 %       L(-1.00)-L(-1.33)
-67, -100      1.67, 2.00   -3      normal (Ⅲ)-superior      0.60, 0.50  9.27 %       L(-0.67)-L(-1.00)
-33, -67       1.33, 1.67   -2      normal (Ⅱ)-medium        0.75, 0.60  11.93 %      L(-0.33)-L(-0.67)
0, -33         1.00, 1.33   -1      normal (Ⅰ)-inferior      1.00, 0.75  12.93 %      L(0.00)-L(-0.33)
33, 0          0.67, 1.00   +1      normal                   1.49, 1.00  34.79 %      L(0.33)-L(0.00)
67, 33         0.33, 0.67   +2      weak                     3.03, 1.49  12.12 %      L(0.67)-L(0.33)
100, 67        0.00, 0.33   +3      weaker                   ∞, 3.03     3.08 %       L(1.00)-L(0.67)
NASA Astrophysics Data System (ADS)
Miranda, Rodrigo A.; Schelin, Adriane B.; Chian, Abraham C.-L.; Ferreira, José L.
2018-03-01
In a recent paper (Chian et al., 2016) it was shown that magnetic reconnection at the interface region between two magnetic flux ropes is responsible for the genesis of interplanetary intermittent turbulence. The normalized third-order moment (skewness) and the normalized fourth-order moment (kurtosis) display a quadratic relation with a parabolic shape that is commonly observed in observational data from turbulence in fluids and plasmas, and is linked to non-Gaussian fluctuations due to coherent structures. In this paper we perform a detailed study of the relation between the skewness and the kurtosis of the modulus of the magnetic field |B| during a triple interplanetary magnetic flux rope event. In addition, we investigate the skewness-kurtosis relation of two-point differences of |B| for the same event. The parabolic relation displays scale dependence and is found to be enhanced during magnetic reconnection, rendering support for the generation of non-Gaussian coherent structures via rope-rope magnetic reconnection. Our results also indicate that a direct coupling between the scales of magnetic flux ropes and the scales within the inertial subrange occurs in the solar wind.
Mason, Jane A; Aung, Hnin T; Nandini, Adayapalam; Woods, Rickie G; Fairbairn, David J; Rowell, John A; Young, David; Susman, Rachel D; Brown, Simon A; Hyland, Valentine J; Robertson, Jeremy D
2018-05-01
We report a kindred referred for molecular investigation of severe hemophilia A in a young female in which extremely skewed X-inactivation was observed in both the proband and her clinically normal mother. Bidirectional Sanger sequencing of all F8 gene coding regions and exon/intron boundaries was undertaken. Methylation-sensitive restriction enzymes were utilized to investigate skewed X-inactivation using both a classical human androgen receptor (HUMARA) assay, and a novel method targeting differential methylation patterns in multiple informative X-chromosome SNPs. Illumina Whole-Genome Infinium microarray analysis was performed in the case-parent trio (proband and both parents), and the proband's maternal grandmother. The proband was a cytogenetically normal female with severe hemophilia A resulting from a heterozygous F8 pathogenic variant inherited from her similarly affected father. No F8 mutation was identified in the proband's mother, however, both the proband and her mother both demonstrated completely skewed X-chromosome inactivation (100%) in association with a previously unreported 2.3 Mb deletion at Xp22.2. At least three disease-associated genes (FANCB, AP1S2, and PIGA) were contained within the deleted region. We hypothesize that true "extreme" skewing of X-inactivation (≥95%) is a rare occurrence, but when defined correctly there is a high probability of finding an X-chromosome disease-causing variant or larger deletion resulting in X-inactivation through a survival disadvantage or cell lethal mechanism. We postulate that the 2.3 Mb Xp22.2 deletion identified in our kindred arose de novo in the proband's mother (on the grandfather's homolog), and produced extreme skewing of X-inactivation via a "cell lethal" mechanism. We introduce a novel multitarget approach for X-inactivation analysis using multiple informative differentially methylated SNPs, as an alternative to the classical single locus (HUMARA) method. 
We propose that for females with unexplained severe phenotypic expression of an X-linked recessive disorder trio-SNP microarray should be undertaken in combination with X-inactivation analysis. © 2018 The Authors. Molecular Genetics & Genomic Medicine published by Wiley Periodicals, Inc.
Confidence Intervals for True Scores Using the Skew-Normal Distribution
ERIC Educational Resources Information Center
Garcia-Perez, Miguel A.
2010-01-01
A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
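The Score method mentioned is commonly implemented as the Wilson interval. A sketch using the standard textbook formula, which is an assumption here rather than anything specific to the cited comparison:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Score (Wilson) confidence interval for a binomial proportion,
    built on the normal approximation to the binomial."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Illustrative: a true raw score of 18 correct responses out of 25 items
lo, hi = wilson_interval(18, 25)
```

Unlike the naive Wald interval, the Wilson interval never falls outside [0, 1] and its coverage stays close to the nominal level for small n, which is the property the comparison highlights.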
NASA Astrophysics Data System (ADS)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-01
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
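For the log-normal case the abstract mentions, the skewness/relative-dispersion coupling has a closed form: skewness = 3d + d³, where d is the relative dispersion. A quick numerical check of this identity (the σ values are illustrative, not fitted to the LWP data):

```python
import math

def lognormal_d_and_skew(sigma):
    """Population relative dispersion d and skewness of a log-normal with
    log-scale standard deviation sigma."""
    d = math.sqrt(math.exp(sigma ** 2) - 1.0)       # relative dispersion
    skew = (math.exp(sigma ** 2) + 2.0) * d         # standard identity
    return d, skew

# For a log-normal, skewness = 3*d + d**3: one closed-form version of the
# skewness vs. relative-dispersion relationship reported for the LWP data
for sigma in (0.3, 0.6, 1.0):
    d, skew = lognormal_d_and_skew(sigma)
    assert abs(skew - (3.0 * d + d ** 3)) < 1e-9
```

The Weibull and gamma families admit analogous monotone skewness-dispersion relationships, which is why all three can reasonably explain the observed correlation.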
Gentilini, Davide; Garagnani, Paolo; Pisoni, Serena; Bacalini, Maria Giulia; Calzari, Luciano; Mari, Daniela; Vitale, Giovanni; Franceschi, Claudio; Di Blasio, Anna Maria
2015-08-01
In this study we applied a new analytical strategy to investigate the relationship between stochastic epigenetic mutations (SEMs) and aging. We analysed methylation levels through the Infinium HumanMethylation27 and HumanMethylation450 BeadChips in a population of 178 subjects ranging from 3 to 106 years of age. For each CpG probe, epimutated subjects were identified as extreme outliers whose methylation level lay more than three interquartile ranges below the first quartile (Q1 - (3 x IQR)) or above the third quartile (Q3 + (3 x IQR)). We demonstrated that the number of SEMs was low in childhood and increased exponentially during aging. Using the HUMARA method, skewing of X-chromosome inactivation (XCI) was evaluated in heterozygous women. Multivariate analysis indicated a significant correlation between log(SEMs) and the degree of XCI skewing after adjustment for age (β = 0.41; confidence interval: 0.14, 0.68; p-value = 0.0053). The path analysis tested the complete model containing the variables skewing of XCI, age, log(SEMs) and overall CpG methylation. After adjusting for the number of epimutations, we failed to confirm the well-reported correlation between skewing of XCI and aging. This evidence might suggest that the known correlation between XCI skewing and aging is not a direct association but is mediated by the number of SEMs.
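The outlier rule described for SEMs can be sketched directly. The quantile convention here (linear interpolation) is an assumption; other conventions shift the cutoffs slightly:

```python
def stochastic_epimutations(levels):
    """Flag extreme outliers below Q1 - 3*IQR or above Q3 + 3*IQR, the SEM
    definition given in the abstract. Quantiles use linear interpolation."""
    xs = sorted(levels)
    n = len(xs)

    def quantile(q):
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return [v for v in levels if v < q1 - 3 * iqr or v > q3 + 3 * iqr]

# Hypothetical beta-values for one CpG probe across subjects; 0.95 is the
# lone epimutated subject under the 3*IQR rule
flagged = stochastic_epimutations([0.48, 0.49, 0.50, 0.51, 0.52] * 10 + [0.95])
```

Summing such flags over all probes per subject yields the SEM count whose logarithm the study correlates with XCI skewing.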
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, Larry K.; Newsom, Rob K.; Turner, David D.
One year of Coherent Doppler Lidar (CDL) data collected at the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) site in Oklahoma is analyzed to provide profiles of vertical velocity variance, skewness, and kurtosis for cases of cloud-free convective boundary layers. The variance was scaled by the Deardorff convective velocity scale, which was successful when the boundary-layer depth was stationary but failed in situations when the layer was changing rapidly. In this study the data are sorted according to time of day, season, wind direction, surface shear stress, degree of instability, and wind shear across the boundary-layer top. The normalized variance was found to have its peak value near a normalized height of 0.25. The magnitude of the variance changes with season, shear stress, and degree of instability, but was not impacted by wind shear across the boundary-layer top. The skewness was largest in the top half of the boundary layer (with the exception of wintertime conditions). The skewness was found to be a function of season, shear stress, and wind shear across the boundary-layer top, with larger amounts of shear leading to smaller values. Like skewness, the vertical profile of kurtosis followed a consistent pattern, with peak values near the boundary-layer top (also with the exception of wintertime data). The altitude of the peak values of kurtosis was found to be lower when there was a large amount of wind shear at the boundary-layer top.
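The Deardorff convective velocity scale used for the normalization is w* = (g zi ⟨w'θv'⟩ / θv)^(1/3). A sketch with illustrative placeholder values, not values from the study:

```python
# Deardorff convective velocity scale w* = (g * zi * flux / theta_v)**(1/3),
# used to normalize the vertical velocity variance profiles; the numbers
# below are illustrative placeholders, not values from the study
g = 9.81          # gravitational acceleration, m s^-2
zi = 1000.0       # convective boundary-layer depth, m
theta_v = 300.0   # mean virtual potential temperature, K
flux = 0.2        # surface kinematic virtual heat flux, K m s^-1

w_star = (g * zi * flux / theta_v) ** (1.0 / 3.0)   # of order 2 m s^-1 here
```

Because zi enters the scale directly, a rapidly deepening boundary layer makes w* ill-defined over an averaging period, consistent with the scaling failure the study reports for non-stationary conditions.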
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
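One classical result behind this modeling approach is that the normal law itself is a scale mixture of uniforms, mixing over a chi-squared(3) scale. A Monte Carlo illustration of that fact (a sketch, not code from the paper):

```python
import numpy as np

# If V ~ chi-squared(3) and X | V ~ Uniform(-sqrt(V), +sqrt(V)),
# then marginally X ~ N(0, 1). We verify the first moments empirically.
rng = np.random.default_rng(42)
n = 200_000
v = rng.chisquare(3, size=n)
x = rng.uniform(-np.sqrt(v), np.sqrt(v))

sample_mean = x.mean()        # close to 0
sample_var = x.var()          # close to 1
m4 = ((x - sample_mean) ** 4).mean() / sample_var**2   # close to 3 (normal)
```

Conditioning on the latent scale V is what makes Gibbs-type Bayesian computation tractable for such models.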
LETTER TO THE EDITOR: Exact energy distribution function in a time-dependent harmonic oscillator
NASA Astrophysics Data System (ADS)
Robnik, Marko; Romanovski, Valery G.; Stöckmann, Hans-Jürgen
2006-09-01
Following a recent work by Robnik and Romanovski (2006 J. Phys. A: Math. Gen. 39 L35, 2006 Open Syst. Inf. Dyn. 13 197-222), we derive an explicit formula for the universal distribution function of the final energies in a time-dependent 1D harmonic oscillator, whose functional form does not depend on the details of the frequency ω(t) and is closely related to the conservation of the adiabatic invariant. The normalized distribution function is P(x) = \pi^{-1} (2\mu^2 - x^2)^{-1/2}, where x = E_1 - \bar{E}_1; E_1 is the final energy, \bar{E}_1 is its average value and \mu^2 is the variance of E_1. \bar{E}_1 and \mu^2 can be calculated exactly using the WKB approach to all orders.
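The quoted density can be checked numerically. Substituting x = sqrt(2)·μ·sin(θ) makes the integrand uniform in θ and sidesteps the integrable endpoint singularities at x = ±sqrt(2)·μ. A sketch (the value of μ is arbitrary):

```python
import numpy as np

def trapezoid(f, t):
    """Composite trapezoid rule on a uniform grid."""
    dt = t[1] - t[0]
    return dt * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

mu = 1.7                                   # test value for the std dev of E1
theta = np.linspace(-np.pi / 2, np.pi / 2, 200_001)
x = np.sqrt(2) * mu * np.sin(theta)

# P(x) dx = (1/pi) dtheta, so the density is constant in theta.
total = trapezoid(np.ones_like(theta) / np.pi, theta)     # should be 1
second_moment = trapezoid(x**2 / np.pi, theta)            # should be mu**2
```

Both checks confirm that P(x) is a properly normalized arcsine-type density with variance μ².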
NASA Astrophysics Data System (ADS)
Forooghi, Pourya; Stroh, Alexander; Schlatter, Philipp; Frohnapfel, Bettina
2018-04-01
Direct numerical simulations are used to investigate turbulent flow in rough channels, in which topographical parameters of the rough wall are systematically varied at a fixed friction Reynolds number of 500, based on a mean channel half-height h and friction velocity. The utilized roughness generation approach allows independent variation of moments of the surface height probability distribution function [thus root-mean-square (rms) surface height, skewness, and kurtosis], surface mean slope, and standard deviation of the roughness peak sizes. Particular attention is paid to the effect of the parameter Δ defined as the normalized height difference between the highest and lowest roughness peaks. This parameter is used to understand the trends of the investigated flow variables with departure from the idealized case where all roughness elements have the same height (Δ =0 ). All calculations are done in the fully rough regime and for surfaces with high slope (effective slope equal to 0.6-0.9). The rms roughness height is fixed for all cases at 0.045 h and the skewness and kurtosis of the surface height probability density function vary in the ranges -0.33 to 0.67 and 1.9 to 2.6, respectively. The goal of the paper is twofold: first, to investigate the possible effect of topographical parameters on the mean turbulent flow, Reynolds, and dispersive stresses particularly in the vicinity of the roughness crest, and second, to investigate the possibility of using the wall-normal turbulence intensity as a physical parameter for parametrization of the flow. Such a possibility, already suggested for regular roughness in the literature, is here extended to irregular roughness.
NASA Astrophysics Data System (ADS)
Jafari, Mehrnoosh; Minaei, Saeid; Safaie, Naser; Torkamani-Azar, Farah
2016-05-01
Spatial and temporal changes in surface temperature of infected and non-infected rose plant (Rosa hybrida cv. 'Angelina') leaves were visualized using digital infrared thermography. Infected areas exhibited a presymptomatic decrease in leaf temperature of up to 2.3 °C. In this study, two experiments were conducted: one in the greenhouse (semi-controlled ambient conditions) and the other in a growth chamber (controlled ambient conditions). The effects of drought stress and darkness on the thermal images were also studied. It was found that thermal histograms of the infected leaves closely follow a standard normal distribution: they have a skewness near zero, kurtosis under 3, a standard deviation larger than 0.6, and a Maximum Temperature Difference (MTD) greater than 4. For each thermal histogram, central tendency, variability, and parameters of the best-fitted standard normal and Laplace distributions were estimated. To classify healthy and infected leaves, feature selection was conducted and the extracted thermal features with the largest linguistic hedge values were chosen. Among the features independent of absolute temperature measurement, MTD, SD, skewness, R2l, kurtosis and bn were selected. A neuro-fuzzy classifier was then trained to distinguish healthy leaves from infected ones. The k-means clustering method was utilized to obtain the initial parameters and the fuzzy "if-then" rules. Best estimation rates of 92.55% and 92.3% were achieved in training and testing the classifier with 8 clusters. Results showed that drought stress had an adverse effect on the classification of healthy leaves: more healthy leaves under drought stress were classified as infected, causing the PPV and specificity values to decrease accordingly. Image acquisition in the dark had no significant effect on classification performance.
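The histogram features named above (MTD, SD, skewness, kurtosis) are straightforward moments of the pixel temperatures. A sketch on synthetic leaf data (feature names follow the abstract; the simulated image and its parameters are placeholders):

```python
import numpy as np

def thermal_features(temps):
    """Moment-based features of a leaf's thermal histogram."""
    t = np.asarray(temps, dtype=float).ravel()
    mu, sd = t.mean(), t.std()
    z = (t - mu) / sd
    return {
        "MTD": t.max() - t.min(),      # Maximum Temperature Difference
        "SD": sd,
        "skewness": (z**3).mean(),     # near 0 for a normal histogram
        "kurtosis": (z**4).mean(),     # 3 for a normal histogram
    }

rng = np.random.default_rng(1)
leaf = rng.normal(24.0, 0.8, size=(120, 160))   # simulated leaf image, deg C
f = thermal_features(leaf)
```

On a near-normal histogram like this one, skewness sits near zero and kurtosis near 3, matching the abstract's description of infected-leaf histograms.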
Measuring Skew in Average Surface Roughness as a Function of Surface Preparation
NASA Technical Reports Server (NTRS)
Stahl, Mark
2015-01-01
Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
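The LEV (Gumbel) fit can be sketched with a simple method-of-moments estimator; the abstract does not state its fitting procedure, so this is only an illustrative stand-in, applied to simulated roughness readings:

```python
import numpy as np

EULER_GAMMA = 0.5772156649   # Euler-Mascheroni constant

def fit_gumbel_moments(x):
    """Method-of-moments fit of the Gumbel (Largest Extreme Value)
    distribution: scale = s*sqrt(6)/pi, location = mean - gamma*scale."""
    x = np.asarray(x, dtype=float)
    beta = x.std(ddof=1) * np.sqrt(6) / np.pi   # scale
    mu = x.mean() - EULER_GAMMA * beta          # location (mode)
    return mu, beta

rng = np.random.default_rng(7)
# 81 simulated roughness readings (arbitrary units), right-skewed like LEV data
roughness = rng.gumbel(loc=2.0, scale=0.3, size=81)
mu_hat, beta_hat = fit_gumbel_moments(roughness)
```

A goodness-of-fit step (e.g. comparing normal vs. LEV fits, as the study does) would then decide which family better describes the 81 measured locations.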
NASA Astrophysics Data System (ADS)
Sadeghi, Seyed Hamidreza; Singh, Vijay P.
2017-11-01
Spatiotemporal behavior of sediment yield is key to proper watershed management. This study analyzed statistical characteristics and trends of suspended sediment concentration (SSC), flow discharge (FD) and sediment particle sizes using data from 24 gage stations scattered throughout the United States. Analysis showed significant time- and location-specific differences of these variables. The median values of SSC, FD and percentage of particle sizes smaller than 63 μm (P63) for all 24 gage stations were found to be 510.236 mg l-1 (right skewed), 45.406 m3 s-1 (left skewed) and 78.648% (right skewed), respectively. Most of the stations exhibited significant trends (P < 0.001) in daily SSC (18 stations; one increasing and 17 decreasing), FD (19 stations; seven increasing and 12 decreasing), and P63 (15 stations; five increasing and 10 decreasing) as well. Further, 46% of the stations exhibited significant trends in all three variables. The wash load significantly contributed (79.085 ± 11.343%) to sediment load recorded at the gage stations. Results of the study can be used for developing best watershed management practices, which may call for local or regional planning based on natural (i.e., precipitation amount, type and erosivity, watershed area, and soil erodibility) and human-affected (i.e., land use and hydraulic structures and water resources management) factors governing the study variables.
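The abstract does not name its trend test; the Mann-Kendall test is a standard choice for monotonic trends in daily hydrological series and is easy to sketch (a synthetic declining SSC series is used here, not the study's data):

```python
import numpy as np

def mann_kendall_z(x):
    """Mann-Kendall trend statistic, normal approximation, no tie correction.
    z < -1.96 or z > 1.96 indicates a significant monotonic trend at ~5%."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum of signs of all pairwise later-minus-earlier differences
    s = np.sign(x[None, :] - x[:, None])[np.triu_indices(n, k=1)].sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

rng = np.random.default_rng(3)
t = np.arange(365)
falling = 500 - 0.5 * t + rng.normal(0, 30, size=365)   # declining daily SSC
z = mann_kendall_z(falling)   # strongly negative
```

A per-station loop over SSC, FD and P63 series would reproduce the kind of increasing/decreasing tallies reported above.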
Tan, Kun; An, Lei; Miao, Kai; Ren, Likun; Hou, Zhuocheng; Tao, Li; Zhang, Zhenni; Wang, Xiaodong; Xia, Wei; Liu, Jinghao; Wang, Zhuqing; Xi, Guangyin; Gao, Shuai; Sui, Linlin; Zhu, De-Sheng; Wang, Shumin; Wu, Zhonghong; Bach, Ingolf; Chen, Dong-bao; Tian, Jianhui
2016-01-01
Dynamic epigenetic reprogramming occurs during normal embryonic development at the preimplantation stage. Erroneous epigenetic modifications due to environmental perturbations such as manipulation and culture of embryos during in vitro fertilization (IVF) are linked to various short- or long-term consequences. Among these, the skewed sex ratio, an indicator of reproductive hazards, was reported in bovine and porcine embryos and even human IVF newborns. However, since the first case of sex skewing reported in 1991, the underlying mechanisms remain unclear. We reported herein that sex ratio is skewed in mouse IVF offspring, and this was a result of female-biased peri-implantation developmental defects that were originated from impaired imprinted X chromosome inactivation (iXCI) through reduced ring finger protein 12 (Rnf12)/X-inactive specific transcript (Xist) expression. Compensation of impaired iXCI by overexpression of Rnf12 to up-regulate Xist significantly rescued female-biased developmental defects and corrected sex ratio in IVF offspring. Moreover, supplementation of an epigenetic modulator retinoic acid in embryo culture medium up-regulated Rnf12/Xist expression, improved iXCI, and successfully redeemed the skewed sex ratio to nearly 50% in mouse IVF offspring. Thus, our data show that iXCI is one of the major epigenetic barriers for the developmental competence of female embryos during preimplantation stage, and targeting erroneous epigenetic modifications may provide a potential approach for preventing IVF-associated complications. PMID:26951653
Method of estimating flood-frequency parameters for streams in Idaho
Kjelstrom, L.C.; Moffatt, R.L.
1981-01-01
Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
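The last step, computing a discharge quantile from the three log-Pearson type III statistics, can be sketched with the Wilson-Hilferty frequency-factor approximation (a common approximation for modest skews, not necessarily the report's exact tables; the station statistics below are hypothetical):

```python
from statistics import NormalDist

def lp3_quantile(mean_log10, sd_log10, skew, exceed_prob):
    """Log-Pearson type III flood quantile.

    mean_log10, sd_log10: mean and std dev of log10(annual peak discharge)
    skew: skew coefficient (e.g. a generalized skew from the report's maps)
    exceed_prob: annual exceedance probability (0.02 -> the 50-year flood)
    Uses the Wilson-Hilferty approximation for the frequency factor K.
    """
    z = NormalDist().inv_cdf(1.0 - exceed_prob)
    if abs(skew) < 1e-9:
        k = z                                   # reduces to log-normal
    else:
        g = skew
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (mean_log10 + k * sd_log10)

# Hypothetical station: mean(log10 Q) = 3.2, sd = 0.25, generalized skew = -0.2
q50 = lp3_quantile(3.2, 0.25, -0.2, 0.02)   # 2-percent exceedance discharge
```

With zero skew the formula collapses to the log-normal quantile, which is a useful sanity check.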
Leão, William L.; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210
Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.
Problems in using p-curve analysis and text-mining to detect rate of p-hacking and evidential value.
Bishop, Dorothy V M; Thompson, Paul A
2016-01-01
Background. The p-curve is a plot of the distribution of p-values reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication, p-hacking. Methods. p-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking. Results. We show that when there is ghost p-hacking, the shape of the p-curve depends on whether dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the "p-hacking bump" just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when simulated variables are inter-correlated. The way p-curves vary according to features of underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers. Conclusions. The absence of a bump in the p-curve is not indicative of lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value, unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis. In particular, p-hacking with ghost variables is likely to be missed.
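The ghost-variable mechanism is easy to reproduce. The paper's simulations were in R; the same idea in Python, with uncorrelated dependent variables and a large-sample normal approximation to the t-test (an assumption of this sketch), looks like this:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(11)
norm = NormalDist()

def reported_p(k=5, n=100):
    """Smallest of k two-sample p-values under the null: the experimenter
    measures k 'ghost' dependent variables with NO true effect and
    reports only the best-looking one."""
    a = rng.normal(size=(k, n))
    b = rng.normal(size=(k, n))
    z = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(
        a.var(axis=1, ddof=1) / n + b.var(axis=1, ddof=1) / n)
    p = 2 * np.array([1 - norm.cdf(abs(v)) for v in z])
    return p.min()

p_hacked = np.array([reported_p() for _ in range(2000)])
# Nominal rate would be 0.05; selection inflates it toward 1 - 0.95**5 = 0.23
false_pos_rate = (p_hacked < 0.05).mean()
```

A histogram of `p_hacked` over (0, 0.05) shows the flat-to-left-leaning shape the paper discusses for uncorrelated ghost variables, rather than a bump just below .05.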
Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin
2013-01-01
In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately: however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436
Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin
2013-10-15
In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Monte Carlo Markov chain sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG) (LDL-C, HDL-C, TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.
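The role of the Box-Cox transformation here can be illustrated outside the Bayesian setting: a moment-based sketch that picks λ by minimizing residual skewness on synthetic right-skewed "TG"-like data (the paper instead places priors on λ and selects transformation models by DIC):

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform; y must be positive. lam = 0 gives log(y)."""
    y = np.asarray(y, dtype=float)
    if abs(lam) < 1e-9:
        return np.log(y)
    return (y**lam - 1.0) / lam

def skewness(x):
    z = (x - x.mean()) / x.std()
    return (z**3).mean()

rng = np.random.default_rng(5)
# Log-normal data: right-skewed on the raw scale, exactly normal at lam = 0
tg = np.exp(rng.normal(4.6, 0.5, size=2000))
lams = np.linspace(-1, 1, 81)
best = min(lams, key=lambda l: abs(skewness(boxcox(tg, l))))
```

Because the synthetic data are log-normal, the symmetry criterion lands near λ = 0, i.e. the log transform, confirming that the search behaves sensibly.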
Moran, John L; Solomon, Patricia J
2012-05-16
For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of firstly: log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and secondly: unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using Bayesian information criterion [BIC: lower values preferred] and residual analysis as well as predictive performance (R2, concordance correlation coefficient (CCC), mean absolute error [MAE]) were established for each estimator. The data-set consisted of 111663 patients from 131 ICUs; with mean(SD) age 60.6(18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length-of-stay was 3.4(5.1) (median 1.8, range (0.17-60)) days and demonstrated marked kurtosis and right skew (29.4 and 4.4 respectively). BIC showed considerable spread, from a maximum of 509801 (OLS-raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17 and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators. 
There was a general tendency for over-prediction (negative residuals) and for over-fitting, the exception being the GLM negative binomial estimator. The mean-variance function was best approximated by a quadratic function, consistent with log-scale estimation; the link function was estimated (EEE) as 0.152(0.019, 0.285), consistent with a fractional-root function. For ICU length of stay, log-scale estimation, in particular the LMM, appeared to be the most consistently performing estimator(s). Neither the GLM variants nor the skew-regression estimators dominated.
Rivera, Ana Leonor; Estañol, Bruno; Sentíes-Madrid, Horacio; Fossion, Ruben; Toledo-Roy, Juan C.; Mendoza-Temis, Joel; Morales, Irving O.; Landa, Emmanuel; Robles-Cabrera, Adriana; Moreno, Rene; Frank, Alejandro
2016-01-01
Diabetes Mellitus (DM) affects the cardiovascular response of patients. To study this effect, interbeat intervals (IBI) and beat-to-beat systolic blood pressure (SBP) variability of patients during supine, standing and controlled breathing tests were analyzed in the time domain. Simultaneous noninvasive measurements of IBI and SBP for 30 recently diagnosed and 15 long-standing DM patients were compared with the results for 30 rigorously screened healthy subjects (control). A statistically significant distinction between control and diabetic subjects was provided by the standard deviation and the higher moments of the distributions (skewness, and kurtosis) with respect to the median. To compare IBI and SBP for different populations, we define a parameter, α, that combines the variability of the heart rate and the blood pressure, as the ratio of the radius of the moments for IBI and the same radius for SBP. As diabetes evolves, α decreases, standard deviation of the IBI detrended signal diminishes (heart rate signal becomes more “rigid”), skewness with respect to the median approaches zero (signal fluctuations gain symmetry), and kurtosis increases (fluctuations concentrate around the median). Diabetes produces not only a rigid heart rate, but also increases symmetry and has leptokurtic distributions. SBP time series exhibit the most variable behavior for recently diagnosed DM with platykurtic distributions. Under controlled breathing, SBP has symmetric distributions for DM patients, while control subjects have non-zero skewness. This may be due to a progressive decrease of parasympathetic and sympathetic activity to the heart and blood vessels as diabetes evolves. PMID:26849653
Rivera, Ana Leonor; Estañol, Bruno; Sentíes-Madrid, Horacio; Fossion, Ruben; Toledo-Roy, Juan C; Mendoza-Temis, Joel; Morales, Irving O; Landa, Emmanuel; Robles-Cabrera, Adriana; Moreno, Rene; Frank, Alejandro
2016-01-01
Diabetes Mellitus (DM) affects the cardiovascular response of patients. To study this effect, interbeat intervals (IBI) and beat-to-beat systolic blood pressure (SBP) variability of patients during supine, standing and controlled breathing tests were analyzed in the time domain. Simultaneous noninvasive measurements of IBI and SBP for 30 recently diagnosed and 15 long-standing DM patients were compared with the results for 30 rigorously screened healthy subjects (control). A statistically significant distinction between control and diabetic subjects was provided by the standard deviation and the higher moments of the distributions (skewness, and kurtosis) with respect to the median. To compare IBI and SBP for different populations, we define a parameter, α, that combines the variability of the heart rate and the blood pressure, as the ratio of the radius of the moments for IBI and the same radius for SBP. As diabetes evolves, α decreases, standard deviation of the IBI detrended signal diminishes (heart rate signal becomes more "rigid"), skewness with respect to the median approaches zero (signal fluctuations gain symmetry), and kurtosis increases (fluctuations concentrate around the median). Diabetes produces not only a rigid heart rate, but also increases symmetry and has leptokurtic distributions. SBP time series exhibit the most variable behavior for recently diagnosed DM with platykurtic distributions. Under controlled breathing, SBP has symmetric distributions for DM patients, while control subjects have non-zero skewness. This may be due to a progressive decrease of parasympathetic and sympathetic activity to the heart and blood vessels as diabetes evolves.
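The median-referenced moments described above differ from the usual mean-referenced ones only in the centering. A sketch on synthetic interbeat intervals (the α ratio itself is omitted, since its exact definition is only given informally in the abstract):

```python
import numpy as np

def moments_about_median(x):
    """Std dev, skewness and kurtosis computed with respect to the
    median rather than the mean, as in the abstract above."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    sd = np.sqrt(((x - med) ** 2).mean())
    skew = (((x - med) / sd) ** 3).mean()   # ~0 for symmetric fluctuations
    kurt = (((x - med) / sd) ** 4).mean()   # 3 for normal; >3 leptokurtic
    return sd, skew, kurt

rng = np.random.default_rng(8)
ibi = rng.normal(0.8, 0.05, size=5000)   # synthetic interbeat intervals, s
sd, skew, kurt = moments_about_median(ibi)
```

Applied to detrended IBI and SBP series, decreasing sd, near-zero skew and rising kurt would correspond to the "rigid", symmetric, leptokurtic signature reported for diabetic patients.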
A Bayesian estimate of the concordance correlation coefficient with skewed data.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that data is normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods has been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
Pegoraro, E; Whitaker, J; Mowery-Rushton, P; Surti, U; Lanasa, M; Hoffman, E P
1997-01-01
We report a family ascertained for molecular diagnosis of muscular dystrophy in a young girl, in which preferential activation (> or = 95% of cells) of the paternal X chromosome was seen in both the proband and her mother. To determine the molecular basis for skewed X inactivation, we studied X-inactivation patterns in peripheral blood and/or oral mucosal cells from 50 members of this family and from a cohort of normal females. We found excellent concordance between X-inactivation patterns in blood and oral mucosal cell nuclei in all females. Of the 50 female pedigree members studied, 16 showed preferential use (> or = 95% cells) of the paternal X chromosome; none of 62 randomly selected females showed similarly skewed X inactivation. Skewed X inactivation was maternally inherited in this family. A linkage study using the molecular trait of skewed X inactivation as the scored phenotype localized this trait to Xq28 (DXS1108; maximum LOD score [Zmax] = 4.34, recombination fraction [theta] = 0). Both genotyping of additional markers and FISH of a YAC probe in Xq28 showed a deletion spanning from intron 22 of the factor VIII gene to DXS115-3. This deletion completely cosegregated with the trait (Zmax = 6.92, theta = 0). Comparison of clinical findings between affected and unaffected females in the 50-member pedigree showed a statistically significant increase in spontaneous-abortion rate in the females carrying the trait (P < .02). To our knowledge, this is the first gene-mapping study of abnormalities of X-inactivation patterns and the first association of a specific locus with recurrent spontaneous abortion in a cytogenetically normal family. The involvement of this locus in cell lethality, cell-growth disadvantage, developmental abnormalities, or the X-inactivation process is discussed. PMID:9245997
NASA Astrophysics Data System (ADS)
Lahmiri, S.; Boukadoum, M.
2015-10-01
Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under normal, t-Student, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, to achieve complementarity. The performance of each EGARCH-BPNN model and the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on mean absolute error and mean squared error, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting the S&P 500 intra-day volatility based on one- and five-minute time horizons.
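The EGARCH filtering stage can be sketched independently of the neural-network component. A minimal EGARCH(1,1) log-variance recursion under the normal-innovation assumption (parameter values are illustrative, not fitted):

```python
import numpy as np

def egarch_variance(returns, omega, alpha, gamma, beta):
    """Filter an EGARCH(1,1) log-variance recursion over a return series:
    ln s2[t] = omega + beta*ln s2[t-1] + alpha*(|z| - E|z|) + gamma*z,
    with z = r/s and E|z| = sqrt(2/pi) for normal innovations. This shows
    only the GARCH-family stage, without the BPNN hybrid step."""
    r = np.asarray(returns, dtype=float)
    e_abs = np.sqrt(2.0 / np.pi)
    log_s2 = np.empty(len(r))
    log_s2[0] = np.log(r.var())            # initialize at sample variance
    for t in range(1, len(r)):
        z = r[t - 1] / np.exp(0.5 * log_s2[t - 1])
        log_s2[t] = (omega + beta * log_s2[t - 1]
                     + alpha * (abs(z) - e_abs) + gamma * z)
    return np.exp(log_s2)

rng = np.random.default_rng(2)
rets = rng.normal(0, 0.01, size=500)       # synthetic intra-day returns
s2 = egarch_variance(rets, omega=-0.2, alpha=0.1, gamma=-0.05, beta=0.97)
```

A negative `gamma` encodes the leverage effect (negative returns raise volatility more than positive ones), which is the usual motivation for EGARCH over plain GARCH.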
Accumulation risk assessment for the flooding hazard
NASA Astrophysics Data System (ADS)
Roth, Giorgio; Ghizzoni, Tatiana; Rudari, Roberto
2010-05-01
One of the main consequences of demographic and economic development and of the globalization of markets and trade is the accumulation of risks. In most cases, a risk cumulus intuitively arises from the geographic concentration of a number of vulnerable elements in a single place. For natural events, risk accumulation can be associated not only with an event's intensity but also with its extension. In this case, the magnitude can be such that large areas, possibly including many regions or even large portions of different countries, are struck by single catastrophic events. Among natural risks, the impact of the flooding hazard cannot be overstated. To cope with it, a variety of mitigation actions can be put in place: from the improvement of monitoring and alert systems to the development of hydraulic structures, through land-use restrictions, civil protection, and financial and insurance plans. All of these options have social and economic impacts, either positive or negative, whose proper estimate should rely on the assumption of appropriate - present and future - flood risk scenarios. It is therefore necessary to identify proper statistical methodologies, able to describe the multivariate aspects of the involved physical processes and their spatial dependence. In hydrology and meteorology, but also in finance and insurance practice, it was recognized early on that classical statistical distributions (e.g., the normal and gamma families) are of restricted use for modeling multivariate spatial data. Research efforts have therefore been directed towards developing statistical models capable of describing the forms of asymmetry manifest in data sets. This holds in particular for the quite frequent case of phenomena whose empirical outcome behaves in a non-normal fashion, but still maintains some broad similarity with the multivariate normal distribution.
Fruitful approaches were recognized in the use of flexible models, which include the normal distribution as a special or limiting case (e.g., the skew-normal or skew-t distributions). The present contribution attempts to provide a better estimate of the joint probability distribution able to describe flood events in a multi-site, multi-basin fashion. This goal will be pursued through the multivariate skew-t distribution, which allows the joint probability distribution to be defined analytically. The performance of the skew-t distribution will be discussed with reference to the Tanaro River in Northwestern Italy. To enhance the characteristics of the correlation structure, both nested and non-nested gauging stations will be selected, with significantly different contributing areas.
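A handy fact when working with the skew-normal special case is Azzalini's stochastic representation, which gives a two-line sampler. A sketch (the flood application itself uses the multivariate skew-t, which is not reproduced here):

```python
import numpy as np

def skew_normal_sample(alpha, size, rng):
    """Sample the skew-normal(alpha) law via Azzalini's representation:
    with delta = alpha/sqrt(1+alpha^2) and U0, U1 iid N(0,1),
    Z = delta*|U0| + sqrt(1-delta^2)*U1. alpha = 0 recovers N(0,1)."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = np.abs(rng.normal(size=size))
    u1 = rng.normal(size=size)
    return delta * u0 + np.sqrt(1.0 - delta**2) * u1

rng = np.random.default_rng(9)
z = skew_normal_sample(alpha=4.0, size=100_000, rng=rng)
# Theory: mean = delta*sqrt(2/pi), variance = 1 - 2*delta^2/pi,
# and the sample is visibly right-skewed for alpha > 0.
```

The same half-normal trick extends to skew-t sampling by dividing by the square root of an independent chi-squared/df variable.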
Modeling multivariate time series on manifolds with skew radial basis functions.
Jamshidi, Arta A; Kirby, Michael J
2011-01-01
We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.
Response to treatment of myasthenia gravis according to clinical subtype.
Akaishi, Tetsuya; Suzuki, Yasushi; Imai, Tomihiro; Tsuda, Emiko; Minami, Naoya; Nagane, Yuriko; Uzawa, Akiyuki; Kawaguchi, Naoki; Masuda, Masayuki; Konno, Shingo; Suzuki, Hidekazu; Murai, Hiroyuki; Aoki, Masashi; Utsugisawa, Kimiaki
2016-11-17
We have previously reported using two-step cluster analysis to classify myasthenia gravis (MG) patients into the following five subtypes: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; anti-acetylcholine receptor antibody (AChR-Ab)-negative MG; and AChR-Ab-positive MG without thymic abnormalities. The objectives of the present study were to examine the reproducibility of this five-subtype classification using a new data set of MG patients and to identify additional characteristics of these subtypes, particularly with regard to response to treatment. A total of 923 consecutive MG patients underwent two-step cluster analysis for the classification of subtypes. The variables used for classification were sex, age of onset, disease duration, presence of thymoma or thymic hyperplasia, positivity for AChR-Ab or anti-muscle-specific tyrosine kinase antibody, positivity for other concurrent autoantibodies, and disease condition at its worst and at present. The period from the start of treatment until the achievement of minimal manifestation status (early-stage response) was determined and then compared between subtypes using Kaplan-Meier analysis and the log-rank test. In addition, the ratio of the number of patients who maintained minimal manifestations during the study period to the number who achieved the status only once (stability of improved status) was compared between subtypes. As a result of two-step cluster analysis, the 923 MG patients were classified into five subtypes as follows: ocular MG (AChR-Ab positivity, 77%; histogram of onset age skewed to older age); thymoma-associated MG (100%; normal distribution); MG with thymic hyperplasia (89%; skewed to younger age); AChR-Ab-negative MG (0%; normal distribution); and AChR-Ab-positive MG without thymic abnormalities (100%; skewed to older age).
Furthermore, patients classified as ocular MG showed the best early-stage response to treatment and stability of improved status, followed by those classified as thymoma-associated MG and AChR-Ab-positive MG without thymic abnormalities; by contrast, those classified as AChR-Ab-negative MG showed the worst early-stage response to treatment and stability of improved status. Differences were seen between the five subtypes in demographic characteristics, clinical severity, and therapeutic response. Our five-subtype classification approach would be beneficial not only to elucidate disease subtypes, but also to plan treatment strategies for individual MG patients.
Measuring skew in average surface roughness as a function of surface preparation
NASA Astrophysics Data System (ADS)
Stahl, Mark T.
2015-08-01
Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money, and allows the science requirements to be better defined. This study characterized the statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to normal and largest extreme value (LEV) distributions and then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
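The distribution comparison described above can be sketched with SciPy. The roughness values below are simulated stand-ins (the study's interferometer measurements are not reproduced here), and the LEV is taken to be the Gumbel (`gumbel_r`) family:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-in for the 81 roughness measurements (units arbitrary);
# the real data came from the white light interferometer.
roughness = stats.gumbel_r.rvs(loc=20.0, scale=3.0, size=81, random_state=rng)

# Maximum-likelihood fits of both candidate distributions.
mu, sigma = stats.norm.fit(roughness)
loc, scale = stats.gumbel_r.fit(roughness)

# Kolmogorov-Smirnov goodness-of-fit test against each fitted model.
ks_norm = stats.kstest(roughness, 'norm', args=(mu, sigma))
ks_lev = stats.kstest(roughness, 'gumbel_r', args=(loc, scale))

print("sample skewness:          %.2f" % stats.skew(roughness))
print("KS p-value, normal:       %.3f" % ks_norm.pvalue)
print("KS p-value, LEV (Gumbel): %.3f" % ks_lev.pvalue)
```

Tracking the sample skewness of each interval's 81 measurements over polishing time reproduces the kind of trend the abstract describes.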
Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A
2018-01-01
This study compares conventional grab sampling to incremental sampling methodology (ISM) for characterizing metal contamination at a military small-arms range. Grab sample results had large variances, positively skewed non-normal distributions, extreme outliers, and poor agreement between duplicate samples, even when samples were co-located within tens of centimeters of each other. The extreme outliers strongly influenced the grab sample means for the primary contaminants lead (Pb) and antimony (Sb). In contrast, median and mean metal concentrations were similar for the ISM samples. ISM significantly reduced the measurement uncertainty of estimates of the mean, increasing data quality (e.g., for environmental risk assessments) with fewer samples (e.g., decreasing total project costs). Based on Monte Carlo resampling simulations, grab sampling resulted in highly variable means and upper confidence limits of the mean relative to ISM.
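A minimal Monte Carlo sketch of why ISM stabilizes the estimated mean relative to grab sampling. The lognormal "site", the increment counts, and the replicate counts are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical skewed "site" of Pb concentrations (mg/kg); the real study
# used field measurements from a small-arms range.
site = rng.lognormal(mean=5.0, sigma=1.5, size=100_000)

def grab_mean(n_grabs=10):
    # Mean of a few discrete grab samples.
    return rng.choice(site, size=n_grabs).mean()

def ism_mean(n_increments=50, n_replicates=3):
    # Each ISM result is already a physical composite (mean) of many
    # increments; average a few replicate composites.
    composites = [rng.choice(site, size=n_increments).mean()
                  for _ in range(n_replicates)]
    return np.mean(composites)

grab = np.array([grab_mean() for _ in range(1000)])
ism = np.array([ism_mean() for _ in range(1000)])

print("true mean: %.0f" % site.mean())
print("grab: mean %.0f, sd %.0f" % (grab.mean(), grab.std()))
print("ISM:  mean %.0f, sd %.0f" % (ism.mean(), ism.std()))
# The ISM means cluster far more tightly around the true site mean.
```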
Analysis of line structure in handwritten documents using the Hough transform
NASA Astrophysics Data System (ADS)
Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin
2010-01-01
In the analysis of handwriting in documents, a central task is determining the line structure of the text, e.g., the number of text lines, the locations of their start and end points, line width, etc. While simple methods can handle ideal images, real-world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, and noisy or degraded images. This paper explores the application of the Hough transform method to handwritten documents with the goal of automatically determining global document line structure in a top-down manner, which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than that of other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
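A self-contained sketch of the Hough transform idea for estimating global text-line skew. The synthetic staircase "text lines" and all parameter choices are illustrative; a real pipeline would vote with the foreground pixels of a binarized document image:

```python
import numpy as np

def hough_lines(points, img_shape, n_theta=180):
    """Accumulate votes in (theta, rho) space for a set of foreground
    pixel coordinates -- the classic Hough transform used to find
    globally dominant (possibly skewed) text lines."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.linspace(-90.0, 89.0, n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for y, x in points:
        # rho = x*cos(theta) + y*sin(theta), offset so indices are >= 0
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Synthetic "document": three horizontal text lines with a slight skew
# (slope 0.05, i.e. about 2.9 degrees).
ys, xs = np.meshgrid(np.arange(3) * 30 + 10, np.arange(100), indexing='ij')
points = list(zip((ys + 0.05 * xs).astype(int).ravel(), xs.ravel()))

acc, thetas, diag = hough_lines(points, (100, 120))
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
skew_deg = 90.0 - abs(np.rad2deg(thetas[theta_idx]))
print("estimated skew (deg from horizontal): %.1f" % skew_deg)
```

With a 1-degree angular grid the peak lands on the accumulator cell nearest the true line angle, so the recovered skew is approximate.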
Exact statistical results for binary mixing and reaction in variable density turbulence
NASA Astrophysics Data System (ADS)
Ristorcelli, J. R.
2017-02-01
We report a number of rigorous statistical results on binary active scalar mixing in variable density turbulence. The study is motivated by mixing between pure fluids with very different densities, whose density intensity is of order unity. Our primary focus is the derivation of exact mathematical results for mixing in variable density turbulence, and we point out potential fields of application of the results. A binary one-step reaction is invoked to derive a metric to assess the state of mixing. The mean reaction rate in variable density turbulent mixing can be expressed, in closed form, using the first-order Favre mean variables and the Reynolds-averaged density variance, ⟨ρ′²⟩. We show that the normalized density variance ⟨ρ′²⟩ reflects the reduction of the reaction due to mixing and is a mix metric. The result is mathematically rigorous and is the variable density analog of the normalized mass-fraction variance ⟨c′²⟩ used in constant density turbulent mixing. As a consequence, we demonstrate that use of the analogous normalized Favre (density-weighted) variance of the mass fraction, c̃″², as a mix metric is not theoretically justified in variable density turbulence. We additionally derive expressions relating various second-order moments of the mass fraction, specific volume, and density fields. The central role of the density-specific volume covariance ⟨ρ′v′⟩ is highlighted; it is a key quantity with considerable dynamical significance, linking various second-order statistics. For laboratory experiments, we have developed exact relations between the Reynolds scalar variance ⟨c′²⟩, its Favre analog c̃″², and various second moments including ⟨ρ′v′⟩. For moment closure models that evolve ⟨ρ′v′⟩ and not ⟨ρ′²⟩, we provide a novel expression for ⟨ρ′²⟩ in terms of a rational function of ⟨ρ′v′⟩ that avoids recourse to Taylor series methods (which do not converge for large density differences). We have derived analytic results relating several other second- and third-order moments, and we see coupling between odd and even order moments, demonstrating a natural and inherent skewness in the mixing in variable density turbulence. The analytic results have applications in the areas of isothermal material mixing, isobaric thermal mixing, and simple chemical reaction (in a progress variable formulation).
Forecasting stock market volatility: Do realized skewness and kurtosis help?
NASA Astrophysics Data System (ADS)
Mei, Dexiang; Liu, Jing; Ma, Feng; Chen, Wang
2017-09-01
In this study, we investigate the predictive power of realized skewness (RSK) and realized kurtosis (RKU) for stock market volatility, which has not been addressed in existing studies. Out-of-sample results show that RSK, which can significantly improve forecast accuracy in the mid- and long-term, is more powerful than RKU in forecasting volatility, whereas both variables are uninformative for short-term forecasting. Furthermore, we employ the realized kernel (RK) for a robustness analysis, and the conclusions are consistent with those for the RV measures. Our results are of great importance for portfolio allocation and financial risk management.
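The realized moments can be computed directly from intraday returns. The definitions below are a commonly used set of realized-moment formulas (the paper's exact estimators may differ in scaling), and the return series is simulated:

```python
import numpy as np

def realized_moments(intraday_returns):
    """Daily realized variance, skewness, and kurtosis from
    high-frequency returns (a commonly used set of definitions)."""
    r = np.asarray(intraday_returns)
    n = r.size
    rv = np.sum(r ** 2)                          # realized variance
    rsk = np.sqrt(n) * np.sum(r ** 3) / rv ** 1.5
    rku = n * np.sum(r ** 4) / rv ** 2
    return rv, rsk, rku

rng = np.random.default_rng(42)
# Hypothetical 5-minute returns for one trading day (78 intervals),
# with one negative jump to induce negative realized skewness.
r = rng.normal(0.0, 0.001, size=78)
r[40] -= 0.02
rv, rsk, rku = realized_moments(r)
print("RV=%.2e  RSK=%.2f  RKU=%.1f" % (rv, rsk, rku))
```

Daily series of RSK and RKU built this way are the candidate predictors whose out-of-sample value the study evaluates.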
Mean velocities and Reynolds stresses upstream of a simulated wing-fuselage juncture
NASA Technical Reports Server (NTRS)
Mcmahon, H.; Hubbartt, J.; Kubendran, L. R.
1983-01-01
Values of three mean velocity components and six turbulence stresses, measured in a turbulent shear layer upstream of a simulated wing-fuselage juncture and immediately downstream of the start of the juncture, are presented and discussed. Two single-sensor hot-wire probes were used in the measurements. The separated region just upstream of the wing contains an area of reversed flow near the fuselage surface where the turbulence level is high. Outside of this area the flow skews as it passes around the body, and in this skewed region the magnitude and distribution of the turbulent normal and shear stresses within the shear layer are modified slightly by the skewing and deceleration of the flow. A short distance downstream of the wing leading edge the secondary flow vortex is tightly rolled up and redistributes both mean flow and turbulence in the juncture. The data acquisition technique employed here allows a hot wire to be used in a reversed-flow region to indicate flow direction.
Cauley, K A; Hu, Y; Och, J; Yorks, P J; Fielden, S W
2018-04-01
The majority of brain growth and development occurs in the first 2 years of life. This study investigated these changes by analysis of the brain radiodensity histogram of head CT scans from the clinical population, 0-2 years of age. One hundred twenty consecutive head CTs with normal findings meeting the inclusion criteria, from children from birth to 2 years, were retrospectively identified from 3 different CT scan platforms. Histogram analysis was performed on brain-extracted images, and histogram mean, mode, full width at half maximum, skewness, kurtosis, and SD were correlated with subject age. The effects of scan platform were investigated. Normative curves were fitted by polynomial regression analysis. Average total brain volume was 360 cm³ at birth, 948 cm³ at 1 year, and 1072 cm³ at 2 years. Total brain tissue density showed an 11% increase in mean density at 1 year and 19% at 2 years. Brain radiodensity histogram skewness was positive at birth, declining logarithmically in the first 200 days of life. The histogram kurtosis also decreased in the first 200 days to approach a normal distribution. Direct segmentation of CT images showed that changes in brain radiodensity histogram skewness correlated with, and can be explained by, a relative increase in gray matter volume and an increase in gray and white matter tissue density that occurs during this period of brain maturation. Normative metrics of the brain radiodensity histogram derived from routine clinical head CT images can be used to develop a model of normal brain development. © 2018 by American Journal of Neuroradiology.
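The histogram metrics used in the study can be reproduced on any voxel-intensity array. The two-component mixture below is a hypothetical stand-in for brain radiodensities, not patient data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical brain voxel radiodensities (HU): two tissue components of
# unequal size give a peaked, positively skewed histogram, loosely
# mimicking the pattern reported for the youngest subjects.
white = rng.normal(22.0, 2.0, size=60_000)
gray = rng.normal(30.0, 3.0, size=20_000)
hu = np.concatenate([white, gray])

counts, edges = np.histogram(hu, bins=100)
metrics = {
    "mean": hu.mean(),
    "mode": 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1]),
    "sd": hu.std(),
    "skewness": stats.skew(hu),
    "kurtosis": stats.kurtosis(hu),  # excess kurtosis
}
for name, value in metrics.items():
    print("%-9s %.2f" % (name, value))
```

Plotting each metric against subject age, as the study does, yields the normative curves described above.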
Bazavov, A.; Ding, H. -T.; Hegde, P.; ...
2017-10-27
In this paper, we present results for the ratios of the mean (M_B), variance (σ²_B), skewness (S_B), and kurtosis (κ_B) of net baryon-number fluctuations obtained in lattice QCD calculations with physical values of the light and strange quark masses. Using next-to-leading order Taylor expansions in the baryon chemical potential, we find that qualitative features of these ratios closely resemble the corresponding experimentally measured cumulant ratios of net proton-number fluctuations for beam energies down to √s_NN ≥ 19.6 GeV. We show that the difference in the cumulant ratios for the mean net baryon number, M_B/σ²_B = χ^B_1(T, μ_B)/χ^B_2(T, μ_B), and the normalized skewness, S_B σ_B = χ^B_3(T, μ_B)/χ^B_2(T, μ_B), naturally arises in QCD thermodynamics. Moreover, we establish a close relation between the skewness and kurtosis ratios, S_B σ³_B/M_B = χ^B_3(T, μ_B)/χ^B_1(T, μ_B) and κ_B σ²_B = χ^B_4(T, μ_B)/χ^B_2(T, μ_B), valid at small values of the baryon chemical potential.
Juang, K W; Lee, D Y; Ellsworth, T R
2001-01-01
The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart and the geostatistical interpolation is hindered. In this study, rank-order geostatistics with standardized rank transformation was used for the spatial interpolation of pollutants with a highly skewed distribution in contaminated soils when commonly used nonlinear methods, such as logarithmic and normal-scored transformations, are not suitable. A real data set of soil Cd concentrations with great variation and high skewness in a contaminated site of Taiwan was used for illustration. The spatial dependence of ranks transformed from Cd concentrations was identified and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of Cd concentration being higher than a given cutoff value also can be estimated by using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and the probabilities of Cd concentrations being higher than the cutoff value can be simultaneously used for delineation of hazardous areas of contaminated soils.
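The standardized-rank transform and the middle-point back-transform on the EDF can be sketched as follows. The Cd values are simulated stand-ins for the Taiwan site data, and the kriging step itself (performed in standardized-rank space) is omitted:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(3)
# Hypothetical soil Cd concentrations (mg/kg): highly skewed, as in the
# contaminated-site data set.
cd = rng.lognormal(mean=0.5, sigma=1.2, size=200)

# Forward transform: ranks standardized to (0, 1), i.e. roughly the
# empirical distribution function (EDF) value of each observation.
n = cd.size
std_rank = rankdata(cd) / (n + 1)

# Back-transform of an estimated standardized rank via a middle-point
# model on the EDF (kriged estimates would be fed through this).
sorted_cd = np.sort(cd)

def back_transform(u):
    """Map an estimated standardized rank u in (0, 1) back to the
    concentration space (mid-point interpolation of the EDF)."""
    pos = u * (n + 1) - 1            # fractional index into sorted data
    lo = int(np.clip(np.floor(pos), 0, n - 1))
    hi = int(np.clip(lo + 1, 0, n - 1))
    return 0.5 * (sorted_cd[lo] + sorted_cd[hi])

print("estimate at rank 0.5: %.2f" % back_transform(0.5))
print("sample median:        %.2f" % np.median(cd))
```

Exceedance probabilities for a cutoff follow directly: the standardized rank of the cutoff value gives the estimated probability of not exceeding it.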
NASA Astrophysics Data System (ADS)
Cardona, Javier Fernando; García Bonilla, Alba Carolina; Tomás García, Rogelio
2017-11-01
This article shows that the effect of all quadrupole errors present in an interaction region with low β* can be modeled by an equivalent magnetic kick, which can be estimated from action and phase jumps found in beam position data. This equivalent kick is used to find the strengths that certain normal and skew quadrupoles located in the IR must have to make an effective correction in that region. Additionally, averaging techniques to reduce noise in beam position data, which allow precise estimates of equivalent kicks, are presented and mathematically justified. The complete procedure is tested with simulated data obtained from madx and with 2015 LHC experimental data. The analyses performed on the experimental data indicate that the strengths of the IR skew quadrupole correctors and normal quadrupole correctors can be estimated within a 10% uncertainty. Finally, the effect of IR corrections on β* is studied, and a correction scheme that returns this parameter to its design value is proposed.
Robustness of S1 statistic with Hodges-Lehmann for skewed distributions
NASA Astrophysics Data System (ADS)
Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping
2016-10-01
Analysis of variance (ANOVA) is a commonly used parametric method to test for differences in means among more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the Hodges-Lehmann estimator for the median, and the variance of Hodges-Lehmann or MADn for the default scale estimator, to produce two different test statistics for comparing groups. A bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA, and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
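A minimal implementation of the Hodges-Lehmann location estimator substituted into the modified S1 statistics (the one-sample Walsh-average definition; the S1 test itself is not reproduced here):

```python
import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann location estimator: the median of all pairwise
    Walsh averages (x_i + x_j) / 2 with i <= j."""
    x = np.asarray(x)
    i, j = np.triu_indices(x.size)
    return np.median((x[i] + x[j]) / 2.0)

rng = np.random.default_rng(11)
# For a skewed sample the HL estimate sits between the median and the
# mean, and it is robust to the long right tail.
sample = rng.exponential(scale=1.0, size=101)
print("mean   %.3f" % sample.mean())
print("median %.3f" % np.median(sample))
print("HL     %.3f" % hodges_lehmann(sample))
```

The O(n²) pairwise construction is fine for group sizes typical of these simulations; larger samples would call for a subsampled or streaming variant.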
Inferring network structure in non-normal and mixed discrete-continuous genomic data.
Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran
2018-03-01
Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Huete, Alfredo R.; Didan, Kamel; van Leeuwen, Willem J. D.; Vermote, Eric F.
1999-12-01
Vegetation indices have emerged as important tools in the seasonal and inter-annual monitoring of the Earth's vegetation. They are radiometric measures of the amount and condition of vegetation. In this study, the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) is used to investigate coarse-resolution monitoring of vegetation with multiple indices. A 30-day series of SeaWiFS data, corrected for molecular scattering and absorption, was composited to cloud-free, single-channel reflectance images. The normalized difference vegetation index (NDVI) and an optimized index, the enhanced vegetation index (EVI), were computed over various 'continental' regions. The EVI had a normal distribution of values over the continental set of biomes, while the NDVI was skewed toward higher values and saturated over forested regions. The NDVI resembled the skewed distributions found in the red band, while the EVI resembled the normal distributions found in the NIR band. The EVI minimized smoke contamination over extensive portions of the tropics. As a result, major biome types within continental regions were discriminable in both the EVI imagery and histograms, whereas smoke and saturation considerably degraded the NDVI histogram structure, preventing reliable discrimination of biome types.
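Both indices are simple functions of surface reflectance. The EVI coefficients below are the widely used MODIS-heritage constants (assumed here, not quoted from this study), and the pixel reflectances are hypothetical:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index.
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    # Enhanced vegetation index with MODIS-heritage coefficients; the
    # blue band and the canopy/soil term L reduce aerosol and
    # background effects and delay saturation over dense canopies.
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Hypothetical surface reflectances (NIR, red, blue) for a dense forest
# pixel and a sparse shrubland pixel.
pixels = {"forest": (0.45, 0.03, 0.02), "shrub": (0.25, 0.12, 0.08)}
for name, (nir_r, red_r, blue_r) in pixels.items():
    print("%-6s NDVI=%.2f  EVI=%.2f"
          % (name, ndvi(nir_r, red_r), evi(nir_r, red_r, blue_r)))
```

The forest pixel shows the pattern the study reports: NDVI pushed toward its saturation ceiling while EVI retains headroom.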
Some issues in the statistical analysis of vehicle emissions
DOT National Transportation Integrated Search
2000-09-01
Some of the issues complicating the statistical analysis of vehicle emissions and the effectiveness of emissions control programs are presented in this article. Issues discussed include: the variability of inter- and intra-vehicle emissions; the skew...
Mechanisms controlling the complete accretionary beach state sequence
NASA Astrophysics Data System (ADS)
Dubarbier, Benjamin; Castelle, Bruno; Ruessink, Gerben; Marieu, Vincent
2017-06-01
Accretionary downstate beach sequence is a key element of observed nearshore morphological variability along sandy coasts. We present and analyze the first numerical simulation of such a sequence using a process-based morphodynamic model that solves the coupling between waves, depth-integrated currents, and sediment transport. The simulation evolves from an alongshore uniform barred beach (storm profile) to an almost featureless shore-welded terrace (summer profile) through the highly alongshore variable detached crescentic bar and transverse bar/rip system states. A global analysis of the full sequence allows determining the varying contributions of the different hydro-sedimentary processes. Sediment transport driven by orbital velocity skewness is critical to the overall onshore sandbar migration, while gravitational downslope sediment transport acts as a damping term inhibiting further channel growth enforced by rip flow circulation. Accurate morphological diffusivity and inclusion of orbital velocity skewness opens new perspectives in terms of morphodynamic modeling of real beaches.
Problems in using p-curve analysis and text-mining to detect rate of p-hacking and evidential value
Thompson, Paul A.
2016-01-01
Background. The p-curve is a plot of the distribution of p-values reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication, p-hacking. Methods. p-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking. Results. We show that when there is ghost p-hacking, the shape of the p-curve depends on whether dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the “p-hacking bump” just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when simulated variables are inter-correlated. The way p-curves vary according to features of underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers. Conclusions. The absence of a bump in the p-curve is not indicative of lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value, unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis. In particular, p-hacking with ghost variables is likely to be missed. PMID:26925335
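The ghost-variable simulation can be sketched as follows for uncorrelated dependent variables under the null; the study sizes and variable names are illustrative, not those of the paper's R code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def ghost_hacked_p(n_dvs=5, n_per_group=20):
    """Simulate one 'ghost variable' study under the null: measure
    several uncorrelated DVs, t-test each, and report only the
    smallest p-value (selective reporting)."""
    ps = []
    for _ in range(n_dvs):
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        ps.append(stats.ttest_ind(a, b).pvalue)
    return min(ps)

p_values = np.array([ghost_hacked_p() for _ in range(2000)])

# Far more than 5% of these null studies are "significant", yet the
# bins below .05 show no bump just under the threshold.
print("fraction p < .05: %.3f" % (p_values < 0.05).mean())
bins = np.histogram(p_values, bins=[0, .01, .02, .03, .04, .05])[0]
print("counts in .01-wide bins up to .05:", bins)
```

With five independent DVs the expected null "significance" rate is 1 − 0.95⁵ ≈ 0.23, while the reported-p distribution stays smooth near .05, matching the paper's point that the absence of a bump does not rule out p-hacking.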
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Team clinician variability in return-to-play decisions.
Shultz, Rebecca; Bido, Jennifer; Shrier, Ian; Meeuwisse, Willem H; Garza, Daniel; Matheson, Gordon O
2013-11-01
To describe the variability in the return-to-play (RTP) decisions of experienced team clinicians and to assess their clinical opinion as to the relevance of 19 factors described in a RTP decision-making model. Survey questionnaire. Advanced Team Physician Course. Sixty-seven of 101 sports medicine clinicians completed the questionnaire. Results were analyzed using descriptive statistics. For categorical variables, we report percentage and frequency. For continuous variables, we report mean (SD) if data were approximately normally distributed and frequencies for clinically relevant categories for skewed data. The average number of years of clinical sports medicine experience was 13.6 (9.8). Of the 62 clinicians who responded fully, 35% (n = 22) would "clear" (vs "not clear") an athlete to participate in sport even if the risk of an acute reinjury or long-term sequelae is increased. When respondents were given 6 different RTP options rather than binary choices, there were increased discrepancies across some injury risk scenarios. For example, 8.1% to 16.1% of respondents who chose to clear an athlete when presented with binary choices, later chose to "not clear" an athlete when given 6 graded RTP options. The respondents often considered factors of potential importance to athletes as nonimportant to the RTP decision process if risk of reinjury was unaffected (range, n = 4 [10%] to n = 19 [45%]). There is a high degree of variability in how different clinicians weight the different factors related to RTP decision making. More precise definitions decrease but do not eliminate this variability.
Regression away from the mean: Theory and examples.
Schwarz, Wolf; Reike, Dennis
2018-02-01
Using a standard repeated measures model with arbitrary true score distribution and normal error variables, we present some fundamental closed-form results which explicitly indicate the conditions under which regression effects towards (RTM) and away from the mean are expected. Specifically, we show that for skewed and bimodal distributions many or even most cases will show a regression effect that is in expectation away from the mean, or that is not just towards but actually beyond the mean. We illustrate our results in quantitative detail with typical examples from experimental and biometric applications, which exhibit a clear regression away from the mean ('egression from the mean') signature. We aim not to repeal cautionary advice against potential RTM effects, but to present a balanced view of regression effects, based on a clear identification of the conditions governing the form that regression effects take in repeated measures designs. © 2017 The British Psychological Society.
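A simulation sketch of regression away from the mean under a skewed true-score distribution; the exponential true scores, error SD, and selection band are illustrative assumptions rather than the paper's worked examples:

```python
import numpy as np

rng = np.random.default_rng(5)
# Repeated-measures model: skewed (exponential) true scores plus
# independent normal measurement error at test and retest.
true = rng.exponential(scale=10.0, size=200_000)
test = true + rng.normal(0.0, 5.0, size=true.size)
retest = true + rng.normal(0.0, 5.0, size=true.size)

grand_mean = test.mean()
# Select cases observed in a band just BELOW the grand mean at test.
sel = (test > grand_mean - 5.0) & (test < grand_mean)

move = retest[sel].mean() - test[sel].mean()
print("grand mean:            %.2f" % grand_mean)
print("selected band mean:    %.2f" % test[sel].mean())
print("mean change on retest: %.2f" % move)
# The change is negative: these below-mean cases move even further
# below the grand mean on retest -- regression AWAY from the mean,
# which cannot occur when the true-score distribution is normal.
```

Shifting the selection band above the mean restores the familiar regression toward the mean, illustrating how the direction of the effect depends on where selection falls relative to the skewed true-score distribution.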
Crowding Effects in Vehicular Traffic
Combinido, Jay Samuel L.; Lim, May T.
2012-01-01
While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems. PMID:23139762
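The superballistic-to-subdiffusive transition described above is diagnosed from the exponent α in MSD(t) ∝ t^α. A minimal sketch with toy trajectories (ordinary diffusion and ballistic motion as reference cases, not the paper's traffic model) shows how α can be estimated from a log-log fit:

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(1, 201)                          # time steps

# 500 independent toy "car" trajectories
steps = rng.normal(size=(500, 200))
x_diff = steps.cumsum(axis=1)                  # ordinary diffusion, alpha ~ 1
x_ball = np.outer(rng.normal(size=500), t)     # ballistic motion, alpha = 2

def msd_exponent(x):
    """Fit MSD(t) ~ t^alpha on a log-log scale and return alpha."""
    msd = (x ** 2).mean(axis=0)
    slope, _ = np.polyfit(np.log(t), np.log(msd), 1)
    return slope

alpha_diff = msd_exponent(x_diff)   # ~1: diffusive
alpha_ball = msd_exponent(x_ball)   # ~2: ballistic (crowding pushes alpha below 1)
```

Values of α above 2 mark the superballistic regime and below 1 the subdiffusive regime reached under heavy crowding.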
Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I
2003-01-01
Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (HE) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in HE tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three HE tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of the observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type I error with respect to the set alpha level of 0.01 were 0.80, 4.35 and 7.33 for the original HE test, LRT and Wald test, respectively. For the same alpha level of 0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). 
Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
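The transformation step above is straightforward to reproduce. A minimal sketch (simulated chi2 phenotypes as in the study's skewed scenario; the sample size is illustrative) estimates the Box-Cox lambda by maximum likelihood and checks the reduction in skewness:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# skewed phenotype: chi-square with 2 degrees of freedom, as in the simulations
pheno = rng.chisquare(df=2, size=5000)

# scipy estimates the Box-Cox lambda by maximum likelihood
transformed, lam = stats.boxcox(pheno)

skew_before = stats.skew(pheno)        # ~2 for a chi2(2) distribution
skew_after = stats.skew(transformed)   # near 0 after transformation
```

Note that `scipy.stats.boxcox` requires strictly positive data; phenotypes with zeros or negative values need a shift before transformation.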
Runoff characteristics of California streams
Rantz, S.E.
1972-01-01
California streams exhibit a wide range of runoff characteristics that are related to the climatologic, topographic, and geologic characteristics of the basins they drain. The annual volume of runoff of a stream, expressed in inches, may be large or small, and daily discharge rates may be highly variable or relatively steady. The bulk of the annual runoff may be storm runoff, or snowmelt runoff, or a combination of both. The streamflow may be ephemeral, intermittent, or perennial; if perennial, base flow may be well sustained or poorly sustained. In this report the various runoff characteristics are identified by numerical index values. They are shown to be related generally to mean annual precipitation, altitude, latitude, and location with respect to the 11 geomorphic provinces in the California Region. With respect to mean annual precipitation on the watershed, streamflow is generally (1) ephemeral if the mean annual precipitation is less than 10 inches, (2) intermittent if the mean annual precipitation is between 10 and 40 inches, and (3) perennial if the mean annual precipitation is more than 40 inches. Departures from those generalizations are associated with (a) the areal variation of such geologic factors as the infiltration and storage capacities of the rocks underlying the watersheds, and (b) the areal variation of evapotranspiration loss as influenced by varying conditions of climate, soil, vegetal cover, and geologic structure. Latitude and altitude determine the proportion of the winter precipitation that will be stored for subsequent runoff in the late spring and summer. In general, if a watershed has at least 30 percent of its area above the normal altitude of the snowline on April 1, it will have significant snowmelt runoff. Snowmelt runoff in California is said to be significant if at least 30 percent of the annual runoff occurs during the 4 months, April through July. 
Storm runoff is said to be predominant if at least 65 percent of the annual runoff occurs during the 6 months, October through March. Base flow (ground-water outflow), as a factor in the regimen of streamflow, is qualified on the basis of the percentage of the mean annual runoff that occurs during the fair-weather months of August and September. If the sum of the August and September runoff exceeds 3.0 percent of the annual runoff, base flow is considered to be well sustained; if the percentage is between 1.5 and 3.0, base flow is considered to be fairly well sustained; if the percentage is less than 1.5, base flow is considered to be poorly sustained. The characteristics of duration curves of daily streamflow are influenced by the regimen of runoff. The distribution of daily flow is skewed for all streams, but it is more skewed for streams whose flow is predominantly storm runoff than for streams that carry significantly large quantities of snowmelt. Least skewed is the distribution for streams that carry large quantities of base flow. Either of two characteristics of the duration curve may be used as an index of skew--the percentage of time that the mean discharge is equaled or exceeded, or the ratio of the median discharge to the mean discharge. As for variability of daily discharge, the variability of storm-runoff streams is greater than that of snowmelt streams, and the lowest values of variability are associated with streams that carry large quantities of base flow. The index of variability used in this study was the ratio of the discharge equaled or exceeded 10 percent of the time to the discharge equaled or exceeded 90 percent of the time. The identification of streamflow characteristics by numerical index figures greatly facilitates comparison of the diverse runoff regimens of streams in the California Region.
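The duration-curve indices described above are easy to compute from a daily discharge record. A minimal sketch (a synthetic lognormal year of flows standing in for a storm-runoff stream; all parameters are illustrative, not values from the report):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic year of daily discharges, right-skewed like storm-runoff streams
q = rng.lognormal(mean=3.0, sigma=1.2, size=365)

mean_q = q.mean()
median_q = np.median(q)

# skew indices: both are smaller for more strongly skewed flow distributions
skew_index = median_q / mean_q                    # ratio of median to mean discharge
pct_exceed_mean = (q >= mean_q).mean() * 100      # % of time mean discharge is equaled or exceeded

# variability index: Q10 / Q90 (discharges exceeded 10% and 90% of the time)
q10 = np.quantile(q, 0.90)
q90 = np.quantile(q, 0.10)
variability_index = q10 / q90
```

For a right-skewed record the median falls below the mean, so both skew indices drop below their symmetric-distribution values (ratio 1, 50 percent), while the variability index grows with flashiness.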
Manual choice reaction times in the rate-domain
Harris, Christopher M.; Waddington, Jonathan; Biscione, Valerio; Manzi, Sean
2014-01-01
Over the last 150 years, human manual reaction times (RTs) have been recorded countless times. Yet, our understanding of them remains remarkably poor. RTs are highly variable with positively skewed frequency distributions, often modeled as an inverse Gaussian distribution reflecting a stochastic rise to threshold (diffusion process). However, latency distributions of saccades are very close to the reciprocal Normal, suggesting that “rate” (reciprocal RT) may be the more fundamental variable. We explored whether this phenomenon extends to choice manual RTs. We recorded two-alternative choice RTs from 24 subjects, each with 4 blocks of 200 trials with two task difficulties (easy vs. difficult discrimination) and two instruction sets (urgent vs. accurate). We found that rate distributions were, indeed, very close to Normal, shifting to lower rates with increasing difficulty and accuracy, and for some blocks they appeared to become left-truncated, but still close to Normal. Using autoregressive techniques, we found temporal sequential dependencies for lags of at least 3. We identified a transient and steady-state component in each block. Because rates were Normal, we were able to estimate autoregressive weights using the Box-Jenkins technique, and convert to a moving average model using z-transforms to show explicit dependence on stimulus input. We also found a spatial sequential dependence for the previous 3 lags depending on whether the laterality of previous trials was repeated or alternated. This was partially dissociated from temporal dependency as it only occurred in the easy tasks. We conclude that 2-alternative choice manual RT distributions are close to reciprocal Normal and not the inverse Gaussian. This is not consistent with stochastic rise to threshold models, and we propose a simple optimality model in which reward is maximized to yield an optimal rate, and hence an optimal time to respond. We discuss how it might be implemented. PMID:24959134
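The core claim, that rate (1/RT) is the Normal variable while RT itself is skewed, can be illustrated with a quick simulation (our own sketch with illustrative parameters, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# simulate "rates" (reciprocal RTs, in 1/s) as Normal, per the reciprocal-Normal account
rate = rng.normal(loc=4.0, scale=0.6, size=5000)
rate = rate[rate > 0.5]        # discard implausible near-zero/negative draws
rt = 1.0 / rate                # the corresponding manual RTs, in seconds

skew_rate = stats.skew(rate)   # ~0: rates are symmetric
skew_rt = stats.skew(rt)       # clearly positive: RTs inherit the familiar right skew
```

The reciprocal of a Normal variable with coefficient of variation ε has skewness of roughly 6ε, which is why even modest rate variability produces the characteristic positively skewed RT distribution.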
Analyzing repeated measures semi-continuous data, with application to an alcohol dependence study.
Liu, Lei; Strawderman, Robert L; Johnson, Bankole A; O'Quigley, John M
2016-02-01
Two-part random effects models (Olsen and Schafer,(1) Tooze et al.(2)) have been applied to repeated measures of semi-continuous data, characterized by a mixture of a substantial proportion of zero values and a skewed distribution of positive values. In the original formulation of this model, the natural logarithm of the positive values is assumed to follow a normal distribution with a constant variance parameter. In this article, we review and consider three extensions of this model, allowing the positive values to follow (a) a generalized gamma distribution, (b) a log-skew-normal distribution, and (c) a normal distribution after the Box-Cox transformation. We allow for the possibility of heteroscedasticity. Maximum likelihood estimation is shown to be conveniently implemented in SAS Proc NLMIXED. The performance of the methods is compared through applications to daily drinking records in a secondary data analysis from a randomized controlled trial of topiramate for alcohol dependence treatment. We find that all three models provide a significantly better fit than the log-normal model, and there exists strong evidence for heteroscedasticity. We also compare the three models by the likelihood ratio tests for non-nested hypotheses (Vuong(3)). The results suggest that the generalized gamma distribution provides the best fit, though no statistically significant differences are found in pairwise model comparisons. © The Author(s) 2012.
Box–Cox Transformation and Random Regression Models for Fecal egg Count Data
da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P.; Sonstegard, Tad S.; Cobuci, Jaime Araujo; Gasbarre, Louis C.
2012-01-01
Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box–Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box–Cox transformation family was effective in reducing the skewness and kurtosis, and dramatically increased estimates of heritability, and measurements of FEC obtained in the period between 12 and 26 weeks in a 26-week experimental challenge period are genetically correlated. PMID:22303406
The statistical properties and possible causes of polar motion prediction errors
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria
2015-08-01
The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by looking at the time series of differences between them and the future IERS pole coordinates data. The mean absolute errors, standard deviations as well as the skewness and kurtosis of these differences were computed together with their error bars as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with the prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Nonzero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by the combination of the Fourier transform band pass filter and the Hilbert transform from pole coordinates data as well as from pole coordinates model data obtained from fluid excitations are in good agreement.
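Sample skewness and excess kurtosis with bootstrap error bars, the diagnostics used above, can be computed as follows (a generic sketch on a heavy-tailed stand-in sample, not EOPCPPP data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# stand-in for prediction-minus-observed differences at one prediction length
diff = rng.laplace(loc=0.0, scale=1.0, size=2000)   # leptokurtic example

skew = stats.skew(diff)
kurt = stats.kurtosis(diff)   # excess kurtosis: 0 for normal, >0 leptokurtic, <0 platykurtic

# bootstrap error bars for both higher moments
boot = rng.choice(diff, size=(1000, diff.size), replace=True)
skew_se = stats.skew(boot, axis=1).std()
kurt_se = stats.kurtosis(boot, axis=1).std()
```

Repeating this at each prediction length (and comparing the moments with their error bars against the normal-distribution values of zero) reproduces the kind of normality check described in the abstract.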
Clark, Jeremy S C; Kaczmarczyk, Mariusz; Mongiało, Zbigniew; Ignaczak, Paweł; Czajkowski, Andrzej A; Klęsk, Przemysław; Ciechanowicz, Andrzej
2013-08-01
Gompertz-related distributions have dominated mortality studies for 187 years. However, nonrelated distributions also fit well to mortality data. These compete with the Gompertz and Gompertz-Makeham distributions when applied to data with varying extents of truncation, with no consensus as to preference. In contrast, Gaussian-related distributions are rarely applied, despite the fact that Lexis in 1879 suggested that the normal distribution itself fits well to the right of the mode. Study aims were therefore to compare skew-t fits to Human Mortality Database data with Gompertz-nested distributions, by implementing maximum likelihood estimation functions (mle2, R package bbmle; coding given). Results showed skew-t fits obtained lower Bayesian information criterion values than Gompertz-nested distributions when applied to low-mortality country data, including 1711 and 1810 cohorts. As Gaussian-related distributions have now been found to have almost universal application in error theory, one conclusion could be that a Gaussian-related distribution might replace Gompertz-related distributions as the basis for mortality studies.
NASA Astrophysics Data System (ADS)
Skrypnyk, T.
2017-08-01
We study the problem of separation of variables for classical integrable Hamiltonian systems governed by non-skew-symmetric non-dynamical so(3) ⊗ so(3)-valued elliptic r-matrices with spectral parameters. We consider several examples of such models, and perform separation of variables for classical anisotropic one- and two-spin Gaudin-type models in an external magnetic field, and for Jaynes-Cummings-Dicke-type models without the rotating wave approximation.
Counihan, T.D.; Miller, Allen I.; Parsley, M.J.
1999-01-01
The development of recruitment monitoring programs for age-0 white sturgeons Acipenser transmontanus is complicated by the statistical properties of catch-per-unit-effort (CPUE) data. We found that age-0 CPUE distributions from bottom trawl surveys violated assumptions of statistical procedures based on normal probability theory. Further, no single data transformation uniformly satisfied these assumptions because CPUE distribution properties varied with the sample mean (μ(CPUE)). Given these analytic problems, we propose that an additional index of age-0 white sturgeon relative abundance, the proportion of positive tows (Ep), be used to estimate sample sizes before conducting age-0 recruitment surveys and to evaluate statistical hypothesis tests comparing the relative abundance of age-0 white sturgeons among years. Monte Carlo simulations indicated that Ep was consistently more precise than μ(CPUE), and because Ep is binomially rather than normally distributed, surveys can be planned and analyzed without violating the assumptions of procedures based on normal probability theory. However, we show that Ep may underestimate changes in relative abundance at high levels and confound our ability to quantify responses to management actions if relative abundance is consistently high. If data suggest that most samples will contain age-0 white sturgeons, estimators of relative abundance other than Ep should be considered. Because Ep may also obscure correlations to climatic and hydrologic variables if high abundance levels are present in time series data, we recommend μ(CPUE) be used to describe relations to environmental variables. The use of both Ep and μ(CPUE) will facilitate the evaluation of hypothesis tests comparing relative abundance levels and correlations to variables affecting age-0 recruitment.
Estimated sample sizes for surveys should therefore be based on detecting predetermined differences in Ep, but data necessary to calculate μ(CPUE) should also be collected.
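Because Ep is a binomial proportion, the survey sample size can be planned with the standard two-sided normal-approximation formula for comparing two proportions. A sketch (the proportions, alpha and power below are illustrative, not values from the sturgeon surveys):

```python
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Tows per survey needed to detect a change in the proportion of
    positive tows from p1 to p2 (two-sided two-proportion z-test)."""
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (za * math.sqrt(2 * pbar * (1 - pbar))
           + zb * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

n_small_change = n_per_group(0.4, 0.6)   # modest change in Ep needs more tows
n_large_change = n_per_group(0.3, 0.7)   # larger change needs far fewer
```

As expected, the required number of tows falls sharply as the detectable difference in Ep grows, which is what makes Ep convenient for pre-survey planning.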
NASA Astrophysics Data System (ADS)
Poveda, Germán; Jaramillo, Alvaro; Gil, Marta María; Quiceno, Natalia; Mantilla, Ricardo I.
2001-08-01
An analysis of hydrologic variability in Colombia shows different seasonal effects associated with El Niño/Southern Oscillation (ENSO) phenomenon. Spectral and cross-correlation analyses are developed between climatic indices of the tropical Pacific Ocean and the annual cycle of Colombia's hydrology: precipitation, river flows, soil moisture, and the Normalized Difference Vegetation Index (NDVI). Our findings indicate stronger anomalies during December-February and weaker during March-May. The effects of ENSO are stronger for streamflow than for precipitation, owing to concomitant effects on soil moisture and evapotranspiration. We studied time variability of 10-day average volumetric soil moisture, collected at the tropical Andes of central Colombia at depths of 20 and 40 cm, in coffee growing areas characterized by shading vegetation ("shaded coffee"), forest, and sunlit coffee. The annual and interannual variability of soil moisture are highly intertwined for the period 1997-1999, during strong El Niño and La Niña events. Soil moisture exhibited greater negative anomalies during 1997-1998 El Niño, being strongest during the two dry seasons that normally occur in central Colombia. Soil moisture deficits were more drastic at zones covered by sunlit coffee than at those covered by forest and shaded coffee. Soil moisture responds to wetter than normal precipitation conditions during La Niña 1998-1999, reaching maximum levels throughout that period. The probability density function of soil moisture records is highly skewed and exhibits different kinds of multimodality depending upon land cover type. NDVI exhibits strong negative anomalies throughout the year during El Niños, in particular during September-November (year 0) and June-August (year 0). The strong negative relation between NDVI and El Niño has enormous implications for carbon, water, and energy budgets over the region, including the tropical Andes and Amazon River basin.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
Semi-nonparametric VaR forecasts for hedge funds during the recent crisis
NASA Astrophysics Data System (ADS)
Del Brio, Esther B.; Mora-Valencia, Andrés; Perote, Javier
2014-05-01
The need to provide accurate value-at-risk (VaR) forecasting measures has triggered an important literature in econophysics. Although these accurate VaR models and methodologies are particularly demanded by hedge fund managers, few articles have been specifically devoted to implementing new techniques in hedge fund returns VaR forecasting. This article advances these issues by comparing the performance of risk measures based on parametric distributions (the normal, Student’s t and skewed-t), semi-nonparametric (SNP) methodologies based on Gram-Charlier (GC) series and the extreme value theory (EVT) approach. Our results show that normal-, Student’s t- and skewed-t-based methodologies fail to forecast hedge fund VaR, whilst the SNP and EVT approaches succeed in doing so accurately. We extend these results to the multivariate framework by providing an explicit formula for the GC copula and its density that encompasses the Gaussian copula and accounts for non-linear dependences. We show that the VaR obtained by the meta GC accurately captures portfolio risk and outperforms regulatory VaR estimates obtained through the meta Gaussian and Student’s t distributions.
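The basic point, that a Gaussian VaR understates tail risk for fat-tailed returns while a fat-tailed parametric fit does not, can be seen in a small sketch (simulated Student's t returns as a crude stand-in for hedge fund data; all parameters are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# simulated fat-tailed daily returns (Student's t with 5 degrees of freedom)
ret = stats.t.rvs(df=5, size=20_000, random_state=rng) * 0.01

alpha = 0.99
mu, sigma = ret.mean(), ret.std(ddof=1)

# Gaussian 99% VaR: quantile of a fitted normal distribution
var_normal = -(mu + sigma * stats.norm.ppf(1 - alpha))

# Student's t 99% VaR: quantile of a fitted t distribution
df_, loc_, scale_ = stats.t.fit(ret)
var_t = -stats.t.ppf(1 - alpha, df_, loc=loc_, scale=scale_)
```

The fitted t-based VaR exceeds the Gaussian one because the normal distribution, matched only on mean and variance, puts too little mass in the far left tail; the SNP and EVT methods of the article address the same deficiency with more flexible tail modeling.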
Extremal optimization for Sherrington-Kirkpatrick spin glasses
NASA Astrophysics Data System (ADS)
Boettcher, S.
2005-08-01
Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduce the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support, to less than 1% accuracy, rational values of ω=2/3 for the finite-size correction exponent and of ρ=3/4 for the fluctuation exponent of the ground state energies, neither of which has been obtained analytically yet. The probability density function for ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. But comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.
Shape Analysis of the Peripapillary RPE Layer in Papilledema and Ischemic Optic Neuropathy
Kupersmith, Mark J.; Rohlf, F. James
2011-01-01
Purpose. Geometric morphometrics (GM) was used to analyze the shape of the peripapillary retinal pigment epithelium–Bruch's membrane (RPE/BM) layer imaged on the SD-OCT 5-line raster in normal subjects and in patients with papilledema and ischemic optic neuropathy. Methods. Three groups of subjects were compared: 30 normals, 20 with anterior ischemic optic neuropathy (AION), and 25 with papilledema and intracranial hypertension. Twenty equidistant semilandmarks were digitized on OCT images of the RPE/BM layer spanning 2500 μm on each side of the neural canal opening (NCO). The data were analyzed using standard GM techniques, including a generalized least-squares Procrustes superimposition, principal component analysis, thin-plate spline (to visualize deformations), and permutation statistical analysis to evaluate differences in shape variables. Results. The RPE/BM layer in normals and AION have a characteristic V shape pointing away from the vitreous; the RPE/BM layer in papilledema has an inverted U shape, skewed nasally inward toward the vitreous. The differences were statistically significant. There was no significant difference in shapes between normals and AION. Pre- and posttreatment OCTs, in select cases of papilledema, showed that the inverted U-shaped RPE/BM moved posteriorly into a normal V shape as the papilledema resolved with weight loss or shunting. Conclusions. The shape difference in papilledema, absent in AION, cannot be explained by disc edema alone. The difference is a consequence of both the translaminar pressure gradient and the material properties of the peripapillary sclera. GM offers a novel way of statistically assessing shape differences of the peripapillary optic nerve head. PMID:21896851
Bayesian models for cost-effectiveness analysis in the presence of structural zero costs
Baio, Gianluca
2014-01-01
Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature, in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We presented the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:24343868
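The two-part idea, one model for the probability of a null cost and another for the positive costs, can be sketched in a minimal frequentist form (simulated data with an assumed 30% structural-zero rate and log-normal positive costs; this is not the paper's full Bayesian specification):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
zero = rng.random(n) < 0.3                     # structural zeros with probability 0.3
cost = np.where(zero, 0.0, rng.lognormal(mean=7.0, sigma=1.0, size=n))

# part 1: probability of a null cost
p_zero = (cost == 0).mean()

# part 2: log-normal model for the positive (skewed) costs
pos = cost[cost > 0]
mu = np.log(pos).mean()
sigma = np.log(pos).std(ddof=1)

# overall mean cost implied by the hurdle model vs. the raw sample mean
mean_cost_model = (1 - p_zero) * np.exp(mu + 0.5 * sigma ** 2)
mean_cost_empirical = cost.mean()
```

A full cost-effectiveness model would add covariates to both parts and a conditional model for effectiveness given cost, as in the Bayesian specification described above.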
Ohmaru, Natsuki; Nakatsu, Takaaki; Izumi, Reishi; Mashima, Keiichi; Toki, Misako; Kobayashi, Asako; Ogawa, Hiroko; Hirohata, Satoshi; Ikeda, Satoru; Kusachi, Shozo
2011-01-01
Even high-normal albuminuria is reportedly associated with cardiovascular events. We determined the urine albumin creatinine ratio (UACR) in spot urine samples and analyzed the UACR distribution and the prevalence of high-normal levels. The UACR was determined using immunoturbidimetry in 332 untreated asymptomatic non-diabetic Japanese patients with hypertension and in 69 control subjects. The microalbuminuria and macroalbuminuria levels were defined as a UACR ≥30 and <300 µg/mg·creatinine and a UACR ≥300 µg/mg·creatinine, respectively. The distribution patterns showed a highly skewed distribution for the lower levels, and a common logarithmic transformation produced a close fit to a Gaussian distribution with median, 25th and 75th percentile values of 22.6, 13.5 and 48.2 µg/mg·creatinine, respectively. When a high-normal UACR was set at >20 to <30 µg/mg·creatinine, 19.9% (66/332) of the hypertensive patients exhibited a high-normal UACR. Microalbuminuria and macroalbuminuria were observed in 36.1% (120/332) and 2.1% (7/332) of the patients, respectively. UACR was significantly correlated with the systolic and diastolic blood pressures and the pulse pressure. A stepwise multivariate analysis revealed that these pressures as well as age were independent factors that increased UACR. The UACR distribution exhibited a highly skewed pattern, with approximately 60% of untreated, non-diabetic hypertensive patients exhibiting a high-normal or larger UACR. Both hypertension and age are independent risk factors that increase the UACR. The present study indicated that a considerable percentage of patients require anti-hypertensive drugs with antiproteinuric effects at the start of treatment.
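The log transformation used above is easy to reproduce. A sketch with synthetic UACR values (a log-normal with the reported median of 22.6 µg/mg·creatinine and an assumed log-scale SD chosen to roughly match the reported quartiles; not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# synthetic UACR values (ug/mg creatinine): log-normal stand-in for the skewed distribution
uacr = rng.lognormal(mean=np.log(22.6), sigma=0.95, size=3000)

skew_raw = stats.skew(uacr)             # strongly right-skewed on the raw scale
skew_log = stats.skew(np.log10(uacr))   # near zero after log10, i.e. close to Gaussian

# fraction at high-normal (>20) or albuminuric levels
frac_high_normal_or_more = (uacr > 20).mean()
```

With a median near 22.6, a bit over half of the synthetic values exceed the high-normal cutoff of 20, consistent in spirit with the roughly 60% figure reported for the hypertensive patients.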
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using a Kalman filter and a particle filter, respectively, which is more computationally efficient than using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, thereby achieving time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco
2016-03-01
We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, provides great flexibility and easy handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model with a log-link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by the comparison of the R2 and the root mean squared error (RMSE). RMSE of the log-transformed OLS model was computed with three different methods of bias-correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs.
Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. As a result of the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant. Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed.
Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by nonparametric methods that are robust against deviations from normality and homoscedasticity of the residuals is a suitable alternative to transformation of the skewed dependent cost variable.
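The retransformation problem discussed above can be sketched with Duan's smearing estimator, a standard nonparametric bias correction for predictions from a log-transformed OLS model. The simulated cost data below are illustrative, not the Leipzig sample.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 1, n)
# Simulated skewed cost data: lognormal errors around a log-linear mean
y = np.exp(1.0 + 2.0 * x + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)   # log-OLS fit
resid = np.log(y) - X @ beta

naive = np.exp(X @ beta)                               # biased retransformation
smear = np.exp(X @ beta) * np.exp(resid).mean()        # Duan smearing correction

print(round(y.mean()), round(naive.mean()), round(smear.mean()))
```

Naively exponentiating the log-scale predictions systematically underestimates mean costs; multiplying by the mean of the exponentiated residuals removes most of that bias without assuming normal errors.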
NASA Astrophysics Data System (ADS)
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2017-02-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
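The two-coefficient roughness model discussed above can be written generically as z0 = a·σ·(1 + Sk)^b, with one empirical coefficient per topographic moment. The function below is a sketch of that functional form only; the coefficients a and b are placeholders for illustration, not the values fitted by Flack and Schultz or by this study.

```python
def roughness_length(sigma, skewness, a=0.5, b=1.0):
    """Generic two-coefficient roughness-length model: z0 scales with the
    topography's standard deviation (sigma) and grows with its skewness.
    Kurtosis does not enter, consistent with the finding that it is
    negligible. a and b are hypothetical placeholder coefficients."""
    return a * sigma * (1.0 + skewness)**b

# z0 increases with either moment; doubling sigma doubles z0 for fixed skewness
print(roughness_length(1.0, 0.0), roughness_length(1.0, 0.5), roughness_length(2.0, 0.0))
```

The study's observation that the skewness coefficient varies with standard deviation over certain ranges would correspond to letting b (or a) depend on sigma rather than remain constant.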
Fowler, Mike S; Ruokolainen, Lasse
2013-01-01
The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample skewness and kurtosis and decreasing mean kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations.
We must let the characteristics of known natural environmental covariates (e.g., colour and distribution shape) guide us in our choice of how to best model the impact of coloured environmental variation on population dynamics.
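A minimal sketch of the AR(1) colouring discussed above: the filter below generates a unit-variance series whose lag-1 autocorrelation is set by kappa, after which the realised autocorrelation and sample skewness can be inspected. The parameter values are illustrative.

```python
import numpy as np

def ar1_series(n, kappa, rng):
    """Coloured (autocorrelated) noise via an AR(1) filter:
    x[t] = kappa * x[t-1] + sqrt(1 - kappa^2) * e[t],
    which keeps unit variance; kappa > 0 reddens, kappa < 0 blues."""
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = kappa * x[t - 1] + np.sqrt(1 - kappa**2) * e[t]
    return x

def sample_skewness(x):
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean()**1.5

rng = np.random.default_rng(2)
red = ar1_series(1024, 0.9, rng)
lag1 = np.corrcoef(red[:-1], red[1:])[0, 1]   # realised lag-1 autocorrelation
print(round(lag1, 2), round(sample_skewness(red), 2))
```

Because strong autocorrelation shrinks the effective sample size, the sample skewness of such finite series is far more variable than for white noise, which is the finite-time-scale distribution effect the abstract describes.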
Patterns in food intake correlate with body mass index.
Periwal, Vipul; Chow, Carson C
2006-11-01
Quantifying eating behavior may give clues to both the physiological and behavioral mechanisms behind weight regulation. We analyzed year-long dietary records of 29 stable-weight subjects. The records showed wide daily variations of food intake. We computed the temporal autocorrelation and skewness of food intake mass, energy, carbohydrate, fat, and protein. We also computed the cross-correlation coefficient between intake mass and intake energy. The mass of the food intake exhibited long-term trends that were positively skewed, with wide variability among individuals. The average duration of the trends (P = 0.003) and the skewness (P = 0.006) of the food intake mass were significantly correlated with mean body mass index (BMI). We also found that the lower the correlation coefficient between the energy content and the mass of food intake, the higher the BMI. Our results imply that humans in neutral energy balance eating ad libitum exhibit a long-term positive bias in the food intake that operates partially through the mass of food eaten to defend against eating too little more vigorously than eating too much.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
... various pollution controls in its BART analysis for Big Stone I, its cost impact analysis is skewed in... took into account the State's consideration of environmental impacts when reviewing the Big Stone I SO... shutdown are part of normal operations at facilities like Big Stone, and because these emissions impact...
Evaluation of a New Mean Scaled and Moment Adjusted Test Statistic for SEM
ERIC Educational Resources Information Center
Tong, Xiaoxiao; Bentler, Peter M.
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and 2 well-known robust test…
ERIC Educational Resources Information Center
Sengul Avsar, Asiye; Tavsancil, Ezel
2017-01-01
This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets--three different test lengths (10, 20 and 30 items), three sample distributions (normal, right and left skewed) and three samples sizes (100, 250 and 500)--were generated by conducting 20…
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-01-30
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions.
Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs. 
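The selection rule described above can be sketched as a grid search over the Box-Cox parameter, keeping the λ whose transformed sample maximises the Shapiro-Wilk P-value (assuming scipy is available). The lognormal toy sample stands in for tumour SUVmax values; for truly lognormal data the optimal λ should land near 0, i.e. the log transformation.

```python
import numpy as np
from scipy import stats

def best_boxcox_lambda(x, lambdas=np.linspace(-2, 2, 81)):
    """Grid-search the Box-Cox parameter, selecting the lambda whose
    transformed data maximise the Shapiro-Wilk P-value (the selection
    rule described in the abstract)."""
    best_lam, best_p = None, -1.0
    for lam in lambdas:
        # Box-Cox: (x^lam - 1)/lam, with the log transform as the lam -> 0 limit
        z = np.log(x) if abs(lam) < 1e-12 else (x**lam - 1.0) / lam
        p = stats.shapiro(z).pvalue
        if p > best_p:
            best_lam, best_p = lam, p
    return best_lam, best_p

rng = np.random.default_rng(3)
suv = rng.lognormal(mean=1.0, sigma=0.6, size=60)  # skewed SUV-like toy sample
lam, p = best_boxcox_lambda(suv)
print(round(lam, 2), round(p, 3))
```

When the data are not lognormal, the search will settle on a λ away from zero, which is exactly the situation where the abstract finds the blind log transformation inadequate.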
NASA Astrophysics Data System (ADS)
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-02-01
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. 
Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
Scoring in genetically modified organism proficiency tests based on log-transformed results.
Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P
2006-01-01
The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
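The scoring approach described above (log-transform, then z-score) can be sketched as follows. The assigned value and log-scale standard deviation used here are illustrative stand-ins for a scheme's actual proficiency-assessment parameters.

```python
import numpy as np

def log_z_scores(results, assigned_value, sigma_log):
    """z-scores computed on a log scale: effective when raw proficiency
    results follow a positively skewed (near-lognormal) distribution.
    assigned_value and sigma_log stand in for the scheme's assigned
    value and standard deviation for proficiency assessment."""
    return (np.log(results) - np.log(assigned_value)) / sigma_log

rng = np.random.default_rng(4)
# Toy round: lognormally distributed reported results around the assigned value
reported = rng.lognormal(mean=np.log(100.0), sigma=0.2, size=40)
z = log_z_scores(reported, assigned_value=100.0, sigma_log=0.2)
# On the log scale the scores are near-symmetric around zero
print(round(z.mean(), 2), round(float(np.abs(z).max()), 1))
```

Scoring on the log scale restores the usual |z| < 2 / |z| > 3 interpretation that would be misleading if applied directly to the positively skewed raw results.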
NASA Astrophysics Data System (ADS)
Ali, Naseem; Aseyev, A.; McCraney, J.; Vuppuluri, V.; Abbass, O.; Al Jubaree, T.; Melius, M.; Cal, R. B.
2014-11-01
Hot-wire measurements obtained in a 3 × 3 wind turbine array boundary layer are utilized to analyze higher order statistics which include skewness, kurtosis as well as the ratios of structure functions and spectra. The ratios consist of wall-normal to streamwise components for both quantities. The aim is to understand the degree of anisotropy in the flow for the near- and far-wakes of the flow field where profiles at one diameter and five diameters are considered, respectively. The skewness at the top tip is negative for both wakes, while below the turbine canopy these terms are positive. The kurtosis shows a Gaussian behavior in the near-wake immediately at hub-height. In addition, the effect due to the passage of the rotor in tandem with the shear layer at the top tip renders relatively high differences in the fourth order moment. The second order structure function and spectral ratios are found to exhibit anisotropic behavior at the top and bottom-tips for the large scales. Mixed structure functions and co-spectra are also considered in the context of isotropy.
Daskivich, Timothy; Luu, Michael; Noah, Benjamin; Fuller, Garth; Anger, Jennifer; Spiegel, Brennan
2018-05-09
Health care consumers are increasingly using online ratings to select providers, but differences in the distribution of scores across specialties and skew of the data have the potential to mislead consumers about the interpretation of ratings. The objective of our study was to determine whether distributions of consumer ratings differ across specialties and to provide specialty-specific data to assist consumers and clinicians in interpreting ratings. We sampled 212,933 health care providers rated on the Healthgrades consumer ratings website, representing 29 medical specialties (n=128,678), 15 surgical specialties (n=72,531), and 6 allied health (nonmedical, nonnursing) professions (n=11,724) in the United States. We created boxplots depicting distributions and tested the normality of overall patient satisfaction scores. We then determined the specialty-specific percentile rank for scores across groupings of specialties and individual specialties. Allied health providers had higher median overall satisfaction scores (4.5, interquartile range [IQR] 4.0-5.0) than physicians in medical specialties (4.0, IQR 3.3-4.5) and surgical specialties (4.2, IQR 3.6-4.6, P<.001). Overall satisfaction scores were highly left skewed (a skewness between -0.5 and 0.5 indicates an approximately symmetric distribution) for all specialties, but skewness was greatest among allied health providers (-1.23, 95% CI -1.280 to -1.181), followed by surgical (-0.77, 95% CI -0.787 to -0.755) and medical specialties (-0.64, 95% CI -0.648 to -0.628). As a result of the skewness, the percentages of overall satisfaction scores less than 4 were only 23% for allied health, 37% for surgical specialties, and 50% for medical specialties. Percentile ranks for overall satisfaction scores varied across specialties; percentile ranks for scores of 2 (0.7%, 2.9%, 0.8%), 3 (5.8%, 16.6%, 8.1%), 4 (23.0%, 50.3%, 37.3%), and 5 (63.9%, 89.5%, 86.8%) differed for allied health, medical specialties, and surgical specialties, respectively. 
Online consumer ratings of health care providers are highly left skewed, fall within narrow ranges, and differ by specialty, which precludes meaningful interpretation by health care consumers. Specialty-specific percentile ranks may help consumers to more meaningfully assess online physician ratings. ©Timothy Daskivich, Michael Luu, Benjamin Noah, Garth Fuller, Jennifer Anger, Brennan Spiegel. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 09.05.2018.
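A specialty-specific percentile rank, as proposed above, is simply the fraction of providers in the same specialty scoring below a given value. The left-skewed toy ratings below are simulated for illustration, not Healthgrades data.

```python
import numpy as np

def percentile_rank(scores, value):
    """Percentage of providers scoring strictly below `value` -- the
    specialty-specific percentile rank discussed in the abstract."""
    scores = np.asarray(scores, dtype=float)
    return 100.0 * np.mean(scores < value)

rng = np.random.default_rng(5)
# Left-skewed toy ratings clipped to the 1-5 scale
ratings = np.clip(5.0 - rng.gamma(shape=2.0, scale=0.4, size=10_000), 1.0, 5.0)
for v in (3.0, 4.0):
    print(v, round(percentile_rank(ratings, v), 1))
```

On a left-skewed scale a raw score of 4 can sit well below the median, so reporting the percentile rank within the specialty conveys far more than the raw star rating.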
NASA Astrophysics Data System (ADS)
Zhang, Hua; Harter, Thomas; Sivakumar, Bellie
2006-06-01
Facies-based geostatistical models have become important tools for analyzing flow and mass transport processes in heterogeneous aquifers. Yet little is known about the relationship between these latter processes and the parameters of facies-based geostatistical models. In this study, we examine the transport of a nonpoint source solute normal (perpendicular) to the major bedding plane of an alluvial aquifer medium that contains multiple geologic facies, including interconnected, high-conductivity (coarse textured) facies. We also evaluate the dependence of the transport behavior on the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system's hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute traveltime probability density function (pdf) for solute flux from the water table to the bottom boundary (the production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models, with mean length anisotropy ratios for horizontal to vertical facies, ek, from 25:1 to 300:1 and with a wide range of facies volume proportions (e.g., from 5 to 95% coarse-textured facies). Predictions of traveltime pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer. Those predictions of traveltime pdfs also are affected by the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and, to a lesser degree, the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, traveltime is not lognormally distributed as is often assumed. Also, macrodispersive behavior (variance of the traveltime) is found not to be a unique function of the conductivity variance. 
For the parameter range examined, the third moment of the traveltime pdf varies from negatively skewed to strongly positively skewed. We also show that the Markov chain approach may give significantly different traveltime distributions when compared to the more commonly used Gaussian random field approach, even when the first- and second-order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport, and uncertainty about that choice must be considered in evaluating the results.
Masoumi, Hamid Reza Fard; Basri, Mahiran; Kassim, Anuar; Abdullah, Dzulkefly Kuang; Abdollahi, Yadollah; Abd Gani, Siti Salwa; Rezaee, Malahat
2013-01-01
Lipase-catalyzed production of triethanolamine-based esterquat by esterification of oleic acid (OA) with triethanolamine (TEA) in n-hexane was performed in a 2 L stirred-tank reactor. A set of experiments was designed by central composite design for process modeling and statistical evaluation of the findings. Five independent process variables, including enzyme amount, reaction time, reaction temperature, substrate molar ratio of OA to TEA, and agitation speed, were studied under the given conditions designed by Design Expert software. Experimental data were examined for normality before the data processing stage, and skewness and kurtosis indices were determined. The mathematical model developed was found to be adequate and statistically accurate to predict the optimum conversion of product. Response surface methodology with central composite design gave the best performance in this study, and the methodology as a whole has been proven to be adequate for the design and optimization of the enzymatic process.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis
Lin, Johnny; Bentler, Peter M.
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and the Satorra-Bentler mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application in the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511
Is Coefficient Alpha Robust to Non-Normal Data?
Sheng, Yanyan; Sheng, Zhaohui
2011-01-01
Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
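For reference, a minimal sketch of the sample coefficient alpha on a subjects-by-items score matrix. The one-factor simulated data below (true alpha is about 0.83 under these settings) are illustrative, not the paper's Monte Carlo design.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha from an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(6)
n, k = 1000, 5
true_score = rng.normal(size=(n, 1))                     # common true score
items = true_score + rng.normal(scale=1.0, size=(n, k))  # item-specific error
print(round(cronbach_alpha(items), 2))
```

The abstract's point is about the sampling behaviour of this statistic: replacing the normal draws above with leptokurtic true scores or skewed/kurtotic errors biases the sample alpha, and larger n (not larger k) is what restores its accuracy.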
Łoniewska, Beata; Kaczmarczyk, Mariusz; Clark, Jeremy Simon; Gorący, Iwona; Horodnicka-Józwa, Anita; Ciechanowicz, Andrzej
2015-03-16
A-Kinase Anchoring Proteins (AKAPs) coordinate the specificity of protein kinase A signaling by localizing the kinase to subcellular sites. The 1936G (V646) AKAP10 allele has been associated in adults with low cholinergic/vagus nerve sensitivity, shortened PR intervals in ECG recording and in newborns with increased blood pressure and higher cholesterol cord blood concentration. The aim of the study was to answer the question of whether the 1936A > G AKAP10 polymorphism is associated with newborn electrocardiographic variables. Electrocardiograms were recorded from 114 consecutive healthy Polish newborns (55 females, 59 males), born after 37 gestational weeks to healthy women with uncomplicated pregnancies. All recordings were made between the 3rd and 7th day of life to avoid QT variability. The heart rate per minute and the durations of the PR, QRS, RR and QT intervals were measured. The ECGs were evaluated independently by three observers. At birth, cord blood of neonates was obtained for isolation of genomic DNA. The distribution of anthropometric and electrocardiographic variables in our cohort approached normality (skewness < 2 for all variables). No significant differences in anthropometric variables and electrocardiographic traits with respect to AKAP10 genotype were found. Multiple regression analysis with adjustment for gender, gestational age and birth mass revealed that the QTc interval in GG AKAP10 homozygotes was significantly longer, but in range, when compared with A allele carriers (AA + AG, recessive mode of inheritance). No rhythm disturbances were observed. Results demonstrate a possible association between the AKAP10 1936A > G variant and the QTc interval in Polish newborns.
Larson, Nicholas B; Fogarty, Zachary C; Larson, Melissa C; Kalli, Kimberly R; Lawrenson, Kate; Gayther, Simon; Fridley, Brooke L; Goode, Ellen L; Winham, Stacey J
2017-12-01
X-chromosome inactivation (XCI) epigenetically silences transcription of an X chromosome in females; patterns of XCI are thought to be aberrant in women's cancers, but are understudied due to statistical challenges. We develop a two-stage statistical framework to assess skewed XCI and evaluate gene-level patterns of XCI for an individual sample by integration of RNA sequence, copy number alteration, and genotype data. Our method relies on allele-specific expression (ASE) to directly measure XCI and does not rely on male samples or paired normal tissue for comparison. We model ASE using a two-component mixture of beta distributions, allowing estimation for a given sample of the degree of skewness (based on a composite likelihood ratio test) and the posterior probability that a given gene escapes XCI (using a Bayesian beta-binomial mixture model). To illustrate the utility of our approach, we applied these methods to data from tumors of ovarian cancer patients. Among 99 patients, 45 tumors were informative for analysis and showed evidence of XCI skewed toward a particular parental chromosome. For 397 X-linked genes, we observed tumor XCI patterns largely consistent with previously identified consensus states based on multiple normal tissue types. However, 37 genes differed in XCI state between ovarian tumors and the consensus state; 17 genes aberrantly escaped XCI in ovarian tumors (including many oncogenes), whereas 20 genes were unexpectedly inactivated in ovarian tumors (including many tumor suppressor genes). These results provide evidence of the importance of XCI in ovarian cancer and demonstrate the utility of our two-stage analysis. © 2017 WILEY PERIODICALS, INC.
Lin, Yuning; Li, Hui; Chen, Ziqian; Ni, Ping; Zhong, Qun; Huang, Huijuan; Sandrasegaran, Kumar
2015-05-01
The purpose of this study was to investigate the application of histogram analysis of the apparent diffusion coefficient (ADC) in characterizing pathologic features of cervical cancer and benign cervical lesions. This prospective study was approved by the institutional review board, and written informed consent was obtained. Seventy-three patients with cervical cancer (33-69 years old; 35 patients with International Federation of Gynecology and Obstetrics stage IB cervical cancer) and 38 patients (38-61 years old) with normal cervix or benign cervical lesions (control group) were enrolled. All patients underwent 3-T diffusion-weighted imaging (DWI) with b values of 0 and 800 s/mm². ADC values of the entire tumor in the patient group and of the whole cervix volume in the control group were assessed. Mean ADC, median ADC, 25th and 75th percentiles of ADC, skewness, and kurtosis were calculated. Histogram parameters were compared between different pathologic features, as well as between the stage IB cervical cancer and control groups. Mean ADC, median ADC, and the 25th percentile of ADC were significantly higher for adenocarcinoma (p = 0.021, 0.006, and 0.004, respectively), and skewness was significantly higher for squamous cell carcinoma (p = 0.011). Median ADC was significantly higher for well or moderately differentiated tumors (p = 0.044), and skewness was significantly higher for poorly differentiated tumors (p = 0.004). No statistically significant difference in ADC histogram parameters was observed between lymphovascular space invasion subgroups. All histogram parameters differed significantly between the stage IB cervical cancer and control groups (p < 0.05). The distribution of ADCs characterized by histogram analysis may help to distinguish early-stage cervical cancer from normal cervix or benign cervical lesions and may be useful for evaluating the different pathologic features of cervical cancer.
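The histogram parameters used in this study can be computed directly from the voxelwise ADC values; the following generic sketch uses plain moment-based definitions (clinical software may use slightly different conventions):

```python
import math
import statistics

def adc_histogram_features(adc_values):
    """Mean, median, quartiles, skewness and excess kurtosis of a list of
    voxelwise ADC values (e.g. in units of 10^-3 mm^2/s)."""
    n = len(adc_values)
    mean = sum(adc_values) / n
    m2 = sum((x - mean) ** 2 for x in adc_values) / n
    m3 = sum((x - mean) ** 3 for x in adc_values) / n
    m4 = sum((x - mean) ** 4 for x in adc_values) / n
    p25, median, p75 = statistics.quantiles(adc_values, n=4)
    return {"mean": mean, "median": median, "p25": p25, "p75": p75,
            "skewness": m3 / m2 ** 1.5,
            "kurtosis": m4 / m2 ** 2 - 3.0}  # excess kurtosis
```

A symmetric sample yields skewness near zero, and a flat (plateau-like) ADC distribution yields negative excess kurtosis.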
Sperm Retrieval in Patients with Klinefelter Syndrome: A Skewed Regression Model Analysis.
Chehrazi, Mohammad; Rahimiforoushani, Abbas; Sabbaghian, Marjan; Nourijelyani, Keramat; Sadighi Gilani, Mohammad Ali; Hoseini, Mostafa; Vesali, Samira; Yaseri, Mehdi; Alizadeh, Ahad; Mohammad, Kazem; Samani, Reza Omani
2017-01-01
The most common chromosomal abnormality causing non-obstructive azoospermia (NOA) is Klinefelter syndrome (KS), which occurs in 1-1.72 out of 500-1000 male infants. The probability of successful sperm retrieval may differ asymmetrically between patients with and without KS, so logistic regression analysis is not well suited to this type of data. This study was designed to evaluate a skewed regression model analysis of data collected from microsurgical testicular sperm extraction (micro-TESE) among azoospermic patients with and without non-mosaic KS. This cohort study compared the micro-TESE outcome between 134 men with classic KS and 537 men with NOA and normal karyotype who were referred to Royan Institute between 2009 and 2011. In addition to the main outcome, sperm retrieval, we used logistic and skewed regression analyses to compare the following demographic and hormonal factors between the two groups: age and levels of follicle stimulating hormone (FSH), luteinizing hormone (LH), and testosterone. A comparison of micro-TESE outcomes between the KS and control groups showed a success rate of 28.4% (38/134) for the KS group and 22.2% (119/537) for the control group. In the KS group, a significant difference (P<0.001) existed in testosterone levels between the successful sperm retrieval group (3.4 ± 0.48 mg/mL) and the unsuccessful group (2.33 ± 0.23 mg/mL). The quasi Akaike information criterion (QAIC) indicated a better goodness of fit for the skewed model (QAIC=74) than for logistic regression (QAIC=85). According to these results, skewed regression is more efficient in estimating sperm retrieval success when data from patients with KS are analyzed. This finding should be investigated in additional studies with different data structures.
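The intuition behind a skewed binary-regression model can be illustrated with link functions. The logit link is symmetric around p = 0.5, whereas the complementary log-log link is not; the sketch below only illustrates that asymmetry and is not the specific skewed model fitted in the study:

```python
import math

def inv_logit(x):
    """Inverse logit link: symmetric, so p(x) + p(-x) = 1 exactly."""
    return 1.0 / (1.0 + math.exp(-x))

def inv_cloglog(x):
    """Inverse complementary log-log link: a skewed link that approaches
    1 faster than it approaches 0."""
    return 1.0 - math.exp(-math.exp(x))
```

With a skewed link, the fitted success probability can rise steeply on one side of the predictor scale while tailing off slowly on the other, which is the kind of asymmetry the abstract describes between the KS and non-KS groups.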
Reliability of provocative tests of motion sickness susceptibility
NASA Technical Reports Server (NTRS)
Calkins, D. S.; Reschke, M. F.; Kennedy, R. S.; Dunlop, W. P.
1987-01-01
Test-retest reliability values were derived from motion sickness susceptibility scores obtained from two successive exposures to each of three tests: (1) Coriolis sickness sensitivity test; (2) staircase velocity movement test; and (3) parabolic flight static chair test. The reliability of the three tests ranged from 0.70 to 0.88. Normalizing values from predictors with skewed distributions improved the reliability.
ERIC Educational Resources Information Center
Pant, Mohan Dev
2011-01-01
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…
Handling Bias from Individual Differences in between-Subject Holistic Experimental Designs.
1985-10-30
leptokurtic) or flat (platykurtic) in the neighborhood of the mode. For normal distributions, β1 = 0 and β2 = 3. When β1 deviates from 0, the data are skewed. For β2...were obtained. These data are symmetrical, though peaked. The platykurtic data were obtained by taking the cube root of the discrete values on the table of
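The benchmarks quoted in this fragment (β1 = 0 and β2 = 3 for a normal distribution) can be checked directly with plain moment formulas; for instance, a uniform grid of values is symmetric (skewness 0) but platykurtic (β2 < 3):

```python
def pearson_moments(xs):
    """Moment-based skewness and Pearson kurtosis (beta2).
    For a normal distribution, skewness = 0 and beta2 = 3."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

# A uniform grid: symmetric but flatter than normal
skew, beta2 = pearson_moments([float(i) for i in range(1, 101)])
```

For this sample, skew is 0 and beta2 is about 1.8, well below the normal benchmark of 3.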
NASA Technical Reports Server (NTRS)
Johnson, B. V.; Wagner, J. H.; Steuber, G. D.
1993-01-01
An experimental program was conducted to investigate heat transfer and pressure loss characteristics of rotating multipass passages, for configurations and dimensions typical of modern turbine blades. This experimental program is one part of the NASA Hot Section Technology (HOST) Initiative, which has as its overall objective the development and verification of improved analysis methods that will form the basis for a design system that will produce turbine components with improved durability. The objective of this program was the generation of a data base of heat transfer and pressure loss data required to develop heat transfer correlations and to assess computational fluid dynamic techniques for rotating coolant passages. The experimental work was broken down into two phases. Phase 1 consisted of experiments conducted in a smooth-wall large-scale heat transfer model. A detailed discussion of these results was presented in Volume 1 of a NASA report. In Phase 2, the large-scale model was modified to investigate the effects of skewed and normal passage turbulators. The results of Phase 2, along with comparisons to Phase 1, are the subject of this Volume 2 NASA report.
Adaptation to Skew Distortions of Natural Scenes and Retinal Specificity of Its Aftereffects
Habtegiorgis, Selam W.; Rifai, Katharina; Lappe, Markus; Wahl, Siegfried
2017-01-01
Image skew is one of the prominent distortions that exist in optical elements, such as in spectacle lenses. The present study evaluates adaptation to image skew in dynamic natural images. Moreover, the cortical levels involved in skew coding were probed using retinal specificity of skew adaptation aftereffects. Left and right skewed natural image sequences were shown to observers as adapting stimuli. The point of subjective equality (PSE), i.e., the skew amplitude in simple geometrical patterns that is perceived to be unskewed, was used to quantify the aftereffect of each adapting skew direction. The PSE, in a two-alternative forced choice paradigm, shifted toward the adapting skew direction. Moreover, significant adaptation aftereffects were obtained not only at adapted, but also at non-adapted retinal locations during fixation. Skew adaptation information was transferred partially to non-adapted retinal locations. Thus, adaptation to skewed natural scenes induces coordinated plasticity in lower and higher cortical areas of the visual pathway. PMID:28751870
NASA Astrophysics Data System (ADS)
Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw
2011-07-01
Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
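A lognormal fit like the one found adequate for the bulk of the distribution makes flash statistics easy to compute. The sketch below gives the probability of a flash exceeding 1.5 times the mean irradiance for a lognormal model parameterized by its coefficient of variation (the cv values used are illustrative, not the paper's fits):

```python
import math

def flash_probability(cv, threshold=1.5):
    """P(Ed > threshold * mean(Ed)) for lognormally distributed irradiance
    with coefficient of variation cv. Irradiance is scaled so mean(Ed) = 1."""
    s2 = math.log(1.0 + cv * cv)   # variance of log(Ed)
    mu = -0.5 * s2                 # makes the mean of Ed equal to 1
    z = (math.log(threshold) - mu) / math.sqrt(s2)
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
```

Consistent with the depth dependence described above, weaker fluctuations (smaller cv, as at 10 m depth) give a much smaller flash probability than the strong near-surface fluctuations.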
Fang, Rui; Wey, Andrew; Bobbili, Naveen K; Leke, Rose F G; Taylor, Diane Wallace; Chen, John J
2017-07-17
Antibodies play an important role in immunity to malaria. Recent studies show that antibodies to multiple antigens, as well as, the overall breadth of the response are associated with protection from malaria. Yet, the variability and reliability of antibody measurements against a combination of malarial antigens using multiplex assays have not been well characterized. A normalization procedure for reducing between-plate variation using replicates of pooled positive and negative controls was investigated. Sixty test samples (30 from malaria-positive and 30 malaria-negative individuals), together with five pooled positive-controls and two pooled negative-controls, were screened for antibody levels to 9 malarial antigens, including merozoite antigens (AMA1, EBA175, MSP1, MSP2, MSP3, MSP11, Pf41), sporozoite CSP, and pregnancy-associated VAR2CSA. The antibody levels were measured in triplicate on each of 3 plates, and the experiments were replicated on two different days by the same technician. The performance of the proposed normalization procedure was evaluated with the pooled controls for the test samples on both the linear and natural-log scales. Compared with data on the linear scale, the natural-log transformed data were less skewed and reduced the mean-variance relationship. The proposed normalization procedure using pooled controls on the natural-log scale significantly reduced between-plate variation. For malaria-related research that measure antibodies to multiple antigens with multiplex assays, the natural-log transformation is recommended for data analysis and use of the normalization procedure with multiple pooled controls can improve the precision of antibody measurements.
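The normalization idea, shifting each plate's natural-log readings so that its pooled controls line up with a cross-plate reference, can be sketched as follows. This is a simplified mean-offset version; the study's procedure and control layout may differ in detail:

```python
import math

def log_mean(values):
    """Mean of the natural-log readings."""
    return sum(math.log(v) for v in values) / len(values)

def normalize_plate(sample_mfi, plate_control_mfi, reference_log_mean):
    """Return natural-log antibody readings shifted so that this plate's
    pooled-control mean matches the cross-plate reference mean."""
    offset = log_mean(plate_control_mfi) - reference_log_mean
    return [math.log(x) - offset for x in sample_mfi]
```

Two plates that differ only by a multiplicative batch factor give identical normalized values for the same underlying sample, which is the between-plate variation the procedure removes.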
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies
ERIC Educational Resources Information Center
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-01-01
Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…
Oberg, Kevin A.; Mades, Dean M.
1987-01-01
Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)
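Station skew in this setting is the sample skew coefficient of the log-transformed annual peak series. A minimal version with the small-sample correction commonly used in flood-frequency practice (Bulletin 17B style; the report's exact computation may differ):

```python
import math

def station_skew(annual_peaks):
    """Skew coefficient of log10 annual peak discharges with the
    n/((n-1)(n-2)) small-sample correction."""
    logs = [math.log10(q) for q in annual_peaks]
    n = len(logs)
    mean = sum(logs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    return (n * sum((x - mean) ** 3 for x in logs)
            / ((n - 1) * (n - 2) * s ** 3))
```

A series whose logs are symmetric gives zero skew, while a single extreme flood pulls the coefficient positive.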
Foldnes, Njål; Olsson, Ulf Henning
2016-01-01
We present and investigate a simple way to generate nonnormal data using linear combinations of independent generator (IG) variables. The simulated data have prespecified univariate skewness and kurtosis and a given covariance matrix. In contrast to the widely used Vale-Maurelli (VM) transform, the obtained data are shown to have a non-Gaussian copula. We analytically obtain asymptotic robustness conditions for the IG distribution. We show empirically that popular test statistics in covariance analysis tend to reject true models more often under the IG transform than under the VM transform. This implies that overly optimistic evaluations of estimators and fit statistics in covariance structure analysis may be tempered by including the IG transform for nonnormal data generation. We provide an implementation of the IG transform in the R environment.
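The core of the IG construction, mixing independent non-Gaussian generator variables through a Cholesky factor of the target covariance, can be sketched as below. Matching prespecified marginal skewness and kurtosis (the part that distinguishes the full IG method from naive mixing) is omitted; the exponential generator is just an illustrative skewed choice:

```python
import math
import random

def standardized_exponential():
    """Exponential(1) shifted to mean 0; its variance is already 1 and its
    skewness is 2, making it a convenient non-normal generator."""
    return random.expovariate(1.0) - 1.0

def ig_sample(chol, generators):
    """One draw of X = A g, where g are independent zero-mean unit-variance
    generators and A (chol) is a Cholesky factor of the target covariance."""
    g = [gen() for gen in generators]
    return [sum(a * v for a, v in zip(row, g)) for row in chol]

# Target covariance [[1, 0.5], [0.5, 1]] and its lower Cholesky factor
chol = [[1.0, 0.0], [0.5, math.sqrt(0.75)]]
```

Because the generators are independent and standardized, the population covariance of X is exactly chol times its transpose, while the joint distribution keeps a non-Gaussian copula.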
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, W; Riyahi, S; Lu, W
Purpose: Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume. Methods: The free-breathing CTs of 14 lung SBRT patients were studied. Different sizes of GTVs were simulated with spheres placed at the upper lobe and lower lobe respectively in the normal lung (contralateral to tumor). 27 texture features (9 from intensity histogram, 8 from grey-level co-occurrence matrix [GLCM] and 10 from grey-level run-length matrix [GLRM])more » were extracted from [normal lung-GTV]. To measure the variability of a feature F, the relative difference D=|Fref -Fsim|/Fref*100% was calculated, where Fref was for the entire normal lung and Fsim was for [normal lung-GTV]. A feature was considered as robust if the largest non-outlier (Q3+1.5*IQR) D was less than 5%, and considered as not correlated with normal lung volume when their Pearson correlation was lower than 0.50. Results: Only 11 features were robust. All first-order intensity-histogram features (mean, max, etc.) were robust, while most higher-order features (skewness, kurtosis, etc.) were unrobust. Only two of the GLCM and four of the GLRM features were robust. Larger GTV resulted greater feature variation, this was particularly true for unrobust features. All robust features were not correlated with normal lung volume while three unrobust features showed high correlation. Excessive variations were observed in two low grey-level run features and were later identified to be from one patient with local lung diseases (atelectasis) in the normal lung. There was no dependence on GTV location. Conclusion: We identified 11 robust normal lung CT texture features that can be further examined for the prediction of radiation-induced lung disease. 
Interestingly, low grey-level run features identified normal lung diseases. This work was supported in part by the National Cancer Institute Grants R01CA172638.« less
The Affective Impact of Financial Skewness on Neural Activity and Choice
Wu, Charlene C.; Bossaerts, Peter; Knutson, Brian
2011-01-01
Few finance theories consider the influence of “skewness” (or large and asymmetric but unlikely outcomes) on financial choice. We investigated the impact of skewed gambles on subjects' neural activity, self-reported affective responses, and subsequent preferences using functional magnetic resonance imaging (FMRI). Neurally, skewed gambles elicited more anterior insula activation than symmetric gambles equated for expected value and variance, and positively skewed gambles also specifically elicited more nucleus accumbens (NAcc) activation than negatively skewed gambles. Affectively, positively skewed gambles elicited more positive arousal and negatively skewed gambles elicited more negative arousal than symmetric gambles equated for expected value and variance. Subjects also preferred positively skewed gambles more, but negatively skewed gambles less than symmetric gambles of equal expected value. Individual differences in both NAcc activity and positive arousal predicted preferences for positively skewed gambles. These findings support an anticipatory affect account in which statistical properties of gambles—including skewness—can influence neural activity, affective responses, and ultimately, choice. PMID:21347239
Simultaneous calibration of ensemble river flow predictions over an entire range of lead times
NASA Astrophysics Data System (ADS)
Hemri, S.; Fundel, F.; Zappa, M.
2013-10-01
Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as the main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
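The Box-Cox step mentioned above is a one-line power transform; the parameter λ is chosen so that the transformed runoff is approximately normal, and λ = 0 reduces to the log transform:

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform of a positive value y:
    (y**lam - 1) / lam, with the log transform as the lam = 0 limit."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam
```

The transform is continuous in λ at zero, so small fitted values of λ behave like a log transform of the skewed runoff.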
Gvakharia, Alexander; Kort, Eric A; Brandt, Adam; Peischl, Jeff; Ryerson, Thomas B; Schwarz, Joshua P; Smith, Mackenzie L; Sweeney, Colm
2017-05-02
Incomplete combustion during flaring can lead to production of black carbon (BC) and loss of methane and other pollutants to the atmosphere, impacting climate and air quality. However, few studies have measured flare efficiency in a real-world setting. We use airborne data of plume samples from 37 unique flares in the Bakken region of North Dakota in May 2014 to calculate emission factors for BC, methane, ethane, and combustion efficiency for methane and ethane. We find no clear relationship between emission factors and aircraft-level wind speed or between methane and BC emission factors. Observed median combustion efficiencies for methane and ethane are close to expected values for typical flares according to the US EPA (98%). However, we find that the efficiency distribution is skewed, exhibiting log-normal behavior. This suggests incomplete combustion from flares contributes almost 1/5 of the total field emissions of methane and ethane measured in the Bakken shale, more than double the expected value if 98% efficiency was representative. BC emission factors also have a skewed distribution, but we find lower emission values than previous studies. The direct observation for the first time of a heavy-tail emissions distribution from flares suggests the need to consider skewed distributions when assessing flare impacts globally.
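The practical consequence of a skewed efficiency distribution is that total field losses are driven by the poor tail, not by the typical flare. A toy calculation with illustrative numbers (not the study's measurements):

```python
def field_loss_fraction(efficiencies, volumes):
    """Volume-weighted fraction of flared gas that escapes combustion."""
    lost = sum((1.0 - e) * v for e, v in zip(efficiencies, volumes))
    return lost / sum(volumes)

# Nine flares at 99% efficiency plus one at 70%, equal volumes:
# the median efficiency is 0.99, yet total losses are about four times
# what uniform 99% efficiency would imply.
loss = field_loss_fraction([0.99] * 9 + [0.70], [1.0] * 10)
```

Here the median flare loses 1% of its gas, but the field as a whole loses 3.9%, which is the kind of gap between median efficiency and aggregate emissions the abstract reports.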
A nonparametric spatial scan statistic for continuous data.
Jung, Inkyung; Cho, Ho Jin
2015-10-20
Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with that of parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
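The building block of the proposed method is the Wilcoxon rank-sum statistic computed for the values inside a candidate window versus those outside; a scan then standardizes this over many windows and takes the extreme. A minimal sketch (ties are not handled):

```python
def rank_sum_statistic(inside, outside):
    """Wilcoxon rank-sum statistic W: the sum of pooled ranks of the values
    inside the scanning window. Assumes all values are distinct."""
    pooled = sorted(inside + outside)
    return sum(pooled.index(x) + 1 for x in inside)

def expected_w(n_inside, n_outside):
    """Null expectation of W under random labeling of the locations."""
    return n_inside * (n_inside + n_outside + 1) / 2.0
```

A window containing unusually large values yields W far above its null expectation, flagging a potential cluster regardless of the underlying distribution's shape.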
ERIC Educational Resources Information Center
Custer, Michael; Omar, Md Hafidz; Pomplun, Mark
2006-01-01
This study compared vertical scaling results for the Rasch model from BILOG-MG and WINSTEPS. The item and ability parameters for the simulated vocabulary tests were scaled across 11 grades; kindergarten through 10th. Data were based on real data and were simulated under normal and skewed distribution assumptions. WINSTEPS and BILOG-MG were each…
DOT National Transportation Integrated Search
2017-08-01
Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...
Tips and Tricks for Successful Application of Statistical Methods to Biological Data.
Schlenker, Evelyn
2016-01-01
This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data, nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premises and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
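The log-transform advice in this chapter is easy to demonstrate: right-skewed measurements become far more symmetric on the log scale, which is what makes parametric tests defensible afterwards. The numbers below are illustrative:

```python
import math

def sample_skewness(xs):
    """Moment-based sample skewness (0 for a symmetric sample)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Right-skewed values of the kind common in biology (illustrative numbers)
raw = [1.0, 1.0, 2.0, 2.0, 3.0, 4.0, 8.0, 20.0, 60.0]
logged = [math.log(x) for x in raw]
```

For this sample, the raw skewness is above 2, while the log-transformed skewness drops below 1, close enough to symmetry that t tests and ANOVA become reasonable choices.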
Using color histogram normalization for recovering chromatic illumination-changed images.
Pei, S C; Tseng, C L; Wu, C C
2001-11-01
We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
Effect of Resonator Axis Skew on Normal Incidence Impedance
NASA Technical Reports Server (NTRS)
Parrott, Tony L.; Jones, Michael G.; Homeijer, Brian
2003-01-01
High by-pass turbofan engines have fewer fan blades and lower rotation speeds than their predecessors. Consequently, noise suppression at the low-frequency end of the noise spectrum has become an increasing concern. This has led to a renewed emphasis on improving the noise suppression efficiency of passive duct liner treatments at the lower frequencies. For a variety of reasons, passive liners are comprised of locally-reacting, resonant absorbers. One reason for this design choice is to satisfy operational and economic requirements. The simplest liner design consists of a single layer of honeycomb core sandwiched between a porous facesheet and an impervious backing plate. These resonant absorbing structures are integrated into the nacelle wall and are very efficient over a limited bandwidth centered on their resonance frequency. Increased noise suppression bandwidth and greater suppression at lower frequencies are typically achieved for conventional liners by increasing the liner depth and incorporating thin porous septa into the honeycomb core. However, constraints on liner depth in modern high by-pass engine nacelles severely limit the extension of the suppression bandwidth to lower frequencies. Also, current honeycomb core liners may not be suitable for irregular geometric volumes heretofore not considered. It is of interest, therefore, to find ways to circumvent liner depth restrictions and resonator cavity shape constraints. One way to increase effective liner depth is to skew the honeycomb core axis relative to the porous facesheet surface. Other possibilities are to alter the resonator cavity shape, e.g., high-aspect-ratio, narrow channels that possibly include right-angle bends, 180° channel fold-backs, and splayed channel walls that conform to irregular geometric constraints. These possibilities constitute the practical motivation for expanding impedance modeling capability to include unconventional resonator orientations and shapes.
The work reported in this paper is in the nature of a progress report and is limited to examining the implications of resonator axis skew on the composite normal incidence impedance of an array of resonator channels. Specifically, experimental results are compared with a modified impedance prediction model for high-aspect-ratio, rectangular resonator channels with varying amounts of skew relative to the incident particle velocity. It is shown that for resonator channel widths of 1 to 2 mm, aspect ratios of 25 to 50, and skew angles of zero to sixty degrees, the surface impedance of test models can be predicted with good accuracy. Predicted resistances and reactances are particularly well correlated through the first resonance and first anti-resonance for all six test models investigated. Beyond the first anti-resonance, the impedance prediction model loses the ability to predict details of resistance and reactance but still predicts the mean trends very well.
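A first-order way to see why axis skew increases effective liner depth: the acoustic path through a channel skewed by θ from the facesheet normal is depth/cos θ, which lowers the quarter-wave resonance accordingly. This back-of-envelope model ignores the end corrections and facesheet effects treated by the paper's full impedance model:

```python
import math

def quarter_wave_resonance_hz(depth_m, skew_deg, c=343.0):
    """First resonance of a quarter-wave channel whose axis is skewed by
    skew_deg from the facesheet normal (c: speed of sound in m/s)."""
    path = depth_m / math.cos(math.radians(skew_deg))
    return c / (4.0 * path)
```

For example, a 60° skew doubles the path length through a 5 cm core and halves its resonance frequency, extending absorption to lower frequencies without increasing liner depth.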
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
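A simple time-domain stand-in for the skew-estimation idea: the relative time skew between two recorded signals is the lag that maximizes their cross-correlation. The paper's method estimates skews jointly with model parameters in the frequency domain; this sketch only illustrates the lag-recovery step:

```python
def estimate_time_skew(x, y, max_lag):
    """Relative time skew of y with respect to x, in samples, found as the
    lag maximizing the (unnormalized) cross-correlation."""
    best_score, best_lag = None, 0
    for lag in range(-max_lag, max_lag + 1):
        score = sum(x[i] * y[i + lag] for i in range(len(x))
                    if 0 <= i + lag < len(y))
        if best_score is None or score > best_score:
            best_score, best_lag = score, lag
    return best_lag

# Signal y is x delayed by 3 samples
x = [0.0] * 10 + [1.0, 2.0, 3.0, 2.0, 1.0] + [0.0] * 10
y = [0.0] * 3 + x[:-3]
```

Applied to two channels of flight data, a per-signal lag estimate of this kind would realign the measurements before model parameters are fitted.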
Sex differences in the drivers of reproductive skew in a cooperative breeder.
Nelson-Flower, Martha J; Flower, Tom P; Ridley, Amanda R
2018-04-16
Many cooperatively breeding societies are characterized by high reproductive skew, such that some socially dominant individuals breed, while socially subordinate individuals provide help. Inbreeding avoidance serves as a source of reproductive skew in many high-skew societies, but few empirical studies have examined sources of skew operating alongside inbreeding avoidance or compared individual attempts to reproduce (reproductive competition) with individual reproductive success. Here, we use long-term genetic and observational data to examine factors affecting reproductive skew in the high-skew cooperatively breeding southern pied babbler (Turdoides bicolor). When subordinates can breed, skew remains high, suggesting factors additional to inbreeding avoidance drive skew. Subordinate females are more likely to compete to breed when older or when ecological constraints on dispersal are high, but heavy subordinate females are more likely to successfully breed. Subordinate males are more likely to compete when they are older, during high ecological constraints, or when they are related to the dominant male, but only the presence of within-group unrelated subordinate females predicts subordinate male breeding success. Reproductive skew is not driven by reproductive effort, but by forces such as intrinsic physical limitations and intrasexual conflict (for females) or female mate choice, male mate-guarding and potentially reproductive restraint (for males). Ecological conditions or "outside options" affect the occurrence of reproductive conflict, supporting predictions of recent synthetic skew models. Inbreeding avoidance together with competition for access to reproduction may generate high skew in animal societies, and disparate processes may be operating to maintain male vs. female reproductive skew in the same species. © 2018 John Wiley & Sons Ltd.
Over, Thomas M.; Saito, Riki J.; Soong, David T.
2016-06-30
The observed and adjusted values for each streamgage are tabulated. To illustrate the overall effect of the adjustments, differences in the mean, standard deviation, and skewness of the log-transformed observed and urbanization-adjusted peak discharge series are computed by streamgage. For almost every streamgage where an adjustment was applied (no increase in urbanization was reported for a few streamgages), the mean increased and the standard deviation decreased; the effect on skewness values was more variable, but they usually increased. Significant positive peak discharge trends were common in the observed values, occurring at 27.3 percent of streamgages at a p-value of 0.05 according to a Kendall’s tau correlation test; in the adjusted values, the incidence of such trends was reduced to 7.0 percent.
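A trend screen of the kind described above can be sketched with scipy; the series below is synthetic and purely illustrative, not the study's streamgage data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic annual peak-discharge series on a log scale with a mild upward
# trend, standing in for an urbanizing watershed (illustrative values only).
years = np.arange(1960, 2015)
log_peaks = 3.0 + 0.01 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)

# Kendall's tau correlation of discharge against time; p < 0.05 flags a
# significant trend, the criterion used in the study.
tau, p_value = stats.kendalltau(years, log_peaks)

# Per-streamgage summary statistics of the log-transformed series.
mean = log_peaks.mean()
sd = log_peaks.std(ddof=1)
skewness = stats.skew(log_peaks)
print(f"tau={tau:.3f}, p={p_value:.4f}, skew={skewness:.3f}")
```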
A spectroscopic search for faint secondaries in cataclysmic variables
NASA Astrophysics Data System (ADS)
Vande Putte, D.; Smith, Robert Connon; Hawkins, N. A.; Martin, J. S.
2003-06-01
The secondary in cataclysmic variables (CVs) is usually detected by cross-correlation of the CV spectrum with that of a K or M dwarf template, to produce a radial velocity curve. Although this method has demonstrated its power, it has its limits in the case of noisy spectra, such as are found when the secondary is faint. A method of coadding spectra, called skew mapping, has been proposed in the past. Examples of its application are gradually being published; nonetheless, no journal article has yet described the technique in detail. To address this need, this paper explores in detail the capabilities of skew mapping for determining the amplitude of the radial velocity of faint secondaries, and demonstrates the power of the method over more conventional techniques when the signal-to-noise ratio is poor. The paper suggests an approach to assessing the quality of results. For the investigated objects, this leads to a first tier of results, where we find K2 = 127 +/- 23 km s-1 for SY Cnc, K2 = 144 +/- 18 km s-1 for RW Sex and K2 = 262 +/- 14 km s-1 for UX UMa. These we believe to be the first direct determinations of K2 for these objects. Furthermore, we also obtain K2 = 263 +/- 30 km s-1 for RW Tri, close to a skew mapping result obtained elsewhere. In the first three cases, we use these results to derive the mass of the white dwarf companion. A second tier of results includes UU Aqr, EX Hya and LX Ser, for which we propose more tentative values of K2. Clear failures of the method are also discussed (EF Eri, VV Pup and SW Sex).
Hybrid excited claw pole generator with skewed and non-skewed permanent magnets
NASA Astrophysics Data System (ADS)
Wardach, Marcin
2017-12-01
This article contains simulation results for the Hybrid Excited Claw Pole Generator with skewed and non-skewed permanent magnets on the rotor. The experimental machine has claw poles on two rotor sections, between which an excitation control coil is located. The novelty of this machine is the presence of non-skewed permanent magnets on the claws of one rotor section and skewed permanent magnets on the other. The paper presents the construction of the machine and an analysis of the influence of PM skewing on the cogging torque and back-emf. Simulation studies enabled determination of the cogging torque and the back-emf rms for both the strengthening and the weakening of the magnetic field. The influence of magnet skewing on the cogging torque and the back-emf rms has also been analyzed.
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of closeness to a normal distribution can be estimated by the skewness (lack of symmetry) a3 = mu3/sigma^3 and the kurtosis a4 = mu4/sigma^4 - 3, where mu_i is the i-th order central moment and sigma is the standard deviation. It is well known that for a normal distribution a3 = a4 = 0.
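For concreteness, the two measures can be computed directly; the O-F residuals below are a synthetic normal sample, not actual observation data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical "observed minus forecast" (O-F) residuals: a normal sample
# stands in for a well-behaved observation type.
o_minus_f = rng.normal(loc=0.0, scale=1.5, size=10_000)

# a3 = mu3/sigma^3 (skewness) and a4 = mu4/sigma^4 - 3 (excess kurtosis),
# both zero for an exactly normal distribution.
mu = o_minus_f.mean()
sigma = o_minus_f.std()
a3 = np.mean((o_minus_f - mu) ** 3) / sigma**3
a4 = np.mean((o_minus_f - mu) ** 4) / sigma**4 - 3
print(f"a3={a3:.4f}, a4={a4:.4f}")
```

scipy's `stats.skew` and `stats.kurtosis` compute the same population-moment quantities.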
Software Design Document MCC CSCI (1). Volume 1 Sections 1.0-2.18
1991-06-01
AssociationUserProtocol /simnet/common/include/protocol/p_assoc.h Primitive long Standard C type...Information. 2.2.1.4.2 ProcessMessage ProcessMessage processes a message from another process. type describes the message as either one-way, asynchronous or...Macintosh Consoles. This is sometimes necessary due to normal clock skew so that operations among the MCC components will remain synchronized. This
Effect of cross grain on stress waves in lumber
C.C. Gerhards
1980-01-01
An evaluation is made of the effect of cross grain on the transit time of longitudinal compression stress waves in Douglas-fir 2 by 8 lumber. Cross grain causes the stress wave to advance with a front or contour skewed in the direction of the grain angle, rather than to advance with a front normal to the long axis of lumber. Thus, the timing of the stress wave in...
Topics in Statistical Calibration
2014-03-27
on a parametric bootstrap where, instead of sampling directly from the residuals, samples are drawn from a normal distribution. This procedure will...addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals, the bootstrap distribution of x̂0 can become skewed or...based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either
NASA Technical Reports Server (NTRS)
Sedbrook, John C.; Carroll, Kathleen L.; Hung, Kai F.; Masson, Patrick H.; Somerville, Chris R.
2002-01-01
To investigate how roots respond to directional cues, we characterized a T-DNA-tagged Arabidopsis mutant named sku5 in which the roots skewed and looped away from the normal downward direction of growth on inclined agar surfaces. sku5 roots and etiolated hypocotyls were slightly shorter than normal and exhibited a counterclockwise (left-handed) axial rotation bias. The surface-dependent skewing phenotype disappeared when the roots penetrated the agar surface, but the axial rotation defect persisted, revealing that these two directional growth processes are separable. The SKU5 gene belongs to a 19-member gene family designated SKS (SKU5 Similar) that is related structurally to the multiple-copper oxidases ascorbate oxidase and laccase. However, the SKS proteins lack several of the conserved copper binding motifs characteristic of copper oxidases, and no enzymatic function could be assigned to the SKU5 protein. Analysis of plants expressing SKU5 reporter constructs and protein gel blot analysis showed that SKU5 was expressed most strongly in expanding tissues. SKU5 was glycosylated and modified by glycosyl phosphatidylinositol and localized to both the plasma membrane and the cell wall. Our observations suggest that SKU5 affects two directional growth processes, possibly by participating in cell wall expansion.
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process, which induce errors in the estimated HOS parameters and hinder real-time, precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics appear more robust to small-sample-size effects than HOS parameters, and more accurate in sEMG PDF shape screening applications.
Blood pressure in head‐injured patients
Mitchell, Patrick; Gregson, Barbara A; Piper, Ian; Citerio, Giuseppe; Mendelow, A David; Chambers, Iain R
2007-01-01
Objective To determine the statistical characteristics of blood pressure (BP) readings from a large number of head‐injured patients. Methods The BrainIT group has collected high time‐resolution physiological and clinical data from head‐injured patients who require intracranial pressure (ICP) monitoring. The statistical features of this dataset of BP measurements, with a time resolution of 1 min from 200 patients, are examined. The distributions of BP measurements and their relationship with simultaneous ICP measurements are described. Results The distributions of mean, systolic and diastolic readings are close to normal with modest skewing towards higher values. There is a trend towards an increase in blood pressure with advancing age, but this is not significant. Simultaneous blood pressure and ICP values suggest a triphasic relationship, with BP rising at 0.28 mm Hg/mm Hg of ICP for ICP up to 32 mm Hg, at 0.9 mm Hg/mm Hg of ICP for ICP from 33 to 55 mm Hg, and falling sharply with rising ICP for ICP >55 mm Hg. Conclusions Patients with head injury appear to have a near normal distribution of blood pressure readings that is skewed towards higher values. The relationship between BP and ICP may be triphasic. PMID:17138594
On the Effects of Wind Turbine Wake Skew Caused by Wind Veer: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchfield, Matthew J; Sirnivas, Senu
Because of Coriolis forces caused by the Earth's rotation, the structure of the atmospheric boundary layer often contains wind-direction change with height, also known as wind-direction veer. Under low turbulence conditions, such as in stably stratified atmospheric conditions, this veer can be significant, even across the vertical extent of a wind turbine's rotor disk. The veer then causes the wind turbine wake to skew as it advects downstream. This wake skew has been observed both experimentally and numerically. In this work, we attempt to examine the wake skewing process in some detail, and quantify how differently a skewed wake versus a non-skewed wake affects a downstream turbine. We do this by performing atmospheric large-eddy simulations to create turbulent inflow winds with and without veer. In the veer case, there is a roughly 8 degree wind direction change across the turbine rotor. We then perform subsequent large-eddy simulations using these inflow data with an actuator line rotor model to create wakes. The turbine modeled is a large, modern, offshore, multimegawatt turbine. We examine the unsteady wake data in detail and show that the skewed wake recovers faster than the non-skewed wake. We also show that the wake deficit does not skew to the same degree that a passive tracer would if subject to veered inflow. Last, we use the wake data to place a hypothetical turbine 9 rotor diameters downstream by running aeroelastic simulations with the simulated wake data. We see differences in power and loads if this downstream turbine is subject to a skewed or non-skewed wake. We feel that the differences observed between the skewed and non-skewed wake are important enough that the skewing effect should be included in engineering wake models.
Log-Normal Turbulence Dissipation in Global Ocean Models
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O (10 km ) . As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality—robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
NASA Astrophysics Data System (ADS)
Du, Zhong; Tian, Bo; Wu, Xiao-Yu; Liu, Lei; Sun, Yan
2017-07-01
Subpicosecond or femtosecond optical pulse propagation in an inhomogeneous fiber can be described by a higher-order nonlinear Schrödinger equation with variable coefficients, which is investigated in this paper. Via the Ablowitz-Kaup-Newell-Segur system and symbolic computation, the Lax pair and infinitely-many conservation laws are deduced. Based on the Lax pair and a modified Darboux transformation technique, the first- and second-order rogue wave solutions are constructed. Effects of the group-velocity dispersion and third-order dispersion on the properties of the first- and second-order rogue waves are graphically presented and analyzed: The group-velocity dispersion and third-order dispersion both affect the ranges and shapes of the first- and second-order rogue waves. The third-order dispersion can produce a skew angle of the first-order rogue wave, and the skew angle rotates counterclockwise with the increase of the group-velocity dispersion, when the group-velocity dispersion and third-order dispersion are chosen as constants. When the group-velocity dispersion and third-order dispersion are taken as functions of the propagation distance, the linear, X-shaped and parabolic trajectories of the rogue waves are obtained.
Network Skewness Measures Resilience in Lake Ecosystems
NASA Astrophysics Data System (ADS)
Langdon, P. G.; Wang, R.; Dearing, J.; Zhang, E.; Doncaster, P.; Yang, X.; Yang, H.; Dong, X.; Hu, Z.; Xu, M.; Yanjie, Z.; Shen, J.
2017-12-01
Changes in ecosystem resilience defy straightforward quantification from biodiversity metrics, which ignore influences of community structure. Naturally self-organized network structures show positive skewness in the distribution of node connections. Here we test for skewness reduction in lake diatom communities facing anthropogenic stressors, across a network of 273 lakes in China containing 452 diatom species. Species connections show positively skewed distributions in little-impacted lakes, switching to negative skewness in lakes associated with human settlement, surrounding land-use change, and higher phosphorus concentration. Dated sediment cores reveal a down-shifting of network skewness as human impacts intensify, and reversal with recovery from disturbance. The appearance and degree of negative skew presents a new diagnostic for quantifying system resilience and impacts from exogenous forcing on ecosystem communities.
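The diagnostic above amounts to the sign of the sample skewness of node-connection counts; here is a minimal sketch with made-up connection distributions, not the paper's diatom data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical node-connection counts for a little-impacted community:
# many weakly connected species plus a few hubs gives positive skew.
healthy = rng.geometric(p=0.3, size=452)

# Hypothetical stressed community: connections pile up near an upper
# bound, giving negative skew (numbers are illustrative only).
stressed = 30 - rng.geometric(p=0.3, size=452)

print(stats.skew(healthy), stats.skew(stressed))
```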
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
NASA Astrophysics Data System (ADS)
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, to verify the weak form of the efficient market hypothesis, we can use distribution tests, among others some tests of normality and/or graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e., a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]. In other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study.
Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically contain remote data points and other types of deviations from normality. This study also discusses results of simulation power studies of these tests for normality against selected alternatives. Based on the outcomes of the power simulation study, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
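The outlier sensitivity described above can be sketched with scipy's implementations of the two omnibus tests on synthetic returns (illustrative data, not the study's market series).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# IID normal daily returns, as the random-walk formulation of the weak-form
# efficient market hypothesis would imply (synthetic, for illustration).
returns = rng.normal(0.0, 0.01, size=500)

jb_stat, jb_p = stats.jarque_bera(returns)   # moment-based omnibus test
sw_stat, sw_p = stats.shapiro(returns)       # Shapiro-Wilk omnibus test

# A single extreme "remote data point" inflates the moment-based Jarque-Bera
# statistic drastically: the zero breakdown value in action.
contaminated = np.append(returns, 0.25)      # one 25-sigma outlier
jb_stat_out, jb_p_out = stats.jarque_bera(contaminated)
print(jb_stat, jb_stat_out)
```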
Continuity diaphragm for skewed continuous span precast prestressed concrete girder bridges.
DOT National Transportation Integrated Search
2004-10-01
Continuity diaphragms used on skewed bents in prestressed girder bridges cause difficulties in detailing and construction. Details for bridges with large diaphragm skew angles (>30 degrees) have not been a problem for LA DOTD. However, as the skew angl...
Approximate median regression for complex survey data with skewed response.
Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi
2016-12-01
The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.
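The Box-Cox family at the heart of the DTBS idea can be illustrated on a synthetic right-skewed outcome; the values below are a stand-in for a UI-like variable, not NHANES data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic highly right-skewed outcome standing in for a concentration-type
# variable; log-normal values are a common stand-in for such responses.
ui = rng.lognormal(mean=4.0, sigma=0.8, size=2_000)

# Box-Cox transform h(y; lam) = (y**lam - 1)/lam (log(y) when lam == 0);
# scipy picks lam by maximum likelihood. The DTBS approach applies a
# transformation of this type to both sides of the regression model.
transformed, lam = stats.boxcox(ui)
print(f"skew before={stats.skew(ui):.2f}, after={stats.skew(transformed):.2f}")
```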
A Skew-t space-varying regression model for the spectral analysis of resting state brain activity.
Ismail, Salimah; Sun, Wenqi; Nathoo, Farouk S; Babul, Arif; Moiseev, Alexader; Beg, Mirza Faisal; Virji-Babul, Naznin
2013-08-01
It is known that in many neurological disorders such as Down syndrome, main brain rhythms shift their frequencies slightly, and characterizing the spatial distribution of these shifts is of interest. This article reports on the development of a Skew-t mixed model for the spatial analysis of resting state brain activity in healthy controls and individuals with Down syndrome. Time series of oscillatory brain activity are recorded using magnetoencephalography, and spectral summaries are examined at multiple sensor locations across the scalp. We focus on the mean frequency of the power spectral density, and use space-varying regression to examine associations with age, gender and Down syndrome across several scalp regions. Spatial smoothing priors are incorporated based on a multivariate Markov random field, and the markedly non-Gaussian nature of the spectral response variable is accommodated by the use of a Skew-t distribution. A range of models representing different assumptions on the association structure and response distribution is examined, and we conduct model selection using the deviance information criterion. Our analysis suggests region-specific differences between healthy controls and individuals with Down syndrome, particularly in the left and right temporal regions, and produces smoothed maps indicating the scalp topography of the estimated differences.
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. © 2012 Optical Society of America
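scipy exposes the Johnson SU family directly; the sketch below uses arbitrary shape parameters, not values fitted to any radiative transfer model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Johnson SU shape parameters: a shifts the distribution (controlling skew),
# b controls tail weight; together they span wide skewness/kurtosis ranges.
a, b = -1.0, 1.5
sample = stats.johnsonsu.rvs(a, b, size=50_000, random_state=rng)

# The first four moments are available analytically, which is how a
# moment-based fit like the one described above can be matched.
mean, var, skew, kurt = stats.johnsonsu.stats(a, b, moments="mvsk")
print(float(mean), float(var), float(skew), float(kurt))
```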
NASA Technical Reports Server (NTRS)
Kerr, R. A.
1983-01-01
In a three-dimensional simulation, higher-order derivative correlations, including skewness and flatness factors, are calculated for velocity and passive scalar fields and are compared with structures in the flow. The equations are forced to maintain steady-state turbulence and collect statistics. It is found that the scalar derivative flatness increases much faster with Reynolds number than the velocity derivative flatness, and the velocity and mixed derivative skewness do not increase with Reynolds number. Separate exponents are found for the various fourth-order velocity derivative correlations, with the vorticity flatness exponent the largest. Three-dimensional graphics show strong alignment between the vorticity, rate-of-strain, and scalar-gradient fields. The vorticity is concentrated in tubes, with the scalar gradient and the largest principal rate of strain aligned perpendicular to the tubes. Velocity spectra, in Kolmogorov variables, collapse to a single curve, and a short -5/3 spectral regime is observed.
NASA Astrophysics Data System (ADS)
Wei, Jun; Zhong, Fangyuan
Based on a comparative experiment, this paper deals with the use of tangentially skewed rotor blades in an axial-flow fan. Comparison of the overall performance of the fan with a skewed-blade rotor and with a radial-blade rotor shows that the skewed blades operate more efficiently than the radial blades, especially at low volume flows. Meanwhile, a decrease in the pressure rise and flow rate of the axial-flow fan with skewed rotor blades is found. The rotor-stator interaction noise and broadband noise of the axial-flow fan are reduced with skewed rotor blades. Forward-skewed blades tend to reduce the accumulation of the blade boundary layer in the tip region resulting from the effect of centrifugal forces. The turning of streamlines from the outer radius region into the inner radius region in the blade passages, due to the radial component of the blade forces of skewed blades, is the main reason for the decrease in pressure rise and flow rate.
Lactobacilli Activate Human Dendritic Cells that Skew T Cells Toward T Helper 1 Polarization
2005-01-06
Species Modulate the Phenotype and Function of MDCs. Previous studies have shown that Lactobacillus plantarum and Lactobacillus rhamnosus can induce...cell immune responses at both systemic and mucosal sites. Many Lactobacillus species are normal members of the human gut microflora and most are regarded...several well defined strains, representing three species of Lactobacillus on human myeloid DCs (MDCs) and found that they modulated the phenotype and
The Shock and Vibration Digest, Volume 16, Number 10
1984-10-01
shaped, and a general polygonal-shaped membrane without symmetry. They also derived, with the help...Fourier expansion-collocation method and the finite...geometry is not applicable; therefore, not much work on the dynamic behavior of skew-...a Fourier sine series expansion technique. The method was applied...particular mode are obtained. This normal mode expansion approach has recently been used in a series of...form of deflection surface. The stability of motion
DNA Asymmetric Strand Bias Affects the Amino Acid Composition of Mitochondrial Proteins
Min, Xiang Jia; Hickey, Donal A.
2007-01-01
Variations in GC content between genomes have been extensively documented. Genomes with comparable GC contents can, however, still differ in the apportionment of the G and C nucleotides between the two DNA strands. This asymmetric strand bias is known as GC skew. Here, we have investigated the impact of differences in nucleotide skew on the amino acid composition of the encoded proteins. We compared orthologous genes between animal mitochondrial genomes that show large differences in GC and AT skews. Specifically, we compared the mitochondrial genomes of mammals, which are characterized by a negative GC skew and a positive AT skew, to those of flatworms, which show the opposite skews for both GC and AT base pairs. We found that the mammalian proteins are highly enriched in amino acids encoded by CA-rich codons (as predicted by their negative GC and positive AT skews), whereas their flatworm orthologs were enriched in amino acids encoded by GT-rich codons (also as predicted from their skews). We found that these differences in mitochondrial strand asymmetry (measured as GC and AT skews) can have very large, predictable effects on the composition of the encoded proteins. PMID:17974594
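GC and AT skew are simple strand-composition ratios; a minimal sketch follows (the sequence is made up, not a real mitochondrial genome).

```python
# GC skew = (G - C)/(G + C); AT skew = (A - T)/(A + T). Negative GC skew with
# positive AT skew corresponds to the mammalian pattern described above.
def strand_skews(seq: str) -> tuple[float, float]:
    seq = seq.upper()
    g, c, a, t = (seq.count(base) for base in "GCAT")
    return (g - c) / (g + c), (a - t) / (a + t)

# A C- and A-rich strand, mimicking the mammalian mitochondrial pattern.
gc_skew, at_skew = strand_skews("CCACCATGACCAACATAACT")
print(gc_skew, at_skew)
```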
Selection on skewed characters and the paradox of stasis
Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel
2018-01-01
Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modelling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we exemplify here with avian laying date, repeatedly described as more evolutionarily stable than expected, so this skewness should be accounted for when investigating evolutionary dynamics in the wild. PMID:28921508
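The first claim can be checked numerically: with a right-skewed trait sitting at its optimum under Gaussian stabilizing selection, the Lande-Arnold directional gradient comes out negative, i.e. opposite to the skew. This is a toy simulation, not the paper's model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Trait centred on its optimum but with right-skewed (mean-centred lognormal)
# environmental deviations; all numbers are purely illustrative.
n = 100_000
z = rng.lognormal(0.0, 0.5, n)
z -= z.mean()                     # mean phenotype sits exactly at the optimum

# Gaussian stabilizing selection around the optimum with width omega.
omega = 1.0
fitness = np.exp(-z**2 / (2 * omega**2))
rel_fitness = fitness / fitness.mean()

# Lande-Arnold directional selection gradient:
# beta = cov(relative fitness, trait) / var(trait).
beta = np.cov(rel_fitness, z)[0, 1] / z.var(ddof=1)
print(f"trait skew={stats.skew(z):.2f}, beta={beta:.3f}")
```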
A multicenter examination and strategic revisions of the Yale Global Tic Severity Scale.
McGuire, Joseph F; Piacentini, John; Storch, Eric A; Murphy, Tanya K; Ricketts, Emily J; Woods, Douglas W; Walkup, John W; Peterson, Alan L; Wilhelm, Sabine; Lewin, Adam B; McCracken, James T; Leckman, James F; Scahill, Lawrence
2018-05-08
To examine the internal consistency and distribution of the Yale Global Tic Severity Scale (YGTSS) scores to inform modification of the measure. This cross-sectional study included 617 participants with a tic disorder (516 children and 101 adults), who completed an age-appropriate diagnostic interview and the YGTSS to evaluate tic symptom severity. The distributions of scores on YGTSS dimensions were evaluated for normality and skewness. For dimensions that were skewed across motor and phonic tics, a modified Delphi consensus process was used to revise selected anchor points. Children and adults had similar clinical characteristics, including tic symptom severity. All participants were examined together. Strong internal consistency was identified for the YGTSS Motor Tic score (α = 0.80), YGTSS Phonic Tic score (α = 0.87), and YGTSS Total Tic score (α = 0.82). The YGTSS Total Tic and Impairment scores exhibited relatively normal distributions. Several subscales and individual item scales departed from a normal distribution. Higher scores were more often used on the Motor Tic Number, Frequency, and Intensity dimensions and the Phonic Tic Frequency dimension. By contrast, lower scores were more often used on Motor Tic Complexity and Interference, and Phonic Tic Number, Intensity, Complexity, and Interference. The YGTSS exhibits good internal consistency across children and adults. The parallel findings across Motor and Phonic Frequency, Complexity, and Interference dimensions prompted minor revisions to the anchor point description to promote use of the full range of scores in each dimension. Specific minor revisions to the YGTSS Phonic Tic Symptom Checklist were also proposed. © 2018 American Academy of Neurology.
Yin, Ping; Xiong, Hua; Liu, Yi; Sah, Shambhu K; Zeng, Chun; Wang, Jingjie; Li, Yongmei; Hong, Nan
2018-01-01
To investigate the application value of using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) with an extended Tofts linear model for relapsing-remitting multiple sclerosis (RRMS) and its correlation with expanded disability status scale (EDSS) scores and disease duration. Thirty patients with multiple sclerosis (MS) underwent conventional magnetic resonance imaging (MRI) and DCE-MRI with a 3.0 Tesla MR scanner. An extended Tofts linear model was used to quantitatively measure MR imaging biomarkers. The histogram parameters and correlation among imaging biomarkers, EDSS scores, and disease duration were also analyzed. The MR imaging biomarkers volume transfer constant (Ktrans), volume of the extravascular extracellular space per unit volume of tissue (Ve), fractional plasma volume (Vp), cerebral blood flow (CBF), and cerebral blood volume (CBV) of contrast-enhancing (CE) lesions were significantly higher (P < 0.05) than those of nonenhancing (NE) lesions and normal-appearing white matter (NAWM) regions. The skewness of the Ve value in CE lesions was closer to a normal distribution. There was no significant correlation among the biomarkers with the EDSS scores and disease duration (P > 0.05). Our study demonstrates that DCE-MRI with the extended Tofts linear model can measure the permeability and perfusion characteristics in MS lesions and in NAWM regions. The Ktrans, Ve, Vp, CBF, and CBV of CE lesions were significantly higher than those of NE lesions. The skewness of the Ve value in CE lesions was closer to a normal distribution, indicating that the histogram can be helpful in distinguishing the pathology of MS lesions.
Somanath, Keerthan; Mau, Ted
2016-11-01
(1) To develop an automated algorithm to analyze electroglottographic (EGG) signal in continuous dysphonic speech, and (2) to identify EGG waveform parameters that correlate with the auditory-perceptual quality of strain in the speech of patients with adductor spasmodic dysphonia (ADSD). Software development with application in a prospective controlled study. EGG was recorded from 12 normal speakers and 12 subjects with ADSD reading excerpts from the Rainbow Passage. Data were processed by a new algorithm developed with the specific goal of analyzing continuous dysphonic speech. The contact quotient, pulse width, a new parameter peak skew, and various contact closing slope quotient and contact opening slope quotient measures were extracted. EGG parameters were compared between normal and ADSD speech. Within the ADSD group, intra-subject comparison was also made between perceptually strained syllables and unstrained syllables. The opening slope quotient SO7525 distinguished strained syllables from unstrained syllables in continuous speech within individual subjects with ADSD. The standard deviations, but not the means, of contact quotient, EGGW50, peak skew, and SO7525 were different between normal and ADSD speakers. The strain-stress pattern in continuous speech can be visualized as color gradients based on the variation of EGG parameter values. EGG parameters may provide a within-subject measure of vocal strain and serve as a marker for treatment response. The addition of EGG to multidimensional assessment may lead to improved characterization of the voice disturbance in ADSD. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Somanath, Keerthan; Mau, Ted
2016-01-01
Objectives (1) To develop an automated algorithm to analyze electroglottographic (EGG) signal in continuous, dysphonic speech, and (2) to identify EGG waveform parameters that correlate with the auditory-perceptual quality of strain in the speech of patients with adductor spasmodic dysphonia (ADSD). Study Design Software development with application in a prospective controlled study. Methods EGG was recorded from 12 normal speakers and 12 subjects with ADSD reading excerpts from the Rainbow Passage. Data were processed by a new algorithm developed with the specific goal of analyzing continuous dysphonic speech. The contact quotient (CQ), pulse width (EGGW), a new parameter peak skew, and various contact closing slope quotient (SC) and contact opening slope quotient (SO) measures were extracted. EGG parameters were compared between normal and ADSD speech. Within the ADSD group, intra-subject comparison was also made between perceptually strained syllables and unstrained syllables. Results The opening slope quotient SO7525 distinguished strained syllables from unstrained syllables in continuous speech within individual ADSD subjects. The standard deviations, but not the means, of CQ, EGGW50, peak skew, and SO7525 were different between normal and ADSD speakers. The strain-stress pattern in continuous speech can be visualized as color gradients based on the variation of EGG parameter values. Conclusions EGG parameters may provide a within-subject measure of vocal strain and serve as a marker for treatment response. The addition of EGG to multi-dimensional assessment may lead to improved characterization of the voice disturbance in ADSD. PMID:26739857
Frequency distribution of lithium in leaves of Lycium andersonii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romney, E.M.; Wallace, A.; Kinnear, J.
1977-01-01
Lycium andersonii A. Gray is an accumulator of Li. Assays were made of 200 samples of it collected from six different locations within the Northern Mojave Desert. Mean concentrations of Li varied from location to location and tended not to follow a log-normal distribution, and to follow a normal distribution only poorly. There was some negative skewness to the log-normal distribution which did exist. The results imply that the variation in accumulation of Li depends upon native supply of Li. Possibly the Li supply and the ability of L. andersonii plants to accumulate it are both log-normally distributed. The mean leaf concentration of Li in all locations was 29 μg/g, but the maximum was 166 μg/g.
ERIC Educational Resources Information Center
Spencer, Sarah V.; Hawk, Larry W., Jr.; Richards, Jerry B.; Shiels, Keri; Pelham, William E., Jr.; Waxmonsky, James G.
2009-01-01
Recent research has suggested that intra-individual variability in reaction time (RT) distributions of children with ADHD is characterized by a particularly large rightward skew that may reflect lapses in attention. The purpose of the study was to provide the first randomized, placebo-controlled test of the effects of the stimulant methylphenidate…
Gluten-containing grains skew gluten assessment in oats due to sample grind non-homogeneity.
Fritz, Ronald D; Chen, Yumin; Contreras, Veronica
2017-02-01
Oats are easily contaminated with gluten-rich kernels of wheat, rye and barley. These contaminants are like gluten 'pills', shown here to skew gluten analysis results. Using R-Biopharm R5 ELISA, we quantified gluten in gluten-free oatmeal servings from an in-market survey. For samples with a 5-20ppm reading on a first test, replicate analyses provided results ranging <5ppm to >160ppm. This suggests sample grinding may inadequately disperse gluten to allow a single accurate gluten assessment. To ascertain this, and characterize the distribution of 0.25-g gluten test results for kernel contaminated oats, twelve 50g samples of pure oats, each spiked with a wheat kernel, showed that 0.25g test results followed log-normal-like distributions. With this, we estimate probabilities of mis-assessment for a 'single measure/sample' relative to the <20ppm regulatory threshold, and derive an equation relating the probability of mis-assessment to sample average gluten content. Copyright © 2016 Elsevier Ltd. All rights reserved.
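The mis-assessment probability this abstract derives can be sketched under a simple log-normal assumption. The parameter values below are illustrative, not the paper's fitted ones: given a sample whose true mean gluten content exceeds 20 ppm, the chance that a single 0.25 g test still reads below the threshold is

```python
import math
from statistics import NormalDist

def p_below_threshold(mean_ppm, gsd, threshold=20.0):
    """P(a single lognormal test result < threshold).

    mean_ppm: true (arithmetic) mean gluten content of the sample
    gsd: geometric standard deviation of single 0.25 g test results
    """
    sigma = math.log(gsd)
    # lognormal with arithmetic mean m has mu = ln(m) - sigma^2 / 2
    mu = math.log(mean_ppm) - sigma ** 2 / 2
    return NormalDist().cdf((math.log(threshold) - mu) / sigma)
```

For example, a sample averaging 40 ppm with a geometric standard deviation of 3 would still pass a single <20 ppm check nearly half the time under these assumptions.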
Higher aluminum concentration in Alzheimer's disease after Box-Cox data transformation.
Rusina, Robert; Matěj, Radoslav; Kašparová, Lucie; Kukal, Jaromír; Urban, Pavel
2011-11-01
Evidence regarding the role of mercury and aluminum in the pathogenesis of Alzheimer's disease (AD) remains controversial. The aims of our project were to investigate the content of the selected metals in brain tissue samples and the use of a specific mathematical transform to eliminate the disadvantage of a strong positive skew in the original data distribution. In this study, we used atomic absorption spectrophotometry to determine mercury and aluminum concentrations in the hippocampus and associative visual cortex of 29 neuropathologically confirmed AD and 27 age-matched controls. The Box-Cox data transformation was used for statistical evaluation. AD brains had higher mean aluminum concentrations in the hippocampus than controls (0.357 vs. 0.090 μg/g; P = 0.039) after data transformation. Results for mercury were not significant. Original data regarding microelement concentrations are heavily skewed and do not pass the normality test in general. A Box-Cox transformation can eliminate this disadvantage and allow parametric testing.
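A minimal sketch of the Box-Cox step on toy, positively skewed concentration data (the data are invented, not the study's measurements), assuming SciPy's `stats.boxcox`, which returns the transformed data together with the maximum-likelihood λ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
aluminum = rng.lognormal(mean=-1.5, sigma=0.8, size=60)   # toy, positively skewed

# Box-Cox family: y = (x**lam - 1) / lam for lam != 0, log(x) at lam == 0;
# boxcox() picks lam by maximum likelihood
transformed, lam_hat = stats.boxcox(aluminum)

# The transformation should pull the sample skewness toward zero
print(stats.skew(aluminum), stats.skew(transformed))
```

After this step, parametric tests on the transformed concentrations become defensible, which is the motivation given in the abstract.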
Risk based approach for design and optimization of stomach specific delivery of rifampicin.
Vora, Chintan; Patadia, Riddhish; Mittal, Karan; Mashru, Rajashree
2013-10-15
The research envisaged focuses on risk management approach for better recognizing the risks, ways to mitigate them and propose a control strategy for the development of rifampicin gastroretentive tablets. Risk assessment using failure mode and effects analysis (FMEA) was done to depict the effects of specific failure modes related to respective formulation/process variable. A Box-Behnken design was used to investigate the effect of amount of sodium bicarbonate (X1), pore former HPMC (X2) and glyceryl behenate (X3) on percent drug release at 1st hour (Q1), 4th hour (Q4), 8th hour (Q8) and floating lag time (min). Main effects and interaction plots were generated to study effects of variables. Selection of the optimized formulation was done using desirability function and overlay contour plots. The optimized formulation exhibited Q1 of 20.9%, Q4 of 59.1%, Q8 of 94.8% and floating lag time of 4.0 min. Akaike information criteria and Model selection criteria revealed that the model was best described by Korsmeyer-Peppas power law. The residual plots demonstrated no existence of non-normality, skewness or outliers. The composite desirability for optimized formulation computed using equations and software were 0.84 and 0.86 respectively. FTIR, DSC and PXRD studies ruled out drug polymer interaction due to thermal treatment. Copyright © 2013 Elsevier B.V. All rights reserved.
Selection on skewed characters and the paradox of stasis.
Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel
2017-11-01
Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modeling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date - repeatedly described as more evolutionarily stable than expected - so this skewness should be accounted for when investigating evolutionary dynamics in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Reward skewness coding in the insula independent of probability and loss
Tobler, Philippe N.
2011-01-01
Rewards in the natural environment are rarely predicted with complete certainty. Uncertainty relating to future rewards has typically been defined as the variance of the potential outcomes. However, the asymmetry of predicted reward distributions, known as skewness, constitutes a distinct but neuroscientifically underexplored risk term that may also have an impact on preference. By changing only reward magnitudes, we study skewness processing in equiprobable ternary lotteries involving only gains and constant probabilities, thus excluding probability distortion or loss aversion as mechanisms for skewness preference formation. We show that individual preferences are sensitive to not only the mean and variance but also to the skewness of predicted reward distributions. Using neuroimaging, we show that the insula, a structure previously implicated in the processing of reward-related uncertainty, responds to the skewness of predicted reward distributions. Some insula responses increased in a monotonic fashion with skewness (irrespective of individual skewness preferences), whereas others were similarly elevated to both negative and positive as opposed to no reward skew. These data support the notion that the asymmetry of reward distributions is processed in the brain and, taken together with replicated findings of mean coding in the striatum and variance coding in the cingulate, suggest that the brain codes distinct aspects of reward distributions in a distributed fashion. PMID:21849610
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
Classical fragile-X phenotype in a female infant disclosed by comprehensive genomic studies.
Jorge, Paula; Garcia, Elsa; Gonçalves, Ana; Marques, Isabel; Maia, Nuno; Rodrigues, Bárbara; Santos, Helena; Fonseca, Jacinta; Soares, Gabriela; Correia, Cecília; Reis-Lima, Margarida; Cirigliano, Vincenzo; Santos, Rosário
2018-05-10
We describe a female infant with Fragile-X syndrome, with a fully expanded FMR1 allele and preferential inactivation of the homologous X-chromosome carrying a de novo deletion. This unusual and rare case demonstrates the importance of a detailed genomic approach, the absence of which could be misguiding, and calls for reflection on the current clinical and diagnostic workup for developmental disabilities. We present a female infant, referred for genetic testing due to psychomotor developmental delay without specific dysmorphic features or relevant family history. FMR1 mutation screening revealed a methylated full mutation and a normal but inactive FMR1 allele, which led to further investigation. Complete skewing of X-chromosome inactivation towards the paternally-inherited normal-sized FMR1 allele was found. No pathogenic variants were identified in the XIST promoter. Microarray analysis revealed a 439 kb deletion at Xq28, in a region known to be associated with extreme skewing of X-chromosome inactivation. Overall results enable us to conclude that the developmental delay is the cumulative result of a methylated FMR1 full mutation on the active X-chromosome and the inactivation of the other homologue carrying the de novo 439 kb deletion. Our findings should be taken into consideration in future guidelines for the diagnostic workup of intellectual disabilities, particularly in female infants.
Ogle, K.M.; Lee, R.W.
1994-01-01
Radon-222 activity was measured for 27 water samples from streams, an alluvial aquifer, bedrock aquifers, and a geothermal system, in and near the 510-square-mile area of Owl Creek Basin, north-central Wyoming. Summary statistics of the radon-222 activities are compiled. For 16 stream-water samples, the arithmetic mean radon-222 activity was 20 pCi/L (picocuries per liter), geometric mean activity was 7 pCi/L, harmonic mean activity was 2 pCi/L and median activity was 8 pCi/L. The standard deviation of the arithmetic mean is 29 pCi/L. The activities in the stream-water samples ranged from 0.4 to 97 pCi/L. The histogram of stream-water samples is left-skewed when compared to a normal distribution. For 11 ground-water samples, the arithmetic mean radon-222 activity was 486 pCi/L, geometric mean activity was 280 pCi/L, harmonic mean activity was 130 pCi/L and median activity was 373 pCi/L. The standard deviation of the arithmetic mean is 500 pCi/L. The activity in the ground-water samples ranged from 25 to 1,704 pCi/L. The histogram of ground-water samples is left-skewed when compared to a normal distribution. (USGS)
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
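A minimal KDE sketch in the spirit of the comparison above, using SciPy's `gaussian_kde` with Scott's and Silverman's rules on a bimodal target (the Sheather-Jones plug-in is not built into SciPy, so it is omitted here; the data are simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Bimodal target density: equal mixture of two well-separated normals
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])

grid = np.linspace(-4, 4, 401)
kde_scott = stats.gaussian_kde(data, bw_method="scott")(grid)
kde_silverman = stats.gaussian_kde(data, bw_method="silverman")(grid)
```

Plotting both estimates against the true mixture density shows how the bandwidth rule controls whether the two modes are resolved or smoothed away, which is the effect the simulations in this article quantify.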
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal-theory-based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%) with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates with negative biases as large as -2.3%, with absolute bias decreasing with the shape parameter. These results were consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase.
Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in the case of screening immunogenicity assays will not meet the minimum 5% false positive target as proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
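The gamma-based cut point described above can be sketched with SciPy's three-parameter fit (shape, location, scale): fit the drug-naïve distribution, then take its 95th percentile as the screening cut point targeting a 5% false-positive rate. The drug-naïve signals here are simulated, not assay data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Toy drug-naive screening signals: unimodal, positively skewed, offset baseline
naive = rng.gamma(shape=3.0, scale=0.05, size=500) + 0.8

a, loc, scale = stats.gamma.fit(naive)               # 3-parameter gamma fit
cut_point = stats.gamma.ppf(0.95, a, loc=loc, scale=scale)

# Observed false-positive rate on the naive population itself
fpr = np.mean(naive > cut_point)
```

When the gamma family matches the data, the observed false-positive rate lands near the 5% target, which is the accuracy advantage the abstract reports over normal and log-normal fits.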
The Chaotic Light Curves of Accreting Black Holes
NASA Technical Reports Server (NTRS)
Kazanas, Demosthenes
2007-01-01
We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals and the stochastic nature of the Comptonization process in converting these soft photons to the observed high energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding Power Spectral Densities (PSD), autocorrelation functions, time skewness of the light curves and time lags between the light curves of the sources at different photon energies and compare our results to observation. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifest by the autocorrelation and time skewness) and also to their PSDs and time lags, by producing most of the variability power at time scales ≳ a few seconds, while at the same time allowing for shots of a few msec in duration, in accordance with observation. We suggest that refinement of this type of model along with spectral and phase lag information can be used to probe the structure of this class of high energy sources.
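The shot-noise picture in this abstract (photon injections at Poisson-like times producing skewed, red-noise light curves) can be caricatured in a few lines; the flare time constant, rate, and sampling below are arbitrary, not fitted to any BHC:

```python
import numpy as np

rng = np.random.default_rng(2)

# Shot-noise light curve: exponential flares injected at Poisson-like times
n, dt = 4096, 0.01                          # samples and time step (seconds)
t = np.arange(n) * dt
lc = np.zeros(n)
for t0 in rng.uniform(0, n * dt, size=200):  # uniform times ~ Poisson process
    # clip keeps exp() bounded before the flare; the mask zeroes it out
    lc += np.exp(-np.clip(t - t0, 0.0, None) / 0.05) * (t >= t0)

# Power spectral density via a plain periodogram
freq = np.fft.rfftfreq(n, dt)
psd = np.abs(np.fft.rfft(lc - lc.mean())) ** 2
```

Such a train of exponential flares has positive time skewness and a power spectrum that falls off above the inverse flare time scale, qualitatively like the PSDs the model fits.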
A statistical evaluation of non-ergodic variogram estimators
Curriero, F.C.; Hohn, M.E.; Liebhold, A.M.; Lele, S.R.
2002-01-01
Geostatistics is a set of statistical techniques that is increasingly used to characterize spatial dependence in spatially referenced ecological data. A common feature of geostatistics is predicting values at unsampled locations from nearby samples using the kriging algorithm. Modeling spatial dependence in sampled data is necessary before kriging and is usually accomplished with the variogram and its traditional estimator. Other types of estimators, known as non-ergodic estimators, have been used in ecological applications. Non-ergodic estimators were originally suggested as a method of choice when sampled data are preferentially located and exhibit a skewed frequency distribution. Preferentially located samples can occur, for example, when areas with high values are sampled more intensely than other areas. In earlier studies the visual appearance of variograms from traditional and non-ergodic estimators were compared. Here we evaluate the estimators' relative performance in prediction. We also show algebraically that a non-ergodic version of the variogram is equivalent to the traditional variogram estimator. Simulations, designed to investigate the effects of data skewness and preferential sampling on variogram estimation and kriging, showed the traditional variogram estimator outperforms the non-ergodic estimators under these conditions. We also analyzed data on carabid beetle abundance, which exhibited large-scale spatial variability (trend) and a skewed frequency distribution. Detrending data followed by robust estimation of the residual variogram is demonstrated to be a successful alternative to the non-ergodic approach.
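The traditional (Matheron) variogram estimator referred to above can be sketched as follows; the toy coordinates and values are invented, and carry a smooth spatial trend so the empirical variogram rises with lag:

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Matheron's classical estimator: gamma(h) = half the mean squared
    difference over point pairs whose separation falls in each lag bin."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean() / 2
                     for lo, hi in zip(bins[:-1], bins[1:])])

# Toy demo: a smooth spatial trend yields a variogram that grows with lag
rng = np.random.default_rng(5)
coords = rng.uniform(0, 1, size=(200, 2))
values = coords.sum(axis=1) + rng.normal(0, 0.01, size=200)
gamma = empirical_variogram(coords, values, np.array([0.0, 0.2, 0.4, 0.6]))
```

As in the study, detrending such data first and estimating the variogram of the residuals would remove most of this lag-dependent growth.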
ERIC Educational Resources Information Center
Tabor, Josh
2010-01-01
On the 2009 AP® Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
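Two of the most common candidates for such a statistic are the standardized third central moment and Pearson's median-based coefficient; a minimal sketch of both (function names are ours, not the exam's):

```python
import numpy as np

def moment_skewness(x):
    """g1: standardized third central moment."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    return (dev ** 3).mean() / ((dev ** 2).mean() ** 1.5)

def pearson_median_skewness(x):
    """Pearson's second coefficient: 3 * (mean - median) / std."""
    x = np.asarray(x, dtype=float)
    return 3 * (x.mean() - np.median(x)) / x.std()
```

Both return positive values for right-skewed samples and zero for symmetric ones, but they differ in sensitivity to outliers, which is exactly the kind of behaviour the paper compares across skewed populations.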
Field, J; Solís, C R; Queller, D C; Strassmann, J E
1998-06-01
Recent models postulate that the members of a social group assess their ecological and social environments and agree a "social contract" of reproductive partitioning (skew). We tested social contracts theory by using DNA microsatellites to measure skew in 24 cofoundress associations of paper wasps, Polistes bellicosus. In contrast to theoretical predictions, there was little variation in cofoundress relatedness, and relatedness either did not predict skew or was negatively correlated with it; the dominant/subordinate size ratio, assumed to reflect relative fighting ability, did not predict skew; and high skew was associated with decreased aggression by the rank 2 subordinate toward the dominant. High skew was associated with increased group size. A difficulty with measuring skew in real systems is the frequent changes in group composition that commonly occur in social animals. In P. bellicosus, 61% of egg layers and an unknown number of non-egg layers were absent by the time nests were collected. The social contracts models provide an attractive general framework linking genetics, ecology, and behavior, but there have been few direct tests of their predictions. We question assumptions underlying the models and suggest directions for future research.
NASA Astrophysics Data System (ADS)
Bayat, M.; Daneshjoo, F.; Nisticò, N.
2017-01-01
In this study the probable seismic behavior of skewed bridges with continuous decks under earthquake excitations from different directions is investigated. A 45° skewed bridge is studied. A suite of 20 records is used to perform an Incremental Dynamic Analysis (IDA) for fragility curves. Four different earthquake directions have been considered: -45°, 0°, 22.5°, 45°. A sensitivity analysis on different spectral intensity measures is presented; efficiency and practicality of different intensity measures have been studied. The fragility curves obtained indicate that the critical direction for skewed bridges is the skew direction as well as the longitudinal direction. The study shows the importance of finding the most critical earthquake in understanding and predicting the behavior of skewed bridges.
NASA Astrophysics Data System (ADS)
Smith, N.; Blewitt, D.; Hebert, L. B.
2015-12-01
In coordination with oil and gas operators, we developed a high resolution (< 1 min) simulation of temporal variability in well-pad oil and gas emissions over a year. We include routine emissions from condensate tanks, dehydrators, pneumatic devices, fugitive leaks and liquids unloading. We explore the variability in natural gas emissions from these individual well-pad sources, and find that routine short-term episodic emissions such as tank flashing and liquids unloading result in the appearance of a skewed, or 'fat-tail' distribution of emissions, from an individual well-pad over time. Additionally, we explore the expected variability in emissions from multiple wells with different raw gas composition, gas/liquids production volumes and control equipment. Differences in well-level composition, production volume and control equipment translate into differences in well-level emissions leading to a fat-tail distribution of emissions in the absence of operational upsets. Our results have several implications for recent studies focusing on emissions from oil and gas sources. Time scale of emission estimates are important and have important policy implications. Fat tail distributions may not be entirely driven by avoidable mechanical failures, and are expected to occur under routine operational conditions from short-duration emissions (e.g., tank flashing, liquid unloading). An understanding of the expected distribution of emissions for a particular population of wells is necessary to evaluate whether the observed distribution is more skewed than expected. Temporal variability in well-pad emissions make comparisons to annual average emissions inventories difficult and may complicate the interpretation of long-term ambient fenceline monitoring data. Sophisticated change detection algorithms will be necessary to identify when true operational upsets occur versus routine short-term emissions.
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N.
2006-04-18
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. Each of the plurality of delayed signals is compared to a reference signal to detect changes in the skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in the detected skew.
Investigating the detection of multi-homed devices independent of operating systems
2017-09-01
Timestamp data was used to estimate clock skews using linear regression and linear optimization methods. Analysis revealed that detection depends on the consistency of the estimated clock skew. Through vertical testing, it was also shown that clock skew consistency depends on the installed...
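The linear-regression step mentioned above can be sketched as follows. The data are synthetic: the 50 ppm skew, noise level, and sample count are assumptions for illustration, not values from the study. The slope of a least-squares line fitted to (measurement time, observed clock offset) pairs estimates the device's clock skew.

```python
import numpy as np

rng = np.random.default_rng(1)

true_skew = 50e-6                              # 50 ppm clock skew (assumed for illustration)
t = np.sort(rng.uniform(0, 3600, 500))         # measurement times over one hour (s)
# Observed offset = skew * elapsed time + fixed offset + network jitter
offset = true_skew * t + 0.002 + rng.normal(0, 1e-4, t.size)

# Least-squares fit: the slope estimates the clock skew
slope, intercept = np.polyfit(t, offset, 1)
print(f"estimated skew: {slope * 1e6:.1f} ppm")
```

A multi-homed device should yield a consistent slope across its interfaces, which is the consistency property the detection method depends on.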
Saini, Jasmine; Hershberg, Uri
2015-01-01
The exceptional ability of B cells to diversify through somatic mutation and improve affinity of the repertoire towards the antigens is the cornerstone of adaptive immunity. Somatic mutation is not evenly distributed and exhibits certain micro-sequence specificities. We show here that the combination of somatic mutation targeting and the codon usage in human B cell receptor (BCR) Variable (V) genes creates expected patterns of mutation and post-mutation changes that are focused on their complementarity determining regions (CDR). T cell V genes are also skewed in targeting mutations but to a lesser extent and are lacking the codon usage bias observed in BCRs. This suggests that the observed skew in T cell receptors is due to their amino acid usage, which is similar to that of BCRs. The mutation targeting and the codon bias allow B cell CDRs to diversify by specifically accumulating nonconservative changes. We counted the distribution of mutations to CDR in 4 different human datasets. In all four cases we found that the number of actual mutations in the CDR correlated significantly with the V gene mutation biases to the CDR predicted by our models. Finally, it appears that the mutation bias in V genes indeed relates to their long-term survival in actual human repertoires. We observed that resting repertoires of B cells overexpressed V genes that were especially biased towards focused mutation and change in the CDR. This bias in V gene usage was somewhat relaxed at the height of the immune response to a vaccine, presumably because of the need for a wider diversity in a primary response. However, older patients did not retain this flexibility and were biased towards using only highly skewed V genes at all stages of their response. PMID:25660968
Saini, Jasmine; Hershberg, Uri
2015-05-01
The exceptional ability of B cells to diversify through somatic mutation and improve affinity of the repertoire toward the antigens is the cornerstone of adaptive immunity. Somatic mutation is not evenly distributed and exhibits certain micro-sequence specificities. We show here that the combination of somatic mutation targeting and the codon usage in human B cell receptor (BCR) Variable (V) genes creates expected patterns of mutation and post-mutation changes that are focused on their complementarity determining regions (CDR). T cell V genes are also skewed in targeting mutations but to a lesser extent and are lacking the codon usage bias observed in BCRs. This suggests that the observed skew in T cell receptors is due to their amino acid usage, which is similar to that of BCRs. The mutation targeting and the codon bias allow B cell CDRs to diversify by specifically accumulating nonconservative changes. We counted the distribution of mutations to CDR in 4 different human datasets. In all four cases we found that the number of actual mutations in the CDR correlated significantly with the V gene mutation biases to the CDR predicted by our models. Finally, it appears that the mutation bias in V genes indeed relates to their long-term survival in actual human repertoires. We observed that resting repertoires of B cells overexpressed V genes that were especially biased toward focused mutation and change in the CDR. This bias in V gene usage was somewhat relaxed at the height of the immune response to a vaccine, presumably because of the need for a wider diversity in a primary response. However, older patients did not retain this flexibility and were biased toward using only highly skewed V genes at all stages of their response. Copyright © 2015 Elsevier Ltd. All rights reserved.
Spin Transparent Siberian Snake And Spin Rotator With Solenoids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koop, I. A.; Otboyev, A. V.; Shatunov, P. Yu.
2007-06-13
For intermediate energies of electrons and protons, it is often more convenient to construct Siberian snakes and spin rotators using solenoidal fields. Strong coupling caused by the solenoids is suppressed by a number of skew and normal quadrupole magnets. The more complicated problem of the spin transparency of such devices can also be solved. This paper gives two examples: a spin rotator for the electron ring in the eRHIC project and a Siberian snake for the proton (antiproton) storage ring HESR, which cover the whole working energy region of each machine.
Li, Zijian
2018-08-01
To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the total nations in the world) and two international organizations, including the European Union (EU) and Codex (WHO) have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs in many commonly consumed foods and other human exposure pathways, such as soil, water, and air were not considered. Normality tests of the TMDI values set indicated that all distributions had a right skewness due to large TMDI clusters at the low end of the distribution, which were caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data set indicated that the power transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs by nations worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticide, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
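The Box-Cox step described above can be illustrated with a short sketch. The right-skewed sample below is a synthetic stand-in for the TMDI values, not the study's data; `scipy.stats.boxcox` chooses the optimal lambda by maximum likelihood, just as the abstract describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Right-skewed sample standing in for the TMDI values (illustrative, not real data)
tmdi = rng.lognormal(mean=0.0, sigma=1.0, size=500)

# Box-Cox power transformation; lambda (lam) is chosen by maximum likelihood
transformed, lam = stats.boxcox(tmdi)

print(f"skewness before: {stats.skew(tmdi):.2f}, after: {stats.skew(transformed):.2f}")
print(f"optimal lambda: {lam:.2f}")
```

For lognormal-like data the fitted lambda lands near zero (the log transform), and the transformed values pass ordinary normality diagnostics, which is the behavior the study reports for most of the pesticide TMDI distributions.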
Image statistics and the perception of surface gloss and lightness.
Kim, Juno; Anderson, Barton L
2010-07-01
Despite previous data demonstrating the critical importance of 3D surface geometry in the perception of gloss and lightness, I. Motoyoshi, S. Nishida, L. Sharan, and E. H. Adelson (2007) recently proposed that a simple image statistic--histogram or sub-band skew--is computed by the visual system to infer the gloss and albedo of surfaces. One key source of evidence used to support this claim was an experiment in which adaptation to skewed image statistics resulted in opponent aftereffects in observers' judgments of gloss and lightness. We report a series of adaptation experiments that were designed to assess the cause of these aftereffects. We replicated their original aftereffects in gloss but found no consistent aftereffect in lightness. We report that adaptation to zero-skew adaptors produced similar aftereffects as positively skewed adaptors, and that negatively skewed adaptors induced no reliable aftereffects. We further find that the adaptation effect observed with positively skewed adaptors is not robust to changes in mean luminance that diminish the intensity of the luminance extrema. Finally, we show that adaptation to positive skew reduces (rather than increases) the apparent lightness of light pigmentation on non-uniform albedo surfaces. These results challenge the view that the adaptation results reported by Motoyoshi et al. (2007) provide evidence that skew is explicitly computed by the visual system.
Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments
Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.
2004-01-01
The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimations at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
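As a rough illustration of the LP3 machinery involved, here is a minimal method-of-moments sketch on synthetic data. This is the simple textbook fit, not the EMA, regional-skew weighting, or low-outlier procedures the study compares; the distribution parameters and record length are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic annual peak discharges: log10 values drawn Pearson III (illustrative)
log_q = stats.pearson3.rvs(0.3, loc=3.5, scale=0.25, size=80, random_state=rng)
peaks = 10.0 ** log_q

# Method-of-moments LP3 fit to the logarithms of the peaks
x = np.log10(peaks)
mean, sd = x.mean(), x.std(ddof=1)
g = stats.skew(x, bias=False)          # station skew; B17B would weight in a regional skew

# 100-year flood (1% annual exceedance probability) via the frequency factor
k = stats.pearson3.ppf(0.99, g)        # standardized Pearson III variate
q100 = 10.0 ** (mean + k * sd)
print(f"estimated 100-year flood: {q100:,.0f} cfs")
```

The skew estimate `g` is the quantity the paper's estimators treat carefully: a noisy station skew is weighted against a regional value, and low outliers are censored before it is computed.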
Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments
NASA Astrophysics Data System (ADS)
Griffis, V. W.; Stedinger, J. R.; Cohn, T. A.
2004-07-01
The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] does as well as maximum likelihood estimations at estimating log-Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
Individual differences in loss aversion and preferences for skewed risks across adulthood.
Seaman, Kendra L; Green, Mikella A; Shu, Stephen; Samanez-Larkin, Gregory R
2018-06-01
In a previous study, we found adult age differences in the tendency to accept more positively skewed gambles (with a small chance of a large win) than other equivalent risks, or an age-related positive-skew bias. In the present study, we examined whether loss aversion explained this bias. A total of 508 healthy participants (ages 21-82) completed measures of loss aversion and skew preference. Age was not related to loss aversion. Although loss aversion was a significant predictor of gamble acceptance, it did not influence the age-related positive-skew bias. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Flow in Rotating Serpentine Coolant Passages With Skewed Trip Strips
NASA Technical Reports Server (NTRS)
Tse, David G.N.; Steuber, Gary
1996-01-01
Laser velocimetry was utilized to map the velocity field in serpentine turbine blade cooling passages with skewed trip strips. The measurements were obtained at Reynolds and Rotation numbers of 25,000 and 0.24 to assess the influence of trips, passage curvature and Coriolis force on the flow field. The interaction of the secondary flows induced by skewed trips with the passage rotation produces a swirling vortex and a corner recirculation zone. With trips skewed at +45 deg, the secondary flows remain unaltered as the cross-flow proceeds from the passage to the turn. However, the flow characteristics at these locations differ when trips are skewed at -45 deg. Changes in the flow structure are expected to augment heat transfer, in agreement with the heat transfer measurements of Johnson et al. The present results show that trips skewed at -45 deg in the outward flow passage and trips skewed at +45 deg in the inward flow passage maximize heat transfer. The present measurements were related to the heat transfer measurements of Johnson et al. to link the fluid flow and heat transfer results.
Psychosocial Factors Influencing Smokeless Tobacco Use by Teenage Military Dependents
1994-01-01
age and grade demographics both show strong bivariate relationships with the outcome...measures, we selected to use only one of these...a positive sign indicating a direct relationship and a negative sign an inverse...no impact for either gender. Attitude and other tobacco influ...by both genders...However, if the interval includes one but is highly...For both genders, the strongest explanatory variable for trial...skewed
Parrett, Charles; Veilleux, Andrea; Stedinger, J.R.; Barth, N.A.; Knifong, Donna L.; Ferris, J.C.
2011-01-01
Improved flood-frequency information is important throughout California in general and in the Sacramento-San Joaquin River Basin in particular, because of an extensive network of flood-control levees and the risk of catastrophic flooding. A key first step in updating flood-frequency information is determining regional skew. A Bayesian generalized least squares (GLS) regression method was used to derive a regional-skew model based on annual peak-discharge data for 158 long-term (30 or more years of record) stations throughout most of California. The desert areas in southeastern California had too few long-term stations to reliably determine regional skew for that hydrologically distinct region; therefore, the desert areas were excluded from the regional skew analysis for California. Of the 158 long-term stations used to determine regional skew, 145 have minimally regulated annual-peak discharges, and 13 stations are dam sites for which unregulated peak discharges were estimated from unregulated daily maximum discharge data furnished by the U.S. Army Corps of Engineers. Station skew was determined by using an expected moments algorithm (EMA) program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual peak-discharge data. The Bayesian GLS regression method previously developed was modified because of the large cross correlations among concurrent recorded peak discharges in California and the use of censored data and historical flood information with the new expected moments algorithm. In particular, to properly account for these cross-correlation problems and develop a suitable regression model and regression diagnostics, a combination of Bayesian weighted least squares and generalized least squares regression was adopted. This new methodology identified a nonlinear function relating regional skew to mean basin elevation.
The regional skew values ranged from -0.62 for a mean basin elevation of zero to 0.61 for a mean basin elevation of 11,000 feet. This relation between skew and elevation reflects the interaction of snow with rain, which increases with increased elevation. The equivalent record length for the new regional skew ranges from 52 to 65 years of record, depending upon mean basin elevation. The old regional skew map in Bulletin 17B, published by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data (1982), reported an equivalent record length of only 17 years. The newly developed regional skew relation for California was used to update flood frequency for the 158 sites used in the regional skew analysis as well as 206 selected sites in the Sacramento-San Joaquin River Basin. For these sites, annual-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years were determined on the basis of data through water year 2006. The expected moments algorithm was used for determining the magnitude and frequency of floods at gaged sites by using regional skew values and using the basic approach outlined in Bulletin 17B.
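Bulletin 17B's standard way of combining a station skew with a regional skew is an inverse-MSE weighted average, which is where an improved regional skew (with its smaller mean square error) pays off. A minimal sketch follows; the numbers are hypothetical, not values from this study.

```python
def weighted_skew(g_station, mse_station, g_regional, mse_regional):
    """Bulletin 17B inverse-MSE weighting of station and regional skew:
    the estimate with the smaller mean square error gets the larger weight."""
    return ((mse_regional * g_station + mse_station * g_regional)
            / (mse_regional + mse_station))

# Hypothetical numbers: a high-elevation basin with a short, noisy record
g_w = weighted_skew(g_station=0.9, mse_station=0.30,
                    g_regional=0.61, mse_regional=0.10)
print(f"weighted skew: {g_w:.3f}")
```

With a regional-skew MSE of 0.10 versus a station-skew MSE of 0.30, the weighted value is pulled three-quarters of the way toward the regional estimate, illustrating why a longer equivalent record length for the regional skew matters.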
A Developmental Approach to Machine Learning?
Smith, Linda B.; Slone, Lauren K.
2017-01-01
Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order – with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines. PMID:29259573
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
Iwata, H; Shiono, H; Kon, Y; Matsubara, K; Kimura, K; Kuwayama, T; Monji, Y
2008-05-01
The duration of sperm-oocyte co-incubation has been observed to affect the sex ratio of in vitro produced bovine embryos. The purpose of this study was to investigate some factors that may be responsible for the skewed sex ratio. The factors studied were selected combinations of the duration of co-incubation, the presence or absence of cumulus cells, and the level of hyaluronic acid (HA) in the culture medium. Experiment 1 examined the effect of selected combinations of different factors during the fertilization phase of in vitro oocyte culture. The factors were the nature of the sperm or its treatment, the duration of the sperm-oocyte co-incubation, and the level of hyaluronic acid in the culture medium. In experiment 2, the capacitation of frozen-thawed-Percoll-washed sperm (control), pre-incubated, and non-binding sperm was evaluated by the zona pellucida (ZP) binding assay and the hypo-osmotic swelling test (HOST). The purpose of experiment 3 was to determine the oocyte cleavage rate and sex ratio of the embryos (>5 cells) produced as a consequence of the 10 treatments used in experiment 1. In treatments 1-3 (experiments 1 and 3), COC were co-cultured with sperm for 1, 5 or 18 h. Polyspermic fertilization rose as the co-incubation period increased (1 h 6.5%, 5 h 15.9%, 18 h 41.8%; P<0.05), and the highest rate of normal fertilization was observed for 5 h culture (73.4%; P<0.05). The sex ratio was significantly (P<0.05) skewed from the expected 50:50 towards males following 1 h (64.4%) and 5 h (67.3%) co-incubation, but was not affected by 18 h incubation (52.3%). In treatment 4, sperm was pre-incubated for 1 h and cultured with COC for 5 h. Relative to control sperm, pre-incubation of sperm increased ZP binding (116 versus 180 per ZP; P<0.05) and decreased the proportion of HOST positive sperm (from 65.8% to 48.6%; P<0.05; experiment 2). Pre-incubation did not affect the rates of polyspermy, normal fertilization or the sex ratio of the embryos (experiments 1 and 3).
The oocytes used in treatments 5-10 of experiments 1 and 3 were denuded prior to fertilization. Co-incubation of denuded oocytes for 1 h (treatment 5) or 5 h (treatment 6) resulted in levels of polyspermic fertilization similar to that for treatment 2 with significantly lower levels of normal fertilization (41.7% and 52.6%, respectively; P<0.05), and the 1 h co-incubation significantly skewed (P<0.05) the proportion of male embryos to 70.0%. Denuded oocytes were fertilized for 5 h with sperm unable to bind to cumulus cells (NB sperm) in treatment 7 or those that bound to cumulus cells (B) in treatment 8. These two treatments had similar rates of polyspermic, normal and non-fertilization. However, the B sperm caused the sex ratio of the embryos to be significantly skewed to males (63.9%; P<0.05). Fertilization of denuded oocytes in medium containing hyaluronic acid (0.1 mg/ml, treatment 9; 1.0 mg/ml, treatment 10) significantly (P<0.05) reduced the incidence of polyspermic fertilization relative to treatments 2 and 6, and normal fertilization relative to treatment 2, but did not affect the sex ratio of the embryos. It was concluded that exposure of sperm to cumulus cells, either before fertilization of denuded oocytes or during the process of fertilization of complete COC, increased the proportion of male embryos produced by in vitro culture. It was hypothesized that this may be due to the capacitation state of the sperm, the cumulus-sperm interaction, and/or the ability of the sperm to bind to cumulus cells or oocytes.
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than those of the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
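For an attributes (binomial) single-sampling plan, the lot-acceptance criterion reduces to a binomial CDF, which is what traces out the plan's operating characteristic curve. A brief sketch follows; the plan parameters n, c and the AQL/LTPD values below are invented for illustration, not taken from the paper.

```python
from scipy import stats

# Attributes single-sampling plan: inspect n items, accept the lot if defects <= c
n, c = 125, 3                                   # hypothetical plan parameters

def accept_prob(p_defective, n=n, c=c):
    """Probability the lot is accepted when the true defect fraction is p
    (a point on the plan's operating characteristic curve)."""
    return stats.binom.cdf(c, n, p_defective)

# Producer's risk (Type I) at the AQL, consumer's risk (Type II) at the LTPD
aql, ltpd = 0.01, 0.06
print(f"P(accept | p=AQL)  = {accept_prob(aql):.3f}")
print(f"P(accept | p=LTPD) = {accept_prob(ltpd):.3f}")
```

Good lots (p at the AQL) are accepted with high probability while bad lots (p at the LTPD) are usually rejected; variables plans achieve a comparable OC curve with a smaller n, which is the cost comparison the paper tests empirically.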
Kannurpatti, Sridhar S; Motes, Michael A; Rypma, Bart; Biswal, Bharat B
2011-07-01
In this report we demonstrate a hemodynamic scaling method with resting-state fluctuation of amplitude (RSFA) in healthy younger and older adult subject groups. We show that RSFA correlated with breath hold (BH) responses throughout the brain in groups of younger and older subjects, that RSFA and BH performed comparably in accounting for age-related hemodynamic coupling changes, and that both yielded more veridical estimates of age-related differences in task-related neural activity. BOLD data from younger and older adults performing motor and cognitive tasks were scaled using RSFA and BH related signal changes. Scaling with RSFA and BH reduced the skew of the BOLD response amplitude distribution in each subject and reduced mean BOLD amplitude and variability in both age groups. Statistically significant differences in intrasubject amplitude variation across regions of activated cortex, and intersubject amplitude variation in regions of activated cortex, were observed between younger and older subject groups. Intra- and intersubject variability differences were mitigated after scaling. RSFA, though similar to BH in minimizing skew in the unscaled BOLD amplitude distribution, attenuated the neural activity-related BOLD amplitude significantly less than BH. The amplitude and spatial extent of group activation were lower in the older than in the younger group before and after scaling. After accounting for vascular variability differences through scaling, age-related decreases in activation volume were observed during the motor and cognitive tasks. The results suggest that RSFA-scaled data yield age-related neural activity differences during task performance with negligible effects from non-neural (i.e., vascular) sources. Copyright © 2010 Wiley-Liss, Inc.
Weak interactions, omnivory and emergent food-web properties.
Emmerson, Mark; Yearsley, Jon M
2004-02-22
Empirical studies have shown that, in real ecosystems, species-interaction strengths are generally skewed in their distribution towards weak interactions. Some theoretical work also suggests that weak interactions, especially in omnivorous links, are important for the local stability of a community at equilibrium. However, the majority of theoretical studies use uniform distributions of interaction strengths to generate artificial communities for study. We investigate the effects of the underlying interaction-strength distribution upon the return time, permanence and feasibility of simple Lotka-Volterra equilibrium communities. We show that a skew towards weak interactions promotes local and global stability only when omnivory is present. It is found that skewed interaction strengths are an emergent property of stable omnivorous communities, and that this skew towards weak interactions creates a dynamic constraint maintaining omnivory. Omnivory is more likely to occur when omnivorous interactions are skewed towards weak interactions. However, a skew towards weak interactions increases the return time to equilibrium, delays the recovery of ecosystems and hence decreases the stability of a community. When no skew is imposed, the set of stable omnivorous communities shows an emergent distribution of skewed interaction strengths. Our results apply to both local and global concepts of stability and are robust to the definition of a feasible community. These results are discussed in the light of empirical data and other theoretical studies, in conjunction with their broader implications for community assembly.
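The local-stability criterion underlying such analyses can be sketched in a few lines. The random community below is purely illustrative, not the paper's model: the community size, the exponential scale (which gives the skew toward weak interactions), and the unit self-regulation are arbitrary choices. An equilibrium is locally stable when every eigenvalue of the community (Jacobian) matrix has a negative real part.

```python
import numpy as np

rng = np.random.default_rng(4)

def is_locally_stable(J):
    """An equilibrium is locally stable iff all eigenvalues of the
    Jacobian have negative real parts."""
    return bool(np.max(np.linalg.eigvals(J).real) < 0)

S = 10                                   # number of species (arbitrary)
J = -np.eye(S)                           # self-regulation on the diagonal
# Off-diagonal interaction strengths skewed toward weak values (exponential)
# with random signs; contrast with the uniform draws common in theory papers
mask = ~np.eye(S, dtype=bool)
strengths = rng.exponential(scale=0.1, size=(S, S))
signs = rng.choice([-1.0, 1.0], size=(S, S))
J[mask] = (strengths * signs)[mask]

print(is_locally_stable(J))
```

Repeating this for ensembles drawn from exponential (skewed) versus uniform interaction-strength distributions, and with or without omnivorous link structure, is the style of experiment the paper uses to show that the stabilizing effect of weak interactions depends on omnivory.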
Sociality, mating system and reproductive skew in marmots: evidence and hypotheses.
Allainé
2000-10-05
Marmot species exhibit a great diversity of social structure, mating systems and reproductive skew. In particular, among the social species (i.e. all except Marmota monax), the yellow-bellied marmot appears quite different from the others. The yellow-bellied marmot is primarily polygynous with an intermediate level of sociality and low reproductive skew among females. In contrast, all other social marmot species are mainly monogamous, highly social and with marked reproductive skew among females. To understand the evolution of this difference in reproductive skew, I examined four possible explanations identified from reproductive skew theory. From the literature, I then reviewed evidence to investigate if marmot species differ in: (1) the ability of dominants to control the reproduction of subordinates; (2) the degree of relatedness between group members; (3) the benefit for subordinates of remaining in the social group; and (4) the benefit for dominants of retaining subordinates. I found that the optimal skew hypothesis may apply for both sets of species. I suggest that yellow-bellied marmot females may benefit from retaining subordinate females and in return have to concede them reproduction. On the contrary, monogamous marmot species may gain by suppressing the reproduction of subordinate females to maximise the efficiency of social thermoregulation, even at the risk of departure of subordinate females from the family group. Finally, I discuss scenarios for the simultaneous evolution of sociality, monogamy and reproductive skew in marmots.
Ho, Andrew D; Yu, Carol C
2015-06-01
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
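The routine distributional diagnostics the authors recommend can be sketched as a small report function. The score sample below is synthetic (a rounded, ceiling-limited normal standing in for a coarse state score scale); the function and its field names are illustrative, not the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# A coarse, ceiling-limited score scale (illustrative of state test data)
scores = np.minimum(np.round(rng.normal(70, 15, 2000)), 100)

def distributional_report(x):
    """Routine descriptive statistics to inform model selection."""
    return {
        "mean": float(np.mean(x)),
        "sd": float(np.std(x, ddof=1)),
        "skewness": float(stats.skew(x)),
        "excess_kurtosis": float(stats.kurtosis(x)),
        "n_distinct": int(np.unique(x).size),   # discreteness check
    }

report = distributional_report(scores)
print(report)
```

Nonzero skewness or excess kurtosis flags departure from normality, while a small number of distinct values flags the discreteness patterns that IRT scaling can induce.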
New approach application of data transformation in mean centering of ratio spectra method
NASA Astrophysics Data System (ADS)
Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.
2015-05-01
Most mean centering of ratio spectra (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution is assumed, and if a confidence interval includes a negative value, it is cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one frequently used transformation; as a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean was successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with those of a reported HPLC method.
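The log-transform-and-center idea above is exactly the geometric mean; a minimal sketch with hypothetical ratio values (not the paper's spectra) makes the relationship concrete.

```python
import numpy as np

# Right-skewed absorbance ratios (hypothetical values for illustration)
ratios = np.array([0.8, 0.9, 1.0, 1.2, 1.5, 2.4, 5.1])

arithmetic = ratios.mean()
# Log-transform, take the mean, back-transform: the geometric mean
geometric = np.exp(np.log(ratios).mean())

print(f"arithmetic mean: {arithmetic:.3f}")
print(f"geometric mean:  {geometric:.3f}")
```

For right-skewed data the geometric mean sits below the arithmetic mean and is far less sensitive to the large values in the tail, which is why the developed MCR method centers on it.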
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
NASA Astrophysics Data System (ADS)
Rupel, Dylan
2015-03-01
The first goal of this note is to extend the well-known Feigin homomorphisms taking quantum groups to quantum polynomial algebras. More precisely, we define generalized Feigin homomorphisms from a quantum shuffle algebra to quantum polynomial algebras which extend the classical Feigin homomorphisms along the embedding of the quantum group into said quantum shuffle algebra. In a recent work of Berenstein and the author, analogous extensions of Feigin homomorphisms from the dual Hall-Ringel algebra of a valued quiver to quantum polynomial algebras were defined. To relate these constructions, we establish a homomorphism, dubbed the quantum shuffle character, from the dual Hall-Ringel algebra to the quantum shuffle algebra which relates the generalized Feigin homomorphisms. These constructions can be compactly described by a commuting tetrahedron of maps beginning with the quantum group and terminating in a quantum polynomial algebra. The second goal in this project is to better understand the dual canonical basis conjecture for skew-symmetrizable quantum cluster algebras. In the symmetrizable types it is known that dual canonical basis elements need not have positive multiplicative structure constants, while this is still suspected to hold for skew-symmetrizable quantum cluster algebras. We propose an alternate conjecture for the symmetrizable types: the cluster monomials should correspond to irreducible characters of a KLR algebra. Indeed, the main conjecture of this note would establish this ''KLR conjecture'' for acyclic skew-symmetrizable quantum cluster algebras: that is, we conjecture that the images of rigid representations under the quantum shuffle character give irreducible characters for KLR algebras. We sketch a proof in the symmetric case giving an alternative to the proof of Kimura-Qin that all non-initial cluster variables in an acyclic skew-symmetric quantum cluster algebra are contained in the dual canonical basis. 
With these results in mind we interpret the cluster mutations directly in terms of the representation theory of the KLR algebra.
Au, W-Y; Pang, A; Lam, K K Y; Song, Y-Q; Lee, W-M; So, J C C; Kwong, Y-L
2007-10-01
To determine whether during hematopoietic stem cell transplantation (HSCT), X-chromosome inactivation (lyonization) of donor HSC might change after engraftment in recipients, the glucose-6-phosphate dehydrogenase (G6PD) gene of 180 female donors was genotyped by PCR/allele-specific primer extension, and MALDI-TOF mass spectrometry/Sequenom MassARRAY analysis. X-inactivation was determined by semiquantitative PCR for the HUMARA gene before/after HpaII digestion. X-inactivation was preserved in most cases post-HSCT, although altered skewing of lyonization might occur to either of the X-chromosomes. Among pre-HSCT clinicopathologic parameters analyzed, only recipient gender significantly affected skewing. Seven donors with normal G6PD biochemically but heterozygous for G6PD mutants were identified. Owing to lyonization changes, some donor-recipient pairs showed significantly different G6PD levels. In one donor-recipient pair, extreme lyonization affecting the wild-type G6PD allele occurred, causing biochemical G6PD deficiency in the recipient. In HSCT from asymptomatic female donors heterozygous for X-linked recessive disorders, altered lyonization might cause clinical diseases in the recipients.
Harvesting of males delays female breeding in a socially monogamous mammal; the beaver.
Parker, Howard; Rosell, Frank; Mysterud, Atle
2007-02-22
Human exploitation may skew adult sex ratios in vertebrate populations to the extent that males become limiting for normal reproduction. In polygynous ungulates, females delay breeding in heavily harvested populations, but effects are often fairly small. We would expect a stronger effect of male harvesting in species with a monogamous mating system, but no such study has been performed. We analysed the effect of harvesting males on the timing of reproduction in the obligate monogamous beaver (Castor fiber). We found a negative impact of harvesting of adult males on the timing of parturition in female beavers. The proportion of normal breeders sank from over 80%, when no males had been shot in the territories of pregnant females, to under 20%, when three males had been shot. Harvesting of males in monogamous mammals can apparently affect their normal reproductive cycle.
A Bayesian Surrogate for Regional Skew in Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Kuczera, George
1983-06-01
The problem of how to best utilize site and regional flood data to infer the shape parameter of a flood distribution is considered. One approach to this problem is given in Bulletin 17B of the U.S. Water Resources Council (1981) for the log-Pearson distribution. Here a lesser known distribution is considered, namely, the power normal which fits flood data as well as the log-Pearson and has a shape parameter denoted by λ derived from a Box-Cox power transformation. The problem of regionalizing λ is considered from an empirical Bayes perspective where site and regional flood data are used to infer λ. The distortive effects of spatial correlation and heterogeneity of site sampling variance of λ are explicitly studied with spatial correlation being found to be of secondary importance. The end product of this analysis is the posterior distribution of the power normal parameters expressing, in probabilistic terms, what is known about the parameters given site flood data and regional information on λ. This distribution can be used to provide the designer with several types of information. The posterior distribution of the T-year flood is derived. The effect of nonlinearity in λ on inference is illustrated. Because uncertainty in λ is explicitly allowed for, the understatement in confidence limits due to fixing λ (analogous to fixing log skew) is avoided. Finally, it is shown how to obtain the marginal flood distribution which can be used to select a design flood with specified exceedance probability.
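The Box-Cox power transformation at the heart of the power-normal model can be illustrated as follows; here λ is estimated by maximum likelihood on simulated flows, a sketch rather than the paper's empirical Bayes procedure:

```python
import numpy as np
from scipy import stats

# Sketch: the power-normal model assumes Box-Cox-transformed flows are
# normal. The "annual peak flow" sample below is simulated; lambda is
# fit by maximum likelihood, not the Bayesian regionalization of the paper.
rng = np.random.default_rng(1)
flows = rng.lognormal(mean=6.0, sigma=0.8, size=80)  # skewed positive data

transformed, lam = stats.boxcox(flows)  # returns transformed data and MLE of lambda
print("estimated shape parameter lambda:", lam)
print("skewness before:", stats.skew(flows))
print("skewness after :", stats.skew(transformed))
```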
Earthquake fragility assessment of curved and skewed bridges in Mountain West region.
DOT National Transportation Integrated Search
2016-09-01
Reinforced concrete (RC) bridges with both skew and curvature are common in areas with : complex terrains. Skewed and/or curved bridges were found in existing studies to exhibit more : complicated seismic performance than straight bridges, however th...
Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J
2018-02-20
We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion for active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate; and the completely active pattern means that the proportion of activity is exactly one and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordered activity patterns. The second part models the proportions when they are in the interval (0,1). The last part specifies the skewed continuous energy expenditure rate, using Box-Cox transformations, when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. 
The difficulties in handling the ordinal and proportional variables are addressed using a quasi-likelihood type approximation. We develop an efficient algorithm to fit the model that also involves the selection of the number of principal components. The method is applied to physical activity data and is evaluated empirically by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
Arc voltage distribution skewness as an indicator of electrode gap during vacuum arc remelting
Williamson, Rodney L.; Zanner, Frank J.; Grose, Stephen M.
1998-01-01
The electrode gap of a VAR is monitored by determining the skewness of a distribution of gap voltage measurements. A decrease in skewness indicates an increase in gap and may be used to control the gap.
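A minimal software sketch of the monitoring idea in this abstract (not the patented controller): compute the sample skewness of a window of gap-voltage measurements and compare it against a setpoint. The window values and the setpoint are hypothetical.

```python
import numpy as np
from scipy import stats

def gap_voltage_skewness(window):
    """Sample skewness of one window of arc-voltage measurements."""
    return stats.skew(np.asarray(window, dtype=float))

SKEW_SETPOINT = 0.5  # illustrative threshold, not from the patent

# Hypothetical window of gap-voltage samples (arbitrary units)
window = [23.9, 24.1, 23.8, 24.0, 31.5, 24.2, 30.8, 23.7]
s = gap_voltage_skewness(window)
print(f"window skewness = {s:.2f}; gap-widening flagged: {s < SKEW_SETPOINT}")
```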
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord [Eau Claire, WI; Cornett, Frank N [Chippewa Falls, WI
2008-10-07
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
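A software sketch of the adaptive scheme described in this abstract (the actual invention uses hardware delay elements): estimate each data lane's skew relative to the received clock and nudge a per-lane delay toward cancelling it, repeating so the delays track drifting skew. All numbers are hypothetical.

```python
def update_delays(delays, residual_skews, gain=0.5):
    """Move each lane's compensating delay toward cancelling its skew."""
    return [d + gain * s for d, s in zip(delays, residual_skews)]

true_skews = [0.3, -0.2, 0.1]  # lane-vs-clock skew in ns (hypothetical)
delays = [0.0, 0.0, 0.0]
for _ in range(20):            # iterate as new skew measurements arrive
    residual = [s - d for s, d in zip(true_skews, delays)]
    delays = update_delays(delays, residual)

print(delays)  # converges toward the true skews, cancelling them
```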
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord [Redwood Shores, CA; Cornett, Frank N [Chippewa Falls, WI
2011-10-04
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
Rethinking "normal": The role of stochasticity in the phenology of a synchronously breeding seabird.
Youngflesh, Casey; Jenouvrier, Stephanie; Hinke, Jefferson T; DuBois, Lauren; St Leger, Judy; Trivelpiece, Wayne Z; Trivelpiece, Susan G; Lynch, Heather J
2018-05-01
Phenological changes have been observed in a variety of systems over the past century. There is concern that, as a consequence, ecological interactions are becoming increasingly mismatched in time, with negative consequences for ecological function. Significant spatial heterogeneity (inter-site) and temporal variability (inter-annual) can make it difficult to separate intrinsic, extrinsic and stochastic drivers of phenological variability. The goal of this study was to understand the timing and variability in breeding phenology of Adélie penguins under fixed environmental conditions and to use those data to identify a "null model" appropriate for disentangling the sources of variation in wild populations. Data on clutch initiation were collected from both wild and captive populations of Adélie penguins. Clutch initiation in the captive population was modelled as a function of year, individual and age to better understand phenological patterns observed in the wild population. Captive populations displayed as much inter-annual variability in breeding phenology as wild populations, suggesting that variability in breeding phenology is the norm and thus may be an unreliable indicator of environmental forcing. The distribution of clutch initiation dates was found to be moderately asymmetric (right skewed) both in the wild and in captivity, consistent with the pattern expected under social facilitation. The role of stochasticity in phenological processes has heretofore been largely ignored. However, these results suggest that inter-annual variability in breeding phenology can arise independent of any environmental or demographic drivers and that synchronous breeding can enhance inherent stochasticity. This complicates efforts to relate phenological variation to environmental variability in the wild. Accordingly, we must be careful to consider random forcing in phenological processes, lest we fit models to data dominated by random noise. 
This is particularly true for colonial species where breeding synchrony may outweigh each individual's effort to time breeding with optimal environmental conditions. Our study highlights the importance of identifying appropriate null models for studying phenology. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.
New approaches to probing Minkowski functionals
NASA Astrophysics Data System (ADS)
Munshi, D.; Smidt, J.; Cooray, A.; Renzi, A.; Heavens, A.; Coles, P.
2013-10-01
We generalize the concept of the ordinary skew-spectrum to probe the effect of non-Gaussianity on the morphology of cosmic microwave background (CMB) maps in several domains: in real space (where they are commonly known as cumulant-correlators), and in harmonic and needlet bases. The essential aim is to retain more information than is normally contained in these statistics, in order to assist in determining the source of any measured non-Gaussianity, in the same spirit in which the Munshi & Heavens skew-spectra were used to identify foreground contaminants to the CMB bispectrum in Planck data. Using a perturbative series to construct the Minkowski functionals (MFs), we provide a pseudo-C_ℓ based approach in both harmonic and needlet representations to estimate these spectra in the presence of a mask and inhomogeneous noise. Assuming homogeneous noise, we present approximate expressions for the error covariance for the purpose of joint estimation of these spectra. We present specific results for four different models of primordial non-Gaussianity: the local, equilateral, orthogonal and enfolded models, as well as non-Gaussianity caused by unsubtracted point sources. Closed-form results for the next-order corrections to MFs are also obtained in terms of a quadruplet of kurt-spectra. We also use the method of modal decomposition of the bispectrum and trispectrum to reconstruct the MFs as an alternative method of reconstructing the morphological properties of CMB maps. Finally, we introduce the odd-parity skew-spectra to probe the odd-parity bispectrum and its impact on the morphology of the CMB sky. Although developed for the CMB, the generic results obtained here can be useful in other areas of cosmology.
Sanawan, Ejaz; Qureshi, Ahmad Uzair; Qureshi, Sidra Shoaib; Cheema, Khalid M; Cheema, Muhammad Arshad
2017-10-01
To determine the efficacy of ultrasound shears in laparoscopic cholecystectomy in terms of total operative time, postoperative bile leaks, gall bladder perforation rate, postoperative bleeding from the cystic artery, and collateral injury to bowel and duodenum. Comparative study. Mayo Hospital, Lahore, from June 2013 to May 2014. 150 cases (75 in each group) were randomized into two groups, i.e. a harmonic scalpel clipless group (HSG) versus a conventional laparoscopic cholecystectomy (CLC) with electrocautery group. The above stated variables were documented. The data for age, blood loss, and drain output were positively skewed, as determined using the Shapiro-Wilk test. The histograms, Q-Q plots and box plots were analyzed for all the dependent variables. Skewed quantitative continuous data were analyzed using the Mann-Whitney U-test. Operative time was significantly lower in HSG as compared to CLC. Median operative times were 30 minutes (IQR 10) versus 35 minutes (IQR 10) (p<0.001). The HSG group had a perforation rate of 5/75 (6.67%) as compared to 16/75 (21.33%) in CLC (p=0.010). Intraoperative blood loss in group A was significantly lower than in group B (p=0.001). Postoperative median pain score was 3 (IQR 2) versus 3 (IQR 3) in HSG versus CLC, respectively. All the primary outcomes showed improved results in the ultrasound shears group as compared to the conventional electrocautery group.
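The reported test sequence (a normality check, then a nonparametric group comparison) can be sketched on synthetic data; the operative-time values below are simulated, not the trial's data:

```python
import numpy as np
from scipy import stats

# Sketch: check each group for non-normality with Shapiro-Wilk, then
# compare the skewed outcomes with the Mann-Whitney U-test. The gamma
# samples are illustrative stand-ins for the two groups' operative times.
rng = np.random.default_rng(7)
hsg_time = rng.gamma(shape=9, scale=3.3, size=75)  # hypothetical HSG times (min)
clc_time = rng.gamma(shape=9, scale=3.9, size=75)  # hypothetical CLC times (min)

for name, x in [("HSG", hsg_time), ("CLC", clc_time)]:
    w, p = stats.shapiro(x)
    print(f"{name}: Shapiro-Wilk p = {p:.3f} (small p suggests non-normality)")

u, p = stats.mannwhitneyu(hsg_time, clc_time, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.4f}")
```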
Building confidence and credibility into CAD with belief decision trees
NASA Astrophysics Data System (ADS)
Affenit, Rachael N.; Barns, Erik R.; Furst, Jacob D.; Rasin, Alexander; Raicu, Daniela S.
2017-03-01
Creating classifiers for computer-aided diagnosis in the absence of ground truth is a challenging problem. Using experts' opinions as reference truth is difficult because the variability in the experts' interpretations introduces uncertainty in the labeled diagnostic data. This uncertainty translates into noise, which can significantly affect the performance of any classifier on test data. To address this problem, we propose a new label set weighting approach to combine the experts' interpretations and their variability, as well as a selective iterative classification (SIC) approach that is based on conformal prediction. Using the NIH/NCI Lung Image Database Consortium (LIDC) dataset, in which four radiologists interpreted the lung nodule characteristics, including the degree of malignancy, we illustrate the benefits of the proposed approach. Our results show that the proposed 2-label-weighted approach significantly outperforms the original 5-label and 2-label-unweighted classification approaches in accuracy, by 39.9% and 7.6%, respectively. We also found that the weighted 2-label models produce root mean square error (RMSE) distributions with skewness values higher by 1.05 and 0.61 for non-SIC and SIC, respectively. When each approach was combined with selective iterative classification, this further improved the classification accuracy of the 2-weighted-label approach by 7.5% over the original, and improved the skewness of the 5-label and 2-unweighted-label approaches by 0.22 and 0.44, respectively.
Study on compensation algorithm of head skew in hard disk drives
NASA Astrophysics Data System (ADS)
Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan
2011-10-01
In hard disk drives (HDDs), head skew among multiple heads is pre-calibrated during the manufacturing process. In real applications with high storage capacity, the head stack may become tilted due to environmental change, resulting in additional head skew errors from the outer diameter (OD) to the inner diameter (ID). When these errors are below the preset threshold for power-on recalibration, the current strategy may not detect them, and drive performance under severe environments will be degraded. In this paper, in-the-field compensation of small DC head skew variation across the stroke is proposed, in which a zone table is employed. Test results are provided demonstrating its effectiveness in reducing observer error and enhancing drive performance via accurate prediction of DC head skew.
Asymmetric skew Bessel processes and their applications to finance
NASA Astrophysics Data System (ADS)
Decamps, Marc; Goovaerts, Marc; Schoutens, Wim
2006-02-01
In this paper, we extend Harrison and Shepp's construction of the skew Brownian motion (1981) and we obtain a diffusion similar to the two-dimensional Bessel process with speed and scale densities discontinuous at one point. Natural generalizations to multi-dimensional and fractional order Bessel processes are then discussed, as well as invariance properties. We call this family of diffusions asymmetric skew Bessel processes, in opposition to skew Bessel processes as defined in Barlow et al. [On Walsh's Brownian motions, Séminaire de Probabilités XXIII, Lecture Notes in Mathematics, vol. 1372, Springer, Berlin, New York, 1989, pp. 275-293]. We present factorizations involving (asymmetric skew) Bessel processes with random time. Finally, applications to the valuation of perpetuities and Asian options are proposed.
Optical clock distribution in supercomputers using polyimide-based waveguides
NASA Astrophysics Data System (ADS)
Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.
1999-04-01
Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimized clock skew. Si-CMOS compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 for high-speed massive clock signal distribution on a Cray T-90 supercomputer board has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. Surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides surface-normally, which facilitates the integration of photodetectors to convert optical signals to electrical signals. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation. The theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.
Arc voltage distribution skewness as an indicator of electrode gap during vacuum arc remelting
Williamson, R.L.; Zanner, F.J.; Grose, S.M.
1998-01-13
The electrode gap of a VAR is monitored by determining the skewness of a distribution of gap voltage measurements. A decrease in skewness indicates an increase in gap and may be used to control the gap. 4 figs.
DOT National Transportation Integrated Search
2014-05-01
Different problems in straight skewed steel I-girder bridges are often associated with the methods used for detailing the cross-frames. Use of theoretical terms to describe these detailing methods and absence of complete and simplified design approac...
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2016-12-01
It is widely recognised that merging radar rainfall estimates (RRE) with rain gauge data can improve the RRE and provide areal and temporal coverage that rain gauges cannot offer. Many methods to merge radar and rain gauge data are based on kriging and require an assumption of Gaussianity on the variable of interest. In particular, this work looks at kriging with external drift (KED), because it is an efficient, widely used, and well performing merging method. Rainfall, especially at finer temporal scale, does not have a normal distribution and presents a bi-modal skewed distribution. In some applications a Gaussianity assumption is made, without any correction. In other cases, variables are transformed in order to obtain a distribution closer to Gaussian. This work has two objectives: 1) compare different transformation methods in merging applications; 2) evaluate the uncertainty arising when untransformed rainfall data is used in KED. The comparison of transformation methods is addressed under two points of view. On the one hand, the ability to reproduce the original probability distribution after back-transformation of merged products is evaluated with qq-plots, on the other hand the rainfall estimates are compared with an independent set of rain gauge measurements. The tested methods are 1) no transformation, 2) Box-Cox transformations with parameter equal to λ=0.5 (square root), 3) λ=0.25 (square root - square root), and 4) λ=0.1 (almost logarithmic), 5) normal quantile transformation, and 6) singularity analysis. The uncertainty associated with the use of non-transformed data in KED is evaluated in comparison with the best performing product. The methods are tested on a case study in Northern England, using hourly data from 211 tipping bucket rain gauges from the Environment Agency and radar rainfall data at 1 km/5-min resolutions from the UK Met Office. In addition, 25 independent rain gauges from the UK Met Office were used to assess the merged products.
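Two of the compared transformations can be sketched on synthetic skewed data; the kriging (KED) step itself is not reproduced here, and the gamma sample is only a stand-in for hourly rainfall:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rain = rng.gamma(shape=0.6, scale=2.0, size=500)  # skewed, rainfall-like sample

# Box-Cox power transform with fixed lambda = 0.25 (one tested setting)
bc = (rain**0.25 - 1) / 0.25

# Normal quantile transform: map empirical ranks to standard normal scores
ranks = stats.rankdata(rain)
nqt = stats.norm.ppf(ranks / (len(rain) + 1))

print("skewness raw    :", stats.skew(rain))
print("skewness Box-Cox:", stats.skew(bc))
print("skewness NQT    :", stats.skew(nqt))
```

By construction the normal quantile transform produces an (almost exactly) symmetric sample, while the fixed-λ Box-Cox only reduces the skewness.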
Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.
2014-01-01
Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers, changing the determination of the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors the low outliers. B17B and EMA estimates are identical if no historical information, censored data, or low outliers are present in the peak-flow data. EMA with the MGB test (EMA-MGB) was compared to the standard B17B method with the GB test (B17B-GB) for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. 
The B17B-GB and EMA-MGB RPD-boxplot results showed that the median RPDs across all streamgaging stations for the 10-, 1-, and 0.2-percent AEPs, computed using station skew, were approximately zero. As the AEP flow estimates decreased (that is, from 10 to 0.2 percent AEP) the variability in the RPDs increased, indicating that the AEP flow estimate was greater for EMA-MGB when compared to B17B-GB. There was only one RPD greater than 100 percent for the 10- and 1-percent AEP estimates, whereas 19 RPDs exceeded 100 percent for the 0.2-percent AEP. At streamgaging stations with low-outlier data, historical peak-flow data, or both, RPDs ranged from −84 to 262 percent for the 0.2-percent AEP flow estimate. When streamgaging stations were separated by the presence of historical peak-flow data (that is, no low outliers or censored peaks) or by low outlier peak-flow data (no historical data), the results showed that RPD variability was greatest for the 0.2-AEP flow estimates, indicating that the treatment of historical and (or) low-outlier data was different between methods and that method differences were most influential when estimating the less probable AEP flows (1, 0.5, and 0.2 percent). When regional skew information was weighted with the station skew, B17B-GB estimates were generally higher than the EMA-MGB estimates for any given AEP. This was related to the different regional skews and mean square error used in the weighting procedure for each flood frequency analysis. The B17B-GB weighted skew analysis used a more positive regional skew determined in USGS Water Supply Paper 2433 (Thomas and others, 1997), while the EMA-MGB analysis used a more negative regional skew with a lower mean square error determined from a Bayesian generalized least squares analysis. Regional groupings of streamgaging stations reflected differences in physiographic and climatic characteristics. 
Potentially influential low flows (PILFs) were more prevalent in arid regions of the State, and generally AEP flows were larger with EMA-MGB than with B17B-GB for gaging stations with PILFs. In most cases EMA-MGB curves would fit the largest floods more accurately than B17B-GB. In areas of the State with more baseflow, such as along the Mogollon Rim and the White Mountains, streamgaging stations generally had fewer PILFs and more positive skews, causing estimated AEP flows to be larger with B17B-GB than with EMA-MGB. The effect of including regional skew was similar for all regions, and the observed pattern was increasingly greater B17B-GB flows (more negative RPDs) with each decreasing AEP quantile. A variation on a goodness-of-fit test statistic was used to describe each method’s ability to fit the largest floods. The mean absolute percent difference between the measured peak flows and the log-Pearson Type 3 (LP3)-estimated flows, for each method, was averaged over the 90th, 75th, and 50th percentiles of peak-flow data at each site. In most percentile subsets, EMA-MGB on average had smaller differences (1 to 3 percent) between the observed and fitted value, suggesting that the EMA-MGB-LP3 distribution is fitting the observed peak-flow data more precisely than B17B-GB. The smallest EMA-MGB percent differences occurred for the greatest 10 percent (90th percentile) of the peak-flow data. When stations were analyzed by USGS NWIS peak flow qualification code groups, the stations with historical peak flows and no low outliers had average percent differences as high as 11 percent greater for B17B-GB, indicating that EMA-MGB utilized the historical information to fit the largest observed floods more accurately. A resampling procedure was used in which 1,000 random subsamples were drawn, each comprising one-half of the observed data. 
An LP3 distribution was fit to each subsample using B17B-GB and EMA-MGB methods, and the predicted 1-percent AEP flows were compared to those generated from distributions fit to the entire dataset. With station skew, the two methods were similar in the median percent difference, but with weighted skew EMA-MGB estimates were generally better. At two gages where B17B-GB appeared to perform better, a large number of peak flows were deemed to be PILFs by the MGB test, although they did not appear to depart significantly from the trend of the data (step or dogleg appearance). At two gages where EMA-MGB performed better, the MGB identified several PILFs that were affecting the fitted distribution of the B17B-GB method. Monte Carlo simulations were run for the LP3 distribution using different skews and with different assumptions about the expected number of historical peaks. The primary benefit of running Monte Carlo simulations is that the underlying distribution statistics are known, meaning that the true 1-percent AEP is known. The results showed that EMA-MGB performed as well or better in situations where the LP3 distribution had a zero or positive skew and historical information. When the skew for the LP3 distribution was negative, EMA-MGB performed significantly better than B17B-GB and EMA-MGB estimates were less biased by more closely estimating the true 1-percent AEP for 1, 2, and 10 historical flood scenarios.
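The comparisons above rest on the relative percent difference (RPD) between the two methods' AEP flow estimates. The excerpt does not spell out the formula, so the sketch below assumes the simple convention of percent difference relative to the B17B-GB value (a positive RPD meaning the EMA-MGB estimate is larger); the flow values are hypothetical.

```python
def rpd(ema_mgb, b17b_gb):
    """Relative percent difference of two AEP flow estimates.

    Convention assumed here (not stated in the excerpt): percent
    difference of EMA-MGB relative to B17B-GB, so a positive value
    means the EMA-MGB estimate is larger.
    """
    return 100.0 * (ema_mgb - b17b_gb) / b17b_gb

# Hypothetical 0.2-percent AEP estimates (cfs) at one streamgage:
print(rpd(5200.0, 4000.0))  # 30.0 -> EMA-MGB 30 percent higher
```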
Mean estimation in highly skewed samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, S P
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
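The under-coverage described above can be demonstrated numerically. The sketch below uses a lognormal population rather than the paper's absolute-value Cauchy (whose mean does not exist), and checks the empirical coverage of the usual normal-theory interval; the sample sizes and trial counts are illustrative.

```python
import math
import random
import statistics

def normal_theory_ci(sample, z=1.96):
    """Usual large-sample confidence interval for the mean."""
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - z * se, m + z * se

# Empirical coverage for a positively skewed (lognormal) population.
random.seed(0)
true_mean = math.exp(0.5)  # mean of a lognormal(0, 1) population
trials, hits = 2000, 0
for _ in range(trials):
    sample = [math.exp(random.gauss(0.0, 1.0)) for _ in range(30)]
    lo, hi = normal_theory_ci(sample)
    hits += lo <= true_mean <= hi
coverage = hits / trials
print(coverage < 0.95)  # True: the nominal 95% level is not achieved
```

The shortfall arises because the heavy right tail is under-sampled, exactly the failure mode the abstract describes.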
Measuring Skewness: A Forgotten Statistic?
ERIC Educational Resources Information Center
Doane, David P.; Seward, Lori E.
2011-01-01
This paper discusses common approaches to presenting the topic of skewness in the classroom, and explains why students need to know how to measure it. Two skewness statistics are examined: the Fisher-Pearson standardized third moment coefficient, and the Pearson 2 coefficient that compares the mean and median. The former is reported in statistical…
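Both statistics named above are straightforward to compute: the Fisher-Pearson coefficient standardizes the third central moment, and the Pearson 2 coefficient compares the mean and median. A minimal sketch (the data vector is made up for illustration):

```python
import statistics

def fisher_pearson_g1(x):
    """Fisher-Pearson standardized third-moment coefficient: g1 = m3 / m2**1.5."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    return m3 / m2 ** 1.5

def pearson_2(x):
    """Pearson 2 skewness coefficient: 3 * (mean - median) / standard deviation."""
    return 3 * (statistics.fmean(x) - statistics.median(x)) / statistics.stdev(x)

data = [1, 2, 2, 3, 3, 3, 4, 10]  # right-skewed sample
print(fisher_pearson_g1(data))    # positive: long right tail
print(pearson_2(data))            # positive: mean pulled above the median
```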
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1990-01-01
The variation of the velocity-derivative skewness of a Navier-Stokes flow as the Reynolds number goes toward zero is calculated numerically. The value of the skewness, which has been somewhat controversial, is shown to become small at low Reynolds numbers.
Investigation of free vibration characteristics for skew multiphase magneto-electro-elastic plate
NASA Astrophysics Data System (ADS)
Kiran, M. C.; Kattimani, S.
2018-04-01
This article presents the investigation of a skew multiphase magneto-electro-elastic (MMEE) plate to assess its free vibration characteristics. A finite element (FE) model is formulated considering the different couplings involved via coupled constitutive equations. The transformation matrices are derived to transform the local degrees of freedom into the global degrees of freedom for the nodes lying on the skew edges. The effect of different volume fractions (Vf) on the free vibration behavior is explicitly studied. In addition, the influence of the width-to-thickness ratio, the aspect ratio, and the stacking arrangement on the natural frequencies of the skew multiphase MEE plate is investigated. Particular attention has been paid to the effect of the skew angle on the non-dimensional eigenfrequencies of the multiphase MEE plate with simply supported edges.
Skew information in the XY model with staggered Dzyaloshinskii-Moriya interaction
NASA Astrophysics Data System (ADS)
Qiu, Liang; Quan, Dongxiao; Pan, Fei; Liu, Zhi
2017-06-01
We study the performance of the lower bound of skew information in the vicinity of the transition point for the anisotropic spin-1/2 XY chain with staggered Dzyaloshinskii-Moriya interaction by use of the quantum renormalization-group method. For a fixed value of the Dzyaloshinskii-Moriya interaction, there are two saturated values for the lower bound of skew information, corresponding to the spin-fluid and Néel phases, respectively. The scaling exponent of the lower bound of skew information is closely related to the correlation length of the model, and the Dzyaloshinskii-Moriya interaction shifts the factorization point. Our results show that the lower bound of skew information can be a good candidate for detecting the critical point of the XY spin chain with staggered Dzyaloshinskii-Moriya interaction.
Waldinger, Marcel D; Zwinderman, Aeilko H; Olivier, Berend; Schweitzer, Dave H
2008-02-01
The intravaginal ejaculation latency time (IELT) behaves in a skewed manner and needs appropriate statistics for correct interpretation of treatment results. To explain the appropriate use of geometric mean IELT values and the fold increase of the geometric mean IELT because of the positively skewed IELT distribution. Linking theoretical arguments to the outcome of several selective serotonin reuptake inhibitor and modern antidepressant study results. Geometric mean IELT and fold increase of the geometric mean IELT. Log-transforming each separate IELT measurement of each individual man is the basis for the calculation of the geometric mean IELT. A drug-induced positively skewed IELT distribution necessitates the calculation of the geometric mean IELTs at baseline and during drug treatment. In a positively skewed IELT distribution, the use of the "arithmetic" mean IELT risks an overestimation of the drug-induced ejaculation delay, as the mean IELT is always higher than the geometric mean IELT. Strong ejaculation-delaying drugs give rise to a strongly positively skewed IELT distribution, whereas weak ejaculation-delaying drugs give rise to (much) less skewed IELT distributions. Ejaculation delay is expressed in fold increase of the geometric mean IELT. Drug-induced ejaculatory performance discloses a positively skewed IELT distribution, requiring the use of the geometric mean IELT and the fold increase of the geometric mean IELT.
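The log-transform-then-exponentiate calculation described above can be sketched directly; the IELT values below are made up to show how a single long latency inflates the arithmetic mean but not the geometric mean.

```python
import math
import statistics

def geometric_mean(values):
    """Geometric mean: exponentiate the mean of the log-transformed measurements."""
    return math.exp(statistics.fmean(math.log(v) for v in values))

# Hypothetical IELT measurements in seconds; the long outliers create the
# positive skew described in the abstract.
baseline = [30, 45, 60, 90, 600]
on_drug = [120, 180, 240, 300, 3600]

gm_base = geometric_mean(baseline)
gm_drug = geometric_mean(on_drug)
fold_increase = gm_drug / gm_base

# Under positive skew the arithmetic mean exceeds the geometric mean,
# which is why it overestimates the drug-induced delay.
print(statistics.fmean(baseline) > gm_base)  # True
```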
X chromosome regulation: diverse patterns in development, tissues and disease
Deng, Xinxian; Berletch, Joel B.; Nguyen, Di K.; Disteche, Christine M.
2014-01-01
Genes on the mammalian X chromosome are present in one copy in males and two copies in females. The complex mechanisms that regulate the X chromosome lead to evolutionary and physiological variability in gene expression between species, the sexes, individuals, developmental stages, tissues and cell types. In early development, delayed and incomplete X chromosome inactivation (XCI) in some species causes variability in gene expression. Additional diversity stems from escape from XCI and from mosaicism or XCI skewing in females. This causes sex-specific differences that manifest as differential gene expression and associated phenotypes. Furthermore, the complexity and diversity of X dosage regulation affect the severity of diseases caused by X-linked mutations. PMID:24733023
NASA Astrophysics Data System (ADS)
Ghotbi, Abdoul R.
2014-09-01
The seismic behavior of skewed bridges has not been well studied compared to straight bridges. Skewed bridges have shown extensive damage, especially due to deck rotation, shear-key failure, abutment unseating and column-bent drift. This research, therefore, aims to study the behavior of skewed and straight highway overpass bridges both with and without taking into account the effects of Soil-Structure Interaction (SSI) due to near-fault ground motions. Due to several sources of uncertainty associated with the ground motions, soil and structure, a probabilistic approach is needed. Thus, a probabilistic methodology similar to the one developed by the Pacific Earthquake Engineering Research Center (PEER) has been utilized to assess the probability of damage due to various levels of shaking using appropriate intensity measures with minimum dispersions. The probabilistic analyses were performed for various bridge configurations and site conditions, including sand ranging from loose to dense and clay ranging from soft to stiff, in order to evaluate the effects. The results showed that skewed bridges are considerably susceptible to deck rotation and shear-key displacement. It was also found that including SSI reduced the damage probability for various demands compared to the fixed-base model. However, deck rotation for all types of the soil and also abutment unseating for very loose sand and soft clay showed an increase in damage probability compared to the fixed-base model. The damage probability for various demands was also found to decrease with increasing soil strength for both sandy and clayey sites. With respect to variations in the skew angle, increasing the skew angle amplified the seismic response for various demands. Deck rotation was particularly sensitive, increasing markedly with the skew angle.
Furthermore, abutment unseating showed an increasing trend due to an increase in skew angle for both fixed-base and SSI models.
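PEER-style assessments of the kind described above commonly summarize damage probability as a lognormal fragility curve conditioned on an intensity measure. The sketch below shows only that standard functional form; the parameter values are illustrative and not taken from this study.

```python
import math
from statistics import NormalDist

def fragility(im, theta, beta):
    """Lognormal fragility curve: probability of reaching a damage state given
    intensity measure im, median capacity theta, and dispersion beta."""
    return NormalDist().cdf(math.log(im / theta) / beta)

# Illustrative parameters only (e.g., for a deck-rotation damage state):
# theta is the median capacity in the intensity measure's units.
print(fragility(im=0.6, theta=0.6, beta=0.5))  # 0.5 exactly at the median capacity
```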
Hydrodynamic impeller stiffness, damping, and inertia in the rotordynamics of centrifugal flow pumps
NASA Technical Reports Server (NTRS)
Jery, S.; Acosta, A. J.; Brennen, C. E.; Caughey, T. K.
1984-01-01
The lateral hydrodynamic forces experienced by a centrifugal pump impeller performing circular whirl motions within several volute geometries were measured. The lateral forces were decomposed into: (1) time averaged lateral forces and (2) hydrodynamic force matrices representing the variation of the lateral forces with position of the impeller center. It is found that these force matrices essentially consist of equal diagonal terms and skew symmetric off diagonal terms. One consequence of this is that during its whirl motion the impeller experiences forces acting normal and tangential to the locus of whirl. Data on these normal and tangential forces are presented; it is shown that there exists a region of positive reduced whirl frequencies, within which the hydrodynamic forces can be destabilizing with respect to whirl.
Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness
Samonds, Jason M.; Potetz, Brian R.; Lee, Tai Sing
2014-01-01
We propose using the statistical measurement of the sample skewness of the distribution of mean firing rates of a tuning curve to quantify sharpness of tuning. For some features, like binocular disparity, tuning curves are best described by relatively complex and sometimes diverse functions, making it difficult to quantify sharpness with a single function and parameter. Skewness provides a robust nonparametric measure of tuning curve sharpness that is invariant with respect to the mean and variance of the tuning curve and is straightforward to apply to a wide range of tuning, including simple orientation tuning curves and complex object tuning curves that often cannot even be described parametrically. Because skewness does not depend on a specific model or function of tuning, it is especially appealing to cases of sharpening where recurrent interactions among neurons produce sharper tuning curves that deviate in a complex manner from the feedforward function of tuning. Since tuning curves for all neurons are not typically well described by a single parametric function, this model independence additionally allows skewness to be applied to all recorded neurons, maximizing the statistical power of a set of data. We also compare skewness with other nonparametric measures of tuning curve sharpness and selectivity. Among the nonparametric measures tested, skewness is best used for capturing the sharpness of multimodal tuning curves defined by narrow peaks (maxima) and broad valleys (minima). Finally, we provide a more formal definition of sharpness using a shape-based information gain measure and show that skewness is correlated with this definition. PMID:24555451
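The proposed measure applies directly to a vector of mean firing rates. A minimal sketch with two made-up tuning curves (spikes/s), where the sharper curve, with a narrow peak and broad valley, yields the larger skewness:

```python
def sample_skewness(rates):
    """Sample skewness of the distribution of mean firing rates of a tuning curve."""
    n = len(rates)
    mean = sum(rates) / n
    m2 = sum((r - mean) ** 2 for r in rates) / n
    m3 = sum((r - mean) ** 3 for r in rates) / n
    return m3 / m2 ** 1.5

# Hypothetical tuning curves sampled at 8 stimulus conditions (spikes/s):
broad = [10, 12, 15, 20, 22, 20, 15, 12]  # shallow modulation
sharp = [2, 2, 3, 5, 40, 5, 3, 2]         # narrow peak, broad valley
print(sample_skewness(broad) < sample_skewness(sharp))  # True
```

Note that the measure needs no parametric model of the curve, which is the point the abstract emphasizes.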
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.
X Chromosome Inactivation in Women with Alcoholism
Manzardo, Ann M.; Henkhaus, Rebecca; Hidaka, Brandon; Penick, Elizabeth C.; Poje, Albert B.; Butler, Merlin G.
2012-01-01
Background All female mammals with two X chromosomes balance gene expression with males having only one X by inactivating one of their Xs (X chromosome inactivation, XCI). Analysis of XCI in females offers the opportunity to investigate both X-linked genetic factors and early embryonic development that may contribute to alcoholism. Increases in the prevalence of skewing of XCI in women with alcoholism could implicate biological risk factors. Methods The pattern of XCI was examined in DNA isolated in blood from 44 adult females meeting DSM IV criteria for an Alcohol Use Disorder, and 45 control females with no known history of alcohol abuse or dependence. XCI status was determined by analyzing digested and undigested polymerase chain reaction (PCR) products of the polymorphic androgen receptor (AR) gene located on the X chromosome. Subjects were categorized into 3 groups based upon the degree of XCI skewness: random (50:50–64:36), moderately skewed (65:35–80:20) and highly skewed (>80:20). Results XCI status from informative females with alcoholism was found to be random in 59% (n=26), moderately skewed in 27% (n=12) or highly skewed in 14% (n=6). Control subjects showed 60%, 29% and 11%, respectively. The distribution of skewed XCI observed among women with alcoholism did not differ statistically from that of control subjects (χ2 =0.14, 2 df, p=0.93). Conclusions Our data did not support an increase in XCI skewness among women with alcoholism or implicate early developmental events associated with embryonic cell loss or unequal (non-random) expression of X-linked gene(s) or defects in alcoholism among females. PMID:22375556
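The three-way categorization of XCI skewness used in this study can be written directly from the reported cut-offs; treating the 64:36 and 80:20 boundaries as inclusive is an assumption here, since the abstract reports only integer ratio ranges.

```python
def classify_xci(percent_major_allele):
    """Categorize XCI skewness by the percent of cells inactivating the same X.
    Cut-offs follow the study: random (50:50-64:36), moderately skewed
    (65:35-80:20), highly skewed (>80:20)."""
    major = max(percent_major_allele, 100 - percent_major_allele)
    if major <= 64:
        return "random"
    if major <= 80:
        return "moderately skewed"
    return "highly skewed"

print(classify_xci(58))  # random
print(classify_xci(72))  # moderately skewed
print(classify_xci(91))  # highly skewed
```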
Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia
Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.
2009-01-01
Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations with the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered to be the one with the smallest mean square error (MSE). MSE is defined as the mean of the squared differences between each individual logarithm (base 10) of peak flow and the mean of all individual logarithms of peak flow. Contouring of station skews was the best method for determining generalized skew for West Virginia, with an MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
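Taking the abstract's description of MSE at face value, it is the mean squared deviation of the log10 peak flows from their own mean, which can be sketched as follows (the peak values are hypothetical):

```python
import math

def mse_of_log_peaks(peaks):
    """MSE as described in the abstract: the mean of the squared differences
    between each log10 peak flow and the mean of all log10 peak flows."""
    logs = [math.log10(q) for q in peaks]
    mean_log = sum(logs) / len(logs)
    return sum((v - mean_log) ** 2 for v in logs) / len(logs)

# Hypothetical annual peak flows (cfs) spanning three orders of magnitude:
print(mse_of_log_peaks([100, 1000, 10000]))  # 0.666...
```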
Kannurpatti, Sridhar S.; Motes, Michael A.; Rypma, Bart; Biswal, Bharat B.
2012-01-01
In this report we demonstrate a hemodynamic scaling method with resting-state fluctuation of amplitude (RSFA) in healthy adult younger and older subject groups. We show that RSFA correlated with breath hold (BH) responses throughout the brain in groups of younger and older subjects, that RSFA and BH performed comparably in accounting for age-related hemodynamic coupling changes, and yielded more veridical estimates of age-related differences in task-related neural activity. BOLD data from younger and older adults performing motor and cognitive tasks were scaled using RSFA and BH related signal changes. Scaling with RSFA and BH reduced the skew of the BOLD response amplitude distribution in each subject and reduced mean BOLD amplitude and variability in both age groups. Statistically significant differences in intra-subject amplitude variation across regions of activated cortex, and inter-subject amplitude variation in regions of activated cortex were observed between younger and older subject groups. Intra- and inter-subject variability differences were mitigated after scaling. RSFA, though similar to BH in minimizing skew in the un-scaled BOLD amplitude distribution, attenuated the neural activity related BOLD amplitude significantly less than BH. The amplitude and spatial extent of group activation were lower in the older than in the younger group prior to and after scaling. After accounting for vascular variability differences through scaling, age-related decreases in activation volume were observed during the motor and cognitive tasks. The results suggest that RSFA-scaled data yield age-related neural activity differences during task performance with negligible effects from non-neural (i.e., vascular) sources. PMID:20665721
NASA Astrophysics Data System (ADS)
Timmons, Nicholas; Cooray, Asantha; Feng, Chang; Keating, Brian
2017-11-01
We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev-Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model fit the SZ power spectrum and CMB lensing-SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature including a direct estimate based on the stacking of SZ clusters in Planck.
Opposite GC skews at the 5' and 3' ends of genes in unicellular fungi
2011-01-01
Background GC-skews have previously been linked to transcription in some eukaryotes. They have been associated with transcription start sites, with the coding strand G-biased in mammals and C-biased in fungi and invertebrates. Results We show a consistent and highly significant pattern of GC-skew within genes of almost all unicellular fungi. The pattern of GC-skew is asymmetrical: the coding strand of genes is typically C-biased at the 5' ends but G-biased at the 3' ends, with intermediate skews at the middle of genes. Thus, the initiation, elongation, and termination phases of transcription are associated with different skews. This pattern influences the encoded proteins by generating differential usage of amino acids at the 5' and 3' ends of genes. These biases also affect fourfold-degenerate positions and extend into promoters and 3' UTRs, indicating that the skews cannot be accounted for by selection for protein function or translation. Conclusions We propose two explanations, the mutational pressure hypothesis and the adaptive hypothesis. The mutational pressure hypothesis is that different co-factors bind to RNA pol II at different phases of transcription, producing different mutational regimes. The adaptive hypothesis is that cytidine triphosphate deficiency may lead to C-avoidance at the 3' ends of transcripts to control the flow of RNA pol II molecules and reduce their frequency of collisions. PMID:22208287
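The abstract does not spell out its skew formula; the sketch below uses the common definition (G − C) / (G + C), under which the reported fungal pattern gives a negative skew at the 5' end and a positive skew at the 3' end. The sequences are toy examples, not real fungal genes.

```python
def gc_skew(seq):
    """GC skew of a strand: (G - C) / (G + C); negative means C-biased."""
    g = seq.count("G")
    c = seq.count("C")
    return (g - c) / (g + c) if (g + c) else 0.0

# Toy coding-strand sequence mimicking the reported pattern:
# a C-rich 5' half followed by a G-rich 3' half.
gene = "ATGCCCACCCTT" + "AAGGGTGGGTAA"
five_prime, three_prime = gene[:12], gene[12:]
print(gc_skew(five_prime))   # negative: C-biased 5' end
print(gc_skew(three_prime))  # positive: G-biased 3' end
```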
Effect of skew angle on second harmonic guided wave measurement in composite plates
NASA Astrophysics Data System (ADS)
Cho, Hwanjeong; Choi, Sungho; Lissenden, Cliff J.
2017-02-01
Waves propagating in anisotropic media are subject to skewing effects due to the media having directional wave speed dependence, which is characterized by slowness curves. Likewise, the generation of second harmonics is sensitive to micro-scale damage that is generally not detectable from linear features of ultrasonic waves. Here, the effect of skew angle on second harmonic guided wave measurement in a transversely isotropic lamina and a quasi-isotropic laminate are numerically studied. The strain energy density function for a nonlinear transversely isotropic material is formulated in terms of the Green-Lagrange strain invariants. The guided wave mode pairs for cumulative second harmonic generation in the plate are selected in accordance with the internal resonance criteria - i.e., phase matching and non-zero power flux. Moreover, the skew angle dispersion curves for the mode pairs are obtained from the semi-analytical finite element method using the derivative of the slowness curve. The skew angles of the primary and secondary wave modes are calculated and wave propagation simulations are carried out using COMSOL. Numerical simulations revealed that the effect of skew angle mismatch can be significant for second harmonic generation in anisotropic media. The importance of skew angle matching on cumulative second harmonic generation is emphasized and the accompanying issue of the selection of internally resonant mode pairs for both a unidirectional transversely isotropic lamina and a quasi-isotropic laminate is demonstrated.
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
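The marginal-matching step with a ZMNL function can be sketched with the standard inverse-CDF construction (the paper's full method additionally controls the spectrum, which this transform alone distorts): map each Gaussian sample through the Gaussian CDF and then through the inverse CDF of the target distribution. The exponential target below is an illustrative choice, not one from the paper.

```python
import math
import random

def zmnl(gaussian_samples, target_inverse_cdf):
    """Zero-memory nonlinearity: Gaussian CDF followed by the target's inverse CDF."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return [target_inverse_cdf(phi(x)) for x in gaussian_samples]

# Shape standard-normal samples into an Exponential(1) marginal, a positively
# skewed target; the lambda below is its analytic quantile function.
random.seed(1)
g = [random.gauss(0.0, 1.0) for _ in range(20000)]
y = zmnl(g, lambda u: -math.log(1.0 - u))
print(min(y) >= 0.0)  # True: the transformed samples follow a nonnegative marginal
```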
Discovery of the Red-Skewed K-alpha Iron Line in Cyg X-2 with Suzaku
NASA Technical Reports Server (NTRS)
Shaposhnikov, Nikolai; Titarchuk, Lev; Laurent, Philippe
2008-01-01
We report on the Suzaku observation of the neutron star low-mass X-ray binary Cygnus X-2, which reveals a strong iron K-alpha emission line. The line profile shows a prominent red wing extending down to 4 keV. This discovery increases the number of neutron star sources where red-skewed iron lines were observed and strongly suggests that this phenomenon is common not only in black holes but also in other types of compact objects. We examine the line profile by fitting it with the model which attributes its production to the relativistic effects due to disk reflection of X-ray radiation. We also apply an alternative model where the red wing is a result of a down-scattering effect of the first order with respect to electron velocity in the wind outflow. Both models describe the observed line profile adequately. However, the X-ray variability in a state similar to that in the Suzaku observation, which we establish by analyzing an RXTE observation, favors the wind origin of the line formation.
Itoh, Yuya; Itoh, Akihiro; Kawashima, Hiroki; Ohno, Eizaburo; Nakamura, Yosuke; Hiramatsu, Takeshi; Sugimoto, Hiroyuki; Sumi, Hajime; Hayashi, Daijuro; Kuwahara, Takamichi; Morishima, Tomomasa; Funasaka, Kohei; Nakamura, Masanao; Miyahara, Ryoji; Ohmiya, Naoki; Katano, Yoshiaki; Ishigami, Masatoshi; Goto, Hidemi; Hirooka, Yoshiki
2014-07-01
An accurate diagnosis of pancreatic fibrosis is clinically important and may have potential for staging chronic pancreatitis. The aim of this study was to diagnose the grade of pancreatic fibrosis through a quantitative analysis of endoscopic ultrasound elastography (EUS-EG). From September 2004 to October 2010, 58 consecutive patients examined by EUS-EG for both pancreatic tumors and their upstream pancreas before pancreatectomy were enrolled. Preoperative EUS-EG images in the upstream pancreas were statistically quantified, and the results were retrospectively compared with postoperative histological fibrosis in the same area. For the quantification of EUS-EG images, 4 parameters (mean, standard deviation, skewness, and kurtosis) were calculated using novel software. Histological fibrosis was graded into 4 categories (normal, mild fibrosis, marked fibrosis, and severe fibrosis) according to a previously reported scoring system. The fibrosis grade in the upstream pancreas was normal in 24 patients, mild fibrosis in 19, marked fibrosis in 6, and severe fibrosis in 9. Fibrosis grade was significantly correlated with all 4 quantification parameters (mean r = -0.75, standard deviation r = -0.54, skewness r = 0.69, kurtosis r = 0.67). According to the receiver operating characteristic (ROC) analysis, the mean was the most useful parameter for diagnosing pancreatic fibrosis. Using the mean, the areas under the ROC curves for the diagnosis of mild or higher-grade fibrosis, marked or higher-grade fibrosis, and severe fibrosis were 0.90, 0.90, and 0.90, respectively. An accurate diagnosis of pancreatic fibrosis may be possible by analyzing EUS-EG images.
Anomalous Hall effect in semiconductor quantum wells in proximity to chiral p -wave superconductors
NASA Astrophysics Data System (ADS)
Yang, F.; Yu, T.; Wu, M. W.
2018-05-01
By using the gauge-invariant optical Bloch equation, we perform a microscopic kinetic investigation on the anomalous Hall effect in chiral p -wave superconducting states. Specifically, the intrinsic anomalous Hall conductivity in the absence of the magnetic field is zero as a consequence of Galilean invariance in our description. As for the extrinsic channel, a finite anomalous Hall current is obtained from the impurity scattering with the optically excited normal quasiparticle current even at zero temperature. From our kinetic description, it can be clearly seen that the excited normal quasiparticle current is due to an induced center-of-mass momentum of Cooper pairs through the acceleration driven by ac electric field. For the induced anomalous Hall current, we show that the conventional skew-scattering channel in the linear response makes the dominant contribution in the strong impurity interaction. In this case, our kinetic description as a supplementary viewpoint mostly confirms the results of Kubo formalism in the literature. Nevertheless, in the weak impurity interaction, this skew-scattering channel becomes marginal and we reveal that an induction channel from the Born contribution dominates the anomalous Hall current. This channel, which has long been overlooked in the literature, is due to the particle-hole asymmetry by nonlinear optical excitation. Finally, we study the case in the chiral p -wave superconducting state with a transverse conical magnetization, which breaks the Galilean invariance. In this situation, the intrinsic anomalous Hall conductivity is no longer zero. Comparison of this intrinsic channel with the extrinsic one from impurity scattering is addressed.
NASA Astrophysics Data System (ADS)
Samanta, B.; Al-Balushi, K. R.
2003-03-01
A procedure is presented for fault diagnosis of rolling element bearings through artificial neural network (ANN). The characteristic features of time-domain vibration signals of the rotating machinery with normal and defective bearings have been used as inputs to the ANN consisting of input, hidden and output layers. The features are obtained from direct processing of the signal segments using very simple preprocessing. The input layer consists of five nodes, one each for root mean square, variance, skewness, kurtosis and normalised sixth central moment of the time-domain vibration signals. The inputs are normalised in the range of 0.0 and 1.0 except for the skewness which is normalised between -1.0 and 1.0. The output layer consists of two binary nodes indicating the status of the machine—normal or defective bearings. Two hidden layers with different number of neurons have been used. The ANN is trained using backpropagation algorithm with a subset of the experimental data for known machine conditions. The ANN is tested using the remaining set of data. The effects of some preprocessing techniques like high-pass, band-pass filtration, envelope detection (demodulation) and wavelet transform of the vibration signals, prior to feature extraction, are also studied. The results show the effectiveness of the ANN in diagnosis of the machine condition. The proposed procedure requires only a few features extracted from the measured vibration data either directly or with simple preprocessing. The reduced number of inputs leads to faster training requiring far less iterations making the procedure suitable for on-line condition monitoring and diagnostics of machines.
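The five input features named above can be computed directly from a signal segment. A minimal sketch (the segment values are made up; the paper's additional min-max scaling of the inputs to [0, 1], and of skewness to [-1, 1], is omitted):

```python
import math

def timedomain_features(signal):
    """Five ANN input features: RMS, variance, skewness, kurtosis, and the
    normalised sixth central moment of a vibration signal segment."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(v * v for v in signal) / n)
    moment = lambda k: sum((v - mean) ** k for v in signal) / n
    sd = math.sqrt(moment(2))
    return {
        "rms": rms,
        "variance": moment(2),
        "skewness": moment(3) / sd ** 3,
        "kurtosis": moment(4) / sd ** 4,
        "sixth_moment": moment(6) / sd ** 6,
    }

# Hypothetical short vibration segment (accelerometer units):
features = timedomain_features([0.1, -0.2, 0.4, -0.1, 0.3, -0.5, 0.2, 0.0])
print(sorted(features))
```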
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC method also performs well. However, the Wan et al. method is best for estimating the standard deviation under the normal distribution. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics, such as the posterior mean and 95% credible interval, when Bayesian analysis has been employed.
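As a concrete point of comparison, the Wan et al. (2014) closed-form rules referenced above estimate the mean and standard deviation from the minimum, median, maximum, and sample size (the min-median-max scenario); the sample values below are hypothetical.

```python
from statistics import NormalDist

def estimate_mean_sd(a, m, b, n):
    """Wan et al. (2014) closed-form estimates from the minimum (a),
    median (m), maximum (b), and sample size (n) of a study:
    mean ~ (a + 2m + b) / 4, and the range divided by an expected
    normal range factor for the standard deviation."""
    mean = (a + 2 * m + b) / 4.0
    xi = 2.0 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd = (b - a) / xi
    return mean, sd

# Hypothetical study reporting min=10, median=25, max=70 with n=50:
mean, sd = estimate_mean_sd(a=10.0, m=25.0, b=70.0, n=50)
print(mean)  # 32.5
```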
X chromosome inactivation in women with alcoholism.
Manzardo, Ann M; Henkhaus, Rebecca; Hidaka, Brandon; Penick, Elizabeth C; Poje, Albert B; Butler, Merlin G
2012-08-01
All female mammals with 2 X chromosomes balance gene expression with males, who have only 1 X, by inactivating one of their X chromosomes (X chromosome inactivation [XCI]). Analysis of XCI in females offers the opportunity to investigate both X-linked genetic factors and early embryonic development that may contribute to alcoholism. An increased prevalence of skewed XCI in women with alcoholism could implicate biological risk factors. The pattern of XCI was examined in DNA isolated from the blood of 44 adult women meeting DSM-IV criteria for an alcohol use disorder and 45 control women with no known history of alcohol abuse or dependence. XCI status was determined by analyzing digested and undigested polymerase chain reaction (PCR) products of the polymorphic androgen receptor (AR) gene located on the X chromosome. Subjects were categorized into 3 groups based upon the degree of XCI skewness: random (50:50 to 64:36%), moderately skewed (65:35 to 80:20%), and highly skewed (>80:20%). XCI status in informative women with alcoholism was found to be random in 59% (n = 26), moderately skewed in 27% (n = 12), or highly skewed in 14% (n = 6). Control subjects showed 60, 29, and 11%, respectively. The distribution of skewed XCI observed among women with alcoholism did not differ statistically from that of control subjects (χ² test = 0.14, 2 df, p = 0.93). Our data did not support an increase in XCI skewness among women with alcoholism, and did not implicate early developmental events associated with embryonic cell loss, unequal (nonrandom) expression of X-linked gene(s), or X-linked gene defects in alcoholism among women. Copyright © 2012 by the Research Society on Alcoholism.
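The categorization and the reported chi-square comparison can be reproduced from the counts quoted in the abstract. A small stdlib-only sketch (function names are illustrative; the closed-form p-value uses the chi-square survival function, which is exp(-x/2) only for 2 degrees of freedom):

```python
import math

def classify_xci(pct_major):
    """Bin an XCI pattern by the percentage of the predominant allele,
    using the cut-offs quoted in the abstract."""
    if pct_major <= 64:
        return "random"              # 50:50 to 64:36
    if pct_major <= 80:
        return "moderately skewed"   # 65:35 to 80:20
    return "highly skewed"           # >80:20

def chi2_stat(table):
    """Pearson chi-square statistic and degrees of freedom for an r x c table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(len(row)) for j in range(len(col)))
    return stat, (len(row) - 1) * (len(col) - 1)

# Counts from the abstract: alcoholism (n = 44) vs controls (n = 45),
# in the order random / moderately skewed / highly skewed.
table = [[26, 12, 6],
         [27, 13, 5]]
stat, df = chi2_stat(table)
p = math.exp(-stat / 2)   # survival function of chi-square, valid for df = 2
# stat ≈ 0.14, df = 2, p ≈ 0.93, matching the reported result
```

Running this recovers the abstract's χ² = 0.14 and p = 0.93, confirming the null result.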
Szelinger, Szabolcs; Malenica, Ivana; Corneveaux, Jason J.; Siniard, Ashley L.; Kurdoglu, Ahmet A.; Ramsey, Keri M.; Schrauwen, Isabelle; Trent, Jeffrey M.; Narayanan, Vinodh; Huentelman, Matthew J.; Craig, David W.
2014-01-01
In females, X chromosome inactivation (XCI) is an epigenetic, gene dosage compensatory mechanism that silences one copy of X in each cell. Random XCI of one of the parental chromosomes results in an approximately equal proportion of cells expressing alleles from either the maternally or paternally inherited active X, and is described by the XCI ratio. A skewed XCI ratio is suggestive of non-random inactivation, which can play an important role in X-linked genetic conditions. Current methods rely on an indirect, semi-quantitative DNA methylation-based assay to estimate the XCI ratio. Here we report a direct approach to estimate the XCI ratio by integrated, family-trio based whole-exome and mRNA sequencing, using phase-by-transmission of alleles coupled with allele-specific expression analysis. We applied this method to in silico data and to a clinical patient with mild cognitive impairment but no clear diagnosis or understanding of the molecular mechanism underlying the phenotype. Simulation showed that phased and unphased heterozygous allele expression can be used to estimate the XCI ratio. Segregation analysis of the patient's exome uncovered a de novo, interstitial, 1.7 Mb deletion on Xp22.31 that originated on the paternally inherited X and had previously been associated with a heterogeneous, neurological phenotype. Phased, allelic expression data suggested an 83:20, moderately skewed XCI that favored the expression of the maternally inherited, cytogenetically normal X, suggesting that the deleterious effect of the de novo event on the paternal copy may be offset by skewed XCI favoring expression of the wild-type X. This study shows the utility of an integrated sequencing approach in XCI ratio estimation. PMID:25503791
When is category specific in Alzheimer's disease?
Laws, Keith R; Gale, Tim M; Leeson, Verity C; Crawford, John R
2005-08-01
Mixed findings have emerged concerning whether category-specific disorders occur in Alzheimer's disease. Factors that may contribute to these inconsistencies include: ceiling effects/skewed distributions for control data in some studies; differences in the severity of cognitive deficit in patients; and differences in the type of analysis (in particular, if and how controls are used to analyse single-case data). We examined picture naming in Alzheimer's patients and matched healthy elderly controls in three experiments. These experiments used stimuli that did and did not produce ceiling effects/skewed data in controls. In Experiment 1, we tested for category effects in individual DAT patients using commonly used analyses for single cases (χ² and z-scores). The different techniques produced quite different outcomes. In Experiment 2a, we applied the same techniques to a different group of patients, with similar outcomes. Finally, in Experiment 2b, we examined the same patients but (a) used stimuli that did not produce ceiling effects/skewed distributions in healthy controls, and (b) used statistical methods that did not treat the control sample as a population. We found that ceiling effects in controls may markedly inflate the incidence of dissociations in which living things are differentially impaired and seriously underestimate dissociations in the opposite direction. In addition, methods that treat the control sample as a population led to inflation in the overall number of dissociations detected. These findings have implications for the reliability of category effects previously reported both in Alzheimer patients and in other pathologies. In particular, they suggest that the greater proportion of living than nonliving deficits reported in the literature may be an artifact of the methods used.
Viggiano, Emanuela; Picillo, Esther; Ergoli, Manuela; Cirillo, Alessandra; Del Gaudio, Stefania; Politano, Luisa
2017-04-01
Becker muscular dystrophy (BMD) is an X-linked recessive disorder affecting approximately 1 in 18,000 male births. Female carriers are usually asymptomatic, although 2.5-18% may present muscle or heart symptoms. In the present study, the role of X chromosome inactivation (XCI) in the onset of symptoms in BMD carriers was analysed and compared with the pattern observed in Duchenne muscular dystrophy (DMD) carriers. XCI was determined on the lymphocytes of 36 BMD carriers (both symptomatic and asymptomatic) from 11 families seen for genetic counselling at Cardiomyology and Medical Genetics, Second University of Naples, using the AR methylation-based assay. Carriers were subdivided into two groups, according to age above or below 50 years. Seven females from the same families known to be noncarriers were used as controls. A Student's t-test for unpaired data was performed to evaluate the differences in XCI values between asymptomatic and symptomatic carriers, and between carriers aged above or below 50 years. A Pearson correlation test was used to evaluate the inheritance of the XCI pattern in 19 mother-daughter pairs. The results showed that symptomatic BMD carriers had skewed XCI with preferential inactivation of the X chromosome carrying the normal allele, whereas asymptomatic carriers and controls showed random XCI. No concordance in the XCI pattern was observed between mothers and their daughters. The data obtained in the present study suggest that the onset of symptoms in BMD carriers is related to skewed XCI, as observed in DMD carriers, and show no concordance in the inheritance of the XCI pattern. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Shen, Dehua; Liu, Lanbiao; Zhang, Yongjie
2018-01-01
The constantly increasing use of social media as an alternative information channel, e.g., Twitter, provides a unique opportunity to investigate the dynamics of the financial market. In this paper, we employ the daily happiness sentiment extracted from Twitter as a proxy for online sentiment dynamics and investigate its association with the skewness of returns for 26 international stock market indices. The empirical results show that: (1) dividing the days into quintiles from the least to the most happy, the skewness of the Most-happiness subgroup is significantly larger than that of the Least-happiness subgroup, and there are significant differences between every pair of subgroups; (2) using an event-study methodology, we further show that the skewness around the highest happiness days is significantly larger than the skewness around the lowest happiness days.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timmons, Nicholas; Cooray, Asantha; Feng, Chang
2017-11-01
We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev–Zel'dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model fit the SZ power spectrum and CMB lensing–SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature, including a direct estimate based on the stacking of SZ clusters in Planck.
Fast frequency domain method to detect skew in a document image
NASA Astrophysics Data System (ADS)
Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee
2015-12-01
In this paper, a new fast frequency-domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is presented for determining the skew angle of a document image. First, the image size is reduced using the two-dimensional Discrete Wavelet Transform, and the skew angle is then computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was evaluated on a large number of documents with skew between -90° and +90°, and the results were compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method is found to be more efficient than the existing methods. It also works with typed and picture documents of different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
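The general idea (shrink the image, then read the text-line orientation off the dominant FFT peak) can be sketched with NumPy. This is an illustration of the approach, not the authors' algorithm: a single 2×2 averaging step stands in for the DWT size reduction, and the image dimensions are assumed even:

```python
import numpy as np

def estimate_skew(img):
    """Skew-angle sketch: one Haar-style 2x2 averaging level (stand-in for
    the DWT reduction), then the angle of the dominant 2-D FFT peak.
    Returns degrees, folded into [-90, 90)."""
    small = (img[::2, ::2] + img[1::2, ::2]
             + img[::2, 1::2] + img[1::2, 1::2]) / 4.0
    spec = np.abs(np.fft.fftshift(np.fft.fft2(small - small.mean())))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    spec[cy, cx] = 0.0                            # suppress any residual DC term
    ky, kx = np.unravel_index(np.argmax(spec), spec.shape)
    angle = np.degrees(np.arctan2(kx - cx, ky - cy))
    return (angle + 90.0) % 180.0 - 90.0          # fold into [-90, 90)

# Synthetic "page" of horizontal text lines (zero skew): the dominant
# spatial frequency is purely vertical, so the estimated skew is 0 degrees.
page = np.zeros((64, 64))
page[::8, :] = 1.0
```

A real implementation would binarize the page first and search the spectrum only within the expected angular range, but the peak-angle principle is the same.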
NASA Astrophysics Data System (ADS)
Cordle, Michael; Rea, Chris; Jury, Jason; Rausch, Tim; Hardie, Cal; Gage, Edward; Victora, R. H.
2018-05-01
This study aims to investigate the impact that factors such as skew, radius, and transition curvature have on areal density capability in heat-assisted magnetic recording hard disk drives. We explore a "ballistic seek" approach for capturing in-situ scan line images of the magnetization footprint on the recording media, and extract parametric results of recording characteristics such as transition curvature. We take full advantage of the significantly improved cycle time to apply a statistical treatment to relatively large samples of experimental curvature data to evaluate measurement capability. Quantitative analysis of factors that impact transition curvature reveals an asymmetry in the curvature profile that is strongly correlated to skew angle. Another less obvious skew-related effect is an overall decrease in curvature as skew angle increases. Using conventional perpendicular magnetic recording as the reference case, we characterize areal density capability as a function of recording position.
Experimental investigation of the noise emission of axial fans under distorted inflow conditions
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Renz, Andreas; Becher, Marcus; Becker, Stefan
2016-11-01
An experimental investigation of the noise emission of axial fans under distorted inflow conditions was conducted. Three fans with forward-skewed blades and three fans with backward-skewed blades, sharing a common operating point, were designed with a 2D blade-element method. Two approaches were adopted to modify the inflow conditions: first, the inflow turbulence intensity was increased by two different rectangular grids, and second, the inflow velocity profile was made asymmetric by two grids with a distinct bar stacking. An increase in inflow turbulence intensity affects both tonal and broadband noise, whereas a non-uniform velocity profile at the inlet influences mainly the tonal components. The magnitude of this effect is not the same for all fans but depends on the blade skew: the impact is greater for the forward-skewed fans than for the backward-skewed ones and is thus directly linked to the fan blade geometry.
Higher incidence of small Y chromosome in humans with trisomy 21 (Down syndrome).
Verma, R S; Huq, A; Madahar, C; Qazi, Q; Dosik, H
1982-09-01
The length of the Y chromosome was measured in 42 black patients with trisomy 21 (47,XY,+21) and a similar number of normal individuals of American black ancestry. The length of the Y was expressed as a function of the Y/F ratio and arbitrarily classified into five groups using subjectively defined criteria as follows: very small, small, average, large, and very large. Thirty-eight percent of the trisomy 21 patients had small or very small Ys compared to 2.38% of the controls (P < 0.01). In both populations the size of the Y was not normally distributed: in the controls it was skewed to the left, whereas in the Down syndrome patients the distribution was flat (platykurtic). A significantly higher incidence of Y length heteromorphisms was noted in the Down syndrome group as compared to the normal black population. In light of our current understanding that about one-third of all trisomy 21 cases are due to paternal nondisjunction, it may be tempting to speculate that males with a small Y are at increased risk for nondisjunction of chromosome 21.
Comparison of tricuspid and bicuspid aortic valve hemodynamics under steady flow conditions
NASA Astrophysics Data System (ADS)
Seaman, Clara; Ward, James; Sucosky, Philippe
2011-11-01
The bicuspid aortic valve (BAV), a congenital valvular defect consisting of two leaflets instead of three, is associated with a high prevalence of calcific aortic valve disease (CAVD). CAVD also develops in the normal tricuspid aortic valve (TAV), but its progression in the BAV is more severe and rapid. Although hemodynamic abnormalities are increasingly considered potential pathogenic contributors, the native BAV hemodynamics remain largely unknown. Therefore, this study aims to compare experimentally the hemodynamic environments of TAV and BAV anatomies. Particle-image velocimetry was used to characterize the flow downstream of a native TAV and a model BAV mounted in a left-heart simulator and subjected to three steady flow rates characterizing different phases of the cardiac cycle. While the TAV developed a jet aligned along the valve axis, the BAV developed a skewed systolic jet, with skewness decreasing as the flow rate increased. Measurement of the transvalvular pressure revealed a valvular resistance up to 50% larger in the BAV than in the TAV. The increase in velocity between the TAV and BAV leads to an increase in shear stress downstream of the valve. This study reveals strong hemodynamic abnormalities in the BAV, which may contribute to CAVD pathogenesis.
Adaptive linear rank tests for eQTL studies
Szymczak, Silke; Scheinhardt, Markus O.; Zeller, Tanja; Wild, Philipp S.; Blankenberg, Stefan; Ziegler, Andreas
2013-01-01
Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal–Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. PMID:22933317
NASA Astrophysics Data System (ADS)
Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.
2017-09-01
Quality control is crucial in a wide variety of fields, as it helps to satisfy customers' needs and requirements by raising products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA X̄ chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption, compared to the traditional X̄-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than relying solely on the average run length (ARL) performance measure. Interpretation based on the ARL alone can be misleading, because the skewness and shape of the run-length distribution change with the process mean shift, varying from almost symmetric when the magnitude of the mean shift is large to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, the optimal parameters of the EWMA median chart are obtained by minimizing the OOC ARL while retaining the IC ARL at a desired value.
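The right-skew of the in-control run-length distribution is easy to demonstrate by simulation. The sketch below uses a plain two-sided EWMA chart on individual N(0, 1) observations with illustrative, non-optimised parameters (λ = 0.2, width L = 2), not the median chart design of the paper:

```python
import numpy as np

def ewma_run_lengths(lam=0.2, L=2.0, shift=0.0, n_runs=1000,
                     max_len=100000, rng=None):
    """Simulate run lengths of a two-sided EWMA chart on N(shift, 1) data,
    using the asymptotic control limit L * sqrt(lam / (2 - lam))."""
    rng = np.random.default_rng(rng)
    limit = L * np.sqrt(lam / (2.0 - lam))
    out = np.empty(n_runs, dtype=int)
    for r in range(n_runs):
        z, t = 0.0, 0
        while t < max_len:
            t += 1
            z = (1.0 - lam) * z + lam * rng.normal(shift, 1.0)
            if abs(z) > limit:                 # signal: record the run length
                break
        out[r] = t
    return out

rl = ewma_run_lengths(rng=0)
p10, p50, p90 = np.percentile(rl, [10, 50, 90])
# In control, the run-length distribution is strongly right-skewed,
# so the ARL (the mean) sits well above the median (the 50th percentile).
```

Reading off p10/p50/p90 rather than the mean alone gives exactly the fuller picture the abstract argues for.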
Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.
2015-01-01
Glaucoma is a progressive disease caused by damage to the optic nerve, with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observational study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565
Non-linearities in Theory-of-Mind Development.
Blijd-Hoogewys, Els M A; van Geert, Paul L C
2016-01-01
Research on Theory-of-Mind (ToM) has mainly focused on ages of core ToM development. This article follows a quantitative approach focusing on the level of ToM understanding on a measurement scale, the ToM Storybooks, in 324 typically developing children between 3 and 11 years of age. It deals with the eventual occurrence of developmental non-linearities in ToM functioning, using smoothing techniques, dynamic growth model building and additional indicators, namely moving skewness, moving growth rate changes and moving variability. The ToM sum-scores showed an overall developmental trend that leveled off toward the age of 10 years. Within this overall trend two non-linearities in the group-based change pattern were found: a plateau at the age of around 56 months and a dip at the age of 72-78 months. These temporary regressions in ToM sum-score were accompanied by a decrease in growth rate and variability, and a change in skewness of the ToM data, all suggesting a developmental shift in ToM understanding. The temporary decreases also occurred in the different ToM sub-scores and most clearly so in the core ToM component of beliefs. It was also found that girls had an earlier growth spurt than boys and that the underlying developmental path was more salient in girls than in boys. The consequences of these findings are discussed from various theoretical points of view, with an emphasis on a dynamic systems interpretation of the underlying developmental paths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, M-D.
2000-08-23
Internal combustion engines are a major source of airborne particulate matter (PM). The size of engine PM is in the sub-micrometer range, and the number of engine particles per unit volume is high, normally in the range of 10^12 to 10^14. To measure the size distribution of the engine particles, dilution of the aerosol sample is required. A diluter utilizing a venturi-ejector mixing technique is commercially available and was tested. The purpose of this investigation was to determine whether turbulence created by the ejector in the mini-dilutor changes the size of particles passing through it. The results of the NaCl aerosol experiments show no discernible difference in the geometric mean diameter and geometric standard deviation of particles passing through the ejector. Similar results were found for the DOP particles. The ratio of the total number concentrations before and after the ejector indicates that a dilution ratio of approximately 20 applies equally to DOP and NaCl particles, indicating that the dilution capability of the ejector is not affected by particle composition. The statistical analysis of the first and second moments of the distributions indicates that the ejector may not change the major parameters (e.g., the geometric mean diameter and geometric standard deviation) characterizing the size distributions of NaCl and DOP particles. However, examination of the skewness indicates that the ejector modifies the particle size distribution significantly, and can change the skewness of the distribution in an unpredictable and inconsistent manner. Furthermore, when the variability of particle counts in individual size ranges downstream of the ejector is examined, the variability is greater for DOP particles in the size range of 40-150 nm than for NaCl particles in the size range of 30 to 350 nm. The particle counts in this size region are high enough that the Poisson counting errors are small (<10%) compared with the tail regions. This result shows that the ejector device could have a higher bin-to-bin counting uncertainty for "soft" particles such as DOP than for a solid dry particle like NaCl. The results suggest that it may be difficult to precisely characterize the size distribution of particles ejected from the mini-dilution system if the particles are not solid.
Mean, Median, and Skew: Correcting a Textbook Rule
ERIC Educational Resources Information Center
von Hippel, Paul T.
2005-01-01
Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete…
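The discrete failure mode is easy to exhibit with the binomial distribution. Binomial(10, 0.1) is clearly right-skewed, yet its mean equals its median, contradicting the textbook rule; a stdlib-only check (the closed-form skewness formula is the standard one for the binomial):

```python
from math import comb, sqrt

n, p = 10, 0.1
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

mean = sum(k * pk for k, pk in enumerate(pmf))      # n*p = 1.0
cdf, median = 0.0, None
for k, pk in enumerate(pmf):                        # smallest k with CDF >= 0.5
    cdf += pk
    if cdf >= 0.5:
        median = k
        break
skew = (1 - 2 * p) / sqrt(n * p * (1 - p))          # ≈ 0.84: clearly right-skewed

# Right skew, yet the mean (1.0) is NOT to the right of the median (1):
# the "mean right of median under right skew" rule fails here.
```

This is exactly the kind of counterexample the article describes: a positively skewed distribution whose mean and median coincide.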
Generation of net sediment transport by velocity skewness in oscillatory sheet flow
NASA Astrophysics Data System (ADS)
Chen, Xin; Li, Yong; Chen, Genfa; Wang, Fujun; Tang, Xuelin
2018-01-01
This study uses a qualitative approach and a two-phase numerical model to investigate the net sediment transport caused by velocity skewness beneath oscillatory sheet flow and current. The qualitative approach is derived from a pseudo-laminar approximation of the boundary layer velocity and an exponential approximation of the concentration. The two-phase model reproduces well the instantaneous erosion depth, sediment flux, boundary layer thickness, and sediment transport rate. In particular, it illustrates the difference between the positive and negative flow stages caused by velocity skewness, which is important in determining the net boundary layer flow and the direction of net sediment transport. The two-phase model also explains the effects of sediment diameter and phase-lag on sediment transport, by comparison with instantaneous-type formulas, to better illustrate the velocity skewness effect. In previous studies of sheet flow transport in pure velocity-skewed flows, net sediment transport was attributed solely to the phase-lag effect. In the present study, the qualitative approach and the two-phase model show that the phase-lag effect is important but not sufficient to explain the net sediment transport beneath pure velocity-skewed flow and current: the asymmetric development of the wave boundary layer between the positive and negative flow stages also contributes to the sediment transport.
Seaman, Clara; Akingba, A George; Sucosky, Philippe
2014-04-01
The bicuspid aortic valve (BAV), which forms with two leaflets instead of three as in the normal tricuspid aortic valve (TAV), is associated with a spectrum of secondary valvulopathies and aortopathies potentially triggered by hemodynamic abnormalities. While studies have demonstrated an intrinsic degree of stenosis and the existence of a skewed orifice jet in the BAV, the impact of those abnormalities on BAV hemodynamic performance and energy loss has not been examined. This steady-flow study presents the comparative in vitro assessment of the flow field and energy loss in a TAV and type-I BAV under normal and simulated calcified states. Particle-image velocimetry (PIV) measurements were performed to quantify the velocity, vorticity, viscous shear stress, and Reynolds shear stress fields in normal and simulated calcified porcine TAV and BAV models at six flow rates spanning the systolic phase. The BAV model was created by suturing the two coronary leaflets of a porcine TAV. Calcification was simulated via deposition of glue beads in the base of the leaflets. Valvular performance was characterized in terms of geometric orifice area (GOA), pressure drop, effective orifice area (EOA), energy loss (EL), and energy loss index (ELI). The BAV generated an elliptical orifice and a jet skewed toward the noncoronary leaflet. In contrast, the TAV featured a circular orifice and a jet aligned along the valve long axis. While the BAV exhibited an intrinsic degree of stenosis (18% increase in maximum jet velocity and 7% decrease in EOA relative to the TAV at the maximum flow rate), it generated only a 3% increase in EL and its average ELI (2.10 cm²/m²) remained above the clinical threshold characterizing severe aortic stenosis. The presence of simulated calcific lesions normalized the alignment of the BAV jet and resulted in the loss of jet axisymmetry in the TAV.
It also amplified the degree of stenosis in the TAV and BAV, as indicated by the 342% and 404% increase in EL, 70% and 51% reduction in ELI and 48% and 51% decrease in EOA, respectively, relative to the nontreated valve models at the maximum flow rate. This study indicates the ability of the BAV to function as a TAV despite its intrinsic degree of stenosis and suggests the weak dependence of pressure drop on orifice area in calcified valves.
Yang, Huixia; Wei, Yumei; Su, Rina; Wang, Chen; Meng, Wenying; Wang, Yongqing; Shang, Lixin; Cai, Zhenyu; Ji, Liping; Wang, Yunfeng; Sun, Ying; Liu, Jiaxiu; Wei, Li; Sun, Yufeng; Zhang, Xueying; Luo, Tianxia; Chen, Haixia; Yu, Lijun
2016-01-01
Objective To use Z-scores to compare different charts of femur length (FL) applied to our population, with the aim of identifying the most appropriate chart. Methods A retrospective study was conducted in Beijing. Fifteen hospitals in Beijing were chosen as clusters using a systematic cluster sampling method, covering 15,194 pregnant women who delivered from June 20th to November 30th, 2013. The measurements of FL in the second and third trimesters were recorded, as well as the last measurement obtained before delivery. Based on the inclusion and exclusion criteria, we identified FL measurements from 19,996 ultrasounds of 7,194 patients between 11 and 42 weeks of gestation. The FL data were then transformed into Z-scores calculated using three series of reference equations obtained from three reports: Leung TN, Pang MW et al (2008); Chitty LS, Altman DG et al (1994); and Papageorghiou AT et al (2014). Each Z-score distribution was summarized by its mean, standard deviation (SD), skewness, and kurtosis, and was compared with the standard normal distribution using the Kolmogorov-Smirnov test. The histogram of each distribution was superimposed on the non-skewed standard normal curve (mean = 0, SD = 1) to provide a direct visual impression. Finally, the sensitivity and specificity of each reference chart for identifying fetuses <5th or >95th percentile (based on the observed distribution of Z-scores) were calculated, and the Youden index was listed. A scatter diagram with the 5th, 50th, and 95th percentile curves calculated from and superimposed on each reference chart was presented to provide a visual impression. Results The three Z-score distribution curves appeared to be normal, but none of them matched the expected standard normal distribution.
In our study, the Papageorghiou reference curve provided the best results, with a sensitivity of 100% for identifying fetuses with measurements < 5th and > 95th percentile, and specificities of 99.9% and 81.5%, respectively. Conclusions It is important to choose an appropriate reference curve when defining what is normal. The Papageorghiou reference curve for FL seems to be the best fit for our population. Perhaps it is time to change our reference curve for femur length. PMID:27458922
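The Z-score transformation and the chart-screening statistics used in the study above can be sketched as follows. This is a minimal illustration: the reference mean/SD values, the example measurement, and the flag vectors are hypothetical placeholders, not data from the study.

```python
def z_score(measured_mm, ref_mean_mm, ref_sd_mm):
    """Convert a femur-length measurement to a Z-score against a reference
    chart's mean and SD at the same gestational age."""
    return (measured_mm - ref_mean_mm) / ref_sd_mm

def sensitivity_specificity(flags_pred, flags_true):
    """Sensitivity, specificity, and Youden index of a reference chart's
    percentile flags (flags_pred) against flags based on the observed
    Z-score distribution (flags_true)."""
    tp = sum(p and t for p, t in zip(flags_pred, flags_true))
    tn = sum((not p) and (not t) for p, t in zip(flags_pred, flags_true))
    fn = sum((not p) and t for p, t in zip(flags_pred, flags_true))
    fp = sum(p and (not t) for p, t in zip(flags_pred, flags_true))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens + spec - 1  # Youden index = sens + spec - 1

# hypothetical reference values at one gestational age
z = z_score(32.0, 30.0, 2.0)  # → 1.0
```

A fetus is then flagged as <5th or >95th percentile depending on whether its Z-score falls below or above the corresponding normal quantiles of the chosen chart.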
Portfolio optimization using median-variance approach
NASA Astrophysics Data System (ADS)
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this more faithful representation, we analyze and compare the rate of return and risk between the mean-variance- and median-variance-based portfolios, which consist of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned compared to the mean-variance approach.
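The core substitution behind the median-variance approach, replacing the mean by the median as the location measure for portfolio returns, can be sketched as below. The weights and return series are illustrative only; the paper's full optimization model is not reproduced here.

```python
import statistics

def portfolio_returns(weights, asset_returns):
    """Per-period portfolio returns for fixed weights.
    asset_returns: one list of per-asset returns for each period."""
    return [sum(w * r for w, r in zip(weights, period)) for period in asset_returns]

def mean_variance(rets):
    """Classical Markowitz summary: mean return and variance (risk)."""
    return statistics.mean(rets), statistics.pvariance(rets)

def median_variance(rets):
    """Median-variance summary: the median is robust to the skewness of
    non-normal return distributions, while variance still measures risk."""
    return statistics.median(rets), statistics.pvariance(rets)
```

For a right-skewed return series (a few large gains), the median sits below the mean, so the two approaches rank portfolios differently.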
A pyramid scheme for three-dimensional diffusion equations on polyhedral meshes
NASA Astrophysics Data System (ADS)
Wang, Shuai; Hang, Xudeng; Yuan, Guangwei
2017-12-01
In this paper, a new cell-centered finite volume scheme is proposed for three-dimensional diffusion equations on polyhedral meshes, called the pyramid scheme (P-scheme). The scheme is designed for polyhedral cells with nonplanar cell-faces. The normal flux on a nonplanar cell-face is discretized on a planar face, which is determined by a simple optimization procedure. The resulting discrete form of the normal flux involves only cell-centered and cell-vertex unknowns, and is free from face-centered unknowns. In the case of hexahedral meshes with skewed nonplanar cell-faces, a quite simple expression is obtained for the discrete normal flux. Compared with the second-order accurate O-scheme [31], the P-scheme is more robust and its discretization cost is reduced remarkably. Numerical results are presented to show the performance of the P-scheme on various kinds of distorted meshes. In particular, the P-scheme is shown to be second-order accurate.
Psychological Health and Overweight and Obesity Among High Stressed Work Environments
Faghri, Pouran D; Mignano, Christina; Huedo-Medina, Tania B; Cherniack, Martin
2016-01-01
Correctional employees are recognized to underreport stress and stress symptoms and are known to have a culture that discourages appearing “weak” and seeking psychiatric help. This study assesses underreporting of stress and emotions. Additionally, it evaluates the relationships of stress and emotions with health behaviors. Correctional employees (n=317) completed physical assessments to measure body mass index (BMI), and surveys to assess perceived stress, emotions, and health behavior (diet, exercise, and sleep quality). Stress and emotion survey items were evaluated for under-reporting via skewness, kurtosis, and visual assessment of histograms. Structural equation modeling evaluated relationships between stress/emotion and health behaviors. Responses to stress and negatively worded emotions were non-normally distributed, whereas responses to positively worded emotions were normally distributed. Emotion predicted diet, exercise, and sleep quality, whereas stress predicted only sleep quality. As stress was a poor predictor of health behaviors, and responses to stress and negatively worded emotions were non-normally distributed, this may suggest that correctional employees are under-reporting stress and negative emotions. PMID:27547828
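The item-screening step described above, flagging survey items whose response distributions are non-normal via skewness and kurtosis, can be sketched as follows. The cutoff values are common rules of thumb and are an assumption here, not taken from the study.

```python
import statistics

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    n = len(xs)
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

def excess_kurtosis(xs):
    """Sample excess kurtosis (fourth standardized moment minus 3)."""
    n = len(xs)
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 4 for x in xs) / (n * s ** 4) - 3.0

def flag_nonnormal(xs, skew_cut=2.0, kurt_cut=7.0):
    """Flag an item as non-normally distributed.
    The cutoffs are illustrative rule-of-thumb values."""
    return abs(skewness(xs)) > skew_cut or abs(excess_kurtosis(xs)) > kurt_cut
```

A visual check of histograms, as in the study, would complement these numeric screens.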
Kolmogorov and scalar spectral regimes in numerical turbulence
NASA Technical Reports Server (NTRS)
Kerr, R. M.
1985-01-01
Velocity and passive-scalar spectra for turbulent fields generated by a forced three-dimensional simulation at Taylor-microscale Reynolds numbers up to 83 are shown to have distinct spectral regimes, including a Kolmogorov inertial subrange. Both one- and three-dimensional spectra are shown for comparison with experiment and theory, respectively. When normalized by the Kolmogorov dissipation scales, velocity spectra collapse to a single curve and a high-wavenumber bulge is seen. The bulge leads to an artificially high Kolmogorov constant, but is consistent with recent measurements of the velocity spectrum in the dissipation regime and the velocity-derivative skewness. Scalar spectra, when normalized by the Oboukov-Corrsin scales, collapse to curves which depend only on Prandtl number and show a universal inertial-convective subrange, independent of Prandtl number. When normalized by the Batchelor scales, the scalar spectra show a universal dissipation regime which is independent of Prandtl number from 0.1 to 1.0. The time development of velocity spectra is illustrated by energy-transfer spectra in which distinct pulses propagate to high wavenumbers.
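The Kolmogorov normalization used for the spectral collapse above can be sketched as follows; the viscosity, dissipation rate, and spectrum values are illustrative placeholders, not simulation data.

```python
def kolmogorov_scales(nu, eps):
    """Kolmogorov dissipation scales from kinematic viscosity nu and
    mean dissipation rate eps."""
    eta = (nu ** 3 / eps) ** 0.25   # length scale
    v = (nu * eps) ** 0.25          # velocity scale
    tau = (nu / eps) ** 0.5         # time scale
    return eta, v, tau

def normalized_spectrum(k, E, nu, eps):
    """Nondimensionalize an energy spectrum E(k) by Kolmogorov scales:
    k* = k * eta,  E* = E / (eps^(1/4) * nu^(5/4)).
    Spectra from flows at different Reynolds numbers collapse onto a
    single curve in these variables in the dissipation range."""
    eta, _, _ = kolmogorov_scales(nu, eps)
    return [ki * eta for ki in k], [Ei / (eps ** 0.25 * nu ** 1.25) for Ei in E]
```

The Oboukov-Corrsin and Batchelor normalizations mentioned in the abstract follow the same pattern with scalar-dissipation and Prandtl-number factors.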
Spolarics, Zoltan; Peña, Geber; Qin, Yong; Donnelly, Robert J; Livingston, David H
2017-01-01
Females have a longer lifespan and better general health than males. A considerable number of studies have also demonstrated that, after trauma and sepsis, females present better outcomes than males, indicating sex-related differences in the innate immune response. The current notion is that differences in the immuno-modulatory effects of sex hormones are the underlying causative mechanism. However, the field remains controversial and the exclusive role of sex hormones has been challenged. Here, we propose that polymorphic X-linked immune competent genes, which are abundant in the population, are important players in sex-based immuno-modulation and play a key role in causing sex-related outcome differences following trauma or sepsis. We describe the differences in X chromosome (ChrX) regulation between males and females and their consequences in the context of common X-linked polymorphisms at the individual as well as the population level. We also discuss the potential pathophysiological and immuno-modulatory aspects of ChrX cellular mosaicism, which is unique to females, and how this may contribute to sex-biased immuno-modulation. The potential confounding effects of ChrX skewing of cell progenitors in the bone marrow are also presented, together with aspects of acute trauma-induced de novo ChrX skewing at the periphery. In support of the hypothesis, novel observations indicating ChrX skewing in a female trauma cohort, as well as case studies depicting the temporal relationship between trauma-induced cellular skewing and the clinical course, are also described. Finally, we list and discuss a selected set of polymorphic X-linked genes, which are frequent in the population and have key regulatory or metabolic functions in the innate immune response and, therefore, are primary candidates for mediating sex-biased immune responses.
We conclude that sex-related differences in a variety of disease processes including the innate inflammatory response to injury and infection may be related to the abundance of X-linked polymorphic immune-competent genes, differences in ChrX regulation, and inheritance patterns between the sexes and the presence of X-linked cellular mosaicism, which is unique to females.
NASA Astrophysics Data System (ADS)
Scholkmann, Felix; Cifra, Michal; Alexandre Moraes, Thiago; de Mello Gallep, Cristiano
2011-12-01
The aim of the present study was to test whether the multifractal properties of ultra-weak photon emission (UPE) from germinating wheat seedlings (Triticum aestivum) change when the seedlings are treated with different concentrations of the toxin potassium dichromate (PD). To this end, UPE was measured (50 seedlings in one Petri dish, duration: approx. 16.6-28 h) from samples of three groups: (i) control (group C, N = 9), (ii) treated with 25 ppm of PD (group G25, N = 32), and (iii) treated with 150 ppm of PD (group G150, N = 23). For the multifractal analysis, the following steps were performed: (i) each UPE time series was trimmed to a final length of 1000 min; (ii) each UPE time series was filtered, linearly detrended and normalized; (iii) the multifractal spectrum (f(α)) was calculated for every UPE time series using the backward multifractal detrended moving average (MFDMA) method; (iv) each multifractal spectrum was characterized by calculating the mode (αmode) of the spectrum and the degree of multifractality (Δα); (v) for every UPE time series its mean, skewness and kurtosis were also calculated; finally, (vi) all obtained parameters were analyzed to determine their ability to differentiate between the three groups. This was based on Fisher's discriminant ratio (FDR), which was calculated for each parameter combination. Additionally, a non-parametric test was used to test whether the parameter values were significantly different or not. The analysis showed that, when comparing all three groups, FDR had the highest values for the multifractal parameters (αmode, Δα). Furthermore, the differences in these parameters between the groups were statistically significant (p < 0.05). The classical parameters (mean, skewness and kurtosis) had lower FDR values than the multifractal parameters in all cases and showed no significant difference between the groups (except for the skewness between groups C and G150).
In conclusion, multifractal analysis enables changes in UPE time series to be detected even when they are hidden for normal linear signal analysis methods. The analysis of changes in the multifractal properties might be a basis to design a classification system enabling the intoxication of cell cultures to be quantified based on UPE measurements.
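Fisher's discriminant ratio, used above to rank parameters by their ability to separate the treatment groups, has a simple scalar form; a minimal sketch (with made-up feature values, not the study's UPE data):

```python
import statistics

def fisher_discriminant_ratio(a, b):
    """FDR for one scalar feature measured in two groups:
    (mu_a - mu_b)^2 / (var_a + var_b).
    Larger values mean the feature separates the groups better."""
    return (statistics.mean(a) - statistics.mean(b)) ** 2 / (
        statistics.variance(a) + statistics.variance(b))
```

Ranking candidate parameters (e.g., αmode, Δα, mean, skewness, kurtosis) by this ratio identifies those most useful for a classifier.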
Bias of averages in life-cycle footprinting of infrastructure: truck and bus case studies.
Taptich, Michael N; Horvath, Arpad
2014-11-18
The life-cycle output (e.g., level of service) of infrastructure systems heavily influences their normalized environmental footprint. Many studies and tools calculate emission factors based on average productivity; however, the performance of these systems varies over time and space. We evaluate the appropriate use of emission factors based on average levels of service by comparing them to those reflecting a distribution of system outputs. For the provision of truck and bus services where fuel economy is assumed constant over levels of service, emission factor estimation biases, described by Jensen's inequality, always result in larger-than-expected environmental impacts (3%-400%) and depend strongly on the variability and skew of truck payloads and bus ridership. Well-to-wheel greenhouse gas emission factors for diesel trucks in California range from 87 to 1,500 g of CO2 equivalents per ton-km, depending on the size and type of trucks and the services performed. Along a bus route in San Francisco, well-to-wheel emission factors ranged between 53 and 940 g of CO2 equivalents per passenger-km. The use of biased emission factors can have profound effects on various policy decisions. If average emission rates must be used, reflecting a distribution of productivity can reduce emission factor biases.
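The bias described by Jensen's inequality above can be made concrete: because 1/x is convex, an emission factor computed at the average payload understates the expected emission factor over the payload distribution. A minimal sketch with hypothetical numbers (not the study's data):

```python
import statistics

def ef_of_average(g_per_km, payloads_t):
    """Emission factor computed at the average payload (the biased shortcut):
    g / E[X]."""
    return g_per_km / statistics.mean(payloads_t)

def average_of_ef(g_per_km, payloads_t):
    """Expected emission factor over the payload distribution: E[g / X].
    By Jensen's inequality this is always >= g / E[X], so the shortcut
    understates the per-ton-km impact."""
    return statistics.mean(g_per_km / p for p in payloads_t)
```

The gap between the two grows with the variability and skew of the payload (or ridership) distribution, which is the study's main point.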
Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W
2011-11-01
Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron-microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can be found also in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
Analyzing coastal environments by means of functional data analysis
NASA Astrophysics Data System (ADS)
Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.
2017-07-01
Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension-reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, based on the point density function (PDF), was employed after fitting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transform to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
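The centered log-ratio transform mentioned above is a standard way to move compositional data (parts summing to one, as in a PSD) into unconstrained space before PCA. A minimal sketch:

```python
import math

def clr(composition):
    """Centered log-ratio transform of one composition (positive parts):
    clr(x)_i = ln(x_i / g(x)), where g(x) is the geometric mean of the parts.
    The transformed components always sum to zero."""
    logs = [math.log(x) for x in composition]
    g = sum(logs) / len(logs)  # log of the geometric mean
    return [l - g for l in logs]
```

After the transform, ordinary PCA and cluster analysis can be applied without the spurious correlations induced by the constant-sum constraint.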
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the Least-Square fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs, as well as all BPM gains and BPM cross-plane couplings, through Least-Square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components: R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances, and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
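The mode-selection idea above can be sketched as a truncated-SVD least-squares solve: small singular values (ill-conditioned directions of the derivative matrix) are discarded, which stabilizes each fitting step. This is a generic sketch of the technique, not the PEP-II fitting code; the truncation rules are illustrative.

```python
import numpy as np

def svd_least_squares(A, b, n_modes=None, rcond=1e-10):
    """Least-squares solution of A x ~ b keeping only dominant SVD modes.
    Modes with singular values below rcond * s_max (or beyond n_modes)
    are dropped, stabilizing the solve for ill-conditioned A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]                 # drop tiny singular values
    if n_modes is not None:
        keep &= np.arange(len(s)) < n_modes  # keep at most n_modes modes
    # x = V diag(1/s) U^T b, restricted to the kept modes
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```

In an iterative fit, A would be the derivative (Jacobian) matrix of the measured quantities with respect to the variables, recomputed each step.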
The effect of transverse shear in a cracked plate under skew-symmetric loading
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1979-01-01
The problem of an elastic plate containing a through crack and subjected to twisting moments or transverse shear loads is considered. By using a bending theory which allows the boundary conditions on the crack surface regarding the normal moment, the twisting moment, and the transverse shear load to be satisfied separately, it is found that the resulting asymptotic stress field around the crack tip becomes identical to that given by the elasticity solutions of the plane strain and antiplane shear problems. The problem is solved for a uniformly distributed or concentrated twisting moment or transverse shear load, and the normalized Mode II and Mode III stress-intensity factors are tabulated. The results also include the effect of Poisson's ratio and of material orthotropy, for specially orthotropic materials, on the stress-intensity factors.
Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.
Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A
2013-11-01
We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We further show how this test can be applied to multidimensional signals.
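The uniformity construction described above is in the spirit of the probability integral transform: if each simulated signal is scored through its own posterior CDF, the resulting values are uniform on [0, 1] when the posterior calculation is correct, and deviate characteristically otherwise. This sketch is a generic illustration of that idea, not the paper's specific diagnostic quantity.

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a normal distribution, used here as an example posterior CDF."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pit_values(samples_and_posteriors):
    """For each (signal s, posterior CDF F) pair, record u = F(s).
    If the posterior is correct, the u values are uniform on [0, 1];
    e.g. a too-narrow posterior piles u near 0 and 1, a shifted maximum
    skews the u histogram to one side."""
    return [F(s) for s, F in samples_and_posteriors]
```

Comparing the empirical distribution of the u values against the uniform (visually or with a goodness-of-fit test) then reveals which kind of error is present.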
Sanabria, Sergio J; Furrer, Roman; Neuenschwander, Jürg; Niemz, Peter; Schütz, Philipp
2015-12-01
Reliable non-destructive testing (NDT) ultrasound systems for timber composite structures require quantitative understanding of the propagation of ultrasound beams in wood. A finite-difference time-domain (FDTD) model is described, which incorporates local anisotropy variations of stiffness, damping and density in timber elements. The propagation of pulsed air-coupled ultrasound (ACU) beams in normal and slanted incidence configurations is reproduced by direct definition of material properties (gas, solid) at each model pixel. First, the model was quantitatively validated against analytical derivations. Time-varying wavefronts in unbounded timber with curved growth rings were accurately reproduced, as well as the acoustic properties (velocity, attenuation, beam skewing) of ACU beams transmitted through timber lamellas. An experimental sound field imaging (SFI) setup was implemented at NDT frequencies (120 kHz), which for specific beam incidence positions allows spatially resolved ACU field characterization at the receiver side. The good agreement of experimental and modeled beam shifts across timber laminates allowed extrapolation of the inner propagation paths. The modeling base is an orthotropic stiffness dataset for the desired wood species. In cross-grain planes, beam skewing leads to position-dependent wave paths. They are well-described in terms of the growth ring curvature, which is obtained by visual observation of the laminate. Extraordinary refraction phenomena were observed, which lead to well-collimated quasi-shear wave coupling at grazing beam incidence angles. The anisotropic damping in cross-grain planes is satisfactorily explained in terms of the known anisotropic stiffness dataset and a constant loss tangent. The incorporation of high-resolution density maps (X-ray computed tomography) provided insight into ultrasound scattering effects in the layered growth ring structure. 
Finally, the combined potential of the FDTD model and the SFI setup for material property and defect inversion in anisotropic materials was demonstrated. A portable SFI demonstrator was implemented with a multi-sensor MEMS receiver array that captures and compensates for variable wave propagation paths in glued laminated timber, and improves the imaging of lamination defects. Copyright © 2015 Elsevier B.V. All rights reserved.
Andronowski, Janna M; Crowder, Christian
2018-05-21
Quantifying the amount of cortical bone loss is one variable used in histological methods of adult age estimation. Measurements of cortical area tend to be subjective, and additional information regarding bone loss is not captured because cancellous bone is disregarded. We examine whether measuring bone area (cancellous + cortical area) rather than cortical area may improve histological age estimation for the sixth rib. Mid-shaft rib cross-sections (n = 114) with a skewed sex distribution were analyzed. Ages range from 16 to 87 years. Variables included: total cross-sectional area, cortical area, bone area, relative bone area, relative cortical area, and endosteal area. Males have larger mean total cross-sectional area, bone area, and cortical area than females. Females display a larger mean endosteal area and greater mean relative measure values. Relative bone area significantly correlates with age. The relative bone area variable will provide researchers with a less subjective and more accurate measure than cortical area. © 2018 American Academy of Forensic Sciences.
Candela, L.; Olea, R.A.; Custodio, E.
1988-01-01
Groundwater quality observation networks are examples of discontinuous sampling of variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or to find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring saltwater intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable chloride concentration, used to trace the intrusion, exhibits sudden changes within short distances, which make the standard error fairly insensitive to changes in sampling pattern and to substantial fluctuations in the number of wells. © 1988.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Hajarian, Masoud
2012-08-01
A matrix P is called symmetric orthogonal if P = P^T = P^(-1). A matrix X is said to be generalised bisymmetric with respect to P if X = X^T = PXP. It is obvious that any symmetric matrix is also a generalised bisymmetric matrix with respect to I (the identity matrix). By extending the idea of the Jacobi and Gauss-Seidel iterations, this article proposes two new iterative methods, respectively, for computing the generalised bisymmetric (containing the symmetric solution as a special case) and skew-symmetric solutions of the generalised Sylvester matrix equation ? (including the Sylvester and Lyapunov matrix equations as special cases), which is encountered in many systems and control applications. When the generalised Sylvester matrix equation has a unique generalised bisymmetric (skew-symmetric) solution, the first (second) iterative method converges to the generalised bisymmetric (skew-symmetric) solution of this matrix equation for any initial generalised bisymmetric (skew-symmetric) matrix. Finally, some numerical results are given to illustrate the effect of the theoretical results.
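The two structural definitions above are easy to check numerically; a minimal sketch (the example matrices are illustrative, and the paper's iterative solvers themselves are not reproduced):

```python
import numpy as np

def is_symmetric_orthogonal(P, tol=1e-10):
    """Check P = P^T = P^(-1), i.e. P is symmetric and P @ P = I."""
    return (np.allclose(P, P.T, atol=tol)
            and np.allclose(P @ P, np.eye(len(P)), atol=tol))

def is_generalised_bisymmetric(X, P, tol=1e-10):
    """Check X = X^T = P X P with respect to a symmetric orthogonal P."""
    return np.allclose(X, X.T, atol=tol) and np.allclose(X, P @ X @ P, atol=tol)
```

With P = I this reduces to ordinary symmetry; with P the exchange (anti-diagonal) matrix, generalised bisymmetric matrices are the symmetric centrosymmetric matrices.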
Arnold-Chiari malformation and nystagmus of skew
Pieh, C.; Gottlob, I.
2000-01-01
The Arnold-Chiari malformation is typically associated with downbeat nystagmus. Eye movement recordings in two patients with Arnold-Chiari malformation type 1 showed, in addition to downbeat and gaze-evoked nystagmus, intermittent nystagmus of skew. To date this finding has not been reported in association with Arnold-Chiari malformation. Nystagmus of skew should raise the suspicion of Arnold-Chiari malformation and prompt sagittal head MRI examination. PMID:10864619
Circular distributions based on nonnegative trigonometric sums.
Fernández-Durán, J J
2004-06-01
A new family of distributions for circular random variables is proposed. It is based on nonnegative trigonometric sums and can be used to model data sets which present skewness and/or multimodality. In this family of distributions, the trigonometric moments are easily expressed in terms of the parameters of the distribution. The proposed family is applied to two data sets, one related with the directions taken by ants and the other with the directions taken by turtles, to compare their goodness of fit versus common distributions used in the literature.
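A density in this family has the form of a truncated trigonometric sum; a minimal sketch of evaluating such a density (the coefficients here are illustrative, and the paper's construction guaranteeing nonnegativity of the sum via squared complex sums is not reproduced):

```python
import math

def nts_density(theta, a, b):
    """Circular density built from a trigonometric sum:
    f(theta) = (1 / 2*pi) * (1 + sum_k (a_k cos(k*theta) + b_k sin(k*theta))).
    The coefficients must be chosen so the bracket stays nonnegative
    on the whole circle (as the nonnegative-sum construction ensures)."""
    s = 1.0 + sum(a[k] * math.cos((k + 1) * theta) + b[k] * math.sin((k + 1) * theta)
                  for k in range(len(a)))
    return s / (2.0 * math.pi)
```

Because the trigonometric moments are the Fourier coefficients, adding terms lets the family capture skewness and multimodality that simple circular distributions miss.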
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data. © 2017 Yufei Gao et al. PMID:29065568
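The two-stage idea, first scattering key-value pairs over many virtual partitions, then recombining virtual partitions onto reducers to balance load, can be sketched as follows. This is a simplified stand-in for PTSH's partition tuning, not the published algorithm; the greedy packing rule is an assumption.

```python
from collections import Counter

def naive_partition(keys, n_reducers):
    """One-stage hash partitioning: returns the per-reducer load.
    Keys that collide on one reducer overload it (data skew)."""
    return Counter(hash(k) % n_reducers for k in keys)

def tuned_partition(keys, n_reducers, n_virtual=32):
    """Two-stage sketch: hash into many virtual partitions, then greedily
    pack virtual partitions, largest first, onto the least-loaded reducer."""
    virtual = Counter(hash(k) % n_virtual for k in keys)
    loads = [0] * n_reducers
    for _, count in virtual.most_common():  # largest partition first
        i = loads.index(min(loads))         # least-loaded reducer
        loads[i] += count
    return loads
```

When skew comes from hash collisions among distinct keys (rather than a single hot key), the finer virtual partitioning lets the packer spread load evenly.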
Static performance investigation of a skewed-throat multiaxis thrust-vectoring nozzle concept
NASA Technical Reports Server (NTRS)
Wing, David J.
1994-01-01
The static performance of a jet exhaust nozzle which achieves multiaxis thrust vectoring by physically skewing the geometric throat has been characterized in the static test facility of the 16-Foot Transonic Tunnel at NASA Langley Research Center. The nozzle has an asymmetric internal geometry defined by four surfaces: a convergent-divergent upper surface with its ridge perpendicular to the nozzle centerline, a convergent-divergent lower surface with its ridge skewed relative to the nozzle centerline, an outwardly deflected sidewall, and a straight sidewall. The primary goal of the concept is to provide efficient yaw thrust vectoring by forcing the sonic plane (nozzle throat) to form at a yaw angle defined by the skewed ridge of the lower surface contour. A secondary goal is to provide multiaxis thrust vectoring by combining the skewed-throat yaw-vectoring concept with upper and lower pitch flap deflections. The geometric parameters varied in this investigation included lower surface ridge skew angle, nozzle expansion ratio (divergence angle), aspect ratio, pitch flap deflection angle, and sidewall deflection angle. Nozzle pressure ratio was varied from 2 to a high of 11.5 for some configurations. The results of the investigation indicate that efficient, substantial multiaxis thrust vectoring was achieved by the skewed-throat nozzle concept. However, certain control surface deflections destabilized the internal flow field, which resulted in substantial shifts in the position and orientation of the sonic plane and had an adverse effect on thrust-vectoring and weight flow characteristics. By increasing the expansion ratio, the location of the sonic plane was stabilized. The asymmetric design resulted in interdependent pitch and yaw thrust vectoring as well as nonzero thrust-vector angles with undeflected control surfaces. 
By skewing the ridges of both the upper and lower surface contours, the interdependency between pitch and yaw thrust vectoring may be eliminated and the location of the sonic plane may be further stabilized.
Lamontagne, Jonathan R.; Stedinger, Jery R.; Berenbrock, Charles; Veilleux, Andrea G.; Ferris, Justin C.; Knifong, Donna L.
2012-01-01
Flood-frequency information is important in the Central Valley region of California because of the high risk of catastrophic flooding. Most traditional flood-frequency studies focus on peak flows, but for assessing the adequacy of reservoirs, levees, and other flood control structures, sustained flood flow (flood duration) frequency data are needed. This study focuses on rainfall or rain-on-snow floods, rather than the annual maximum, because rain events produce the largest floods in the region. A key to estimating flood-duration frequency is determining the regional skew for such data. Of the 50 sites used in this study to determine regional skew, 28 sites were considered to have little to no significant regulated flows, and for the 22 sites considered significantly regulated, unregulated daily flow data were synthesized by using reservoir storage changes and diversion records. The unregulated, annual maximum rainfall flood flows for selected durations (1-day, 3-day, 7-day, 15-day, and 30-day) for all 50 sites were furnished by the U.S. Army Corps of Engineers. Station skew was determined by using the expected moments algorithm program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual flood-duration data. Bayesian generalized least squares regression procedures used in earlier studies were modified to address problems caused by large cross correlations among concurrent rainfall floods in California and to address the extensive censoring of low outliers at some sites, by using the new expected moments algorithm for fitting the LP3 distribution to rainfall flood-duration data. To properly account for these problems and to develop suitable regional-skew regression models and regression diagnostics, a combination of ordinary least squares, weighted least squares, and Bayesian generalized least squares regressions was adopted.
This new methodology determined that a nonlinear model relating regional skew to mean basin elevation was the best model for each flood duration. The regional-skew values ranged from -0.74 for a 1-day flood duration and a mean basin elevation less than 2,500 feet to values near 0 for a 7-day flood duration and a mean basin elevation greater than 4,500 feet. This relation between skew and elevation reflects the interaction of snow and rain, which increases with elevation. The regional skews are more accurate, and their mean squared errors smaller, than those of the Interagency Advisory Committee on Water Data's national skew map in Bulletin 17B.
A proportional integral estimator-based clock synchronization protocol for wireless sensor networks.
Yang, Wenlun; Fu, Minyue
2017-11-01
Clock synchronization is an issue of vital importance in applications of WSNs. This paper proposes a proportional integral estimator-based protocol (EBP) to achieve clock synchronization in wireless sensor networks. As each local clock skew gradually drifts, synchronization accuracy declines over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a fully synchronous protocol is unrealistic in practice. Numerical simulations illustrate the performance of the proposed protocol. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
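The proportional-integral idea behind such a protocol can be illustrated with a toy simulation. This is only a minimal sketch under assumed gains (`kp`, `ki`) and a constant true skew, not the paper's EBP protocol: each sync round the node measures the offset accrued since the last round, feeds it into a PI update of its skew estimate, and corrects its clock.

```python
def pi_skew_sync(true_skew=1e-4, rounds=200, dt=1.0, kp=0.5, ki=0.1):
    """Toy PI loop: drive the estimated clock skew toward the true skew
    using only per-round offset measurements. Gains are illustrative."""
    est = 0.0     # current skew estimate
    integ = 0.0   # integral of measured offsets
    for _ in range(rounds):
        offset = (true_skew - est) * dt  # offset accrued since last sync
        integ += offset                  # integral term absorbs residual bias
        est += kp * offset + ki * integ  # PI update of the skew estimate
        # the clock offset itself is corrected at every sync, so only the
        # per-round residual drift appears in the next measurement
    return est
```

With these gains the two-state loop is stable (its eigenvalues lie inside the unit circle), so the estimate converges to the true skew; in a real WSN the offset measurement would come from time-stamped message exchange with a neighbor rather than being simulated.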
The Angular Three-Point Correlation Function in the Quasi-linear Regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchalter, Ari; Kamionkowski, Marc; Jaffe, Andrew H.
2000-02-10
We calculate the normalized angular three-point correlation function (3PCF), q, as well as the normalized angular skewness, s₃, assuming the small-angle approximation, for a biased mass distribution in flat and open cold dark matter (CDM) models with Gaussian initial conditions. The leading-order perturbative results incorporate the explicit dependence on the cosmological parameters, the shape of the CDM transfer function, the linear evolution of the power spectrum, the form of the assumed redshift distribution function, and linear and nonlinear biasing, which may be evolving. Results are presented for different redshift distributions, including that appropriate for the APM Galaxy Survey, as well as for a survey with a mean redshift of z ≈ 1 (such as the VLA FIRST Survey). Qualitatively, many of the results found for s₃ and q are similar to those obtained in a related treatment of the spatial skewness and 3PCF, such as a leading-order correction to the standard result for s₃ in the case of nonlinear bias (as defined for unsmoothed density fields), and the sensitivity of the configuration dependence of q to both cosmological and biasing models. We show that since angular correlation functions (CFs) are sensitive to clustering over a range of redshifts, the various evolutionary dependences included in our predictions imply that measurements of q in a deep survey might better discriminate between models with different histories, such as evolving versus nonevolving bias, that can have similar spatial CFs at low redshift. Our calculations employ a derived equation, valid for open, closed, and flat models, to obtain the angular bispectrum from the spatial bispectrum in the small-angle approximation. (c) 2000 The American Astronomical Society.
Buhari, Faiza Sulaiman; Selvaraj, Venkatesh
2016-01-01
Background and Aims: Earlier studies have shown that the type of laryngoscope blade influences the degree of hemodynamic response to endotracheal intubation. The aim of the study was to evaluate the hemodynamic response to oral endotracheal intubation with C-MAC laryngoscopy and McCoy laryngoscopy compared to that of Macintosh laryngoscopy in adult patients under general anesthesia. Material and Methods: This is a prospective randomized parallel group study. Ninety American Society of Anesthesiologists physical status I patients were randomly allotted into three groups. Group A – Macintosh laryngoscopy (control group). Group B – laryngoscopy with McCoy laryngoscope. Group C – laryngoscopy with C-MAC video laryngoscope. Heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) were monitored at baseline (just before induction), just before intubation (T0), 1 min (T1), 3 min (T3), 5 min (T5), and 10 min (T10) after intubation. Intergroup comparison of study parameters was done by unpaired sample t-test for normal data and Mann-Whitney U-test for skewed data. For within-group comparison, the repeated measures ANOVA for normal data and the Friedman test followed by the Wilcoxon signed rank test for skewed data were performed. Results: In the C-MAC group, HR was significantly higher than in the Macintosh group at 3 min after intubation, whereas SBP, DBP, and MAP were significantly higher at 1 min. The McCoy group showed a response similar to the Macintosh group at all time intervals. Conclusion: The C-MAC video laryngoscope evokes a comparatively greater hemodynamic response than the Macintosh laryngoscope. PMID:28096584
Mangold, Alexandra; Trenkwalder, Katharina; Ringler, Max; Hödl, Walter; Ringler, Eva
2015-09-03
Reproductive skew, the uneven distribution of reproductive success among individuals, is a common feature of many animal populations. Several scenarios have been proposed to favour either high or low levels of reproductive skew. In particular, a male-biased operational sex ratio and the asynchronous arrival of females are expected to cause high variation in reproductive success among males. Recently it has been suggested that the type of benefits provided by males (fixed vs. dilutable) could also strongly impact individual mating patterns, thereby affecting reproductive skew. We tested this hypothesis in Hyalinobatrachium valerioi, a Neotropical glass frog with prolonged breeding and paternal care. We monitored and genetically sampled a natural population in southwestern Costa Rica during the breeding season in 2012 and performed parentage analysis of adult frogs and tadpoles to investigate individual mating frequencies and possible mating preferences, and to estimate reproductive skew in males and females. We identified a polygamous mating system in which high proportions of males (69 %) and females (94 %) reproduced successfully. The variance in male mating success could largely be attributed to differences in time spent calling at the reproductive site, but not to body size or relatedness. Female H. valerioi were not choosy and mated indiscriminately with available males. Our findings support the hypothesis that dilutable male benefits - such as parental care - can favour female polyandry and maintain low levels of reproductive skew among males within a population, even in the presence of direct male-male competition and a highly male-biased operational sex ratio. We hypothesize that low male reproductive skew might be a general characteristic of prolonged breeders with paternal care.
Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin
2015-11-01
In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
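As a minimal illustration of the MLE approach for left-censored data discussed in this abstract (not the authors' code), one can maximize a censored lognormal likelihood directly: detects contribute density terms, while nondetects contribute the probability mass below the detection limit. The detection limit, sample size, and distribution parameters below are arbitrary assumptions.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
true = rng.lognormal(mean=1.0, sigma=0.8, size=300)  # synthetic concentrations
dl = 2.0                          # assumed detection limit
nondetect = true < dl             # left-censored observations
obs = np.where(nondetect, dl, true)

def negloglik(params):
    """Censored lognormal negative log-likelihood."""
    mu, logsig = params
    sig = np.exp(logsig)          # parameterize log-sigma to keep sigma > 0
    z = np.log(obs[~nondetect])
    ll_det = stats.norm.logpdf(z, mu, sig) - z           # lognormal log-density
    ll_cen = stats.norm.logcdf((np.log(dl) - mu) / sig)  # log P(X < dl)
    return -(ll_det.sum() + nondetect.sum() * ll_cen)

res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sig_hat = res.x[0], np.exp(res.x[1])
```

Here roughly a third of the sample is censored, yet the fitted log-scale parameters recover the generating values; substituting a misspecified parametric family is exactly the failure mode the study warns about.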
The skew ray ambiguity in the analysis of videokeratoscopic data.
Iskander, D Robert; Davis, Brett A; Collins, Michael J
2007-05-01
Skew ray ambiguity is present in most videokeratoscopic measurements when azimuthal components of the corneal curvature are not taken into account. There have been some reported studies, based on theoretical predictions and measured test surfaces, suggesting that skew ray ambiguity is significant for highly deformed corneas or decentered corneal measurements. However, the effect of skew ray ambiguity in ray tracing through videokeratoscopic data has not been studied in depth. We have evaluated the significance of the skew ray ambiguity and its effect on the analyzed corneal optics. This has been achieved by devising a procedure in which we compared the corneal wavefront aberrations estimated from 3D ray tracing with those determined from 2D (meridional based) estimates of the refractive power. The latter was possible due to the recently developed concept of refractive Zernike power polynomials, which links the refractive power domain with that of the wavefront. Simulated corneal surfaces as well as data from a range of corneas (from two different Placido disk-based videokeratoscopes) were used to find the limit at which the difference in estimated corneal wavefronts (or the corresponding refractive powers) would have clinical significance (e.g., equivalent to 0.125 D or more). The inclusion/exclusion of the skew ray in the analyses showed some differences in the results. However, the proposed procedure showed clinically significant differences only for highly deformed corneas and only for large corneal diameters. For the overwhelming majority of surfaces, the skew ray ambiguity is not a clinically significant issue in the analysis of videokeratoscopic data, indicating that meridional processing, such as that encountered in the calculation of refractive power maps, is adequate.
Accelerating dark energy cosmological model in two fluids with hybrid scale factor
NASA Astrophysics Data System (ADS)
Mishra, B.; Sahoo, P. K.; Ray, Pratik P.
In this paper, we have investigated the anisotropic behavior of the accelerating universe in Bianchi V spacetime in the framework of General Relativity (GR). The matter field considered consists of two non-interacting fluids, i.e. the usual string fluid and a dark energy (DE) fluid. In order to represent the pressure anisotropy, skewness parameters are introduced along three different spatial directions. To achieve physically realistic solutions to the field equations, we have considered a scale factor, known as the hybrid scale factor, which is generated by a time-varying deceleration parameter. This simulates a cosmic transition from early deceleration to late-time acceleration. It is observed that the string fluid dominates the universe during the early deceleration phase but does not substantially affect the nature of the cosmic dynamics at the late phase, whereas the DE fluid dominates the universe at present, in accordance with observational results. Hence, we analyze here the time-dependent role of the two fluids in the transitional phases of the universe, which sheds light on the cosmic expansion and DE. The role of DE with a variable equation of state (EoS) parameter and skewness parameters is also discussed along with the physical and geometrical properties.
NASA Astrophysics Data System (ADS)
Dias-Junior, Cléo Q.; Dias, Nelson Luís; Fuentes, José D.; Chamecki, Marcelo
2017-04-01
In this work, we investigate the ozone dynamics during the occurrence of both downdrafts associated with mesoscale convective storms and non-classical low-level jets. Extensive data sets, comprised of air chemistry and meteorological observations made in the Amazon region of Brazil over the course of 2014-15, are analyzed to address several questions. A first objective is to investigate the atmospheric thermodynamic and dynamic conditions associated with storm-generated ozone enhancements in the Amazon region. A second objective is to determine the magnitude and the frequency of ground-level ozone enhancements related to low-level jets. Ozone enhancements are analyzed as a function of wind shear, low-level jet maximum wind speed, and altitude of jet core. Strong and sudden increases in ozone levels are associated with simultaneous changes in variables such as horizontal wind speed, convective available potential energy, turbulence intensity and vertical velocity skewness. Rapid increases in vertical velocity skewness give support to the hypothesis that the ozone enhancements are directly related to downdrafts. Low-level jets associated with advancing density currents are often present during and after storm downdrafts that transport ozone-enriched air from aloft to the surface.
On river-floodplain interaction and hydrograph skewness
NASA Astrophysics Data System (ADS)
Fleischmann, Ayan S.; Paiva, Rodrigo C. D.; Collischonn, Walter; Sorribas, Mino V.; Pontes, Paulo R. M.
2016-10-01
Understanding hydrological processes occurring within a basin by looking at its outlet hydrograph can improve and foster comprehension of ungauged regions. In this context, we present an extensive examination of the roles that floodplains play on driving hydrograph shapes. Observations of many river hydrographs with large floodplain influence are carried out and indicate that a negative skewness of the hydrographs is present among many of them. Through a series of numerical experiments and analytical reasoning, we show how the relationship between flood wave celerity and discharge in such systems is responsible for determining the hydrograph shapes. The more water inundates the floodplains upstream of the observed point, the more negatively skewed is the observed hydrograph. A case study is performed in the Amazon River Basin, where major rivers with large floodplain attenuation (e.g., Purus, Madeira, and Juruá) are identified with higher negative skewness in the respective hydrographs. Finally, different wetland types could be distinguished by using this feature, e.g., wetlands maintained by endogenous processes, from wetlands governed by overbank flow (along river floodplains). A metric of hydrograph skewness was developed to quantify this effect, based on the time derivative of discharge. Together with the skewness concept, it may be used in other studies concerning the relevance of floodplain attenuation in large, ungauged rivers, where remote sensing data (e.g., satellite altimetry) can be very useful.
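The paper's skewness metric is based on the time derivative of discharge; a crude stand-in (our simplification for illustration, not the authors' exact formula) is the sample skewness of dQ/dt, which is positive for a fast-rise/slow-recession flood wave and negative for the mirrored, floodplain-attenuated shape. The synthetic hydrographs below are assumptions.

```python
import numpy as np
from scipy import stats

t = np.linspace(0.0, 1.0, 500)

# Synthetic hydrographs (gamma-type pulses, illustrative only): a fast-rise/
# slow-recession flood wave, and its mirror image resembling the slow-rise/
# sharp-recession shape of a floodplain-attenuated river.
fast_rise = t**2 * np.exp(-8.0 * t)
attenuated = fast_rise[::-1].copy()

def dqdt_skewness(q):
    """Sample skewness of the time derivative of discharge: a simple proxy
    for the hydrograph-skewness metric discussed above."""
    return stats.skew(np.diff(q))
```

The short, steep rising limb contributes a few large positive derivatives and the long recession many small negative ones, so the derivative distribution is positively skewed; the mirrored wave flips the sign, which is the signature associated here with floodplain attenuation.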
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2017-11-01
Merging radar and rain gauge rainfall data is a technique used to improve the quality of spatial rainfall estimates, and in particular Kriging with External Drift (KED) is a very effective radar-rain gauge merging technique. However, kriging interpolations assume Gaussianity of the process. Rainfall has a strongly skewed, positive probability distribution, characterized by a discontinuity due to intermittency. In KED, rainfall residuals are used, implicitly calculated as the difference between rain gauge data and a linear function of the radar estimates. Rainfall residuals are non-Gaussian as well. The aim of this work is to evaluate the impact of applying KED to non-Gaussian rainfall residuals, and to assess the best techniques to improve Gaussianity. We compare Box-Cox transformations with λ parameters equal to 0.5, 0.25, and 0.1, Box-Cox with time-variant optimization of λ, normal score transformation, and a singularity analysis technique. The results suggest that Box-Cox with λ = 0.1 and the singularity analysis are not suitable for KED. Normal score transformation and Box-Cox with optimized λ, or λ = 0.25, produce satisfactory results in terms of Gaussianity of the residuals, probability distribution of the merged rainfall products, and rainfall estimate quality, when validated through cross-validation. However, it is observed that Box-Cox transformations are strongly dependent on the temporal and spatial variability of rainfall and on the units used for the rainfall intensity. Overall, applying transformations results in a quantitative improvement of the rainfall estimates only if the correct transformations for the specific data set are used.
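The transformations compared in this study are straightforward to sketch on synthetic, gamma-distributed "rainfall" (the parameters below are illustrative assumptions, not values from the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Skewed, strictly positive synthetic intensities; the small shift avoids
# numerical underflow (real intermittency would be handled separately).
rain = rng.gamma(shape=0.5, scale=4.0, size=1000) + 0.01

def boxcox_fixed(x, lam):
    """Fixed-lambda Box-Cox, as compared in the study (λ = 0.5, 0.25, 0.1)."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

bc_quarter = boxcox_fixed(rain, 0.25)   # fixed λ
bc_opt, lam_opt = stats.boxcox(rain)    # λ fitted by maximum likelihood

def normal_score(x):
    """Normal score transform: map empirical quantiles onto N(0, 1)."""
    ranks = stats.rankdata(x)           # 1..n (average ranks on ties)
    return stats.norm.ppf(ranks / (len(x) + 1.0))

ns = normal_score(rain)
```

The normal score transform yields an essentially symmetric sample by construction, while the Box-Cox variants reduce, but do not eliminate, the skewness; the λ dependence on units and variability noted in the abstract is exactly why the fitted `lam_opt` differs between data sets.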
Flow-covariate prediction of stream pesticide concentrations.
Mosquin, Paul L; Aldworth, Jeremy; Chen, Wenlin
2018-01-01
Potential peak functions (e.g., maximum rolling averages over a given duration) of annual pesticide concentrations in the aquatic environment are important exposure parameters (or target quantities) for ecological risk assessments. These target quantities require accurate concentration estimates on nonsampled days in a monitoring program. We examined stream flow as a covariate via universal kriging to improve predictions of maximum m-day (m = 1, 7, 14, 30, 60) rolling averages and the 95th percentiles of atrazine concentration in streams where data were collected every 7 or 14 d. The universal kriging predictions were evaluated against the target quantities calculated directly from the daily (or near daily) measured atrazine concentration at 32 sites (89 site-yr) as part of the Atrazine Ecological Monitoring Program in the US corn belt region (2008-2013) and 4 sites (62 site-yr) in Ohio by the National Center for Water Quality Research (1993-2008). Because stream flow data are strongly skewed to the right, 3 transformations of the flow covariate were considered: log transformation, short-term flow anomaly, and normalized Box-Cox transformation. The normalized Box-Cox transformation resulted in predictions of the target quantities that were comparable to those obtained from log-linear interpolation (i.e., linear interpolation on the log scale) for 7-d sampling. However, the predictions appeared to be negatively affected by variability in regression coefficient estimates across different sample realizations of the concentration time series. Therefore, revised models incorporating seasonal covariates and partially or fully constrained regression parameters were investigated, and they were found to provide much improved predictions in comparison with those from log-linear interpolation for all rolling average measures. Environ Toxicol Chem 2018;37:260-273. © 2017 SETAC. © 2017 SETAC.
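The target quantities themselves (maximum m-day rolling averages) are simple to compute when near-daily data exist; a minimal numpy sketch with made-up values, not AEMP data:

```python
import numpy as np

def max_rolling_average(daily, m):
    """Maximum m-day rolling average of a daily concentration series."""
    kernel = np.ones(m) / m
    return np.convolve(daily, kernel, mode="valid").max()

conc = np.array([1.0, 2.0, 10.0, 8.0, 3.0, 1.0, 0.5])
peak_1day = max_rolling_average(conc, 1)   # peak daily value: 10.0
peak_3day = max_rolling_average(conc, 3)   # (10 + 8 + 3) / 3 = 7.0
```

The study's problem is the converse: with samples only every 7 or 14 days, the days between observations must first be filled in (by log-linear interpolation or flow-covariate kriging) before such rolling maxima can be evaluated.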
Ben Bouallègue, Fayçal; Vauchot, Fabien; Mariano-Goulart, Denis; Payoux, Pierre
2018-02-09
We evaluated the performance of amyloid PET textural and shape features in discriminating normal and Alzheimer's disease (AD) subjects, and in predicting conversion to AD in subjects with mild cognitive impairment (MCI) or significant memory concern (SMC). Subjects from the Alzheimer's Disease Neuroimaging Initiative with available baseline 18 F-florbetapir and T1-MRI scans were included. The cross-sectional cohort consisted of 181 controls and 148 AD subjects. The longitudinal cohort consisted of 431 SMC/MCI subjects, 85 of whom converted to AD during follow-up. PET images were normalized to MNI space and post-processed using in-house software. Relative retention indices (SUVr) were computed with respect to pontine, cerebellar, and composite reference regions. Several textural and shape features were extracted then combined using a support vector machine (SVM) to build a predictive model of AD conversion. Diagnostic and prognostic performance was evaluated using ROC analysis and survival analysis with the Cox proportional hazard model. The three SUVr and all the tested features effectively discriminated AD subjects in cross-sectional analysis (all p < 0.001). In longitudinal analysis, the variables with the highest prognostic value were composite SUVr (AUC 0.86; accuracy 81%), skewness (0.87; 83%), local minima (0.85; 79%), Geary's index (0.86; 81%), gradient norm maximal argument (0.83; 82%), and the SVM model (0.91; 86%). The adjusted hazard ratio for AD conversion was 5.5 for the SVM model, compared with 4.0, 2.6, and 3.8 for cerebellar, pontine and composite SUVr (all p < 0.001), indicating that appropriate amyloid textural and shape features predict conversion to AD with at least as good accuracy as classical SUVr.
An estimate of field size distributions for selected sites in the major grain producing countries
NASA Technical Reports Server (NTRS)
Podwysocki, M. H.
1977-01-01
The field size distributions for the major grain producing countries of the World were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was computer analyzed for their frequency distributions. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching that of either a Poisson or log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution which has moments readily interpretable and useful for estimating the total population of fields. Resultant predictors of the field size estimates are discussed.
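The normalization step described above can be sketched with synthetic data; the log-normal parameters are assumptions for illustration, not the measured LANDSAT field sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical field areas (hectares): log-normal, i.e. highly peaked and
# skewed toward small values, as reported for the image samples.
areas = rng.lognormal(mean=3.0, sigma=1.0, size=500)

log_areas = np.log(areas)                  # the log transformation used above

mu, sigma = log_areas.mean(), log_areas.std()
mean_area_est = np.exp(mu + sigma**2 / 2)  # log-normal population mean formula
```

The raw areas are strongly right-skewed while their logs are near-symmetric, so the two log-scale moments summarize the whole distribution and feed directly into population estimates such as the mean area above.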
Oldham, Athenia L.; Miner, Cathrine A.; Wang, Hong-Cheng; Webb, Carol F.
2011-01-01
Previous data suggested that constitutive expression of the transcription factor Bright (B cell regulator of immunoglobulin heavy chain transcription), normally tightly regulated during B cell differentiation, was associated with autoantibody production. Here we show that constitutive Bright expression results in skewing of mature B lineage subpopulations toward marginal zone cells at the expense of the follicular subpopulation. C57Bl/6 transgenic mice constitutively expressing Bright in B lineage cells generated autoantibodies that were not the result of global increases in immunoglobulin or of breaches in key tolerance checkpoints typically defective in other autoimmune mouse models. Rather, autoimmunity correlated with increased numbers of marginal zone B cells and alterations in the phenotype and gene expression profiles of lymphocytes within the follicular B cell compartment. These data suggest a novel role for Bright in the normal development of mature B cell subsets and in autoantibody production. PMID:21963220
A concept for canceling the leakage field inside the stored beam chamber of a septum magnet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abliz, M.; Jaski, M.; Xiao, A.
Here, the Advanced Photon Source is in the process of upgrading its storage ring from a double-bend to a multi-bend lattice as part of the APS Upgrade Project (APS-U). A swap-out injection scheme is planned for the APS-U to keep a constant beam current and to enable a small dynamic aperture. A novel concept that cancels out the effect of the leakage field inside the stored beam chamber was introduced in the design of the septum magnet. As a result, the horizontal deflecting angle of the stored beam was reduced to below 1 µrad with a 2 mm septum thickness and 1.06 T normal injection field. The concept helped to minimize the integrated skew quadrupole field and normal sextupole fields inside the stored beam chamber as well.
Robust Mediation Analysis Based on Median Regression
Yuan, Ying; MacKinnon, David P.
2014-01-01
Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
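A minimal sketch of the idea in this abstract (not the authors' implementation): estimate the a- and b-paths of the X → M → Y model by least-absolute-deviation (median) regression, here via a simple iteratively reweighted least squares solver. The data, coefficients, and heavy-tailed error distributions are all assumptions.

```python
import numpy as np

def median_reg(X, y, iters=60, eps=1e-8):
    """Least-absolute-deviation (median) regression fit by iteratively
    reweighted least squares (weights ~ 1/|residual|)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]   # OLS warm start
    for _ in range(iters):
        w = np.sqrt(1.0 / np.maximum(np.abs(y - X1 @ beta), eps))
        beta = np.linalg.lstsq(w[:, None] * X1, w * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.standard_t(df=2, size=n)            # mediator, heavy-tailed noise
y = 0.4 * m + 0.2 * x + rng.standard_t(df=2, size=n)  # outcome

a = median_reg(x, m)[1]                         # X -> M path
b = median_reg(np.column_stack([m, x]), y)[1]   # M -> Y path, controlling for X
mediated = a * b                                # indirect (mediated) effect
```

With t(2) errors, ordinary least squares estimates are noisy, whereas the median-regression paths recover the generating coefficients; in practice one would bootstrap `mediated` for inference, as is standard in mediation analysis.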
A Photo Elicitation Study on Chronically Ill Adolescents’ Identity Constructions During Transition
Hanghøj, Signe; Boisen, Kirsten A.; Schmiegelow, Kjeld; Hølge-Hazelton, Bibi
2016-01-01
Adolescence is an important phase of life with increasing independence and identity development, and a vulnerable period for chronically ill adolescents, with a high occurrence of insufficient treatment adherence. We conducted four photo elicitation focus group interviews with 14 adolescents (12-20 years) with juvenile idiopathic arthritis to investigate identity constructions during transition. Using a discourse analysis approach, six identity types were identified, distributed across normal and marginal identities, which were lived either at home (home arena) or outside the home with peers (out arena). Most participants positioned themselves as normal in the out arena and as ill in the home arena. Few participants positioned themselves as ill in an out arena, and they described how peers perceived this as marginal and skewed behavior. This study contributes to a better understanding of why it can be extremely difficult to live with a chronic illness during adolescence. PMID:28462329
NASA Astrophysics Data System (ADS)
Mohamed, Abdel-Baset A.
2017-10-01
An analytical solution of the master equation that describes a superconducting cavity containing two coupled superconducting charge qubits is obtained. Quantum-mechanical correlations based on Wigner-Yanase skew information, such as local quantum uncertainty and uncertainty-induced quantum non-locality, are compared to the concurrence under the effects of phase decoherence. Local quantum uncertainty exhibits sudden changes during its time evolution and revival process. Sudden death and sudden birth occur only for entanglement, depending on the initial state of the two coupled charge qubits, while the correlations of skew information do not vanish. The quantum correlations of skew information are found to be sensitive to the dephasing rate, the photon number in the cavity, the interaction strength between the two qubits, and the qubit distribution angle of the initial state. With a proper initial state, the stationary correlation of the skew information keeps a non-zero value for a long time interval under phase decoherence, which may be useful in quantum information and computation processes.
On the Origin of Protein Superfamilies and Superfolds
NASA Astrophysics Data System (ADS)
Magner, Abram; Szpankowski, Wojciech; Kihara, Daisuke
2015-02-01
Distributions of protein families and folds in genomes are highly skewed, having a small number of prevalent superfamilies/superfolds and a large number of families/folds of a small size. Why are the distributions of protein families and folds skewed? Why are there only a limited number of protein families? Here, we employ an information theoretic approach to investigate the protein sequence-structure relationship that leads to the skewed distributions. We treat protein sequences and folds as an information theoretic channel and compute the most efficient distribution of sequences that codes for all protein folds. The identified distributions of sequences and folds are found to follow a power law, consistent with those observed for proteins in nature. Importantly, the skewed distributions of sequences and folds are suggested to have different origins: the skewed distribution of sequences is due to evolutionary pressure to achieve efficient coding of necessary folds, whereas that of folds is based on the thermodynamic stability of folds. The current study provides a new information theoretic framework for proteins that could be widely applied to understanding protein sequences, structures, functions, and interactions.
A strategy to load balancing for non-connectivity MapReduce job
NASA Astrophysics Data System (ADS)
Zhou, Huaping; Liu, Guangzong; Gui, Haixia
2017-09-01
MapReduce has been widely used on large-scale and complex datasets as a distributed programming model. The original hash partitioning function in MapReduce often results in data skew when the data distribution is uneven. To address this imbalance in data partitioning, we propose a strategy that changes the remaining partitioning index when data are skewed. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that the Partitioner can redirect the partitions that would cause data skew to reducers with less load in the next partitioning step, eventually balancing the load across nodes. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets; the results show that our strategy solves the data skew problem with better stability and efficiency than the hash and sampling methods for non-connectivity MapReduce tasks.
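The re-partitioning idea can be illustrated with a toy greedy scheduler: collect map-side key counts, then place heavy keys on the least-loaded reducer instead of hashing blindly. This is only a sketch of the load-balancing principle, not the authors' JobTracker implementation; all names are ours:

```python
from collections import Counter

def balanced_partition(keys, n_reducers):
    """Assign keys to reducers using map-side statistics: heaviest keys are
    placed first on the currently least-loaded reducer (greedy LPT rule)."""
    counts = Counter(keys)          # map-side key frequency statistics
    loads = [0] * n_reducers
    assignment = {}
    for key, cnt in counts.most_common():
        target = min(range(n_reducers), key=loads.__getitem__)
        assignment[key] = target
        loads[target] += cnt
    return assignment, loads

# One hot key ('a') plus a few light ones, split over two reducers
keys = ["a"] * 50 + ["b"] * 30 + ["c"] * 10 + ["d"] * 10
assignment, loads = balanced_partition(keys, n_reducers=2)
print(loads)  # both reducers end up with an equal share
```

With plain hashing, the hot key could land on the same reducer as other keys and create a straggler; the greedy placement keeps the reducer loads equal here.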
Statistical analysis of content of Cs-137 in soils in Bansko-Razlog region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobilarov, R. G., E-mail: rkobi@tu-sofia.bg
Statistical analysis of the data set consisting of the activity concentrations of 137Cs in soils in the Bansko-Razlog region is carried out in order to establish the dependence of the deposition and migration of 137Cs on the soil type. Descriptive statistics and a test of normality show that the data set does not have a normal distribution; a positively skewed distribution and possible outlying values of the 137Cs activity in soils were observed. After reduction of the effects of outliers, the data set is divided into two parts depending on the soil type. Tests of normality of the two new data sets show that they have normal distributions. The ordinary kriging technique is used to characterize the spatial distribution of the 137Cs activity over an area covering 40 km² (the whole Razlog valley). The result (a map of the spatial distribution of the 137Cs activity concentration) can be used as a reference point for future studies on the assessment of radiological risk to the population and of soil erosion in the study area.
Asymmetry of wind waves studied in a laboratory tank
NASA Astrophysics Data System (ADS)
Leykin, I. A.; Donelan, M. A.; Mellen, R. H.; McLaughlin, D. J.
1995-03-01
The asymmetry of wind waves was studied in a laboratory tank under varied wind and fetch conditions using both bispectral analysis of wave records and third-order statistics of the surface elevation. It is found that the skewness S (the normalized third-order moment of surface elevation, describing the horizontal asymmetry of the waves) varies only slightly with the inverse wave age u*/Cm (where u* is the air friction velocity and Cm is the phase speed of the dominant waves). At the same time, the asymmetry A, which is determined from the Hilbert transform of the wave record and characterizes the skewness of the rate of change of surface elevation, increases consistently in magnitude with the ratio u*/Cm. This suggests that the nonlinear distortion of the wave profile is determined by the degree of wind forcing and is a sensitive indicator of wind-wave interaction processes. It is shown that the asymmetric profile of waves can be described within the framework of the nonlinear nonspectral concept (Plate, 1972; Lake and Yuen, 1978), according to which the wind-wave field can be represented as a coherent bound-wave system consisting mainly of the dominant component ω0 and its harmonics propagating with the same speed C0, as observed by Ramamonjiarisoa and Coantic (1976). The phase shift between the ω0 harmonics is found and shown to increase with the asymmetry of the waves.
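The two statistics in this abstract, skewness S of the elevation record and asymmetry A from its Hilbert transform, can be sketched with SciPy. The Stokes-like test wave (a fundamental plus a phase-locked second harmonic) is our illustrative input, not tank data:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import skew

def wave_asymmetry(eta):
    """Skewness S (horizontal asymmetry) and asymmetry A (skewness of the
    Hilbert transform, i.e. front-back asymmetry) of a surface record."""
    eta = eta - eta.mean()
    S = skew(eta)
    A = skew(np.imag(hilbert(eta)))  # Hilbert transform of the record
    return S, A

# Stokes-like wave: sharp crests and flat troughs, but fore-aft symmetric
t = np.linspace(0.0, 16 * 2 * np.pi, 16384, endpoint=False)
eta = np.cos(t) + 0.3 * np.cos(2 * t)
S, A = wave_asymmetry(eta)
print(S > 0.3, abs(A) < 0.05)  # positively skewed, negligible asymmetry
```

A wind-forced wave with a tilted profile would instead show |A| growing while S stays modest, which is exactly the separation the abstract reports.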
Di-Battista, Adriana; Meloni, Vera Ayres; da Silva, Magnus Dias; Moysés-Oliveira, Mariana; Melaragno, Maria Isabel
2016-12-01
In females carrying structural rearrangements of an X-chromosome, cells with the best dosage balance are preferentially selected, frequently resulting in a skewed inactivation pattern and amelioration of the phenotype. The Xp11.23-p11.22 region is involved in a recently described microduplication syndrome associated with severe clinical consequences in males and females, causing intellectual disability, behavior problems, epilepsy with electroencephalogram anomalies, minor facial anomalies, and early onset of puberty. Female carriers usually present an unusual X-chromosome inactivation pattern in favor of the aberrant chromosome, resulting in functional disomy of the duplicated segment. Here, we describe a girl carrying a de novo ∼9.7 Mb Xp11.3-p11.22 duplication of paternal origin and a skewed X-chromosome inactivation pattern of the normal X-chromosome. We reviewed other cases previously reported and determined the minimal critical region possibly responsible for this unusual inactivation pattern. The critical region encompasses 36 RefSeq genes, including at least 10 oncogenes and/or genes related to cell cycle control. We discuss the molecular mechanisms that underlie the positive selection of the cells with the active duplicated chromosome. © 2016 Wiley Periodicals, Inc.
Combining Deterministic structures and stochastic heterogeneity for transport modeling
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg
2017-04-01
Contaminant transport in highly heterogeneous aquifers is extremely challenging and a subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed shapes. Predicting such transport behavior using the classical advection-dispersion equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network, in contrast to the information available for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information with a stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity, instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests and can differ by orders of magnitude. Sub-scale heterogeneity is introduced within every block and can be modeled as bimodal or log-normally distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful application of the model was achieved for the well-known MADE site.
NASA Astrophysics Data System (ADS)
Priya, Mallika; Rao, Bola Sadashiva Satish; Chandra, Subhash; Ray, Satadru; Mathew, Stanley; Datta, Anirbit; Nayak, Subramanya G.; Mahato, Krishna Kishore
2016-02-01
In spite of many efforts toward the early detection of breast cancer, there is still a lack of technology for immediate implementation. In the present study, the potential of photoacoustic spectroscopy in discriminating breast cancer from normal was evaluated using blood serum samples, seeking early detection. Three photoacoustic spectra in the time domain were recorded from each of 20 normal and 20 malignant samples at 281 nm pulsed laser excitation, and a total of 120 spectra were generated. The time-domain spectra were then Fast Fourier Transformed into the frequency domain, and the 116.5625-206.875 kHz region was selected for further analysis using a combined approach of wavelets, PCA and logistic regression. Initially, wavelet analysis was performed on the FFT data and seven features (mean, median, area under the curve, variance, standard deviation, skewness and kurtosis) were extracted from each. PCA was then performed on the feature matrix (7x120), and malignant samples were discriminated from normal by plotting a decision boundary using logistic regression analysis. The unsupervised mode of classification used in the present study yielded specificity and sensitivity values of 100% each, with an ROC-AUC value of 1. The results obtained clearly demonstrate the capability of photoacoustic spectroscopy in discriminating cancer from normal, suggesting possible clinical implications.
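The seven per-band features named above are all standard one-liners. A sketch of the feature-extraction step only (the four-point demo array stands in for an FFT band; the wavelet/PCA/logistic-regression pipeline itself is not reproduced here):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import skew, kurtosis

def spectral_features(spectrum, freqs):
    """Mean, median, area under the curve, variance, standard deviation,
    skewness and kurtosis of one spectral band."""
    return {
        "mean": float(np.mean(spectrum)),
        "median": float(np.median(spectrum)),
        "auc": float(trapezoid(spectrum, freqs)),
        "variance": float(np.var(spectrum)),
        "std": float(np.std(spectrum)),
        "skewness": float(skew(spectrum)),
        "kurtosis": float(kurtosis(spectrum)),
    }

feats = spectral_features(np.array([1.0, 2.0, 3.0, 4.0]),
                          np.array([0.0, 1.0, 2.0, 3.0]))
print(feats["auc"])  # 7.5 by the trapezoidal rule
```

Stacking one such dictionary per spectrum yields exactly the 7x120 feature matrix the abstract feeds into PCA.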
Prototyping and Characterization of an Adjustable Skew Angle Single Gimbal Control Moment Gyroscope
2015-03-01
performance, and an analysis of the test results is provided. In addition to the standard battery of CMG performance tests that were planned, a...objectives for this new CMG is to provide comparable performance to the Andrews CMGs, the values in Table 1 will be used for output torque comparison...essentially fixed at 53.4°. This specific skew angle value is not the problem, as this is one commonly used CMG skew angle for satellite systems. The real
Near-wall similarity in a pressure-driven three-dimensional turbulent boundary layer
NASA Technical Reports Server (NTRS)
Pierce, F. J.; Mcallister, J. E.
1980-01-01
Measurements of the mean velocity, wall pressure, and wall shear stress fields were made in a three-dimensional pressure-driven turbulent boundary layer created by a cylinder with a trailing edge placed normal to a flat-plate floor. The direct-force wall shear stress measurements were made with a floating-element direct-force-sensing shear meter that responded to both the magnitude and direction of the local wall shear stress. The ability of 10 near-wall similarity models to describe the near-wall velocity field was assessed for the measured flow under a wide range of skewing conditions and a variety of pressure gradients and wall shear vector orientations.
Three-dimensional zonal grids about arbitrary shapes by Poisson's equation
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1988-01-01
A method for generating 3-D finite-difference grids about or within arbitrary shapes is presented. The 3-D Poisson equations are solved numerically, with values for the inhomogeneous terms found automatically by the algorithm. Those inhomogeneous terms have the effect near boundaries of reducing cell skewness and imposing arbitrary cell height. The method allows the region of interest to be divided into zones (blocks), making it applicable to almost any physical domain. A FORTRAN program called 3DGRAPE has been written to implement the algorithm. Lastly, a method for redistributing grid points along lines normal to boundaries is described.
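The special case of this generator with zero inhomogeneous terms (a Laplace grid) is compact enough to sketch: solving Laplace's equation for the physical coordinates with fixed boundaries pulls skewed interior points back toward a smooth grid. A 2-D toy version with illustrative sizes (3DGRAPE itself solves the 3-D Poisson equations with automatically chosen source terms):

```python
import numpy as np

def laplace_grid(x, y, iters=500):
    """Jacobi iteration of Laplace's equation for the grid coordinates;
    boundary points are held fixed, interior points are smoothed."""
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

n = 5
xi = np.linspace(0.0, 1.0, n)
x, y = np.meshgrid(xi, xi, indexing="ij")
x_ref, y_ref = x.copy(), y.copy()

# Skew the interior points; the boundary stays on the unit square
rng = np.random.default_rng(3)
x[1:-1, 1:-1] += 0.1 * rng.standard_normal((n - 2, n - 2))
y[1:-1, 1:-1] += 0.1 * rng.standard_normal((n - 2, n - 2))

x, y = laplace_grid(x, y)
print(np.allclose(x, x_ref, atol=1e-6))  # True: the uniform grid is the harmonic solution
```

The Poisson source terms the abstract describes add exactly the boundary clustering and skewness control that this bare Laplace version lacks.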
Distribution characteristics of stock market liquidity
NASA Astrophysics Data System (ADS)
Luo, Jiawen; Chen, Langnan; Liu, Hao
2013-12-01
We examine the distribution characteristics of stock market liquidity by employing the generalized additive models for location, scale and shape (GAMLSS) model and three-minute frequency data from Chinese stock markets. We find that the BCPE distribution within the GAMLSS framework fits the distributions of stock market liquidity well according to the diagnostic tests. We also find that the stock market index exhibits a significant impact on the distributions of stock market liquidity. Stock market liquidity usually exhibits positive skewness, but approaches a normal distribution at low levels of the stock market index and a high-peak, fat-tailed shape at high levels of the index.
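The BCPE/GAMLSS machinery used here is R-specific, but the headline observation, significant positive skewness in a liquidity series, can be checked with a plain moment-based test. A sketch on a synthetic right-skewed series (the lognormal stand-in for liquidity is our assumption):

```python
import numpy as np
from scipy.stats import skewtest

rng = np.random.default_rng(4)
# Lognormal stand-in for a liquidity series (right-skewed by construction)
liquidity = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

# D'Agostino's skewness test: is the sample skewness consistent with normality?
stat, pvalue = skewtest(liquidity)
print(stat > 0 and pvalue < 1e-6)  # significant positive skewness
```

A rejection with a positive statistic is what motivates reaching for a skewed family such as BCPE in the first place.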
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R² of 51 percent and a MSE of 0.32 log units.
The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area.Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
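The core LP3 fitting step described above (before the regional weighting, EMA, and multiple Grubbs-Beck refinements) can be sketched with SciPy: fit a Pearson Type 3 distribution to the base-10 logs of the annual peaks by the method of moments and read off a quantile. The 40-year lognormal record below is simulated, not USGS data:

```python
import numpy as np
from scipy.stats import pearson3, skew

rng = np.random.default_rng(7)
# Simulated 40-year annual peak-discharge record (illustrative only)
peaks = rng.lognormal(mean=5.0, sigma=0.9, size=40)

# Method-of-moments LP3 fit: mean, standard deviation and skew of the logs
logq = np.log10(peaks)
m, s, g = logq.mean(), logq.std(ddof=1), skew(logq, bias=False)

# 1%-annual-exceedance-probability ("100-year") flood estimate;
# scipy's pearson3 takes the log-space skew as its shape parameter
q100 = 10 ** pearson3.ppf(0.99, g, loc=m, scale=s)
q_median = 10 ** pearson3.ppf(0.50, g, loc=m, scale=s)
print(q100 > q_median)  # the 1% AEP flood exceeds the median annual flood
```

The report's point is precisely that with 40-year (or much shorter) records the sample skew g is unreliable, which is why a regional skew of zero with an estimated MSE is substituted for it.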
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently, a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Growth hormone receptor deficiency (Laron syndrome): clinical and genetic characteristics.
Guevara-Aguirre, J; Rosenbloom, A L; Vaccarello, M A; Fielder, P J; de la Vega, A; Diamond, F B; Rosenfeld, R G
1991-01-01
Approximately 60 cases of GHRD (Laron syndrome) were reported before 1990, and half of these were from Israel. We have described 47 additional patients from an inbred population of southern Ecuador and have emphasized certain clinical features, including: markedly advanced osseous maturation for height age; normal body proportions in childhood but child-like proportions in adults; much greater deviation of stature than head size, giving an appearance of large cranium and small facies; underweight in childhood despite the appearance of obesity, and true obesity in adulthood; blue sclerae; and limited elbow extension. The Ecuadorean patients differed markedly and most importantly from the other large concentration, in Israel, by being of normal or superior intelligence, suggesting a unique linkage in the Ecuadorean population. The Ecuadorean population also differed in that patients coming from Loja province had a markedly skewed sex ratio (19 females: 2 males), while those from El Oro province had a normal sex distribution (14 females: 12 males). The phenotypic similarity between the El Oro and Loja patients indicates that this abnormal sex distribution is not a direct result of GHRD.
DOT National Transportation Integrated Search
2013-07-01
Many highway bridges are skewed and their behavior and corresponding design analysis need to be furthered to fully accomplish design objectives. This project used physical-test and detailed finite element analysis to better understand the behavior of...
LOOKING WEST, BETWEEN READING DEPOT BRIDGE AND SKEW ARCH BRIDGE ...
LOOKING WEST, BETWEEN READING DEPOT BRIDGE AND SKEW ARCH BRIDGE (HAER No. PA-116). - Philadelphia & Reading Railroad, Reading Depot Bridge, North Sixth Street at Woodward Street, Reading, Berks County, PA
Zolal, Amir; Juratli, Tareq A; Linn, Jennifer; Podlesek, Dino; Sitoci Ficici, Kerim Hakan; Kitzler, Hagen H; Schackert, Gabriele; Sobottka, Stephan B; Rieger, Bernhard; Krex, Dietmar
2016-05-01
Objective: To determine the value of apparent diffusion coefficient (ADC) histogram parameters for the prediction of individual survival in patients undergoing surgery for recurrent glioblastoma (GBM) in a retrospective cohort study. Methods: Thirty-one patients who underwent surgery for a first recurrence of a known GBM between 2008 and 2012 were included. The following parameters were collected: age, sex, enhancing tumor size, mean ADC, median ADC, ADC skewness, ADC kurtosis and fifth percentile of the ADC histogram, initial progression-free survival (PFS), extent of second resection and further adjuvant treatment. The association of these parameters with survival and PFS after second surgery was analyzed using the log-rank test and Cox regression. Results: Using the log-rank test, ADC histogram skewness of the enhancing tumor was significantly associated with both survival (p = 0.001) and PFS after second surgery (p = 0.005). Further parameters associated with prolonged survival after second surgery were: gross total resection at second surgery (p = 0.026), tumor size (p = 0.040) and third surgery (p = 0.003). In the multivariate Cox analysis, ADC histogram skewness was shown to be an independent prognostic factor for survival after second surgery. Conclusion: ADC histogram skewness of the enhancing lesion, enhancing lesion size, third surgery, as well as gross total resection have been shown to be associated with survival following the second surgery. ADC histogram skewness was an independent prognostic factor for survival in the multivariate analysis.
NASA Astrophysics Data System (ADS)
Pipień, M.
2008-09-01
We present the results of an application of Bayesian inference in testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we built a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.
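Of the skewing mechanisms listed, the hidden truncation construction is the easiest to illustrate: it turns two symmetric normals into Azzalini's skew-normal. A sketch with an arbitrary shape parameter (α = 5 is our choice, unrelated to the paper's estimates):

```python
import numpy as np
from scipy.stats import skew

def skew_normal_sample(alpha, size, rng):
    """Hidden-truncation sampler for Azzalini's skew-normal:
    Z = delta*|U| + sqrt(1 - delta^2)*V with U, V iid N(0, 1)."""
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    u = np.abs(rng.standard_normal(size))      # the truncated (hidden) component
    v = rng.standard_normal(size)
    return delta * u + np.sqrt(1.0 - delta ** 2) * v

rng = np.random.default_rng(1)
z = skew_normal_sample(alpha=5.0, size=100_000, rng=rng)
print(skew(z) > 0.5)  # a positive shape parameter yields right-skewed draws
```

In the GARCH-in-Mean setting, the same mechanism is applied to the innovation distribution, so that conditional skewness becomes a free parameter the Bayesian comparison can weigh.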
Ibbotson, Paul
2013-01-01
We use the Google Ngram database, a corpus of 5,195,769 digitized books containing ~4% of all books ever published, to test three ideas that are hypothesized to account for linguistic generalizations: verbal semantics, pre-emption and skew. Using 828,813 tokens of un-forms as a test case for these mechanisms, we found that verbal semantics was a good predictor of the frequency of un-forms in the English language over the past 200 years, both in terms of how the frequency changed over time and their frequency rank. We did not find strong evidence for direct competition between un-forms and their top pre-emptors; however, the skew of the un-construction competitors was inversely correlated with the acceptability of the un-form. We suggest a cognitive explanation for this, namely, that the more skewed the set of relevant pre-emptors is, the more easily it is retrieved from memory. This suggests that it is not just the frequency of pre-emptive forms that must be taken into account when trying to explain usage patterns, but their skew as well. PMID:24399991
Cross-frame connection details for skewed steel bridges.
DOT National Transportation Integrated Search
2010-10-30
This report documents a research investigation on connection details and bracing layouts for stability : bracing of steel bridges with skewed supports. Cross-frames and diaphragms play an important role in stabilizing : steel girders, particularly du...
Migliaccio, Americo A; Della Santina, Charles C; Carey, John P; Minor, Lloyd B; Zee, David S
2006-08-01
We examined how the gain of the torsional vestibulo-ocular reflex (VOR) (defined as the instantaneous eye velocity divided by inverted head velocity) in normal humans is affected by eye position, target distance, and the plane of head rotation. In six normal subjects we measured three-dimensional (3D) eye and head rotation axes using scleral search coils, and 6D head position using a magnetic angular and linear position measurement device, during low-amplitude (approximately 20 degrees), high-velocity (approximately 200 degrees/s), high-acceleration (approximately 4000 degrees/s²) rapid head rotations or 'impulses.' Head impulses were imposed manually and delivered in five planes: yaw (horizontal canal plane), pitch, roll, the left anterior-right posterior canal plane (LARP), and the right anterior-left posterior canal plane (RALP). Subjects were instructed to fix on one of six targets at eye level. Targets were either straight ahead, 20 degrees left or 20 degrees right of midline, at a distance of 15 or 124 cm from the subject. Two subjects also looked at more eccentric targets, 30 degrees left or 30 degrees right of midline. We found that the vertical and horizontal VOR gains increased with the proximity of the target to the subject. Previous studies suggest that the torsional VOR gain should decrease with target proximity. We found, however, that the torsional VOR gain did not change for all planes of head rotation and for both target distances. We also found a dynamic misalignment of the vertical positions of the eyes during the torsional VOR, which was greatest during near viewing with symmetric convergence. This dynamic vertical skew during the torsional VOR arises, in part, because when the eyes are converged, the optical axes are not parallel to the naso-occipital axes around which the eyes are rotating.
In five of six subjects, the average skew ranged from 0.9 to 2.9 degrees and was reduced to <0.4 degrees by a 'torsional' quick phase (around the naso-occipital axis) occurring <110 ms after the onset of the impulse. We propose that the torsional quick-phase mechanism during the torsional VOR could serve at least three functions: (1) resetting the retinal meridians closer to their usual orientation in the head, (2) correcting for the 'skew' deviation created by misalignment between the axes around which the eyes are rotating and the line of sight, and (3) taking the eyes back toward Listing's plane.
Skew chicane based betatron eigenmode exchange module
Douglas, David
2010-12-28
A skewed chicane eigenmode exchange module (SCEEM) that combines in a single beamline segment the separate functionalities of a skew quad eigenmode exchange module and a magnetic chicane. This module allows the exchange of independent betatron eigenmodes, alters electron beam orbit geometry, and provides longitudinal parameter control with dispersion management in a single beamline segment with stable betatron behavior. It thus reduces the spatial requirements for multiple beam dynamic functions, reduces required component counts and thus reduces costs, and allows the use of more compact accelerator configurations than prior art design methods.
GATA2 null mutation associated with incomplete penetrance in a family with Emberger syndrome.
Brambila-Tapia, Aniel Jessica Leticia; García-Ortiz, José Elías; Brouillard, Pascal; Nguyen, Ha-Long; Vikkula, Miikka; Ríos-González, Blanca Estela; Sandoval-Muñiz, Roberto de Jesús; Sandoval-Talamantes, Ana Karen; Bobadilla-Morales, Lucina; Corona-Rivera, Jorge Román; Arnaud-Lopez, Lisette
2017-09-01
GATA2 mutations are associated with several conditions, including Emberger syndrome, which is the association of primary lymphedema with hematological anomalies and an increased risk for myelodysplasia and leukemia. Our aim is to describe a family with Emberger syndrome and incomplete penetrance. DNA sequencing of the GATA2 gene was performed in the parents and offspring (five individuals in total). The family included five carriers of a GATA2 null mutation (c.130G>T, p.Glu44*); three of them were affected (two of whom are deceased) while two remained unaffected at the ages of 40 and 13 years. The three affected siblings (two boys and one girl) presented with lymphedema of the lower limbs, recurrent warts, epistaxis and recurrent infections. Two died due to hematological abnormalities (AML and pancytopenia). In contrast, the two other family members who carry the same mutation (the mother and one brother) have not presented any symptoms and their blood tests remain normal. The incomplete penetrance may indicate that GATA2 haploinsufficiency is not enough to produce the phenotype of Emberger syndrome. It could be useful to perform whole exome or genome sequencing in cases where incomplete penetrance or highly variable expressivity is described, in order to identify specific gene interactions that drastically modify the phenotype. In addition, skewed gene expression by an epigenetic mechanism of gene regulation should also be considered.
Biopsy variability of lymphocytic infiltration in breast cancer subtypes and the ImmunoSkew score
NASA Astrophysics Data System (ADS)
Khan, Adnan Mujahid; Yuan, Yinyin
2016-11-01
The number of tumour biopsies required for a good representation of tumours has been controversial. An important factor to consider is intra-tumour heterogeneity, which can vary among cancer types and subtypes. Immune cells in particular often display complex infiltrative patterns, however, there is a lack of quantitative understanding of the spatial heterogeneity of immune cells and how this fundamental biological nature of human tumours influences biopsy variability and treatment resistance. We systematically investigate biopsy variability for the lymphocytic infiltrate in 998 breast tumours using a novel virtual biopsy method. Across all breast cancers, we observe a nonlinear increase in concordance between the biopsy and whole-tumour score of lymphocytic infiltrate with increasing number of biopsies, yet little improvement is gained with more than four biopsies. Interestingly, biopsy variability of lymphocytic infiltrate differs considerably among breast cancer subtypes, with the human epidermal growth factor receptor 2-positive (HER2+) subtype having the highest variability. We subsequently identify a quantitative measure of spatial variability that predicts disease-specific survival in HER2+ subtype independent of standard clinical variables (node status, tumour size and grade). Our study demonstrates how systematic methods provide new insights that can influence future study design based on a quantitative knowledge of tumour heterogeneity.
Heteroscedastic Latent Trait Models for Dichotomous Data.
Molenaar, Dylan
2015-09-01
Effort has been devoted to accounting for heteroscedasticity with respect to observed or latent moderator variables in item or test scores. For instance, in the multi-group generalized linear latent trait model, it can be tested whether the observed (polychoric) covariance matrix differs across the levels of an observed moderator variable. In the case that heteroscedasticity arises across the latent trait itself, existing models commonly distinguish between heteroscedastic residuals and a skewed trait distribution. These models have valuable applications in intelligence, personality and psychopathology research. However, existing approaches are limited to continuous and polytomous data, while dichotomous data are common in intelligence and psychopathology research. Therefore, in the present paper, a heteroscedastic latent trait model is presented for dichotomous data. The model is studied in a simulation study and applied to data pertaining to alcohol use and cognitive ability.
Kepler Observations of Rapid Optical Variability in the BL Lac Object W2R1926+42
NASA Technical Reports Server (NTRS)
Edelson, R.; Mushotzky, R.; Vaughn, S.; Scargle, J.; Gandhi, P.; Malkan, M.; Baumgartner, W.
2013-01-01
We present the first Kepler monitoring of a strongly variable BL Lac, W2R1926+42. The light curve covers 181 days with approx. 0.2% errors, 30 minute sampling and >90% duty cycle, showing numerous ΔI/I > 25% flares over timescales as short as a day. The flux distribution is highly skewed and non-Gaussian. The variability shows a strong rms-flux correlation with the clearest evidence to date for non-linearity in this relation. We introduce a method to measure periodograms from the discrete autocorrelation function, an approach that may be well-suited to a wide range of Kepler data. The periodogram is not consistent with a simple power-law, but shows a flattening at frequencies below 7×10⁻⁵ Hz. Simple models of the power spectrum, such as a broken power law, do not produce acceptable fits, indicating that the Kepler blazar light curve requires more sophisticated mathematical and physical descriptions than currently in use.
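The ACF-based periodogram idea can be sketched for an evenly sampled series via the Wiener-Khinchin theorem: estimate the autocorrelation function, then Fourier-transform it. The paper's method handles Kepler's gaps through the discrete correlation function; this even-sampling toy, with an assumed test sinusoid, only shows the principle:

```python
import numpy as np

def periodogram_from_acf(x, dt=1.0):
    """Power spectrum as the Fourier transform of the (biased) sample ACF."""
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / n  # one-sided ACF estimate
    power = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, power

# A sinusoid at 0.05 cycles/sample buried in noise
rng = np.random.default_rng(2)
t = np.arange(2048)
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(t.size)

freqs, power = periodogram_from_acf(x)
peak = freqs[np.argmax(power)]
print(abs(peak - 0.05) < 0.002)  # the spectral peak sits at the input frequency
```

Going through the ACF rather than the raw time series is what lets the approach tolerate the irregular gaps in real Kepler light curves.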
Chen, Xiaojian; Oshima, Kiyoko; Schott, Diane; Wu, Hui; Hall, William; Song, Yingqiu; Tao, Yalan; Li, Dingjie; Zheng, Cheng; Knechtges, Paul; Erickson, Beth; Li, X Allen
2017-01-01
In an effort toward early assessment of treatment response, we investigate radiation-induced changes in quantitative CT features of tumor during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Diagnostic-quality CT data acquired daily during routine CT-guided CRT using a CT-on-rails system for 20 pancreatic head cancer patients were analyzed. On each daily CT, the pancreatic head, the spinal cord and the aorta were delineated, and the histograms of CT number (CTN) in these contours were extracted. Eight histogram-based radiomic metrics, including the mean CTN (MCTN), peak position, volume, standard deviation (SD), skewness, kurtosis, energy and entropy, were calculated for each fraction. A paired t-test was used to check the significance of the change in a given metric at a given time point. A generalized estimating equation (GEE) model was used to test the association between changes of metrics over time for different pathology responses. In general, the CTN histogram in the pancreatic head (but not in the spinal cord) changed during CRT delivery. Changes from the 1st to the 26th fraction in MCTN ranged from -15.8 to 3.9 HU, with an average of -4.7 HU (p<0.001). Meanwhile, the volume decreased, the skewness increased (less skewed), and the kurtosis decreased (less peaked). The changes in MCTN, volume, skewness and kurtosis became significant after two weeks of treatment. Pathological response was associated with the changes in MCTN, SD and skewness. In cases of good response, patients tended to have large reductions in MCTN and skewness, and large increases in SD and kurtosis. Significant changes in CT radiomic features, such as the MCTN, skewness and kurtosis of the tumor, were observed during the course of CRT for pancreatic cancer based on quantitative analysis of daily CTs. These changes may potentially be used for early assessment of treatment response and stratification for therapeutic intensification.
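The first-order histogram metrics named in this record (mean, SD, skewness, kurtosis, peak position, energy, entropy) have standard definitions that can be sketched directly. The binning, function name and simulated HU values below are assumptions, not the paper's settings; "volume" is a contour property and is omitted.

```python
import numpy as np

def histogram_metrics(ctn, bins=64):
    """First-order histogram features of CT numbers inside a contour.
    Common definitions; the paper's exact binning is an assumption."""
    ctn = np.asarray(ctn, dtype=float)
    mean = ctn.mean()
    sd = ctn.std(ddof=1)
    z = (ctn - mean) / sd
    skewness = np.mean(z ** 3)
    kurtosis = np.mean(z ** 4)            # Pearson kurtosis, not excess
    counts, edges = np.histogram(ctn, bins=bins)
    p = counts / counts.sum()
    # bin-center of the tallest histogram bar
    peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    nz = p[p > 0]
    energy = np.sum(p ** 2)               # a.k.a. uniformity
    entropy = -np.sum(nz * np.log2(nz))   # Shannon entropy in bits
    return dict(mean=mean, sd=sd, skewness=skewness, kurtosis=kurtosis,
                peak_position=peak, energy=energy, entropy=entropy)

rng = np.random.default_rng(1)
m = histogram_metrics(rng.normal(40.0, 10.0, 50_000))  # toy HU values
```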
Rochon, Justine; Kieser, Meinhard
2011-11-01
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng, it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without a preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without the pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2)) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples, as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
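The pretest-then-t-test pipeline studied here can be illustrated by simulation. The sketch below substitutes a hand-rolled Jarque-Bera statistic for the GOF test and hardcodes 5% critical values (chi-square with 2 df, and t with 29 df); it is a toy under those assumptions, not the exact procedure of Schucany and Ng.

```python
import numpy as np

# H0 is true (exponential(1) has population mean 1), so every t-test
# rejection is a Type I error. We track the rejection rate overall and
# conditionally among samples that "pass" the normality screen.
rng = np.random.default_rng(42)
n, reps = 30, 20_000
T_CRIT = 2.045        # two-sided 5% critical value, t with df = 29
JB_CRIT = 5.99        # 5% critical value, chi-square with 2 df

rejects_all, rejects_passed, n_passed = 0, 0, 0
for _ in range(reps):
    x = rng.exponential(1.0, n)
    z = (x - x.mean()) / x.std(ddof=0)
    s, k = np.mean(z**3), np.mean(z**4)
    jb = n / 6.0 * (s**2 + (k - 3.0)**2 / 4.0)   # Jarque-Bera statistic
    t = (x.mean() - 1.0) / (x.std(ddof=1) / np.sqrt(n))
    reject = abs(t) > T_CRIT
    rejects_all += reject
    if jb <= JB_CRIT:             # sample passes the normality screen
        n_passed += 1
        rejects_passed += reject

type1_uncond = rejects_all / reps
type1_cond = rejects_passed / max(n_passed, 1)
```

Comparing `type1_cond` with `type1_uncond` reproduces the qualitative question the paper asks: whether screening changes the conditional Type I error rate for skewed parent distributions.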
Determining collective barrier operation skew in a parallel computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
2015-11-24
Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes, until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time, and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion time.
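The final calculation described above reduces to a max-minus-min over the per-trial barrier completion times. A toy sketch (the timing values are simulated and the names are illustrative, not from the patent record):

```python
import random

def barrier_skew(completion_times):
    """Barrier operation skew: the spread between the largest and
    smallest measured barrier completion times."""
    return max(completion_times) - min(completion_times)

# toy data: one measured completion time (seconds) per delayed-node trial
random.seed(0)
times = [1.0e-3 + random.uniform(0.0, 2.0e-4) for _ in range(16)]
skew = barrier_skew(times)
```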
Seyed Moosavi, Seyed Mohsen; Moaveni, Bijan; Moshiri, Behzad; Arvan, Mohammad Reza
2018-02-27
The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and carried out auto-calibration, fault diagnosis and isolation of the accelerometers in this tool. An optimal structure comprising four accelerometers was selected and designed precisely in accordance with the physical shape of the existing MWD tool. This new four-accelerometer structure was designed, implemented and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration was performed for the skewed redundant accelerometers and for all combinations of three accelerometers. Consequently, the biases, scale factors and misalignment factors of the accelerometers were successfully estimated. When faults were introduced into sensors of the new optimal skewed redundant structure, the fault was detected using the proposed fault detection and isolation (FDI) method, and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate as long as at least three healthy sensors remain.
Evaluation of Kurtosis into the product of two normally distributed variables
NASA Astrophysics Data System (ADS)
Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio
2016-06-01
Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis of the product of two normally distributed variables. The product of two normal variables is a very common problem in several areas of study, such as physics, economics and psychology. Normal variables have a constant kurtosis (κ = 3), independently of the values of their two parameters, mean and variance. In fact, the excess kurtosis is defined as κ - 3, so the excess kurtosis of the normal distribution is zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them; the excess kurtosis lies in [0, 6] for independent variables and in [0, 12] when correlation between them is allowed.
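The upper end of the quoted range can be checked by Monte Carlo: for two independent zero-mean standard normals, the product has kurtosis 9, i.e. excess kurtosis 6. A numpy sketch:

```python
import numpy as np

# Monte Carlo check: for independent zero-mean standard normals X and Y,
# the product XY has kurtosis 9, i.e. excess kurtosis 6 (the upper end
# of the [0, 6] range quoted for independent variables).
rng = np.random.default_rng(7)
N = 2_000_000
xy = rng.standard_normal(N) * rng.standard_normal(N)
z = (xy - xy.mean()) / xy.std()
excess_kurtosis = float(np.mean(z**4) - 3.0)
```

Nonzero means pull the product toward normality, which is how the excess kurtosis can fall anywhere in the quoted interval.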
Beyond R0: Demographic Models for Variability of Lifetime Reproductive Output
Caswell, Hal
2011-01-01
The net reproductive rate R0 measures the expected lifetime reproductive output of an individual, and plays an important role in demography, ecology, evolution, and epidemiology. Well-established methods exist to calculate it from age- or stage-classified demographic data. As an expectation, R0 provides no information on variability; empirical measurements of lifetime reproduction universally show high levels of variability, and often positive skewness among individuals. This is often interpreted as evidence of heterogeneity, and thus of an opportunity for natural selection. However, variability provides evidence of heterogeneity only if it exceeds the level of variability to be expected in a cohort of identical individuals all experiencing the same vital rates. Such comparisons require a way to calculate the statistics of lifetime reproduction from demographic data. Here, a new approach is presented, using the theory of Markov chains with rewards, which yields all the moments of the distribution of lifetime reproduction. The approach applies to age- or stage-classified models, to constant, periodic, or stochastic environments, and to any kind of reproductive schedule. As examples, I analyze data from six empirical studies of a variety of animal and plant taxa (nematodes, polychaetes, humans, and several species of perennial plants). PMID:21738586
Wikberg, Eva C; Jack, Katharine M; Fedigan, Linda M; Campos, Fernando A; Yashima, Akiko S; Bergstrom, Mackenzie L; Hiwatashi, Tomohide; Kawamura, Shoji
2017-01-01
Reproductive skew in multimale groups may be determined by the need for alpha males to offer reproductive opportunities as staying incentives to subordinate males (concessions), by the relative fighting ability of the alpha male (tug-of-war) or by how easily females can be monopolized (priority-of-access). These models have rarely been investigated in species with exceptionally long male tenures, such as white-faced capuchins, where female mate choice for novel unrelated males may be important in shaping reproductive skew. We investigated reproductive skew in white-faced capuchins at Sector Santa Rosa, Costa Rica, using 20 years of demographic, behavioural and genetic data. Infant survival and alpha male reproductive success were highest in small multimale groups, which suggests that the presence of subordinate males can be beneficial to the alpha male, in line with the concession model's assumptions. None of the skew models predicted the observed degree of reproductive sharing, and the probability of an alpha male producing offspring was not affected by his relatedness to subordinate males, whether he resided with older subordinate males, whether he was prime aged, the number of males or females in the group or the number of infants conceived within the same month. Instead, the alpha male's probability of producing offspring decreased when he was the sire of the mother, was weak and lacked a well-established position and had a longer tenure. Because our data best supported the inbreeding avoidance hypothesis and female choice for strong novel mates, these hypotheses should be taken into account in future skew models. © 2016 John Wiley & Sons Ltd.
Thermal response of a highly skewed integral bridge.
DOT National Transportation Integrated Search
2012-06-01
The purpose of this study was to conduct a field evaluation of a highly skewed semi-integral bridge in order to provide feedback regarding some of the assumptions behind the design guidelines developed by the Virginia Department of Transportation...
Theoretical and field experimental evaluation of skewed modular slab bridges.
DOT National Transportation Integrated Search
2012-12-01
As a result of longitudinal cracking discovered in the concrete overlays of some recently-built skewed bridges, the Maryland State Highway Administration (SHA) requested that this research project be conducted for two purposes: (1) to determine t...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shukla-Dave, Amita, E-mail: davea@mskcc.org; Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, NY; Lee, Nancy Y.
2012-04-01
Purpose: Dynamic contrast-enhanced MRI (DCE-MRI) can provide information regarding tumor perfusion and permeability and has shown prognostic value in certain tumor types. The goal of this study was to assess the prognostic value of pretreatment DCE-MRI in head and neck squamous cell carcinoma (HNSCC) patients with nodal disease undergoing chemoradiation therapy or surgery. Methods and Materials: Seventy-four patients with histologically proven squamous cell carcinoma and neck nodal metastases were eligible for the study. Pretreatment DCE-MRI was performed on a 1.5T MRI scanner. Clinical follow-up was a minimum of 12 months. DCE-MRI data were analyzed using the Tofts model. DCE-MRI parameters were related to treatment outcome (progression-free survival [PFS] and overall survival [OS]). Patients were grouped as no evidence of disease (NED), alive with disease (AWD), dead with disease (DOD), or dead of other causes (DOC). Prognostic significance was assessed using the log-rank test for single variables and Cox proportional hazards regression for combinations of variables. Results: At last clinical follow-up, for Stage III, all 12 patients were NED. For Stage IV, 43 patients were NED, 4 were AWD, 11 were DOD, and 4 were DOC. In a stepwise Cox regression, skewness of Ktrans (the volume transfer constant) was the strongest predictor for Stage IV patients (PFS and OS: p < 0.001). Conclusion: Our study shows that skewness of Ktrans was the strongest predictor of PFS and OS in Stage IV HNSCC patients with nodal disease. This study suggests an important role for the pretreatment DCE-MRI parameter Ktrans as a predictor of outcome in these patients.
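The Tofts model mentioned in this record expresses tissue contrast concentration as the plasma input convolved with an exponential kernel governed by Ktrans and ve (with kep = Ktrans/ve). The discrete sketch below uses a toy input function; the paper's fitting details and parameter values are not reproduced.

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, ve):
    """Standard Tofts model: Ct(t) = Ktrans * (Cp conv exp(-kep*t)),
    with kep = Ktrans / ve. Discrete-convolution sketch on a uniform
    time grid; not the paper's fitting procedure."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

t = np.arange(0.0, 5.0, 0.01)        # minutes
cp = 5.0 * t * np.exp(-2.0 * t)      # toy plasma input function
ct = tofts_concentration(t, cp, ktrans=0.25, ve=0.3)
```

In voxelwise analyses, fitting this model to each voxel yields a Ktrans map whose histogram skewness is the kind of summary statistic the study evaluates.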
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osman, Mohammed K.; Turner, D. D.; Heus, T.
Here, this study explores water vapor turbulence in the convective boundary layer (CBL) using Raman lidar observations from the Atmospheric Radiation Measurement site located at Darwin, Australia. An autocovariance technique was used to separate out the random instrument error from the atmospheric variability during time periods when the CBL is cloud-free, quasi-stationary, and well mixed. We identified 45 cases, comprising 8 wet and 37 dry season events, over the 5-year data record period. The dry season in Darwin is known for warm and dry sunny days, while the wet season is characterized by high humidity and monsoonal rains. The inherent variability of the latter resulted in a more limited number of cases during the wet season. Profiles of the integral scale, variance, coefficient of the structure function, and skewness were analyzed and compared with similar observations from the Raman lidar at the Atmospheric Radiation Measurement Southern Great Plains (SGP) site. The wet season shows larger median variance profiles than the dry season, while the median profiles of the variance from the dry season and the SGP site are found to be more comparable, particularly between 0.4 and 0.75 z_i. The variance and coefficient of the structure function show qualitatively the same vertical pattern. Furthermore, a deeper CBL, a larger gradient of water vapor mixing ratio at z_i, and a strong correlation with the water vapor variance at z_i are seen during the dry season. The median value of the skewness is mostly positive below 0.6 z_i, unlike at the SGP site.
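The autocovariance technique for separating instrument noise from atmospheric variance can be sketched as follows: uncorrelated (white) instrument noise contributes only at lag 0, so extrapolating the autocovariance from small nonzero lags back to lag 0 estimates the noise-free atmospheric variance. This is a generic Lenschow-style sketch under an AR(1) toy model; the paper's exact fit and lag choices are assumptions.

```python
import numpy as np

def noise_free_variance(x, fit_lags=(1, 2, 3)):
    """Atmospheric variance estimated by fitting the autocovariance at
    small nonzero lags and extrapolating to lag 0; the lag-0 value
    itself also contains the white instrument noise."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    acov = np.array([np.mean(x[:n - k] * x[k:])
                     for k in range(max(fit_lags) + 1)])
    # linear extrapolation of acov(lag) from the chosen lags to lag 0
    coef = np.polyfit(list(fit_lags), acov[list(fit_lags)], 1)
    return float(np.polyval(coef, 0.0))

# AR(1) "atmosphere" (unit variance) plus white instrument noise
rng = np.random.default_rng(3)
n, phi, sig_noise = 200_000, 0.98, 0.5
atmos = np.zeros(n)
for i in range(1, n):
    atmos[i] = phi * atmos[i - 1] + rng.standard_normal() * np.sqrt(1 - phi**2)
obs = atmos + rng.standard_normal(n) * sig_noise
var_total = obs.var()            # inflated by the noise variance
var_atmos_est = noise_free_variance(obs)
```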
Field measurements on skewed semi-integral bridge with elastic inclusion : instrumentation report.
DOT National Transportation Integrated Search
2006-01-01
This project was designed to enhance the Virginia Department of Transportation's expertise in the design of integral bridges, particularly as it applies to highly skewed structures. Specifically, the project involves extensive monitoring of a semi-in...
INVESTIGATION OF SEISMIC PERFORMANCE AND DESIGN OF TYPICAL CURVED AND SKEWED BRIDGES IN COLORADO
DOT National Transportation Integrated Search
2018-01-15
This report summarizes the analytical studies on the seismic performance of typical Colorado concrete bridges, particularly those with curved and skewed configurations. A set of bridge models with different geometric configurations derived from a pro...
Effect of implementing lean-on bracing in skewed steel I-girder bridges.
DOT National Transportation Integrated Search
2016-09-01
Skew of the supports in steel I-girder bridges causes undesirable torsional effects, increases cross-frame forces, and generally increases the difficulty of designing and constructing a bridge. The girders experience differential deflections due to th...
Systems of Differential Equations with Skew-Symmetric, Orthogonal Matrices
ERIC Educational Resources Information Center
Glaister, P.
2008-01-01
The solution of a system of linear, inhomogeneous differential equations is discussed. The particular class considered is where the coefficient matrix is skew-symmetric and orthogonal, and where the forcing terms are sinusoidal. More general matrices are also considered.
Evaluation of selected warning signs at skewed railroad-highway crossings.
DOT National Transportation Integrated Search
1986-01-01
A 1984 study by the Research Council recommended that advance warning signs be placed in advance of skewed railroad-highway grade crossings. Several signs were suggested for use, and the study reported here was undertaken to determine the effectivene...
Johnson, Nicholas S; Swink, William D; Brenden, Travis O
2017-03-29
Sex determination mechanisms in fishes lie along a genetic-environmental continuum and thereby offer opportunities to understand how physiology and environment interact to determine sex. Mechanisms and ecological consequences of sex determination in fishes are primarily garnered from teleosts, with little investigation into basal fishes. We tagged and released larval sea lamprey ( Petromyzon marinus ) into unproductive lake and productive stream environments. Sex ratios produced from these environments were quantified by recapturing tagged individuals as adults. Sex ratios from unproductive and productive environments were initially similar. However, sex ratios soon diverged, with unproductive environments becoming increasingly male-skewed and productive environments becoming less male-skewed with time. We hypothesize that slower growth in unproductive environments contributed to the sex ratio differences by directly influencing sex determination. To the best of our knowledge, this is the first study suggesting that growth rate in a fish species directly influences sex determination; other studies have suggested that the environmental variables to which sex determination is sensitive (e.g. density, temperature) act as cues for favourable or unfavourable growth conditions. Understanding mechanisms of sex determination in lampreys may provide unique insight into the underlying principles of sex determination in other vertebrates and provide innovative approaches for their management where valued and invasive. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Schönke, M.; Feldens, P.; Wilken, D.; Papenmeier, S.; Heinrich, C.; von Deimling, J. Schneider; Held, P.; Krastel, S.
2017-06-01
This study presents a new in situ method to explore the impact of macrofauna on seafloor microtopography and corresponding microroughness based on underwater laser line scanning. The local microtopography was determined with mm-level accuracy at three stations colonised by the tubeworm Lanice conchilega offshore of the island of Sylt in the German Bight (south-eastern North Sea), covering approximately 0.5 m² each. Ground truthing was done using underwater video data. Two stations were populated by tubeworm colonies of different population densities, and one station had a hydrodynamically rippled seafloor. Tubeworms caused an increased skewness of the microtopography height distribution and an increased root mean square roughness at short spatial wavelengths compared with hydrodynamic bedforms. Spectral analysis of the 2D Fourier-transformed microtopography showed that the roughness magnitude increased at spatial wavelengths between 0.020 and 0.003 m independently of the tubeworm density. This effect was not detected by commonly used 1D roughness profiles but required consideration of the complete spectrum. Overall, the results reveal that new indicator variables for benthic organisms may be developed based on microtopographic data. An example demonstrates the use of local slope and skewness to detect tubeworms in the measured digital elevation model.
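Two of the summary statistics used in this record, RMS roughness and skewness of the height distribution, can be computed directly from a detrended height field. The sketch below is generic; the spike model for protruding tubes is purely illustrative and not derived from the study's data.

```python
import numpy as np

def roughness_stats(heights):
    """RMS roughness and skewness of a demeaned microtopography height
    field (a generic sketch of the two indicator statistics)."""
    h = np.asarray(heights, float)
    h = h - h.mean()
    rms = np.sqrt(np.mean(h**2))
    skew = np.mean((h / h.std())**3)
    return rms, skew

rng = np.random.default_rng(5)
# symmetric mm-scale bedform noise vs. the same bed with sparse
# positive spikes standing in for protruding tubeworm tubes
smooth_bed = rng.normal(0.0, 0.001, 100_000)
tubes = smooth_bed + (rng.random(100_000) < 0.01) * 0.01
rms_bed, skew_bed = roughness_stats(smooth_bed)
rms_tubes, skew_tubes = roughness_stats(tubes)
```

The sparse positive protrusions raise both the RMS roughness and, more distinctively, the skewness, mirroring the signature the study attributes to tubeworm colonies.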
Intrinsic challenges in ancient microbiome reconstruction using 16S rRNA gene amplification.
Ziesemer, Kirsten A; Mann, Allison E; Sankaranarayanan, Krithivasan; Schroeder, Hannes; Ozga, Andrew T; Brandt, Bernd W; Zaura, Egija; Waters-Rist, Andrea; Hoogland, Menno; Salazar-García, Domingo C; Aldenderfer, Mark; Speller, Camilla; Hendy, Jessica; Weston, Darlene A; MacDonald, Sandy J; Thomas, Gavin H; Collins, Matthew J; Lewis, Cecil M; Hofman, Corinne; Warinner, Christina
2015-11-13
To date, characterization of ancient oral (dental calculus) and gut (coprolite) microbiota has been primarily accomplished through a metataxonomic approach involving targeted amplification of one or more variable regions in the 16S rRNA gene. Specifically, the V3 region (E. coli 341-534) of this gene has been suggested as an excellent candidate for ancient DNA amplification and microbial community reconstruction. However, in practice this metataxonomic approach often produces highly skewed taxonomic frequency data. In this study, we use non-targeted (shotgun metagenomics) sequencing methods to better understand skewed microbial profiles observed in four ancient dental calculus specimens previously analyzed by amplicon sequencing. Through comparisons of microbial taxonomic counts from paired amplicon (V3 U341F/534R) and shotgun sequencing datasets, we demonstrate that extensive length polymorphisms in the V3 region are a consistent and major cause of differential amplification leading to taxonomic bias in ancient microbiome reconstructions based on amplicon sequencing. We conclude that systematic amplification bias confounds attempts to accurately reconstruct microbiome taxonomic profiles from 16S rRNA V3 amplicon data generated using universal primers. Because in silico analysis indicates that alternative 16S rRNA hypervariable regions will present similar challenges, we advocate for the use of a shotgun metagenomics approach in ancient microbiome reconstructions.
Knouft, Jason H
2004-05-01
Many taxonomic and ecological assemblages of species exhibit a right-skewed body size-frequency distribution when characterized at a regional scale. Although this distribution has been frequently described, factors influencing geographic variation in the distribution are not well understood, nor are mechanisms responsible for distribution shape. In this study, variation in the species body size-frequency distributions of 344 regional communities of North American freshwater fishes is examined in relation to latitude, species richness, and taxonomic composition. Although the distribution of all species of North American fishes is right-skewed, a negative correlation exists between latitude and regional community size distribution skewness, with size distributions becoming left-skewed at high latitudes. This relationship is not an artifact of the confounding relationship between latitude and species richness in North American fishes. The negative correlation between latitude and regional community size distribution skewness is partially due to the geographic distribution of families of fishes and apparently enhanced by a nonrandom geographic distribution of species within families. These results are discussed in the context of previous explanations of factors responsible for the generation of species size-frequency distributions related to the fractal nature of the environment, energetics, and evolutionary patterns of body size in North American fishes.
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
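The proposed approach, Monte Carlo power estimation with a bootstrap test of the indirect effect, can be sketched for a simple mediation model X -> M -> Y with paths a and b. The sample size, path coefficients and replication counts below are illustrative assumptions, not defaults of the bmem package.

```python
import numpy as np

rng = np.random.default_rng(11)

def indirect_effect(x, m, y):
    """OLS estimate of the indirect effect a*b for X -> M -> Y."""
    xc = x - x.mean()
    a = (xc @ (m - m.mean())) / (xc @ xc)     # slope of M ~ X
    r = (m - m.mean()) - a * xc               # M with X partialled out
    b = (r @ (y - y.mean())) / (r @ r)        # coefficient on M in Y ~ X + M
    return a * b

def bootstrap_power(n=100, a=0.3, b=0.3, reps=150, boot=200, alpha=0.05):
    """Fraction of simulated datasets whose percentile bootstrap CI for
    the indirect effect excludes zero (illustrative settings)."""
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)
        y = b * m + rng.standard_normal(n)
        ab = np.empty(boot)
        for j in range(boot):
            idx = rng.integers(0, n, n)       # resample cases with replacement
            ab[j] = indirect_effect(x[idx], m[idx], y[idx])
        lo, hi = np.quantile(ab, [alpha / 2, 1 - alpha / 2])
        hits += (lo > 0) or (hi < 0)
    return hits / reps

power = bootstrap_power()
```

Nonnormal or skewed residuals could be substituted for `rng.standard_normal` here, which is the scenario the proposed method is designed to handle.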
Entropy of stable seasonal rainfall distribution in Kelantan
NASA Astrophysics Data System (ADS)
Azman, Muhammad Az-zuhri; Zakaria, Roslinazairimah; Satari, Siti Zanariah; Radi, Noor Fadhilah Ahmad
2017-05-01
Investigating rainfall variability is vital for planning and management in many fields related to water resources. Climate change can affect water availability and may aggravate water scarcity in the future. Two statistical measures that have been used by many researchers to quantify rainfall variability are the variance and the coefficient of variation. However, these two measures are insufficient, since the rainfall distribution in Malaysia, especially on the East Coast of Peninsular Malaysia, is not symmetric; instead, it is positively skewed. In this study, the entropy concept is used as a tool to measure the seasonal rainfall variability in Kelantan, and ten rainfall stations were selected. In previous studies, the entropy of stable rainfall (ESR) and the apportionment entropy (AE) were used to describe the variability of rainfall amounts across years for Australian rainfall data. In this study, the entropy of stable seasonal rainfall (ESSR) is suggested to model the variability of rainfall amounts during the northeast monsoon (NEM) and southwest monsoon (SWM) seasons in Kelantan. The ESSR is defined to measure the long-term average variability of seasonal rainfall amounts within a given year (1960-2012). On the other hand, the AE measures the variability of rainfall amounts across the months. The results for the ESSR and AE values show that stations along the east coastline are more variable than the inland stations for Kelantan rainfall. Contour maps of the ESSR for the Kelantan rainfall stations are also presented.
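The apportionment entropy, the Shannon entropy of each month's share of annual rainfall, can be sketched in a few lines. The monthly values below are invented for illustration; they are not Kelantan data.

```python
import numpy as np

def apportionment_entropy(monthly_rain):
    """Apportionment entropy (bits): Shannon entropy of the fraction of
    annual rainfall falling in each month. A perfectly even spread over
    12 months attains the maximum, log2(12)."""
    r = np.asarray(monthly_rain, float)
    p = r / r.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

uniform_year = [100.0] * 12                 # evenly apportioned rainfall
monsoon_year = [350, 300, 90, 40, 20, 10,   # rainfall concentrated in a
                10, 20, 40, 90, 200, 300]   # monsoon-like wet season
ae_uniform = apportionment_entropy(uniform_year)
ae_monsoon = apportionment_entropy(monsoon_year)
```

A lower AE therefore flags a station whose rainfall is concentrated in a few months, which is the sense in which the coastal stations are "more variable".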
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval-for example, leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
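One of the robust alternatives discussed, the Fisher z' interval applied to the Spearman rank correlation with Fieller's adjusted standard error, can be sketched as follows. This is a simplified version that ignores ties, not the authors' supplementary R code.

```python
import numpy as np

def spearman_ci(x, y):
    """95% interval for the Spearman rank correlation: Fisher z'
    applied to the rank correlation, with Fieller's adjusted
    standard error sqrt(1.06/(n-3)). Ties are not handled."""
    def ranks(v):
        return np.argsort(np.argsort(v)).astype(float)
    rs = np.corrcoef(ranks(x), ranks(y))[0, 1]
    n = len(x)
    z = np.arctanh(rs)
    se = np.sqrt(1.06 / (n - 3))
    zc = 1.959964                 # 97.5th percentile of the normal
    return np.tanh(z - zc * se), np.tanh(z + zc * se)
```

Because the interval is built on ranks, a monotone transformation of either variable (the kind of nonnormality that distorts Fisher z') leaves it unchanged.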
Examining dental expenditure and dental insurance accounting for probability of incurring expenses.
Teusner, Dana; Smith, Valerie; Gnanamanickam, Emmanuel; Brennan, David
2017-04-01
There are few studies of dental service expenditure in Australia. Although dental insurance status is strongly associated with a higher probability of dental visiting, some studies indicate that there is little variation in expenditure by insurance status among those who attend for care. Our objective was to assess the overall impact of insurance on expenditures by modelling the association between insurance and expenditure accounting for variation in the probability of incurring expenses, that is dental visiting. A sample of 3000 adults (aged 30-61 years) was randomly selected from the Australian electoral roll. Dental service expenditures were collected prospectively over 2 years by client-held log books. Questionnaires collecting participant characteristics were administered at baseline, 12 months and 24 months. Unadjusted and adjusted ratios of expenditure were estimated using marginalized two-part log-skew-normal models. Such models accommodate highly skewed data and estimate effects of covariates on the overall marginal mean while accounting for the probability of incurring expenses. Baseline response was 39%; of these, 40% (n = 438) were retained over the 2-year period. Only participants providing complete data were included in the analysis (n = 378). Of these, 68.5% were insured, and 70.9% accessed dental services of which nearly all (97.7%) incurred individual dental expenses. The mean dental service expenditure for the total sample (those who did and did not attend) for dental care was AUS$788. Model-adjusted ratios of mean expenditures were higher for the insured (1.61; 95% CI 1.18, 2.20), females (1.38; 95% CI 1.06, 1.81), major city residents (1.43; 95% CI 1.10, 1.84) and those who brushed their teeth twice or more a day (1.50; 95% CI 1.15, 1.96) than their respective counterparts. 
Accounting for the probability of incurring dental expenses, and other explanatory factors, insured working-aged adults had (on average) approximately 60% higher individual dental service expenditures than uninsured adults. The analytical approach adopted in this study is useful for estimating effects on dental expenditure when a variable is associated with both the probability of visiting for care, and with the types of services received. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Forces in wingwalls from thermal expansion of skewed semi-integral bridges.
DOT National Transportation Integrated Search
2010-11-01
Jointless bridges, such as semi-integral and integral bridges, have become more popular in recent years because of their simplicity in the construction and the elimination of high costs related to joint maintenance. Prior research has shown that skew...
Scaling laws and properties of compositional data
NASA Astrophysics Data System (ADS)
Buccianti, Antonella; Albanese, Stefano; Lima, AnnaMaria; Minolfi, Giulia; De Vivo, Benedetto
2016-04-01
Many random processes occur in geochemistry. Accurate predictions of the manner in which elements or chemical species interact with each other are needed to construct models able to treat the presence of random components. The geochemical variables actually observed are the consequence of several events, some of which may be poorly defined or imperfectly understood. Variables tend to change with time/space but, despite their complexity, may share specific common traits, and it is possible to model them stochastically. Description of the frequency distribution of geochemical abundances has been an important target of research, attracting attention for at least 100 years, starting with CLARKE (1889) and continuing with GOLDSCHMIDT (1933) and WEDEPOHL (1955). However, it was AHRENS (1954a,b) who focussed on the effect of skewed distributions, for example the log-normal distribution, regarded by him as a fundamental law of geochemistry. Although the modeling of frequency distributions with probabilistic models (for example Gaussian, log-normal, Pareto) has been well discussed in several fields of application, little attention has been devoted to the features of compositional data. When the compositional nature of the data is taken into account, the most typical distribution models for compositions are the Dirichlet and the additive logistic normal (or normal on the simplex) (AITCHISON et al. 2003; MATEU-FIGUERAS et al. 2005; MATEU-FIGUERAS and PAWLOWSKY-GLAHN 2008; MATEU-FIGUERAS et al. 2013). As an alternative, because compositional data have to be transformed from the simplex to real space, the coordinates obtained by the ilr transformation or by application of the concept of balance can be analyzed by classical methods (EGOZCUE et al. 2003). In this contribution an approach coherent with the properties of compositional information is proposed and used to investigate the shape of the frequency distribution of compositional data.
The purpose is to understand data-generation processes from the perspective of compositional theory. The approach is based on the use of the isometric log-ratio transformation, characterized by theoretical and practical advantages, but requiring a more complex geochemical interpretation compared with the investigation of single variables. The proposed methodology directs attention to model the frequency distributions of more complex indices, linking all the terms of the composition to better represent the dynamics of geochemical processes. An example of its application is presented and discussed by considering topsoil geochemistry of Campania Region (southern Italy). The investigated multi-element data archive contains, among others, Al, As, B, Ba, Ca, Co, Cr, Cu, Fe, K, La, Mg, Mn, Mo, Na, Ni, P, Pb, Sr, Th, Ti, V and Zn (mg/kg) contents determined in 3535 new topsoils as well as information on coordinates, geology, land cover. (BUCCIANTI et al., 2015). AHRENS, L. ,1954a. Geochim. Cosm. Acta 6, 121-131. AHRENS, L., 1954b. Geochim. Cosm. Acta 5, 49-73. AITCHISON, J., et al., 2003. Math Geol 35(6), 667-680. BUCCIANTI et al., 2015. Jour. Geoch. Explor., 159, 302-316. CLARKE, F., 1889. Phil. Society of Washington Bull. 11, 131-142. EGOZCUE, J.J. et al., 2003. Math Geol 35(3), 279-300. MATEU-FIGUERAS, G. et al, (2005), Stoch. Environ. Res. Risk Ass. 19(3), 205-214.
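The ilr transformation referred to above maps a D-part composition to D-1 real coordinates that can then be analyzed with classical methods. A minimal sketch using one standard sequential-binary-partition basis follows; the balances actually chosen in the paper may differ.

```python
import numpy as np

def ilr(x):
    """Isometric log-ratio coordinates of a composition via the
    standard basis: z_i = sqrt(i/(i+1)) * ln(g(x_1..x_i) / x_{i+1}),
    where g() is the geometric mean of the first i parts."""
    x = np.asarray(x, dtype=float)
    x = x / x.sum()                        # closure onto the simplex
    D = len(x)
    z = np.empty(D - 1)
    for i in range(1, D):
        g = np.exp(np.mean(np.log(x[:i])))   # geometric mean
        z[i - 1] = np.sqrt(i / (i + 1)) * np.log(g / x[i])
    return z
```

Note the two defining properties: equal parts map to the origin, and rescaling the raw amounts (e.g. reporting in different units) leaves the coordinates unchanged.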
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization. 
The applicability and accuracy of the regional regression equations depend on the basin characteristics measured for an ungaged location on a stream being within range of those used to develop the equations.
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. 
A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
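The conventional three-point fit mentioned above is often written in its PERT form, under the assumption that the mean is (min + 4·mode + max)/6, which puts the standard deviation near one sixth of the range. The sketch below uses that common convention; it is not the Glenn Research Center variant, whose details are in ref. 1.

```python
def pert_beta_params(lo, mode, hi, lam=4.0):
    """Conventional (PERT-style) beta fit from minimum, most-likely
    and maximum values, assuming mean = (lo + lam*mode + hi)/(lam+2).
    Returns the two shape parameters of the beta on [lo, hi] and
    the implied mean."""
    mean = (lo + lam * mode + hi) / (lam + 2.0)
    alpha = 1.0 + lam * (mode - lo) / (hi - lo)   # left shape
    beta = 1.0 + lam * (hi - mode) / (hi - lo)    # right shape
    return alpha, beta, mean
```

For a symmetric case the two shape parameters coincide; when the mode sits left of centre the fit is right-skewed and the mean lies above the mode, which is the asymmetric regime where the in-house method and the conventional one diverge.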
The assignment of scores procedure for ordinal categorical data.
Chen, Han-Ching; Wang, Nae-Sheng
2014-01-01
Ordinal data are the most frequently encountered type of data in the social sciences. Many statistical methods can be used to process such data. One common approach is to assign scores to the data, convert them into interval data, and then perform further statistical analysis. Several authors have recently developed methods for assigning scores to ordered categorical data. This paper proposes an approach that defines an assigned-score system for an ordinal categorical variable based on an underlying continuous latent distribution, with its interpretation illustrated through three case-study examples. The results show that the proposed score system performs well for skewed ordinal categorical data.
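One common construction of latent-distribution scores assigns each ordered category the mean of a standard normal variable restricted to that category's quantile band. The sketch below illustrates that construction; the paper's own score system may differ in detail.

```python
import math

def latent_normal_scores(counts):
    """Score for each ordered category = mean of a standard normal
    within the category's quantile band [p0, p1]:
    (phi(z(p0)) - phi(z(p1))) / (p1 - p0)."""
    n = sum(counts)

    def phi(z):                       # standard normal density
        return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

    def ppf(p):                       # inverse normal CDF by bisection
        lo, hi = -10.0, 10.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    scores, cum = [], 0
    for c in counts:
        p0, p1 = cum / n, (cum + c) / n
        d0 = phi(ppf(p0)) if p0 > 0 else 0.0   # density -> 0 in the tails
        d1 = phi(ppf(p1)) if p1 < 1 else 0.0
        scores.append((d0 - d1) / (p1 - p0))
        cum += c
    return scores
```

For symmetric category frequencies the scores come out symmetric about zero; skewed frequencies yield unevenly spaced scores, which is precisely where equally spaced integer scores perform worst.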
An investigation of safety problems at skewed rail-highway grade crossings.
DOT National Transportation Integrated Search
1984-01-01
Skewed rail-highway grade crossings can be a safety problem because of the restrictions which the angle of crossing may place upon a motorist's ability to detect an oncoming train and because of the potential roadway hazard which the use of flangeway...
DOT National Transportation Integrated Search
2016-12-01
Damage to skewed and curved bridges during strong earthquakes is documented. This project investigates whether such damage could be mitigated by using buckling restrained braces. Nonlinear models show that using buckling restrained braces to mitigate...
DOT National Transportation Integrated Search
2009-10-01
The research presented herein describes the field verification for the effectiveness of continuity diaphragms for : skewed continuous precast, prestressed, concrete girder bridges. The objectives of this research are (1) to perform : field load testi...
Design study for multi-channel tape recorder system, volume 2
NASA Technical Reports Server (NTRS)
1972-01-01
Skew test data are presented on a tape recorder transport with a double capstan drive for a 100 KHz tone recorded on five tracks simultaneously. Phase detectors were used to measure the skew when the center channel was the 100 KHz reference.
DOT National Transportation Integrated Search
2016-12-01
The objective of this project is to find effective configurations for using buckling restrained braces (BRBs) in both skewed and curved bridges for reducing the effects of strong earthquakes. Verification is performed by numerical simulation using an...
Assessing medication effects in the MTA study using neuropsychological outcomes.
Epstein, Jeffery N; Conners, C Keith; Hervey, Aaron S; Tonev, Simon T; Arnold, L Eugene; Abikoff, Howard B; Elliott, Glen; Greenhill, Laurence L; Hechtman, Lily; Hoagwood, Kimberly; Hinshaw, Stephen P; Hoza, Betsy; Jensen, Peter S; March, John S; Newcorn, Jeffrey H; Pelham, William E; Severe, Joanne B; Swanson, James M; Wells, Karen; Vitiello, Benedetto; Wigal, Timothy
2006-05-01
While studies have increasingly investigated deficits in reaction time (RT) and RT variability in children with attention deficit/hyperactivity disorder (ADHD), few studies have examined the effects of stimulant medication on these important neuropsychological outcome measures. 316 children who participated in the Multimodal Treatment Study of Children with ADHD (MTA) completed the Conners' Continuous Performance Test (CPT) at the 24-month assessment point. Outcome measures included standard CPT outcomes (e.g., errors of commission, mean hit reaction time (RT)) and RT indicators derived from an Ex-Gaussian distributional model (i.e., mu, sigma, and tau). Analyses revealed significant effects of medication across all neuropsychological outcome measures. Results on the Ex-Gaussian outcome measures revealed that stimulant medication slows RT and reduces RT variability. This demonstrates the importance of including analytic strategies that can accurately model the actual distributional pattern, including the positive skew. Further, the results of the study relate to several theoretical models of ADHD.
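The Ex-Gaussian decomposition of RTs into mu and sigma (the normal component) and tau (the exponential tail capturing positive skew) can be estimated crudely by the method of moments, sketched below; published analyses, including the MTA work, typically use maximum likelihood instead.

```python
import numpy as np

def exgauss_moments(rt):
    """Method-of-moments estimates of ex-Gaussian parameters from a
    sample of reaction times: tau from the third central moment
    (m3 = 2*tau^3), then mu = mean - tau, sigma^2 = var - tau^2."""
    rt = np.asarray(rt, dtype=float)
    mean = rt.mean()
    var = rt.var()
    m3 = np.mean((rt - mean) ** 3)        # third central moment
    tau = (m3 / 2.0) ** (1.0 / 3.0)       # exponential tail
    mu = mean - tau                       # normal-component mean
    sigma = np.sqrt(max(var - tau ** 2, 0.0))
    return mu, sigma, tau
```

The estimator assumes positive skew (m3 > 0), which is the empirical pattern in RT data; a larger tau corresponds to a longer slow tail and hence greater RT variability of the kind stimulant medication reduced.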
Wu, Cuiqin; Yuan, Dongxing; Liu, Baomin
2006-12-01
An analytical method involving anion exchange high performance liquid chromatographic determination of vitellogenin (Vtg) in fish plasma after postcolumn fluorescence derivatization with o-phthalaldehyde (OPA) was developed. The retention time of Vtg was about 11 min. The reagent variables for derivatization were optimized. The fluorophore was excited at 335 nm and detected at 435 nm. A calibration curve was established ranging from 0.13 to 11.28 microg. The determination limit of Vtg was found to be as low as 0.13 microg. The spiked recovery was 93.6% and interassay variability was less than 4%. The method developed was used to determine Vtg in fish plasma obtained from red sea bream (Pagrosomus major), black porgy (Sparus macrocephalus) and skew band grunt (Hapalogenys nitens), without complicated sample pretreatment. The results confirmed that the method showed advantages of being simple, rapid, reproducible and sensitive.
Motor Unit Interpulse Intervals During High Force Contractions.
Stock, Matt S; Thompson, Brennan J
2016-01-01
We examined the means, medians, and variability for motor-unit interpulse intervals (IPIs) during voluntary, high force contractions. Eight men (mean age = 22 years) attempted to perform isometric contractions at 90% of their maximal voluntary contraction force while bipolar surface electromyographic (EMG) signals were detected from the vastus lateralis and vastus medialis muscles. Surface EMG signal decomposition was used to determine the recruitment thresholds and IPIs of motor units that demonstrated accuracy levels ≥ 96.0%. Motor units with high recruitment thresholds demonstrated longer mean IPIs, but the coefficients of variation were similar across all recruitment thresholds. Polynomial regression analyses indicated that for both muscles, the relationship between the means and standard deviations of the IPIs was linear. The majority of IPI histograms were positively skewed. Although low-threshold motor units were associated with shorter IPIs, the variability among motor units with differing recruitment thresholds was comparable.
Prefrontal cortex damage abolishes brand-cued changes in cola preference.
Koenigs, Michael; Tranel, Daniel
2008-03-01
Human decision-making is remarkably susceptible to commercial advertising, yet the neurobiological basis of this phenomenon remains largely unexplored. With a series of Coke and Pepsi taste tests we show that patients with damage specifically involving ventromedial prefrontal cortex (VMPC), an area important for emotion, did not demonstrate the normal preference bias when exposed to brand information. Both comparison groups (neurologically normal adults and lesion patients with intact VMPC) preferred Pepsi in a blind taste test, but in subsequent taste tests that featured brand information ('semi-blind' taste tests), both comparison groups' preferences were skewed toward Coke, illustrating the so-called 'Pepsi paradox'. Like comparison groups, the VMPC patients preferred Pepsi in the blind taste test, but unlike comparison groups, the VMPC patients maintained their Pepsi preference in the semi-blind test. The result that VMPC damage abolishes the 'Pepsi paradox' suggests that the VMPC is an important part of the neural substrate for translating commercial images into brand preferences.
Guan, Yue; Shi, Hua; Chen, Ying; Liu, Song; Li, Weifeng; Jiang, Zhuoran; Wang, Huanhuan; He, Jian; Zhou, Zhengyang; Ge, Yun
2016-01-01
The aim of this study was to explore the application of whole-lesion histogram analysis of apparent diffusion coefficient (ADC) values of cervical cancer. A total of 54 women (mean age, 53 years) with cervical cancers underwent 3-T diffusion-weighted imaging with b values of 0 and 800 s/mm² prospectively. Whole-lesion histogram analysis of ADC values was performed. Paired sample t test was used to compare differences in ADC histogram parameters between cervical cancers and normal cervical tissues. Receiver operating characteristic curves were constructed to identify the optimal threshold of each parameter. All histogram parameters in this study including ADCmean, ADCmin, ADC10%-ADC90%, mode, skewness, and kurtosis of cervical cancers were significantly lower than those of normal cervical tissues (all P < 0.0001). ADC90% had the largest area under receiver operating characteristic curve of 0.996. Whole-lesion histogram analysis of ADC maps is useful in the assessment of cervical cancer.
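The whole-lesion histogram parameters reported above can be computed directly from the voxelwise ADC values. A minimal sketch follows; definitions vary between studies (e.g. excess versus raw kurtosis), so this is one common convention rather than the paper's exact pipeline.

```python
import numpy as np

def adc_histogram_features(adc_values):
    """Whole-lesion histogram features of voxelwise ADC values:
    mean, minimum, the deciles p10..p90, skewness and excess
    kurtosis, as commonly reported in DWI histogram analyses."""
    v = np.asarray(adc_values, dtype=float)
    mu, sd = v.mean(), v.std()
    feats = {"mean": mu, "min": v.min()}
    for p in range(10, 100, 10):
        feats[f"p{p}"] = np.percentile(v, p)
    z = (v - mu) / sd                 # standardized values
    feats["skewness"] = np.mean(z ** 3)
    feats["kurtosis"] = np.mean(z ** 4) - 3.0   # excess kurtosis
    return feats
```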
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
Continuous Covariate Imbalance and Conditional Power for Clinical Trial Interim Analyses
Ciolino, Jody D.; Martin, Renee' H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.
2014-01-01
Oftentimes valid statistical analyses for clinical trials involve adjustment for known influential covariates, regardless of imbalance observed in these covariates at baseline across treatment groups. Thus, it must be the case that valid interim analyses also properly adjust for these covariates. There are situations, however, in which covariate adjustment is not possible, not planned, or simply carries less merit as it makes inferences less generalizable and less intuitive. In this case, covariate imbalance between treatment groups can have a substantial effect on both interim and final primary outcome analyses. This paper illustrates the effect of influential continuous baseline covariate imbalance on unadjusted conditional power (CP), and thus, on trial decisions based on futility stopping bounds. The robustness of the relationship is illustrated for normal, skewed, and bimodal continuous baseline covariates that are related to a normally distributed primary outcome. Results suggest that unadjusted CP calculations in the presence of influential covariate imbalance require careful interpretation and evaluation. PMID:24607294
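Unadjusted conditional power under the current-trend assumption is commonly computed from the Lan-Wittes B-value formulation; the sketch below uses that convention for a two-sided 5% test, and the paper's exact CP formulation may differ.

```python
import math

def conditional_power(z_t, t):
    """Conditional power at information fraction t given the interim
    z-statistic z_t, under the current-trend assumption:
    B(t) = z_t*sqrt(t), drift estimate theta = B(t)/t, and
    CP = Phi((B(t) + theta*(1 - t) - 1.96) / sqrt(1 - t))."""
    z_crit = 1.959964            # two-sided alpha = 0.05
    b = z_t * math.sqrt(t)       # B-value at information fraction t
    theta = b / t                # current-trend drift estimate
    arg = (b + theta * (1.0 - t) - z_crit) / math.sqrt(1.0 - t)
    return 0.5 * (1.0 + math.erf(arg / math.sqrt(2.0)))
```

Covariate imbalance enters through z_t: an interim statistic deflated (or inflated) by an unadjusted influential covariate shifts CP and can tip a futility decision, which is the phenomenon the paper quantifies.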
Topological spin-hedgehog crystals of a chiral magnet as engineered with magnetic anisotropy
NASA Astrophysics Data System (ADS)
Kanazawa, N.; White, J. S.; Rønnow, H. M.; Dewhurst, C. D.; Morikawa, D.; Shibata, K.; Arima, T.; Kagawa, F.; Tsukazaki, A.; Kozuka, Y.; Ichikawa, M.; Kawasaki, M.; Tokura, Y.
2017-12-01
We report the engineering of spin-hedgehog crystals in thin films of the chiral magnet MnGe by tailoring the magnetic anisotropy. As evidenced by neutron scattering on films with different thicknesses and by varying a magnetic field, we can realize continuously deformable spin-hedgehog crystals, each of which is described as a superposition state of a different set of three spin spirals (a triple-q state). The directions of the three propagation vectors q vary systematically, gathering from the three orthogonal 〈100 〉 directions towards the film normal as the strength of the uniaxial magnetic anisotropy and/or the magnetic field applied along the film normal increase. The formation of triple-q states coincides with the onset of topological Hall signals, that are ascribed to skew scattering by an emergent magnetic field originating in the nontrivial topology of spin hedgehogs. These findings highlight how nanoengineering of chiral magnets makes possible the rational design of unique topological spin textures.
Work right to right work: An automythology of chronic illness and work.
Vijayasingham, Lavanya
2018-03-01
Objectives: Chronic illness is known to disrupt and redirect the usual course of work trajectories. This article aims to portray the longitudinal course of negotiating work after multiple sclerosis. Methods: Using therapy and personal journals to reconstruct memories and experience, an autoethnography is produced and narrated within Campbell's "Hero's Journey" automythology framework. Results: The narrative highlights the intrasubjectivity of illness meaning: the changing internal meaning-making and external behavior and decision-making dynamics. The journey of being inhibited to "Work Right", to "Looking for the Right" and, ultimately, finding "Right Work" is charted, portrayed as a bittersweet maneuver to achieve work-illness equilibrium. Discussion: This journey traverses a spectrum of negative coping (the exhibition of deviant work behaviors, disengagement and depression) to recalibration and renewal, culminating in living the "new normal" and finding moral and meaningful work engagements. Life trajectories with chronic illness are often skewed and redirected, but longitudinal narratives of normalization and coping also highlight the pursuit of securing and maintaining a life of meaning and value.
The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew
Der, Ricky; Plotkin, Joshua B.
2014-01-01
We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932
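The classical Wright-Fisher (Kingman coalescent) baseline against which the Λ-processes are compared has a simple closed form for the expected number of segregating sites under the infinite-sites model, sketched below; the paper derives the analogous asymptotics for general Λ-processes, which this formula does not cover.

```python
def expected_segregating_sites(n, theta):
    """Classical Wright-Fisher / Kingman-coalescent expectation for
    the number of segregating sites in a sample of size n under the
    infinite-sites model: E[S] = theta * sum_{i=1}^{n-1} 1/i."""
    return theta * sum(1.0 / i for i in range(1, n))
```

The harmonic-sum growth in n is the benchmark: the paper's result that Wright-Fisher minimizes equilibrium diversity means other Λ-processes with the same mutation rate and variance effective size exceed this expectation.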
NASA Astrophysics Data System (ADS)
Castle, James R.; CMS Collaboration
2017-11-01
Flow harmonic fluctuations are studied for PbPb collisions at √s_NN = 5.02 TeV using the CMS detector at the LHC. Flow harmonic probability distributions p(v2) are obtained by unfolding smearing effects from observed azimuthal anisotropy distributions using particles of 0.3
The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps
NASA Astrophysics Data System (ADS)
Simpson, D. J. W.
2018-05-01
In two-parameter bifurcation diagrams of piecewise-linear continuous maps on R^N, mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well-approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-07-01
We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
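The mechanism described above, noisy two-point functions following skewed distributions, can be illustrated with a toy model: a band-power estimate built from a finite number of Gaussian modes follows a scaled chi-squared law, so its mean is unbiased while its typical (median) realization falls below the truth. This is an illustrative sketch, not the paper's hierarchical forward model.

```python
import numpy as np

def band_power_estimates(true_power, n_modes, n_sims, seed=0):
    """Toy band-power estimates: averaging the squared amplitudes of
    n_modes independent Gaussian modes gives true_power * chi2_k / k,
    a right-skewed distribution."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_sims, n_modes))
    return true_power * np.mean(g ** 2, axis=1)

# With few modes per band (low multipoles, or discrete shear
# tracers), most realizations land below the true power even
# though the estimator is unbiased in the mean.
est = band_power_estimates(1.0, n_modes=5, n_sims=100000)
```

As the number of modes per band grows the chi-squared distribution symmetrizes and the low bias of the typical realization disappears, matching the paper's point that the effect is strongest for sparse tracers.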
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watermann, J.; McNamara, A.G.; Sofko, G.J.
Some 7,700 radio aurora spectra obtained from a six-link 50-MHz CW radar network set up on the Canadian prairies were analyzed with respect to the distributions of mean Doppler shift, spectral width and skewness. A comparison with recently published SABRE results obtained at 153 MHz shows substantial differences in the distributions which are probably due to different experimental and geophysical conditions. The spectra are mostly broad with mean Doppler shifts close to zero (type II spectra). The typical groupings of type I and type III spectra are clearly identified. All types appear to be in general much more symmetric than those recorded with SABRE, and the skewness is only weakly dependent on the sign of the mean Doppler shift. Its distribution peaks near zero and shows a weak positive correlation with the type II Doppler shifts while the mostly positive type I Doppler shifts are slightly negatively correlated with the skewness.
Few Skewed Results from IOTA Interferometer YSO Disk Survey
NASA Astrophysics Data System (ADS)
Monnier, J. D.; Millan-Gabet, R.; Berger, J.-P.; Pedretti, E.; Traub, W.; Schloerb, F. P.
2005-12-01
The 3-telescope IOTA interferometer is capable of measuring closure phases for dozens of Herbig Ae/Be stars in the near-infrared. The closure phase unambiguously identifies deviations from centro-symmetry (i.e., skew) in the brightness distribution, at the scale of 4 milliarcseconds (sub-AU physical scales) for our work. Indeed, hot dust emission from the inner circumstellar accretion disk is expected to be skewed for (generic) flared disks viewed at intermediate inclination angles, as has been observed for LkHa 101. Surprisingly, we find very little evidence for skewed disk emission in our IOTA3 sample, setting strong constraints on the geometry of the inner disk. In particular, we rule out the currently-popular model of a VERTICAL hot inner wall of dust at the sublimation radius. Instead, our data is more consistent with a curved inner wall that bends away from the midplane as might be expected from the pressure-dependence of dust sublimation or limited absorption of stellar luminosity in the disk midplane by gas.
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for the competing risks process, and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
CDC6600 subroutine for normal random variables. [RVNORM (RMU, SIG)]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amos, D.E.
1977-04-01
A value y for a uniform variable on (0,1) is generated and a table of 96-percent points for the (0,1) normal distribution is interpolated for a value of the normal variable x(0,1) on 0.02 <= y <= 0.98. For the tails, the inverse normal is computed by a rational Chebyshev approximation in an appropriate variable. Then X = x*sigma + mu gives the X(mu, sigma) variable.
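The routine's three steps (uniform draw, inverse normal via table or tail approximation, rescale) can be sketched in modern form as follows. The name `rvnorm` is ours, and `NormalDist.inv_cdf` stands in for both the interpolated table of normal percent points and the rational Chebyshev tail approximation of the original CDC6600 code.

```python
import random
from statistics import NormalDist

# Modern sketch of the RVNORM logic described above; NormalDist.inv_cdf
# plays the role of both the percent-point table (0.02 <= y <= 0.98) and
# the rational Chebyshev tail approximation.
_STD = NormalDist()  # standard normal, mean 0, sd 1

def rvnorm(mu: float, sigma: float) -> float:
    y = random.random()      # uniform variate on (0, 1)
    x = _STD.inv_cdf(y)      # standard normal variate x(0, 1)
    return x * sigma + mu    # X = x*sigma + mu, i.e. X(mu, sigma)

random.seed(0)
sample = [rvnorm(5.0, 2.0) for _ in range(50000)]
print(sum(sample) / len(sample))  # should be close to mu = 5.0
```

A large sample mean lands near mu and the sample variance near sigma squared, as expected for an inverse-CDF generator.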
Near-field shock formation in noise propagation from a high-power jet aircraft.
Gee, Kent L; Neilsen, Tracianne B; Downing, J Micah; James, Michael M; McKinley, Richard L; McKinley, Robert C; Wall, Alan T
2013-02-01
Noise measurements near the F-35A Joint Strike Fighter at military power are analyzed via spatial maps of overall and band pressure levels and skewness. Relative constancy of the pressure waveform skewness reveals that waveform asymmetry, characteristic of supersonic jets, is a source phenomenon originating farther upstream than the maximum overall level. Conversely, growth of the skewness of the time derivative with distance indicates that acoustic shocks largely form through the course of near-field propagation and are not generated explicitly by a source mechanism. These results potentially counter previous arguments that jet "crackle" is a source phenomenon.
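For readers unfamiliar with the statistic, waveform skewness as used above is the standardized third central moment. A minimal sketch on synthetic data (not jet noise measurements) shows how a sharp-peaks/long-shallows signal yields a clearly positive skewness while a symmetric signal does not:

```python
import random

# Sample skewness: standardized third central moment. The "waveforms" here
# are synthetic stand-ins (Gaussian vs. shifted exponential samples), not
# measured jet noise.
def skewness(samples):
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((s - mean) ** 2 for s in samples) / n  # second central moment
    m3 = sum((s - mean) ** 3 for s in samples) / n  # third central moment
    return m3 / m2 ** 1.5

random.seed(7)
symmetric = [random.gauss(0.0, 1.0) for _ in range(50000)]
peaky = [random.expovariate(1.0) - 1.0 for _ in range(50000)]  # sharp peaks, long shallows

print(f"symmetric signal: Sk = {skewness(symmetric):+.2f}")  # near 0
print(f"peaked signal:    Sk = {skewness(peaky):+.2f}")      # near +2
```

The same estimator applied to the time derivative of a pressure record is what distinguishes source asymmetry from shocks formed during propagation in the study above.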
Codron, Daryl; Carbone, Chris; Clauss, Marcus
2013-01-01
Because egg-laying meant that even the largest dinosaurs gave birth to very small offspring, they had to pass through multiple ontogenetic life stages to adulthood. Dinosaurs' successors as the dominant terrestrial vertebrate life form, the mammals, give birth to live young, and have much larger offspring and less complex ontogenetic histories. The larger number of juveniles in dinosaur as compared to mammal ecosystems represents both a greater diversity of food available to predators, and competitors for similar-sized individuals of sympatric species. Models of population abundances across different-sized species of dinosaurs and mammals, based on simulated ecological life tables, are employed to investigate how differences in predation and competition pressure influenced dinosaur communities. Higher small- to medium-sized prey availability leads to a normal body mass-species richness (M-S) distribution of carnivorous dinosaurs (as found in the theropod fossil record), in contrast to the right-skewed M-S distribution of carnivorous mammals (as found in living members of the order Carnivora). Higher levels of interspecific competition lead to a left-skewed M-S distribution in herbivorous dinosaurs (as found in sauropods and ornithopods), in contrast to the normal M-S distribution of large herbivorous mammals. Thus, our models suggest that differences in reproductive strategy, and consequently ontogeny, explain observed differences in community structure between dinosaur and mammal faunas. Models also show that the largest dinosaurian predators could have subsisted on similar-sized prey by including younger life stages of the largest herbivore species, but that large predators likely avoided prey much smaller than themselves because, despite predicted higher abundances of smaller than larger-bodied prey, contributions of small prey to biomass intake would be insufficient to satisfy meat requirements.
A lack of large carnivores feeding on small prey exists in mammals larger than 21.5 kg, and it seems a similar minimum prey-size threshold could have affected dinosaurs as well. PMID:24204749
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D; Trofimov, A; Winey, B
Purpose: We developed a knowledge-based model that can predict the patient-specific benefits of proton therapy based upon geometric considerations. The model could also aid patient selection in model-based clinical trials or help justify clinical decisions to insurance companies. Methods: The knowledge-based method trains a model upon existing proton treatment plans, exploiting correlations between dose and distance-to-target. Each OAR is split into concentric subvolumes surrounding the target volume, and a skew-normal PDF is fit to the dose distribution found within each shell. The model learns from shared trends in how the best-fit skew-normal parameters depend upon distance-to-target. It can then predict feasible OAR DVHs for a new patient (without a proton plan) based upon their geometry. The expected benefits of proton therapy are assessed by comparing the predicted DVHs to those of an IMRT plan, using a metric such as the equivalent uniform dose (EUD). Results: A model was trained for clival chordoma, owing to its geometric complexity and the multitude of nearby OARs. The model was trained using 20 patients and validated with a further 20 patients, and considers several different OARs. The predicted EUD was in good agreement with that of the actual proton plan. The coefficient of determination (R-squared) was 85% overall, 92% for cochleas, 80% for optic chiasm and 79% for spinal cord. The model exhibited no signs of bias or overfitting. When compared to an IMRT plan, the model could classify whether a patient will experience a gain or a loss with an accuracy between 75% and 95%, depending upon the OAR. Conclusion: We developed a model that can quickly and accurately predict the patient-specific benefits of proton therapy in clival chordoma patients, though models could be trained for other tumor sites. This work is funded by National Cancer Institute grant U19 CA 021239.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. 
We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
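As an illustration of the min/median/max scenario discussed above, here is a hedged sketch of the estimators commonly attributed to this paper (mean from the min/median/max weighting, SD from the range divided by the expected standardized range of n normal draws); verify against the published formulas before relying on it:

```python
from statistics import NormalDist

def estimate_mean_sd(a: float, m: float, b: float, n: int):
    """Estimate the sample mean and SD from the minimum (a), median (m),
    maximum (b) and sample size n, for trials that report only these
    summaries. Formulas as commonly attributed to this paper; verify
    against the published article before production use."""
    mean = (a + 2.0 * m + b) / 4.0
    # Expected standardized range of n normal draws (Blom-type approximation):
    xi = 2.0 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd = (b - a) / xi
    return mean, sd

# Hypothetical summary data: min 1, median 4, max 9, n = 25 patients.
mean, sd = estimate_mean_sd(a=1.0, m=4.0, b=9.0, n=25)
print(f"estimated mean ~ {mean:.2f}, estimated SD ~ {sd:.2f}")
```

Note how the sample size enters through the expected range: for larger n the observed range spans more standard deviations, so the same range implies a smaller SD, which is the key improvement over range/4-style rules.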
Cadmium modulates hematopoietic stem and progenitor cells and skews toward myelopoiesis in mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yandong; Yu, Xinchun
The heavy metal cadmium (Cd) is known to modulate immunity and cause osteoporosis. However, how Cd influences hematopoiesis remains largely unknown. Herein, we show that wild-type C57BL/6 (B6) mice exposed to Cd for 3 months had expanded bone marrow (BM) populations of long-term hematopoietic stem cells (LT-HSCs), common myeloid progenitors (CMPs) and granulocyte-macrophage progenitors (GMPs), while having reduced populations of multipotent progenitors (MPPs) and common lymphoid progenitors (CLPs). A competitive mixed BM transplantation assay indicates that BM from Cd-treated mice had impaired LT-HSC ability to differentiate into mature cells. In accordance with increased myeloid progenitors and decreased lymphoid progenitors, the BM and spleens of Cd-treated mice had more monocytes and/or neutrophils and fewer B cells and T cells. Cd impaired the ability of the non-hematopoietic system to support LT-HSCs, in that lethally irradiated Cd-treated recipients transplanted with normal BM cells had reduced LT-HSCs after the hematopoietic system was fully reconstituted. This is consistent with the reduced osteoblasts, a known critical component of the HSC niche, observed in Cd-treated mice. Conversely, lethally irradiated control recipients transplanted with BM cells from Cd-treated mice had normal LT-HSC reconstitution. Furthermore, both control mice and Cd-treated mice that received Alendronate, a clinical drug used for treating osteoporosis, had BM increases of LT-HSCs. Thus, the results suggest the Cd increase of LT-HSCs is due to effects on HSCs and not on osteoblasts, although Cd causes osteoblast reduction and impaired niche function for maintaining HSCs. Furthermore, Cd skews HSCs toward myelopoiesis. - Highlights: • Cd increases the number of LT-HSCs but impairs their development. • Cd-treated hosts have compromised ability to support LT-HSCs. • Cd promotes myelopoiesis at the expense of lymphopoiesis at the MPP level.
Kinship and Incest Avoidance Drive Patterns of Reproductive Skew in Cooperatively Breeding Birds.
Riehl, Christina
2017-12-01
Social animals vary in how reproduction is divided among group members, ranging from monopolization by a dominant pair (high skew) to equal sharing by cobreeders (low skew). Despite many theoretical models, the ecological and life-history factors that generate this variation are still debated. Here I analyze data from 83 species of cooperatively breeding birds, finding that kinship within the breeding group is a powerful predictor of reproductive sharing across species. Societies composed of nuclear families have significantly higher skew than those that contain unrelated members, a pattern that holds for both multimale and multifemale groups. Within-species studies confirm this, showing that unrelated subordinates of both sexes are more likely to breed than related subordinates are. Crucially, subordinates in cooperative groups are more likely to breed if they are unrelated to the opposite-sex dominant, whereas relatedness to the same-sex dominant has no effect. This suggests that incest avoidance, rather than suppression by dominant breeders, may be an important proximate mechanism limiting reproduction by subordinates. Overall, these results support the ultimate evolutionary logic behind concessions models of skew (namely, that related subordinates gain indirect fitness benefits from helping at the nests of kin, so a lower direct reproductive share is required for selection to favor helping over dispersal), but not the proximate mechanism of dominant control assumed by these models.
Miller, K A; Nelson, N J; Smith, H G; Moore, J A
2009-09-01
Reduced genetic diversity can result in short-term decreases in fitness and reduced adaptive potential, which may lead to an increased extinction risk. Therefore, maintaining genetic variation is important for the short- and long-term success of reintroduced populations. Here, we evaluate how founder group size and variance in male reproductive success influence the long-term maintenance of genetic diversity after reintroduction. We used microsatellite data to quantify the loss of heterozygosity and allelic diversity in the founder groups from three reintroductions of tuatara (Sphenodon), the sole living representatives of the reptilian order Rhynchocephalia. We then estimated the maintenance of genetic diversity over 400 years (approximately 10 generations) using population viability analyses. Reproduction of tuatara is highly skewed, with as few as 30% of males mating across years. Predicted losses of heterozygosity over 10 generations were low (1-14%), and populations founded with more animals retained a greater proportion of the heterozygosity and allelic diversity of their source populations and founder groups. Greater male reproductive skew led to greater predicted losses of genetic diversity over 10 generations, but only accelerated the loss of genetic diversity at small population size (<250 animals). A reduction in reproductive skew at low density may facilitate the maintenance of genetic diversity in small reintroduced populations. If reproductive skew is high and density-independent, larger founder groups could be released to achieve genetic goals for management.
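The reported heterozygosity losses can be sanity-checked against the textbook neutral-drift expectation H_t = H_0 * (1 - 1/(2*Ne))^t. This is not the paper's population viability analysis, and the effective population sizes below are illustrative only, but the resulting range brackets the 1-14% losses quoted above:

```python
# Textbook neutral-drift expectation for heterozygosity retention; Ne values
# are illustrative, not taken from the tuatara study.
def heterozygosity_retained(ne: float, generations: int) -> float:
    return (1.0 - 1.0 / (2.0 * ne)) ** generations

for ne in (30, 100, 250):
    loss = 1.0 - heterozygosity_retained(ne, 10)
    print(f"Ne = {ne:3d}: expected heterozygosity loss over 10 generations = {loss:.1%}")
```

A small founder group (Ne near 30) loses roughly 15% of heterozygosity over 10 generations, while a group of 250 loses about 2%, consistent with the study's finding that drift-driven losses matter mainly at small population size.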
Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann
2008-09-01
Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.
Refinement of Scoring Procedures for the Basic Attributes Test (BAT) Battery
1993-03-01
see Carretta, 1991). Research on the BAT summary scores has shown that some of them (a) are significantly positively skewed and platykurtic, (b) contain...for positively skewed and platykurtic data distributions, and those that were applied here to the BAT data, are the square-root and natural logarithm
Micromagnetic recording model of writer geometry effects at skew
NASA Astrophysics Data System (ADS)
Plumer, M. L.; Bozeman, S.; van Ek, J.; Michel, R. P.
2006-04-01
The effects of the pole-tip geometry at the air-bearing surface on perpendicular recording at a skew angle are examined through modeling and spin-stand test data. Head fields generated by the finite element method were used to record transitions within our previously described micromagnetic recording model. Write-field contours for a variety of square, rectangular, and trapezoidal pole shapes were evaluated to determine the impact of geometry on field contours. Comparing results for recorded track width, transition width, and media signal-to-noise ratio at 0° and 15° skew demonstrates the benefits of trapezoidal and reduced aspect-ratio pole shapes. Consistency between these modeled results and test data is demonstrated.
On the Yakhot-Orszag renormalization group method for deriving turbulence statistics and models
NASA Technical Reports Server (NTRS)
Smith, L. M.; Reynolds, W. C.
1992-01-01
An independent, comprehensive, critical review of the 'renormalization group' (RNG) theory of turbulence developed by Yakhot and Orszag (1986) is provided. Their basic theory for the Navier-Stokes equations is confirmed, and approximations in the scale removal procedure are discussed. The YO derivations of the velocity-derivative skewness and the transport equation for the energy dissipation rate are examined. An algebraic error in the derivation of the skewness is corrected. The corrected RNG skewness value of -0.59 is in agreement with experiments at moderate Reynolds numbers. Several problems are identified in the derivation of the energy dissipation rate equations which suggest that the derivation should be reformulated.
The measurement of boundary layers on a compressor blade in cascade. Volume 2: Data tables
NASA Technical Reports Server (NTRS)
Zierke, William C.; Deutsch, Steven
1989-01-01
Measurements were made of the boundary layers and wakes about a highly loaded, double-circular-arc compressor blade in cascade. These laser Doppler velocimetry measurements have yielded a very detailed and precise data base with which to test the application of viscous computational codes to turbomachinery. In order to test the computational codes at off-design conditions, the data have been acquired at a chord Reynolds number of 500,000 and at three incidence angles. Average values and 95 percent confidence bands were tabulated for the velocity, local turbulence intensity, skewness, kurtosis, and percent backflow. Tables also exist for the blade static-pressure distributions and boundary layer velocity profiles reconstructed to account for the normal pressure gradient.
Risley, John; Moradkhani, Hamid; Hay, Lauren E.; Markstrom, Steve
2011-01-01
In an earlier global climate-change study, air temperature and precipitation data for the entire twenty-first century simulated from five general circulation models were used as input to precalibrated watershed models for 14 selected basins across the United States. Simulated daily streamflow and energy output from the watershed models were used to compute a range of statistics. With a side-by-side comparison of the statistical analyses for the 14 basins, regional climatic and hydrologic trends over the twenty-first century could be qualitatively identified. Low-flow statistics (95% exceedance, 7-day mean annual minimum, and summer mean monthly streamflow) decreased for almost all basins. Annual maximum daily streamflow also decreased in all the basins, except for all four basins in California and the Pacific Northwest. An analysis of the supply of available energy and water for the basins indicated that ratios of evaporation to precipitation and potential evapotranspiration to precipitation for most of the basins will increase. Probability density functions (PDFs) were developed to assess the uncertainty and multimodality in the impact of climate change on mean annual streamflow variability. Kolmogorov-Smirnov tests showed significant differences between the beginning and ending twenty-first-century PDFs for most of the basins, with the exception of four basins that are located in the western United States. Almost none of the basin PDFs were normally distributed, and two basins in the upper Midwest had PDFs that were extremely dispersed and skewed.
On Some Confidence Intervals for Estimating the Mean of a Skewed Population
ERIC Educational Resources Information Center
Shi, W.; Kibria, B. M. Golam
2007-01-01
A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
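The abstract does not give the Median t and Mad t formulas, but the motivation, that the standard interval misbehaves when the population is skewed, is easy to demonstrate. The small Monte Carlo below (our illustration, using the large-sample z critical value as a stand-in for Student's t) shows the usual interval under-covering for exponential data:

```python
import random
import statistics
from statistics import NormalDist

# Coverage check of the standard mean +/- z*s/sqrt(n) interval under a
# skewed (Exponential, mean 1) population. Using z instead of Student's t
# is an approximation of ours; the skewness drives most of the shortfall.
random.seed(1)
z = NormalDist().inv_cdf(0.975)   # two-sided 95% critical value
n, trials = 15, 4000
covered = 0
for _ in range(trials):
    x = [random.expovariate(1.0) for _ in range(n)]  # true mean = 1
    m, s = statistics.mean(x), statistics.stdev(x)
    half = z * s / n ** 0.5
    covered += (m - half) <= 1.0 <= (m + half)
print(f"empirical coverage: {covered / trials:.3f}  (nominal 0.95)")
```

The empirical coverage falls noticeably short of the nominal 95%, which is the gap that adjusted intervals such as those proposed in this note aim to close.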
Journey to Centers in the Core
ERIC Educational Resources Information Center
Groth, Randall E.; Kent, Kristen D.; Hitch, Ebony D.
2015-01-01
Considerable discrepancies between the mean and median often occur in data sets that are skewed left, skewed right, or have other unusual features. In such cases, it is important to analyze the data and context carefully to decide how best to describe centers of distributions. The importance of this type of statistical thinking is acknowledged in…
Caste load and the evolution of reproductive skew.
Holman, Luke
2014-01-01
Reproductive skew theory seeks to explain how reproduction is divided among group members in animal societies. Existing theory is framed almost entirely in terms of selection, though nonadaptive processes must also play some role in the evolution of reproductive skew. Here I propose that a genetic correlation between helper fecundity and breeder fecundity may frequently constrain the evolution of reproductive skew. This constraint is part of a wider phenomenon that I term "caste load," which is defined as the decline in mean fitness caused by caste-specific selection pressures, that is, differential selection on breeding and nonbreeding individuals. I elaborate the caste load hypothesis using quantitative and population genetic arguments and individual-based simulations. Although selection can sometimes erode genetic correlations and resolve caste load, this may be constrained when mutations have similar pleiotropic effects on breeder and helper traits. I document evidence for caste load, identify putative genomic adaptations to it, and suggest future research directions. The models highlight the value of considering adaptation within the boundaries imposed by genetic architecture and incidentally reaffirm that monogamy promotes the evolutionary transition to eusociality.
Jet crackle: skewness transport budget and a mechanistic source model
NASA Astrophysics Data System (ADS)
Buchta, David; Freund, Jonathan
2016-11-01
The sound from high-speed (supersonic) jets, such as on military aircraft, is distinctly different than that from lower-speed jets, such as on commercial airliners. Atop the already loud noise, a higher speed adds an intense, fricative, and intermittent character. The observed pressure wave patterns have strong peaks which are followed by relatively long shallows; notably, their pressure skewness is Sk >= 0.4. Direct numerical simulations of free-shear-flow turbulence show that these skewed pressure waves occur immediately adjacent to the turbulence source for M >= 2.5. Additionally, the near-field waves are seen to intersect and nonlinearly merge with other waves. Statistical analysis of terms in a pressure skewness transport equation shows that starting just beyond δ99 the nonlinear wave mechanics that add to Sk are balanced by damping molecular effects, consistent with this aspect of the sound arising in the source region. A gas dynamics description is developed that neglects rotational turbulence dynamics and yet reproduces the key crackle features. At its core, this mechanism shows simply that nonlinear compressive effects lead directly to stronger compressions than expansions and thus Sk > 0.