Sample records for normal distribution based

  1. On Nonequivalence of Several Procedures of Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2005-01-01

    The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…

  2. Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys.

    PubMed

    Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer

    2016-01-01

    Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the normal distribution assumption can be successfully applied to MUAC. In light of this promising finding, further research is ongoing to evaluate the performance of a normal distribution based approach to estimating the prevalence of wasting using MUAC.
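    A minimal sketch (not the authors' code) of the checks named in this record: Shapiro-Wilk for overall normality, the D'Agostino skewness test, the Anscombe-Glynn kurtosis test, and a Box-Cox power transformation, applied to a simulated, positively skewed MUAC-like sample. The data and parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
muac_mm = rng.gamma(shape=60, scale=2.3, size=900)  # positively skewed stand-in for MUAC (mm), assumed

print("Shapiro-Wilk          :", stats.shapiro(muac_mm))       # overall departure from normality
print("D'Agostino skewness   :", stats.skewtest(muac_mm))
print("Anscombe-Glynn kurtosis:", stats.kurtosistest(muac_mm))

transformed, lam = stats.boxcox(muac_mm)                        # Box-Cox power transformation
print(f"Box-Cox lambda = {lam:.2f}, Shapiro-Wilk after:", stats.shapiro(transformed))
```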

  3. New spatial upscaling methods for multi-point measurements: From normal to p-normal

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Xin

    2017-12-01

    Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
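    The least power estimation (LPE) mentioned in this record can be sketched as minimizing sum |x_i - m|^p over m; the snippet below contrasts it with the arithmetic mean (the p = 2 case) and the median (the p = 1 case) on synthetic point measurements. The value of p and the data are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
points = rng.laplace(loc=0.25, scale=0.05, size=30)   # heavy-tailed soil-moisture-like values (assumed)

def lpe(x, p):
    """Least power estimate: argmin_m sum |x - m|**p."""
    res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

print("arithmetic mean (p=2):", points.mean())
print("LPE with p=1.3       :", lpe(points, 1.3))
print("median (p=1)         :", np.median(points))
```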

  4. Finite Element Simulation and Experimental Verification of Internal Stress of Quenched AISI 4140 Cylinders

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Qin, Shengwei; Hao, Qingguo; Chen, Nailu; Zuo, Xunwei; Rong, Yonghua

    2017-03-01

    The study of internal stress in quenched AISI 4140 medium-carbon steel is of importance in engineering. In this work, finite element simulation (FES) was employed to predict the distribution of internal stress in quenched AISI 4140 cylinders of two diameters based on an exponent-modified (Ex-Modified) normalized function. The results indicate that FES based on the proposed Ex-Modified normalized function agrees better with X-ray diffraction measurements of the stress distribution than FES based on the normalized functions proposed by Abrassart, Desalos and Leblond, respectively, which is attributed to the Ex-Modified normalized function describing transformation plasticity better. The effect of the temperature distribution on phase formation, the origin of the residual stress distribution, and the effect of the transformation plasticity function on the residual stress distribution are further discussed.

  5. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, transformation with parameter lambda = -1 leads to the best results.
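    As a rough illustration of the setting (not the paper's simulation code), the sketch below draws Ex-Gaussian reaction times as a sum of normal and exponential components, applies the lambda = -1 power transformation, and compares normality before and after. All parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rt = rng.normal(400, 40, size=500) + rng.exponential(scale=150, size=500)  # Ex-Gaussian RTs in ms (assumed)

print("raw RTs        :", stats.shapiro(rt))
transformed = -1.0 / rt     # lambda = -1 power transform (sign kept so the ordering is preserved)
print("after lambda=-1:", stats.shapiro(transformed))
```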

  6. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, transformation with parameter lambda = -1 leads to the best results. PMID:25709588

  7. Bayesian alternative to the ISO-GUM's use of the Welch Satterthwaite formula

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu N.

    2006-02-01

    In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
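    A small numerical sketch of the Welch-Satterthwaite step discussed here: combine two assumed Type-A standard uncertainties, compute the effective degrees of freedom, and compare the t-based and plain normal coverage factors. The Bayesian-uncertainty variant proposed in the paper is not implemented; all numbers are illustrative.

```python
import numpy as np
from scipy import stats

u1, n1 = 0.042, 5     # standard uncertainty and sample size for input 1 (assumed)
u2, n2 = 0.015, 8     # standard uncertainty and sample size for input 2 (assumed)

uc = np.hypot(u1, u2)                                   # combined standard uncertainty
nu_eff = uc**4 / (u1**4 / (n1 - 1) + u2**4 / (n2 - 1))  # Welch-Satterthwaite effective degrees of freedom

k_t = stats.t.ppf(0.975, nu_eff)      # coverage factor from the scaled-and-shifted t-distribution
k_norm = stats.norm.ppf(0.975)        # coverage factor from a plain normal approximation

print(f"nu_eff = {nu_eff:.1f}; 95% half-widths: t-based {k_t*uc:.4f} vs normal {k_norm*uc:.4f}")
```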

  8. Apparent Transition in the Human Height Distribution Caused by Age-Dependent Variation during Puberty Period

    NASA Astrophysics Data System (ADS)

    Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto

    2013-08-01

    In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.

  9. A novel generalized normal distribution for human longevity and other negatively skewed data.

    PubMed

    Robertson, Henry T; Allison, David B

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by 1. An intuitively straightforward genesis; 2. Closed forms for the pdf, cdf, mode, quantile, and hazard functions; and 3. Accessibility to non-statisticians, based on its close relationship to the normal distribution.

  10. A Novel Generalized Normal Distribution for Human Longevity and other Negatively Skewed Data

    PubMed Central

    Robertson, Henry T.; Allison, David B.

    2012-01-01

    Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by 1. An intuitively straightforward genesis; 2. Closed forms for the pdf, cdf, mode, quantile, and hazard functions; and 3. Accessibility to non-statisticians, based on its close relationship to the normal distribution. PMID:22623974

  11. Scoring in genetically modified organism proficiency tests based on log-transformed results.

    PubMed

    Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P

    2006-01-01

    The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
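    A minimal sketch of the scoring step described in this record, under assumed values: log-transform the reported GM contents, then compute z-scores against an assigned value and target standard deviation on the log scale.

```python
import numpy as np

results_pct = np.array([0.82, 1.10, 0.95, 1.45, 0.67, 2.30, 1.02, 0.88])  # reported GM content in % (assumed)
log_results = np.log10(results_pct)

assigned = np.log10(1.0)   # assigned value on the log scale (assumed: 1.0 %)
sigma_p = 0.12             # target standard deviation on the log scale (assumed)

z_scores = (log_results - assigned) / sigma_p
print(np.round(z_scores, 2))
```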

  12. Statistical analysis of multivariate atmospheric variables. [cloud cover

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.

  13. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to wrong estimation. We propose a more flexible modeling based on a gamma distributed signal and a normal distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate to model Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.

  14. Study on probability distribution of prices in electricity market: A case study of zhejiang province, china

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.

    2009-05-01

    Studying the probability density and distribution functions of electricity prices helps power suppliers and purchasers assess their operations accurately, and helps the regulator monitor periods that deviate from the normal distribution. Based on the assumptions of a normally distributed load and a non-linear aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load. The conclusion is validated with electricity price data from the Zhejiang market. The results show that electricity prices are approximately normally distributed only when the supply-demand relationship is loose, whereas the prices deviate from the normal distribution and exhibit strong right-skewness otherwise. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
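    The mechanism described here can be illustrated (not with the paper's calibrated model) by passing a normally distributed load through an assumed convex aggregate supply curve and checking the skewness of the resulting prices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
load = rng.normal(loc=0.7, scale=0.08, size=100_000)     # load as a fraction of capacity (assumed)

def supply_curve(q):
    """Assumed convex bid stack: prices rise sharply as load approaches capacity."""
    return 30 + 20 * q + 400 * np.maximum(q - 0.85, 0.0) ** 2

price = supply_curve(np.clip(load, 0, 1))
print("price skewness:", stats.skew(price))   # > 0: right-skewed when supply is tight
```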

  15. Logistic Approximation to the Normal: The KL Rationale

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2006-01-01

    A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…
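    A sketch of the idea behind the KL rationale, under the assumptions of a standard normal reference and KL(normal || logistic) as the criterion: numerically search for the logistic scale that minimizes the divergence.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def kl_normal_vs_logistic(s):
    """KL divergence from the standard normal to a logistic(0, s) density."""
    integrand = lambda x: stats.norm.pdf(x) * (stats.norm.logpdf(x) - stats.logistic.logpdf(x, scale=s))
    return quad(integrand, -12, 12)[0]

res = minimize_scalar(kl_normal_vs_logistic, bounds=(0.3, 1.0), method="bounded")
print(f"KL-optimal logistic scale ~ {res.x:.4f} (scaling constant ~ {1/res.x:.3f})")
```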

  16. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.

  17. Log-Normal Distribution of Cosmic Voids in Simulations and Mocks

    NASA Astrophysics Data System (ADS)

    Russell, E.; Pycke, J.-R.

    2017-01-01

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
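    A minimal sketch of the distributional check described here: fit a three-parameter (shape, location, scale) log-normal to synthetic "void radii" and assess the fit. The simulated radii are an assumption, not catalog data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
radii = stats.lognorm.rvs(s=0.5, loc=2.0, scale=8.0, size=1500, random_state=rng)  # assumed void radii

shape, loc, scale = stats.lognorm.fit(radii)            # three-parameter maximum likelihood fit
ks = stats.kstest(radii, "lognorm", args=(shape, loc, scale))
print(f"shape={shape:.3f} loc={loc:.3f} scale={scale:.3f}, KS p-value={ks.pvalue:.3f}")
print("sample skewness:", stats.skew(radii))
```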

  18. Plasma Electrolyte Distributions in Humans-Normal or Skewed?

    PubMed

    Feldman, Mark; Dickson, Beverly

    2017-11-01

    It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na+), potassium (K+), chloride (Cl-) and bicarbonate (HCO3-) distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na+ distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl- distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. On the contrary, both the plasma K+ and HCO3- distributions were significantly skewed to the right (P < 0.01 vs. zero skew). There was also a suggestion from examining frequency distribution curves that the K+ and HCO3- distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.

  19. Robust Bayesian Analysis of Heavy-tailed Stochastic Volatility Models using Scale Mixtures of Normal Distributions

    PubMed Central

    Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.

    2009-01-01

    A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, student-t, slash and the variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock returns data on S&P500 index. Bayesian model selection criteria as well as out-of- sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043

  20. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  1. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.

  2. The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.

    ERIC Educational Resources Information Center

    Doerann-George, Judith

    The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…

  3. Smooth quantile normalization.

    PubMed

    Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada

    2018-04-01

    Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and are unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, but allowing that they may differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
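    For orientation, the snippet below sketches plain quantile normalization, the baseline that qsmooth generalizes; it is not the qsmooth algorithm itself (see the linked package for that), and the toy expression matrix is an assumption.

```python
import numpy as np

def quantile_normalize(mat):
    """Force every column (sample) to share the same empirical distribution."""
    ranks = np.argsort(np.argsort(mat, axis=0), axis=0)       # rank of each value within its column
    mean_by_rank = np.sort(mat, axis=0).mean(axis=1)          # reference distribution: mean of order statistics
    return mean_by_rank[ranks]

rng = np.random.default_rng(5)
counts = rng.lognormal(mean=2.0, sigma=1.0, size=(1000, 4))   # toy expression matrix (assumed)
normalized = quantile_normalize(counts)
print(np.allclose(np.sort(normalized[:, 0]), np.sort(normalized[:, 1])))  # True: identical distributions
```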

  4. Predicting durations of online collective actions based on Peaks' heights

    NASA Astrophysics Data System (ADS)

    Lu, Peng; Nie, Shizhao; Wang, Zheng; Jing, Ziwei; Yang, Jianwu; Qi, Zhongxiang; Pujia, Wangmo

    2018-02-01

    Capturing the whole process of collective actions, the peak model contains four stages: Prepare, Outbreak, Peak, and Vanish. Based on the peak model, this paper further investigates one of its key quantities, the ratio between peak height and span. Although durations (spans) and peak heights are highly diverse, the ratio between them appears quite stable. If the regularity of this ratio is discovered, we can predict how long a collective action lasts and when it ends based on the peak's height. In this work, we combined mathematical simulations and empirical big data on 148 cases to explore the regularity of the ratio's distribution. Simulation results indicate that the ratio has a regular distribution, which is not a normal distribution. The big data were collected from 148 online collective actions whose whole participation processes were recorded. The empirical results indicate that the ratio is closer to being log-normally distributed. This rule holds for both the full set and subgroups of the 148 online collective actions. A Q-Q plot is applied to check the normality of the ratio's logarithm, and the logarithm does follow the normal distribution.
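    The final check described in this record can be sketched as follows, with synthetic ratios standing in for the 148 observed ones: take the logarithm of the ratio and compare it to a normal distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
ratio = rng.lognormal(mean=-1.2, sigma=0.4, size=148)   # stand-in for the observed peak/span ratios (assumed)

log_ratio = np.log(ratio)
print("Shapiro-Wilk on log(ratio):", stats.shapiro(log_ratio))
osm, osr = stats.probplot(log_ratio, dist="norm", fit=False)  # theoretical vs ordered sample quantiles
print("Q-Q correlation:", np.corrcoef(osm, osr)[0, 1])
```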

  5. Comparison of Multidimensional Item Response Models: Multivariate Normal Ability Distributions versus Multivariate Polytomous Ability Distributions. Research Report. ETS RR-08-45

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan

    2008-01-01

    Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…

  6. LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu

    2017-01-20

    Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.

  7. Are your covariates under control? How normalization can re-introduce covariate effects.

    PubMed

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations were normality of the dependent variable is required.
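    A sketch of the comparison investigated here, under an assumed skewed, heteroscedastic data-generating process: contrast regress-then-INT with INT-then-regress and inspect the residual-covariate correlations. The size of any re-introduced correlation depends entirely on the assumed process.

```python
import numpy as np
from scipy import stats

def rank_int(x, c=3.0 / 8):
    """Rank-based inverse normal transformation (Blom offset)."""
    r = stats.rankdata(x)
    return stats.norm.ppf((r - c) / (len(x) - 2 * c + 1))

rng = np.random.default_rng(7)
n = 5000
covariate = rng.uniform(1, 3, n)
y = 0.5 * covariate + covariate * rng.gamma(2.0, 1.0, n)      # assumed skewed, heteroscedastic outcome

X = np.column_stack([np.ones(n), covariate])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]          # (a) adjust for the covariate first
resid_then_int = rank_int(resid)                              #     then apply rank-based INT

int_y = rank_int(y)                                           # (b) apply rank-based INT first
int_then_resid = int_y - X @ np.linalg.lstsq(X, int_y, rcond=None)[0]  #     then adjust for the covariate

print("corr(covariate, INT(residuals))     :", np.corrcoef(covariate, resid_then_int)[0, 1])
print("corr(covariate, residuals of INT(y)):", np.corrcoef(covariate, int_then_resid)[0, 1])
```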

  8. A short note on the maximal point-biserial correlation under non-normality.

    PubMed

    Cheng, Ying; Liu, Haiyan

    2016-11-01

    The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.

  9. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    PubMed

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods termed the normal score, Johnson, and Box-Cox transformations were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. Three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.

  10. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on Weibull distribution. It may also be used for other survival distributions, such as exponential, gamma, and log normal. The proposed method is applied on the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  11. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

    A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance caused by the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fit based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.

  12. Notes on power of normality tests of error terms in regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Střelec, Luboš

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most inferential procedures are based on the assumption of normality, i.e., the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. We introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
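    A minimal sketch of the setting (classical tests only; the RT class of robust tests is not implemented here): fit a linear regression on assumed toy data with heavy-tailed disturbances and test the residuals for normality.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 200
x = rng.uniform(0, 10, n)
errors = rng.standard_t(df=3, size=n)                 # heavy-tailed disturbances (assumed)
y = 1.0 + 2.0 * x + errors

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

print("Jarque-Bera :", stats.jarque_bera(resid))
print("Shapiro-Wilk:", stats.shapiro(resid))
```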

  13. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    NASA Technical Reports Server (NTRS)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109

  14. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^m p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.

  15. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

    A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma residence-time distribution (RTD) function and its first and second moments are derived from the individual two-parameter gamma distributions of randomly distributed variables, tracer travel distance, and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site due to its randomly distributed variables. The results validate using the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of non-ideal flow systems.

  16. Statistical Power Analysis with Microsoft Excel: Normal Tests for One or Two Means as a Prelude to Using Non-Central Distributions to Calculate Power

    ERIC Educational Resources Information Center

    Texeira, Antonio; Rosa, Alvaro; Calapez, Teresa

    2009-01-01

    This article presents statistical power analysis (SPA) based on the normal distribution using Excel, adopting textbook and SPA approaches. The objective is to present the latter in a comparative way within a framework that is familiar to textbook level readers, as a first step to understand SPA with other distributions. The analysis focuses on the…
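    The textbook normal-distribution power calculation that the article works through in Excel can be sketched in a few lines; the effect size, standard deviation, and per-group sample size below are assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

def power_two_means(delta, sigma, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided test comparing two independent means (known common sigma)."""
    se = sigma * np.sqrt(2.0 / n_per_group)
    z_crit = norm.ppf(1 - alpha / 2)
    z = abs(delta) / se
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

print(f"power = {power_two_means(delta=5.0, sigma=12.0, n_per_group=50):.3f}")
```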

  17. Confidence bounds for normal and lognormal distribution coefficients of variation

    Treesearch

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  18. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, bootstrap sample size estimation for comparing two parallel-design arms with continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Sample size calculations by mathematical formulas (under the normal distribution assumption) for the identical data are also carried out. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined using the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by the bootstrap for each sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation from the beginning, and to use the same statistical method as in the subsequent statistical analysis for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are representative of the population to which the proposed trial plans to extrapolate.
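    A sketch of the bootstrap power idea described here, with assumed pilot data: resample the pilot sample at a candidate per-arm size and record how often the Wilcoxon rank-sum (Mann-Whitney U) test rejects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
pilot_a = rng.lognormal(mean=1.0, sigma=0.6, size=40)         # non-normal pilot data, arm A (assumed)
pilot_b = rng.lognormal(mean=1.3, sigma=0.6, size=40)         # non-normal pilot data, arm B (assumed)

def bootstrap_power(a, b, n_per_arm, n_boot=2000, alpha=0.05):
    """Estimate power by resampling the pilot arms at the candidate sample size."""
    rejections = 0
    for _ in range(n_boot):
        xa = rng.choice(a, size=n_per_arm, replace=True)
        xb = rng.choice(b, size=n_per_arm, replace=True)
        if stats.mannwhitneyu(xa, xb, alternative="two-sided").pvalue < alpha:
            rejections += 1
    return rejections / n_boot

print("estimated power at n=60 per arm:", bootstrap_power(pilot_a, pilot_b, 60))
```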

  19. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on the analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither the parameter selection and thresholds estimation nor the process of cost function selection and complex gradient computation. Computer simulations have been carried out to suggest that the proposed algorithm is effective for attenuating SαS impulsive noise, and then the proposed algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.

  20. Measuring Treasury Bond Portfolio Risk and Portfolio Optimization with a Non-Gaussian Multivariate Model

    NASA Astrophysics Data System (ADS)

    Dong, Yijun

    Research on measuring the risk of bond portfolios and on bond portfolio optimization was previously relatively rare, because the risk factors of bond portfolios are not very volatile. However, this condition has changed recently. The 2008 financial crisis brought high volatility to the risk factors and the related bond securities, even for highly rated U.S. Treasury bonds. Moreover, the risk factors of bond portfolios show fat tails and asymmetry, like the risk factors of equity portfolios. Therefore, advanced techniques are needed to measure and manage the risk of bond portfolios. In our paper, we first apply an autoregressive moving average generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) model with multivariate normal tempered stable (MNTS) distribution innovations to predict the risk factors of U.S. Treasury bonds, and we statistically demonstrate through goodness-of-fit tests that the MNTS distribution captures the properties of these risk factors. Then, based on empirical evidence, we find that the VaR and AVaR estimated by assuming a normal tempered stable distribution are more realistic and reliable than those estimated by assuming a normal distribution, especially for the financial crisis period. Finally, we use mean-risk portfolio optimization to minimize the portfolios' potential risks. The empirical study indicates that the optimized bond portfolios have better risk-adjusted performance than the benchmark portfolios for some periods. Moreover, the optimized bond portfolios obtained by assuming a normal tempered stable distribution outperform those obtained by assuming a normal distribution.

  1. A novel gamma-fitting statistical method for anti-drug antibody assays to establish assay cut points for data with non-normal distribution.

    PubMed

    Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena

    2010-01-31

    In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact the safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug-naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal theory based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%) with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates with negative biases as large as -2.3%, with absolute bias decreasing with the shape parameter. These results were consistent with the well known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in the case of screening immunogenicity assays will not meet the minimum 5% false positive target as proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
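    An illustrative sketch of the cut-point comparison discussed in this record, on simulated drug-naive signals (all parameters assumed): fit a three-parameter gamma and compare its 95th-percentile cut point with the normal-theory mean + 1.645·SD cut point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
signals = stats.gamma.rvs(a=4.0, loc=0.05, scale=0.02, size=200, random_state=rng)  # unimodal, right-skewed (assumed)

a, loc, scale = stats.gamma.fit(signals)                     # three-parameter gamma fit
cut_gamma = stats.gamma.ppf(0.95, a, loc=loc, scale=scale)   # gamma-based cut point targeting 5% false positives
cut_normal = signals.mean() + 1.645 * signals.std(ddof=1)    # normal-theory cut point

emp_fp = lambda cut: (signals > cut).mean()                  # empirical false positive rate in the sample
print(f"gamma cut point  {cut_gamma:.4f}  (empirical FP rate {emp_fp(cut_gamma):.3f})")
print(f"normal cut point {cut_normal:.4f}  (empirical FP rate {emp_fp(cut_normal):.3f})")
```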

  2. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    ERIC Educational Resources Information Center

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…

  3. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Treesearch

    Steve P. Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations. To develop these confidence bounds and test, we first establish that estimators based on Newton steps from n-...

  4. Item Response Theory with Estimation of the Latent Population Distribution Using Spline-Based Densities

    ERIC Educational Resources Information Center

    Woods, Carol M.; Thissen, David

    2006-01-01

    The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…

  5. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
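
    The sketch below illustrates only the Box-Cox ingredient of this approach, estimating the power parameter by maximum likelihood with `scipy.stats.boxcox` on simulated skewed data; it is not the authors' EM algorithm for the multivariate t mixture.

    ```python
    # Sketch of the Box-Cox step only: estimate the power parameter lambda by ML
    # and transform a positively skewed variable toward symmetry.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.lognormal(mean=1.0, sigma=0.6, size=500)   # illustrative skewed, positive data

    x_bc, lam = stats.boxcox(x)                        # ML estimate of lambda and transformed data
    print(f"estimated lambda: {lam:.3f}, skewness before: {stats.skew(x):.2f}, "
          f"after: {stats.skew(x_bc):.2f}")
    ```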

  6. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.

  7. Distribution Characteristics of Air-Bone Gaps – Evidence of Bias in Manual Audiometry

    PubMed Central

    Margolis, Robert H.; Wilson, Richard H.; Popelka, Gerald R.; Eikelboom, Robert H.; Swanepoel, De Wet; Saly, George L.

    2015-01-01

    Objective: Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design: The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Results: Automated audiometry produced air-bone gaps that were normally distributed suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and show evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions: Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps. PMID:26627469

  8. Identifying Epigenetic Biomarkers using Maximal Relevance and Minimal Redundancy Based Feature Selection for Multi-Omics Data.

    PubMed

    Mallik, Saurav; Bhadra, Tapas; Maulik, Ujjwal

    2017-01-01

    Epigenetic Biomarker discovery is an important task in bioinformatics. In this article, we develop a new framework of identifying statistically significant epigenetic biomarkers using maximal-relevance and minimal-redundancy criterion based feature (gene) selection for multi-omics dataset. Firstly, we determine the genes that have both expression as well as methylation values, and follow normal distribution. Similarly, we identify the genes which consist of both expression and methylation values, but do not follow normal distribution. For each case, we utilize a gene-selection method that provides maximal-relevant, but variable-weighted minimum-redundant genes as top ranked genes. For statistical validation, we apply t-test on both the expression and methylation data consisting of only the normally distributed top ranked genes to determine how many of them are both differentially expressed and methylated. Similarly, we utilize Limma package for performing non-parametric Empirical Bayes test on both expression and methylation data comprising only the non-normally distributed top ranked genes to identify how many of them are both differentially expressed and methylated. We finally report the top-ranking significant gene-markers with biological validation. Moreover, our framework improves positive predictive rate and reduces false positive rate in marker identification. In addition, we provide a comparative analysis of our gene-selection method as well as other methods based on classification performances obtained using several well-known classifiers.

  9. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
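
    As a hedged illustration of one of the transformations studied, the sketch below applies a rank-based inverse normal transformation (with the common Blom offset, assumed here) to a simulated skewed trait.

    ```python
    # Rank-based inverse normal transformation (Blom offset), a common way to
    # normalize a skewed trait before simple linear regression.
    import numpy as np
    from scipy import stats

    def rank_inverse_normal(y, c=3.0 / 8.0):
        ranks = stats.rankdata(y)                       # average ranks for ties
        return stats.norm.ppf((ranks - c) / (len(y) - 2 * c + 1))

    rng = np.random.default_rng(2)
    trait = rng.gamma(shape=1.5, scale=2.0, size=1000)  # illustrative skewed trait
    trait_int = rank_inverse_normal(trait)
    print(stats.skew(trait), stats.skew(trait_int))     # skewness before and after
    ```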

  10. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    PubMed Central

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175

  11. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
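
    The sketch below illustrates the normal-density approach to stabilized inverse probability weights for a simulated continuous exposure; the linear exposure model and variable names are assumptions for illustration, not the authors' simulation design.

    ```python
    # Sketch: stabilized inverse probability weights for a continuous exposure
    # using normal densities (one of the compared approaches); data are illustrative.
    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(3)
    n = 5000
    confounder = rng.normal(size=n)
    exposure = 0.5 * confounder + rng.normal(size=n)

    # Denominator: density of exposure given the confounder (linear model, residual SD)
    X = sm.add_constant(confounder)
    fit = sm.OLS(exposure, X).fit()
    dens_denom = stats.norm.pdf(exposure, loc=fit.fittedvalues, scale=np.sqrt(fit.scale))

    # Numerator: marginal density of exposure (stabilizes the weights)
    dens_num = stats.norm.pdf(exposure, loc=exposure.mean(), scale=exposure.std(ddof=1))

    weights = dens_num / dens_denom
    print(weights.mean(), weights.min(), weights.max())
    ```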

  12. Entropy-based goodness-of-fit test: Application to the Pareto distribution

    NASA Astrophysics Data System (ADS)

    Lequesne, Justine

    2013-08-01

    Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.

  13. Optical clock distribution in supercomputers using polyimide-based waveguides

    NASA Astrophysics Data System (ADS)

    Bihari, Bipin; Gan, Jianhua; Wu, Linghui; Liu, Yujie; Tang, Suning; Chen, Ray T.

    1999-04-01

    Guided-wave optics is a promising way to deliver high-speed clock signals in supercomputers with minimized clock skew. Si-CMOS-compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 for high-speed massive clock signal distribution on a Cray T-90 supercomputer board has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output couplers and 1-to-2 splitters. Surface-normal couplers can couple the optical clock signals into and out of the H-tree polyimide waveguides surface-normally, which facilitates the integration of photodetectors to convert the optical signal to an electrical signal. A 45-degree surface-normal coupler has been integrated at each output end. The measured output coupling efficiency is nearly 100 percent. The output profile from the 45-degree surface-normal coupler was calculated using the Fresnel approximation; the theoretical result is in good agreement with the experimental result. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.

  14. Influence of Transformation Plasticity on the Distribution of Internal Stress in Three Water-Quenched Cylinders

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Qin, Shengwei; Zhang, Jiazhi; Wang, Ying; Rong, Yonghua; Zuo, Xunwei; Chen, Nailu

    2017-10-01

    Based on the hardenability of three medium carbon steels, cylinders with the same 60-mm diameter and 240-mm length were designed for quenching in water to obtain microstructures, including a pearlite matrix (Chinese steel mark: 45), a bainite matrix (42CrMo), and a martensite matrix (40CrNiMo). Through the combination of normalized functions describing transformation plasticity (TP), the thermo-elasto-plastic constitutive equation was deduced. The results indicate that the finite element simulation (FES) of the internal stress distribution in the three kinds of hardenable steel cylinders based on the proposed exponent-modified (Ex-Modified) normalized function is more consistent with the X-ray diffraction (XRD) measurements than those based on the normalized functions proposed by Abrassart, Desalos, and Leblond, which is attributed to the fact that the Ex-Modified normalized function better describes the TP kinetics. In addition, there was no significant difference between the calculated and measured stress distributions, even though TP was taken into account for the 45 carbon steel; that is, TP can be ignored in FES. In contrast, in the 42CrMo and 40CrNiMo alloyed steels, the significant effect of TP on the residual stress distributions was demonstrated, meaning that TP must be included in the FES. The rationality of the preceding conclusions was analyzed. The complex quenching stress is a consequence of interactions between the thermal and phase transformation stresses. The separated calculations indicate that the three steels exhibit similar thermal stress distributions for the same water-quenching condition, but different phase transformation stresses between 45 carbon steel and alloyed steels, leading to different distributions of their axial and tangential stresses.

  15. On measures of association among genetic variables

    PubMed Central

    Gianola, Daniel; Manfredi, Eduardo; Simianer, Henner

    2012-01-01

    Summary Systems involving many variables are important in population and quantitative genetics, for example, in multi-trait prediction of breeding values and in exploration of multi-locus associations. We studied departures of the joint distribution of sets of genetic variables from independence. New measures of association based on notions of statistical distance between distributions are presented. These are more general than correlations, which are pairwise measures, and lack a clear interpretation beyond the bivariate normal distribution. Our measures are based on logarithmic (Kullback-Leibler) and on relative ‘distances’ between distributions. Indexes of association are developed and illustrated for quantitative genetics settings in which the joint distribution of the variables is either multivariate normal or multivariate-t, and we show how the indexes can be used to study linkage disequilibrium in a two-locus system with multiple alleles and present applications to systems of correlated beta distributions. Two multivariate beta and multivariate beta-binomial processes are examined, and new distributions are introduced: the GMS-Sarmanov multivariate beta and its beta-binomial counterpart. PMID:22742500

  16. Polymorphic mountain whitefish (Prosopium williamsoni) in a coastal riverscape: size class assemblages, distribution, and habitat associations

    USGS Publications Warehouse

    Starr, James C.; Torgersen, Christian E.

    2015-01-01

    We compared the assemblage structure, spatial distributions, and habitat associations of mountain whitefish (Prosopium williamsoni) morphotypes and size classes. We hypothesised that morphotypes would have different spatial distributions and would be associated with different habitat features based on feeding behaviour and diet. Spatially continuous sampling was conducted over a broad extent (29 km) in the Calawah River, WA (USA). Whitefish were enumerated via snorkelling in three size classes: small (10–29 cm), medium (30–49 cm), and large (≥50 cm). We identified morphotypes based on head and snout morphology: a pinocchio form that had an elongated snout and a normal form with a blunted snout. Large size classes of both morphotypes were distributed downstream of small and medium size classes, and normal whitefish were distributed downstream of pinocchio whitefish. Ordination of whitefish assemblages with nonmetric multidimensional scaling revealed that normal whitefish size classes were associated with higher gradient and depth, whereas pinocchio whitefish size classes were positively associated with pool area, distance upstream, and depth. Reach-scale generalised additive models indicated that normal whitefish relative density was associated with larger substrate size in downstream reaches (R2 = 0.64), and pinocchio whitefish were associated with greater stream depth in the reaches farther upstream (R2 = 0.87). These results suggest broad-scale spatial segregation (1–10 km), particularly between larger and more phenotypically extreme individuals. These results provide the first perspective on spatial distributions and habitat relationships of polymorphic mountain whitefish.

  17. Spatio-temporal precipitation climatology over complex terrain using a censored additive regression model.

    PubMed

    Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim

    2017-06-15

    Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, where a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms existing methods, even for arbitrary locations.
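
    As a minimal sketch of the distributional building block only (not the full spatio-temporal additive model), the code below fits a normal distribution left-censored at zero by maximum likelihood to simulated precipitation-like data.

    ```python
    # Sketch: maximum likelihood fit of a normal distribution left-censored at zero;
    # zero observations contribute the censoring probability to the likelihood.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(4)
    latent = rng.normal(loc=0.5, scale=2.0, size=2000)   # illustrative latent variable
    obs = np.maximum(latent, 0.0)                        # observed data, censored at zero

    def negloglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        zero = obs <= 0.0
        ll_pos = stats.norm.logpdf(obs[~zero], mu, sigma).sum()
        ll_zero = stats.norm.logcdf(0.0, mu, sigma) * zero.sum()
        return -(ll_pos + ll_zero)

    res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    print(mu_hat, sigma_hat)
    ```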

  18. Correlates of sexual risk behaviors among young Black MSM: implications for clinic-based counseling programs

    PubMed Central

    Crosby, Richard A.; Mena, Leandro; Ricks, JaNelle

    2018-01-01

    This study applied an 8-item index of recent sexual risk behaviors to young Black men who have sex with men (YBMSM) and evaluated the distribution for normality. The distribution was tested for associations with possible antecedents of sexual risk. YBMSM (N=600), ages 16–29 years, were recruited from an STI clinic, located in the Southern United States. Men completed an extensive audio-computer assisted self-interview. Thirteen possible antecedents of sexual risk, as assessed by the index, were selected for analyses. The 8-item index formed a normal distribution with a mean of 4.77 (sd=1.77). In adjusted analyses, not having completed education beyond high school was associated with less risk, as was having sex with females. Conversely, meeting sex partners online was associated with greater risk, as was reporting that sex partners were drunk during sex. The obtained normal distribution of sexual risk behaviors suggests a corresponding need to “target and tailor” clinic-based counseling and prevention services for YBMSM. Avoiding sex when partners are intoxicated may be an especially valuable goal of counseling sessions. PMID:27875903

  19. Correlates of sexual-risk behaviors among young black MSM: implications for clinic-based counseling programs.

    PubMed

    Crosby, Richard A; Mena, Leandro; Ricks, JaNelle M

    2017-06-01

    This study applied an 8-item index of recent sexual-risk behaviors to young Black men who have sex with men (YBMSM) and evaluated the distribution for normality. The distribution was tested for associations with possible antecedents of sexual risk. YBMSM (N = 600), aged 16-29 years, were recruited from a sexually transmitted infection clinic, located in the southern US. Men completed an extensive audio computer-assisted self-interview. Thirteen possible antecedents of sexual risk, as assessed by the index, were selected for analyses. The 8-item index formed a normal distribution with a mean of 4.77 (SD = 1.77). In adjusted analyses, not having completed education beyond high school was associated with less risk, as was having sex with females. Conversely, meeting sex partners online was associated with greater risk, as was reporting that sex partners were drunk during sex. The obtained normal distribution of sexual-risk behaviors suggests a corresponding need to "target and tailor" clinic-based counseling and prevention services for YBMSM. Avoiding sex when partners are intoxicated may be an especially valuable goal of counseling sessions.

  20. Reply to: Are There More Gifted People than Would Be Expected on a Normal Distribution?

    ERIC Educational Resources Information Center

    Gallagher, James J.

    2014-01-01

    The author responds to the article by Warne, Godwin, and Smith (2013) on the question of whether there are more gifted people than would be expected in a Gaussian normal distribution. He asserts that the answer to this question is yes, based on (a) data that he and his colleagues have collected, (b) data that are already available and quoted by…

  1. The distribution of emission-line galaxies in selected areas of the sky

    NASA Astrophysics Data System (ADS)

    Moody, J. Ward

    1988-11-01

    The author discusses the spatial distribution of emission-line galaxies (ELGs) relative to normal galaxies in several areas of the sky. Current evidence supports the notion that ELGs trace a low-density population in all the surveyed areas with the possible exception of the CfA "Slice of the Universe" survey. Based on this and other survey data in the north galactic cap, it is suggested that the ELGs inside the Bootes void may actually define the edge of a totally empty volume within an underdense distribution of normal galaxies.

  2. The distribution of emission-line galaxies in selected areas of the sky

    NASA Technical Reports Server (NTRS)

    Moody, J. Ward

    1988-01-01

    The spatial distribution of emission-line galaxies (ELGs) relative to normal galaxies in several areas of the sky is discussed. Current evidence supports the notion that ELGs trace a low-density population in all the surveyed areas with the possible exception of the CfA 'Slice of the Universe' survey. Based on this and other survey data in the north galactic cap, it is suggested that the ELGs inside the Bootes void may actually define the edge of a totally empty volume within an underdense distribution of normal galaxies.

  3. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) $a_3 = \mu_3/\sigma^3$ and the kurtosis $a_4 = \mu_4/\sigma^4 - 3$, where $\mu_i$ is the $i$th-order central moment and $\sigma$ is the standard deviation. It is well known that for a normal distribution $a_3 = a_4 = 0$.
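
    A small illustration of these diagnostics, computing the skewness and excess kurtosis of simulated O-F residuals with scipy (both should be near zero for approximately normal residuals):

    ```python
    # Skewness a3 and excess kurtosis a4 of observed-minus-forecast residuals;
    # the residuals here are simulated for illustration only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    o_minus_f = rng.normal(loc=0.0, scale=1.2, size=10000)   # illustrative O-F residuals

    a3 = stats.skew(o_minus_f)
    a4 = stats.kurtosis(o_minus_f)        # Fisher definition: excess kurtosis, 0 for a normal
    print(f"a3 = {a3:.3f}, a4 = {a4:.3f}")
    ```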

  4. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
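
    The sketch below shows only the Huber-type M-estimation ingredient, fitted with statsmodels' RLM on a simulated moderation-style model with heavy-tailed errors; it is not the authors' two-level regression procedure.

    ```python
    # Sketch: M-estimation with Huber-type weights for a simple moderation-style model
    # y ~ x1 + x2 + x1*x2; this illustrates the robust ingredient, not the two-level model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 500
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.standard_t(df=3, size=n)  # heavy tails

    X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
    robust_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # Huber-type weights
    ols_fit = sm.OLS(y, X).fit()                                  # normal-theory comparison
    print(robust_fit.params)
    print(ols_fit.params)
    ```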

  5. Linear energy transfer incorporated intensity modulated proton therapy optimization

    NASA Astrophysics Data System (ADS)

    Cao, Wenhua; Khabazian, Azin; Yepes, Pablo P.; Lim, Gino; Poenisch, Falk; Grosshans, David R.; Mohan, Radhe

    2018-01-01

    The purpose of this study was to investigate the feasibility of incorporating linear energy transfer (LET) into the optimization of intensity modulated proton therapy (IMPT) plans. Because increased LET correlates with increased biological effectiveness of protons, high LETs in target volumes and low LETs in critical structures and normal tissues are preferred in an IMPT plan. However, if not explicitly incorporated into the optimization criteria, different IMPT plans may yield similar physical dose distributions but greatly different LET, specifically dose-averaged LET, distributions. Conventionally, the IMPT optimization criteria (or cost function) only include dose-based objectives in which the relative biological effectiveness (RBE) is assumed to have a constant value of 1.1. In this study, we added LET-based objectives for maximizing LET in target volumes and minimizing LET in critical structures and normal tissues. Due to the fractional programming nature of the resulting model, we used a variable reformulation approach so that the optimization process is computationally equivalent to conventional IMPT optimization. In this study, five brain tumor patients who had been treated with proton therapy at our institution were selected. Two plans were created for each patient based on the proposed LET-incorporated optimization (LETOpt) and the conventional dose-based optimization (DoseOpt). The optimized plans were compared in terms of both dose (assuming a constant RBE of 1.1 as adopted in clinical practice) and LET. Both optimization approaches were able to generate comparable dose distributions. The LET-incorporated optimization achieved not only pronounced reduction of LET values in critical organs, such as brainstem and optic chiasm, but also increased LET in target volumes, compared to the conventional dose-based optimization. However, on occasion, there was a need to trade off the acceptability of dose and LET distributions. Our conclusion is that the inclusion of LET-dependent criteria in the IMPT optimization could lead to dose distributions similar to those of the conventional optimization but superior LET distributions in target volumes and normal tissues. This may have substantial advantages in improving tumor control and reducing normal tissue toxicities.

  6. Analysis of vector wind change with respect to time for Cape Kennedy, Florida

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1978-01-01

    Multivariate analysis was used to study the joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time, which is hypothesized to be quadravariate normal; the fourteen statistics of this distribution, calculated from 15 years of twice-daily rawinsonde data, are presented by monthly reference periods for altitudes from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of vector wind change is Rayleigh are tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of vector wind at a future time given the vector wind at an initial time are derived. Wind changes over time periods from 1 to 5 hours, calculated from Jimsphere data, are presented. Extension of the theoretical prediction (based on rawinsonde data) of the wind component change standard deviation to time periods of 1 to 5 hours falls (with a few exceptions) within the 95 percentile confidence band of the population estimate obtained from the Jimsphere sample data. The joint distributions of wind change components, conditional wind components, and 1 km vector wind shear change components are illustrated by probability ellipses at the 95 percentile level.

  7. XFM demonstrates preferential accumulation of a vanadyl-based MRI contrast agent in murine colonic tumors

    PubMed Central

    Mustafi, Devkumar; Ward, Jesse; Dougherty, Urszula; Bissonnette, Marc; Hart, John; Vogt, Stefan; Karczmar, Gregory S.

    2016-01-01

    Contrast agents that specifically enhance cancers on MRI would allow earlier detection. Vanadyl-based chelates (VCs) selectively enhance rodent cancers on MRI, suggesting selective uptake of VCs by cancers. Here we report X-ray fluorescence microscopy (XFM) of VC uptake by murine colon cancer. Colonic tumors in mice treated with azoxymethane/dextran sulfate sodium were identified by MRI. Then a gadolinium-based contrast agent and a VC were injected I.V.; mice were sacrificed and colons sectioned. VC distribution was sampled at 120 minutes after injection to evaluate the long term accumulation. Gadolinium distribution was sampled at 10 minutes after injection due to its rapid washout. XFM was performed on 72 regions of normal and cancerous colon from 5 normal mice and 4 cancer-bearing mice. XFM showed that all gadolinium was extracellular with similar concentrations in colon cancers and normal colon. In contrast, the average VC concentration was 2-fold higher in cancers vs. normal tissue (p<0.002). Cancers also contained numerous ‘hot spots’ with intracellular VC concentrations 6-fold higher than the concentration in normal colon (p<0.0001). No ‘hot spots’ were detected in normal colon. This is the first direct demonstration that VCs selectively accumulate in cancer cells, and thus may improve cancer detection. PMID:25813904

  8. Differential models of twin correlations in skew for body-mass index (BMI).

    PubMed

    Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric

    2018-01-01

    Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.

  9. Comparison of parametric and bootstrap method in bioequivalence test.

    PubMed

    Ahn, Byung-Jin; Yim, Dong-Seok

    2009-10-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
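
    As an illustration of the comparison, under assumed paired, simulated data rather than the archived crossover datasets, the sketch below contrasts a parametric t-based 90% CI for the mean log(AUC) difference with a percentile bootstrap 90% CI.

    ```python
    # Sketch: parametric vs percentile-bootstrap 90% CI for the formulation effect
    # (mean difference in log AUC, test minus reference); paired data are simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 24
    diff_log_auc = rng.normal(loc=0.05, scale=0.20, size=n)   # within-subject log AUC differences

    # Parametric 90% CI based on the t distribution
    mean, se = diff_log_auc.mean(), diff_log_auc.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.95, df=n - 1)
    ci_param = (mean - t_crit * se, mean + t_crit * se)

    # Nonparametric percentile bootstrap 90% CI
    boot_means = np.array([rng.choice(diff_log_auc, size=n, replace=True).mean()
                           for _ in range(5000)])
    ci_boot = tuple(np.percentile(boot_means, [5, 95]))

    # Back-transform to the ratio scale to compare with the 0.80-1.25 acceptance range
    print(np.exp(ci_param), np.exp(ci_boot))
    ```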

  10. Comparison of Parametric and Bootstrap Method in Bioequivalence Test

    PubMed Central

    Ahn, Byung-Jin

    2009-01-01

    The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based upon the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods we performed repeated estimation of bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver.3). Bootstrap nonparametric 90% CIs of formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets were slightly different from nonparametric 90% CIs obtained from BE tests on resampled datasets. Histograms and density curves of formulation effects obtained from resampled datasets were similar to those of normal distribution. However, in 2 of 3 resampled log (AUC) datasets, the estimates of formulation effects did not follow the Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one of the nonparametric CIs of formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log (AUC) datasets. Currently, the 80~125% rule based upon the parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption. PMID:19915699

  11. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.

  12. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
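
    A minimal sketch of the moment-matching idea, solving a linear system for the polynomial coefficients on a bounded interval; this follows the general method of moments and is not claimed to reproduce the authors' exact algorithm.

    ```python
    # Sketch: Nth-degree polynomial PDF approximation on [a, b] by matching raw moments.
    # Matching integral x^m * p(x) dx to the sample moment mu_m gives a linear system.
    import numpy as np

    def polynomial_pdf(samples, degree, a, b):
        # Sample raw moments mu_0..mu_N (mu_0 = 1)
        mu = np.array([np.mean(samples ** m) for m in range(degree + 1)])
        # Matrix of integrals of x^m * x^k over [a, b]
        M = np.array([[(b ** (m + k + 1) - a ** (m + k + 1)) / (m + k + 1)
                       for k in range(degree + 1)] for m in range(degree + 1)])
        return np.linalg.solve(M, mu)            # p(x) = sum_k coeffs[k] * x**k

    rng = np.random.default_rng(8)
    data = rng.normal(loc=0.0, scale=1.0, size=10000)
    coeffs = polynomial_pdf(data, degree=6, a=-4.0, b=4.0)
    x = np.linspace(-4, 4, 5)
    print(np.polyval(coeffs[::-1], x))           # approximate density values at x
    ```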

  13. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.

  14. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

    When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.

  15. Conormal distributions in the Shubin calculus of pseudodifferential operators

    NASA Astrophysics Data System (ADS)

    Cappiello, Marco; Schulz, René; Wahlberg, Patrik

    2018-02-01

    We characterize the Schwartz kernels of pseudodifferential operators of Shubin type by means of a Fourier-Bros-Iagolnitzer transform. Based on this, we introduce as a generalization a new class of tempered distributions called Shubin conormal distributions. We study their transformation behavior, normal forms, and microlocal properties.

  16. About normal distribution on SO(3) group in texture analysis

    NASA Astrophysics Data System (ADS)

    Savyolova, T. I.; Filatov, S. V.

    2017-12-01

    This article studies and compares different normal distributions (NDs) on SO(3) group, which are used in texture analysis. Those NDs are: Fisher normal distribution (FND), Bunge normal distribution (BND), central normal distribution (CND) and wrapped normal distribution (WND). All of the previously mentioned NDs are central functions on SO(3) group. CND is a subcase for normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy’s central limit theorem). WND is motivated by CLT in R 3 and mapped to SO(3) group. A Monte Carlo method for modeling normally distributed values was studied for both CND and WND. All of the NDs mentioned above are used for modeling different components of crystallites orientation distribution function in texture analysis.
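
    One hedged way to realize the "mapped normal" idea in code is to draw rotation vectors from an isotropic normal in R3 and map them to SO(3) through the exponential map (via scipy's Rotation class); this is illustrative only and is not claimed to coincide with any particular ND discussed in the article.

    ```python
    # Sketch: Monte Carlo draws of orientations by sampling a rotation vector from an
    # isotropic normal in R^3 and mapping it to SO(3) via the exponential map.
    import numpy as np
    from scipy.spatial.transform import Rotation

    rng = np.random.default_rng(9)
    sigma = 0.2                                      # spread parameter in radians (assumed)
    rotvecs = rng.normal(scale=sigma, size=(1000, 3))
    rotations = Rotation.from_rotvec(rotvecs)        # 1000 orientations near the identity

    angles = np.linalg.norm(rotvecs, axis=1)         # misorientation angles from identity
    print(angles.mean(), np.degrees(angles.mean()))
    ```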

  17. Fundamentals of Research Data and Variables: The Devil Is in the Details.

    PubMed

    Vetter, Thomas R

    2017-10-01

    Designing, conducting, analyzing, reporting, and interpreting the findings of a research study require an understanding of the types and characteristics of data and variables. Descriptive statistics are typically used simply to calculate, describe, and summarize the collected research data in a logical, meaningful, and efficient way. Inferential statistics allow researchers to make a valid estimate of the association between an intervention and the treatment effect in a specific population, based upon their randomly collected, representative sample data. Categorical data can be either dichotomous or polytomous. Dichotomous data have only 2 categories, and thus are considered binary. Polytomous data have more than 2 categories. Unlike dichotomous and polytomous data, ordinal data are rank ordered, typically based on a numerical scale that is comprised of a small set of discrete classes or integers. Continuous data are measured on a continuum and can have any numeric value over this continuous range. Continuous data can be meaningfully divided into smaller and smaller or finer and finer increments, depending upon the precision of the measurement instrument. Interval data are a form of continuous data in which equal intervals represent equal differences in the property being measured. Ratio data are another form of continuous data, which have the same properties as interval data, plus a true definition of an absolute zero point, and the ratios of the values on the measurement scale make sense. The normal (Gaussian) distribution ("bell-shaped curve") is one of the most common statistical distributions. Many applied inferential statistical tests are predicated on the assumption that the analyzed data follow a normal distribution. The histogram and the Q-Q plot are 2 graphical methods to assess if a set of data have a normal distribution (display "normality"). The Shapiro-Wilk test and the Kolmogorov-Smirnov test are 2 well-known and historically widely applied quantitative methods to assess for data normality. Parametric statistical tests make certain assumptions about the characteristics and/or parameters of the underlying population distribution upon which the test is based, whereas nonparametric tests make fewer or less rigorous assumptions. If the normality test concludes that the study data deviate significantly from a Gaussian distribution, rather than applying a less robust nonparametric test, the problem can potentially be remedied by judiciously and openly: (1) performing a data transformation of all the data values; or (2) eliminating any obvious data outlier(s).
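
    A brief illustration of the graphical and quantitative normality checks mentioned above, applied to a simulated sample (note that the KS p-value is only approximate when the parameters are estimated from the data):

    ```python
    # Sketch: histogram, Q-Q plot, Shapiro-Wilk, and Kolmogorov-Smirnov checks of normality.
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(10)
    data = rng.normal(loc=100.0, scale=15.0, size=200)   # illustrative sample

    w_stat, w_p = stats.shapiro(data)                    # Shapiro-Wilk test
    ks_stat, ks_p = stats.kstest(data, "norm",
                                 args=(data.mean(), data.std(ddof=1)))  # KS test
    print(f"Shapiro-Wilk p = {w_p:.3f}, KS p = {ks_p:.3f}")  # KS p approximate (fitted params)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.hist(data, bins=20)                              # histogram
    stats.probplot(data, dist="norm", plot=ax2)          # Q-Q plot
    plt.show()
    ```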

  18. Comparison of low-altitude wind-shear statistics derived from measured and proposed standard wind profiles

    NASA Technical Reports Server (NTRS)

    Usry, J. W.

    1983-01-01

    Wind shear statistics were calculated for a simulated set of wind profiles based on a proposed standard wind field data base. Wind shears were grouped in altitude bands of 100 ft between 100 and 1400 ft and in wind shear increments of 0.025 knot/ft. Frequency distributions, means, and standard deviations for each altitude band and for the total sample were derived for both sets. It was found that frequency distributions in each altitude band for the simulated data set were more dispersed below 800 ft and less dispersed above 900 ft than those for the measured data set. Total sample frequency of occurrence for the two data sets was about equal for wind shear values between ±0.075 knot/ft, but the simulated data set had significantly larger values for all wind shears outside these boundaries. It is shown that neither data set was normally distributed; similar results are observed from the cumulative frequency distributions.

  19. Effects of axisymmetric and normal air jet plumes and solid plume on cylindrical afterbody pressure distributions at Mach numbers from 1.65 to 2.50

    NASA Technical Reports Server (NTRS)

    Covell, P. F.

    1982-01-01

    A wind tunnel investigation of the interference effects of axisymmetric nozzle air plumes, a solid plume, and normal air jet plumes on the afterbody pressure distributions and base pressures of a cylindrical afterbody model was conducted at Mach numbers from 1.65 to 2.50. The axisymmetric nozzles, which varied in exit lip Mach number from 1.7 to 2.7, and the normal air jet nozzle were tested at jet pressure ratios from 1 (jet off) to 615. The tests were conducted at an angle of attack of 0 deg and a Reynolds number per meter of 6.56 million. The results of the investigation show that the solid plume induces greater interference effects than those induced by the axisymmetric nozzle plumes at the selected underexpanded design conditions. A thrust coefficient parameter based on nozzle lip conditions was found to correlate the afterbody disturbance distance and the base pressure between the different axisymmetric nozzles. The normal air jet plume and the solid plume induce afterbody disturbance distances similar to those induced by the axisymmetric air plumes when base pressure is held constant.

  20. The stochastic distribution of available coefficient of friction for human locomotion of five different floor surfaces.

    PubMed

    Chang, Wen-Ruey; Matz, Simon; Chang, Chien-Chi

    2014-05-01

    The maximum coefficient of friction that can be supported at the shoe and floor interface without a slip is usually called the available coefficient of friction (ACOF) for human locomotion. The probability of a slip could be estimated using a statistical model by comparing the ACOF with the required coefficient of friction (RCOF), assuming that both coefficients have stochastic distributions. An investigation of the stochastic distributions of the ACOF of five different floor surfaces under dry, water and glycerol conditions is presented in this paper. One hundred friction measurements were performed on each floor surface under each surface condition. The Kolmogorov-Smirnov goodness-of-fit test was used to determine if the distribution of the ACOF was a good fit with the normal, log-normal and Weibull distributions. The results indicated that the ACOF distributions had a slightly better match with the normal and log-normal distributions than with the Weibull in only three out of 15 cases with a statistical significance. The results are far more complex than what had heretofore been published and different scenarios could emerge. Since the ACOF is compared with the RCOF for the estimate of slip probability, the distribution of the ACOF in seven cases could be considered a constant for this purpose when the ACOF is much lower or higher than the RCOF. A few cases could be represented by a normal distribution for practical reasons based on their skewness and kurtosis values without a statistical significance. No representation could be found in three cases out of 15. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
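
    As a hedged sketch of the goodness-of-fit step, the code below fits normal, log-normal, and Weibull distributions to simulated friction measurements and applies the Kolmogorov-Smirnov test to each; the data and the use of scipy's fitted-parameter KS test are illustrative assumptions.

    ```python
    # Sketch: Kolmogorov-Smirnov goodness-of-fit of ACOF-like values to fitted
    # normal, log-normal, and Weibull distributions (friction data are simulated here).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    acof = rng.lognormal(mean=-0.7, sigma=0.15, size=100)   # illustrative ACOF measurements

    candidates = {"normal": stats.norm,
                  "log-normal": stats.lognorm,
                  "Weibull": stats.weibull_min}
    for name, dist in candidates.items():
        params = dist.fit(acof)
        stat, p = stats.kstest(acof, dist.cdf, args=params)
        # p-values are approximate because the parameters are estimated from the data
        print(f"{name:10s} D = {stat:.3f}, p = {p:.3f}")
    ```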

  1. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  2. Simulating Univariate and Multivariate Burr Type III and Type XII Distributions through the Method of L-Moments

    ERIC Educational Resources Information Center

    Pant, Mohan Dev

    2011-01-01

    The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…
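
    The sketch below shows only the basic inverse-CDF simulation of Burr Type XII variates from F(x) = 1 - (1 + x^c)^(-k); the L-moment-based parameter matching developed in the dissertation is not reproduced.

    ```python
    # Sketch: simulate Burr Type XII variates by inverting F(x) = 1 - (1 + x**c) ** (-k).
    import numpy as np

    def rburr12(n, c, k, rng):
        u = rng.uniform(size=n)
        return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)   # inverse CDF applied to U(0,1)

    rng = np.random.default_rng(12)
    x = rburr12(100000, c=2.0, k=3.0, rng=rng)                # illustrative shape parameters
    print(x.mean(), np.percentile(x, [50, 95, 99]))
    ```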

  3. Money-center structures in dynamic banking systems

    NASA Astrophysics Data System (ADS)

    Li, Shouwei; Zhang, Minghui

    2016-10-01

    In this paper, we propose a dynamic model for banking systems based on the description of balance sheets. It generates some features identified through empirical analysis. Through simulation analysis of the model, we find that banking systems have the feature of money-center structures, that bank asset distributions are power-law distributions, and that contract size distributions are log-normal distributions.

  4. Photon event distribution sampling: an image formation technique for scanning microscopes that permits tracking of sub-diffraction particles with high spatial and temporal resolutions.

    PubMed

    Larkin, J D; Publicover, N G; Sutko, J L

    2011-01-01

    In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
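    A minimal sketch of the image-formation idea, assuming each detected photon contributes one 2-D Gaussian kernel of fixed width (the paper uses an intensity-related uncertainty rather than a constant width):

```python
import numpy as np

def peds_image(xs, ys, sigmas, shape=(256, 256)):
    """Sum one 2-D Gaussian kernel per detected photon (a sketch of the PEDS idea).

    xs, ys : estimated photon positions of origin (pixels, floats)
    sigmas : per-photon position uncertainty (pixels); intensity-dependent in the paper
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for x0, y0, s in zip(xs, ys, sigmas):
        img += np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * s ** 2)) / (2 * np.pi * s ** 2)
    return img

# Toy example: 200 photons scattered around a point emitter near (128, 128).
rng = np.random.default_rng(2)
xs = rng.normal(128, 3, 200)
ys = rng.normal(128, 3, 200)
image = peds_image(xs, ys, sigmas=np.full(200, 2.0))
```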

  5. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
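    The sketch below draws a normal probability plot with a Monte Carlo envelope. The envelope is a pointwise per-order-statistic band, not the exact simultaneous 1-α intervals developed in the paper; it only conveys the idea of augmenting the plot with intervals.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

def normal_probability_plot(sample, n_sim=2000, alpha=0.05, rng=None):
    """Q-Q plot of `sample` against the normal, with a pointwise Monte Carlo envelope."""
    rng = rng or np.random.default_rng(0)
    n = len(sample)
    probs = (np.arange(1, n + 1) - 0.5) / n
    theo = stats.norm.ppf(probs)                      # theoretical normal quantiles

    # Standardize, then compare order statistics with simulated normal samples.
    z = np.sort((sample - np.mean(sample)) / np.std(sample, ddof=1))
    sims = np.sort(rng.standard_normal((n_sim, n)), axis=1)
    lo, hi = np.quantile(sims, [alpha / 2, 1 - alpha / 2], axis=0)

    plt.plot(theo, z, "o", label="sample")
    plt.plot(theo, theo, "-", label="reference line")
    plt.fill_between(theo, lo, hi, alpha=0.3, label="Monte Carlo envelope (pointwise)")
    plt.xlabel("theoretical normal quantiles")
    plt.ylabel("ordered standardized data")
    plt.legend()
    plt.show()

normal_probability_plot(np.random.default_rng(1).normal(10.0, 2.0, 50))
```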

  6. Determining prescription durations based on the parametric waiting time distribution.

    PubMed

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-12-01

    The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential for applications in the economics and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation of our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approach in the literature, which uses only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  8. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    ERIC Educational Resources Information Center

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)

  9. Pan-European comparison of candidate distributions for climatological drought indices, SPI and SPEI

    NASA Astrophysics Data System (ADS)

    Stagge, James; Tallaksen, Lena; Gudmundsson, Lukas; Van Loon, Anne; Stahl, Kerstin

    2013-04-01

    Drought indices are vital to objectively quantify and compare drought severity, duration, and extent across regions with varied climatic and hydrologic regimes. The Standardized Precipitation Index (SPI), a well-reviewed meteorological drought index recommended by the WMO, and its more recent water balance variant, the Standardized Precipitation-Evapotranspiration Index (SPEI), both rely on selection of univariate probability distributions to normalize the index, allowing for comparisons across climates. The SPI, considered a universal meteorological drought index, measures anomalies in precipitation, whereas the SPEI measures anomalies in climatic water balance (precipitation minus potential evapotranspiration), a more comprehensive measure of water availability that incorporates temperature. Many reviewers recommend use of the gamma (Pearson Type III) distribution for SPI normalization, while developers of the SPEI recommend use of the three-parameter log-logistic distribution, based on point observation validation. Before the SPEI can be implemented at the pan-European scale, it is necessary to further validate the index using a range of candidate distributions to determine sensitivity to distribution selection, identify recommended distributions, and highlight those instances where a given distribution may not be valid. This study rigorously compares a suite of candidate probability distributions using WATCH Forcing Data, a global, historical (1958-2001) climate dataset based on ERA40 reanalysis with 0.5 x 0.5 degree resolution and bias-correction based on CRU-TS2.1 observations. Using maximum likelihood estimation, alternative candidate distributions are fit for the SPI and SPEI across the range of European climate zones. When evaluated at this scale, the gamma distribution for the SPI results in negatively skewed values, exaggerating the index severity of extreme dry conditions, while decreasing the index severity of extreme high precipitation. This bias is particularly notable for shorter aggregation periods (1-6 months) during the summer months in southern Europe (below 45° latitude), and can partially be attributed to distribution fitting difficulties in semi-arid regions where monthly precipitation totals cluster near zero. By contrast, the SPEI has potential for avoiding this fitting difficulty because it is not bounded by zero. However, the recommended log-logistic distribution produces index values with less variation than the standard normal distribution. Among the alternative candidate distributions, the best fit distribution and the distribution parameters vary in space and time, suggesting regional commonalities within hydroclimatic regimes, as discussed further in the presentation.
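    For reference, a minimal sketch of the gamma-based SPI transform discussed above, using a maximum-likelihood gamma fit and the usual mixed-distribution handling of zero totals (some SPI implementations instead use L-moments, or a q/2 convention for zeros):

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized Precipitation Index for one calendar month / aggregation period.

    `precip` is a 1-D array of aggregated precipitation totals (one value per year).
    """
    precip = np.asarray(precip, dtype=float)
    nonzero = precip[precip > 0]
    q = 1.0 - nonzero.size / precip.size                     # probability of a zero total
    shape, loc, scale = stats.gamma.fit(nonzero, floc=0)     # ML fit, location fixed at 0
    cdf = np.where(precip > 0,
                   q + (1 - q) * stats.gamma.cdf(precip, shape, loc=loc, scale=scale),
                   q)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))      # map to the standard normal

# Example with synthetic monthly totals (mm), one value per year for 44 years.
rng = np.random.default_rng(3)
print(spi(rng.gamma(shape=2.0, scale=30.0, size=44)))
```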

  10. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm for inverting the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.

  11. Checking distributional assumptions for pharmacokinetic summary statistics based on simulations with compartmental models.

    PubMed

    Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V

    2016-08-12

    Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of the response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
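    A toy version of such a simulation can be sketched with a one-compartment model with first-order absorption: draw log-normal between-subject PK parameters, add multiplicative measurement error, compute AUC and Cmax, and test the normality of their logarithms. All parameter values below are illustrative, not those of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subj, dose, t = 500, 100.0, np.linspace(0, 48, 200)

# One-compartment model with first-order absorption; PK parameters vary
# log-normally between subjects (values are illustrative).
ka = rng.lognormal(np.log(1.0), 0.3, n_subj)    # absorption rate, 1/h
ke = rng.lognormal(np.log(0.15), 0.3, n_subj)   # elimination rate, 1/h
V = rng.lognormal(np.log(30.0), 0.2, n_subj)    # volume of distribution, L

log_auc, log_cmax = [], []
for i in range(n_subj):
    conc = dose * ka[i] / (V[i] * (ka[i] - ke[i])) * (np.exp(-ke[i] * t) - np.exp(-ka[i] * t))
    conc *= np.exp(rng.normal(0.0, 0.1, t.size))   # multiplicative measurement error
    log_auc.append(np.log(np.trapz(conc, t)))      # AUC by the trapezoidal rule
    log_cmax.append(np.log(conc.max()))

for name, vals in [("log(AUC)", log_auc), ("log(Cmax)", log_cmax)]:
    w, p = stats.shapiro(vals)
    print(f"{name}: skew = {stats.skew(vals):.2f}, Shapiro-Wilk p = {p:.3g}")
```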

  12. On the issues of probability distribution of GPS carrier phase observations

    NASA Astrophysics Data System (ADS)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In common practice the observables related to Global Positioning System (GPS) are assumed to follow a Gauss-Laplace normal distribution. Actually, full knowledge of the observables' distribution is not required for parameter estimation by means of the least-squares algorithm based on the functional relation between observations and unknown parameters as well as the associated variance-covariance matrix. However, the probability distribution of GPS observations plays a key role in procedures for quality control (e.g. outlier and cycle slips detection, ambiguity resolution) and in reliability-related assessments of the estimation results. Under non-ideal observation conditions with respect to the factors impacting GPS data quality, for example multipath effects and atmospheric delays, the validity of the normal distribution postulate of GPS observations is in doubt. This paper presents a detailed analysis of the distribution properties of GPS carrier phase observations using double difference residuals. For this purpose 1-Hz observation data from the permanent SAPOS

  13. Distributions of Cognates in Europe as Based on Levenshtein Distance

    ERIC Educational Resources Information Center

    Schepens, Job; Dijkstra, Ton; Grootjen, Franc

    2012-01-01

    Researchers on bilingual processing can benefit from computational tools developed in artificial intelligence. We show that a normalized Levenshtein distance function can efficiently and reliably simulate bilingual orthographic similarity ratings. Orthographic similarity distributions of cognates and non-cognates were identified across pairs of…

  14. Vector wind and vector wind shear models 0 to 27 km altitude for Cape Kennedy, Florida, and Vandenberg AFB, California

    NASA Technical Reports Server (NTRS)

    Smith, O. E.

    1976-01-01

    Techniques are presented for deriving several statistical wind models; they follow from the properties of the multivariate normal probability function. Assuming that the winds can be considered as bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal wind distribution. By further assuming that the winds at two altitudes are quadravariate normally distributed, then the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function as applied to Cape Kennedy, Florida, and Vandenberg AFB, California, wind data samples are given. A technique to develop a synthetic vector wind profile model of interest to aerospace vehicle applications is presented.
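    The Rayleigh result quoted above can be checked numerically for the special case of zero-mean, equal-variance, uncorrelated components (with non-zero means the speed follows a Rice-type law instead); the sketch below is a quick simulation, not part of the original report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sigma = 8.0                                   # component standard deviation in m/s, illustrative
u = rng.normal(0.0, sigma, 100_000)           # zonal wind component
v = rng.normal(0.0, sigma, 100_000)           # meridional wind component
speed = np.hypot(u, v)                        # wind speed = modulus of the vector wind

# Compare the simulated speeds against a Rayleigh distribution with scale = sigma.
ks = stats.kstest(speed, stats.rayleigh(scale=sigma).cdf)
print(f"KS statistic vs Rayleigh(scale={sigma}): {ks.statistic:.4f}, p = {ks.pvalue:.2f}")
```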

  15. Statistical distributions of ultra-low dose CT sinograms and their fundamental limits

    NASA Astrophysics Data System (ADS)

    Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Low dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistically based iterative reconstruction methods rest on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) Diagnostic CT, where post-log data are well modeled by a normal distribution. (2) Low-dose CT, where the normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage. (3) A ULD regime that is photon-starved and where the quadratic approximation is no longer effective. For instance, with a 120 kVp, 0.5 mAs radiation source, a total integral density of 4.8 (the ideal value for 24 cm of water) is the maximum value for which a definitive maximum likelihood estimate could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.

  16. Estimation of value at risk in currency exchange rate portfolio using asymmetric GJR-GARCH Copula

    NASA Astrophysics Data System (ADS)

    Nurrahmat, Mohamad Husein; Noviyanti, Lienda; Bachrudin, Achmad

    2017-03-01

    In this study, we discuss the problem of measuring portfolio risk based on value at risk (VaR) using an asymmetric GJR-GARCH copula. The approach is motivated by the consideration that the assumption of normality of returns over time cannot be fulfilled, and that there is non-linear correlation in the dependence structure among the variables, which leads to inaccurate VaR estimates. Moreover, the leverage effect causes an asymmetric effect on the dynamic variance and exposes a weakness of standard GARCH models, whose effect on the conditional variance is symmetric. Asymmetric GJR-GARCH models are used to filter the margins, while copulas are used to link them together into a multivariate distribution. Copulas are then used to construct flexible multivariate distributions with different marginal and dependence structures, so that the portfolio joint distribution does not depend on assumptions of normality and linear correlation. The VaR obtained by the analysis at the 95% confidence level is 0.005586, derived from the best copula model: a Student's t copula with t-distributed margins.

  17. Analysis of quantitative data obtained from toxicity studies showing non-normal distribution.

    PubMed

    Kobayashi, Katsumi

    2005-05-01

    The data obtained from toxicity studies are examined for homogeneity of variance but, usually, they are not examined for normality. In this study I examined the measured items of a carcinogenicity/chronic toxicity study with rats for both homogeneity of variance and normality of distribution. It was observed that many hematology and biochemistry items showed non-normal distributions. For testing the normality of data obtained from toxicity studies, the data of the concurrent control group may be examined, and for data that show a non-normal distribution, robust non-parametric tests may be applied.

  18. An Empirical Study of Synchrophasor Communication Delay in a Utility TCP/IP Network

    NASA Astrophysics Data System (ADS)

    Zhu, Kun; Chenine, Moustafa; Nordström, Lars; Holmström, Sture; Ericsson, Göran

    2013-07-01

    Although there is a plethora of literature dealing with Phasor Measurement Unit (PMU) communication delay, there has not been any effort made to generalize empirical delay results by identifying the distribution with the best fit. The existing studies typically assume a distribution or simply build on analogies to communication network routing delay. Specifically, this study provides insight into the characterization of the communication delay of both unprocessed PMU data and synchrophasors sorted by a Phasor Data Concentrator (PDC). The results suggest that a bi-modal distribution containing two normal distributions offers the best fit of the delay of the unprocessed data, whereas the delay profile of the sorted synchrophasors resembles a normal distribution. Based on these results, the possibility of evaluating the reliability of a synchrophasor application with respect to a particular choice of PDC timeout is discussed.
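    A bi-modal delay profile of this kind is commonly summarized with a two-component Gaussian mixture; the sketch below fits one to synthetic delays and compares it with a single-component fit by BIC (the data and parameter values are illustrative, not from the study).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic delays (ms): a bi-modal mixture standing in for unprocessed PMU data.
rng = np.random.default_rng(6)
delays = np.concatenate([rng.normal(18, 2, 3000), rng.normal(35, 4, 1000)]).reshape(-1, 1)

gm2 = GaussianMixture(n_components=2, random_state=0).fit(delays)
for w, mu, var in zip(gm2.weights_, gm2.means_.ravel(), gm2.covariances_.ravel()):
    print(f"weight = {w:.2f}, mean = {mu:.1f} ms, std = {np.sqrt(var):.1f} ms")

# Is bi-modality supported? Compare information criteria of 1- vs 2-component fits.
gm1 = GaussianMixture(n_components=1, random_state=0).fit(delays)
print("BIC 1-component:", round(gm1.bic(delays), 1), " BIC 2-component:", round(gm2.bic(delays), 1))
```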

  19. 3D modeling of effects of increased oxygenation and activity concentration in tumors treated with radionuclides and antiangiogenic drugs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagerloef, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    Purpose: Formation of new blood vessels (angiogenesis) in response to hypoxia is a fundamental event in the process of tumor growth and metastatic dissemination. However, abnormalities in tumor neovasculature often induce increased interstitial pressure (IP) and further reduce oxygenation (pO₂) of tumor cells. In radiotherapy, well-oxygenated tumors favor treatment. Antiangiogenic drugs may lower IP in the tumor, improving perfusion, pO₂ and drug uptake, by reducing the number of malfunctioning vessels in the tissue. This study aims to create a model for quantifying the effects of altered pO₂ distribution due to antiangiogenic treatment in combination with radionuclide therapy. Methods: Based on experimental data describing the effects of antiangiogenic agents on oxygenation of glioblastoma multiforme (GBM), a single-cell-based 3D model, including 10¹⁰ tumor cells, was developed, showing how radionuclide therapy response improves as tumor oxygenation approaches normal tissue levels. The nuclides studied were ⁹⁰Y, ¹³¹I, ¹⁷⁷Lu, and ²¹¹At. The absorbed dose levels required for a tumor control probability (TCP) of 0.990 are compared for three different log-normal pO₂ distributions: μ₁ = 2.483, σ₁ = 0.711; μ₂ = 2.946, σ₂ = 0.689; and μ₃ = 3.689, σ₃ = 0.330. The normal tissue absorbed doses will, in turn, depend on this. These distributions were chosen to represent the expected oxygen levels in an untreated hypoxic tumor, a hypoxic tumor treated with an anti-VEGF agent, and normal, fully oxygenated tissue, respectively. The former two are fitted to experimental data. The geometric oxygen distributions are simulated using two different patterns, one Monte Carlo based and one radially increasing, while keeping the log-normal volumetric distributions intact. Oxygen and activity are distributed according to the same pattern. Results: As tumor pO₂ approaches normal tissue levels, the therapeutic effect is improved so that the normal tissue absorbed doses can be decreased by more than 95% while retaining TCP in the most favorable scenario, and by up to about 80% with oxygen levels previously achieved in vivo, when the least favourable oxygenation case is used as the starting point. The major difference occurs in poorly oxygenated cells. This is also where the pO₂ dependence of the oxygen enhancement ratio is maximal. Conclusions: Improved tumor oxygenation together with increased radionuclide uptake shows great potential for optimising treatment strategies, leaving room for successive treatments, or lowering the absorbed dose to normal tissues, due to increased tumor response. Further studies of the concomitant use of antiangiogenic drugs and radionuclide therapy therefore appear merited.
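    The three log-normal pO₂ distributions quoted above can be explored directly, assuming μ and σ are the parameters of ln(pO₂) in mmHg; the 2.5 mmHg hypoxia cutoff below is an assumed illustration, not a value from the paper.

```python
import numpy as np

# Log-normal pO2 distributions quoted in the abstract (mu, sigma of ln pO2).
cases = {"untreated hypoxic tumor": (2.483, 0.711),
         "anti-VEGF treated tumor": (2.946, 0.689),
         "normal tissue":           (3.689, 0.330)}

rng = np.random.default_rng(7)
threshold = 2.5   # mmHg; an assumed cutoff for severe hypoxia, for illustration only
for name, (mu, sigma) in cases.items():
    po2 = rng.lognormal(mu, sigma, 1_000_000)
    print(f"{name:24s} median = {np.exp(mu):5.1f} mmHg, "
          f"fraction below {threshold} mmHg = {np.mean(po2 < threshold):.4f}")
```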

  20. Drought forecasting in Luanhe River basin involving climatic indices

    NASA Astrophysics Data System (ADS)

    Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.

    2017-11-01

    Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model that forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of the multivariate normal distribution so that two large-scale climatic indices can be incorporated at the same time, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI rest on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. The findings indicate that longer-term aggregated monthly precipitation is, in general, more likely to be considered normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking the Liying gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for each gauge are then selected by the Pearson correlation test, and the multivariate normality of the current SPI, the corresponding climatic indices, and the SPI 1, 2, and 3 months later is checked using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models that forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the two traditional models, and that involving large-scale climatic indices improves the forecasting accuracy.
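    The core computation, forecasting the probability that a future SPI value falls in each class given the current SPI and climatic indices via the conditional distribution of a multivariate normal, can be sketched as follows; the mean vector, covariance matrix and class boundaries are placeholders, not values estimated for the Luanhe basin.

```python
import numpy as np
from scipy import stats

def transition_probs(mu, cov, cond_values, class_bounds):
    """P(future SPI falls in each class | conditioning variables), assuming
    (future SPI, conditioning variables) is jointly multivariate normal.

    mu, cov      : mean vector and covariance of (future SPI, conditioning vars)
    cond_values  : observed conditioning values, e.g. current SPI and climatic indices
    class_bounds : increasing SPI class boundaries, e.g. [-2, -1.5, -1, 1, 1.5, 2]
    """
    mu, cov = np.asarray(mu, float), np.asarray(cov, float)
    s11, s12, s22 = cov[0, 0], cov[0, 1:], cov[1:, 1:]
    w = np.linalg.solve(s22, s12)                        # regression weights
    m = mu[0] + w @ (np.asarray(cond_values) - mu[1:])   # conditional mean
    v = s11 - s12 @ w                                    # conditional variance
    edges = np.concatenate(([-np.inf], class_bounds, [np.inf]))
    return np.diff(stats.norm.cdf(edges, loc=m, scale=np.sqrt(v)))

# Toy example: SPI(t + lead) conditioned on SPI(t) and one climatic index.
mu = [0.0, 0.0, 0.0]
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.2],
                [0.3, 0.2, 1.0]])
print(transition_probs(mu, cov, cond_values=[-1.2, 0.8],
                       class_bounds=[-2, -1.5, -1, 1, 1.5, 2]))
```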

  1. Estimation of value at risk and conditional value at risk using normal mixture distributions model

    NASA Astrophysics Data System (ADS)

    Kamaruzzaman, Zetty Ain; Isa, Zaidi

    2013-04-01

    The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit it to our real data. Second, we present its application in risk analysis, where we apply the model to evaluate VaR and CVaR, with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, as it captures the stylized facts of non-normality and leptokurtosis in the returns distribution.
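    A compact sketch of VaR and CVaR under a two-component normal mixture: the VaR is the loss quantile obtained by inverting the mixture CDF, and the CVaR is estimated here by Monte Carlo. The mixture weights, means and standard deviations are illustrative, not the fitted FBMKLCI values.

```python
import numpy as np
from scipy import stats, optimize

# Illustrative two-component normal mixture for returns (weights, means, std devs).
w = np.array([0.8, 0.2])
mu = np.array([0.01, -0.03])
sd = np.array([0.04, 0.10])

def mixture_cdf(x):
    return np.sum(w * stats.norm.cdf(x, loc=mu, scale=sd))

alpha = 0.05                                                        # 95% confidence level
var95 = -optimize.brentq(lambda x: mixture_cdf(x) - alpha, -1.0, 1.0)   # VaR as a positive loss

# CVaR (expected shortfall) by Monte Carlo sampling from the mixture.
rng = np.random.default_rng(8)
comp = rng.choice(2, size=1_000_000, p=w)
r = rng.normal(mu[comp], sd[comp])
cvar95 = -r[r <= -var95].mean()
print(f"VaR(95%) = {var95:.4f}, CVaR(95%) = {cvar95:.4f}")
```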

  2. A nonparametric spatial scan statistic for continuous data.

    PubMed

    Jung, Inkyung; Cho, Ho Jin

    2015-10-20

    Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with that of parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.

  3. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  4. Automated scoring system of standard uptake value for torso FDG-PET images

    NASA Astrophysics Data System (ADS)

    Hara, Takeshi; Kobayashi, Tatsunori; Kawai, Kazunao; Zhou, Xiangrong; Itoh, Satoshi; Katafuchi, Tetsuro; Fujita, Hiroshi

    2008-03-01

    The purpose of this work was to develop an automated method to calculate the score of SUV for the torso region on FDG-PET scans. The three-dimensional distributions of the mean and standard deviation of SUV were stored in each volume to score the SUV at the corresponding pixel position within unknown scans. The modeling method is based on an SPM approach using the correction technique of the Euler characteristic and resels (resolution elements). We employed 197 normal cases (male: 143, female: 54) to assemble the normal metabolism distribution of FDG. The physiques were registered to each other within a rectangular parallelepiped using an affine transformation and a thin-plate-spline technique. The regions of the three organs were determined based on a semi-automated procedure. Seventy-three abnormal spots were used to estimate the effectiveness of the scoring method. As a result, the score images correctly showed that the scores for normal cases fell between zero and plus/minus 2 SD. Most of the scores of abnormal spots associated with cancer were larger than the upper bound of the SUV interval of normal organs.

  5. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344

  6. A simple optode based method for imaging O2 distribution and dynamics in tap water biofilms.

    PubMed

    Staal, M; Prest, E I; Vrouwenvelder, J S; Rickelt, L F; Kühl, M

    2011-10-15

    A ratiometric luminescence intensity imaging approach is presented, which enables spatial O2 measurements in biofilm reactors with transparent planar O2 optodes. Optodes consist of an O2-sensitive luminescent dye immobilized in a 1-10 μm thick polymeric layer on a transparent carrier, e.g. a glass window. The method is based on sequential imaging of the O2-dependent luminescence intensity; the images are subsequently normalized with luminescence intensity images recorded under anoxic conditions. We present 2-dimensional O2 distribution images at the base of a tap water biofilm measured with the new ratiometric method and compare the results with O2 distribution images obtained in the same biofilm reactor with luminescence lifetime imaging. Using conventional digital cameras, such simple normalized luminescence intensity imaging can yield images of 2-dimensional O2 distributions with a high signal-to-noise ratio and a spatial resolution comparable to or even surpassing those obtained with expensive and complex luminescence lifetime imaging systems. The method can be applied to biofilm growth incubators that allow intermittent experimental shifts to anoxic conditions, or in systems in which the O2 concentration is depleted during incubation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Bimodal Aldosterone Distribution in Low-Renin Hypertension

    PubMed Central

    2013-01-01

    BACKGROUND In low-renin hypertension (LRH), serum aldosterone levels are higher in those subjects with primary aldosteronism and may be lower in those with non-aldosterone mineralocorticoid excess or primary renal sodium retention. We investigated the hypothesis that the frequency distribution of aldosterone in LRH is bimodal. METHODS Of the 3,532 attendees at the sixth examination cycle of the Framingham Offspring Study, 1,831 were included in this cross-sectional analysis after we excluded those with conditions or taking medications such as antihypertensive drugs that might affect renin or aldosterone. RESULTS Three hundred three subjects (17%) had untreated hypertension (SBP ≥140mm Hg or DBP ≥90mm Hg). LRH, defined as plasma renin ≤5 mU/L, was present in 93 of those 303 hypertensive subjects (31%). Aldosterone values were adjusted statistically for age, sex, and the urinary sodium/creatinine ratio. In the subjects with LRH, the adjusted aldosterone distribution was bimodal (dip test for unimodality, P = 0.008). The adjusted aldosterone distribution was unimodal in the normal subjects (P = 0.98) and in the hypertensive subjects with normal plasma renin (P = 0.94). CONCLUSIONS In this community-based sample of white subjects, those with low-renin hypertension had a bimodal adjusted aldosterone distribution. Subjects with normal-renin hypertension and subjects with normal blood pressure had unimodal adjusted aldosterone distributions. These findings suggest 2 pathophysiological variants of LRH, one that is aldosterone-dependent and one that is non-aldosterone-dependent. PMID:23757402

  8. An Ensemble System Based on Hybrid EGARCH-ANN with Different Distributional Assumptions to Predict S&P 500 Intraday Volatility

    NASA Astrophysics Data System (ADS)

    Lahmiri, S.; Boukadoum, M.

    2015-10-01

    Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and an artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under the normal, Student's t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system in which each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, to achieve complementarity. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on the mean absolute error and mean squared error, the experimental results show that the proposed ensemble model, designed to capture normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting the S&P 500 intra-day volatility based on one- and five-minute time horizon data.

  9. Understanding a Normal Distribution of Data.

    PubMed

    Maltenfort, Mitchell G

    2015-12-01

    Assuming data follow a normal distribution is essential for many common statistical tests. However, what are normal data and when can we assume that a data set follows this distribution? What can be done to analyze non-normal data?

  10. The retest distribution of the visual field summary index mean deviation is close to normal.

    PubMed

    Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz

    2016-09-01

    When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilks test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilks normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.

  11. Observed, unknown distributions of clinical chemical quantities should be considered to be log-normal: a proposal.

    PubMed

    Haeckel, Rainer; Wosniok, Werner

    2010-10-01

    The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CVe), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CVe (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CVe. In contrast, a relatively large CVe (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
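    The CV-dependence described above is easy to reproduce numerically: parameterize a log-normal by its arithmetic mean and CV and compare its quantiles with those of a normal with the same mean and CV. The CV values below (1% for a sodium-like quantity, 40% for an AST-like one) are illustrative assumptions, not reference figures.

```python
import numpy as np
from scipy import stats

def lognormal_from_mean_cv(mean, cv):
    """Log-normal distribution parameterized by its arithmetic mean and coefficient of variation."""
    sigma = np.sqrt(np.log(1.0 + cv ** 2))
    mu = np.log(mean) - 0.5 * sigma ** 2
    return stats.lognorm(s=sigma, scale=np.exp(mu))

for label, mean, cv in [("sodium-like (small CV)", 140.0, 0.01),
                        ("AST-like (large CV)", 25.0, 0.40)]:
    ln = lognormal_from_mean_cv(mean, cv)
    nm = stats.norm(loc=mean, scale=cv * mean)
    qs = [0.025, 0.5, 0.975]
    print(label,
          " log-normal quantiles:", np.round(ln.ppf(qs), 1),
          " normal quantiles:", np.round(nm.ppf(qs), 1),
          " log-normal skewness:", round(float(ln.stats(moments="s")), 3))
```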

  12. Calculations of lattice vibrational mode lifetimes using Jazz: a Python wrapper for LAMMPS

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Wang, H.; Daw, M. S.

    2015-06-01

    Jazz is a new python wrapper for LAMMPS [1], implemented to calculate the lifetimes of vibrational normal modes based on forces as calculated for any interatomic potential available in that package. The anharmonic character of the normal modes is analyzed via the Monte Carlo-based moments approximation as is described in Gao and Daw [2]. It is distributed as open-source software and can be downloaded from the website http://jazz.sourceforge.net/.

  13. Fractionation in normal tissues: the (α/β)eff concept can account for dose heterogeneity and volume effects.

    PubMed

    Hoffmann, Aswin L; Nahum, Alan E

    2013-10-07

    The simple Linear-Quadratic (LQ)-based Withers iso-effect formula (WIF) is widely used in external-beam radiotherapy to derive a new tumour dose prescription such that there is normal-tissue (NT) iso-effect when changing the fraction size and/or number. However, as conventionally applied, the WIF is invalid unless the normal-tissue response is solely determined by the tumour dose. We propose a generalized WIF (gWIF) which retains the tumour prescription dose, but replaces the intrinsic fractionation sensitivity measure (α/β) by a new concept, the normal-tissue effective fractionation sensitivity, (α/β)eff, which takes into account both the dose heterogeneity in, and the volume effect of, the late-responding normal-tissue in question. Closed-form analytical expressions for (α/β)eff ensuring exact normal-tissue iso-effect are derived for: (i) uniform dose, and (ii) arbitrary dose distributions with volume-effect parameter n = 1 from the normal-tissue dose-volume histogram. For arbitrary dose distributions and arbitrary n, a numerical solution for (α/β)eff exhibits a weak dependence on the number of fractions. As n is increased, (α/β)eff increases from its intrinsic value at n = 0 (100% serial normal-tissue) to values close to or even exceeding the tumour (α/β) at n = 1 (100% parallel normal-tissue), with the highest values of (α/β)eff corresponding to the most conformal dose distributions. Applications of this new concept to inverse planning and to highly conformal modalities are discussed, as is the effect of possible deviations from LQ behaviour at large fraction sizes.

  14. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ(2) test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  15. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    PubMed

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

    Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this particular property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
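    Once the mean and standard deviation of the MFE distribution for a given sequence composition are available (interpolated from the pre-computed tables), the MFE-based P-value reduces to a lower-tail normal probability; the numbers in the sketch below are hypothetical.

```python
from scipy import stats

def mfe_p_value(candidate_mfe, mfe_mean, mfe_sd):
    """One-sided P-value that a random sequence of the same composition would have an MFE
    at least as low as the candidate's, assuming the pre-computed MFE distribution for that
    composition is normal with the given mean and standard deviation."""
    z = (candidate_mfe - mfe_mean) / mfe_sd
    return stats.norm.cdf(z)      # lower tail: a lower MFE indicates a more stable structure

# Hypothetical numbers; in practice mean and sd would be interpolated from the
# pre-computed tables for the candidate's length and nucleotide composition.
print(mfe_p_value(candidate_mfe=-35.0, mfe_mean=-22.0, mfe_sd=5.0))
```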

  16. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return compared to the mean-variance approach.

  17. Channel characterization and empirical model for ergodic capacity of free-space optical communication link

    NASA Astrophysics Data System (ADS)

    Alimi, Isiaka; Shahpari, Ali; Ribeiro, Vítor; Sousa, Artur; Monteiro, Paulo; Teixeira, António

    2017-05-01

    In this paper, we present experimental results on the channel characterization of a single-input single-output (SISO) free-space optical (FSO) communication link, based on channel measurements. The histograms of the FSO channel samples and the log-normal distribution fittings are presented along with the measured scintillation index. Furthermore, we extend our studies to diversity schemes and propose a closed-form expression for determining the ergodic channel capacity of multiple-input multiple-output (MIMO) FSO communication systems over atmospheric turbulence fading channels. The proposed empirical model is based on the SISO FSO channel characterization. Also, the scintillation effects on the system performance are analyzed and results for different turbulence conditions are presented. Moreover, we observed that the histograms of the FSO channel samples that we collected from a 1548.51 nm link fit well with log-normal distributions, and that the proposed model for MIMO FSO channel capacity agrees with the simulation results in terms of normalized mean-square error (NMSE).
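    A minimal sketch of the sample-level characterization mentioned above: compute the scintillation index and fit a log-normal to received-intensity samples. The NMSE here is a generic CDF goodness-of-fit measure, not the capacity-model NMSE reported in the paper, and the synthetic samples stand in for measured channel data.

```python
import numpy as np
from scipy import stats

def characterize_fso_samples(intensity):
    """Scintillation index and a log-normal fit for received-intensity samples."""
    intensity = np.asarray(intensity, float)
    si = np.mean(intensity ** 2) / np.mean(intensity) ** 2 - 1.0   # scintillation index
    shape, loc, scale = stats.lognorm.fit(intensity, floc=0)       # ML log-normal fit

    # Normalized mean-square error between the empirical and fitted log-normal CDFs.
    x = np.sort(intensity)
    emp = np.arange(1, x.size + 1) / x.size
    fit = stats.lognorm.cdf(x, shape, loc=loc, scale=scale)
    nmse = np.mean((emp - fit) ** 2) / np.mean(emp ** 2)
    return si, (shape, scale), nmse

# Synthetic weak-turbulence intensity samples standing in for measured channel data.
rng = np.random.default_rng(9)
samples = rng.lognormal(mean=0.0, sigma=0.25, size=5000)
print(characterize_fso_samples(samples))
```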

  18. Comparison of experimental and theoretical normal-force distributions (including Reynolds number effects) on an ogive-cylinder body at Mach number 1.98

    NASA Technical Reports Server (NTRS)

    Perkins, Edward W; Jorgensen, Leland H

    1956-01-01

    Effects of Reynolds number and angle of attack on the pressure distribution and normal-force characteristics of a body of revolution consisting of a fineness ratio 3 ogival nose tangent to a cylindrical afterbody 7 diameters long have been determined. The test Mach number was 1.98 and the angle-of-attack range from 0 degree to 20 degrees. The Reynolds numbers, based on body diameter, were 0.15 x 10(6) and 0.45 x 10(6). The experimental results are compared with theory.

  19. Spatial analysis of cities using Renyi entropy and fractal parameters

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang; Feng, Jian

    2017-12-01

    The spatial distributions of cities fall into two groups: one is the simple distribution with characteristic scale (e.g. exponential distribution), and the other is the complex distribution without characteristic scale (e.g. power-law distribution). The latter belongs to scale-free distributions, which can be modeled with fractal geometry. However, fractal dimension is not suitable for the former distribution. In contrast, spatial entropy can be used to measure any types of urban distributions. This paper is devoted to generalizing multifractal parameters by means of dual relation between Euclidean and fractal geometries. The main method is mathematical derivation and empirical analysis, and the theoretical foundation is the discovery that the normalized fractal dimension is equal to the normalized entropy. Based on this finding, a set of useful spatial indexes termed dummy multifractal parameters are defined for geographical analysis. These indexes can be employed to describe both the simple distributions and complex distributions. The dummy multifractal indexes are applied to the population density distribution of Hangzhou city, China. The calculation results reveal the feature of spatio-temporal evolution of Hangzhou's urban morphology. This study indicates that fractal dimension and spatial entropy can be combined to produce a new methodology for spatial analysis of city development.

  20. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a source of fluctuation in near-infrared spectral measurement. Chemometric methods have been extensively studied to correct the effect of temperature variations. However, temperature can also be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve the prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. The results of experimental studies showed that the prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  1. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    Current point cloud registration software has high hardware requirements and a heavy workload, requires multiple interactive definitions, and the source code of the software with better processing performance is not open. In view of this, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a calculation model for the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.

  2. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. A review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.

  3. Improved Multispectral Skin Detection and its Application to Search Space Reduction for Dismount Detection Based on Histograms of Oriented Gradients

    DTIC Science & Technology

    2010-03-01

    (Abstract not available; only table-of-contents fragments were captured, referencing the Normalized Difference Skin Index (NDSI), the Normalized Difference Vegetation Index (NDVI), the NDGRI, and joint distributions of (NDVI, NDSI) and (NDGRI, NDSI) values.)

  4. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  5. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Principal fault displacements -

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Inoue, N.; Tonagi, M.

    2016-12-01

    The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. There are two types of fault displacement related to an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities such as nuclear installations. PFDHA estimates both principal and distributed fault displacement. For estimation, PFDHA uses distance-displacement functions, which are constructed from field measurement data. We constructed a slip-distance relation for principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes, in order to apply it to the Japanese subduction setting. However, observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) estimated the relation using all fault types combined (reverse faults and strike-slip faults). Since Takao et al. (2013), several inland earthquakes have occurred in Japan, so here we estimate distance-displacement functions separately for the strike-slip and reverse fault types, adding the new fault displacement data set. Several criteria have been proposed by different researchers to normalize slip-distance data. We normalized the principal fault displacement data with several of these methods and compared the resulting slip-distance functions. Normalization of the Japanese reverse-fault data by total fault length did not show a particular trend in the slip-distance relation. In the case of segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We also discuss the relation between principal fault displacement distributions and source fault character. According to the slip distribution function of Petersen et al. (2011), the ratio of normalized displacement for strike-slip faults decreases toward the edge of the fault. However, the Japanese strike-slip fault data do not decrease as strongly near the end of the fault. This result indicates that fault displacement in Japan does not necessarily diminish toward the fault edges. This research was part of the 2014-2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (NRA), Japan.

  6. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, with the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
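
    A hedged sketch of the offspring construct described above: a weighted point on the line joining two parents, perturbed by normally distributed deviations parallel and orthogonal to that line. The parameter names (w, sigma_par, sigma_orth) are illustrative, not those of the paper.

```python
# Bell-curve style offspring generation from two parent vectors.
import numpy as np

def bcb_child(p1, p2, w=0.5, sigma_par=0.1, sigma_orth=0.1, rng=np.random.default_rng()):
    d = p2 - p1
    length = np.linalg.norm(d)
    u = d / length                                   # unit vector along the parent line
    base = p1 + w * d                                # weighted point on the line
    par = rng.normal(0.0, sigma_par * length) * u    # normal deviation along the line
    r = rng.normal(size=p1.size)
    r -= r.dot(u) * u                                # remove the parallel component
    orth = rng.normal(0.0, sigma_orth * length) * r / np.linalg.norm(r)
    return base + par + orth

child = bcb_child(np.array([0.0, 0.0]), np.array([1.0, 2.0]))
print(child)
```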

  7. Textural content in 3T MR: an image-based marker for Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, S. V.; Mullick, Rakesh; Patil, Uday

    2005-04-01

    In this paper, we propose a study that investigates the first-order and second-order distributions of T2 images from magnetic resonance (MR) scans for an age-matched data set of 24 Alzheimer's disease and 17 normal patients. The study is motivated by the desire to analyze brain iron uptake in the hippocampus of Alzheimer's patients, which is captured by low T2 values. Since excess iron deposition occurs locally in certain regions of the brain, we are motivated to investigate the spatial distribution of T2, which is captured by higher-order statistics. Based on the first-order and second-order distributions (involving the gray-level co-occurrence matrix) of T2, we show that the second-order statistics provide features with sensitivity >90% (at 80% specificity), which in turn capture the textural content in T2 data. Hence, we argue that the different texture characteristics of T2 in the hippocampus of Alzheimer's and normal patients could be used as an early indicator of Alzheimer's disease.
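
    A small sketch of second-order (gray-level co-occurrence matrix) texture features of the kind described above, using scikit-image; in versions before 0.19 the functions are named greycomatrix/greycoprops. The image is a synthetic stand-in for a hippocampal T2 region of interest.

```python
# GLCM texture features for a quantized region of interest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
roi = rng.integers(0, 64, size=(32, 32)).astype(np.uint8)   # synthetic 64-level ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```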

  8. Inheritance of Properties of Normal and Non-Normal Distributions after Transformation of Scores to Ranks

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.

    2011-01-01

    This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…

  9. Optimizing fish sampling for fish - mercury bioaccumulation factors

    USGS Publications Warehouse

    Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.

    2015-01-01

    Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.

  10. Gradually truncated log-normal in USA publicly traded firm size distribution

    NASA Astrophysics Data System (ADS)

    Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.

    2007-03-01

    We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sale size is used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. The log-normal distribution has to be gradually truncated after a certain critical value for USA firms. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid with some modification for very large firms. We also consider the possible mechanisms behind this distribution.

  11. Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models

    ERIC Educational Resources Information Center

    Williams, Jason; MacKinnon, David P.

    2008-01-01

    Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…

  12. Empirical study of the tails of mutual fund size

    NASA Astrophysics Data System (ADS)

    Schwarzkopf, Yonathan; Farmer, J. Doyne

    2010-06-01

    The mutual fund industry manages about a quarter of the assets in the U.S. stock market and thus plays an important role in the U.S. economy. The question of how much control is concentrated in the hands of the largest players is best quantitatively discussed in terms of the tail behavior of the mutual fund size distribution. We study the distribution empirically and show that the tail is much better described by a log-normal than a power law, indicating less concentration than, for example, personal income. The results are highly statistically significant and are consistent across fifteen years. This contradicts a recent theory concerning the origin of the power law tails of the trading volume distribution. Based on the analysis in a companion paper, the log-normality is to be expected, and indicates that the distribution of mutual funds remains perpetually out of equilibrium.

  13. Probabilistic model of bridge vehicle loads in port area based on in-situ load testing

    NASA Astrophysics Data System (ADS)

    Deng, Ming; Wang, Lei; Zhang, Jianren; Wang, Rei; Yan, Yanhong

    2017-11-01

    Vehicle load is an important factor affecting the safety and usability of bridges. A statistical analysis is carried out in this paper to investigate the vehicle load data of the Tianjin Haibin highway in Tianjin port, China, collected by a Weigh-in-Motion (WIM) system. Following this, the effect of the vehicle load on a test bridge is calculated and then compared with the calculation result according to HL-93 (AASHTO LRFD). Results show that the overall vehicle load follows a distribution given by a weighted sum of four normal distributions. The maximum vehicle load during the design reference period follows a type I extreme value distribution. The vehicle load effect also follows a weighted sum of four normal distributions, and the standard value of the vehicle load is recommended as 1.8 times the value calculated according to HL-93.
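
    A hedged sketch of fitting a weighted sum of four normal distributions to vehicle load records, as described above, using a Gaussian mixture model; the load sample here is synthetic, not the WIM data.

```python
# Four-component normal mixture fitted to (synthetic) vehicle load data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
loads = np.concatenate([rng.normal(30, 5, 4000),     # illustrative load clusters (kN)
                        rng.normal(120, 15, 3000),
                        rng.normal(250, 25, 2000),
                        rng.normal(420, 40, 1000)]).reshape(-1, 1)

gm = GaussianMixture(n_components=4, random_state=0).fit(loads)
for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight={w:.2f}  mean={mu:6.1f}  sd={np.sqrt(var):5.1f}")
```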

  14. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  15. Problems with Using the Normal Distribution – and Ways to Improve Quality and Efficiency of Data Analysis

    PubMed Central

    Limpert, Eckhard; Stahel, Werner A.

    2011-01-01

    Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/, “times-divide”, and a corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* x/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
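
    A minimal sketch of the multiplicative summary described above: the geometric mean x̄* and multiplicative standard deviation s* are computed on the log scale, and the interval x̄* times-divide s* covers roughly the central 68% of log-normal data.

```python
# Multiplicative (geometric) mean and multiplicative standard deviation.
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(mean=1.0, sigma=0.5, size=500)      # skewed, positive data

log_x = np.log(x)
gm = np.exp(log_x.mean())                             # multiplicative mean x*
s_star = np.exp(log_x.std(ddof=1))                    # multiplicative standard deviation s*

print(f"x* = {gm:.2f}, s* = {s_star:.2f}")
print(f"~68% of the data in [{gm / s_star:.2f}, {gm * s_star:.2f}]  (x* times-divide s*)")
```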

  16. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    PubMed

    Limpert, Eckhard; Stahel, Werner A

    2011-01-01

    The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of both a new sign, x/, "times-divide", and a corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* x/ s*, which is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.

  17. Board-level optical clock signal distribution using Si CMOS-compatible polyimide-based 1- to 48-fanout H-tree

    NASA Astrophysics Data System (ADS)

    Wu, Linghui; Bihari, Bipin; Gan, Jianhua; Chen, Ray T.; Tang, Suning

    1998-08-01

    Si-CMOS compatible polymer-based waveguides for optoelectronic interconnects and packaging have been fabricated and characterized. A 1-to-48 fanout optoelectronic interconnection layer (OIL) structure based on Ultradel 9120/9020 for the high-speed massive clock signal distribution for a Cray T-90 supercomputer board has been constructed. The OIL employs multimode polymeric channel waveguides in conjunction with surface-normal waveguide output coupler and 1-to-2 splitter. A total insertion loss of 7.98 dB at 850 nm was measured experimentally.

  18. Diagnosing and Mapping Pulmonary Emphysema on X-Ray Projection Images: Incremental Value of Grating-Based X-Ray Dark-Field Imaging

    PubMed Central

    Meinel, Felix G.; Schwab, Felix; Schleede, Simone; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Auweter, Sigrid; Bamberg, Fabian; Yildirim, Ali Ö.; Bohla, Alexander; Eickelberg, Oliver; Loewen, Rod; Gifford, Martin; Ruth, Ronald; Reiser, Maximilian F.; Pfeiffer, Franz; Nikolaou, Konstantin

    2013-01-01

    Purpose To assess whether grating-based X-ray dark-field imaging can increase the sensitivity of X-ray projection images in the diagnosis of pulmonary emphysema and allow for a more accurate assessment of emphysema distribution. Materials and Methods Lungs from three mice with pulmonary emphysema and three healthy mice were imaged ex vivo using a laser-driven compact synchrotron X-ray source. Median signal intensities of transmission (T), dark-field (V) and a combined parameter (normalized scatter) were compared between emphysema and control group. To determine the diagnostic value of each parameter in differentiating between healthy and emphysematous lung tissue, a receiver-operating-characteristic (ROC) curve analysis was performed both on a per-pixel and a per-individual basis. Parametric maps of emphysema distribution were generated using transmission, dark-field and normalized scatter signal and correlated with histopathology. Results Transmission values relative to water were higher for emphysematous lungs than for control lungs (1.11 vs. 1.06, p<0.001). There was no difference in median dark-field signal intensities between both groups (0.66 vs. 0.66). Median normalized scatter was significantly lower in the emphysematous lungs compared to controls (4.9 vs. 10.8, p<0.001), and was the best parameter for differentiation of healthy vs. emphysematous lung tissue. In a per-pixel analysis, the area under the ROC curve (AUC) for the normalized scatter value was significantly higher than for transmission (0.86 vs. 0.78, p<0.001) and dark-field value (0.86 vs. 0.52, p<0.001) alone. Normalized scatter showed very high sensitivity for a wide range of specificity values (94% sensitivity at 75% specificity). Using the normalized scatter signal to display the regional distribution of emphysema provides color-coded parametric maps, which show the best correlation with histopathology. Conclusion In a murine model, the complementary information provided by X-ray transmission and dark-field images adds incremental diagnostic value in detecting pulmonary emphysema and visualizing its regional distribution as compared to conventional X-ray projections. PMID:23555692

  19. Diagnosing and mapping pulmonary emphysema on X-ray projection images: incremental value of grating-based X-ray dark-field imaging.

    PubMed

    Meinel, Felix G; Schwab, Felix; Schleede, Simone; Bech, Martin; Herzen, Julia; Achterhold, Klaus; Auweter, Sigrid; Bamberg, Fabian; Yildirim, Ali Ö; Bohla, Alexander; Eickelberg, Oliver; Loewen, Rod; Gifford, Martin; Ruth, Ronald; Reiser, Maximilian F; Pfeiffer, Franz; Nikolaou, Konstantin

    2013-01-01

    To assess whether grating-based X-ray dark-field imaging can increase the sensitivity of X-ray projection images in the diagnosis of pulmonary emphysema and allow for a more accurate assessment of emphysema distribution. Lungs from three mice with pulmonary emphysema and three healthy mice were imaged ex vivo using a laser-driven compact synchrotron X-ray source. Median signal intensities of transmission (T), dark-field (V) and a combined parameter (normalized scatter) were compared between emphysema and control group. To determine the diagnostic value of each parameter in differentiating between healthy and emphysematous lung tissue, a receiver-operating-characteristic (ROC) curve analysis was performed both on a per-pixel and a per-individual basis. Parametric maps of emphysema distribution were generated using transmission, dark-field and normalized scatter signal and correlated with histopathology. Transmission values relative to water were higher for emphysematous lungs than for control lungs (1.11 vs. 1.06, p<0.001). There was no difference in median dark-field signal intensities between both groups (0.66 vs. 0.66). Median normalized scatter was significantly lower in the emphysematous lungs compared to controls (4.9 vs. 10.8, p<0.001), and was the best parameter for differentiation of healthy vs. emphysematous lung tissue. In a per-pixel analysis, the area under the ROC curve (AUC) for the normalized scatter value was significantly higher than for transmission (0.86 vs. 0.78, p<0.001) and dark-field value (0.86 vs. 0.52, p<0.001) alone. Normalized scatter showed very high sensitivity for a wide range of specificity values (94% sensitivity at 75% specificity). Using the normalized scatter signal to display the regional distribution of emphysema provides color-coded parametric maps, which show the best correlation with histopathology. In a murine model, the complementary information provided by X-ray transmission and dark-field images adds incremental diagnostic value in detecting pulmonary emphysema and visualizing its regional distribution as compared to conventional X-ray projections.

  20. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  1. Taking the Missing Propensity Into Account When Estimating Competence Scores

    PubMed Central

    Pohl, Steffi; Carstensen, Claus H.

    2014-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically made when using these models: (1) The missing propensity is unidimensional and (2) the missing propensity and the ability are bivariate normally distributed. These assumptions may, however, be violated in real data sets and could, thus, pose a threat to the validity of this approach. The present study focuses on modeling competencies in various domains, using data from a school sample (N = 15,396) and an adult sample (N = 7,256) from the National Educational Panel Study. Our interest was to investigate whether violations of unidimensionality and the normal distribution assumption severely affect the performance of the model-based approach in terms of differences in ability estimates. We propose a model with a competence dimension, a unidimensional missing propensity and a distributional assumption more flexible than a multivariate normal. Using this model for ability estimation results in different ability estimates compared with a model ignoring missing responses. Implications for ability estimation in large-scale assessments are discussed. PMID:29795844

  2. Nonlinear spectral imaging of human normal skin, basal cell carcinoma and squamous cell carcinoma based on two-photon excited fluorescence and second-harmonic generation

    NASA Astrophysics Data System (ADS)

    Xiong, S. Y.; Yang, J. G.; Zhuang, J.

    2011-10-01

    In this work, we use nonlinear spectral imaging based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG) for analyzing the morphology of collagen and elastin and their biochemical variations in basal cell carcinoma (BCC), squamous cell carcinoma (SCC) and normal skin tissue. It was found in this work that there existed apparent differences among BCC, SCC and normal skin in terms of their thickness of the keratin and epithelial layers, their size of elastic fibers, as well as their distribution and spectral characteristics of collagen. These differences can potentially be used to distinguish BCC and SCC from normal skin, and to discriminate between BCC and SCC, as well as to evaluate treatment responses.

  3. Inferring local competition intensity from patch size distributions: a test using biological soil crusts

    USGS Publications Warehouse

    Bowker, Matthew A.; Maestre, Fernando T.

    2012-01-01

    Dryland vegetation is inherently patchy. This patchiness goes on to impact ecology, hydrology, and biogeochemistry. Recently, researchers have proposed that dryland vegetation patch sizes follow a power law which is due to local plant facilitation. It is unknown what patch size distribution prevails when competition predominates over facilitation, or if such a pattern could be used to detect competition. We investigated this question in an alternative vegetation type, mosses and lichens of biological soil crusts, which exhibit a smaller scale patch-interpatch configuration. This micro-vegetation is characterized by competition for space. We proposed that multiplicative effects of genetics, environment and competition should result in a log-normal patch size distribution. When testing the prevalence of log-normal versus power law patch size distributions, we found that the log-normal was the better distribution in 53% of cases and a reasonable fit in 83%. In contrast, the power law was better in 39% of cases, and in 8% of instances both distributions fit equally well. We further hypothesized that the log-normal distribution parameters would be predictably influenced by competition strength. There was qualitative agreement between one of the distribution's parameters (μ) and a novel intransitive (lacking a 'best' competitor) competition index, suggesting that as intransitivity increases, patch sizes decrease. The correlation of μ with other competition indicators based on spatial segregation of species (the C-score) depended on aridity. In less arid sites, μ was negatively correlated with the C-score (suggesting smaller patches under stronger competition), while positive correlations (suggesting larger patches under stronger competition) were observed at more arid sites. We propose that this is due to an increasing prevalence of competition transitivity as aridity increases. These findings broaden the emerging theory surrounding dryland patch size distributions and, with refinement, may help us infer cryptic ecological processes from easily observed spatial patterns in the field.
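
    A hedged sketch of the kind of model comparison described above: maximum-likelihood fits of a log-normal and a power-law (Pareto) distribution to patch sizes, compared by AIC. This is a simplification of formal model-selection procedures, and the patch sizes below are synthetic.

```python
# Log-normal versus power-law comparison for patch sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
patch_sizes = rng.lognormal(mean=2.0, sigma=0.8, size=400)      # synthetic patch areas
xmin = patch_sizes.min()

ln_params = stats.lognorm.fit(patch_sizes, floc=0)
pl_params = stats.pareto.fit(patch_sizes, floc=0, fscale=xmin)  # power law with fixed x_min

ll_ln = stats.lognorm.logpdf(patch_sizes, *ln_params).sum()
ll_pl = stats.pareto.logpdf(patch_sizes, *pl_params).sum()
aic = lambda ll, k: 2 * k - 2 * ll
print("lognormal AIC:", aic(ll_ln, 2), " power-law AIC:", aic(ll_pl, 1))
```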

  4. Is It Time to Change Our Reference Curve for Femur Length? Using the Z-Score to Select the Best Chart in a Chinese Population

    PubMed Central

    Yang, Huixia; Wei, Yumei; Su, Rina; Wang, Chen; Meng, Wenying; Wang, Yongqing; Shang, Lixin; Cai, Zhenyu; Ji, Liping; Wang, Yunfeng; Sun, Ying; Liu, Jiaxiu; Wei, Li; Sun, Yufeng; Zhang, Xueying; Luo, Tianxia; Chen, Haixia; Yu, Lijun

    2016-01-01

    Objective To use Z-scores to compare different charts of femur length (FL) applied to our population, with the aim of identifying the most appropriate chart. Methods A retrospective study was conducted in Beijing. Fifteen hospitals in Beijing were chosen as clusters using a systemic cluster sampling method, in which 15,194 pregnant women delivered from June 20th to November 30th, 2013. The measurements of FL in the second and third trimester were recorded, as well as the last measurement obtained before delivery. Based on the inclusion and exclusion criteria, we identified FL measurements from 19,996 ultrasounds from 7,194 patients between 11 and 42 weeks gestation. The FL data were then transformed into Z-scores that were calculated using three series of reference equations obtained from three reports: Leung TN, Pang MW et al (2008); Chitty LS, Altman DG et al (1994); and Papageorghiou AT et al (2014). Each Z-score distribution was summarized by its mean, standard deviation (SD), skewness and kurtosis, and was compared with the standard normal distribution using the Kolmogorov-Smirnov test. The histogram of each distribution was superimposed on the non-skewed standard normal curve (mean = 0, SD = 1) to provide a direct visual impression. Finally, the sensitivity and specificity of each reference chart for identifying fetuses <5th or >95th percentile (based on the observed distribution of Z-scores) were calculated, and the Youden index was also listed. A scatter diagram with the 5th, 50th, and 95th percentile curves calculated from each reference chart and superimposed on the data was presented to provide a visual impression. Results The three Z-score distribution curves appeared to be normal, but none of them matched the expected standard normal distribution. In our study, the Papageorghiou reference curve provided the best results, with a sensitivity of 100% for identifying fetuses with measurements < 5th and > 95th percentile, and specificities of 99.9% and 81.5%, respectively. Conclusions It is important to choose an appropriate reference curve when defining what is normal. The Papageorghiou reference curve for FL seems to be the best fit for our population. Perhaps it is time to change our reference curve for femur length. PMID:27458922
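
    A minimal sketch of the Z-score step: FL measurements are converted to Z-scores against a reference chart expressed as gestational-age-specific mean and SD functions, and fetuses are flagged below the 5th or above the 95th percentile. The linear reference equations and data below are placeholders, not any of the published charts.

```python
# Z-scores of femur length against a (placeholder) reference chart.
import numpy as np
from scipy import stats

def ref_mean(ga_weeks):       # placeholder reference mean FL (mm) by gestational age
    return -30.0 + 3.1 * ga_weeks

def ref_sd(ga_weeks):         # placeholder reference SD (mm)
    return 1.5 + 0.05 * ga_weeks

rng = np.random.default_rng(5)
ga = rng.uniform(16, 40, 1000)
fl = ref_mean(ga) + rng.normal(0, ref_sd(ga))

z = (fl - ref_mean(ga)) / ref_sd(ga)
flag_low, flag_high = z < stats.norm.ppf(0.05), z > stats.norm.ppf(0.95)
print(f"flagged <5th: {flag_low.mean():.1%}, flagged >95th: {flag_high.mean():.1%}")
```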

  5. Hilbert-Huang Transformation Based Analyses of FP1, FP2, and Fz Electroencephalogram Signals in Alcoholism.

    PubMed

    Lin, Chin-Feng; Su, Jiun-Yi; Wang, Hao-Min

    2015-09-01

    Chronic alcoholism may damage the central nervous system, causing imbalance in the excitation-inhibition homeostasis in the cortex, which may lead to hyper-arousal of the central nervous system and impairments in cognitive function. In this paper, we use the Hilbert-Huang transformation (HHT) method to analyze the electroencephalogram (EEG) signals from control and alcoholic observers who watched two different pictures. We examined the intrinsic mode function (IMF) based energy distribution features of FP1, FP2, and Fz EEG signals in the time and frequency domains for alcoholics. The HHT-based characteristics of the IMFs, the instantaneous frequencies, and the time-frequency-energy distributions of the IMFs of the clinical FP1, FP2, and Fz EEG signals recorded from normal and alcoholic observers who watched two different pictures were analyzed. We observed that the number of peak amplitudes of the alcoholic subjects is larger than that of the controls. In addition, the Pearson correlation coefficients of the IMFs and the energy-IMF distributions of the clinical FP1, FP2, and Fz EEG signals recorded from normal and alcoholic observers were analyzed. The analysis results show that the energy ratios of the IMF4, IMF5, and IMF7 waves of the normal observers to the reference total energy were each larger than 10 %. In addition, the energy ratios of the IMF3, IMF4, and IMF5 waves of the alcoholic observers to the reference total energy were larger than 10 %. The FP1 and FP2 waves of the normal observers, the FP1 and FP2 waves of the alcoholic observers, and the FP1 and Fz waves of the alcoholic observers demonstrated extremely high correlations. On the other hand, the FP1 waves of the normal and alcoholic observers, the FP1 wave of the normal observer and the FP2 wave of the alcoholic observer, the FP1 wave of the normal observer and the Fz wave of the alcoholic observer, the FP2 waves of the normal and alcoholic observers, and the FP2 wave of the normal observer and the Fz wave of the alcoholic observer demonstrated extremely low correlations. The IMF4 of the FP1 and FP2 signals of the normal observer, and the IMF5 of the FP1 and FP2 signals of the alcoholic observer, were correlated. The IMF4 of the FP1 signal of the normal observer and that of the FP2 signal of the alcoholic observer, as well as the IMF5 of the FP1 signal of the normal observer and that of the FP2 signal of the alcoholic observer, exhibited extremely low correlations. In this manner, our experiment leads to a better understanding of the HHT-based IMF features of FP1, FP2, and Fz EEG signals in alcoholism. The analysis results show that the energy ratios of the waves of an alcoholic observer to the reference total energy for IMF4 and IMF5 in the δ band for the FP1, FP2, and Fz channels were larger than those of the respective waves of the normal observer. The alcoholic EEG signals had more than 1 % of the total energy in the δ wave at reaction times of 0-4, 4-8, 8-12, and 12-16 s. For normal EEG signals, more than 1 % of the total energy is distributed in the δ wave at reaction times of 0 to 4 s. We observed that the alcoholic subjects' reaction times were slower than those of the normal subjects, and that the alcoholic subjects could have experienced a cognitive error. This phenomenon is due to the intoxicated central nervous systems of the alcoholic subjects.
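
    A hedged sketch of an HHT-style analysis: empirical mode decomposition into IMFs followed by a Hilbert transform for instantaneous amplitude and frequency, plus per-IMF energy ratios. It assumes the PyEMD package (distributed as "EMD-signal") is installed, and the signal is synthetic, not clinical EEG.

```python
# Empirical mode decomposition and Hilbert spectral quantities for one channel.
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

fs = 256.0                                        # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
signal = (np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
          + 0.1 * np.random.default_rng(6).standard_normal(t.size))

imfs = EMD()(signal)                              # rows are IMFs (last row is the residue)
for k, imf in enumerate(imfs, start=1):
    analytic = hilbert(imf)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    energy_ratio = (imf ** 2).sum() / (signal ** 2).sum()
    print(f"IMF{k}: mean |f| = {np.abs(inst_freq).mean():5.1f} Hz, energy ratio = {energy_ratio:.2%}")
```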

  6. Spatiotemporal analysis of Quaternary normal faults in the Northern Rocky Mountains, USA

    NASA Astrophysics Data System (ADS)

    Davarpanah, A.; Babaie, H. A.; Reed, P.

    2010-12-01

    The mid-Tertiary Basin-and-Range extensional tectonic event developed most of the normal faults that bound the ranges in the northern Rocky Mountains within Montana, Wyoming, and Idaho. The interaction of the thermally induced stress field of the Yellowstone hot spot with the existing Basin-and-Range fault blocks, during the last 15 my, has produced a new, spatially and temporally variable system of normal faults in these areas. The orientation and spatial distribution of the trace of these hot-spot induced normal faults, relative to earlier Basin-and-Range faults, have significant implications for the effect of the temporally varying and spatially propagating thermal dome on the growth of new hot spot related normal faults and reactivation of existing Basin-and-Range faults. Digitally enhanced LANDSAT 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 4 and 5 Thematic Mapper (TM) bands, with spatial resolution of 30 m, combined with analytical GIS and geological techniques, helped in determining and analyzing the lineaments and traces of the Quaternary, thermally-induced normal faults in the study area. Applying the color composite (CC) image enhancement technique, the combination of bands 3, 2 and 1 of the ETM+ and TM images was chosen as the best statistical choice to create a color composite for lineament identification. The spatiotemporal analysis of the Quaternary normal faults produces significant information on the structural style, timing, spatial variation, spatial density, and frequency of the faults. The seismic Quaternary normal faults, in the whole study area, are divided, based on their age, into four specific sets, which from oldest to youngest include: Quaternary (>1.6 Ma), middle and late Quaternary (>750 ka), latest Quaternary (>15 ka), and the last 150 years. A density map for the Quaternary faults reveals that most active faults are near the current Yellowstone National Park area (YNP), where most seismically active faults, in the past 1.6 my, are located. The GIS based autocorrelation method, applied to the trace orientation, length, frequency, and spatial distribution for each age-defined fault set, revealed spatial homogeneity for each specific set. The results of the Moran's I and Geary's C methods show no spatial autocorrelation among the trend of the fault traces and their location. Our results suggest that while lineaments of similar age define a clustered pattern in each domain, the overall distribution pattern of lineaments with different ages seems to be non-uniform (random). The directional distribution analysis reveals a distinct range of variation for fault traces of different ages (i.e., some displaying elliptical behavior). Among the Quaternary normal fault sets, the youngest lineament set (i.e., last 150 years) defines the greatest ellipticity (eccentricity) and the least lineament distribution variation. The frequency rose diagram for the entire set of Quaternary normal faults shows four major modes (around 360°, 330°, 300°, and 270°) and two minor modes (around 235° and 205°).

  7. The Statistical Value of Raw Fluorescence Signal in Luminex xMAP Based Multiplex Immunoassays

    PubMed Central

    Breen, Edmond J.; Tan, Woei; Khan, Alamgir

    2016-01-01

    Tissue samples (plasma, saliva, serum or urine) from 169 patients classified as either normal or having one of seven possible diseases are analysed across three 96-well plates for the presence of 37 analytes using cytokine inflammation multiplexed immunoassay panels. Censoring of the concentration data caused problems for analysis of the low-abundance analytes. Using fluorescence analysis rather than concentration-based analysis allowed analysis of these low-abundance analytes. Mixed-effects analysis of the resulting fluorescence and concentration responses reveals that the combination of censoring and mapping the fluorescence responses to concentration values, through a 5PL curve, changed the observed analyte concentrations. Simulation verifies this by showing a dependence of the observed analyte concentration levels on the mean fluorescence response and its distribution. Differences from normality in the fluorescence responses can lead to differences in concentration estimates and unreliable probabilities for treatment effects. It is seen that when fluorescence responses are normally distributed, probabilities of treatment effects from fluorescence-based t-tests have greater statistical power than the same probabilities from concentration-based t-tests. We add evidence that the fluorescence response, unlike concentration values, does not require censoring, and we show with respect to differential analysis on the fluorescence responses that background correction is not required. PMID:27243383
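
    A hedged sketch of the 5PL mapping mentioned above: a five-parameter logistic standard curve is fitted to fluorescence readings of known standards and then inverted to back-calculate concentrations. The standards and parameter values are synthetic, not from the study.

```python
# Five-parameter logistic (5PL) standard curve: fit and back-calculation.
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, b, c, d, g):
    """5PL: d + (a - d) / (1 + (x / c)**b)**g."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

rng = np.random.default_rng(11)
conc = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)      # pg/mL standards
mfi = five_pl(conc, 50, 1.2, 200, 20000, 0.8) * (1 + 0.03 * rng.standard_normal(conc.size))

popt, _ = curve_fit(five_pl, conc, mfi, p0=[50, 1, 200, 20000, 1], maxfev=20000)

def mfi_to_conc(y, a, b, c, d, g):
    """Invert the fitted 5PL to back-calculate concentration from fluorescence."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

print(mfi_to_conc(5000.0, *popt))
```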

  8. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…

  9. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
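
    A small simulation in the spirit of the result above: sums of positive random variables (here i.i.d. log-normals) stay closer to a log-normal than to a Gaussian for a moderate number of summands, which is consistent with the Fenton-Wilkinson approximation; the specific numbers are illustrative.

```python
# Compare log-normal and normal fits to sums of positive random variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_summands, n_samples = 20, 5000
sums = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_summands)).sum(axis=1)

ks_lognorm = stats.kstest(sums, "lognorm", args=stats.lognorm.fit(sums, floc=0)).statistic
ks_norm = stats.kstest(sums, "norm", args=stats.norm.fit(sums)).statistic
print(f"KS distance: log-normal fit {ks_lognorm:.3f} vs normal fit {ks_norm:.3f}")
```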

  10. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.

  11. Quantitative assessment of hit detection and confirmation in single and duplicate high-throughput screenings.

    PubMed

    Wu, Zhijin; Liu, Dongmei; Sui, Yunxia

    2008-02-01

    The process of identifying active targets (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process so that extreme values represent strong biological activity instead of systematic biases such as plate effect or edge effect and, second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of a normal distribution of the noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds are tested 42 times over a wide range of 14 concentrations, so that true positives can be found through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rate, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated based on the normal assumption do not agree with actual error rates, because the tails of the noise distribution deviate from a normal distribution. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
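
    A hedged sketch of a generic hit-calling baseline of the kind discussed above: per-plate robust z-scores (median/MAD) followed by Benjamini-Hochberg adjustment. This is an illustration, not the summary score proposed in the article, and p-values computed from a normal null carry exactly the caveat the article raises.

```python
# Robust plate-wise z-scores and FDR-based hit calling on synthetic screen data.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
plates = np.repeat(np.arange(10), 384)                   # 10 plates x 384 wells
signal = rng.normal(100 + 5 * plates, 10)                # plate effect + noise
signal[::1000] += 60                                     # a few spiked "hits"

z = np.empty_like(signal)
for p in np.unique(plates):
    m = plates == p
    med = np.median(signal[m])
    mad = 1.4826 * np.median(np.abs(signal[m] - med))    # MAD scaled to the normal SD
    z[m] = (signal[m] - med) / mad

pvals = stats.norm.sf(z)                                 # one-sided, assumes a normal null
hits = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]
print("declared hits:", hits.sum())
```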

  12. Automatic Detection and Recognition of Craters Based on the Spectral Features of Lunar Rocks and Minerals

    NASA Astrophysics Data System (ADS)

    Ye, L.; Xu, X.; Luan, D.; Jiang, W.; Kang, Z.

    2017-07-01

    Crater-detection approaches can be divided into four categories: manual recognition, shape-profile fitting algorithms, machine-learning methods and geological information-based analysis using terrain and spectral data. The mainstream approach is shape-profile fitting. Many researchers use illumination gradient information to fit standard circles by the least squares method. Although this approach has achieved good results, it is difficult to identify craters with poor "visibility" or complex structure and composition, and the accuracy of recognition is hard to improve because of multiple solutions and noise interference. To address this problem, we propose a method for the automatic extraction of impact craters based on the spectral characteristics of lunar rocks and minerals: 1) Under sunlight conditions, impact craters are extracted from MI by condition matching, and the positions and diameters of the craters are obtained. 2) Regolith is ejected when the lunar surface is impacted, and one of the elements of lunar regolith is iron; therefore, incorrectly extracted impact craters can be removed by judging whether the crater contains the "non iron" element. 3) Correctly extracted craters are divided into two types, simple and complex, according to their diameters. 4) The titanium information is obtained, the titanium distribution of each complex crater is matched with a normal distribution curve, the goodness of fit is calculated, and a threshold is set. The complex craters are thus divided into two types: those whose titanium follows a normal distribution curve and those whose titanium does not. We validated the proposed method with MI acquired by SELENE. Experimental results demonstrate that the proposed method performs well in the test area.
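
    A minimal sketch of step 4 above: fit a normal curve to the titanium-abundance histogram of a candidate crater and score the fit with R² against a chosen threshold. The abundance values and the threshold are illustrative assumptions.

```python
# Goodness of fit of a titanium-abundance histogram to a normal curve.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
ti = rng.normal(3.0, 0.6, 2000)                    # synthetic titanium abundances (wt%)

counts, edges = np.histogram(ti, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
model = stats.norm.pdf(centers, ti.mean(), ti.std(ddof=1))

ss_res = ((counts - model) ** 2).sum()
ss_tot = ((counts - counts.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
print("normal-curve type" if r2 > 0.9 else "non-normal type", f"(R^2 = {r2:.3f})")
```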

  13. a Predictive Model of Permeability for Fractal-Based Rough Rock Fractures during Shear

    NASA Astrophysics Data System (ADS)

    Huang, Na; Jiang, Yujing; Liu, Richeng; Li, Bo; Zhang, Zhenyu

    This study investigates the roles of fracture roughness, normal stress and shear displacement on the fluid flow characteristics through three-dimensional (3D) self-affine fractal rock fractures, whose surfaces are generated using the modified successive random additions (SRA) algorithm. A series of numerical shear-flow tests under different normal stresses were conducted on rough rock fractures to calculate the evolutions of fracture aperture and permeability. The results show that the rough surfaces of fractal-based fractures can be described using the scaling parameter Hurst exponent (H), in which H = 3 - Df, where Df is the fractal dimension of 3D single fractures. The joint roughness coefficient (JRC) distribution of fracture profiles follows a Gauss function with a negative linear relationship between H and average JRC. The frequency curves of aperture distributions change from sharp to flat with increasing shear displacement, indicating a more anisotropic and heterogeneous flow pattern. Both the mean aperture and permeability of fracture increase with the increment of surface roughness and decrement of normal stress. At the beginning of shear, the permeability increases remarkably and then gradually becomes steady. A predictive model of permeability using the mean mechanical aperture is proposed and the validity is verified by comparisons with the experimental results reported in literature. The proposed model provides a simple method to approximate permeability of fractal-based rough rock fractures during shear using fracture aperture distribution that can be easily obtained from digitized fracture surface information.

  14. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences

    PubMed Central

    2014-01-01

    Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292
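
    A minimal sketch of how such an MFE-based P-value could be computed once the mean and SD of the randomized-sequence MFE distribution have been interpolated for a candidate's nucleotide composition: the P-value is the lower normal tail at the observed MFE. The lookup function and numbers are placeholders, not the pre-computed table from the paper.

```python
# MFE-based P-value from an interpolated normal null distribution.
from scipy import stats

def interpolated_mfe_params(composition):
    """Placeholder for interpolation in a pre-computed table; returns (mean, sd)."""
    return -25.0, 6.0

observed_mfe = -42.3                      # kcal/mol, from folding the candidate sequence
mu, sd = interpolated_mfe_params({"A": 0.25, "C": 0.25, "G": 0.25, "U": 0.25})
p_value = stats.norm.cdf(observed_mfe, loc=mu, scale=sd)   # lower MFE = more stable
print(f"MFE-based P-value: {p_value:.3g}")
```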

  15. Best Statistical Distribution of flood variables for Johor River in Malaysia

    NASA Astrophysics Data System (ADS)

    Salarpour Goodarzi, M.; Yusop, Z.; Yusof, F.

    2012-12-01

    A complex flood event is always characterized by a few characteristics, such as flood peak, flood volume, and flood duration, which might be mutually correlated. This study explored the statistical distribution of peakflow, flood duration and flood volume at the Rantau Panjang gauging station on the Johor River in Malaysia. Hourly data were recorded for 45 years. The data were analysed based on the water year (July-June). Five distributions, namely Log Normal, Generalized Pareto, Log Pearson, Normal and Generalized Extreme Value (GEV), were used to model the distribution of all three variables. Anderson-Darling and Kolmogorov-Smirnov goodness-of-fit tests were used to evaluate the best fit. Goodness-of-fit tests at the 5% level of significance indicate that all the models can be used to model the distribution of peakflow, flood duration and flood volume. However, the Generalized Pareto distribution is found to be the most suitable model when tested with the Anderson-Darling test, while the Kolmogorov-Smirnov test suggests that GEV is the best for peakflow. The results of this research can be used to improve flood frequency analysis. (Figure: comparison between the Generalized Extreme Value, Generalized Pareto and Log Pearson distributions in the cumulative distribution function of peakflow.)
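
    A hedged sketch of the fitting-and-ranking step: each candidate distribution is fitted to annual peakflow maxima and scored with the Kolmogorov-Smirnov statistic (an Anderson-Darling test for arbitrary fitted distributions needs extra machinery and is omitted here). The data are synthetic, and note that KS p-values computed after fitting parameters to the same sample are optimistic.

```python
# Fit candidate distributions to peakflow maxima and rank by the KS statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
peakflow = stats.genextreme.rvs(c=-0.1, loc=300, scale=80, size=45, random_state=rng)

candidates = {"GEV": stats.genextreme, "GenPareto": stats.genpareto,
              "LogNormal": stats.lognorm, "Normal": stats.norm, "LogPearson3": stats.pearson3}

for name, dist in candidates.items():
    data = np.log(peakflow) if name == "LogPearson3" else peakflow   # LP3 = Pearson III on logs
    params = dist.fit(data)
    ks = stats.kstest(data, dist.cdf, args=params)
    print(f"{name:12s} KS D = {ks.statistic:.3f}, p = {ks.pvalue:.2f}")
```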

  16. Effects of normalization on quantitative traits in association test

    PubMed Central

    2009-01-01

    Background Quantitative trait loci analysis assumes that the trait is normally distributed. In reality, this is often not observed and one strategy is to transform the trait. However, it is not clear how much normality is required and which transformation works best in association studies. Results We performed simulations on four types of common quantitative traits to evaluate the effects of normalization using the logarithm, Box-Cox, and rank-based transformations. The impact of sample size and genetic effects on normalization is also investigated. Our results show that rank-based transformation gives generally the best and consistent performance in identifying the causal polymorphism and ranking it highly in association tests, with a slight increase in false positive rate. Conclusion For small sample size or genetic effects, the improvement in sensitivity for rank transformation outweighs the slight increase in false positive rate. However, for large sample size and genetic effects, normalization may not be necessary since the increase in sensitivity is relatively modest. PMID:20003414
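
    A minimal sketch of one common rank-based transformation for quantitative traits, the rank-based inverse normal transformation with a Blom offset; the offset constant and synthetic trait are illustrative choices, not prescribed by the study.

```python
# Rank-based inverse normal transformation of a skewed quantitative trait.
import numpy as np
from scipy import stats

def rank_inverse_normal(x, c=3.0 / 8.0):
    """Map values to normal quantiles of their (Blom-adjusted) ranks."""
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x)                        # average ranks for ties
    return stats.norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

trait = np.random.default_rng(10).lognormal(size=1000)   # skewed raw trait
transformed = rank_inverse_normal(trait)
print(round(stats.skew(transformed), 3))                  # close to 0 after transformation
```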

  17. Distribution of water quality parameters in Dhemaji district, Assam (India).

    PubMed

    Buragohain, Mridul; Bhuyan, Bhabajit; Sarma, H P

    2010-07-01

    The primary objective of this study is to present a statistically significant water quality database of Dhemaji district, Assam (India) with special reference to pH, fluoride, nitrate, arsenic, iron, sodium and potassium. 25 water samples collected from different locations of five development blocks in Dhemaji district have been studied separately. The implications presented are based on statistical analyses of the raw data. Normal distribution statistics and reliability analysis (correlation and covariance matrix) have been employed to find out the distribution pattern, localisation of data, and other related information. Statistical observations show that all the parameters under investigation exhibit non uniform distribution with a long asymmetric tail either on the right or left side of the median. The width of the third quartile was consistently found to be more than the second quartile for each parameter. Differences among mean, mode and median, significant skewness and kurtosis value indicate that the distribution of various water quality parameters in the study area is widely off normal. Thus, the intrinsic water quality is not encouraging due to unsymmetrical distribution of various water quality parameters in the study area.

  18. Simulation of real-gas effects on pressure distributions for aeroassist flight experiment vehicle and comparison with prediction

    NASA Technical Reports Server (NTRS)

    Micol, John R.

    1992-01-01

    Pressure distributions measured on a 60 degree half-angle elliptic cone, raked off at an angle of 73 degrees from the cone centerline and having an ellipsoid nose (ellipticity equal to 2.0 in the symmetry plane) are presented for angles of attack from -10 degrees to 10 degrees. The high normal shock density ratio aspect of a real gas was simulated by testing in Mach 6 air and CF sub 4 (density ratio equal to 5.25 and 12.0, respectively). The effects of Reynolds number, angle of attack, and normal shock density ratio on these measurements are examined, and comparisons with a three dimensional Euler code known as HALIS are made. A significant effect of density ratio on pressure distributions on the cone section of the configuration was observed; the magnitude of this effect decreased with increasing angle of attack. The effect of Reynolds number on pressure distributions was negligible for forebody pressure distributions, but a measurable effect was noted on base pressures. In general, the HALIS code accurately predicted the measured pressure distributions in air and CF sub 4.

  19. Individual vision and peak distribution in collective actions

    NASA Astrophysics Data System (ADS)

    Lu, Peng

    2017-06-01

    People make decisions on whether to participate as participants or to stay out as free riders in collective actions, with heterogeneous visions. Besides utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e. the revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The outcomes of simulations indicate that vision heterogeneity reduces the values of peaks, and that the relative variance of peaks is stable. Under normal distributions of vision heterogeneity and other factors, the peaks of participants are normally distributed as well. Therefore, it is necessary to predict the distribution traits of peaks based on the distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with parameters of both mean and standard deviation, which provides confidence intervals and robust predictions of peaks. Besides, we validate the peak model via the Yuyuan Incident, a real case in China (2014), and the model works well in explaining the dynamics and predicting the peak of the real case.

  20. Vibrational spectral investigation on xanthine and its derivatives—theophylline, caffeine and theobromine

    NASA Astrophysics Data System (ADS)

    Gunasekaran, S.; Sankari, G.; Ponnusamy, S.

    2005-01-01

    A normal coordinate analysis has been carried out on four compounds having a similar ring structure with different side chain substitutions: xanthine, caffeine, theophylline, and theobromine. Xanthine is chemically known as 2,6-dihydroxy purine. Caffeine, theophylline and theobromine are methylated xanthines. Considering the methyl groups as point masses, the normal modes of vibration can be distributed as Γvib = 27A' + 12A″ based on the Cs point group symmetry associated with the structures. In the present work 15 A' and 12 A″ normal modes are considered. A new set of orthonormal symmetry coordinates has been constructed. Wilson's F-G matrix method has been adopted for the normal coordinate analysis. A satisfactory vibrational band assignment has been made by employing the FTIR and FT Raman spectra of the compounds. The potential energy distribution is calculated with the resulting values of the force constants, and the consistency of the frequency assignment has thereby been checked.

  1. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.

  2. Probabilistic properties of wavelets in kinetic surface roughening

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.

    2001-08-01

    Using the data of a recent numerical simulation [M. Ahr and M. Biehl, Phys. Rev. E 62, 1773 (2000)] of homoepitaxial growth it is shown that the observed probability distribution of a wavelet based measure of the growing surface roughness is consistent with a stretched log-normal distribution and the corresponding branching dimension depends on the level of particle desorption.

  3. Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods

    ERIC Educational Resources Information Center

    MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason

    2004-01-01

    The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…
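
    The z-statistic construction described above can be sketched in a few lines. This is a minimal illustration of the normal-theory (Sobel-type) test and confidence limits only, not the distribution-of-the-product or resampling methods that the article goes on to examine; the path coefficients and standard errors are hypothetical.

```python
import math
from scipy.stats import norm

# Hypothetical path estimates: a (X -> M) and b (M -> Y adjusting for X), with standard errors.
a, se_a = 0.40, 0.10
b, se_b = 0.30, 0.12

ab = a * b                                            # point estimate of the indirect effect
se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)    # first-order (Sobel) standard error

z = ab / se_ab                                        # compared with the standard normal
p = 2 * norm.sf(abs(z))
ci = (ab - 1.96 * se_ab, ab + 1.96 * se_ab)           # normal-theory 95% confidence limits
print(f"indirect effect = {ab:.3f}, z = {z:.2f}, p = {p:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```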

  4. Confidence intervals for predicting lumber strength properties based on ratios of percentiles from two Weibull populations.

    Treesearch

    Richard A. Johnson; James W. Evans; David W. Green

    2003-01-01

    Ratios of strength properties of lumber are commonly used to calculate property values for standards. Although originally proposed in terms of means, ratios are being applied without regard to position in the distribution. It is now known that lumber strength properties are generally not normally distributed. Therefore, nonparametric methods are often used to derive...

  5. Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun

    2017-01-01

    The normal-distribution-based likelihood ratio statistic T_ml = nF_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…
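
    A hedged sketch of the computation this describes: the critical value comes from the central chi-square distribution under H_0, and power is the tail probability of a noncentral chi-square under the alternative. The degrees of freedom and noncentrality parameter below are purely illustrative.

```python
from scipy.stats import chi2, ncx2

df = 24        # model degrees of freedom (illustrative)
ncp = 10.0     # assumed noncentrality parameter under the alternative (illustrative)
alpha = 0.05

crit = chi2.ppf(1 - alpha, df)     # critical value of the statistic under the central chi-square (H0)
power = ncx2.sf(crit, df, ncp)     # probability of exceeding it under the noncentral chi-square
print(f"critical value = {crit:.2f}, power = {power:.3f}")
```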

  6. Multivariate meta-analysis: a robust approach based on the theory of U-statistic.

    PubMed

    Ma, Yan; Mazumdar, Madhu

    2011-10-30

    Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between the outcomes. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for a meta-analysis with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and has been shown to perform as well as REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect of a non-normal data distribution on the REML estimates is marginal and that the estimates from the MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. The easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.

  7. Robustness of fit indices to outliers and leverage observations in structural equation modeling.

    PubMed

    Yuan, Ke-Hai; Zhong, Xiaoling

    2013-06-01

    Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  8. Evaluation of Kurtosis into the product of two normally distributed variables

    NASA Astrophysics Data System (ADS)

    Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio

    2016-06-01

    Kurtosis (κ) is any measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis for the product of two normally distributed variables. The product of two normal variables arises in many areas of study, such as physics, economics, and psychology. Normal variables have a constant value for kurtosis (κ = 3), independently of the values of the two parameters, mean and variance. In fact, the excess kurtosis is defined as κ − 3, so the excess kurtosis of the normal distribution is zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them, and the range for the kurtosis is [0, 6] for independent variables and [0, 12] when correlation between them is allowed.
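
    The quoted ranges are easy to check by simulation. A minimal sketch, assuming standard (zero-mean, unit-variance) normal marginals, for which the excess kurtosis of the product sits at the upper end of each range:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n = 1_000_000

for rho in (0.0, 0.5, 1.0):
    # Draw correlated standard normal pairs with correlation rho.
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    k = kurtosis(x * y, fisher=True)   # excess kurtosis of the product
    print(f"rho = {rho:.1f}: excess kurtosis of x*y is approximately {k:.2f}")

# For independent standard normals the excess kurtosis is 6 (top of the [0, 6] range);
# for rho = 1 the product is a chi-square(1) variable, whose excess kurtosis is 12.
```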

  9. Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows

    NASA Technical Reports Server (NTRS)

    McKenzie, D.; Savage, S.

    2011-01-01

    The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution nor a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.

  10. Introduction to Permutation and Resampling-Based Hypothesis Tests

    ERIC Educational Resources Information Center

    LaFleur, Bonnie J.; Greevy, Robert A.

    2009-01-01

    A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presences of…
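
    As a concrete illustration of the idea, here is a minimal two-sample permutation test on the difference in means; the data are invented for the example, and real applications would permute whatever statistic suits the design.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical small samples from two groups.
group_a = np.array([4.1, 5.0, 6.2, 3.8, 5.5, 4.9])
group_b = np.array([5.8, 6.4, 7.1, 6.0, 5.9, 6.8])

observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)

n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)                              # relabel observations at random
    diff = pooled[:n_a].mean() - pooled[n_a:].mean()
    if abs(diff) >= abs(observed):                   # two-sided comparison
        count += 1

p_value = (count + 1) / (n_perm + 1)                 # add-one correction for the observed labeling
print(f"observed difference = {observed:.2f}, permutation p-value = {p_value:.4f}")
```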

  11. Effect of Rayleigh-scattering distributed feedback on multiwavelength Raman fiber laser generation.

    PubMed

    El-Taher, A E; Harper, P; Babin, S A; Churkin, D V; Podivilov, E V; Ania-Castanon, J D; Turitsyn, S K

    2011-01-15

    We experimentally demonstrate a Raman fiber laser based on multiple point-action fiber Bragg grating reflectors and distributed feedback via Rayleigh scattering in an ~22-km-long optical fiber. Twenty-two lasing lines with a spacing of ~100 GHz (close to the International Telecommunication Union grid) in the C band are generated at the watt level. In contrast to the normal cavity with competition between laser lines, the random distributed feedback cavity exhibits highly stable multiwavelength generation with a power-equalized uniform distribution, which is almost independent of power.

  12. Exact probability distribution function for the volatility of cumulative production

    NASA Astrophysics Data System (ADS)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting the production process.

  13. Estimating Non-Normal Latent Trait Distributions within Item Response Theory Using True and Estimated Item Parameters

    ERIC Educational Resources Information Center

    Sass, D. A.; Schmitt, T. A.; Walker, C. M.

    2008-01-01

    Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…

  14. Detecting and correcting for publication bias in meta-analysis - A truncated normal distribution approach.

    PubMed

    Zhu, Qiaohao; Carriere, K C

    2016-01-01

    Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detecting and correcting for publication bias in meta-analysis focuses mainly on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations in which publication bias may be induced by: (1) a small effect size or (2) a large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under the fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias, under various situations.
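
    A hedged sketch of the core idea in situation (2), where only studies clearing a significance cutoff are published, so the observed standardized effects follow a truncated normal distribution. The cutoff, the simulated data, and the one-parameter likelihood with known unit variance are schematic stand-ins for the estimators actually derived in the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Illustrative setup: standardized effects y ~ N(mu, 1), but only studies with
# y above the one-sided 5% cutoff c are observed ("published").
c = 1.645
rng = np.random.default_rng(1)
true_mu = 0.3
all_effects = rng.normal(true_mu, 1.0, size=5000)
observed = all_effects[all_effects > c]              # the published studies only

def neg_log_lik(mu):
    # Truncated-normal likelihood: phi(y - mu) / (1 - Phi(c - mu)) on y > c.
    return -(norm.logpdf(observed - mu).sum() - len(observed) * norm.logsf(c - mu))

fit = minimize_scalar(neg_log_lik, bounds=(-3, 3), method="bounded")
print(f"naive mean of published effects = {observed.mean():.3f}")
print(f"truncated-normal MLE of mu      = {fit.x:.3f}  (true value {true_mu})")
```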

  15. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
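
    The distinction can be made concrete with a short sketch: the quantile of a mixture of normal densities is found by inverting the mixture CDF numerically, whereas the corresponding linear combination of independent normal random variables is itself normal and has a different quantile. The weights and parameters below are arbitrary.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Two-component mixture: weight w from N(mu1, sd1^2), weight 1 - w from N(mu2, sd2^2).
w, mu1, sd1, mu2, sd2 = 0.3, 0.0, 1.0, 4.0, 2.0

def mixture_cdf(x):
    return w * norm.cdf(x, mu1, sd1) + (1 - w) * norm.cdf(x, mu2, sd2)

def mixture_quantile(p):
    # Invert the mixture CDF by root finding on a generous bracket.
    return brentq(lambda x: mixture_cdf(x) - p, -20, 30)

p = 0.95
q_mix = mixture_quantile(p)

# Contrast: the same weights applied to independent normal random variables give a
# normal variable with these parameters, hence a different 95th percentile.
mu_lin = w * mu1 + (1 - w) * mu2
sd_lin = np.sqrt(w**2 * sd1**2 + (1 - w) ** 2 * sd2**2)
q_lin = norm.ppf(p, mu_lin, sd_lin)

print(f"95th percentile of the mixture:            {q_mix:.3f}")
print(f"95th percentile of the linear combination: {q_lin:.3f}")
```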

  16. Breast cancer subtype distribution is different in normal weight, overweight, and obese women.

    PubMed

    Gershuni, Victoria; Li, Yun R; Williams, Austin D; So, Alycia; Steel, Laura; Carrigan, Elena; Tchou, Julia

    2017-06-01

    Obesity is associated with tumor promoting pathways related to insulin resistance and chronic low-grade inflammation which have been linked to various disease states, including cancer. Many studies have focused on the relationship between obesity and increased estrogen production, which contributes to the pathogenesis of estrogen receptor-positive breast cancers. The link between obesity and other breast cancer subtypes, such as triple-negative breast cancer (TNBC) and Her2/neu+ (Her2+) breast cancer, is less clear. We hypothesize that obesity may be associated with the pathogenesis of specific breast cancer subtypes resulting in a different subtype distribution than normal weight women. A single-institution, retrospective analysis of tumor characteristics of 848 patients diagnosed with primary operable breast cancer between 2000 and 2013 was performed to evaluate the association between BMI and clinical outcome. Patients were grouped based on their BMI at time of diagnosis stratified into three subgroups: normal weight (BMI = 18-24.9), overweight (BMI = 25-29.9), and obese (BMI > 30). The distribution of breast cancer subtypes across the three BMI subgroups was compared. Obese and overweight women were more likely to present with TNBC and normal weight women with Her2+ breast cancer (p = 0.008). We demonstrated, for the first time, that breast cancer subtype distribution varied significantly according to BMI status. Our results suggested that obesity might activate molecular pathways other than the well-known obesity/estrogen circuit in the pathogenesis of breast cancer. Future studies are needed to understand the molecular mechanisms that drive the variation in subtype distribution across BMI subgroups.

  17. Log Normal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of Alpha Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2008-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson – log normal (P – LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P – LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316

  18. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua

    2018-06-01

    The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of the uncoded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by that of a single log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity is more effective in resisting the channel fading caused by spatial correlation.
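
    A hedged sketch of the moment-matching step on which Wilkinson's method rests: the first two moments of a sum of correlated log-normal terms are computed exactly and matched to a single log-normal. The channel parameters are invented, and the bit-error-rate and capacity calculations of the paper are not reproduced.

```python
import numpy as np

def wilkinson_lognormal(mu, sigma, corr):
    """Match the sum of correlated log-normals exp(Y_i), Y_i ~ N(mu_i, sigma_i^2),
    to a single log-normal exp(Z), Z ~ N(mu_s, sigma_s^2), via its first two moments."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    # Exact first moment of the sum.
    m1 = np.sum(np.exp(mu + 0.5 * sigma**2))
    # Exact second moment of the sum (pairwise moments of jointly normal exponents).
    mu_ij = mu[:, None] + mu[None, :]
    var_ij = sigma[:, None]**2 + sigma[None, :]**2 + 2 * corr * np.outer(sigma, sigma)
    m2 = np.sum(np.exp(mu_ij + 0.5 * var_ij))
    sigma_s2 = np.log(m2 / m1**2)
    return np.log(m1) - 0.5 * sigma_s2, np.sqrt(sigma_s2)

# Example: four sub-channels with equal log-amplitude spread and a common correlation of 0.4.
mu = np.zeros(4)
sigma = np.full(4, 0.3)
corr = np.full((4, 4), 0.4)
np.fill_diagonal(corr, 1.0)

mu_s, sigma_s = wilkinson_lognormal(mu, sigma, corr)
print(f"approximating log-normal: mu = {mu_s:.3f}, sigma = {sigma_s:.3f}")

# Monte Carlo check of the matched mean.
rng = np.random.default_rng(0)
y = rng.multivariate_normal(mu, corr * np.outer(sigma, sigma), size=200_000)
print(f"simulated mean {np.exp(y).sum(axis=1).mean():.3f} "
      f"vs matched {np.exp(mu_s + 0.5 * sigma_s**2):.3f}")
```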

  19. An asymptotic analysis of the logrank test.

    PubMed

    Strawderman, R L

    1997-01-01

    Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.

  20. Sensitivity of the normalized difference vegetation index to subpixel canopy cover, soil albedo, and pixel scale

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.

    1990-01-01

    An analytical framework is provided for examining the physically based behavior of the normalized difference vegetation index (NDVI) in terms of the variability in bulk subpixel landscape components and with respect to variations in pixel scales, within the context of the stochastic-geometric canopy reflectance model. Analysis focuses on regional scale variability in horizontal plant density and soil background reflectance distribution. Modeling is generalized to different plant geometries and solar angles through the use of the nondimensional solar-geometric similarity parameter. Results demonstrate that, for Poisson-distributed plants and for one deterministic distribution, NDVI increases with increasing subpixel fractional canopy amount, decreasing soil background reflectance, and increasing shadows, at least within the limitations of the geometric reflectance model. The NDVI of a pecan orchard and a juniper landscape is presented and discussed.
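
    For reference, the index itself is a simple band ratio; a minimal sketch computing it from hypothetical red and near-infrared reflectances:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against division by zero

# Hypothetical reflectances for bare soil, partial canopy, and dense canopy pixels.
red = np.array([0.30, 0.12, 0.05])
nir = np.array([0.35, 0.40, 0.55])
print(ndvi(nir, red))   # NDVI rises with increasing canopy cover, as discussed above
```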

  1. EXIMS: an improved data analysis pipeline based on a new peak picking method for EXploring Imaging Mass Spectrometry data.

    PubMed

    Wijetunge, Chalini D; Saeed, Isaam; Boughton, Berin A; Spraggins, Jeffrey M; Caprioli, Richard M; Bacic, Antony; Roessner, Ute; Halgamuge, Saman K

    2015-10-01

    Matrix Assisted Laser Desorption Ionization-Imaging Mass Spectrometry (MALDI-IMS) in 'omics' data acquisition generates detailed information about the spatial distribution of molecules in a given biological sample. Various data processing methods have been developed for exploring the resultant high volume data. However, most of these methods process data in the spectral domain and do not make the most of the important spatial information available through this technology. Therefore, we propose a novel streamlined data analysis pipeline specifically developed for MALDI-IMS data utilizing significant spatial information for identifying hidden significant molecular distribution patterns in these complex datasets. The proposed unsupervised algorithm uses Sliding Window Normalization (SWN) and a new spatial distribution based peak picking method developed based on Gray level Co-Occurrence (GCO) matrices followed by clustering of biomolecules. We also use gist descriptors and an improved version of GCO matrices to extract features from molecular images and minimum medoid distance to automatically estimate the number of possible groups. We evaluated our algorithm using a new MALDI-IMS metabolomics dataset of a plant (Eucalypt) leaf. The algorithm revealed hidden significant molecular distribution patterns in the dataset, which the current Component Analysis and Segmentation Map based approaches failed to extract. We further demonstrate the performance of our peak picking method over other traditional approaches by using a publicly available MALDI-IMS proteomics dataset of a rat brain. Although SWN did not show any significant improvement as compared with using no normalization, the visual assessment showed an improvement as compared to using the median normalization. The source code and sample data are freely available at http://exims.sourceforge.net/. awgcdw@student.unimelb.edu.au or chalini_w@live.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is therefore often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
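
    A hedged sketch of the channel-output step described here: channel outputs are dot products of channel templates with images, and the Hotelling template follows from the class means and the average channel covariance. The simple difference-of-Gaussians channels and white-noise images below are stand-ins, not the anthropomorphic channels or simulated SPECT data used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 64
yy, xx = np.mgrid[:npix, :npix] - npix / 2
r2 = xx**2 + yy**2

# Three rotationally symmetric difference-of-Gaussians band-pass channels
# (a stand-in for anthropomorphic channel models such as Gabor channels).
widths = [(2.0, 4.0), (4.0, 8.0), (8.0, 16.0)]
channels = np.stack([
    np.exp(-r2 / (2 * s1**2)) / (2 * np.pi * s1**2)
    - np.exp(-r2 / (2 * s2**2)) / (2 * np.pi * s2**2)
    for s1, s2 in widths
]).reshape(len(widths), -1)                      # (n_channels, n_pixels)

# White-noise backgrounds plus a faint Gaussian signal for the signal-present class.
signal = 5.0 * np.exp(-r2 / (2 * 3.0**2)).ravel()
def noise(n):
    return rng.normal(0.0, 20.0, size=(n, npix * npix))

n_train = 500
v_absent = noise(n_train) @ channels.T           # channel outputs, signal-absent class
v_present = (noise(n_train) + signal) @ channels.T

# Hotelling template in channel space from the class means and pooled covariance.
delta = v_present.mean(axis=0) - v_absent.mean(axis=0)
k = 0.5 * (np.cov(v_present.T) + np.cov(v_absent.T))
w = np.linalg.solve(k, delta)

# Apply the observer to fresh test images and report a detectability index.
t_absent = noise(n_train) @ channels.T @ w
t_present = (noise(n_train) + signal) @ channels.T @ w
d = (t_present.mean() - t_absent.mean()) / np.sqrt(0.5 * (t_present.var() + t_absent.var()))
print(f"CHO detectability index d' = {d:.2f}")
```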

  3. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

    In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
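
    For context, a hedged sketch of the skew-normal density and a maximum-likelihood fit; it uses scipy's generic numerical fit rather than the EM algorithm described in the paper, and the trait data are simulated.

```python
import numpy as np
from scipy.stats import skewnorm, norm

# Skew-normal density: f(x) = (2/omega) * phi((x - xi)/omega) * Phi(alpha*(x - xi)/omega);
# alpha = 0 recovers the ordinary normal distribution as a special case.
xi, omega, alpha = 0.0, 1.0, 4.0
x = np.linspace(-2.0, 4.0, 5)
manual = (2 / omega) * norm.pdf((x - xi) / omega) * norm.cdf(alpha * (x - xi) / omega)
print(np.allclose(manual, skewnorm.pdf(x, alpha, loc=xi, scale=omega)))   # True

# Simulate skewed trait data and recover the three parameters by maximum likelihood.
rng = np.random.default_rng(3)
data = skewnorm.rvs(alpha, loc=xi, scale=omega, size=2000, random_state=rng)
a_hat, loc_hat, scale_hat = skewnorm.fit(data)
print(f"fitted alpha = {a_hat:.2f}, xi = {loc_hat:.2f}, omega = {scale_hat:.2f}")
```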

  4. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.

  5. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values.

    PubMed

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-01-30

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
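
    A hedged sketch of the selection rule described above: scan the Box-Cox parameter λ over a grid and keep the value that maximizes the Shapiro-Wilk P-value of the transformed values. The simulated, right-skewed stand-in for SUVmax measurements is for illustration only.

```python
import numpy as np
from scipy.stats import boxcox, shapiro, lognorm

# Simulated positive, right-skewed values standing in for tumor SUVmax measurements.
rng = np.random.default_rng(7)
suv = lognorm.rvs(s=0.6, scale=4.0, size=57, random_state=rng)

lambdas = np.linspace(-2, 2, 161)
pvals = [shapiro(boxcox(suv, lmbda=lam)).pvalue for lam in lambdas]
best = lambdas[int(np.argmax(pvals))]

print(f"optimal Box-Cox lambda = {best:.2f}, Shapiro-Wilk p = {max(pvals):.3f}")
print(f"log transform (lambda = 0): Shapiro-Wilk p = {pvals[int(np.argmin(np.abs(lambdas)))]:.3f}")
```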

  6. Optimal transformations leading to normal distributions of positron emission tomography standardized uptake values

    NASA Astrophysics Data System (ADS)

    Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert

    2018-02-01

    The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad-hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-Fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.

  7. The price momentum of stock in distribution

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Wang, Longfei

    2018-02-01

    In this paper, a new momentum of stock in distribution is proposed and applied in real investment. Firstly, assuming that a stock behaves as a multi-particle system, its share-exchange distribution and cost distribution are introduced. Secondly, an estimation of the share-exchange distribution is given with daily transaction data using the 3σ rule of the normal distribution. Meanwhile, an iterative method is given to estimate the cost distribution. Based on the cost distribution, a new momentum is proposed for the stock system. Thirdly, an empirical test is given to compare the new momentum with others using a contrarian strategy. The results show that the new momentum outperforms the others in many respects. Furthermore, the entropy of a stock is introduced according to its cost distribution.

  8. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, Stochastic ACO (SACO) algorithm and Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of the spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions i.e. the Rosin-Rammer (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function are estimated under dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of the spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as the general distribution functions to retrieve the PSD of spheroidal particles using PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only considering the variety of the length of the rotational semi-axis.

  9. Role of environmental variability in the evolution of life history strategies.

    PubMed

    Hastings, A; Caswell, H

    1979-09-01

    We reexamine the role of environmental variability in the evolution of life history strategies. We show that normally distributed deviations in the quality of the environment should lead to normally distributed deviations in the logarithm of year-to-year survival probabilities, which leads to interesting consequences for the evolution of annual and perennial strategies and reproductive effort. We also examine the effects of using differing criteria to determine the outcome of selection. Some predictions of previous theory are reversed, allowing distinctions between r and K theory and a theory based on variability. However, these distinctions require information about both the environment and the selection process not required by current theory.

  10. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from two types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratios; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.

  11. A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test.

    PubMed

    Rochon, Justine; Kieser, Meinhard

    2011-11-01

    Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
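
    A hedged sketch of the kind of simulation used in such investigations: estimate the conditional Type I error of the one-sample t-test among samples that pass a Shapiro-Wilk pretest, with exponential data whose true mean equals the hypothesized value. The sample size, nominal levels, and replicate count are arbitrary choices.

```python
import numpy as np
from scipy.stats import shapiro, ttest_1samp, expon

rng = np.random.default_rng(11)
n, alpha, alpha_pre, n_sim = 20, 0.05, 0.05, 20_000
mu0 = 1.0                                       # true mean of Exp(1), so H0 holds

rejections, passed = 0, 0
for _ in range(n_sim):
    x = expon.rvs(scale=mu0, size=n, random_state=rng)
    if shapiro(x).pvalue > alpha_pre:           # sample "passes" the normality pretest
        passed += 1
        if ttest_1samp(x, popmean=mu0).pvalue < alpha:
            rejections += 1

print(f"fraction of samples passing the pretest: {passed / n_sim:.3f}")
print(f"conditional Type I error of the t-test : {rejections / max(passed, 1):.3f}")
```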

  12. Transformation of arbitrary distributions to the normal distribution with application to EEG test-retest reliability.

    PubMed

    van Albada, S J; Robinson, P A

    2007-04-15

    Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures from repeat recordings of 32 subjects which are highly non-normal. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
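
    The idea of mapping an arbitrary distribution onto the Gaussian can be illustrated with a generic rank-based (empirical probability integral) version of such a transformation; this is only a sketch of the general principle, not the exact transformation derived by the authors.

```python
import numpy as np
from scipy.stats import norm, rankdata, skew

def rank_inverse_normal(x, c=0.5):
    """Map data toward normality via ranks: Phi^{-1}((rank - c) / (n - 2c + 1))."""
    x = np.asarray(x, dtype=float)
    ranks = rankdata(x)                      # average ranks handle ties
    return norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

# Strongly skewed example data (log-normal), before and after transformation.
rng = np.random.default_rng(5)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
z = rank_inverse_normal(x)
print(f"skewness before: {skew(x):.2f}, after: {skew(z):.2f}")
```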

  13. Calculating Student Grades.

    ERIC Educational Resources Information Center

    Allswang, John M.

    1986-01-01

    This article provides two short microcomputer gradebook programs. The programs, written in BASIC for the IBM-PC and Apple II, provide statistical information about class performance and calculate grades either on a normal distribution or based on teacher-defined break points. (JDH)

  14. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and Satorra Bentler’s mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler’s statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  15. Marine forearc extension in the Hikurangi Margin: New insights from high-resolution 3D seismic data

    NASA Astrophysics Data System (ADS)

    Böttner, Christoph; Gross, Felix; Geersen, Jacob; Mountjoy, Joshu; Crutchley, Gareth; Krastel, Sebastian

    2017-04-01

    In subduction zones upper-plate normal faults have long been considered a tectonic feature primarily associated with erosive margins. However, increasing data coverage has proven that similar features also occur in accretionary margins, such as Cascadia, Makran, Nankai or Central Chile, where kinematics are dominated by compression. Considering their wide distribution there is, without doubt, a significant lack of qualitative and quantitative knowledge regarding the role and importance of normal faults and zones of extension for the seismotectonic evolution of accretionary margins. We use a high-resolution 3D P-Cable seismic volume from the Hikurangi Margin acquired in 2014 to analyze the spatial distribution and mechanisms of upper-plate normal faulting. The study area is located at the upper continental slope in the area of the Tuaheni landslide complex. In detail we aim to (1) map the spatial distribution of normal faults and characterize their vertical throws, strike directions, and dip angles; (2) investigate their possible influence on fluid migration in an area, where gas hydrates are present; (3) discuss the mechanisms that may cause extension of the upper-slope in the study area. Beneath the Tuaheni Landslide Complex we mapped about 200 normal faults. All faults have low displacements (<15 m) and dip at high (> 65°) angles. About 71% of the faults dip landward. We found two main strike directions, with the majority of faults striking 350-10°, parallel to the deformation front. A second group of faults strikes 40-60°. The faults crosscut the BSR, which indicates the base of the gas hydrate zone. In combination with seismically imaged bright-spots and pull-up structures, this indicates that the normal faults effectively transport fluids vertically across the base of the gas hydrate zone. Localized uplift, as indicated by the presence of the Tuaheni Ridge, might support normal faulting in the study area. In addition, different subduction rates across the margin may also favor extension between the segments. Future work will help to further untangle the mechanisms that cause extension of the upper continental slope.

  16. Robust Mediation Analysis Based on Median Regression

    PubMed Central

    Yuan, Ying; MacKinnon, David P.

    2014-01-01

    Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
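
    A hedged sketch of the core computation, using quantile regression at the median (statsmodels) for the two mediation equations and the product of the resulting coefficients as the indirect effect. The simulated heavy-tailed data, and the omission of the resampling step one would normally use for inference, are simplifications rather than the authors' full procedure.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

# Simulate a simple mediation chain X -> M -> Y with heavy-tailed (t with 3 df) errors.
rng = np.random.default_rng(21)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.standard_t(df=3, size=n)
y = 0.4 * m + 0.2 * x + rng.standard_t(df=3, size=n)

# Median (0.5-quantile) regressions for the two mediation equations.
a_fit = QuantReg(m, sm.add_constant(x)).fit(q=0.5)
b_fit = QuantReg(y, sm.add_constant(np.column_stack([x, m]))).fit(q=0.5)

a = a_fit.params[1]     # effect of X on M
b = b_fit.params[2]     # effect of M on Y, adjusting for X
print(f"a = {a:.3f}, b = {b:.3f}, indirect effect a*b = {a * b:.3f}")
```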

  17. Cortical Iron Reflects Severity of Alzheimer’s Disease

    PubMed Central

    van Duijn, Sara; Bulk, Marjolein; van Duinen, Sjoerd G.; Nabuurs, Rob J.A.; van Buchem, Mark A.; van der Weerd, Louise; Natté, Remco

    2017-01-01

    Abnormal iron distribution in the isocortex is increasingly recognized as an in vivo marker for Alzheimer’s disease (AD). However, the contribution of iron accumulation to the AD pathology is still poorly understood. In this study, we investigated: 1) frontal cortical iron distribution in AD and normal aging and 2) the relation between iron distribution and degree of AD pathology. We used formalin fixed paraffin embedded frontal cortex from 10 AD patients, 10 elder, 10 middle aged, and 10 young controls and visualized iron with a modified Perl’s histochemical procedure. AD and elderly subjects were not different with respect to age and sex distribution. Iron distribution in the frontal cortex was not affected by normal aging but was clearly different between AD and controls. AD showed accumulation of iron in plaques, activated microglia, and, in the most severe cases, in the mid-cortical layers along myelinated fibers. The degree of altered iron accumulations was correlated to the amount of amyloid-β plaques and tau pathology in the same block, as well as to Braak stage (p < 0.001). AD and normal aging show different iron and myelin distribution in frontal cortex. These changes appear to occur after the development of the AD pathological hallmarks. These findings may help the interpretation of high resolution in vivo MRI and suggest the potential of using changes in iron-based MRI contrast to indirectly determine the degree of AD pathology in the frontal cortex. PMID:29081415

  18. Intracellular distribution of Photofrin in malignant and normal endothelial cell lines.

    PubMed

    Saczko, J; Mazurkiewicz, M; Chwiłkowska, A; Kulbacka, J; Kramer, G; Ługowski, M; Snietura, M; Banaś, T

    2007-01-01

    Compared to current treatments including surgery, radiation therapy, and chemotherapy, PDT offers the advantage of an effective and selective method of destroying diseased tissues without damaging surrounding healthy tissues. One of the aspects of antitumour effectiveness of PDT is related to the distribution of photosensitizing drugs. The localization of photosensitizers in cytoplasmic organelles during PDT plays a major role in the cell destruction; therefore, intracellular localization of Ph in malignant and normal cells was investigated. The cell lines used throughout the study were: human malignant A549, MCF-7, Me45 and normal endothelial cell line HUV-EC-C. After incubation with Ph cells were examined using fluorescence and confocal microscopy to visualize the photosensitizer accumulation. For cytoplasm and mitochondria identification, cells were stained with CellTracker Green and MitoTracker Green, respectively. Distribution of Ph was different in malignant and normal cells and dependent on the incubation time. The maximal concentration of Ph in two malignant cell lines (A549 and MCF-7) was observed after 4 hours of incubation, and the most intensive signal was observed around the nuclear envelope. Intracellular distribution of Ph in the Me45 cell line showed that the fluorescence emitted by Ph overlaid that from MitoTracker. This indicates preferential accumulation of the sensitizer in mitochondria. Our results based on the mitochondrial localization support the idea that PDT can contribute to elimination of malignant cells by inducing apoptosis, which is of physiological significance.

  19. Three-Dimensional Radiobiologic Dosimetry: Application of Radiobiologic Modeling to Patient-Specific 3-Dimensional Imaging–Based Internal Dosimetry

    PubMed Central

    Prideaux, Andrew R.; Song, Hong; Hobbs, Robert F.; He, Bin; Frey, Eric C.; Ladenson, Paul W.; Wahl, Richard L.; Sgouros, George

    2010-01-01

    Phantom-based and patient-specific imaging-based dosimetry methodologies have traditionally yielded mean organ-absorbed doses or spatial dose distributions over tumors and normal organs. In this work, radiobiologic modeling is introduced to convert the spatial distribution of absorbed dose into biologically effective dose and equivalent uniform dose parameters. The methodology is illustrated using data from a thyroid cancer patient treated with radioiodine. Methods Three registered SPECT/CT scans were used to generate 3-dimensional images of radionuclide kinetics (clearance rate) and cumulated activity. The cumulated activity image and corresponding CT scan were provided as input into an EGSnrc-based Monte Carlo calculation: The cumulated activity image was used to define the distribution of decays, and an attenuation image derived from CT was used to define the corresponding spatial tissue density and composition distribution. The rate images were used to convert the spatial absorbed dose distribution to a biologically effective dose distribution, which was then used to estimate a single equivalent uniform dose for segmented volumes of interest. Equivalent uniform dose was also calculated from the absorbed dose distribution directly. Results We validate the method using simple models; compare the dose-volume histogram with a previously analyzed clinical case; and give the mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for an illustrative case of a pediatric thyroid cancer patient with diffuse lung metastases. The mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for the tumor were 57.7, 58.5, and 25.0 Gy, respectively. Corresponding values for normal lung tissue were 9.5, 9.8, and 8.3 Gy, respectively. Conclusion The analysis demonstrates the impact of radiobiologic modeling on response prediction. The 57% reduction in the equivalent dose value for the tumor reflects a high level of dose nonuniformity in the tumor and a corresponding reduced likelihood of achieving a tumor response. Such analyses are expected to be useful in treatment planning for radionuclide therapy. PMID:17504874

  20. Multi-Armed RCTs: A Design-Based Framework. NCEE 2017-4027

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2017-01-01

    Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies (Imbens and Rubin, 2015; Schochet, 2015, 2016). The estimators are derived using the building blocks of experimental designs with minimal assumptions, and are unbiased and normally distributed in large samples…

  1. Bayesian adaptive bandit-based designs using the Gittins index for multi-armed trials with normally distributed endpoints.

    PubMed

    Smith, Adam L; Villar, Sofía S

    2018-01-01

    Adaptive designs for multi-armed clinical trials have become increasingly popular recently because of their potential to shorten development times and to increase patient response. However, developing response-adaptive designs that offer patient benefit while ensuring the resulting trial provides a statistically rigorous and unbiased comparison of the different treatments included is highly challenging. In this paper, the theory of Multi-Armed Bandit Problems is used to define near-optimal adaptive designs in the context of a clinical trial with a normally distributed endpoint with known variance. We report the operating characteristics (type I error, power, bias) and patient benefit of these approaches and alternative designs using simulation studies based on an ongoing trial. These results are then compared to those recently published in the context of Bernoulli endpoints. Many limitations and advantages are similar in both cases, but there are also important differences, especially with respect to type I error control. This paper proposes a simulation-based testing procedure to correct for the observed type I error inflation that bandit-based and adaptive rules can induce.

  2. Field size, length, and width distributions based on LACIE ground truth data. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Badhwar, G.

    1980-01-01

    The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.

  3. Amyloplasts That Sediment in Protonemata of the Moss Ceratodon purpureus Are Nonrandomly Distributed in Microgravity1

    PubMed Central

    Kern, Volker D.; Smith, Jeffrey D.; Schwuchow, Jochen M.; Sack, Fred D.

    2001-01-01

    Little is known about whether or how plant cells regulate the position of heavy organelles that sediment toward gravity. Dark-grown protonemata of the moss Ceratodon purpureus displays a complex plastid zonation in that only some amyloplasts sediment along the length of the tip cell. If gravity is the major force determining the position of amyloplasts that sediment, then these plastids should be randomly distributed in space. Instead, amyloplasts were clustered in the subapical region in microgravity. Cells rotated on a clinostat on earth had a roughly similar non-random plastid distribution. Subapical clusters were also found in ground controls that were inverted and kept stationary, but the distribution profile differed considerably due to amyloplast sedimentation. These findings indicate the existence of as yet unknown endogenous forces and mechanisms that influence amyloplast position and that are normally masked in stationary cells grown on earth. It is hypothesized that a microtubule-based mechanism normally compensates for g-induced drag while still allowing for regulated amyloplast sedimentation. PMID:11299388

  4. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
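
    The ratio of two normally distributed variables mentioned at the end has a simple special case that is easy to verify numerically: for independent zero-mean normals with equal variances the ratio is standard Cauchy, which already shows the heavy tails such exact densities can have. A minimal simulation sketch:

```python
import numpy as np
from scipy.stats import cauchy

# Ratio of two independent zero-mean, unit-variance normals is standard Cauchy.
rng = np.random.default_rng(9)
x = rng.normal(size=500_000)
y = rng.normal(size=500_000)
r = x / y

qs = [0.75, 0.90, 0.95, 0.99]
print("empirical quantiles of x/y:", np.round(np.quantile(r, qs), 2))
print("standard Cauchy quantiles: ", np.round(cauchy.ppf(qs), 2))
```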

  5. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  6. Study of sea-surface slope distribution and its effect on radar backscatter based on Global Precipitation Measurement Ku-band precipitation radar measurements

    NASA Astrophysics Data System (ADS)

    Yan, Qiushuang; Zhang, Jie; Fan, Chenqing; Wang, Jing; Meng, Junmin

    2018-01-01

    The collocated normalized radar backscattering cross-section measurements from the Global Precipitation Measurement (GPM) Ku-band precipitation radar (KuPR) and the winds from moored buoys are used to study the effect of different sea-surface slope probability density functions (PDFs), including the Gaussian PDF, the Gram-Charlier PDF, and the Liu PDF, on the geometrical optics (GO) model predictions of the radar backscatter at low incidence angles (0 deg to 18 deg) at different sea states. First, the peakedness coefficient in the Liu distribution is determined using the collocations at the normal incidence angle, and the results indicate that the peakedness coefficient is a nonlinear function of wind speed. Then, the performance of the modified Liu distribution (i.e., the Liu distribution with the estimated peakedness coefficient), the Gaussian distribution, and the Gram-Charlier distribution is analyzed. The results show that the GO model predictions with the modified Liu distribution agree best with the KuPR measurements, followed by the predictions with the Gaussian distribution, while the predictions with the Gram-Charlier distribution show larger differences because the total or slick-filtered, rather than the radar-filtered, probability density is included in that distribution. The best-performing distribution also changes with incidence angle and wind speed.

  7. Atlas-derived perfusion correlates of white matter hyperintensities in patients with reduced cardiac output.

    PubMed

    Jefferson, Angela L; Holland, Christopher M; Tate, David F; Csapo, Istvan; Poppas, Athena; Cohen, Ronald A; Guttmann, Charles R G

    2011-01-01

    Reduced cardiac output is associated with increased white matter hyperintensities (WMH) and executive dysfunction in older adults, which may be secondary to relations between systemic and cerebral perfusion. This study preliminarily describes the regional distribution of cerebral WMH in the context of a normal cerebral perfusion atlas and aims to determine if these variables are associated with reduced cardiac output. Thirty-two participants (72 ± 8 years old, 38% female) with cardiovascular risk factors or disease underwent structural MRI acquisition at 1.5T using a standard imaging protocol that included FLAIR sequences. WMH distribution was examined in common anatomical space using voxel-based morphometry and as a function of normal cerebral perfusion patterns by overlaying a single photon emission computed tomography (SPECT) atlas. Doppler echocardiogram data were used to dichotomize the participants on the basis of low (n=9) and normal (n=23) cardiac output. Global WMH count and volume did not differ between the low and normal cardiac output groups; however, atlas-derived SPECT perfusion values in regions of hyperintensities were reduced in the low versus normal cardiac output group (p<0.001). Our preliminary data suggest that participants with low cardiac output have WMH in regions of relatively reduced perfusion, while normal cardiac output participants have WMH in regions with relatively higher regional perfusion. This spatial perfusion distribution difference for areas of WMH may occur in the context of reduced systemic perfusion, which subsequently impacts cerebral perfusion and contributes to subclinical or clinical microvascular damage. Copyright © 2009 Elsevier Inc. All rights reserved.

  8. On the intrinsic shape of the gamma-ray spectrum for Fermi blazars

    NASA Astrophysics Data System (ADS)

    Kang, Shi-Ju; Wu, Qingwen; Zheng, Yong-Gang; Yin, Yue; Song, Jia-Li; Zou, Hang; Feng, Jian-Chao; Dong, Ai-Jun; Wu, Zhong-Zu; Zhang, Zhi-Bin; Wu, Lin-Hui

    2018-05-01

    The curvature of the γ-ray spectrum in blazars may reflect the intrinsic distribution of emitting electrons, which will further give some information on the possible acceleration and cooling processes in the emitting region. The γ-ray spectra of Fermi blazars are normally fitted either by a single power-law (PL) or by a log-normal (so-called logarithmic parabola, LP) form. The possible reason for this difference is not clear. We statistically explore this issue based on the different observational properties of 1419 Fermi blazars in the 3LAC Clean Sample. We find that the γ-ray flux (100 MeV–100 GeV) and variability index follow bimodal distributions for PL and LP blazars, where the γ-ray flux and variability index show a positive correlation. However, the distributions of γ-ray luminosity and redshift follow a unimodal distribution. Our results suggest that the bimodal distribution of γ-ray fluxes for LP and PL blazars may not be intrinsic: all blazars may have an intrinsically curved γ-ray spectrum, and the PL spectrum may simply be a fitting effect caused by fewer photons.

  9. Landscape-scale spatial abundance distributions discriminate core from random components of boreal lake bacterioplankton.

    PubMed

    Niño-García, Juan Pablo; Ruiz-González, Clara; Del Giorgio, Paul A

    2016-12-01

    Aquatic bacterial communities harbour thousands of coexisting taxa. To meet the challenge of discriminating between a 'core' and a sporadically occurring 'random' component of these communities, we explored the spatial abundance distribution of individual bacterioplankton taxa across 198 boreal lakes and their associated fluvial networks (188 rivers). We found that all taxa could be grouped into four distinct categories based on model statistical distributions (normal-like, bimodal, logistic and lognormal). The distribution patterns across lakes and their associated river networks showed that lake communities are composed of a core of taxa whose distribution appears to be linked to in-lake environmental sorting (normal-like and bimodal categories), and a large fraction of mostly rare bacteria (94% of all taxa) whose presence appears to be largely random and linked to downstream transport in aquatic networks (logistic and lognormal categories). These rare taxa are thus likely to reflect species sorting at upstream locations, providing a perspective of the conditions prevailing in entire aquatic networks rather than only in lakes. © 2016 John Wiley & Sons Ltd/CNRS.

  10. A Study of Land Surface Temperature Retrieval and Thermal Environment Distribution Based on Landsat-8 in Jinan City

    NASA Astrophysics Data System (ADS)

    Dong, Fang; Chen, Jian; Yang, Fan

    2018-01-01

    Based on medium-resolution Landsat 8 OLI/TIRS imagery, the land surface temperature distribution in four seasons for the urban area of Jinan City was obtained using an atmospheric correction method for the retrieval of land surface temperature. The spatio-temporal distribution characteristics and development trend of the urban thermal environment, its seasonal variation, and the relationship between land surface temperature and the normalized difference vegetation index (NDVI) were analyzed quantitatively. The results show that high-temperature areas are concentrated in Jinan and tend to expand from east to west, and they reveal a negative correlation between the land surface temperature distribution and NDVI. These findings provide theoretical references and a scientific basis for improving the ecological environment of Jinan City, strengthening scientific planning, and making overall plans that address climate change.

  11. Development of evaluation technique of GMAW welding quality based on statistical analysis

    NASA Astrophysics Data System (ADS)

    Feng, Shengqiang; Terasaki, Hidenri; Komizo, Yuichi; Hu, Shengsun; Chen, Donggao; Ma, Zhihua

    2014-11-01

    Nondestructive techniques for appraising gas metal arc welding (GMAW) faults play a very important role in on-line quality control and prediction of the GMAW process. Existing on-line welding quality control and prediction approaches have several disadvantages, such as high cost, low efficiency, complexity, and strong sensitivity to the environment. An enhanced, efficient technique for evaluating welding faults based on the Mahalanobis distance (MD) and the normal distribution is presented. In addition, a new piece of equipment, designated the weld quality tester (WQT), is developed based on the proposed evaluation technique. MD is superior to other multidimensional distances such as the Euclidean distance because the covariance matrix used for calculating MD takes into account correlations in the data and scaling. The values of MD obtained from the welding current and arc voltage are assumed to follow a normal distribution, characterized by its mean µ and standard deviation σ. In the proposed evaluation technique used by the WQT, values of MD located in the range from zero to µ + 3σ are regarded as "good". Two experiments, which involve changing the flow of shielding gas and smearing paint on the surface of the substrate, were conducted to verify the sensitivity of the proposed evaluation technique and the feasibility of the WQT. The experimental results demonstrate the usefulness of the WQT for evaluating welding quality. The proposed technique can be applied to implement on-line welding quality control and prediction, which is of great importance for designing novel equipment for weld quality detection.
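
    As a rough illustration of the evaluation rule described above (Mahalanobis distances of current/voltage samples assumed normal, with values up to µ + 3σ regarded as good), the following sketch is offered; the signal values, covariance, and threshold convention are illustrative assumptions, not the WQT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference data from "good" welds: columns are welding current (A) and arc voltage (V).
reference = rng.multivariate_normal(mean=[220.0, 24.0],
                                    cov=[[25.0, 3.0], [3.0, 1.0]], size=500)

def mahalanobis(samples, reference):
    """Mahalanobis distance of each sample from the reference distribution."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    diff = samples - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

# Fit a normal distribution to the MD values of the reference welds.
md_ref = mahalanobis(reference, reference)
mu_md, sigma_md = md_ref.mean(), md_ref.std(ddof=1)
threshold = mu_md + 3.0 * sigma_md          # values in [0, mu + 3*sigma] are "good"

# Evaluate a new weld record (here: a slightly drifted arc voltage).
new_weld = np.array([[221.0, 27.5]])
md_new = mahalanobis(new_weld, reference)[0]
print(f"MD = {md_new:.2f}, threshold = {threshold:.2f}, "
      f"quality = {'good' if md_new <= threshold else 'faulty'}")
```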

  12. Study of vitamin A distribution in rats by laser induced fluorescence spectroscopy

    NASA Astrophysics Data System (ADS)

    Akhmeteli, K. T.; Ekaladze, E. N.; Jaliashvli, Z. V.; Medoidze, T. D.; Melikishvili, Z. G.; Merkviladze, N. Z.; Papava, M. B.; Tushurashvili, P. R.

    2008-06-01

    We applied laser-induced fluorescence spectroscopy (LIFS) to investigate intestinal and liver tissues of normal male Wistar rats fed with vitamin A. A special procedure based on fitting of spectral intensity functions was developed for the recognition of vitamin A in different tissues. Based on this procedure, it is demonstrated that LIFS can be used to monitor vitamin A deposition and distribution in the body of the rat, which is essential for understanding the mechanism of formation of vitamin A-rich droplets as well as the mechanism of vitamin A mobilization.

  13. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    PubMed

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error for the H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and for the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of the observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the rates of empirical type I error with respect to the set alpha level of 0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level of 0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
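
    A minimal sketch of the transformation step only (not the H-E or MLVC linkage tests themselves): phenotypes drawn from a chi2 distribution with 2 degrees of freedom, as in the simulations above, are Box-Cox transformed with the power estimated by maximum likelihood, and their skewness and kurtosis are compared before and after. Sample size and seed are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Skewed phenotypes, as in the simulations above: chi-square with 2 degrees of freedom.
pheno = rng.chisquare(df=2, size=2000)

# Box-Cox requires strictly positive data; scipy estimates lambda by maximum likelihood.
transformed, lam = stats.boxcox(pheno)

print(f"estimated Box-Cox lambda: {lam:.3f}")
print(f"skewness before: {stats.skew(pheno):.3f}, after: {stats.skew(transformed):.3f}")
print(f"excess kurtosis before: {stats.kurtosis(pheno):.3f}, "
      f"after: {stats.kurtosis(transformed):.3f}")
```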

  14. An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning

    PubMed Central

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-01-01

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197

  15. An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning.

    PubMed

    Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi

    2013-08-21

    The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase.
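
    A rough sketch of the decision step described in the two records above: a kurtosis test on the RSS samples of one reference point selects either a single Gaussian or a two-component mixture. The simulated RSS values, the 0.05 significance level, and the use of scikit-learn's GaussianMixture are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Simulated RSS readings (dBm) at one reference point showing two peaks,
# e.g. caused by intermittent obstruction of the access point.
rss = np.concatenate([rng.normal(-62.0, 1.5, 400), rng.normal(-70.0, 1.5, 200)])

# Kurtosis test: a clearly non-normal shape triggers the double-peak model.
stat, p_value = stats.kurtosistest(rss)
if p_value < 0.05:
    gmm = GaussianMixture(n_components=2, random_state=0).fit(rss.reshape(-1, 1))
    means, weights = gmm.means_.ravel(), gmm.weights_
    print(f"double-peak model: means {means.round(1)}, weights {weights.round(2)}")
else:
    print(f"single Gaussian: mean {rss.mean():.1f} dBm, std {rss.std(ddof=1):.1f} dB")
```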

  16. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviations from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
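
    The normal distributional approach referred to above can be sketched as follows: proportions below a clinical cut-point are derived from fitted means and standard deviations rather than from direct counts. The birthweight summaries and the 2.5 kg cut-point are illustrative assumptions, and the skew-normal extension and its precision estimates are not shown.

```python
from scipy import stats

def distributional_proportion(mean, sd, cutpoint):
    """Proportion below the cut-point implied by a fitted normal distribution."""
    return stats.norm.cdf(cutpoint, loc=mean, scale=sd)

# Illustrative birthweight summaries (kg) for two groups and the 2.5 kg cut-point
# conventionally used for low birthweight.
p_exposed = distributional_proportion(mean=3.20, sd=0.55, cutpoint=2.5)
p_control = distributional_proportion(mean=3.45, sd=0.50, cutpoint=2.5)

print(f"estimated proportion below 2.5 kg: exposed {p_exposed:.3f}, control {p_control:.3f}")
print(f"difference in proportions: {p_exposed - p_control:.3f}")
```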

  17. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
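
    A compact sketch of the kind of simulation described above: an integrate-and-fire generator driven by noise of three different shapes (matched in mean and variance), with inter-impulse intervals collected for comparison. The drift, threshold, and noise scale are arbitrary choices, not those of the original model.

```python
import numpy as np

rng = np.random.default_rng(3)

def interval_distribution(noise_sampler, n_steps=100_000, drift=0.02, threshold=1.0):
    """Integrate-and-fire: accumulate drift + noise, fire and reset at threshold."""
    intervals, level, last_spike = [], 0.0, 0
    noise = noise_sampler(n_steps)
    for t in range(n_steps):
        level += drift + noise[t]
        if level >= threshold:
            intervals.append(t - last_spike)
            last_spike, level = t, 0.0
    return np.array(intervals)

# Three noise distributions with matched mean (0) and variance.
samplers = {
    "normal":  lambda n: rng.normal(0.0, 0.05, n),
    "gamma":   lambda n: rng.gamma(1.0, 0.05, n) - 0.05,        # first-order gamma, centred
    "uniform": lambda n: rng.uniform(-0.05 * np.sqrt(3), 0.05 * np.sqrt(3), n),
}

for name, sampler in samplers.items():
    iv = interval_distribution(sampler)
    cv = iv.std(ddof=1) / iv.mean()
    print(f"{name:8s}: n={iv.size:5d}, mean interval={iv.mean():6.1f} steps, CV={cv:.2f}")
```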

  18. Multivariate normality

    NASA Technical Reports Server (NTRS)

    Crutcher, H. L.; Falls, L. W.

    1976-01-01

    Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
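
    The univariate chi-square test of normality mentioned above can be sketched as follows, using equiprobable bins under a normal distribution fitted by the sample mean and standard deviation; the data and bin count are illustrative, and the multivariate extension with its tables and graphs is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.normal(loc=10.0, scale=2.0, size=300)

# Equiprobable bins under a normal distribution fitted by sample mean and SD.
mu, sigma = data.mean(), data.std(ddof=1)
k = 10
edges = stats.norm.ppf(np.linspace(0.0, 1.0, k + 1), loc=mu, scale=sigma)
observed = np.array([((data >= lo) & (data < hi)).sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])
expected = np.full(k, len(data) / k)

# Two fitted parameters cost two extra degrees of freedom (ddof=2).
chi2, p_value = stats.chisquare(observed, f_exp=expected, ddof=2)
print(f"chi-square = {chi2:.2f}, dof = {k - 1 - 2}, p = {p_value:.3f}")
```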

  19. Obesity-Related Metabolic Risk in Sedentary Hispanic Adolescent Girls with Normal BMI.

    PubMed

    van der Heijden, Gert-Jan; Wang, Zhiyue J; Chu, Zili D; Haymond, Morey; Sauer, Pieter J J; Sunehag, Agneta L

    2018-06-15

    Hispanic adolescent girls with normal BMI frequently have high body fat %. Without knowledge of body fat content and distribution, their risk for metabolic complications is unknown. We measured metabolic risk indicators and abdominal fat distribution in post-pubertal Hispanic adolescent girls with Normal BMI (N-BMI: BMI < 85th percentile) and compared these indicators between girls with Normal BMI and High Fat content (N-BMI-HF: body fat ≥ 27%; n = 15) and Normal BMI and Normal Fat content (N-BMI-NF: body fat < 27%; n = 8). Plasma concentrations of glucose, insulin, adiponectin, leptin and Hs-CRP were determined. Insulin resistance was calculated using an oral glucose tolerance test. Body fat % was measured by DXA and subcutaneous, visceral and hepatic fat by MRI/MRS. The N-BMI-HF girls had increased abdominal and hepatic fat content and increased insulin resistance, plasma leptin and Hs-CRP concentrations (p < 0.05) as compared to their N-BMI-NF counterparts. In N-BMI girls, insulin resistance, plasma insulin and leptin correlated with BMI and body fat % (p < 0.05). This research confirms the necessity of the development of BMI and body fat % cut-off criteria per sex, age and racial/ethnic group based on metabolic risk factors to optimize the effectiveness of metabolic risk screening procedures.

  20. Precipitated Fluxes of Radiation Belt Electrons via Injection of Whistler-Mode Waves

    NASA Astrophysics Data System (ADS)

    Kulkarni, P.; Inan, U. S.; Bell, T. F.

    2005-12-01

    Inan et al. (U.S. Inan et al., Controlled precipitation of radiation belt electrons, Journal of Geophysical Research-Space Physics, 108 (A5), 1186, doi: 10.1029/2002JA009580, 2003.) suggested that the lifetime of energetic (a few MeV) electrons in the inner radiation belts may be moderated by in situ injection of whistler mode waves at frequencies of a few kHz. We use the Stanford 2D VLF raytracing program (along with an accurate estimation of the path-integrated Landau damping based on data from the HYDRA instrument on the POLAR spacecraft) to determine the distribution of wave energy throughout the inner radiation belts as a function of injection point, wave frequency and injection wave normal angle. To determine the total wave power injected and its initial distribution in k-space (i.e., wave-normal angle), we apply the formulation of Wang and Bell ( T.N.C. Wang and T.F. Bell, Radiation resistance of a short dipole immersed in a cold magnetoionic medium, Radio Science, 4 (2), 167-177, February 1969) for an electric dipole antenna placed at a variety of locations throughout the inner radiation belts. For many wave frequencies and wave normal angles the results establish that most of the radiated power is concentrated in waves whose wave normals are located near the resonance cone. The combined use of the radiation pattern and ray-tracing including Landau damping allows us to make quantitative estimates of the magnetospheric distribution of wave power density for different source injection points. We use these results to estimate the number of individual space-based transmitters needed to significantly impact the lifetimes of energetic electrons in the inner radiation belts. Using the wave power distribution, we finally determine the energetic electron pitch angle scattering and the precipitated flux signatures that would be detected.

  1. Analyzing signal attenuation in PFG anomalous diffusion via a non-Gaussian phase distribution approximation approach by fractional derivatives.

    PubMed

    Lin, Guoxing

    2016-11-21

    Anomalous diffusion exists widely in polymer and biological systems. Pulsed-field gradient (PFG) techniques have been increasingly used to study anomalous diffusion in nuclear magnetic resonance and magnetic resonance imaging. However, the interpretation of PFG anomalous diffusion is complicated. Moreover, the exact signal attenuation expression including the finite gradient pulse width effect has not been obtained based on fractional derivatives for PFG anomalous diffusion. In this paper, a new method, a Mainardi-Luchko-Pagnini (MLP) phase distribution approximation, is proposed to describe PFG fractional diffusion. The MLP phase distribution is a non-Gaussian phase distribution. From the fractional derivative model, both the probability density function (PDF) of a spin in real space and the PDF of the spin's accumulating phase shift in virtual phase space are MLP distributions. The MLP phase distribution leads to a Mittag-Leffler function based PFG signal attenuation, which differs significantly from the exponential attenuation for normal diffusion and from the stretched exponential attenuation for fractional diffusion based on the fractal derivative model. A complete signal attenuation expression E_α(-D_f b*_{α,β}), including the finite gradient pulse width effect, was obtained and can handle all three types of PFG fractional diffusion. The result was also extended in a straightforward way to give a signal attenuation expression for fractional diffusion in PFG intramolecular multiple quantum coherence experiments, which has an n^β dependence upon the order of coherence, different from the familiar n^2 dependence in normal diffusion. The results obtained in this study are in agreement with the results from the literature and provide a set of new, convenient approximation formalisms for interpreting complex PFG fractional diffusion experiments.

  2. Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.

    PubMed

    Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza

    2017-01-01

    To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years, P < 0.001), systolic (126.37 ± 20.25 vs. 119.21 ± 15.81 mmHg, P < 0.001) and diastolic (78.14 ± 14.21 vs. 67.54 ± 11.46 mmHg, P < 0.001) blood pressures. The distribution curves of mean AOV differed between patients and controls (smaller AOV for larger vessels in patients; P < 0.001) as well as between patients without retinopathy and those with non-proliferative diabetic retinopathy (NPDR), with larger AOV for smaller vessels in NPDR (P < 0.001). Controlling for the effect of confounders, patients had a smaller total AOV, larger total SD of AOV, and a more skewed distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.

  3. Distributed gas sensing with optical fibre photothermal interferometry.

    PubMed

    Lin, Yuechuan; Liu, Fei; He, Xiangge; Jin, Wei; Zhang, Min; Yang, Fan; Ho, Hoi Lut; Tan, Yanzhen; Gu, Lijuan

    2017-12-11

    We report the first distributed optical fibre trace-gas detection system based on photothermal interferometry (PTI) in a hollow-core photonic bandgap fibre (HC-PBF). Absorption of a modulated pump propagating in the gas-filled HC-PBF generates distributed phase modulation along the fibre, which is detected by a dual-pulse heterodyne phase-sensitive optical time-domain reflectometry (OTDR) system. A quasi-distributed sensing experiment with two 28-meter-long HC-PBF sensing sections connected by single-mode transmission fibres demonstrated a limit of detection (LOD) of ∼10 ppb acetylene with a pump power level of 55 mW and an effective noise bandwidth (ENBW) of 0.01 Hz, corresponding to a normalized detection limit of 5.5 ppb⋅W/Hz. A distributed sensing experiment over a 200-meter-long sensing cable made of serially connected HC-PBFs demonstrated a LOD of ∼5 ppm with 62.5 mW peak pump power and 11.8 Hz ENBW, or a normalized detection limit of 312 ppb⋅W/Hz. The spatial resolution of the current distributed detection system is limited to ∼30 m, but it could be reduced to 1 m or less by optimizing the phase detection system.

  4. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    PubMed

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume a specific distribution for the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.

  5. Is Coefficient Alpha Robust to Non-Normal Data?

    PubMed Central

    Sheng, Yanyan; Sheng, Zhaohui

    2011-01-01

    Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
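
    For reference, coefficient alpha itself is a simple function of item and total-score variances. The sketch below computes it for simulated item scores with heavy-tailed (non-normal) errors; the one-factor item model and parameter values are illustrative assumptions, not the simulation design of the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_persons, n_items) score matrix."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(5)
n, k = 500, 8
true_score = rng.normal(size=(n, 1))                 # common factor
errors = rng.standard_t(df=3, size=(n, k))           # heavy-tailed (non-normal) errors
items = 0.7 * true_score + errors

print(f"sample coefficient alpha: {cronbach_alpha(items):.3f}")
```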

  6. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
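
    A bare-bones rejection-ABC sketch in the spirit of the approach described above: propose (mean, SD), simulate a sample of the reported size under an assumed normal model, and keep proposals whose simulated summaries (minimum, quartiles, median, maximum) fall close to the reported ones. The reported summaries, priors, and tolerance are all illustrative assumptions, and the published method is more refined than this.

```python
import numpy as np

rng = np.random.default_rng(6)

# Summary statistics as they might be reported in a publication (illustrative values).
n = 80
reported = np.array([3.6, 8.2, 10.1, 12.0, 16.3])   # min, Q1, median, Q3, max

def summaries(x):
    return np.array([x.min(), *np.percentile(x, [25, 50, 75]), x.max()])

accepted = []
for _ in range(50_000):
    mu = rng.uniform(5.0, 15.0)            # flat priors over a plausible range
    sigma = rng.uniform(0.5, 8.0)
    sim = rng.normal(mu, sigma, n)         # assumed sampling model
    if np.max(np.abs(summaries(sim) - reported)) < 2.0:   # arbitrary tolerance
        accepted.append((mu, sigma))

accepted = np.array(accepted)
print(f"accepted draws: {len(accepted)}")
print(f"posterior mean of mu: {accepted[:, 0].mean():.2f}, "
      f"of sigma: {accepted[:, 1].mean():.2f}")
```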

  7. Topics in Statistical Calibration

    DTIC Science & Technology

    2014-03-27

    on a parametric bootstrap where, instead of sampling directly from the residuals, samples are drawn from a normal distribution. This procedure will... addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals, the bootstrap distribution of x̂0 can become skewed or... based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either

  8. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.

  9. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.

  10. [Raman, FTIR spectra and normal mode analysis of acetanilide].

    PubMed

    Liang, Hui-Qin; Tao, Ya-Ping; Han, Li-Gang; Han, Yun-Xia; Mo, Yu-Jun

    2012-10-01

    The Raman and FTIR spectra of acetanilide (ACN) were measured experimentally in the regions of 3500-50 and 3500-600 cm⁻¹, respectively. The equilibrium geometry and vibration frequencies of ACN were calculated using the density functional theory (DFT) method (B3LYP/6-311G(d,p)). The results showed that the calculated molecular structure parameters are in good agreement with the previous report and better than those calculated with 6-31G(d), and that the calculated frequencies agree well with the experimental ones. The potential energy distribution of each frequency was worked out by normal mode analysis, and based on this, a detailed and accurate vibration frequency assignment of ACN was obtained.

  11. Numerical modeling of nanodrug distribution in tumors with heterogeneous vasculature.

    PubMed

    Chou, Cheng-Ying; Chang, Wan-I; Horng, Tzyy-Leng; Lin, Win-Li

    2017-01-01

    The distribution and accumulation of nanoparticle dosage in a tumor are important in evaluating the effectiveness of cancer treatment. The cell survival rate can quantify the therapeutic effect, and the survival rates after multiple treatments are helpful for evaluating the efficacy of a chemotherapy plan. We developed a mathematical tumor model based on the governing equations describing fluid flow and particle transport to investigate drug transport in a tumor and computed the resulting cumulative concentrations. The cell survival rate was calculated based on the cumulative concentration. The model was applied to a subcutaneous tumor with heterogeneous vascular distributions. Various sized dextrans and doxorubicin were chosen, respectively, as the nanodrug carrier and the traditional chemotherapeutic agent for comparison. The results showed that: 1) the largest nanoparticle drug in the current simulations yielded the highest cumulative concentration in the well-vascularized tumor region but the second lowest in the surrounding normal tissues, which implies that it has the best therapeutic effect on the tumor while causing little harm to normal tissue; 2) on the contrary, the molecular chemotherapeutic agent produced the second lowest cumulative concentration in the well-vascularized tumor region but the highest in the surrounding normal tissue; 3) all drugs had very small cumulative concentrations in the necrotic tumor region, where drug transport occurs solely through diffusion, which might mean that it is hard to kill tumor stem cells hiding there. The current model indicated that the effectiveness of anti-tumor drug delivery is determined by the interplay of vascular density and nanoparticle size, which governs the drug transport properties. The use of nanoparticles as anti-tumor drug carriers is generally a better choice than a molecular chemotherapeutic agent because of its high treatment efficiency on tumor cells and lower damage to normal tissues.

  12. Determining the best population-level alcohol consumption model and its impact on estimates of alcohol-attributable harms

    PubMed Central

    2012-01-01

    Background The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of the alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring whether there is a global relationship within the distribution. Methods To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation of the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance from the World Health Organization. Results The Log-Normal distribution provided a poor fit for the survey data, with the Gamma and Weibull distributions providing better fits. Additionally, our analyses showed that there were no marked differences in the alcohol PAF estimates based on the Gamma or Weibull distributions compared to PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a one-unit increase in mean consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293; R² = 0.9207) for women and 1.171 (95% CI: 1.144 to 1.197; R² = 0.9474) for men. Conclusions Although the Gamma distribution and the Weibull distribution provided similar results, the Gamma distribution is recommended to model alcohol consumption from population surveys due to its fit, flexibility, and the ease with which it can be modified. The results showed that a large degree of the variance of the standard deviation of the alcohol consumption Gamma distribution was explained by the mean alcohol consumption, allowing alcohol consumption to be modeled through a Gamma distribution using only average consumption. PMID:22490226
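
    Under the relationship reported above, the whole consumption distribution can be reconstructed from average consumption alone. The sketch below builds a Gamma distribution for men from a hypothetical mean intake using the 1.171 coefficient quoted in the abstract; the 20 g/day mean and the 60 g/day threshold are illustrative assumptions.

```python
from scipy import stats

def gamma_from_mean(mean_consumption, sd_ratio):
    """Gamma distribution parametrised from the mean, with SD = sd_ratio * mean."""
    sd = sd_ratio * mean_consumption
    shape = (mean_consumption / sd) ** 2        # k = (mean/sd)^2
    scale = sd ** 2 / mean_consumption          # theta = sd^2 / mean
    return stats.gamma(a=shape, scale=scale)

# Hypothetical average consumption among male drinkers (grams of pure alcohol per day),
# using the men's coefficient of 1.171 reported in the abstract.
dist = gamma_from_mean(mean_consumption=20.0, sd_ratio=1.171)

# Example use: share of drinkers above 60 g/day, a common heavy-drinking threshold.
print(f"mean = {dist.mean():.1f} g/day, sd = {dist.std():.1f} g/day")
print(f"P(consumption > 60 g/day) = {dist.sf(60.0):.3f}")
```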

  13. Robustness of location estimators under t-distributions: a literature review

    NASA Astrophysics Data System (ADS)

    Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.

    2017-03-01

    The assumption of normality is commonly used in the estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since it has heavier tails. Robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration we use onion yield data, which include outliers, as a case study and show that the t model produces a better fit than the normal model.
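
    A small sketch of the point the review makes: estimating a location parameter under a normal model versus a t model for data containing outliers. The data are simulated here (not the onion yield data), and fixing the degrees of freedom at 3 is an arbitrary choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Clean data around 50 with a handful of gross outliers.
clean = rng.normal(50.0, 5.0, 95)
data = np.concatenate([clean, [120.0, 130.0, 140.0, 150.0, 160.0]])

# Normal model: the MLE of location is the sample mean, pulled by the outliers.
normal_loc = data.mean()

# t model: maximum likelihood fit with low degrees of freedom down-weights outliers.
df, t_loc, t_scale = stats.t.fit(data, fdf=3)   # fix df = 3 for a heavy-tailed model

print(f"sample mean (normal model): {normal_loc:.2f}")
print(f"location under t model (df=3): {t_loc:.2f}")
print(f"median for reference: {np.median(data):.2f}")
```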

  14. A simulation study of nonparametric total deviation index as a measure of agreement based on quantile regression.

    PubMed

    Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael

    2016-01-01

    Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI estimated using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions (normal, Poisson, and uniform) at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of the nonparametric and parametric TDI methods are also compared on a real data example. Nonparametric TDI can be very useful when the underlying distribution of the differences is not normal, especially when it has a heavy tail.
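
    The two estimators being compared can be sketched directly for the simplest case without covariates: the nonparametric TDI is essentially an empirical quantile of the absolute paired differences, while the normal-theory TDI solves P(|D| <= t) = p for a normal D fitted to the differences. The simulated differences below are illustrative, and quantile regression itself is not shown.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(8)

# Paired differences between two assays (simulated, mildly contaminated).
d = np.concatenate([rng.normal(0.2, 1.0, 190), rng.normal(0.2, 5.0, 10)])
p = 0.9

# Nonparametric TDI: the p-th empirical quantile of |difference|.
tdi_np = np.quantile(np.abs(d), p)

# Normal-theory TDI: solve P(|D| <= t) = p with D ~ N(mean, sd).
mu, sd = d.mean(), d.std(ddof=1)
coverage = lambda t: stats.norm.cdf(t, mu, sd) - stats.norm.cdf(-t, mu, sd) - p
tdi_norm = optimize.brentq(coverage, 0.0, np.abs(mu) + 10.0 * sd)

print(f"TDI_0.9 nonparametric: {tdi_np:.2f}")
print(f"TDI_0.9 normal-theory: {tdi_norm:.2f}")
```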

  15. LIMEPY: Lowered Isothermal Model Explorer in PYthon

    NASA Astrophysics Data System (ADS)

    Gieles, Mark; Zocchi, Alice

    2017-10-01

    LIMEPY solves distribution function (DF) based lowered isothermal models. It solves Poisson's equation for the given input parameters and offers fast solutions for isotropic/anisotropic and single/multi-mass models, normalized DF values, density and velocity moments, projected properties, and discrete samples.

  16. Empirical analysis on the runners' velocity distribution in city marathons

    NASA Astrophysics Data System (ADS)

    Lin, Zhenquan; Meng, Fan

    2018-01-01

    In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility. In this paper, we empirically investigated the velocity distributions of finishers in the New York City, Chicago, Berlin and London marathons. By statistical analyses of the finish time records, we captured some statistical features of human behaviors in marathons: (1) the velocity distributions of all finishers, and of the finishers in the fastest age group, both follow a log-normal distribution; (2) in the New York City marathon, the velocity distribution of all male runners in the eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (the first several courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (the last several courses); (3) the intensity of the competition, described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition becomes stronger in the last course of the middle stage, a transition from the Gaussian distribution back to the log-normal one occurs at the last stage. This study may enrich research on human mobility patterns and draw attention to the velocity features of human mobility.
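
    A minimal sketch of the distributional comparison described above, applied to simulated finishing velocities rather than the actual race records: fit log-normal and normal models by maximum likelihood and compare their log-likelihoods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Simulated finishing velocities (km/h) with a log-normal shape.
velocity = rng.lognormal(mean=np.log(10.5), sigma=0.18, size=5000)

# Fit both candidate models by maximum likelihood.
shape, loc, scale = stats.lognorm.fit(velocity, floc=0)
mu, sd = stats.norm.fit(velocity)

ll_lognorm = stats.lognorm.logpdf(velocity, shape, loc, scale).sum()
ll_norm = stats.norm.logpdf(velocity, mu, sd).sum()

print(f"log-likelihood, log-normal: {ll_lognorm:.1f}")
print(f"log-likelihood, normal:     {ll_norm:.1f}")
print("preferred model:", "log-normal" if ll_lognorm > ll_norm else "normal")
```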

  17. A Graphic Anthropometric Aid for Seating and Workplace Design.

    DTIC Science & Technology

    1984-04-01

    required proportion of the pdf. Suppose that some attribute is distributed according to a bivariate Normal pdf of zero mean value and equal variances ... Note that circular contours, drawn at the normalized radii presented above, will enclose the respective proportions of the bivariate Normal pdf ... INTRODUCTION ... A TWO-DIMENSIONAL MODEL BASE ... CONCEPT OF USE ... VALIDATION OF THE TWO-DIMENSIONAL MODEL ... Conventional Anthropometry ...

  18. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398

  19. Crack problem in superconducting cylinder with exponential distribution of critical-current density

    NASA Astrophysics Data System (ADS)

    Zhao, Yufeng; Xu, Chi; Shi, Liang

    2018-04-01

    The general problem of a center crack in a long cylindrical superconductor with inhomogeneous critical-current distribution is studied based on the extended Bean model for zero-field cooling (ZFC) and field cooling (FC) magnetization processes, in which the inhomogeneous parameter η is introduced for characterizing the critical-current density distribution in the inhomogeneous superconductor. The effect of the inhomogeneous parameter η on both the magnetic field distribution and the variations of the normalized stress intensity factors is also obtained based on the plane strain approach and J-integral theory. The numerical results indicate that the exponential distribution of critical-current density will lead to a larger trapped field inside the inhomogeneous superconductor and cause the center of the cylinder to fracture more easily. In addition, it is worth pointing out that the nonlinear field distribution is unique to the Bean model by comparing the curve shapes of the magnetization loop with homogeneous and inhomogeneous critical-current distribution.

  20. Distributed hierarchical control architecture for integrating smart grid assets during normal and disrupted operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karan; Fuller, Jason C.; Somani, Abhishek

    Disclosed herein are representative embodiments of methods, apparatus, and systems for facilitating operation and control of a resource distribution system (such as a power grid). Among the disclosed embodiments is a distributed hierarchical control architecture (DHCA) that enables smart grid assets to effectively contribute to grid operations in a controllable manner, while helping to ensure system stability and equitably rewarding their contribution. Embodiments of the disclosed architecture can help unify the dispatch of these resources to provide both market-based and balancing services.

  1. Mean estimation in highly skewed samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, S P

    The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
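
    To illustrate the issue with a log-normal sample of my own choosing (rather than the absolute-value Cauchy data used in the report), the sketch below compares the usual normal-theory interval for the mean with a simple bootstrap percentile interval.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Heavily right-skewed sample; the true mean is exp(0.5) ~ 1.649.
x = rng.lognormal(mean=0.0, sigma=1.0, size=60)
n, xbar, s = x.size, x.mean(), x.std(ddof=1)

# Usual normal-theory (t) interval.
half = stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)
print(f"t interval:         ({xbar - half:.3f}, {xbar + half:.3f})")

# Bootstrap percentile interval for the mean.
boot_means = np.array([rng.choice(x, size=n, replace=True).mean()
                       for _ in range(5000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap interval: ({lo:.3f}, {hi:.3f})")
print(f"true mean: {np.exp(0.5):.3f}")
```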

  2. A New Bond Albedo for Performing Orbital Debris Brightness to Size Transformations

    NASA Technical Reports Server (NTRS)

    Mulrooney, Mark K.; Matney, Mark J.

    2008-01-01

    We have developed a technique for estimating the intrinsic size distribution of orbital debris objects via optical measurements alone. The process is predicated on the empirically observed power-law size distribution of debris (as indicated by radar RCS measurements) and the log-normal probability distribution of optical albedos as ascertained from phase (Lambertian) and range-corrected telescopic brightness measurements. Since the observed distribution of optical brightness is the product integral of the size distribution of the parent [debris] population with the albedo probability distribution, it is a straightforward matter to transform a given distribution of optical brightness back to a size distribution by the appropriate choice of a single albedo value. This is true because the integration of a powerlaw with a log-normal distribution (Fredholm Integral of the First Kind) yields a Gaussian-blurred power-law distribution with identical power-law exponent. Application of a single albedo to this distribution recovers a simple power-law [in size] which is linearly offset from the original distribution by a constant whose value depends on the choice of the albedo. Significantly, there exists a unique Bond albedo which, when applied to an observed brightness distribution, yields zero offset and therefore recovers the original size distribution. For physically realistic powerlaws of negative slope, the proper choice of albedo recovers the parent size distribution by compensating for the observational bias caused by the large number of small objects that appear anomalously large (bright) - and thereby skew the small population upward by rising above the detection threshold - and the lower number of large objects that appear anomalously small (dim). Based on this comprehensive analysis, a value of 0.13 should be applied to all orbital debris albedo-based brightness-to-size transformations regardless of data source. Its prima facie genesis, derived and constructed from the current RCS to size conversion methodology (SiBAM Size-Based Estimation Model) and optical data reduction standards, assures consistency in application with the prior canonical value of 0.1. Herein we present the empirical and mathematical arguments for this approach and by example apply it to a comprehensive set of photometric data acquired via NASA's Liquid Mirror Telescopes during the 2000-2001 observing season.

  3. Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procacci, Piero

    2015-04-21

    In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems.
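
    For the special case of Gaussian work distributions, the Crooks relation fixes the reverse distribution and places the free-energy difference at the crossing of the forward density P_F(W) and the reverse density P_R(-W). The sketch below checks this numerically with beta = 1 and an illustrative mean and width; it is a hedged illustration of the Gaussian-component behaviour, not the paper's mixture estimator.

      import numpy as np
      from scipy import stats

      beta = 1.0                               # inverse temperature (units chosen so beta = 1)
      mu_F, sigma = 5.0, 2.0                   # assumed forward work mean and width
      dF = mu_F - 0.5 * beta * sigma ** 2      # Gaussian case: dF = <W>_F - beta*sigma^2/2
      mu_R = -(mu_F - beta * sigma ** 2)       # reverse work mean implied by Crooks

      w = np.linspace(-5.0, 15.0, 2001)
      log_ratio = stats.norm.logpdf(w, mu_F, sigma) - stats.norm.logpdf(-w, mu_R, sigma)
      crossing = w[np.argmin(np.abs(log_ratio))]   # where P_F(W) = P_R(-W)
      print(f"analytic dF = {dF:.3f}, crossing of P_F(W) and P_R(-W) at W = {crossing:.3f}")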

  4. Modeling Error Distributions of Growth Curve Models through Bayesian Methods

    ERIC Educational Resources Information Center

    Zhang, Zhiyong

    2016-01-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…

  5. Distribution and function of the peptide transporter PEPT2 in normal and cystic fibrosis human lung.

    PubMed

    Groneberg, D A; Eynott, P R; Döring, F; Dinh, Q Thai; Oates, T; Barnes, P J; Chung, K F; Daniel, H; Fischer, A

    2002-01-01

    Aerosol administration of peptide based drugs has an important role in the treatment of various pulmonary and systemic diseases. The characterisation of pulmonary peptide transport pathways can lead to new strategies in aerosol drug treatment. Immunohistochemistry and ex vivo uptake studies were established to assess the distribution and activity of the beta-lactam transporting high affinity proton coupled peptide transporter PEPT2 in normal and cystic fibrosis human airway tissue. PEPT2 immunoreactivity in normal human airways was localised to cells of the tracheal and bronchial epithelium and the endothelium of small vessels. In peripheral lung immunoreactivity was restricted to type II pneumocytes. In sections of cystic fibrosis lung a similar pattern of distribution was obtained with signals localised to endothelial cells, airway epithelium, and type II pneumocytes. Functional ex vivo uptake studies with fresh lung specimens led to an uptake of the fluorophore conjugated dipeptide derivative D-Ala-L-Lys-AMCA into bronchial epithelial cells and type II pneumocytes. This uptake was competitively inhibited by dipeptides and cephalosporins but not ACE inhibitors, indicating a substrate specificity as described for PEPT2. These findings provide evidence for the expression and function of the peptide transporter PEPT2 in the normal and cystic fibrosis human respiratory tract and suggest that PEPT2 is likely to play a role in the transport of pulmonary peptides and peptidomimetics.

  6. Distribution and function of the peptide transporter PEPT2 in normal and cystic fibrosis human lung

    PubMed Central

    Groneberg, D; Eynott, P; Doring, F; Thai, D; Oates, T; Barnes, P; Chung, K; Daniel, H; Fischer, A

    2002-01-01

    Background: Aerosol administration of peptide based drugs has an important role in the treatment of various pulmonary and systemic diseases. The characterisation of pulmonary peptide transport pathways can lead to new strategies in aerosol drug treatment. Methods: Immunohistochemistry and ex vivo uptake studies were established to assess the distribution and activity of the β-lactam transporting high affinity proton coupled peptide transporter PEPT2 in normal and cystic fibrosis human airway tissue. Results: PEPT2 immunoreactivity in normal human airways was localised to cells of the tracheal and bronchial epithelium and the endothelium of small vessels. In peripheral lung immunoreactivity was restricted to type II pneumocytes. In sections of cystic fibrosis lung a similar pattern of distribution was obtained with signals localised to endothelial cells, airway epithelium, and type II pneumocytes. Functional ex vivo uptake studies with fresh lung specimens led to an uptake of the fluorophore conjugated dipeptide derivative D-Ala-L-Lys-AMCA into bronchial epithelial cells and type II pneumocytes. This uptake was competitively inhibited by dipeptides and cephalosporins but not ACE inhibitors, indicating a substrate specificity as described for PEPT2. Conclusions: These findings provide evidence for the expression and function of the peptide transporter PEPT2 in the normal and cystic fibrosis human respiratory tract and suggest that PEPT2 is likely to play a role in the transport of pulmonary peptides and peptidomimetics. PMID:11809991

  7. Measuring grain boundary character distributions in Ni-base alloy 725 using high-energy diffraction microscopy

    DOE PAGES

    Bagri, Akbar; Hanson, John P.; Lind, J. P.; ...

    2016-10-25

    We use high-energy X-ray diffraction microscopy (HEDM) to characterize the microstructure of Ni-base alloy 725. HEDM is a non-destructive technique capable of providing three-dimensional reconstructions of grain shapes and orientations in polycrystals. The present analysis yields the grain size distribution in alloy 725 as well as the grain boundary character distribution (GBCD) as a function of lattice misorientation and boundary plane normal orientation. We find that the GBCD of Ni-base alloy 725 is similar to that previously determined in pure Ni and other fcc-base metals. We find an elevated density of Σ9 and Σ3 grain boundaries. We also observe a preponderance of grain boundaries along low-index planes, with those along (1 1 1) planes being the most common, even after Σ3 twins have been excluded from the analysis.

  8. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    PubMed

    Wu, Hao

    2018-05-01

    In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.

  9. Semi-nonparametric VaR forecasts for hedge funds during the recent crisis

    NASA Astrophysics Data System (ADS)

    Del Brio, Esther B.; Mora-Valencia, Andrés; Perote, Javier

    2014-05-01

    The need to provide accurate value-at-risk (VaR) forecasting measures has triggered an important literature in econophysics. Although these accurate VaR models and methodologies are particularly demanded by hedge fund managers, few articles have been specifically devoted to implementing new techniques in hedge fund return VaR forecasting. This article addresses these issues by comparing the performance of risk measures based on parametric distributions (the normal, Student’s t and skewed t), semi-nonparametric (SNP) methodologies based on Gram-Charlier (GC) series, and the extreme value theory (EVT) approach. Our results show that the normal-, Student’s t- and skewed t-based methodologies fail to forecast hedge fund VaR, whilst the SNP and EVT approaches do so accurately. We extend these results to the multivariate framework by providing an explicit formula for the GC copula and its density that encompasses the Gaussian copula and accounts for non-linear dependences. We show that the VaR obtained by the meta GC accurately captures portfolio risk and outperforms regulatory VaR estimates obtained through the meta Gaussian and Student’s t distributions.
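
    As a generic point of comparison (not the SNP/Gram-Charlier machinery of the article), the sketch below contrasts the one-day 99% VaR implied by a normal fit and by a Student's t fit on the same simulated heavy-tailed return series; all numbers are illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      returns = stats.t.rvs(df=4, scale=0.01, size=2500, random_state=rng)   # fake daily returns

      alpha = 0.01                                      # 99% confidence level
      mu, sd = returns.mean(), returns.std(ddof=1)
      var_normal = -(mu + sd * stats.norm.ppf(alpha))   # normal-based VaR

      df_t, loc_t, scale_t = stats.t.fit(returns)
      var_t = -(loc_t + scale_t * stats.t.ppf(alpha, df_t))   # Student's t-based VaR

      var_empirical = -np.quantile(returns, alpha)
      print(f"99% VaR  normal: {var_normal:.4f}  t: {var_t:.4f}  empirical: {var_empirical:.4f}")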

  10. Modeling Electronic Skin Response to Normal Distributed Force

    PubMed Central

    Seminara, Lucia

    2018-01-01

    The reference electronic skin is a sensor array based on PVDF (Polyvinylidene fluoride) piezoelectric polymers, coupled to a rigid substrate and covered by an elastomer layer. It is first evaluated how a distributed normal force (Hertzian distribution) is transmitted to an extended PVDF sensor through the elastomer layer. A simplified approach based on Boussinesq’s half-space assumption is used to get a qualitative picture and extensive FEM simulations allow determination of the quantitative response for the actual finite elastomer layer. The ultimate use of the present model is to estimate the electrical sensor output from a measure of a basic mechanical action at the skin surface. However this requires that the PVDF piezoelectric coefficient be known a-priori. This was not the case in the present investigation. However, the numerical model has been used to fit experimental data from a real skin prototype and to estimate the sensor piezoelectric coefficient. It turned out that this value depends on the preload and decreases as a result of PVDF aging and fatigue. This framework contains all the fundamental ingredients of a fully predictive model, suggesting a number of future developments potentially useful for skin design and validation of the fabrication technology. PMID:29401692

  11. Modeling Electronic Skin Response to Normal Distributed Force.

    PubMed

    Seminara, Lucia

    2018-02-03

    The reference electronic skin is a sensor array based on PVDF (Polyvinylidene fluoride) piezoelectric polymers, coupled to a rigid substrate and covered by an elastomer layer. It is first evaluated how a distributed normal force (Hertzian distribution) is transmitted to an extended PVDF sensor through the elastomer layer. A simplified approach based on Boussinesq's half-space assumption is used to get a qualitative picture and extensive FEM simulations allow determination of the quantitative response for the actual finite elastomer layer. The ultimate use of the present model is to estimate the electrical sensor output from a measure of a basic mechanical action at the skin surface. However this requires that the PVDF piezoelectric coefficient be known a-priori. This was not the case in the present investigation. However, the numerical model has been used to fit experimental data from a real skin prototype and to estimate the sensor piezoelectric coefficient. It turned out that this value depends on the preload and decreases as a result of PVDF aging and fatigue. This framework contains all the fundamental ingredients of a fully predictive model, suggesting a number of future developments potentially useful for skin design and validation of the fabrication technology.

  12. Non-normal Distributions Commonly Used in Health, Education, and Social Sciences: A Systematic Review

    PubMed Central

    Bono, Roser; Blanca, María J.; Arnau, Jaume; Gómez-Benito, Juana

    2017-01-01

    Statistical analysis is crucial for research and the choice of analytical technique should take into account the specific distribution of data. Although the data obtained from health, educational, and social sciences research are often not normally distributed, there are very few studies detailing which distributions are most likely to represent data in these disciplines. The aim of this systematic review was to determine the frequency of appearance of the most common non-normal distributions in the health, educational, and social sciences. The search was carried out in the Web of Science database, from which we retrieved the abstracts of papers published between 2010 and 2015. The selection was made on the basis of the title and the abstract, and was performed independently by two reviewers. The inter-rater reliability for article selection was high (Cohen’s kappa = 0.84), and agreement regarding the type of distribution reached 96.5%. A total of 262 abstracts were included in the final review. The distribution of the response variable was reported in 231 of these abstracts, while in the remaining 31 it was merely stated that the distribution was non-normal. In terms of their frequency of appearance, the most-common non-normal distributions can be ranked in descending order as follows: gamma, negative binomial, multinomial, binomial, lognormal, and exponential. In addition to identifying the distributions most commonly used in empirical studies these results will help researchers to decide which distributions should be included in simulation studies examining statistical procedures. PMID:28959227

  13. SENSITIVITY OF NORMAL THEORY METHODS TO MODEL MISSPECIFICATION IN THE CALCULATION OF UPPER CONFIDENCE LIMITS ON THE RISK FUNCTION FOR CONTINUOUS RESPONSES. (R825385)

    EPA Science Inventory

    Normal theory procedures for calculating upper confidence limits (UCL) on the risk function for continuous responses work well when the data come from a normal distribution. However, if the data come from an alternative distribution, the application of the normal theory procedure...

  14. Statistical studies of animal response data from USF toxicity screening test method

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Machado, A. M.

    1978-01-01

    Statistical examination of animal response data obtained using Procedure B of the USF toxicity screening test method indicates that the data deviate only slightly from a normal or Gaussian distribution. This slight departure from normality is not expected to invalidate conclusions based on theoretical statistics. Comparison of times to staggering, convulsions, collapse, and death as endpoints shows that time to death appears to be the most reliable endpoint because it offers the lowest probability of missed observations and premature judgements.

  15. Complex Network Theory Applied to the Growth of Kuala Lumpur's Public Urban Rail Transit Network.

    PubMed

    Ding, Rui; Ujang, Norsidah; Hamid, Hussain Bin; Wu, Jianjun

    2015-01-01

    Recently, the number of studies involving complex network applications in transportation has increased steadily as scholars from various fields analyze traffic networks. Nonetheless, research on rail network growth is relatively rare. This research examines the evolution of the Public Urban Rail Transit Networks of Kuala Lumpur (PURTNoKL) based on complex network theory and covers both the topological structure of the rail system and future trends in network growth. In addition, network performance when facing different attack strategies is also assessed. Three topological network characteristics are considered: connections, clustering and centrality. In PURTNoKL, we found that the total number of nodes and edges exhibit a linear relationship and that the average degree stays within the interval [2.0488, 2.6774] with heavy-tailed distributions. The evolutionary process shows that the cumulative probability distribution (CPD) of degree and the average shortest path length show good fit with exponential distribution and normal distribution, respectively. Moreover, PURTNoKL exhibits clear cluster characteristics; most of the nodes have a 2-core value, and the CPDs of the centrality's closeness and betweenness follow a normal distribution function and an exponential distribution, respectively. Finally, we discuss four different types of network growth styles and the line extension process, which reveal that the rail network's growth is likely based on the nodes with the biggest lengths of the shortest path and that network protection should emphasize those nodes with the largest degrees and the highest betweenness values. This research may enhance the networkability of the rail system and better shape the future growth of public rail networks.

  16. Log-normal distribution of the trace element data results from a mixture of stochastic input and deterministic internal dynamics.

    PubMed

    Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya

    2002-04-01

    In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount present of any given element; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase when the sample was obtained is expected to show a normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
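
    The argument can be reproduced in a few lines: if sampling times in the elimination phase are normally distributed and the concentration follows first-order kinetics C(t) = C0·exp(-k·t), then log-concentration is a linear function of time and the concentrations are log-normal. The kinetic constants, time distribution and sample size below are illustrative assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      t = rng.normal(loc=12.0, scale=3.0, size=500)    # assumed sampling times (hours)
      C0, k = 100.0, 0.1                               # illustrative kinetic constants
      conc = C0 * np.exp(-k * t)                       # first-order elimination

      # log(conc) = log(C0) - k*t is linear in t, hence normal if t is normal
      stat, p = stats.shapiro(np.log(conc))
      print(f"Shapiro-Wilk on log(concentration): W = {stat:.3f}, p = {p:.3f}")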

  17. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
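
    A quick numerical check of the probit-link formula quoted above, for a hypothetical group effect β: the closed form Φ(β/√2) is compared against a direct simulation of two independent latent normal groups. The effect size and simulation size are arbitrary choices.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(11)
      beta = 0.8                                       # hypothetical group effect
      closed_form = norm.cdf(beta / np.sqrt(2))        # Phi(beta / sqrt(2)) for the probit link

      y1 = rng.normal(loc=beta, size=200_000)          # latent responses, group 1
      y2 = rng.normal(loc=0.0, size=200_000)           # latent responses, group 2
      print(f"closed form = {closed_form:.4f}, simulated P(Y1 > Y2) = {np.mean(y1 > y2):.4f}")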

  18. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.

  19. Interpreting the Coulomb-field approximation for generalized-Born electrostatics using boundary-integral equation theory.

    PubMed

    Bardhan, Jaydeep P

    2008-10-14

    The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.

  20. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form which enhances the capability of uncertainty analysis; in consequence, scientists who are concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  1. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is of importance to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in a probability form which enhances the capability of uncertainty analysis; in consequence, scientists who are concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreements with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  2. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
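
    A compact sketch of the successive-approximations update for a two-component univariate normal mixture, with an explicit step size ω as described above (ω = 1 recovers the standard update). The data, starting values and the way the step size is applied to the variance update are illustrative simplifications, not the paper's exact formulation.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

      w = np.array([0.5, 0.5])                 # starting mixing weights
      mu = np.array([-1.0, 1.0])               # starting means
      sd = np.array([1.0, 1.0])                # starting standard deviations
      omega = 1.0                              # step size; must lie in (0, 2)

      for _ in range(200):
          resp = w * norm.pdf(x[:, None], mu, sd)          # E-step: responsibilities
          resp /= resp.sum(axis=1, keepdims=True)
          n_k = resp.sum(axis=0)
          w_new = n_k / len(x)                             # M-step (standard updates)
          mu_new = (resp * x[:, None]).sum(axis=0) / n_k
          sd_new = np.sqrt((resp * (x[:, None] - mu_new) ** 2).sum(axis=0) / n_k)
          # damped/over-relaxed step: new = old + omega * (update - old); omega = 1 is plain EM
          w, mu, sd = w + omega * (w_new - w), mu + omega * (mu_new - mu), sd + omega * (sd_new - sd)

      print("weights:", w.round(3), "means:", mu.round(3), "sds:", sd.round(3))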

  3. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  4. The Measurement of Visuo-Spatial and Verbal-Numerical Working Memory: Development of IRT-Based Scales

    ERIC Educational Resources Information Center

    Vock, Miriam; Holling, Heinz

    2008-01-01

    The objective of this study is to explore the potential for developing IRT-based working memory scales for assessing specific working memory components in children (8-13 years). These working memory scales should measure cognitive abilities reliably in the upper range of ability distribution as well as in the normal range, and provide a…

  5. Diameter distribution in a Brazilian tropical dry forest domain: predictions for the stand and species.

    PubMed

    Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C

    2017-01-01

    Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal; gamma; Weibull 2P and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodruon urundeuva better fitting was obtained with the log-normal function.
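
    The model-selection step can be sketched generically as follows: fit candidate distributions to a set of diameters and rank them by AIC. The diameters below are simulated stand-ins, the Burr fit is omitted for brevity, and fixing the location parameter at zero is an assumption of this sketch.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      dbh = rng.gamma(shape=1.2, scale=6.0, size=400) + 3.0    # simulated diameters (cm)

      candidates = {"log-normal": stats.lognorm, "gamma": stats.gamma, "Weibull": stats.weibull_min}
      for name, dist in candidates.items():
          params = dist.fit(dbh, floc=0)                       # location fixed at zero (assumption)
          loglik = np.sum(dist.logpdf(dbh, *params))
          k = len(params) - 1                                  # free parameters (loc is fixed)
          print(f"{name:10s} AIC = {2 * k - 2 * loglik:.1f}")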

  6. Prediction of Malaysian monthly GDP

    NASA Astrophysics Data System (ADS)

    Hin, Pooi Ah; Ching, Soo Huei; Yeing, Pan Wei

    2015-12-01

    The paper attempts to use a method based on the multivariate power-normal distribution to predict the Malaysian Gross Domestic Product (GDP) for the next month. Letting r(t) be the vector consisting of the month-t values of m selected macroeconomic variables and GDP, we model the month-(t+1) GDP to be dependent on the present and l-1 past values r(t), r(t-1),…,r(t-l+1) via a conditional distribution which is derived from a [(m+1)l+1]-dimensional power-normal distribution. The 100(α/2)% and 100(1-α/2)% points of the conditional distribution may be used to form an out-of-sample prediction interval. This interval, together with the mean of the conditional distribution, may be used to predict the month-(t+1) GDP. The mean absolute percentage error (MAPE), estimated coverage probability and average length of the prediction interval are used as the criteria for selecting the suitable lag value l-1 and the subset from a pool of 17 macroeconomic variables. It is found that the relatively better models would be those for which 2 ≤ l ≤ 3, involving one or two of the macroeconomic variables given by Market Indicative Yield, Oil Prices, Exchange Rate and Import Trade.

  7. WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarpelli, M; Eickhoff, J; Perlman, S

    Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
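
    The transformation-selection idea can be sketched as a grid search over one-parameter Box-Cox exponents, keeping the exponent that maximizes the Shapiro-Wilk statistic. The SUV-like data below are simulated (log-normal), so the selected exponent should land near zero, i.e. near a log transform; all settings are illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      suv = rng.lognormal(mean=1.0, sigma=0.5, size=120)       # stand-in for gSUVmean values

      def boxcox(x, lam):
          return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

      grid = np.linspace(-1.0, 2.0, 61)
      w_stats = [stats.shapiro(boxcox(suv, lam)).statistic for lam in grid]
      best = grid[int(np.argmax(w_stats))]
      print(f"Box-Cox exponent maximizing the Shapiro-Wilk W statistic: {best:.2f}")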

  8. Non-Gaussian Distributions Affect Identification of Expression Patterns, Functional Annotation, and Prospective Classification in Human Cancer Genomes

    PubMed Central

    Marko, Nicholas F.; Weil, Robert J.

    2012-01-01

    Introduction: Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods: We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results: Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions: Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
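
    A minimal sketch of a central-moments normality check in this spirit: compute skewness and excess kurtosis for each (synthetic) gene and count how many genes are flagged as non-normal by the D'Agostino-Pearson omnibus test. The heavy-tailed simulated matrix stands in for real expression data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      expr = stats.t.rvs(df=5, size=(1000, 40), random_state=rng)   # 1000 genes x 40 samples

      skew = stats.skew(expr, axis=1)
      kurt = stats.kurtosis(expr, axis=1)                           # excess kurtosis
      pvals = np.array([stats.normaltest(g).pvalue for g in expr])  # D'Agostino-Pearson test
      print(f"mean |skewness| = {np.abs(skew).mean():.2f}, mean excess kurtosis = {kurt.mean():.2f}, "
            f"fraction flagged non-normal = {np.mean(pvals < 0.05):.2f}")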

  9. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed Central

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-01-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544

  10. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    PubMed

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  11. Estimating sales and sales market share from sales rank data for consumer appliances

    NASA Astrophysics Data System (ADS)

    Touzani, Samir; Van Buskirk, Robert

    2016-06-01

    Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution and specifically the truncated version are well suited for this purpose. We demonstrate that using sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices, and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
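
    A hedged sketch of the rank-to-volume idea: if sales volumes across products are assumed log-normal, the sales of the product holding a given rank can be approximated by the corresponding upper quantile of that distribution. The market size and log-normal parameters below are illustrative, not the calibrated truncated model of the paper.

      import numpy as np
      from scipy import stats

      n_products = 500                       # assumed number of ranked products
      mu, sigma = np.log(1000.0), 1.2        # assumed log-normal parameters of sales volume

      ranks = np.array([1, 10, 50, 200])
      quantiles = 1.0 - ranks / (n_products + 1.0)         # rank 1 = best seller
      sales_proxy = stats.lognorm.ppf(quantiles, s=sigma, scale=np.exp(mu))
      for r, s in zip(ranks, sales_proxy):
          print(f"rank {r:3d}: sales proxy of about {s:,.0f} units")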

  12. Analysis of vector wind change with respect to time for Cape Kennedy, Florida: Wind aloft profile change vs. time, phase 1

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1977-01-01

    Wind vector change with respect to time at Cape Kennedy, Florida, is examined according to the theory of multivariate normality. The joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time is hypothesized to be quadravariate normal; the fourteen statistics of this distribution, calculated from fifteen years of twice-daily Rawinsonde data, are presented by monthly reference periods from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of the vector wind change is Rayleigh have been tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of vector wind at a future time given the vector wind at an initial time are derived. Wind changes over time periods from one to five hours, calculated from Jimsphere data, are presented.

  13. The law of distribution of light beam direction fluctuations in telescopes. [normal density functions

    NASA Technical Reports Server (NTRS)

    Divinskiy, M. L.; Kolchinskiy, I. G.

    1974-01-01

    The distribution of deviations from mean star trail directions was studied on the basis of 105 star trails. It was found that about 93% of the trails yield a distribution in agreement with the normal law. About 4% of the star trails agree with the Charlier distribution.

  14. A multi-agent approach to intelligent monitoring in smart grids

    NASA Astrophysics Data System (ADS)

    Vallejo, D.; Albusac, J.; Glez-Morcillo, C.; Castro-Schez, J. J.; Jiménez, L.

    2014-04-01

    In this paper, we propose a scalable multi-agent architecture to give support to smart grids, paying special attention to the intelligent monitoring of distribution substations. The data gathered by multiple sensors are used by software agents that are responsible for monitoring different aspects or events of interest, such as normal voltage values or unbalanced intensity values that can end up blowing fuses and decreasing the quality of service of end consumers. The knowledge bases of these agents have been built by means of a formal model for normality analysis that has been successfully used in other surveillance domains. The architecture facilitates the integration of new agents and can be easily configured and deployed to monitor different environments. The experiments have been conducted over a power distribution network.

  15. Normal probabilities for Vandenberg AFB wind components - monthly reference periods for all flight azimuths, 0- to 70-km altitudes

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1975-01-01

    Vandenberg Air Force Base (AFB), California, wind component statistics are presented to be used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as a statistical model to represent component winds at Vandenberg AFB. Head, tail, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. The results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Vandenberg AFB.
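
    Tabulating component-wind percentiles from a fitted normal model, in the spirit of the record above, reduces to evaluating the normal quantile function; the mean, standard deviation and percentile subset below are made-up illustrative values, not Vandenberg statistics.

      from scipy import stats

      mean_ms, sd_ms = 4.0, 7.5              # hypothetical monthly head-wind mean and sd (m/s)
      for p in (0.135, 2.28, 15.9, 50.0, 84.1, 97.72, 99.865):
          value = stats.norm.ppf(p / 100.0, loc=mean_ms, scale=sd_ms)
          print(f"{p:7.3f}th percentile -> {value:6.2f} m/s")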

  16. Distribution of intravenously administered acetylcholinesterase inhibitor and acetylcholinesterase activity in the adrenal gland: 11C-donepezil PET study in the normal rat.

    PubMed

    Watabe, Tadashi; Naka, Sadahiro; Ikeda, Hayato; Horitsugi, Genki; Kanai, Yasukazu; Isohashi, Kayako; Ishibashi, Mana; Kato, Hiroki; Shimosegawa, Eku; Watabe, Hiroshi; Hatazawa, Jun

    2014-01-01

    Acetylcholinesterase (AChE) inhibitors have been used for patients with Alzheimer's disease. However, its pharmacokinetics in non-target organs other than the brain has not been clarified yet. The purpose of this study was to evaluate the relationship between the whole-body distribution of intravenously administered (11)C-Donepezil (DNP) and the AChE activity in the normal rat, with special focus on the adrenal glands. The distribution of (11)C-DNP was investigated by PET/CT in 6 normal male Wistar rats (8 weeks old, body weight  = 220 ± 8.9 g). A 30-min dynamic scan was started simultaneously with an intravenous bolus injection of (11)C-DNP (45.0 ± 10.7 MBq). The whole-body distribution of the (11)C-DNP PET was evaluated based on the Vt (total distribution volume) by Logan-plot analysis. A fluorometric assay was performed to quantify the AChE activity in homogenized tissue solutions of the major organs. The PET analysis using Vt showed that the adrenal glands had the 2nd highest level of (11)C-DNP in the body (following the liver) (13.33 ± 1.08 and 19.43 ± 1.29 ml/cm(3), respectively), indicating that the distribution of (11)C-DNP was the highest in the adrenal glands, except for that in the excretory organs. The AChE activity was the third highest in the adrenal glands (following the small intestine and the stomach) (24.9 ± 1.6, 83.1 ± 3.0, and 38.5 ± 8.1 mU/mg, respectively), indicating high activity of AChE in the adrenal glands. We demonstrated the whole-body distribution of (11)C-DNP by PET and the AChE activity in the major organs by fluorometric assay in the normal rat. High accumulation of (11)C-DNP was observed in the adrenal glands, which suggested the risk of enhanced cholinergic synaptic transmission by the use of AChE inhibitors.

  17. Rhythm-based heartbeat duration normalization for atrial fibrillation detection.

    PubMed

    Islam, Md Saiful; Ammour, Nassim; Alajlan, Naif; Aboalsamh, Hatim

    2016-05-01

    Screening of atrial fibrillation (AF) for high-risk patients including all patients aged 65 years and older is important for prevention of risk of stroke. Different technologies such as modified blood pressure monitor, single lead ECG-based finger-probe, and smart phone using plethysmogram signal have been emerging for this purpose. All these technologies use irregularity of heartbeat duration as a feature for AF detection. We have investigated a normalization method of heartbeat duration for improved AF detection. AF is an arrhythmia in which heartbeat duration generally becomes irregularly irregular. From a window of heartbeat duration, we estimate the possible rhythm of the majority of heartbeats and normalize duration of all heartbeats in the window based on the rhythm so that we can measure the irregularity of heartbeats for both AF and non-AF rhythms in the same scale. Irregularity is measured by the entropy of distribution of the normalized duration. Then we classify a window of heartbeats as AF or non-AF by thresholding the measured irregularity. The effect of this normalization is evaluated by comparing AF detection performances using duration with the normalization, without normalization, and with other existing normalizations. Sensitivity and specificity of AF detection using normalized heartbeat duration were tested on two landmark databases available online and compared with results of other methods (with/without normalization) by receiver operating characteristic (ROC) curves. ROC analysis showed that the normalization was able to improve the performance of AF detection and it was consistent for a wide range of sensitivity and specificity for use of different thresholds. Detection accuracy was also computed for equal rates of sensitivity and specificity for different methods. Using normalized heartbeat duration, we obtained 96.38% accuracy which is more than 4% improvement compared to AF detection without normalization. The proposed normalization method was found useful for improving performance and robustness of AF detection. Incorporation of this method in a screening device could be crucial to reduce the risk of AF-related stroke. In general, the incorporation of the rhythm-based normalization in an AF detection method seems important for developing a robust AF screening device. Copyright © 2016 Elsevier Ltd. All rights reserved.
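
    A schematic sketch of the irregularity measure described above, not the authors' exact algorithm: RR intervals in a window are normalized by an estimated dominant rhythm, binned, and scored by Shannon entropy, and a window is flagged as AF-like when the entropy exceeds a threshold. The rhythm estimate (a median), the bin edges and the threshold are illustrative simplifications.

      import numpy as np

      def irregularity(rr_window, n_bins=16):
          rhythm = np.median(rr_window)                 # crude stand-in for the majority rhythm
          normalized = rr_window / rhythm
          hist, _ = np.histogram(normalized, bins=n_bins, range=(0.4, 1.6))
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))                # Shannon entropy (bits)

      rng = np.random.default_rng(2)
      regular = rng.normal(0.8, 0.02, 60)               # sinus-like RR intervals (s)
      irregular = rng.uniform(0.4, 1.2, 60)             # AF-like irregular RR intervals (s)
      threshold = 2.5                                   # arbitrary decision threshold (bits)
      for name, rr in (("regular", regular), ("irregular", irregular)):
          h = irregularity(rr)
          print(f"{name:9s} entropy = {h:.2f} -> {'AF-like' if h > threshold else 'non-AF'}")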

  18. NDVI statistical distribution of pasture areas at different times in the Community of Madrid (Spain)

    NASA Astrophysics Data System (ADS)

    Martín-Sotoca, Juan J.; Saa-Requejo, Antonio; Díaz-Ambrona, Carlos G. H.; Tarquis, Ana M.

    2015-04-01

    The severity of drought has many implications for society, including its impacts on the water supply, water pollution, reservoir management and ecosystems. However, its impacts on rain-fed agriculture are especially direct. Because of the importance of drought, there have been many attempts to characterize its severity, resulting in the numerous drought indices that have been developed (Niemeyer 2008). A 'biomass index' based on the satellite-derived Normalized Difference Vegetation Index (NDVI) has been used for pasture and forage crops for some years in countries such as the United States of America, Canada and Spain (Rao, 2010). This type of agricultural insurance is named 'index-based insurance' (IBI). IBI is perceived to be substantially less costly to operate and manage than multiple peril insurance. IBI contracts pay indemnities based not on the actual yield (or revenue) losses experienced by the insurance purchaser but rather on realized NDVI values (historical data) that are correlated with farm-level losses (Xiaohui Deng et al., 2008). The definition of when a drought event occurs is based on NDVI threshold values, mainly derived from the statistical parameters (average and standard deviation) that characterize a normal distribution. In this work a pasture area in the north of the Community of Madrid (Spain) has been delimited. Then, NDVI historical data were reconstructed based on MODIS remote sensing imagery with 500x500 m2 resolution. A statistical analysis of the NDVI histograms at 46 consecutive intervals of that area was applied to search for the best statistical distribution based on the maximum likelihood criterion. The results show that the normal distribution is not the optimal representation when IBI is available; the implications in the context of crop insurance are discussed (Martín-Sotoca, 2014).
    References:
    Kolli N Rao (2010). Index based Crop Insurance. Agriculture and Agricultural Science Procedia 1, 193-203.
    Martín-Sotoca, J.J. (2014). Estructura Espacial de la Sequía en Pastos y sus Aplicaciones en el Seguro Agrario. Master Thesis, UPM (in Spanish).
    Niemeyer, S. (2008). New drought indices. First Int. Conf. on Drought Management: Scientific and Technological Innovations, Zaragoza, Spain, Joint Research Centre of the European Commission. [Available online at http://www.iamz.ciheam.org/medroplan/zaragoza2008/Sequia2008/Session3/S.Niemeyer.pdf.]
    Xiaohui Deng, Barry J. Barnett, Gerrit Hoogenboom, Yingzhuo Yu and Axel Garcia y Garcia (2008). Alternative Crop Insurance Indexes. Journal of Agricultural and Applied Economics, 40(1), 223-237.
    Acknowledgements: The first author acknowledges the Research Grant obtained from CEIGRAM in 2014.

  19. Statistical distribution of building lot frontage: application for Tokyo downtown districts

    NASA Astrophysics Data System (ADS)

    Usui, Hiroyuki

    2018-03-01

    The frontage of a building lot is a determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable to identify potential districts which comprise a high percentage of building lots with narrow frontage after subdivision and to reconsider the appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages and the density of buildings and roads have not been fully researched. In this paper, based on an empirical study in the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the distribution of building lot frontages follows a log-normal distribution, whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the values of the coefficient of variation of building lot frontages, and that of the ratio of the number of building lot frontages to that of buildings are approximately equal to 0.60 and 1.19, respectively.

  20. Aspheric surface testing by irradiance transport equation

    NASA Astrophysics Data System (ADS)

    Shomali, Ramin; Darudi, Ahmad; Nasiri, Sadollah; Asgharsharghi Bonab, Armir

    2010-10-01

    In this paper a method for aspheric surface testing is presented. The method is based on solving the Irradiance Transport Equation (ITE). The accuracy of the ITE normally depends on the peak-to-valley value of the phase distribution. This subject is investigated by a simulation procedure.

  1. Estimating the Classification Efficiency of a Test Battery.

    ERIC Educational Resources Information Center

    De Corte, Wilfried

    2000-01-01

    Shows how a theorem proven by H. Brogden (1951, 1959) can be used to estimate the allocation average (a predictor based classification of a test battery) assuming that the predictor intercorrelations and validities are known and that the predictor variables have a joint multivariate normal distribution. (SLD)

  2. Assessing cadmium exposure risks of vegetables with plant uptake factor and soil property.

    PubMed

    Yang, Yang; Chang, Andrew C; Wang, Meie; Chen, Weiping; Peng, Chi

    2018-07-01

    Plant uptake factors (PUFs) are of great importance in human cadmium (Cd) exposure risk assessment, yet they have often been treated in a generic way. We collected 1077 pairs of vegetable-soil samples from production fields to characterize Cd PUFs and demonstrated their utility in assessing Cd exposure risks to consumers of locally grown vegetables. The Cd PUFs varied with plant species and with the pH and organic matter content of soils. Once the PUFs were normalized against soil parameters, their distributions were log-normal in nature. In this manner, the PUFs were represented by definable probability distributions instead of a deterministic figure. The Cd exposure risks were then assessed using the normalized PUFs based on a Monte Carlo simulation algorithm. Factors affecting the extent of Cd exposures were isolated through sensitivity analyses. The normalized PUFs illustrate the outcomes for uncontaminated and slightly contaminated soils. Among the vegetables, lettuce was potentially hazardous for residents due to its high Cd accumulation but low Zn concentration. To protect 95% of the lettuce production from causing excessive Cd exposure risks, the pH of soils needed to be 5.9 or above. Copyright © 2018 Elsevier Ltd. All rights reserved.
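
    A generic Monte Carlo sketch of this kind of exposure calculation: uptake factors are drawn from an assumed log-normal distribution and combined with soil Cd and consumption figures to give a distribution of daily intake. Every numerical value below is illustrative, not taken from the study.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 100_000
      puf = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=n)   # assumed PUF distribution
      soil_cd = 0.3                      # mg Cd per kg soil (illustrative)
      intake_veg = 0.25                  # kg vegetable per day (illustrative)
      body_weight = 60.0                 # kg (illustrative)

      daily_dose = puf * soil_cd * intake_veg / body_weight      # mg Cd per kg body weight per day
      print(f"median dose = {np.median(daily_dose):.2e} mg/kg/day, "
            f"95th percentile = {np.percentile(daily_dose, 95):.2e} mg/kg/day")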

  3. Modelling physiological deterioration in post-operative patient vital-sign data.

    PubMed

    Pimentel, Marco A F; Clifton, David A; Clifton, Lei; Watkinson, Peter J; Tarassenko, Lionel

    2013-08-01

    Patients who undergo upper-gastrointestinal surgery have a high incidence of post-operative complications, often requiring admission to the intensive care unit several days after surgery. A dataset comprising observational vital-sign data from 171 post-operative patients taking part in a two-phase clinical trial at the Oxford Cancer Centre was used to explore the trajectory of patients' vital-sign changes during their stay in the post-operative ward using both univariate and multivariate analyses. A model of normality based on vital-sign data from patients who had a "normal" recovery was constructed using a kernel density estimate, and tested with "abnormal" data from patients who deteriorated sufficiently to be re-admitted to the intensive care unit. The vital-sign distributions from "normal" patients were found to vary over time from admission to the post-operative ward to their discharge home, but no significant changes in their distributions were observed from halfway through their stay on the ward to the time of discharge. The model of normality identified patient deterioration when tested with unseen "abnormal" data, suggesting that such techniques may be used to provide early warning of adverse physiological events.
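
    The kernel-density "model of normality" idea generalizes readily; the sketch below (illustrative Python with hypothetical two-feature vital-sign data, not the authors' code) trains a KDE on "normal" recoveries and flags new observations whose log-density falls below a low training percentile.

    ```python
    # Hedged sketch of KDE-based novelty detection on vital signs (synthetic data).
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    # Rows: [heart rate, respiratory rate]; gaussian_kde expects shape (d, N).
    normal_train = np.vstack([rng.normal(80, 8, 500), rng.normal(16, 2, 500)])
    kde = gaussian_kde(normal_train)

    # Novelty threshold: a low percentile of the training log-density (assumed choice).
    train_logpdf = kde.logpdf(normal_train)
    threshold = np.percentile(train_logpdf, 1)

    new_obs = np.array([[120.0], [28.0]])          # one new observation, shape (d, 1)
    is_abnormal = kde.logpdf(new_obs)[0] < threshold
    print("abnormal:", is_abnormal)
    ```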

  4. Time-independent models of asset returns revisited

    NASA Astrophysics Data System (ADS)

    Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.

    2000-07-01

    In this study we investigate various well-known time-independent models of asset returns: the simple normal distribution, Student's t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion jump, and compound normal distribution. For this we use Standard and Poor's 500 index data from the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look visually more Lévy-type distributed than it is. This phenomenon is especially evident in the artificial data, which were generated by an inflated random walk process.

  5. Experimental investigation on aero-optics of supersonic turbulent boundary layers.

    PubMed

    Ding, Haolin; Yi, Shihe; Zhu, Yangzhu; He, Lin

    2017-09-20

    Nanoparticle-based planar laser scattering was used to measure the density distribution of the supersonic (Ma=3.0) turbulent boundary layer and the optical path difference (OPD), which is crucial for aero-optics studies. Results were obtained using ray tracing. The influences of the different layers within the boundary layer, of turbulence scales, and of the light incident angle on aero-optics were examined, and the underlying flow physics were analyzed. The inner layer plays a dominant role, followed by the outer layer. One hundred OPD rms values of the outer layer sampled at different times satisfy a normal distribution better than those of the inner layer. Aero-optics induced by the outer layer is sensitive to the filter scale, whereas that induced by the inner layer is not. Vortices with scales smaller than the Kolmogorov scale (=46.0 μm) have little influence on the aero-optics and can be ignored; the validity of the smallest optically active scale (=88.1 μm) proposed by Mani is verified, and ignoring vortices with scales smaller than that results in a 1.62% decay of aero-optics; a filter with a width of 16-grid spacing (=182.4 μm) decreases the OPD rms by 7.04%. As the angle between the wall-normal direction and the light-incident direction increases, the aero-optics becomes more severe, the difference between the distribution of the OPD rms and the normal distribution increases, and the difficulty of aero-optics correction increases. Light tilted toward downstream experiences more distortion than light tilted toward upstream at the same angle relative to the wall-normal direction.

  6. Application of Statistically Derived CPAS Parachute Parameters

    NASA Technical Reports Server (NTRS)

    Romero, Leah M.; Ray, Eric S.

    2013-01-01

    The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available to determine statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are treated as just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of the steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were fitted to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log-normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. This paper discusses the uniform methodology that was previously used, the process and results of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
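
    The distribution-fitting step described above can be sketched as follows; the fill-distance samples and the Kolmogorov-Smirnov ranking are illustrative assumptions standing in for the MATLAB fits and the engineering judgment used by CPAS.

    ```python
    # Hedged sketch: fit normal and log-normal models to reconstructed parameter
    # samples, keep the better fit, and sample it for Monte Carlo dispersions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    fill_distance = rng.lognormal(mean=np.log(8.0), sigma=0.25, size=40)  # synthetic samples

    candidates = {
        "normal": (stats.norm, stats.norm.fit(fill_distance)),
        "lognormal": (stats.lognorm, stats.lognorm.fit(fill_distance, floc=0)),
    }
    # Rank candidates by the Kolmogorov-Smirnov statistic (smaller is better).
    best_name, (best_dist, best_params) = min(
        candidates.items(),
        key=lambda kv: stats.kstest(fill_distance, kv[1][0].cdf, args=kv[1][1]).statistic,
    )
    dispersed_samples = best_dist.rvs(*best_params, size=10_000, random_state=rng)
    print(best_name, best_params)
    ```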

  7. A general approach to double-moment normalization of drop size distributions

    NASA Astrophysics Data System (ADS)

    Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.

    2003-04-01

    Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization that uses two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. Thus, a unified view of DSD normalization and a good model representation of DSDs are given. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.

  8. A Bayesian Nonparametric Meta-Analysis Model

    ERIC Educational Resources Information Center

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  9. Experimental Verification of Application of Looped System and Centralized Voltage Control in a Distribution System with Renewable Energy Sources

    NASA Astrophysics Data System (ADS)

    Hanai, Yuji; Hayashi, Yasuhiro; Matsuki, Junya

    Line voltage control in a distribution network is one of the most important issues for the penetration of Renewable Energy Sources (RES). A loop distribution network configuration is an effective solution to the voltage and distribution-loss issues associated with RES penetration. In this paper, for a loop distribution network, the authors propose a voltage control method based on tap change control of the LRT and active/reactive power control of the RES. The tap change control of the LRT plays the major role in the proposed voltage control, while the active/reactive power control of the RES supports it when deviation beyond the upper or lower voltage limit is otherwise unavoidable. The proposed method adopts a SCADA system based on data measured by IT switches, which are sectionalizing switches with sensors installed in the distribution feeder. In order to check the validity of the proposed voltage control method, experimental simulations using the distribution system analog simulator “ANSWER” are carried out. In the simulations, the voltage maintenance capability under normal and emergency conditions is evaluated.

  10. [Retrieve of red tide distributions from MODIS data based on the characteristics of water spectrum].

    PubMed

    Qiu, Zhong-Feng; Cui, Ting-Wei; He, Yi-Jun

    2011-08-01

    After comparing the spectral differences between red tide water and normal water, we developed a method to retrieve red tide distributions from MODIS data based on the characteristics of the red tide water spectrum. We used 119 series of in situ observations to validate the method and found that only one observation was not detected correctly. We then applied the method to MODIS data from April 4, 2005. In the research areas, three locations of red tide water were clearly detected, with a total area of about 2000 km2. The retrieved red tide distributions are in good agreement with the distributions of high chlorophyll a concentrations. The research suggests that the method is capable of eliminating the influence of suspended sediments and can be used to retrieve the locations and areas of red tide water.

  11. Structure of velocity distributions in shock waves in granular gases with extension to molecular gases.

    PubMed

    Vilquin, A; Boudet, J F; Kellay, H

    2016-08-01

    Velocity distributions in normal shock waves obtained in dilute granular flows are studied. These distributions cannot be described by a simple functional shape and are believed to be bimodal. Our results show that these distributions are not strictly bimodal but a trimodal distribution is shown to be sufficient. The usual Mott-Smith bimodal description of these distributions, developed for molecular gases, and based on the coexistence of two subpopulations (a supersonic and a subsonic population) in the shock front, can be modified by adding a third subpopulation. Our experiments show that this additional population results from collisions between the supersonic and subsonic subpopulations. We propose a simple approach incorporating the role of this third intermediate population to model the measured probability distributions and apply it to granular shocks as well as shocks in molecular gases.

  12. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    PubMed

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample. Under the assumption of binormality with equality of variances, the c-statistic is given by a standard normal cumulative distribution function whose argument depends on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic is given by a standard normal cumulative distribution function whose argument depends on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
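
    Under binormality with equal variances the c-statistic has a simple closed form, AUC = Phi((mu1 - mu0) / (sigma * sqrt(2))); with unequal variances the denominator becomes sqrt(sigma0^2 + sigma1^2). The sketch below (illustrative parameters) checks the equal-variance form against an empirical AUC computed from the Mann-Whitney U statistic.

    ```python
    # Hedged check of the binormal c-statistic against an empirical AUC.
    import numpy as np
    from scipy.stats import norm, mannwhitneyu

    rng = np.random.default_rng(3)
    mu0, mu1, sigma = 0.0, 1.0, 1.0
    x0 = rng.normal(mu0, sigma, 50_000)   # explanatory variable, without the condition
    x1 = rng.normal(mu1, sigma, 50_000)   # explanatory variable, with the condition

    c_analytic = norm.cdf((mu1 - mu0) / (np.sqrt(2) * sigma))

    # Empirical AUC via the Mann-Whitney U statistic: AUC = U / (n0 * n1).
    u = mannwhitneyu(x1, x0, alternative="greater").statistic
    c_empirical = u / (len(x0) * len(x1))
    print(f"analytic c = {c_analytic:.3f}, empirical c = {c_empirical:.3f}")
    ```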

  14. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  15. Exciting Normal Distribution

    ERIC Educational Resources Information Center

    Fuchs, Karl Josef; Simonovits, Reinhard; Thaller, Bernd

    2008-01-01

    This paper describes a high school project where the mathematics teaching and learning software M@th Desktop (MD) based on the Computer Algebra System Mathematica was used for symbolical and numerical calculations and for visualisation. The mathematics teaching and learning software M@th Desktop 2.0 (MD) contains the modules Basics including tools…

  16. A Noncentral "t" Regression Model for Meta-Analysis

    ERIC Educational Resources Information Center

    Camilli, Gregory; de la Torre, Jimmy; Chiu, Chia-Yi

    2010-01-01

    In this article, three multilevel models for meta-analysis are examined. Hedges and Olkin suggested that effect sizes follow a noncentral "t" distribution and proposed several approximate methods. Raudenbush and Bryk further refined this model; however, this procedure is based on a normal approximation. In the current research literature, this…

  17. Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei

    2010-01-01

    This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…

  18. Structural Dynamics of Management Zones for the Site-Specific Control of Tarnished Plant Bugs in Cotton

    USDA-ARS?s Scientific Manuscript database

    Precision-based agricultural application of insecticide relies on a non-random distribution of pests; tarnished plant bugs (Lygus lineolaris) are known to prefer vigorously growing patches of cotton. Management zones for various crops have been readily defined using NDVI (Normalized Difference Vege...

  19. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    The statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimation. The age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
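
    A truncated-normal time-to-failure model of the kind described above can be evaluated directly; the parameter values in this sketch are illustrative, not values from the report.

    ```python
    # Hedged sketch: normal distribution truncated at t = 0 used as a time-to-failure
    # model, with the reliability R(t) = 1 - F(t) and an age-dependent hazard rate.
    import numpy as np
    from scipy.stats import truncnorm

    mu, sigma = 1_000.0, 400.0                 # hours, assumed
    a, b = (0.0 - mu) / sigma, np.inf          # truncate at t >= 0 (standardized bounds)
    ttf = truncnorm(a, b, loc=mu, scale=sigma)

    t = np.array([100.0, 500.0, 1_000.0, 2_000.0])
    reliability = ttf.sf(t)                    # survival function, R(t)
    hazard = ttf.pdf(t) / ttf.sf(t)            # hazard rate, increasing with age
    print(reliability, hazard)
    ```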

  20. Phylogenetic groups among Klebsiella pneumoniae isolates from Brazil: relationship with antimicrobial resistance and origin.

    PubMed

    de Melo, Maíra Espíndola Silva; Cabral, Adriane Borges; Maciel, Maria Amélia Vieira; da Silveira, Vera Magalhães; de Souza Lopes, Ana Catarina

    2011-05-01

    The objectives of this study were to determine the distribution of phylogenetic groups among Klebsiella pneumoniae isolates from Recife, Brazil and to assess the relationship between the groups and the isolation sites and resistance profiles. Ninety-four isolates of K. pneumoniae from hospital or community infections and from normal microbiota were analyzed by gyrA PCR-RFLP, antibiotic susceptibility, and adonitol fermentation. The results revealed the distinction of three phylogenetic groups, as has also been reported in Europe, showing that these clusters are highly conserved within K. pneumoniae. Group KpI was dominated by hospital and community isolates, while groups KpII and KpIII comprised mainly normal microbiota isolates. Resistance to third-generation cephalosporins, aztreonam, imipenem, amoxicillin/clavulanic acid, and streptomycin was only observed in KpI. The percentage of resistance was highest in KpI, followed by KpII and KpIII. The differences in the distribution of K. pneumoniae phylogenetic groups observed in this study suggest distinctive clinical and epidemiological characteristics among the three groups, which is important for understanding the epidemiology of infections caused by this organism. This is the first study in Brazil of K. pneumoniae isolates from normal microbiota and community infections regarding the distribution of phylogenetic groups based on the gyrA gene.

  1. Prediction of renal crystalline size distributions in space using a PBE analytic model. 1. Effect of microgravity-induced biochemical alterations.

    PubMed

    Kassemi, Mohammad; Thompson, David

    2016-09-01

    An analytical Population Balance Equation model is developed and used to assess the risk of critical renal stone formation for astronauts during future space missions. The model uses the renal biochemical profile of the subject as input and predicts the steady-state size distribution of the nucleating, growing, and agglomerating calcium oxalate crystals during their transit through the kidney. The model is verified through comparison with published results of several crystallization experiments. Numerical results indicate that the model is successful in clearly distinguishing between 1-G normal and 1-G recurrent stone-former subjects based solely on their published 24-h urine biochemical profiles. Numerical case studies further show that the predicted renal calculi size distribution for a microgravity astronaut is closer to that of a recurrent stone former on Earth rather than to a normal subject in 1 G. This interestingly implies that the increase in renal stone risk level in microgravity is relatively more significant for a normal person than a stone former. However, numerical predictions still underscore that the stone-former subject carries by far the highest absolute risk of critical stone formation during space travel. Copyright © 2016 the American Physiological Society.

  2. Development and characterization of a strawberry MAGIC population derived from crosses with six strawberry cultivars

    PubMed Central

    Wada, Takuya; Oku, Koichiro; Nagano, Soichiro; Isobe, Sachiko; Suzuki, Hideyuki; Mori, Miyuki; Takata, Kinuko; Hirata, Chiharu; Shimomura, Katsumi; Tsubone, Masao; Katayama, Takao; Hirashima, Keita; Uchimura, Yosuke; Ikegami, Hidetoshi; Sueyoshi, Takayuki; Obu, Ko-ichi; Hayashida, Tatsuya; Shibato, Yasushi

    2017-01-01

    A strawberry Multi-parent Advanced Generation Intercross (MAGIC) population derived from crosses among six strawberry cultivars was successfully developed. The population was composed of 338 individuals; its genome conformation was evaluated with expressed sequence tag-derived simple sequence repeat (EST-SSR) markers. Cluster analysis and principal component analysis (PCA) based on EST-SSR marker polymorphisms revealed that the MAGIC population was a mosaic of the six founder cultivars and covered the genomic regions of the six founders evenly. Fruit-quality-related traits of the MAGIC population, including days to flowering (DTF), fruit weight (FW), fruit firmness (FF), fruit color (FC), soluble solid content (SC), and titratable acidity (TA), were evaluated over two years. All traits showed transgressive segregation beyond the founder cultivars, and most traits, except for DTF, were normally distributed. FC exhibited the highest correlation coefficient overall and was distributed normally regardless of differences in DTF, FW, FF, SC, and TA. These findings were supported by PCA using fruit-quality-related values as explanatory variables, suggesting that major genetic factors, which are not influenced by fluctuations in other fruit traits, could control the distribution of FC. This MAGIC population is a promising resource for genome-wide association studies and genomic selection for efficient strawberry breeding. PMID:29085247

  3. The transmembrane gradient of the dielectric constant influences the DPH lifetime distribution.

    PubMed

    Konopásek, I; Kvasnicka, P; Amler, E; Kotyk, A; Curatola, G

    1995-11-06

    The fluorescence lifetime distribution of 1,6-diphenyl-1,3,5-hexatriene (DPH) and 1-[4-(trimethylamino)phenyl]-6-phenyl-1,3,5-hexatriene (TMA-DPH) in egg-phosphatidylcholine liposomes was measured in normal and heavy water. The lower dielectric constant (by approximately 12%) of heavy water compared with normal water was exploited to provide direct evidence that the drop of the dielectric constant along the membrane normal shifts the centers of the lifetime distributions of both DPH and TMA-DPH to higher values and sharpens the widths of the distributions. The profile of the dielectric constant along the membrane normal was found to be not a linear gradient (in contrast to [1]) but a more complex function. The presence of cholesterol in liposomes further shifted the centers of the distributions to higher values and sharpened them. In addition, it resulted in a more gradient-like (i.e., more nearly linear) profile of the dielectric constant along the membrane normal. The effect of the change in dielectric constant on membrane proteins is discussed.

  4. Normal loads program for aerodynamic lifting surface theory. [evaluation of spanwise and chordwise loading distributions

    NASA Technical Reports Server (NTRS)

    Medan, R. T.; Ray, K. S.

    1974-01-01

    A description of and user's manual for a U.S.A. FORTRAN IV computer program are presented. The program evaluates spanwise and chordwise loading distributions, lift coefficient, pitching moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms, including asymmetrical ones and ones with mixed straight and curved edges.

  5. A probabilistic approach to photovoltaic generator performance prediction

    NASA Astrophysics Data System (ADS)

    Khallat, M. A.; Rahman, S.

    1986-09-01

    A method for predicting the performance of a photovoltaic (PV) generator based on long-term climatological data and expected cell performance is described. The equations for the cell model formulation are provided. The use of a statistical model for characterizing the insolation level is discussed. The insolation data are fitted to appropriate probability distribution functions (Weibull, beta, normal). The probability distribution functions are then used to evaluate the capacity factors of PV panels or arrays. An example is presented demonstrating the applicability of the procedure.
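
    A minimal sketch of the probabilistic capacity-factor idea, assuming a Weibull fit to synthetic insolation data and a simple linear PV power model (both assumptions for illustration; the paper also considers beta and normal fits):

    ```python
    # Hedged sketch: fit a Weibull distribution to insolation and integrate a simple
    # PV power model over it to estimate a capacity factor. All values are assumed.
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    rng = np.random.default_rng(5)
    insolation = rng.weibull(2.0, 2_000) * 600.0           # W/m^2, synthetic daytime data
    shape, _, scale = stats.weibull_min.fit(insolation, floc=0)

    g_stc, p_rated = 1_000.0, 1.0                          # reference irradiance, normalized rating

    def power(g):
        # Simple linear PV model clipped at rated power (illustrative assumption).
        return p_rated * min(g / g_stc, 1.0)

    def pdf(g):
        return stats.weibull_min.pdf(g, shape, scale=scale)

    expected_power, _ = quad(lambda g: power(g) * pdf(g), 0, 2_000)
    print(f"estimated capacity factor: {expected_power / p_rated:.2f}")
    ```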

  6. Noninvasive Characterization of Indeterminate Pulmonary Nodules Detected on Chest High-Resolution Computed Tomography

    DTIC Science & Technology

    2016-10-01

    The discriminability of benign and malignant nodules was analyzed using t-tests and the normal distribution of the individual metric values. Surround distribution: the distribution of the seven parenchymal exemplars (normal, honeycomb, reticular, ground glass, mild low attenuation area, ...) surrounding the nodule.

  7. 29 CFR 4044.73 - Lump sums and other alternative forms of distribution in lieu of annuities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distribution is the present value of the normal form of benefit provided by the plan payable at normal ... Benefits and Assets, Non-Trusteed Plans, § 4044.73: Lump sums and other alternative forms of distribution in lieu of annuities.

  8. Detection and Parameter Estimation of Chirped Radar Signals.

    DTIC Science & Technology

    2000-01-10

    Wigner-Ville distribution (WVD): the WVD belongs to Cohen’s class of energy distributions. Pseudo Wigner-Ville distribution (PWVD): the PWVD introduces a time window into the WVD definition, thereby reducing the interferences. (Figure captions: Wigner-Ville distribution, with frequency normalized to the sampling frequency and time normalized to the pulse length.)

  9. Improved quasi parton distribution through Wilson line renormalization

    DOE PAGES

    Chen, Jiunn-Wei; Ji, Xiangdong; Zhang, Jian-Hui

    2016-12-09

    Some recent developments showed that hadron light-cone parton distributions could be directly extracted from spacelike correlators, known as quasi parton distributions, in the large hadron momentum limit. Unlike the normal light-cone parton distribution, a quasi parton distribution contains ultraviolet (UV) power divergence associated with the Wilson line self energy. Here, we show that to all orders in the coupling expansion, the power divergence can be removed by a “mass” counterterm in the auxiliary z-field formalism, in the same way as the renormalization of power divergence for an open Wilson line. After adding this counterterm, the quasi quark distribution is improved such that it contains at most logarithmic divergences. Based on a simple version of discretized gauge action, we also present the one-loop matching kernel between the improved non-singlet quasi quark distribution with a lattice regulator and the corresponding quark distribution in dimensional regularization.

  10. Improved quasi parton distribution through Wilson line renormalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jiunn-Wei; Ji, Xiangdong; Zhang, Jian-Hui

    Some recent developments showed that hadron light-cone parton distributions could be directly extracted from spacelike correlators, known as quasi parton distributions, in the large hadron momentum limit. Unlike the normal light-cone parton distribution, a quasi parton distribution contains ultraviolet (UV) power divergence associated with the Wilson line self energy. Here, we show that to all orders in the coupling expansion, the power divergence can be removed by a “mass” counterterm in the auxiliary z-field formalism, in the same way as the renormalization of power divergence for an open Wilson line. After adding this counterterm, the quasi quark distribution is improved such that it contains at most logarithmic divergences. Based on a simple version of discretized gauge action, we also present the one-loop matching kernel between the improved non-singlet quasi quark distribution with a lattice regulator and the corresponding quark distribution in dimensional regularization.

  11. A survey of blood pressure in Lebanese children and adolescence

    PubMed Central

    Merhi, Bassem Abou; Al-Hajj, Fatima; Al-Tannir, Mohamad; Ziade, Fouad; El-Rajab, Mariam

    2011-01-01

    Background: Blood pressure varies between populations due to ethnic and environmental factors. Therefore, normal blood pressure values should be determined for different populations. Aims: The aim of this survey was to produce blood pressure nomograms for Lebanese children in order to establish distribution curves of blood pressure by age and sex. Subjects and Methods: We conducted a survey of blood pressure in 5710 Lebanese schoolchildren aged 5 to 15 years (2918 boys and 2792 girls), and studied the distribution of systolic and diastolic blood pressure in these children and adolescents. Blood pressure was measured with a mercury sphygmomanometer using a standardized technique. Results: Both systolic and diastolic blood pressure had a positive correlation with weight, height, age, and body mass index (r= 0.648, 0.643, 0.582, and 0.44, respectively) (P < .001). There was no significant difference in the systolic and diastolic blood pressure in boys compared to girls of corresponding ages. However, the average annual increase in systolic blood pressure was 2.86 mm Hg in boys and 2.63 mm Hg in girls, whereas the annual increase in diastolic blood pressure was 1.72 mm Hg in boys and 1.48 mm Hg in girls. The prevalence of high and high-normal blood pressure at the upper limit of normal (between the 90th and 95th percentile, at risk of future hypertension if not managed adequately), was 10.5% in boys and 6.9% in girls, with similar distributions among the two sexes. Conclusions: We present the first age-specific reference values for blood pressure of Lebanese children aged 5 to 15 years based on a good representative sample. The use of these reference values should help pediatricians identify children with normal, high-normal and high blood pressure. PMID:22540059

  12. A survey of blood pressure in Lebanese children and adolescence.

    PubMed

    Merhi, Bassem Abou; Al-Hajj, Fatima; Al-Tannir, Mohamad; Ziade, Fouad; El-Rajab, Mariam

    2011-01-01

    Blood pressure varies between populations due to ethnic and environmental factors. Therefore, normal blood pressure values should be determined for different populations. The aim of this survey was to produce blood pressure nomograms for Lebanese children in order to establish distribution curves of blood pressure by age and sex. We conducted a survey of blood pressure in 5710 Lebanese schoolchildren aged 5 to 15 years (2918 boys and 2792 girls), and studied the distribution of systolic and diastolic blood pressure in these children and adolescents. Blood pressure was measured with a mercury sphygmomanometer using a standardized technique. Both systolic and diastolic blood pressure had a positive correlation with weight, height, age, and body mass index (r= 0.648, 0.643, 0.582, and 0.44, respectively) (P < .001). There was no significant difference in the systolic and diastolic blood pressure in boys compared to girls of corresponding ages. However, the average annual increase in systolic blood pressure was 2.86 mm Hg in boys and 2.63 mm Hg in girls, whereas the annual increase in diastolic blood pressure was 1.72 mm Hg in boys and 1.48 mm Hg in girls. The prevalence of high and high-normal blood pressure at the upper limit of normal (between the 90th and 95th percentiles, at risk of future hypertension if not managed adequately), was 10.5% in boys and 6.9% in girls, with similar distributions among the two sexes. We present the first age-specific reference values for blood pressure of Lebanese children aged 5 to 15 years based on a good representative sample. The use of these reference values should help pediatricians identify children with normal, high-normal and high blood pressure.

  13. THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parravano, Antonio; Sanchez, Nestor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiucus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected root-N statistical fluctuations, increasing with H.

  14. The Dependence of Prestellar Core Mass Distributions on the Structure of the Parental Cloud

    NASA Astrophysics Data System (ADS)

    Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.

    2012-08-01

    The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle & Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle & Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiucus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected root-N statistical fluctuations, increasing with H.

  15. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
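
    The rectangular and conditional probabilities that the program computes can be reproduced with modern libraries; the sketch below (illustrative mean, covariance, and interval bounds) evaluates a rectangle probability by inclusion-exclusion on the joint CDF and a conditional probability from the standard normal marginal.

    ```python
    # Hedged sketch of bivariate normal rectangular and conditional probabilities.
    import numpy as np
    from scipy.stats import multivariate_normal, norm

    mean = np.array([0.0, 0.0])
    cov = np.array([[1.0, 0.6],
                    [0.6, 1.0]])
    mvn = multivariate_normal(mean=mean, cov=cov)

    # P(a1 < X < b1, a2 < Y < b2) = F(b1,b2) - F(a1,b2) - F(b1,a2) + F(a1,a2)
    a1, b1, a2, b2 = -1.0, 1.0, -0.5, 1.5
    p_rect = (mvn.cdf([b1, b2]) - mvn.cdf([a1, b2])
              - mvn.cdf([b1, a2]) + mvn.cdf([a1, a2]))

    # Conditional probability of the Y-interval given X falls inside its interval.
    p_x = norm.cdf(b1) - norm.cdf(a1)
    print(f"rectangle probability: {p_rect:.4f}, conditional on X: {p_rect / p_x:.4f}")
    ```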

  16. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects

    ERIC Educational Resources Information Center

    Ho, Andrew D.; Yu, Carol C.

    2015-01-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micerri similarly showed that the normality assumption is met rarely in educational and psychological…

  17. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  18. Mutants in Arabidopsis thaliana with altered shoot gravitropism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bullen, B.L.; Poff, K.L.

    1987-04-01

    A procedure has been developed and used to screen 40,000 M2 seedlings of Arabidopsis thaliana for strains with altered shoot gravitropism. Several strains have been identified for which shoot gravitropism is considerably more random than that of their wild-type parent (based on frequency distribution histograms of the gravitropic response to a 1 g stimulus). One such strain exhibits normal hypocotyl phototropism and normal root gravitropism. Thus, the gravitropism pathway in the shoot contains at least one mutable element which is not required for root gravitropism.

  19. Thermal emittance from ionization-induced trapping in plasma accelerators

    DOE PAGES

    Schroeder, C.  B.; Vay, J. -L.; Esarey, E.; ...

    2014-10-03

    The minimum obtainable transverse emittance (thermal emittance) of electron beams generated and trapped in plasma-based accelerators using laser ionization injection is examined. The initial transverse phase space distribution following ionization and passage through the laser is derived, and expressions for the normalized transverse beam emittance, both along and orthogonal to the laser polarization, are presented. Results are compared to particle-in-cell simulations. Ultralow emittance beams can be generated using laser ionization injection into plasma accelerators, and examples are presented showing normalized emittances on the order of tens of nm.

  20. Mechanistic simulation of normal-tissue damage in radiotherapy—implications for dose-volume analyses

    NASA Astrophysics Data System (ADS)

    Rutkowska, Eva; Baker, Colin; Nahum, Alan

    2010-04-01

    A radiobiologically based 3D model of normal tissue has been developed in which complications are generated when the tissue is 'irradiated'. The aim is to provide insight into the connection between dose-distribution characteristics, different organ architectures and complication rates beyond that obtainable with simple DVH-based analytical NTCP models. In this model the organ consists of a large number of functional subunits (FSUs), populated by stem cells which are killed according to the LQ model. A complication is triggered if the density of FSUs in any 'critical functioning volume' (CFV) falls below some threshold. The (fractional) CFV determines the organ architecture and can be varied continuously from small (series-like behaviour) to large (parallel-like). A key feature of the model is its ability to account for the spatial dependence of dose distributions. Simulations were carried out to investigate correlations between dose-volume parameters and the incidence of 'complications' using different pseudo-clinical dose distributions. Correlations between dose-volume parameters and outcome depended on the characteristics of the dose distributions and on the organ architecture. As anticipated, the mean dose and V20 correlated most strongly with outcome for a parallel organ, and the maximum dose for a serial organ. Interestingly, better correlation was obtained between the 3D computer model and the LKB model for dose distributions typical of serial organs than for those typical of parallel organs. This work links the results of dose-volume analyses to dataset characteristics typical for serial and parallel organs, and it may help investigators interpret the results from clinical studies.
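
    A stripped-down sketch of the model's two core ingredients, stem-cell kill by the linear-quadratic (LQ) model and a complication triggered when the surviving-FSU fraction falls below a threshold; all parameter values are illustrative assumptions and the dose is taken as spatially uniform for simplicity.

    ```python
    # Hedged sketch: LQ cell kill, FSU survival, and a threshold-based complication.
    import numpy as np

    rng = np.random.default_rng(11)
    alpha, beta = 0.15, 0.05                   # Gy^-1, Gy^-2 (assumed)
    n_fsu, cells_per_fsu = 10_000, 100
    dose_per_fraction, n_fractions = 2.0, 10   # Gy, number of fractions (assumed)

    # LQ survival probability per stem cell after the full course.
    sf = np.exp(-n_fractions * (alpha * dose_per_fraction + beta * dose_per_fraction**2))

    # An FSU survives if at least one of its stem cells survives.
    surviving_cells = rng.binomial(cells_per_fsu, sf, size=n_fsu)
    fsu_alive_fraction = np.mean(surviving_cells > 0)

    critical_fraction = 0.8    # threshold on surviving FSU density in the CFV (assumed)
    complication = fsu_alive_fraction < critical_fraction
    print(f"surviving FSU fraction: {fsu_alive_fraction:.3f}, complication: {complication}")
    ```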

  1. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    PubMed

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing the use of parametric tests. Alternatively, with skewed data, nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance the risk of type 1 errors (false positives) against that of type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premises and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples and references in this chapter support continued investigation of experimental designs and appropriate data analysis.
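
    The decision flow described above (check normality, try a log transform for skewed data, otherwise fall back to a rank-based test) can be sketched as follows; the data are synthetic and the 0.05 cut-off is the conventional choice, not a prescription.

    ```python
    # Hedged sketch: choose between a t test, a t test on log-transformed data,
    # and a rank-based Mann-Whitney test depending on normality checks.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    group_a = rng.lognormal(mean=0.0, sigma=0.8, size=30)   # skewed outcome, treatment A
    group_b = rng.lognormal(mean=0.4, sigma=0.8, size=30)   # skewed outcome, treatment B

    def normal_enough(x, alpha=0.05):
        return stats.shapiro(x).pvalue > alpha

    if normal_enough(group_a) and normal_enough(group_b):
        result = stats.ttest_ind(group_a, group_b)
    elif normal_enough(np.log(group_a)) and normal_enough(np.log(group_b)):
        result = stats.ttest_ind(np.log(group_a), np.log(group_b))   # parametric test on log scale
    else:
        result = stats.mannwhitneyu(group_a, group_b)                # rank-based fallback
    print(result)
    ```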

  2. Determination of fluence rate and temperature distributions in the rat brain; implications for photodynamic therapy.

    PubMed

    Angell-Petersen, Even; Hirschberg, Henry; Madsen, Steen J

    2007-01-01

    Light and heat distributions are measured in a rat glioma model used in photodynamic therapy. A fiber delivering 632-nm light is fixed in the brain of anesthetized BDIX rats. Fluence rates are measured using calibrated isotropic probes that are positioned stereotactically. Mathematical models are then used to derive tissue optical properties, enabling calculation of fluence rate distributions for general tumor and light application geometries. The fluence rates in tumor-free brains agree well with the models based on diffusion theory and Monte Carlo simulation. In both cases, the best fit is found for absorption and reduced scattering coefficients of 0.57 and 28 cm(-1), respectively. In brains with implanted BT(4)C tumors, a discrepancy between diffusion and Monte Carlo-derived two-layer models is noted. Both models suggest that tumor tissue has higher absorption and less scattering than normal brain. Temperatures are measured by inserting thermocouples directly into tumor-free brains. A model based on diffusion theory and the bioheat equation is found to be in good agreement with the experimental data and predict a thermal penetration depth of 0.60 cm in normal rat brain. The predicted parameters can be used to estimate the fluences, fluence rates, and temperatures achieved during photodynamic therapy.
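
    As a rough, hedged cross-check (standard diffusion theory, not the authors' full model), the fitted optical properties imply an effective attenuation coefficient and optical penetration depth as computed below; note that this optical penetration depth is distinct from the 0.60 cm thermal penetration depth quoted above.

    ```python
    # Hedged back-of-the-envelope estimate from standard diffusion theory.
    import numpy as np

    mu_a = 0.57      # absorption coefficient, cm^-1 (fitted value reported above)
    mu_s_red = 28.0  # reduced scattering coefficient, cm^-1 (fitted value reported above)

    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_red))   # effective attenuation, cm^-1
    delta = 1.0 / mu_eff                               # optical penetration depth, cm
    print(f"mu_eff = {mu_eff:.1f} cm^-1, optical penetration depth = {delta:.2f} cm")
    ```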

  3. Layover and shadow detection based on distributed spaceborne single-baseline InSAR

    NASA Astrophysics Data System (ADS)

    Huanxin, Zou; Bin, Cai; Changzhou, Fan; Yun, Ren

    2014-03-01

    Distributed spaceborne single-baseline InSAR is an effective technique for obtaining high-quality Digital Elevation Models. Layover and shadow are ubiquitous phenomena in SAR images because of the geometric relations of SAR imaging. In the signal processing of single-baseline InSAR, the phase singularity of layover and shadow regions makes the phase difficult to filter and unwrap. This paper analyzes the geometric and signal models of layover and shadow fields. Based on the interferometric signal autocorrelation matrix, the paper proposes a signal-number estimation method based on information theoretic criteria to distinguish layover and shadow from normal InSAR fields. The effectiveness and practicability of the proposed method are validated by simulation experiments and theoretical analysis.

  4. Distributed Ship Navigation Control System Based on Dual Network

    NASA Astrophysics Data System (ADS)

    Yao, Ying; Lv, Wu

    2017-10-01

    The navigation system is very important for a ship's normal operation. There are many devices and sensors in the navigation system that guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, as the related devices and sensors have developed, the navigation system also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network that combines a CAN bus and industrial Ethernet. We also introduce multiple distributed control terminals with a cooperative strategy based on synchronizing status by multicasting UDP messages containing operation timestamps, making the system more efficient and reliable.
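
    A minimal sketch of the status-synchronization idea, multicasting UDP messages that carry an operation and its timestamp; the group address, port, and JSON message format are illustrative assumptions, not the system's actual protocol.

    ```python
    # Hedged sketch: each control terminal multicasts a timestamped status message;
    # receivers apply the newest operation by timestamp.
    import json
    import socket
    import struct
    import time

    GROUP, PORT = "239.0.0.1", 5007   # illustrative multicast group and port

    def send_status(operation: str) -> None:
        msg = json.dumps({"op": operation, "ts": time.time()}).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) as sock:
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            sock.sendto(msg, (GROUP, PORT))

    def listen_status() -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", PORT))
            mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            while True:
                data, addr = sock.recvfrom(1024)
                status = json.loads(data)
                print(addr, status["op"], status["ts"])
    ```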

  5. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and to capture differences in higher order moments (e.g., mean and variance) between subjects in the cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent the boundary and non-identifiability problems that arise in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, a test with a simple asymptotic distribution has computational advantages over permutation-based tests for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood-based expectation-maximization test and show that the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power than several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between the two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  6. 3-D phononic crystals with ultra-wide band gaps

    PubMed Central

    Lu, Yan; Yang, Yang; Guest, James K.; Srivastava, Ankit

    2017-01-01

    In this paper gradient based topology optimization (TO) is used to discover 3-D phononic structures that exhibit ultra-wide normalized all-angle all-mode band gaps. The challenging computational task of repeated 3-D phononic band-structure evaluations is accomplished by a combination of a fast mixed variational eigenvalue solver and distributed Graphic Processing Unit (GPU) parallel computations. The TO algorithm utilizes the material distribution-based approach and a gradient-based optimizer. The design sensitivity for the mixed variational eigenvalue problem is derived using the adjoint method and is implemented through highly efficient vectorization techniques. We present optimized results for two-material simple cubic (SC), body centered cubic (BCC), and face centered cubic (FCC) crystal structures and show that in each of these cases different initial designs converge to single inclusion network topologies within their corresponding primitive cells. The optimized results show that large phononic stop bands for bulk wave propagation can be achieved at lower than close packed spherical configurations leading to lighter unit cells. For tungsten carbide - epoxy crystals we identify all angle all mode normalized stop bands exceeding 100%, which is larger than what is possible with only spherical inclusions. PMID:28233812

  7. 3-D phononic crystals with ultra-wide band gaps.

    PubMed

    Lu, Yan; Yang, Yang; Guest, James K; Srivastava, Ankit

    2017-02-24

    In this paper gradient based topology optimization (TO) is used to discover 3-D phononic structures that exhibit ultra-wide normalized all-angle all-mode band gaps. The challenging computational task of repeated 3-D phononic band-structure evaluations is accomplished by a combination of a fast mixed variational eigenvalue solver and distributed Graphic Processing Unit (GPU) parallel computations. The TO algorithm utilizes the material distribution-based approach and a gradient-based optimizer. The design sensitivity for the mixed variational eigenvalue problem is derived using the adjoint method and is implemented through highly efficient vectorization techniques. We present optimized results for two-material simple cubic (SC), body centered cubic (BCC), and face centered cubic (FCC) crystal structures and show that in each of these cases different initial designs converge to single inclusion network topologies within their corresponding primitive cells. The optimized results show that large phononic stop bands for bulk wave propagation can be achieved at lower than close packed spherical configurations leading to lighter unit cells. For tungsten carbide - epoxy crystals we identify all angle all mode normalized stop bands exceeding 100%, which is larger than what is possible with only spherical inclusions.

  8. Infilling and quality checking of discharge, precipitation and temperature data using a copula based approach

    NASA Astrophysics Data System (ADS)

    Anwar, Faizan; Bárdossy, András; Seidel, Jochen

    2017-04-01

    Estimating missing values in a time series of a hydrological variable is an everyday task for a hydrologist. Existing methods such as inverse distance weighting, multivariate regression, and kriging, though simple to apply, provide no indication of the quality of the estimated value and depend mainly on the values of neighboring stations at a given step in the time series. Copulas have the advantage of representing the pure dependence structure between two or more variables (given that the relationship between them is monotonic). They remove the need to transform the data before use or to calculate functions that model the relationship between the considered variables. A copula-based approach is suggested to infill discharge, precipitation, and temperature data. As a first step the normal copula is used; subsequently, the necessity of using non-normal/non-symmetrical dependence is investigated. Discharge and temperature are treated as regular continuous variables and can be used without further processing for infilling and quality checking. Due to the mixed distribution of precipitation values, precipitation has to be treated differently: a discrete probability is assigned to the zeros and the rest is treated as a continuous distribution. Building on the work of others, along with infilling, the normal copula is also utilized to identify values in a time series that might be erroneous. This is done by treating the available value as missing, infilling it using the normal copula, and checking whether it lies within a confidence band (5 to 95% in our case) of the obtained conditional distribution. Hydrological data from two catchments, the Upper Neckar River (Germany) and the Santa River (Peru), are used to demonstrate the application to datasets of different data quality. The Python code used here is also made available on GitHub.
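
    A minimal sketch of normal-copula infilling for a continuous variable such as discharge (synthetic data; the empirical-quantile back-transform and the 5-95% band are assumptions consistent with the description above, not the authors' GitHub code):

    ```python
    # Hedged sketch: transform stations to normal scores, estimate their correlation,
    # and infill a "missing" value from the conditional normal distribution.
    import numpy as np
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(4)
    n = 1_000
    target = rng.gamma(2.0, 50.0, n)                    # discharge at the target station
    neighbor = target * rng.lognormal(0.0, 0.2, n)      # correlated neighboring station

    def normal_scores(x):
        u = rankdata(x) / (len(x) + 1.0)                # empirical non-exceedance probabilities
        return norm.ppf(u)

    z_t, z_n = normal_scores(target), normal_scores(neighbor)
    rho = np.corrcoef(z_t, z_n)[0, 1]

    # Infill one value at the target station given the neighbor's observation.
    z_obs = z_n[0]
    cond_mean, cond_sd = rho * z_obs, np.sqrt(1.0 - rho**2)
    band = norm.ppf([0.05, 0.95], loc=cond_mean, scale=cond_sd)

    # Back-transform through the target station's empirical quantile function.
    def back(z):
        return np.quantile(target, norm.cdf(z))

    print("estimate:", back(cond_mean), "5-95% band:", back(band[0]), back(band[1]))
    ```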

  9. Pathways of topological rank analysis (PoTRA): a novel method to detect pathways involved in hepatocellular carcinoma

    PubMed Central

    Liu, Li; Dinu, Valentin

    2018-01-01

    Complex diseases such as cancer are usually the result of a combination of environmental factors and one or several biological pathways consisting of sets of genes. Each biological pathway exerts its function by delivering signaling through the gene network. Theoretically, a pathway is supposed to have a robust topological structure under normal physiological conditions. However, the pathway’s topological structure could be altered under some pathological condition. It is well known that a normal biological network includes a small number of well-connected hub nodes and a large number of nodes that are non-hubs. In addition, it is reported that the loss of connectivity is a common topological trait of cancer networks, which is an assumption of our method. Hence, from normal to cancer, the process of the network losing connectivity might be the process of disrupting the structure of the network, namely, the number of hub genes might be altered in cancer compared to that in normal or the distribution of topological ranks of genes might be altered. Based on this, we propose a new PageRank-based method called Pathways of Topological Rank Analysis (PoTRA) to detect pathways involved in cancer. We use PageRank to measure the relative topological ranks of genes in each biological pathway, then select hub genes for each pathway, and use Fisher’s exact test to test if the number of hub genes in each pathway is altered from normal to cancer. Alternatively, if the distribution of topological ranks of gene in a pathway is altered between normal and cancer, this pathway might also be involved in cancer. Hence, we use the Kolmogorov–Smirnov test to detect pathways that have an altered distribution of topological ranks of genes between two phenotypes. We apply PoTRA to study hepatocellular carcinoma (HCC) and several subtypes of HCC. Very interestingly, we discover that all significant pathways in HCC are cancer-associated generally, while several significant pathways in subtypes of HCC are HCC subtype-associated specifically. In conclusion, PoTRA is a new approach to explore and discover pathways involved in cancer. PoTRA can be used as a complement to other existing methods to broaden our understanding of the biological mechanisms behind cancer at the system-level. PMID:29666752

  10. Pathways of topological rank analysis (PoTRA): a novel method to detect pathways involved in hepatocellular carcinoma.

    PubMed

    Li, Chaoxing; Liu, Li; Dinu, Valentin

    2018-01-01

    Complex diseases such as cancer are usually the result of a combination of environmental factors and one or several biological pathways consisting of sets of genes. Each biological pathway exerts its function by delivering signaling through the gene network. Theoretically, a pathway is supposed to have a robust topological structure under normal physiological conditions. However, the pathway's topological structure could be altered under pathological conditions. It is well known that a normal biological network includes a small number of well-connected hub nodes and a large number of nodes that are non-hubs. In addition, the loss of connectivity is reported to be a common topological trait of cancer networks, which is an assumption of our method. Hence, from normal to cancer, the process of the network losing connectivity might be the process of disrupting the structure of the network; namely, the number of hub genes might be altered in cancer compared to normal, or the distribution of topological ranks of genes might be altered. Based on this, we propose a new PageRank-based method called Pathways of Topological Rank Analysis (PoTRA) to detect pathways involved in cancer. We use PageRank to measure the relative topological ranks of genes in each biological pathway, then select hub genes for each pathway, and use Fisher's exact test to test whether the number of hub genes in each pathway is altered from normal to cancer. Alternatively, if the distribution of topological ranks of genes in a pathway is altered between normal and cancer, this pathway might also be involved in cancer. Hence, we use the Kolmogorov-Smirnov test to detect pathways that have an altered distribution of topological ranks of genes between the two phenotypes. We apply PoTRA to study hepatocellular carcinoma (HCC) and several subtypes of HCC. Interestingly, we find that all significant pathways in HCC are generally cancer-associated, while several significant pathways in subtypes of HCC are specifically associated with those subtypes. In conclusion, PoTRA is a new approach to explore and discover pathways involved in cancer. PoTRA can be used as a complement to other existing methods to broaden our understanding of the biological mechanisms behind cancer at the system level.

  11. Statistical analysis of the 70 meter antenna surface distortions

    NASA Technical Reports Server (NTRS)

    Kiedron, K.; Chian, C. T.; Chuang, K. L.

    1987-01-01

    Statistical analysis of surface distortions of the 70 meter NASA/JPL antenna, located at Goldstone, was performed. The purpose of this analysis is to verify whether deviations due to gravity loading can be treated as quasi-random variables with normal distribution. Histograms of the RF pathlength error distribution for several antenna elevation positions were generated. The results indicate that the deviations from the ideal antenna surface are not normally distributed. The observed density distribution for all antenna elevation angles is taller and narrower than the normal density, which results in large positive values of kurtosis and a significant amount of skewness. The skewness of the distribution changes from positive to negative as the antenna elevation changes from zenith to horizon.

  12. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

    Optimal design of a water distribution network generally aims to minimize the capital cost of the investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually taken as the prime objective, since its share of the capital cost of a water distribution project is very high. However, minimizing the capital cost of the pipeline alone may produce an economical network configuration, but not necessarily a promising solution from a resilience point of view. Resilience of the water distribution network has become a popular surrogate measure of the network's ability to withstand failure scenarios. To improve the resilience of the network, the pipe network optimization can be performed with two objectives: minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the second. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two existing resilience indices and power efficiency are considered for the optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resilience indices.
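
    A minimal sketch of the normalization step described above: two objectives with different units (pipe cost and a resilience index) are rescaled to [0, 1] and combined into one weighted scalar that a single-objective optimizer such as differential evolution can minimize. The bounds, weights, and example numbers are illustrative assumptions, not values from the paper.

    ```python
    def normalized_single_objective(cost, resilience,
                                    cost_bounds, res_bounds,
                                    w_cost=0.5, w_res=0.5):
        c_min, c_max = cost_bounds          # cheapest / most expensive candidate design
        r_min, r_max = res_bounds           # least / most resilient candidate design
        c_norm = (cost - c_min) / (c_max - c_min)          # 0 = best, 1 = worst
        r_norm = (r_max - resilience) / (r_max - r_min)    # 0 = best, 1 = worst
        return w_cost * c_norm + w_res * r_norm            # scalar to minimize

    # example: a cheap but fragile design vs. a costlier, more resilient one
    print(normalized_single_objective(1.2e6, 0.35, (1.0e6, 3.0e6), (0.2, 0.8)))
    print(normalized_single_objective(2.0e6, 0.65, (1.0e6, 3.0e6), (0.2, 0.8)))
    ```

    In practice this scalar would be evaluated inside the hydraulic simulation loop and minimized over pipe diameters, for example with scipy.optimize.differential_evolution.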

  13. An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis

    ERIC Educational Resources Information Center

    Diwakar, Rekha

    2017-01-01

    Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data that do not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…

  14. Standard Error of Linear Observed-Score Equating for the NEAT Design with Nonnormally Distributed Data

    ERIC Educational Resources Information Center

    Zu, Jiyun; Yuan, Ke-Hai

    2012-01-01

    In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…

  15. From Intensity Profile to Surface Normal: Photometric Stereo for Unknown Light Sources and Isotropic Reflectances.

    PubMed

    Lu, Feng; Matsushita, Yasuyuki; Sato, Imari; Okabe, Takahiro; Sato, Yoichi

    2015-10-01

    We propose an uncalibrated photometric stereo method that works with general and unknown isotropic reflectances. Our method uses a pixel intensity profile, which is a sequence of radiance intensities recorded at a pixel under unknown varying directional illumination. We show that for general isotropic materials and uniformly distributed light directions, the geodesic distance between intensity profiles is linearly related to the angular difference of their corresponding surface normals, and that the intensity distribution of the intensity profile reveals reflectance properties. Based on these observations, we develop two methods for surface normal estimation; one for a general setting that uses only the recorded intensity profiles, the other for the case where a BRDF database is available while the exact BRDF of the target scene is still unknown. Quantitative and qualitative evaluations are conducted using both synthetic and real-world scenes, which show state-of-the-art accuracy of better than 10 degrees without using reference data and 5 degrees with reference data for all 100 materials in the MERL database.

  16. Computer program determines exact two-sided tolerance limits for normal distributions

    NASA Technical Reports Server (NTRS)

    Friedman, H. A.; Webb, S. R.

    1968-01-01

    Computer program determines, by numerical integration, exact statistical two-sided tolerance limits such that the proportion of the population between the limits is at least a specified value. The program is limited to situations in which the underlying probability distribution for the sampled population is the normal distribution with unknown mean and variance.
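
    The program above computes exact limits by numerical integration; the sketch below instead uses Howe's closed-form approximation, only to illustrate the quantity being computed (limits bracketing at least a given proportion of a normal population with stated confidence). The sample data are simulated.

    ```python
    import numpy as np
    from scipy import stats

    def approx_two_sided_tolerance(x, coverage=0.90, confidence=0.95):
        """Approximate limits bracketing `coverage` of a normal population."""
        x = np.asarray(x, dtype=float)
        n, nu = x.size, x.size - 1
        z = stats.norm.ppf((1.0 + coverage) / 2.0)
        chi2 = stats.chi2.ppf(1.0 - confidence, nu)
        k = z * np.sqrt(nu * (1.0 + 1.0 / n) / chi2)   # Howe (1969) tolerance factor
        m, s = x.mean(), x.std(ddof=1)
        return m - k * s, m + k * s

    rng = np.random.default_rng(0)
    print(approx_two_sided_tolerance(rng.normal(10.0, 2.0, size=20)))
    ```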

  17. Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models

    ERIC Educational Resources Information Center

    Chun, So Yeon; Shapiro, Alexander

    2009-01-01

    The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…

  18. Moderation analysis with missing data in the predictors.

    PubMed

    Zhang, Qian; Wang, Lijuan

    2017-12-01

    The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data could pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect with the existence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) Normal-distribution-based maximum likelihood estimation (NML); (b) Normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed in terms of the distribution mis-specification problem of the focal predictor. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    PubMed

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
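
    A sketch contrasting the normal-theory interval with a Monte Carlo approximation to the distribution of the product a·b for an indirect effect. The coefficients and standard errors are illustrative assumptions, and the Monte Carlo draw is a stand-in for the exact distribution-of-the-product calculation.

    ```python
    import numpy as np

    a, se_a = 0.40, 0.15     # X -> M path and its standard error (hypothetical)
    b, se_b = 0.35, 0.12     # M -> Y path and its standard error (hypothetical)

    # normal-theory interval: treats a*b as normal with Sobel's standard error
    se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    normal_ci = (a * b - 1.96 * se_ab, a * b + 1.96 * se_ab)

    # distribution-of-the-product interval, approximated by Monte Carlo sampling
    rng = np.random.default_rng(1)
    draws = rng.normal(a, se_a, 100_000) * rng.normal(b, se_b, 100_000)
    product_ci = tuple(np.percentile(draws, [2.5, 97.5]))

    print(normal_ci)
    print(product_ci)   # typically asymmetric around a*b
    ```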

  20. Stick-slip behavior in a continuum-granular experiment.

    PubMed

    Geller, Drew A; Ecke, Robert E; Dahmen, Karin A; Backhaus, Scott

    2015-12-01

    We report moment distribution results from a laboratory experiment, similar in character to an isolated strike-slip earthquake fault, consisting of sheared elastic plates separated by a narrow gap filled with a two-dimensional granular medium. Local measurement of strain displacements of the plates at 203 spatial points located adjacent to the gap allows direct determination of the event moments and their spatial and temporal distributions. We show that events consist of spatially coherent, larger motions and spatially extended (noncoherent), smaller events. The noncoherent events have a probability distribution of event moment consistent with an M^(-3/2) power-law scaling with Poisson-distributed recurrence times. Coherent events have a log-normal moment distribution and mean temporal recurrence. As the applied normal pressure increases, there are more coherent events and their log-normal distribution broadens and shifts to larger average moment.

  1. A Motion-Based Feature for Event-Based Pattern Recognition

    PubMed Central

    Clady, Xavier; Maro, Jean-Matthieu; Barré, Sébastien; Benosman, Ryad B.

    2017-01-01

    This paper introduces an event-based luminance-free feature from the output of asynchronous event-based neuromorphic retinas. The feature consists in mapping the distribution of the optical flow along the contours of the moving objects in the visual scene into a matrix. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating “spiking” events that encode relative changes in pixels' illumination at high temporal resolutions. The optical flow is computed at each event, and is integrated locally or globally in a speed and direction coordinate frame based grid, using speed-tuned temporal kernels. The latter ensures that the resulting feature equitably represents the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and the generality of the proposed feature are demonstrated in pattern recognition applications: local corner detection and global gesture recognition. PMID:28101001

  2. Improved Root Normal Size Distributions for Liquid Atomization

    DTIC Science & Technology

    2015-11-01

    Indexed text for this record consists of citation fragments rather than an abstract: Jackson, Primary Breakup of Round Aerated-Liquid Jets in Supersonic Crossflows, Atomization and Sprays, 16(6), 657-672, 2006; H. C. Simmons, The...; ...Breakup in Liquid-Gas Mixing Layers, Atomization and Sprays, 1, 421-440, 1991; P.-K. Wu, L.-K. Tseng, and G. M. Faeth, Primary Breakup in Gas/Liquid ... Distribution Statement A: Approved for public release; distribution is unlimited.

  3. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    PubMed

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).

  4. Quantitative neonatal glucose-6-phosphate dehydrogenase screening: distribution, reference values, and classification by phenotype.

    PubMed

    Algur, Nurit; Avraham, Irit; Hammerman, Cathy; Kaplan, Michael

    2012-08-01

    To determine enzyme assay reference values for newborns in a Sephardic Jewish population at high risk for glucose-6-phosphate dehydrogenase (G6PD) deficiency. Quantitative G6PD testing was performed on umbilical cord blood. The reduction of nicotinamide adenine dinucleotide phosphate to nicotinamide adenine dinucleotide phosphate-oxidase, reflecting G6PD activity, was measured spectrophotometrically. Hemoglobin (Hb) was measured on the same sample. G6PD activity was recorded as U/g Hb. Males (N = 1502) were separated into 2 distinct groups: those <7 U/g Hb (n = 243 [16.2%], median 0.28 U/g Hb), designated G6PD deficient, presumably hemizygotes; and those ≥ 9 U/g Hb (n = 1256 [83.8%], 18.76 U/g Hb), designated G6PD normal, presumably hemizygotes. Female (n = 1298) values formed a continuum and were categorized based on the male distribution: those <7 U/g Hb (n = 81 [6.2%], 4.84 U/g Hb), G6PD deficient, probably homozygotes; those ≥ 9.5 U/g Hb, equivalent to 50% of the male normal value (n = 1153 [88.8%], 18.36 U/g Hb), G6PD normal, probably homozygotes; and those with intermediate values (n = 64 [4.9%], 8.61 U/g Hb), probable heterozygotes. Accurate identification of the male G6PD-deficient state was possible despite high normal neonatal G6PD values. Female values formed a continuum, preventing accurate classification, but were classified based on the male phenotype for practical use. Copyright © 2012 Mosby, Inc. All rights reserved.
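
    A sketch of the classification rule implied by the cutoffs reported above (values in U/g Hb taken from the abstract); the function name and the handling of male values between 7 and 9 U/g Hb are illustrative assumptions.

    ```python
    def classify_g6pd(activity, sex):
        """Classify a cord-blood G6PD activity (U/g Hb) by the reported cutoffs."""
        if sex == "male":
            if activity < 7.0:
                return "deficient (presumed hemizygote)"
            if activity >= 9.0:
                return "normal (presumed hemizygote)"
            return "unclassified"              # no male group was reported between 7 and 9
        # females are classified against the male distribution
        if activity < 7.0:
            return "deficient (probable homozygote)"
        if activity >= 9.5:                    # roughly 50% of the normal male value
            return "normal (probable homozygote)"
        return "intermediate (probable heterozygote)"

    print(classify_g6pd(0.3, "male"))
    print(classify_g6pd(8.6, "female"))
    ```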

  5. Atrial Electrogram Fractionation Distribution before and after Pulmonary Vein Isolation in Human Persistent Atrial Fibrillation-A Retrospective Multivariate Statistical Analysis.

    PubMed

    Almeida, Tiago P; Chu, Gavin S; Li, Xin; Dastagir, Nawshin; Tuan, Jiun H; Stafford, Peter J; Schlindwein, Fernando S; Ng, G André

    2017-01-01

    Purpose: Complex fractionated atrial electrograms (CFAE)-guided ablation after pulmonary vein isolation (PVI) has been used for persistent atrial fibrillation (persAF) therapy. This strategy has shown suboptimal outcomes due to, among other factors, undetected changes in the atrial tissue following PVI. In the present work, we investigate CFAE distribution before and after PVI in patients with persAF using a multivariate statistical model. Methods: 207 pairs of atrial electrograms (AEGs) were collected before and after PVI from corresponding LA regions in 18 persAF patients. Twelve attributes were measured from the AEGs, before and after PVI. Statistical models based on multivariate analysis of variance (MANOVA) and linear discriminant analysis (LDA) were used to characterize the atrial regions and AEGs. Results: PVI significantly reduced CFAEs in the LA (70 vs. 40%; P < 0.0001). Four types of LA regions were identified, based on the AEG characteristics: (i) fractionated before PVI that remained fractionated after PVI (31% of the collected points); (ii) fractionated that converted to normal (39%); (iii) normal prior to PVI that became fractionated (9%); and (iv) normal that remained normal (21%). Individually, the attributes failed to distinguish these LA regions, but the multivariate statistical models were effective in their discrimination (P < 0.0001). Conclusion: Our results reveal that some LA regions are resistant to PVI, while others are affected by it. Although traditional methods were unable to identify these different regions, the proposed multivariate statistical model discriminated LA regions resistant to PVI from those affected by it without prior ablation information.

  6. A comparison of intensity modulated x-ray therapy to intensity modulated proton therapy for the delivery of non-uniform dose distributions

    NASA Astrophysics Data System (ADS)

    Flynn, Ryan

    2007-12-01

    The distribution of biological characteristics such as clonogen density, proliferation, and hypoxia throughout tumors is generally non-uniform; therefore it follows that the optimal dose prescriptions should also be non-uniform and tumor-specific. Advances in intensity modulated x-ray therapy (IMXT) technology have made the delivery of custom-made non-uniform dose distributions possible in practice. Intensity modulated proton therapy (IMPT) has the potential to deliver non-uniform dose distributions as well, while significantly reducing normal tissue and organ-at-risk dose relative to IMXT. In this work, a specialized treatment planning system was developed for the purpose of optimizing and comparing biologically based IMXT and IMPT plans. The IMXT systems of step-and-shoot (IMXT-SAS) and helical tomotherapy (IMXT-HT) and the IMPT systems of intensity modulated spot scanning (IMPT-SS) and distal gradient tracking (IMPT-DGT) were simulated. A thorough phantom study was conducted in which several subvolumes, contained within a base tumor region, were boosted or avoided with IMXT and IMPT. Different boosting situations were simulated by varying the size, proximity, and doses prescribed to the subvolumes, and the size of the phantom. IMXT and IMPT were also compared for a whole brain radiation therapy (WBRT) case, in which a brain metastasis was simultaneously boosted and the hippocampus was avoided. Finally, IMXT and IMPT dose distributions were compared for the case of a non-uniform dose prescription in a head and neck cancer patient that was based on PET imaging with the Cu(II)-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-ATSM) hypoxia marker. The non-uniform dose distributions within the tumor region were comparable for IMXT and IMPT. IMPT, however, was capable of delivering the same non-uniform dose distributions within a tumor using a 180° arc as for a full 360° rotation, which resulted in the reduction of normal tissue integral dose by a factor of up to three relative to IMXT, and the complete sparing of organs at risk distal to the tumor region.

  7. Downlink power distributions for 2G and 3G mobile communication networks.

    PubMed

    Colombi, Davide; Thors, Björn; Persson, Tomas; Wirén, Niklas; Larsson, Lars-Eric; Jonsson, Mikael; Törnevik, Christer

    2013-12-01

    Knowledge of realistic power levels is key when conducting accurate EMF exposure assessments. In this study, downlink output power distributions for radio base stations in 2G and 3G mobile communication networks have been assessed. The distributions were obtained from network measurement data collected from the Operations Support System, which normally is used for network monitoring and management. Significant amounts of data were gathered simultaneously for large sets of radio base stations covering wide geographical areas and different environments. The method was validated with in situ measurements. For the 3G network, the 90th percentile of the averaged output power during high traffic hours was found to be 43 % of the maximum available power. The corresponding number for 2G, with two or more transceivers installed, was 65 % or below.

  8. [Quantitative analysis method based on fractal theory for medical imaging of normal brain development in infants].

    PubMed

    Li, Heheng; Luo, Liangping; Huang, Li

    2011-02-01

    The present paper aims to study the fractal spectrum of cerebral computerized tomography in 158 normal infants of different age groups, based on calculations from chaos theory. The distribution range in the neonatal period was 1.88-1.90 (mean = 1.8913 +/- 0.0064); it reached a stable condition at the level of 1.89-1.90 during 1-12 months of age (mean = 1.8927 +/- 0.0045); the normal range for 1-2 year old infants was 1.86-1.90 (mean = 1.8863 +/- 0.0085); the quantitative value remained invariant within 1.88-1.91 (mean = 1.8958 +/- 0.0083) during 2-3 years of age. ANOVA indicated there was no significant difference between boys and girls (F = 0.243, P > 0.05), but the difference between age groups was significant (F = 8.947, P < 0.001). The fractal dimension of cerebral computerized tomography in normal infants, computed by the box-counting method, was maintained at a stable level from 1.86 to 1.91. This indicates that there exist some attractor modes in pediatric brain development.
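
    A minimal box-counting sketch for estimating a fractal dimension from a binary 2-D image such as a thresholded CT slice; the input mask, box sizes, and thresholding are assumptions, and the clinical processing pipeline of the study is not reproduced here.

    ```python
    import numpy as np

    def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
        """Estimate a fractal dimension of a 2-D boolean mask by box counting."""
        counts = []
        for s in box_sizes:
            h = (mask.shape[0] // s) * s                 # crop to multiples of the box size
            w = (mask.shape[1] // s) * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum()) # boxes containing any foreground
        # slope of log(count) against log(1/size) estimates the dimension
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(2)
    demo = rng.random((128, 128)) > 0.5                  # dense random mask, dimension ~2
    print(round(box_counting_dimension(demo), 2))
    ```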

  9. Detection of cancerous cervical cells using physical adhesion of fluorescent silica particles and centripetal force

    PubMed Central

    Gaikwad, Ravi M.; Dokukin, Maxim E.; Iyer, K. Swaminathan; Woodworth, Craig D.; Volkov, Dmytro O.; Sokolov, Igor

    2012-01-01

    Here we describe a non-traditional method to identify cancerous human cervical epithelial cells in a culture dish based on physical interaction between silica beads and cells. It is a simple optical fluorescence-based technique which detects the relative difference in the amount of fluorescent silica beads physically adherent to surfaces of cancerous and normal cervical cells. The method utilizes the centripetal force gradient that occurs in a rotating culture dish. Due to the variation in the balance between adhesion and centripetal forces, cancerous and normal cells demonstrate clearly distinctive distributions of the fluorescent particles adherent to the cell surface over the culture dish. The method demonstrates higher adhesion of silica particles to normal cells compared to cancerous cells. The difference in adhesion was initially observed by atomic force microscopy (AFM). The AFM data were used to design the parameters of the rotational dish experiment. The optical method that we describe is much faster and technically simpler than AFM. This work provides proof of the concept that physical interactions can be used to accurately discriminate normal and cancer cells. PMID:21305062

  10. Timing Solution and Single-pulse Properties for Eight Rotating Radio Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, B.-Y.; McLaughlin, M. A.; Boyles, J.

    Rotating radio transients (RRATs), loosely defined as objects that are discovered through only their single pulses, are sporadic pulsars that have a wide range of emission properties. For many of them, we must measure their periods and determine timing solutions relying on the timing of their individual pulses, while some of the less sporadic RRATs can be timed by using folding techniques as we do for other pulsars. Here, based on Parkes and Green Bank Telescope (GBT) observations, we introduce our results on eight RRATs including their timing-derived rotation parameters, positions, and dispersion measures (DMs), along with a comparison of the spin-down properties of RRATs and normal pulsars. Using data for 24 RRATs, we find that their period derivatives are generally larger than those of normal pulsars, independent of any intrinsic correlation with period, indicating that RRATs’ highly sporadic emission may be associated with intrinsically larger magnetic fields. We carry out Lomb–Scargle tests to search for periodicities in RRATs’ pulse detection times with long timescales. Periodicities are detected for all targets, with significant candidates of roughly 3.4 hr for PSR J1623−0841 and 0.7 hr for PSR J1839−0141. We also analyze their single-pulse amplitude distributions, finding that log-normal distributions provide the best fits, as is the case for most pulsars. However, several RRATs exhibit power-law tails, as seen for pulsars emitting giant pulses. This, along with consideration of the selection effects against the detection of weak pulses, implies that RRAT pulses generally represent the tail of a normal intensity distribution.

  11. Spatiotemporal Fractionation Schemes for Irradiating Large Cerebral Arteriovenous Malformations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bussière, Marc R.; Chapman, Paul H.

    2016-07-01

    Purpose: To optimally exploit fractionation effects in the context of radiosurgery treatments of large cerebral arteriovenous malformations (AVMs). In current practice, fractionated treatments divide the dose evenly into several fractions, which generally leads to low obliteration rates. In this work, we investigate the potential benefit of delivering distinct dose distributions in different fractions. Methods and Materials: Five patients with large cerebral AVMs were reviewed and replanned for intensity modulated arc therapy delivered with conventional photon beams. Treatment plans allowing for different dose distributions in all fractions were obtained by performing treatment plan optimization based on the cumulative biologically effective dose delivered at the end of treatment. Results: We show that distinct treatment plans can be designed for different fractions, such that high single-fraction doses are delivered to complementary parts of the AVM. All plans create a similar dose bath in the surrounding normal brain and thereby exploit the fractionation effect. This partial hypofractionation in the AVM along with fractionation in normal brain achieves a net improvement of the therapeutic ratio. We show that a biological dose reduction of approximately 10% in the healthy brain can be achieved compared with reference treatment schedules that deliver the same dose distribution in all fractions. Conclusions: Boosting complementary parts of the target volume in different fractions may provide a therapeutic advantage in fractionated radiosurgery treatments of large cerebral AVMs. The strategy allows for a mean dose reduction in normal brain that may be valuable for a patient population with an otherwise normal life expectancy.

  12. A method for estimating direct normal solar irradiation from satellite data for a tropical environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janjai, Serm

    In order to investigate a potential use of concentrating solar power technologies and select an optimum site for these technologies, it is necessary to obtain information on the geographical distribution of direct normal solar irradiation over an area of interest. In this work, we have developed a method for estimating direct normal irradiation from satellite data for a tropical environment. The method starts with the estimation of global irradiation on a horizontal surface from MTSAT-1R satellite data and other ground-based ancillary data. Then a satellite-based diffuse fraction model was developed and used to estimate the diffuse component of the satellite-derived global irradiation. Based on this estimated global and diffuse irradiation and the solar radiation incident angle, the direct normal irradiation was finally calculated. To evaluate its performance, the method was used to estimate the monthly average hourly direct normal irradiation at seven pyrheliometer stations in Thailand. It was found that values of monthly average hourly direct normal irradiation from the measurements and those estimated from the proposed method are in reasonable agreement, with a root mean square difference of 16% and a mean bias of -1.6%, with respect to mean measured values. After the validation, this method was used to estimate the monthly average hourly direct normal irradiation over Thailand by using MTSAT-1R satellite data for the period from June 2005 to December 2008. Results from the calculation were displayed as hourly and yearly irradiation maps. These maps reveal that the direct normal irradiation in Thailand was strongly affected by the tropical monsoons and local topography of the country.
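
    A sketch of the final step described above: recovering direct normal irradiation from the estimated global and diffuse horizontal irradiation and the solar geometry, DNI = (GHI - DHI) / cos(zenith). The input values and the zenith-angle cutoff are illustrative assumptions, not the paper's satellite-derived data.

    ```python
    import numpy as np

    def direct_normal_irradiation(ghi, dhi, zenith_deg, max_zenith=85.0):
        """DNI = (GHI - DHI) / cos(zenith), masked near the horizon."""
        ghi, dhi = np.asarray(ghi, float), np.asarray(dhi, float)
        zenith = np.asarray(zenith_deg, float)
        dni = (ghi - dhi) / np.cos(np.radians(zenith))
        # 1/cos(zenith) blows up for a very low sun, so such samples are masked
        return np.where(zenith < max_zenith, np.clip(dni, 0.0, None), np.nan)

    print(direct_normal_irradiation([800.0, 300.0], [200.0, 150.0], [30.0, 75.0]))
    ```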

  13. [Distribution of individuals by spontaneous frequencies of lymphocytes with micronuclei. Particularity and consequences].

    PubMed

    Serebrianyĭ, A M; Akleev, A V; Aleshchenko, A V; Antoshchina, M M; Kudriashova, O V; Riabchenko, N I; Semenova, L P; Pelevina, I I

    2011-01-01

    Using the micronucleus (MN) assay with cytochalasin B cytokinesis block, the mean frequency of blood lymphocytes with MN was determined in 76 Moscow inhabitants, 35 people from Obninsk, and 122 from the Chelyabinsk region. In contrast to the distribution of individuals by spontaneous frequency of cells with aberrations, which was shown to be binomial (Kusnetzov et al., 1980), the distribution of individuals by spontaneous frequency of cells with MN in all three cohorts can be regarded as log-normal (chi2 test). The distribution of individuals in the combined cohorts (Moscow and Obninsk inhabitants) and in the single cohort of all examined individuals must, with high reliability, be regarded as log-normal (0.70 and 0.86, respectively), but cannot be regarded as Poisson, binomial, or normal. Taking into account that a log-normal distribution of children by spontaneous frequency of lymphocytes with MN was also observed in a survey of 473 children from different kindergartens in Moscow, we conclude that log-normality is a regularity inherent in this type of damage to the lymphocyte genome. In contrast, the distribution of individuals by the frequency of lymphocytes with MN induced by irradiation in vitro must in most cases be regarded as normal. This distributional character suggests that the appearance of damage (genomic instability) in a single lymphocyte of an individual increases the probability of damage appearing in other lymphocytes. We propose that damaged stem-cell lymphocyte progenitors exchange information with undamaged cells, a process of the bystander-effect type. It can also be supposed that transmission of damage to daughter cells occurs at the time of stem cell division.

  14. Frequency distribution of lithium in leaves of Lycium andersonii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romney, E.M.; Wallace, A.; Kinnear, J.

    1977-01-01

    Lycium andersonii A. Gray is an accumulator of Li. Assays were made of 200 samples of it collected from six different locations within the Northern Mojave Desert. Mean concentrations of Li varied from location to location and tended not to follow a log_e normal distribution, and to follow a normal distribution only poorly. There was some negative skewness to the log_e distribution which did exist. The results imply that the variation in accumulation of Li depends upon native supply of Li. Possibly the Li supply and the ability of L. andersonii plants to accumulate it are both log_e normally distributed. The mean leaf concentration of Li in all locations was 29 µg/g, but the maximum was 166 µg/g.

  15. Indexing the relative abundance of age-0 white sturgeons in an impoundment of the lower Columbia River from highly skewed trawling data

    USGS Publications Warehouse

    Counihan, T.D.; Miller, Allen I.; Parsley, M.J.

    1999-01-01

    The development of recruitment monitoring programs for age-0 white sturgeons Acipenser transmontanus is complicated by the statistical properties of catch-per-unit-effort (CPUE) data. We found that age-0 CPUE distributions from bottom trawl surveys violated assumptions of statistical procedures based on normal probability theory. Further, no single data transformation uniformly satisfied these assumptions because CPUE distribution properties varied with the sample mean CPUE. Given these analytic problems, we propose that an additional index of age-0 white sturgeon relative abundance, the proportion of positive tows (Ep), be used to estimate sample sizes before conducting age-0 recruitment surveys and to evaluate statistical hypothesis tests comparing the relative abundance of age-0 white sturgeons among years. Monte Carlo simulations indicated that Ep was consistently more precise than the mean CPUE, and because Ep is binomially rather than normally distributed, surveys can be planned and analyzed without violating the assumptions of procedures based on normal probability theory. However, we show that Ep may underestimate changes in relative abundance at high levels and confound our ability to quantify responses to management actions if relative abundance is consistently high. If data suggest that most samples will contain age-0 white sturgeons, estimators of relative abundance other than Ep should be considered. Because Ep may also obscure correlations to climatic and hydrologic variables if high abundance levels are present in time series data, we recommend the mean CPUE be used to describe relations to environmental variables. The use of both Ep and the mean CPUE will facilitate the evaluation of hypothesis tests comparing relative abundance levels and correlations to variables affecting age-0 recruitment. Estimated sample sizes for surveys should therefore be based on detecting predetermined differences in Ep, but data necessary to calculate the mean CPUE should also be collected.
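
    A minimal sketch of the proportion-of-positive-tows index Ep with an exact binomial (Clopper-Pearson) confidence interval, shown alongside the mean CPUE; the catch vector is a made-up example, not survey data.

    ```python
    import numpy as np
    from scipy import stats

    catch = np.array([0, 0, 3, 0, 1, 0, 0, 7, 0, 0, 2, 0])   # age-0 fish per tow

    mean_cpue = catch.mean()
    n_pos, n_tows = int((catch > 0).sum()), catch.size
    ep = n_pos / n_tows

    # exact binomial (Clopper-Pearson) 95% interval for Ep via the beta distribution
    lo = stats.beta.ppf(0.025, n_pos, n_tows - n_pos + 1) if n_pos > 0 else 0.0
    hi = stats.beta.ppf(0.975, n_pos + 1, n_tows - n_pos) if n_pos < n_tows else 1.0
    print(f"mean CPUE = {mean_cpue:.2f}, Ep = {ep:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
    ```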

  16. Nonlinear normal modes in electrodynamic systems: A nonperturbative approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudrin, A. V., E-mail: kud@rf.unn.ru; Kudrina, O. A.; Petrov, E. Yu.

    2016-06-15

    We consider electromagnetic nonlinear normal modes in cylindrical cavity resonators filled with a nonlinear nondispersive medium. The key feature of the analysis is that exact analytic solutions of the nonlinear field equations are employed to study the mode properties in detail. Based on such a nonperturbative approach, we rigorously prove that the total energy of free nonlinear oscillations in a distributed conservative system, such as that considered in our work, can exactly coincide with the sum of energies of the normal modes of the system. This fact implies that the energy orthogonality property, which has so far been known to hold only for linear oscillations and fields, can also be observed in a nonlinear oscillatory system.

  17. A log-sinh transformation for data normalization and variance stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
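
    A sketch of the transformation and its inverse; the parameterization z = (1/b)·ln(sinh(a + b·y)) is the commonly quoted form and should be treated as an assumption here, and the parameter values are illustrative since in practice a and b would be fitted to the prediction errors.

    ```python
    import numpy as np

    def log_sinh(y, a, b):
        """z = (1/b) * ln(sinh(a + b*y)) for a positively skewed variable y."""
        return np.log(np.sinh(a + b * np.asarray(y, float))) / b

    def inv_log_sinh(z, a, b):
        """Back-transform: y = (arcsinh(exp(b*z)) - a) / b."""
        return (np.arcsinh(np.exp(b * np.asarray(z, float))) - a) / b

    a, b = 0.1, 0.02                                  # illustrative parameter values
    flows = np.array([1.0, 10.0, 100.0, 1000.0])      # a positively skewed variable
    z = log_sinh(flows, a, b)
    print(z)                                          # spread grows slowly for large flows
    print(np.allclose(inv_log_sinh(z, a, b), flows))  # round-trip check -> True
    ```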

  18. Evaluation of different strategies for calibration of the simple distributed model SEDD for sediment transport in an olive microcatchment

    NASA Astrophysics Data System (ADS)

    Burguet, M.

    2012-04-01

    Olive groves located in mountainous areas with steep slopes in the south of Spain have been identified as a major source of sediments in the region, contributing to diffuse pollution of surface water and causing major damage to roads and reservoirs. The objective of this study is the evaluation of different calibration approaches for a distributed water erosion model in a 6.7 ha watershed of olive groves with soil management based on tillage and herbicide in Setenil (Cadiz). The model chosen was SEDD (Ferro and Porto, 2000), which was calibrated using rainfall, runoff, and soil erosion data measured in the same basin over a series of five years, following the original methodology proposed by its authors. It was compared with the modelling approach presented by Taguas et al. (2011), which considers the possibility of a binomial distribution of its main parameter, the coefficient β. In both cases the calibration of the model assumes a constant C value, which is not the case in olive orchards (Gómez et al., 2003). In a second stage, the calibration of the model was repeated using a variable C depending on the ground cover and the evolution of soil moisture along the season. The results indicate that the coefficient β, which determines the travel time within each sub-basin, has a distribution that is far from the normal distribution suggested by Ferro and Porto (2000). This is similar to the result obtained by Taguas et al. (2011) in another olive grove basin. In this case the explanation for the deviation of the key parameter β from a normal distribution cannot be the evolution of the cover. The model also shows little predictive power because of its inability to capture the two major events that caused the greatest soil loss among the 97 measured events. These results suggest that progress must be made in the calibration of the model, based on estimates of β characteristic of the basin that do not depend on approximating its distribution by a normal distribution, and including the impact of soil management along the season.

  19. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample of those with and without the condition. Results Under the assumption of binormality with equality of variances, the c-statistic is given by a standard normal cumulative distribution function whose argument depends on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic is given by a standard normal cumulative distribution function whose argument depends on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
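
    A sketch of the binormal expression described in the Results, checked against an empirical AUC from simulated data; the means, standard deviations, and sample sizes are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def c_statistic_binormal(mu0, mu1, sd0, sd1):
        # c = Phi((mu1 - mu0) / sqrt(sd0**2 + sd1**2)); with equal variances this
        # equals Phi(sd * log-odds-ratio / sqrt(2)) for the matching logistic model
        return stats.norm.cdf((mu1 - mu0) / np.hypot(sd0, sd1))

    rng = np.random.default_rng(3)
    n = 5000
    x0 = rng.normal(0.0, 1.0, n)              # explanatory variable, without condition
    x1 = rng.normal(1.0, 1.0, n)              # explanatory variable, with condition
    auc = stats.mannwhitneyu(x1, x0).statistic / (n * n)  # empirical c-statistic
    print(round(c_statistic_binormal(0.0, 1.0, 1.0, 1.0), 3), round(auc, 3))
    ```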

  20. ESTABLISHMENT OF A FIBRINOGEN REFERENCE INTERVAL IN ORNATE BOX TURTLES (TERRAPENE ORNATA ORNATA).

    PubMed

    Parkinson, Lily; Olea-Popelka, Francisco; Klaphake, Eric; Dadone, Liza; Johnston, Matthew

    2016-09-01

    This study sought to establish a reference interval for fibrinogen in healthy ornate box turtles ( Terrapene ornata ornata). A total of 48 turtles were enrolled, with 42 turtles deemed to be noninflammatory and thus fitting the inclusion criteria and utilized to estimate a fibrinogen reference interval. Turtles were excluded based upon physical examination and blood work abnormalities. A Shapiro-Wilk normality test indicated that the noninflammatory turtle fibrinogen values were normally distributed (Gaussian distribution) with an average of 108 mg/dl and a 95% confidence interval of the mean of 97.9-117 mg/dl. Those turtles excluded from the reference interval because of abnormalities affecting their health had significantly different fibrinogen values (P = 0.313). A reference interval for healthy ornate box turtles was calculated. Further investigation into the utility of fibrinogen measurement for clinical usage in ornate box turtles is warranted.

  1. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected returns are multivariate normally distributed and the investor is risk averse. However, this model has not been used extensively in finance. Empirical results show that it is very hard to solve large scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative model, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed and is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi Absolute Deviation (M-LSAD) model proposed by Speranza [3]. We compare these models to determine which gives the most appropriate solution to investors.
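
    A sketch of the Konno-Yamazaki MAD model written as a linear program, using made-up return data rather than the ISE 100 series; the asset count, horizon, and return floor are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    T, n = 60, 5                                   # 60 periods, 5 assets (hypothetical)
    returns = rng.normal(0.01, 0.05, size=(T, n))
    mu = returns.mean(axis=0)
    dev = returns - mu                             # deviations from the mean returns
    target = mu.mean()                             # required expected portfolio return

    # decision vector = [x_1..x_n, y_1..y_T]; minimize the mean of the y_t
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    A_ub = np.vstack([
        np.hstack([dev, -np.eye(T)]),              #  sum_j dev_tj x_j - y_t <= 0
        np.hstack([-dev, -np.eye(T)]),             # -sum_j dev_tj x_j - y_t <= 0
        np.concatenate([-mu, np.zeros(T)])[None, :],   # expected-return floor
    ])
    b_ub = np.concatenate([np.zeros(2 * T), [-target]])
    A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]  # weights sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    print(res.x[:n].round(3))                      # portfolio weights (no short sales)
    print(res.fun)                                 # estimated mean absolute deviation
    ```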

  2. Effect of EMIC Wave Normal Angle Distribution on Relativistic Electron Scattering Based on the Newly Developed Self-consistent RC/EMIC Waves Model by Khazanov et al. [2006

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Gallagher, D. L.; Gamayunov, K.

    2007-01-01

    It is well known that the effects of EMIC waves on RC ion and RB electron dynamics strongly depend on such particle/wave characteristics as the phase-space distribution function, frequency, wave-normal angle, wave energy, and the form of the wave spectral energy density. Therefore, realistic characteristics of EMIC waves should be properly determined by modeling the RC-EMIC wave evolution self-consistently. Such a self-consistent model has been progressively developed by Khazanov et al. [2002-2006]. It solves a system of two coupled kinetic equations: one equation describes the RC ion dynamics and the other describes the energy density evolution of the EMIC waves. Using this model, we present results on the effectiveness of relativistic electron scattering and compare them with previous work in this area of research.

  3. Weighted optimization of irradiance for photodynamic therapy of port wine stains

    NASA Astrophysics Data System (ADS)

    He, Linhuan; Zhou, Ya; Hu, Xiaoming

    2016-10-01

    Planning of the irradiance distribution (PID) is one of the foremost factors for on-demand treatment of port wine stains (PWS) with photodynamic therapy (PDT). A weighted optimization method for PID was proposed according to the grading of PWS with a three-dimensional digital illumination instrument. Firstly, the point clouds of the lesions were filtered to remove erroneous or redundant points, triangulation was carried out, and the lesion was divided into small triangular patches. Secondly, the parameters of each triangular patch needed for optimization, such as area, normal vector, and orthocenter, were calculated, and the weighting coefficients were determined from the erythema indexes and areas of the patches. Then, the optimization initial point was calculated based on the normal vectors and orthocenters to optimize the light direction. Finally, the irradiation was optimized according to the cosine values of the irradiance angles and the weighting coefficients. Comparing the irradiance distributions before and after optimization, the proposed weighted optimization method makes the irradiance distribution match better with the characteristics of the lesions and has the potential to improve therapeutic efficacy.

  4. Estimation of Prestress Force Distribution in Multi-Strand System of Prestressed Concrete Structures Using Field Data Measured by Electromagnetic Sensor

    PubMed Central

    Cho, Keunhee; Cho, Jeong-Rae; Kim, Sung Tae; Park, Sung Yong; Kim, Young-Jin; Park, Young-Hwan

    2016-01-01

    The recently developed smart strand can be used to measure the prestress force in a prestressed concrete (PSC) structure from the construction stage to the in-service stage. The higher cost of the smart strand compared to the conventional strand makes it unaffordable to replace all the strands with smart strands, so only a limited number of smart strands are applied in a PSC structure. However, the prestress forces developed in the strands of the multi-strand system frequently adopted in PSC structures differ from each other, which means that the prestress force in the multi-strand system cannot be obtained by simple proportional scaling of the measurement from the smart strand. Therefore, this study examines the prestress force distribution in the multi-strand system to find the correlation between the prestress force measured by the smart strand and the prestress force distribution in the multi-strand system. To that end, the prestress force distribution was measured using electromagnetic sensors for various factors of the multi-strand system adopted on site in the fabrication of actual PSC girders. The results verified that the prestress force distribution per anchor head can be assumed to be normal, and a method for computing the mean and standard deviation defining this normal distribution is proposed. This paper presents a meaningful finding by proposing a method for estimating the prestress force based upon field-measured data of the prestress force distribution in the multi-strand system of actual PSC structures. PMID:27548172
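
    A minimal sketch of summarizing the per-strand forces at one anchor head with a fitted normal distribution, as proposed above; the force values are made up for illustration and are not the field measurements.

    ```python
    import numpy as np
    from scipy import stats

    forces_kN = np.array([182.0, 176.5, 189.3, 180.2, 174.8, 185.1, 179.4, 183.6])

    mean, sd = forces_kN.mean(), forces_kN.std(ddof=1)
    print(f"fitted normal distribution: mean = {mean:.1f} kN, sd = {sd:.1f} kN")

    # quick check that a normal model is not contradicted by the small sample
    _, p_value = stats.shapiro(forces_kN)
    print(f"Shapiro-Wilk p-value: {p_value:.2f}")
    ```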

  5. Profiling of adrenocorticotropic hormone and arginine vasopressin in human pituitary gland and tumor thin tissue sections using droplet-based liquid-microjunction surface-sampling-HPLC–ESI-MS–MS

    DOE PAGES

    Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R.; ...

    2015-06-18

    Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections using a fully automated droplet-based liquid microjunction surface sampling-HPLC-ESI-MS/MS system for spatially resolved sampling, HPLC separation, and mass spectral detection. Excellent correlation was found between the protein distribution data obtained with this droplet-based liquid microjunction surface sampling-HPLC-ESI-MS/MS system and those data obtained with matrix assisted laser desorption ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis) and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH secreting adenomas and in normal anterior adenohypophysis compared to non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis as anticipated. This work demonstrates that a fully automated droplet-based liquid microjunction surface sampling system coupled to HPLC-ESI-MS/MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically-relevant peptide and protein hormones, such as AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity and specificity of the current methodology support the potential of this basic technology with further advancement for assisting surgical decision-making.

  6. Profiling of adrenocorticotropic hormone and arginine vasopressin in human pituitary gland and tumor thin tissue sections using droplet-based liquid-microjunction surface-sampling-HPLC-ESI-MS-MS.

    PubMed

    Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R; Changelian, Armen; Laws, Edward R; Santagata, Sandro; Agar, Nathalie Y R; Van Berkel, Gary J

    2015-08-01

    Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections, using a fully automated droplet-based liquid-microjunction surface-sampling-HPLC-ESI-MS-MS system for spatially resolved sampling, HPLC separation, and mass spectrometric detection. Excellent correlation was found between the protein distribution data obtained with this method and data obtained with matrix-assisted laser desorption/ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis), and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH-secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH-secreting adenomas and in normal anterior adenohypophysis compared with non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis, as expected. This work reveals that a fully automated droplet-based liquid-microjunction surface-sampling system coupled to HPLC-ESI-MS-MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically-relevant peptide and protein hormones, including AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity, and specificity of this method support the potential of this basic technology, with further advancement, for assisting surgical decision-making. Graphical Abstract: Mass spectrometry based profiling of hormones in human pituitary gland and tumor thin tissue sections.

  7. Profiling of adrenocorticotropic hormone and arginine vasopressin in human pituitary gland and tumor thin tissue sections using droplet-based liquid-microjunction surface-sampling-HPLC–ESI-MS–MS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Calligaris, David; Feldman, Daniel R.

    Described here are the results from the profiling of the proteins arginine vasopressin (AVP) and adrenocorticotropic hormone (ACTH) from normal human pituitary gland and pituitary adenoma tissue sections using a fully automated droplet-based liquid microjunction surface sampling-HPLC-ESI-MS/MS system for spatially resolved sampling, HPLC separation, and mass spectral detection. Excellent correlation was found between the protein distribution data obtained with this droplet-based liquid microjunction surface sampling-HPLC-ESI-MS/MS system and those data obtained with matrix assisted laser desorption ionization (MALDI) chemical imaging analyses of serial sections of the same tissue. The protein distributions correlated with the visible anatomic pattern of the pituitary gland. AVP was most abundant in the posterior pituitary gland region (neurohypophysis) and ACTH was dominant in the anterior pituitary gland region (adenohypophysis). The relative amounts of AVP and ACTH sampled from a series of ACTH secreting and non-secreting pituitary adenomas correlated with histopathological evaluation. ACTH was readily detected at significantly higher levels in regions of ACTH secreting adenomas and in normal anterior adenohypophysis compared to non-secreting adenoma and neurohypophysis. AVP was mostly detected in normal neurohypophysis as anticipated. This work demonstrates that a fully automated droplet-based liquid microjunction surface sampling system coupled to HPLC-ESI-MS/MS can be readily used for spatially resolved sampling, separation, detection, and semi-quantitation of physiologically-relevant peptide and protein hormones, such as AVP and ACTH, directly from human tissue. In addition, the relative simplicity, rapidity and specificity of the current methodology support the potential of this basic technology with further advancement for assisting surgical decision-making.

  8. Testing models of parental investment strategy and offspring size in ants.

    PubMed

    Gilboa, Smadar; Nonacs, Peter

    2006-01-01

    Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.
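
    To make the categorization step concrete, the following is a minimal sketch, not the authors' simulation code, of how a departure from normality can be detected and coarsely classified as skewed left/right or platykurtic using scipy; the significance level, the classification rules and the simulated offspring weights are assumptions for illustration only.

      # Sketch: classify an offspring-size distribution as normal, skewed, or platykurtic.
      # Not the authors' simulation; scipy-based approximation for illustration only.
      import numpy as np
      from scipy import stats

      def classify_distribution(x, alpha=0.05):
          """Return a coarse label for the shape of the sample x."""
          x = np.asarray(x, dtype=float)
          _, p_normal = stats.normaltest(x)          # D'Agostino-Pearson omnibus test
          if p_normal >= alpha:
              return "approximately normal"
          z_skew, p_skew = stats.skewtest(x)         # significant skewness?
          z_kurt, p_kurt = stats.kurtosistest(x)     # significant excess kurtosis?
          if p_skew < alpha:
              return "skewed right" if z_skew > 0 else "skewed left"
          if p_kurt < alpha and z_kurt < 0:
              return "platykurtic"
          return "non-normal (other)"

      # Example with simulated offspring weights (arbitrary units).
      rng = np.random.default_rng(1)
      print(classify_distribution(rng.lognormal(mean=0.0, sigma=0.4, size=500)))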

  9. Vibrational Product States from Reactions of CN(-) with the Hydrogen Halides and Hydrogen Atoms,

    DTIC Science & Technology

    1981-01-15

    Several of the postulated schemes to synthesize CNH in outer space are based on ... has been observed in interstellar space. One major advantage of studying HCN instead of, say, CO2 is that the ν3 mode of HCN is very anharmonic ... Nebula by radio emission. Each distribution is normalized to 1.0, ignoring ... (Table IV) for the ν3 modes of HCN and CNH in Reactions (1)-(6). The hatched areas indicate the errors.

  10. Comparative pharmacokinetic and tissue distribution profiles of four major bioactive components in normal and hepatic fibrosis rats after oral administration of Fuzheng Huayu recipe.

    PubMed

    Yang, Tao; Liu, Shan; Wang, Chang-Hong; Tao, Yan-Yan; Zhou, Hua; Liu, Cheng-Hai

    2015-10-10

    Fuzheng Huayu recipe (FZHY) is a herbal product for the treatment of liver fibrosis approved by the Chinese State Food and Drug Administration (SFDA), but its pharmacokinetics and tissue distribution had not been investigated. In this study, the liver fibrotic model was induced with intraperitoneal injection of dimethylnitrosamine (DMN), and FZHY was given orally to the model and normal rats. The plasma pharmacokinetics and tissue distribution profiles of four major bioactive components from FZHY were analyzed in the normal and fibrotic rat groups using an ultrahigh performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method. Results revealed that the bioavailabilities of danshensu (DSS), salvianolic acid B (SAB) and rosmarinic acid (ROS) in liver fibrotic rats increased 1.49-, 3.31- and 2.37-fold, respectively, compared to normal rats. There was no obvious difference in the pharmacokinetics of amygdalin (AMY) between the normal and fibrotic rats. The tissue distribution of DSS, SAB, and AMY tended to be mostly in the kidney and lung. The distribution of DSS, SAB, and AMY in liver tissue of the model rats was significantly decreased compared to the normal rats. Significant differences in the pharmacokinetics and tissue distribution profiles of DSS, ROS, SAB and AMY were observed in rats with hepatic fibrosis after oral administration of FZHY. These results provide a meaningful basis for developing a clinical dosage regimen in the treatment of hepatic fibrosis by FZHY. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Discontinuity in the genetic and environmental causes of the intellectual disability spectrum.

    PubMed

    Reichenberg, Abraham; Cederlöf, Martin; McMillan, Andrew; Trzaskowski, Maciej; Kapra, Ori; Fruchter, Eyal; Ginat, Karen; Davidson, Michael; Weiser, Mark; Larsson, Henrik; Plomin, Robert; Lichtenstein, Paul

    2016-01-26

    Intellectual disability (ID) occurs in almost 3% of newborns. Despite substantial research, a fundamental question about its origin and links to intelligence (IQ) still remains. ID has been shown to be inherited and has been accepted as the extreme low of the normal IQ distribution. However, ID displays a complex pattern of inheritance. Previously, noninherited rare mutations were shown to contribute to severe ID risk in individual families, but in the majority of cases causes remain unknown. Common variants associated with ID risk in the population have not been systematically established. Here we evaluate the hypothesis, originally proposed almost 1 century ago, that most ID is caused by the same genetic and environmental influences responsible for the normal distribution of IQ, but that severe ID is not. We studied more than 1,000,000 sibling pairs and 9,000 twin pairs assessed for IQ and for the presence of ID. We evaluated whether genetic and environmental influences at the extremes of the distribution are different from those operating in the normal range. Here we show that factors influencing mild ID (lowest 3% of IQ distribution) were similar to those influencing IQ in the normal range. In contrast, the factors influencing severe ID (lowest 0.5% of IQ distribution) differ from those influencing mild ID or IQ scores in the normal range. Taken together, our results suggest that most severe ID is a distinct condition, qualitatively different from the preponderance of ID, which, in turn, represents the low extreme of the normal distribution of intelligence.

  12. Discontinuity in the genetic and environmental causes of the intellectual disability spectrum

    PubMed Central

    Reichenberg, Abraham; Cederlöf, Martin; McMillan, Andrew; Trzaskowski, Maciej; Kapra, Ori; Fruchter, Eyal; Ginat, Karen; Davidson, Michael; Weiser, Mark; Larsson, Henrik; Plomin, Robert; Lichtenstein, Paul

    2016-01-01

    Intellectual disability (ID) occurs in almost 3% of newborns. Despite substantial research, a fundamental question about its origin and links to intelligence (IQ) still remains. ID has been shown to be inherited and has been accepted as the extreme low of the normal IQ distribution. However, ID displays a complex pattern of inheritance. Previously, noninherited rare mutations were shown to contribute to severe ID risk in individual families, but in the majority of cases causes remain unknown. Common variants associated with ID risk in the population have not been systematically established. Here we evaluate the hypothesis, originally proposed almost 1 century ago, that most ID is caused by the same genetic and environmental influences responsible for the normal distribution of IQ, but that severe ID is not. We studied more than 1,000,000 sibling pairs and 9,000 twin pairs assessed for IQ and for the presence of ID. We evaluated whether genetic and environmental influences at the extremes of the distribution are different from those operating in the normal range. Here we show that factors influencing mild ID (lowest 3% of IQ distribution) were similar to those influencing IQ in the normal range. In contrast, the factors influencing severe ID (lowest 0.5% of IQ distribution) differ from those influencing mild ID or IQ scores in the normal range. Taken together, our results suggest that most severe ID is a distinct condition, qualitatively different from the preponderance of ID, which, in turn, represents the low extreme of the normal distribution of intelligence. PMID:26711998

  13. Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models

    NASA Astrophysics Data System (ADS)

    Biemiller, J.; Lavier, L. L.; Wallace, L.

    2016-12-01

    Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.

  14. Transmittance of semitransparent windows with absorbing cap-shaped droplets condensed on their backside

    NASA Astrophysics Data System (ADS)

    Zhu, Keyong; Pilon, Laurent

    2017-11-01

    This study aims to investigate systematically light transfer through semitransparent windows with absorbing cap-shaped droplets condensed on their backside as encountered in greenhouses, solar desalination plants, photobioreactors and covered raceway ponds. The Monte Carlo ray-tracing method was used to predict the normal-hemispherical transmittance, reflectance, and normal absorptance accounting for reflection and refraction at the air/droplet, droplet/window, and window/air interfaces and absorption in both the droplets and the window. The droplets were monodisperse or polydisperse and arranged either in an ordered hexagonal pattern or randomly distributed on the backside, with droplet contact angle θc ranging between 0 and 180°. The normal-hemispherical transmittance was found to be independent of the spatial distribution of droplets. However, it decreased with increasing droplet diameter and polydispersity. The normal-hemispherical transmittance featured four distinct optical regimes for a semitransparent window supporting nonabsorbing droplets. These optical regimes were defined based on the contact angle and the critical angle for internal reflection at the droplet/air interface. However, for strongly absorbing droplets, the normal-hemispherical transmittance (i) decreased monotonically with increasing contact angle for θc < 90° and (ii) remained constant and independent of droplet absorption index kd, droplet mean diameter dm, and contact angle θc for θc ≥ 90°. Analytical expressions for the normal-hemispherical transmittance were provided in the asymptotic cases when (1) the window was absorbing but the droplets were nonabsorbing with any contact angle θc, and (2) the droplets were strongly absorbing with contact angle θc > 90°. Finally, the spectral normal-hemispherical transmittance of a 3 mm-thick glass window supporting condensed water droplets for wavelengths between 0.4 and 5 μm was predicted and discussed in light of the earlier parametric study and asymptotic behavior.

  15. Zn deposition at the bone cartilage interface in equine articular cartilage

    NASA Astrophysics Data System (ADS)

    Bradley, D. A.; Moger, C. J.; Winlove, C. P.

    2007-09-01

    In articular cartilage metalloproteinases, a family of enzymes whose function relies on the presence of divalent cations such as Zn and Ca plays a central role in the normal processes of growth and remodelling and in the degenerative and inflammatory processes of arthritis. Another important enzyme, alkaline phosphatase, involved in cartilage mineralisation also relies on metallic cofactors. The local concentration of divalent cations is therefore of considerable interest in cartilage pathophysiology and several authors have used synchrotron X-ray fluorescence (XRF) to map metal ion distributions in bone and cartilage. We report use of a bench-top XRF analytical microscope, providing spatial resolution of 10 μm and applicable to histological sections, facilitating correlation of the distribution with structural features. The study seeks to establish the elemental distribution in normal tissue as a precursor to investigation of changes in disease. For six samples prepared from equine metacarpophalangeal joint, we observed increased concentration of Zn and Sr ions around the tidemark between normal and mineralised cartilage. This is believed to be an active site of remodelling but its composition has hitherto lacked detailed characterization. We also report preliminary results on two of the samples using Proton-Induced X-ray Emission (PIXE). This confirms our previous observations using synchrotron-based XRF of enhanced deposition of Sr and Zn at the surface of the subchondral bone and in articular cartilage.

  16. Linearization correction of 99mTc-labeled hexamethyl-propylene amine oxime (HM-PAO) image in terms of regional CBF distribution: comparison to C15O2 inhalation steady-state method measured by positron emission tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inugami, A.; Kanno, I.; Uemura, K.

    1988-12-01

    The radioisotope distribution following intravenous injection of 99mTc-labeled hexamethylpropyleneamine oxime (HM-PAO) in the brain was measured by single photon emission computed tomography (SPECT) and corrected for the nonlinearity caused by differences in net extraction. The linearization correction was based on a three compartment model, and it required a region of reference to normalize the SPECT image in terms of regional cerebral blood flow distribution. Two different regions of reference, the cerebellum and the whole brain, were tested. The uncorrected and corrected HM-PAO images were compared with the cerebral blood flow (CBF) image measured by the C15O2 inhalation steady-state method and positron emission tomography (PET). The relationship between uncorrected HM-PAO and PET-CBF showed a correlation coefficient of 0.85 but tended to saturate at high CBF values, whereas it was improved to 0.93 after the linearization correction. The whole-brain normalization worked just as well as normalization using the cerebellum. This study constitutes a validation of the linearization correction and it suggests that after linearization the HM-PAO image may be scaled to absolute CBF by employing a global hemispheric CBF value as measured by the nontomographic 133Xe clearance method.

  17. A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.

    PubMed

    Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo

    2016-01-01

    In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
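
    A minimal sketch of the core selection step described above, scoring pulse-peak candidates by a Gaussian probability of their RR-normalized PTT and adaptively updating the Gaussian parameters, is given below. The class name, the prior mean/variance, the running-update rule and the example timings are assumptions for illustration, not the authors' implementation.

      # Sketch of Gaussian-probability pulse-peak selection (illustrative, not the authors' code).
      import numpy as np

      class GaussianPTTSelector:
          def __init__(self, mu=0.3, var=0.01):
              # Prior mean/variance of PTT normalized by the RR interval (assumed values).
              self.mu, self.var, self.n = mu, var, 1

          def score(self, normalized_ptt):
              # Gaussian likelihood of a candidate's RR-normalized PTT.
              return np.exp(-(normalized_ptt - self.mu) ** 2 / (2 * self.var)) / np.sqrt(2 * np.pi * self.var)

          def select(self, r_peak_time, rr_interval, candidate_peak_times):
              # Candidates are PPG peaks found inside the current RR interval.
              ptts = np.asarray(candidate_peak_times) - r_peak_time
              probs = self.score(ptts / rr_interval)
              best = int(np.argmax(probs))
              self._update(ptts[best] / rr_interval)
              return candidate_peak_times[best]

          def _update(self, x):
              # Running mean/variance update keeps the Gaussian adaptive to changing cardiac cycles.
              self.n += 1
              delta = x - self.mu
              self.mu += delta / self.n
              self.var += (delta * (x - self.mu) - self.var) / self.n

      sel = GaussianPTTSelector()
      print(sel.select(r_peak_time=10.00, rr_interval=0.8, candidate_peak_times=[10.18, 10.26, 10.55]))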

  18. Comparison of CSF Distribution between Idiopathic Normal Pressure Hydrocephalus and Alzheimer Disease.

    PubMed

    Yamada, S; Ishikawa, M; Yamamoto, K

    2016-07-01

    CSF volumes in the basal cistern and Sylvian fissure are increased in both idiopathic normal pressure hydrocephalus and Alzheimer disease, though the differences in these volumes in idiopathic normal pressure hydrocephalus and Alzheimer disease have not been well-described. Using CSF segmentation and volume quantification, we compared the distribution of CSF in idiopathic normal pressure hydrocephalus and Alzheimer disease. CSF volumes were extracted from T2-weighted 3D spin-echo sequences on 3T MR imaging and quantified semi-automatically. We compared the volumes and ratios of the ventricles and subarachnoid spaces after classification in 30 patients diagnosed with idiopathic normal pressure hydrocephalus, 10 with concurrent idiopathic normal pressure hydrocephalus and Alzheimer disease, 18 with Alzheimer disease, and 26 control subjects 60 years of age or older. Brain to ventricle ratios at the anterior and posterior commissure levels and 3D volumetric convexity cistern to ventricle ratios were useful indices for the differential diagnosis of idiopathic normal pressure hydrocephalus or idiopathic normal pressure hydrocephalus with Alzheimer disease from Alzheimer disease, similar to the z-Evans index and callosal angle. The most distinctive characteristics of the CSF distribution in idiopathic normal pressure hydrocephalus were small convexity subarachnoid spaces and the large volume of the basal cistern and Sylvian fissure. The distribution of the subarachnoid spaces in the idiopathic normal pressure hydrocephalus with Alzheimer disease group was the most deformed among these 3 groups, though the mean ventricular volume of the idiopathic normal pressure hydrocephalus with Alzheimer disease group was intermediate between that of the idiopathic normal pressure hydrocephalus and Alzheimer disease groups. The z-axial expansion of the lateral ventricle and compression of the brain just above the ventricle were the common findings in the parameters for differentiating idiopathic normal pressure hydrocephalus from Alzheimer disease. © 2016 by American Journal of Neuroradiology.

  19. Multiphoton microscopic imaging of human normal and cancerous oesophagus tissue.

    PubMed

    Chen, W S; Wang, Y; Liu, N R; Zhang, J X; Chen, R

    2014-01-01

    In this paper, microstructures of human oesophageal submucosa are evaluated using multiphoton microscopy, based on two-photon excited fluorescence and second harmonic generation. The content and distribution of collagen, elastic fibers and cancer cells in the normal and cancerous submucosa layers were clearly resolved and are briefly discussed. The variation of these components is highly relevant to oesophageal pathology, especially in early oesophageal cancer. Our results further indicate that the multiphoton microscopy technique has potential for in vivo application in the clinical diagnosis and monitoring of early oesophageal cancer. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  20. Transthoracic lung ultrasound in normal dogs and dogs with cardiogenic pulmonary edema: a pilot study.

    PubMed

    Rademacher, Nathalie; Pariaut, Romain; Pate, Julie; Saelinger, Carley; Kearney, Michael T; Gaschen, Lorrie

    2014-01-01

    Pulmonary edema is the most common complication of left-sided heart failure in dogs and early detection is important for effective clinical management. In people, pulmonary edema is commonly diagnosed based on transthoracic ultrasonography and detection of B line artifacts (vertical, narrow-based, well-defined hyperechoic rays arising from the pleural surface). The purpose of this study was to determine whether B line artifacts could also be useful diagnostic predictors for cardiogenic pulmonary edema in dogs. Thirty-one normal dogs and nine dogs with cardiogenic pulmonary edema were prospectively recruited. For each dog, presence or absence of cardiogenic pulmonary edema was based on physical examination, heartworm testing, thoracic radiographs, and echocardiography. A single observer performed transthoracic ultrasonography in all dogs and recorded video clips and still images for each of four quadrants in each hemithorax. Distribution, sonographic characteristics, and number of B lines per thoracic quadrant were determined and compared between groups. B lines were detected in 31% of normal dogs (mean 0.9 ± 0.3 SD per dog) and 100% of dogs with cardiogenic pulmonary edema (mean 6.2 ± 3.8 SD per dog). Artifacts were more numerous and widely distributed in dogs with congestive heart failure (P < 0.0001). In severe cases, B lines increased in number and became confluent. The locations of B line artifacts appeared consistent with locations of edema on radiographs. Findings from the current study supported the use of thoracic ultrasonography and detection of B lines as techniques for diagnosing cardiogenic pulmonary edema in dogs. © 2014 American College of Veterinary Radiology.

  1. Stochastic Frontier Model Approach for Measuring Stock Market Efficiency with Different Distributions

    PubMed Central

    Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation. PMID:22629352

  2. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    PubMed

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.
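
    The half-normal specification mentioned in this record can be written down compactly. As a rough illustration only, not the paper's panel Cobb-Douglas model, the sketch below maximizes the standard Aigner-Lovell-Schmidt log-likelihood for a cross-sectional production frontier with normal noise and half-normal inefficiency; the data, starting values and parameterization are invented for the example.

      # Sketch: cross-sectional stochastic frontier with half-normal inefficiency
      # (Aigner-Lovell-Schmidt likelihood). Illustrative only; not the panel model of the paper.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def neg_loglik(theta, y, X):
          k = X.shape[1]
          beta, log_sigma, log_lambda = theta[:k], theta[k], theta[k + 1]
          sigma, lam = np.exp(log_sigma), np.exp(log_lambda)  # sigma^2 = sigma_v^2 + sigma_u^2, lam = sigma_u / sigma_v
          eps = y - X @ beta                                  # eps = v - u for a production frontier
          ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
                + norm.logcdf(-eps * lam / sigma))
          return -ll.sum()

      # Invented example data: log output regressed on log inputs (Cobb-Douglas form).
      rng = np.random.default_rng(0)
      n = 200
      X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
      u = np.abs(rng.normal(0, 0.3, n))                       # half-normal inefficiency
      y = X @ np.array([1.0, 0.4, 0.5]) + rng.normal(0, 0.2, n) - u

      res = minimize(neg_loglik, x0=np.zeros(X.shape[1] + 2), args=(y, X), method="BFGS")
      print("beta:", res.x[:3], "sigma:", np.exp(res.x[3]), "lambda:", np.exp(res.x[4]))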

  3. Exploring the Estimation of Examinee Locations Using Multidimensional Latent Trait Models under Different Distributional Assumptions

    ERIC Educational Resources Information Center

    Jang, Hyesuk

    2014-01-01

    This study aims to evaluate a multidimensional latent trait model to determine how well the model works in various empirical contexts. Contrary to the assumption of these latent trait models that the traits are normally distributed, situations in which the latent trait is not shaped with a normal distribution may occur (Sass et al, 2008; Woods…

  4. SEM with Missing Data and Unknown Population Distributions Using Two-Stage ML: Theory and Its Application

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Lu, Laura

    2008-01-01

    This article provides the theory and application of the 2-stage maximum likelihood (ML) procedure for structural equation modeling (SEM) with missing data. The validity of this procedure does not require the assumption of a normally distributed population. When the population is normally distributed and all missing data are missing at random…

  5. Biostatistics Series Module 3: Comparing Groups: Numerical Variables.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests which are based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness of fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups, this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is the Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. These include Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, Kruskal-Wallis test as the nonparametric equivalent of ANOVA and the Friedman's test as the counterpart of repeated measures ANOVA.
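
    The decision path described in this module maps directly onto standard library calls. The sketch below, using made-up measurement vectors, shows the parametric tests and their nonparametric counterparts in scipy; it is illustrative only and is not taken from the module itself.

      # Sketch: parametric tests and their nonparametric counterparts (scipy), invented data.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      a = rng.normal(10, 2, 30)   # group A
      b = rng.normal(11, 2, 30)   # group B (independent of A)
      c = rng.normal(12, 2, 30)   # group C
      paired_before, paired_after = a, a + rng.normal(0.5, 1, 30)

      print(stats.shapiro(a))                              # normality check (Shapiro-Wilk)
      print(stats.ttest_1samp(a, popmean=10))              # one-sample t-test
      print(stats.ttest_ind(a, b))                         # unpaired / independent-samples t-test
      print(stats.ttest_rel(paired_before, paired_after))  # paired t-test
      print(stats.f_oneway(a, b, c))                       # one-way ANOVA for 3+ groups

      # Nonparametric counterparts when normality cannot be assumed:
      print(stats.mannwhitneyu(a, b))                      # vs. unpaired t-test
      print(stats.wilcoxon(paired_before, paired_after))   # vs. paired t-test
      print(stats.kruskal(a, b, c))                        # vs. one-way ANOVA
      print(stats.friedmanchisquare(a, b, c))              # vs. repeated-measures ANOVA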

  6. Asymptotic Distribution of the Likelihood Ratio Test Statistic for Sphericity of Complex Multivariate Normal Distribution.

    DTIC Science & Technology

    1981-08-01

    Likelihood ratio test statistic for sphericity of complex multivariate normal distribution. C. Fang, P. R. Krishnaiah, B. N. Nagarsenker; August 1981; technical report. ... and their applications in time series, the reader is referred to Krishnaiah (1976). Motivated by the applications in the area of inference on multiple ... for practical purposes. Here, we note that Krishnaiah, Lee and Chang (1976) approximated the null distribution of a certain power of the likelihood ...

  7. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
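
    A bare-bones version of the normal-distribution setting is sketched below: the threshold is chosen to minimize the expected misclassification cost given two normal marker distributions. This is only an illustration of the idea; it omits the sampling-uncertainty term of the published cost function, and the means, standard deviations, prevalence and unit costs are invented.

      # Sketch: cost-minimizing threshold for a normally distributed marker (illustrative assumptions).
      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize_scalar

      mu0, sd0 = 0.0, 1.0      # marker in non-diseased subjects (assumed)
      mu1, sd1 = 1.5, 1.2      # marker in diseased subjects (assumed)
      prev = 0.2               # disease prevalence (assumed)
      c_fp, c_fn = 1.0, 4.0    # costs of a false positive / false negative (assumed)

      def expected_cost(t):
          fp = (1 - prev) * (1 - norm.cdf(t, mu0, sd0)) * c_fp   # healthy classified as diseased
          fn = prev * norm.cdf(t, mu1, sd1) * c_fn               # diseased classified as healthy
          return fp + fn

      res = minimize_scalar(expected_cost, bounds=(mu0 - 3 * sd0, mu1 + 3 * sd1), method="bounded")
      print("optimal threshold:", res.x, "expected cost:", res.fun)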

  8. Multivariable normal tissue complication probability model-based treatment plan optimization for grade 2-4 dysphagia and tube feeding dependence in head and neck radiotherapy.

    PubMed

    Kierkels, Roel G J; Wopken, Kim; Visser, Ruurd; Korevaar, Erik W; van der Schaaf, Arjen; Bijl, Hendrik P; Langendijk, Johannes A

    2016-12-01

    Radiotherapy of the head and neck is challenged by the relatively large number of organs-at-risk close to the tumor. Biologically-oriented objective functions (OFs) could optimally distribute the dose among the organs-at-risk. We aimed to explore OFs based on multivariable normal tissue complication probability (NTCP) models for grade 2-4 dysphagia (DYS) and tube feeding dependence (TFD). One hundred head and neck cancer patients were studied. In addition to the clinical plan, two more plans (an OF-DYS plan and an OF-TFD plan) were optimized per patient. The NTCP models included up to four dose-volume parameters and other non-dosimetric factors. A fully automatic plan optimization framework was used to optimize the OF-NTCP-based plans. All OF-NTCP-based plans were reviewed and classified as clinically acceptable. On average, Δdose and ΔNTCP were small when comparing the OF-DYS plan, OF-TFD plan, and clinical plan. For 5% of patients, NTCP-TFD was reduced by >5% using OF-TFD-based planning compared with the OF-DYS plans. Plan optimization using NTCP-DYS- and NTCP-TFD-based objective functions resulted in clinically acceptable plans. For patients with considerable risk factors for TFD, the OF-TFD steered the optimizer toward dose distributions that led directly to slightly lower predicted NTCP-TFD values compared with the other studied plans. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Spatial event cluster detection using an approximate normal distribution.

    PubMed

    Torabi, Mahmoud; Rosychuk, Rhonda J

    2008-12-12

    In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events are large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies twelve out of thirteen of the same clusters as the compound Poisson approach, noting that the compound Poisson method detects twelve significant clusters in total. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence, more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
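
    The flavor of the normal approximation can be shown in a few lines: for a candidate area (possibly merged with its neighbours), compare the observed event count with its expectation using a z-score and a one-sided p-value. The counts, population and rate below are invented, a simple Poisson-like variance is assumed, and the multiple testing over areas and strata used in the real method is ignored.

      # Sketch: normal-approximation test of an elevated event count in one area (illustrative).
      import numpy as np
      from scipy.stats import norm

      def area_z_test(observed_events, population, overall_rate):
          """One-sided z-test: is the area's event count higher than expected by chance?"""
          expected = population * overall_rate
          variance = expected                 # Poisson-like variance assumption for the count
          z = (observed_events - expected) / np.sqrt(variance)
          return z, norm.sf(z)                # upper-tail p-value

      z, p = area_z_test(observed_events=140, population=50_000, overall_rate=0.002)
      print(f"z = {z:.2f}, one-sided p = {p:.4f}")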

  10. Smoothing of the bivariate LOD score for non-normal quantitative traits.

    PubMed

    Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John

    2005-12-30

    Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.

  11. The application of muscle wrapping to voxel-based finite element models of skeletal structures.

    PubMed

    Liu, Jia; Shi, Junfen; Fitton, Laura C; Phillips, Roger; O'Higgins, Paul; Fagan, Michael J

    2012-01-01

    Finite element analysis (FEA) is now used routinely to interpret skeletal form in terms of function in both medical and biological applications. To produce accurate predictions from FEA models, it is essential that the loading due to muscle action is applied in a physiologically reasonable manner. However, it is common for muscle forces to be represented as simple force vectors applied at a few nodes on the model's surface. It is certainly rare for any wrapping of the muscles to be considered, and yet wrapping not only alters the directions of muscle forces but also applies an additional compressive load from the muscle belly directly to the underlying bone surface. This paper presents a method of applying muscle wrapping to high-resolution voxel-based finite element (FE) models. Such voxel-based models have a number of advantages over standard (geometry-based) FE models, but the increased resolution with which the load can be distributed over a model's surface is particularly advantageous, reflecting more closely how muscle fibre attachments are distributed. In this paper, the development, application and validation of a muscle wrapping method is illustrated using a simple cylinder. The algorithm: (1) calculates the shortest path over the surface of a bone given the points of origin and ultimate attachment of the muscle fibres; (2) fits a Non-Uniform Rational B-Spline (NURBS) curve to the shortest path and calculates its tangent, normal vectors and curvatures so that normal and tangential components of the muscle force can be calculated and applied along the fibre; and (3) automatically distributes the loads between adjacent fibres to cover the bone surface with a fully distributed muscle force, as is observed in vivo. Finally, we present a practical application of this approach to the wrapping of the temporalis muscle around the cranium of a macaque skull.

  12. Symmetric co-movement between Malaysia and Japan stock markets

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2017-04-01

    The copula approach is a flexible tool known to capture linear, nonlinear, symmetric and asymmetric dependence between two or more random variables. It is often used as a co-movement measure between stock market returns. The information obtained from copulas, such as the level of association between financial markets during normal, bullish and bearish market phases, is useful for investment strategies and risk management. However, studies of the co-movement between the Malaysia and Japan markets are limited, especially using copulas. Hence, we aim to investigate the dependence structure between the Malaysia and Japan capital markets for the period spanning from 2000 to 2012. In this study, we showed that the bivariate normal distribution is not suitable as the bivariate distribution or to represent the dependence between the Malaysia and Japan markets. Instead, the Gaussian or normal copula was found to be a good fit to represent the dependence. From our findings, it can be concluded that simple distribution fitting such as the bivariate normal distribution does not suit financial time series data, whose characteristics are often leptokurtic. The nature of the data is treated by ARMA-GARCH with heavy-tailed distributions, and these can be associated with copula functions. Regarding the dependence structure between the Malaysia and Japan markets, the findings suggest that both markets co-move concurrently during normal periods.
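
    As a rough illustration of what fitting a Gaussian (normal) copula involves, the sketch below estimates the copula correlation from two return series by transforming ranks to normal scores. The return data are simulated stand-ins, and the ARMA-GARCH filtering step used in the study is omitted.

      # Sketch: estimating a Gaussian-copula correlation from two return series via normal scores.
      # Simulated returns; the ARMA-GARCH filtering used in the study is omitted here.
      import numpy as np
      from scipy.stats import norm, rankdata

      rng = np.random.default_rng(7)
      n = 1000
      z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
      returns_my = z[:, 0] * 0.010                # stand-in for Malaysian market returns
      returns_jp = z[:, 1] * 0.012                # stand-in for Japanese market returns

      def normal_scores(x):
          # Empirical CDF (ranks scaled to (0,1)), then inverse-normal transform.
          u = rankdata(x) / (len(x) + 1)
          return norm.ppf(u)

      rho = np.corrcoef(normal_scores(returns_my), normal_scores(returns_jp))[0, 1]
      print("Gaussian copula correlation estimate:", round(rho, 3))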

  13. Developing an operational rangeland water requirement satisfaction index

    USGS Publications Warehouse

    Senay, Gabriel B.; Verdin, James P.; Rowland, James

    2011-01-01

    Developing an operational water requirement satisfaction index (WRSI) for rangeland monitoring is an important goal of the famine early warning systems network. An operational WRSI has been developed for crop monitoring, but until recently a comparable WRSI for rangeland was not successful because of the extremely poor performance of the index when based on published crop coefficients (Kc) for rangelands. To improve the rangeland WRSI, we developed a simple calibration technique that adjusts the Kc values for rangeland monitoring using long-term rainfall distribution and reference evapotranspiration data. The premise for adjusting the Kc values is based on the assumption that a viable rangeland should exhibit above-average WRSI (values >80%) during a normal year. The normal year was represented by a median dekadal rainfall distribution (satellite rainfall estimate from 1996 to 2006). Similarly, a long-term average for potential evapotranspiration was used as input to the famine early warning systems network WRSI model in combination with soil-water-holding capacity data. A dekadal rangeland WRSI has been operational for east and west Africa since 2005. User feedback has been encouraging, especially with regard to the end-of-season WRSI anomaly products that compare the index's performance to ‘normal’ years. Currently, rangeland WRSI products are generated on a dekadal basis and posted for free distribution on the US Geological Survey early warning website at http://earlywarning.usgs.gov/adds/

  14. A comparative review of methods for comparing means using partially paired data.

    PubMed

    Guo, Beibei; Yuan, Ying

    2017-06-01

    In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.

  15. Distributed computation of graphics primitives on a transputer network

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change to more computation per node for every communication (perform more complex tasks on each node) may cause the desired increase in throughput.

  16. Gaussian Quadrature is an efficient method for the back-transformation in estimating the usual intake distribution when assessing dietary exposure.

    PubMed

    Dekkers, A L M; Slob, W

    2012-10-01

    In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications--on folate intake and fruit consumption--confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed. Even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
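
    The role GQ plays in the back-transformation can be sketched as follows: after a Box-Cox transformation, the usual intake of an individual with transformed-scale mean m is the expectation of the inverse transform over the within-person (normal) variation, which Gauss-Hermite quadrature evaluates with a handful of nodes. The transformation parameter, variance component and intake values below are invented, and this is not the published implementation.

      # Sketch: Gauss-Hermite back-transformation of a Box-Cox-transformed usual intake
      # (invented parameter values; not the published implementation).
      import numpy as np

      lam = 0.3            # Box-Cox parameter (assumed)
      sigma_within = 0.8   # within-person SD on the transformed scale (assumed)

      def inv_boxcox(y, lam):
          return np.power(lam * y + 1.0, 1.0 / lam) if lam != 0 else np.exp(y)

      def usual_intake(transformed_mean, sigma_w, lam, n_nodes=9):
          """E[ inv_boxcox(m + sigma_w * Z) ] with Z ~ N(0,1), via Gauss-Hermite quadrature."""
          nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
          values = inv_boxcox(transformed_mean + np.sqrt(2.0) * sigma_w * nodes, lam)
          return float(np.sum(weights * values) / np.sqrt(np.pi))

      # Back-transform a grid of individual means to the usual-intake (original) scale.
      transformed_means = np.array([0.5, 1.0, 1.5])
      print([round(usual_intake(m, sigma_within, lam), 2) for m in transformed_means])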

  17. Laser Raman detection for oral cancer based on a Gaussian process classification method

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Zhang, Chijun; Chen, He; Luo, Yusheng; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-06-01

    Oral squamous cell carcinoma is the most common neoplasm of the oral cavity. The incidence rate accounts for 80% of total oral cancer and shows an upward trend in recent years. It has a high degree of malignancy and is difficult to detect in terms of differential diagnosis, as a consequence of which the timing of treatment is always delayed. In this work, Raman spectroscopy was adopted to differentially diagnose oral squamous cell carcinoma and oral gland carcinoma. In total, 852 entries of raw spectral data which consisted of 631 items from 36 oral squamous cell carcinoma patients, 87 items from four oral gland carcinoma patients and 134 items from five normal people were collected by utilizing an optical method on oral tissues. The probability distribution of the datasets corresponding to the spectral peaks of the oral squamous cell carcinoma tissue was analyzed and the experimental result showed that the data obeyed a normal distribution. Moreover, the distribution characteristic of the noise was also in compliance with a Gaussian distribution. A Gaussian process (GP) classification method was utilized to distinguish the normal people and the oral gland carcinoma patients from the oral squamous cell carcinoma patients. The experimental results showed that all the normal people could be recognized. 83.33% of the oral squamous cell carcinoma patients could be correctly diagnosed and the remaining ones would be diagnosed as having oral gland carcinoma. For the classification process of oral gland carcinoma and oral squamous cell carcinoma, the correct ratio was 66.67% and the erroneously diagnosed percentage was 33.33%. The total sensitivity was 80% and the specificity was 100% with the Matthews correlation coefficient (MCC) set to 0.447213595. Considering the numerical results above, the application prospects and clinical value of this technique appear promising.
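
    For readers unfamiliar with GP classification, the sketch below shows the general shape of such a classifier on synthetic "spectral" features using scikit-learn; it is not the authors' pipeline, the feature extraction from Raman peaks is not reproduced, and the kernel and data dimensions are assumptions.

      # Sketch: Gaussian process classification on synthetic spectral features (scikit-learn).
      # Not the authors' pipeline; data and kernel settings are assumptions.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.gaussian_process.kernels import RBF
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Two synthetic classes (e.g., "normal" vs "carcinoma") in a 5-dimensional peak-intensity space.
      X0 = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
      X1 = rng.normal(loc=0.8, scale=1.2, size=(200, 5))
      X = np.vstack([X0, X1])
      y = np.array([0] * 200 + [1] * 200)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))
      print("class probabilities of first test sample:", clf.predict_proba(X_te[:1]))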

  18. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  19. Modified retrieval algorithm for three types of precipitation distribution using x-band synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Xie, Yanan; Zhou, Mingliang; Pan, Dengke

    2017-10-01

    The forward-scattering model is introduced to describe the normalized radar cross section (NRCS) response to precipitation observed with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the biggest difference is that the M-M algorithm is based on the modified regression empirical algorithm rather than a linear regression formula to retrieve the near-surface rainfall rate. The number of empirical parameters in the weighted integration is halved, and a smaller average relative error is obtained when the rainfall rate is less than 100 mm/h. Therefore, the algorithm proposed in this paper can obtain high-precision rainfall information.

  20. Decision Processes in Discrimination: Fundamental Misrepresentations of Signal Detection Theory

    NASA Technical Reports Server (NTRS)

    Balakrishnan, J. D.

    1998-01-01

    In the first part of this article, I describe a new approach to studying decision making in discrimination tasks that does not depend on the technical assumptions of signal detection theory (e.g., normality of the encoding distributions). Applying these new distribution-free tests to data from three experiments, I show that base rate and payoff manipulations had substantial effects on the participants' encoding distributions but no effect on their decision rules, which were uniformly unbiased in equal and unequal base rate conditions and in symmetric and asymmetric payoff conditions. In the second part of the article, I show that this seemingly paradoxical result is readily explained by the sequential sampling models of discrimination. I then propose a new, "model-free" test for response bias that seems to more properly identify both the nature and direction of the biases induced by the classical bias manipulations.

  1. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it permits extrapolation of the probability of propagation either from partial data or purely theoretically. The propagation can be assumed to be frictional, which allows the average propagation to be described by a line of kinetic energy corresponding to the loss of energy along the path. The loss of energy can also be modelled as a multiplicative process or as a purely random process. The distributions of rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are based either on theoretical considerations or on fitting to observations. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
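
    As a concrete illustration of fitting these candidate laws, the sketch below fits log-normal, inverse Gaussian and exponential distributions to a simulated set of travel distances and ranks them by AIC; the data and the fitting choices are assumptions, not the abstract's results.

      # Sketch: comparing candidate distributions for rockfall travel distances (simulated data).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      distances = rng.lognormal(mean=4.0, sigma=0.5, size=300)   # invented runout distances (m)

      candidates = {
          "lognormal": stats.lognorm,
          "inverse Gaussian": stats.invgauss,
          "exponential": stats.expon,
      }
      for name, dist in candidates.items():
          params = dist.fit(distances)
          loglik = np.sum(dist.logpdf(distances, *params))
          aic = 2 * len(params) - 2 * loglik
          print(f"{name:>17s}: AIC = {aic:.1f}")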

  2. EEG based topography analysis in string recognition task

    NASA Astrophysics Data System (ADS)

    Ma, Xiaofei; Huang, Xiaolin; Shen, Yuxiaotong; Qin, Zike; Ge, Yun; Chen, Ying; Ning, Xinbao

    2017-03-01

    Vision perception and recognition is a complex process, during which different parts of brain are involved depending on the specific modality of the vision target, e.g. face, character, or word. In this study, brain activities in string recognition task compared with idle control state are analyzed through topographies based on multiple measurements, i.e. sample entropy, symbolic sample entropy and normalized rhythm power, extracted from simultaneously collected scalp EEG. Our analyses show that, for most subjects, both symbolic sample entropy and normalized gamma power in string recognition task are significantly higher than those in idle state, especially at locations of P4, O2, T6 and C4. It implies that these regions are highly involved in string recognition task. Since symbolic sample entropy measures complexity, from the perspective of new information generation, and normalized rhythm power reveals the power distributions in frequency domain, complementary information about the underlying dynamics can be provided through the two types of indices.
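
    Of the measures listed, sample entropy is the least standard to compute by hand; a compact reference sketch is given below (O(N^2), suitable only for short EEG segments). The embedding dimension and tolerance are typical choices, not the values used in the study, and this simplified variant is not the authors' code.

      # Sketch: sample entropy of a 1-D signal (O(N^2); fine for short EEG segments).
      # Embedding dimension m and tolerance r are typical choices, not the study's settings.
      import numpy as np

      def sample_entropy(x, m=2, r=None):
          x = np.asarray(x, dtype=float)
          if r is None:
              r = 0.2 * np.std(x)

          def count_matches(mm):
              # Embed the signal and count template pairs within tolerance r (Chebyshev distance).
              emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
              dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
              n_templates = len(emb)
              return (np.sum(dist <= r) - n_templates) / 2.0   # exclude self-matches

          b, a = count_matches(m), count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      rng = np.random.default_rng(0)
      print(sample_entropy(rng.normal(size=500)))                     # higher for irregular signals
      print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 500))))  # lower for regular signals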

  3. Localization of sialic acid in kidney glomeruli: regionalization in the podocyte plasma membrane and loss in experimental nephrosis.

    PubMed

    Charest, P M; Roth, J

    1985-12-01

    Sialic acid residues were localized by electron microscopy in renal glomeruli of normal and puromycin-treated rats with a cytochemical technique that utilized the Limax flavus lectin. In Lowicryl K4M thin sections from normal rats, sialic acid residues were found along the plasma membrane of the various glomerular cell types and in the glomerular basement membrane as well as the mesangial matrix. In NaDodSO4/PAGE, sialic acid residues of normal glomeruli were mainly confined to a 140-kDa protein previously identified as podocalyxin. The distribution of sialic acid residues in the podocyte plasma membrane was found to be remarkably regionalized. Based on the differential labeling intensity, three plasma membrane domains could be defined: the foot process base, the foot process region above the slit diaphragm, and the body of podocytes. Cytochemical and biochemical analysis of glomeruli from puromycin-treated rats showed a loss of sialic acid residues from glomerular sialoglycoconjugates indicating a perturbated glycosylation.

  4. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  5. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
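
    A minimal numpy sketch of the underlying methodology (not powerbox's actual API): filter white noise in Fourier space to impose a chosen power spectrum, then exponentiate the Gaussian field to obtain a log-normal overdensity field. The k^-2 spectrum, grid size and box length are arbitrary placeholders.

    ```python
    import numpy as np

    def gaussian_field_2d(n, boxlength, power=lambda k: k**-2.0, seed=0):
        """Gaussian random field with an isotropic spectrum P(k), via Fourier filtering."""
        rng = np.random.default_rng(seed)
        kfreq = np.fft.fftfreq(n, d=boxlength / n) * 2 * np.pi
        kx, ky = np.meshgrid(kfreq, kfreq, indexing="ij")
        k = np.sqrt(kx**2 + ky**2)
        amplitude = np.zeros_like(k)
        nonzero = k > 0
        amplitude[nonzero] = np.sqrt(power(k[nonzero]))   # zero at k=0 removes the mean
        noise = rng.standard_normal((n, n))
        field = np.fft.ifft2(np.fft.fft2(noise) * amplitude).real
        return field / field.std()

    # Log-normal "mock" overdensity: exponentiate the Gaussian field so that
    # delta >= -1 and the mean overdensity is approximately zero.
    delta_g = gaussian_field_2d(256, boxlength=100.0)
    sigma2 = delta_g.var()
    delta_ln = np.exp(delta_g - sigma2 / 2.0) - 1.0
    ```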

  6. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying the robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF) estimation, which makes no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic, and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to violation of the normality assumption when the data had either symmetric and platykurtic distributions or non-symmetric distributions with zero kurtosis.

  7. Identification of walking human model using agent-based modelling

    NASA Astrophysics Data System (ADS)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

    The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, the research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that a normal distribution with mean μ = 2.85 Hz and standard deviation σ = 0.34 Hz can describe the natural frequency of the human SDOF model. Similarly, a normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic and external forces, and in simulating different mechanisms of human-structure and human-environment interaction at the same time.
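
    The reported distributions can be used directly to populate an agent-based simulation; the sketch below samples SDOF parameters for a crowd of walking agents, with the modal mass as a placeholder assumption not taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_pedestrians = 50

    # Natural frequency [Hz] and damping ratio of the walking-human SDOF model,
    # sampled from the normal distributions reported in the abstract.
    f_n  = rng.normal(2.85, 0.34, n_pedestrians)
    zeta = rng.normal(0.295, 0.047, n_pedestrians)

    # With an assumed (hypothetical) modal mass, convert to stiffness and damping
    # of each mass-spring-damper agent: k = m (2*pi*f_n)^2, c = 2*zeta*sqrt(k*m).
    m = 70.0  # kg, placeholder value -- not taken from the study
    k = m * (2 * np.pi * f_n) ** 2
    c = 2 * zeta * np.sqrt(k * m)
    ```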

  8. Consequences of additional use of PET information for target volume delineation and radiotherapy dose distribution for esophageal cancer.

    PubMed

    Muijs, Christina T; Schreurs, Liesbeth M; Busz, Dianne M; Beukema, Jannet C; van der Borden, Arnout J; Pruim, Jan; Van der Jagt, Eric J; Plukker, John Th; Langendijk, Johannes A

    2009-12-01

    To determine the consequences of target volume (TV) modifications, based on the additional use of PET information, on radiation planning, assuming PET/CT-imaging represents the true extent of the tumour. For 21 patients with esophageal cancer, two separate TVs were retrospectively defined based on CT (CT-TV) and co-registered PET/CT images (PET/CT-TV). Two 3D-CRT plans (prescribed dose 50.4 Gy) were constructed to cover the corresponding TVs. Subsequently, these plans were compared for target coverage, normal tissue dose-volume histograms and the corresponding normal tissue complication probability (NTCP) values. The addition of PET led to the modification of CT-TV by at least 10% in 12 of 21 patients (57%) (reduction in 9, enlargement in 3). PET/CT-TV was inadequately covered by the CT-based treatment plan in 8 patients (36%). Treatment plan modifications resulted in significant changes (p<0.05) in dose distributions to heart and lungs. Corresponding changes in NTCP values ranged from -3% to +2% for radiation pneumonitis and from -0.2% to +1.2% for cardiac mortality. This study demonstrated that TVs based on CT might exclude PET-avid disease. The consequences are underdosing and thereby possibly ineffective treatment. Moreover, the addition of PET in radiation planning might result in clinically important changes in NTCP.
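
    The abstract does not state which NTCP model was used; purely as an illustration of how such values are computed, the sketch below evaluates the widely used Lyman-Kutcher-Burman (LKB) model from a differential dose-volume histogram, with made-up parameter values.

    ```python
    import numpy as np
    from scipy.stats import norm

    def lkb_ntcp(dose_bins_gy, volume_fractions, td50, m, a):
        """Lyman-Kutcher-Burman NTCP from a differential DVH.
        a = 1/n is the volume-effect parameter; td50 and m are organ-specific."""
        v = np.asarray(volume_fractions, dtype=float)
        v = v / v.sum()                                            # normalize the DVH
        geud = np.sum(v * np.asarray(dose_bins_gy) ** a) ** (1.0 / a)
        t = (geud - td50) / (m * td50)
        return norm.cdf(t)

    # Toy example with invented parameters (illustrative only, not clinical values)
    print(lkb_ntcp([5, 15, 25, 45], [0.4, 0.3, 0.2, 0.1], td50=30.0, m=0.35, a=1.0))
    ```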

  9. Confidence bounds and hypothesis tests for normal distribution coefficients of variation

    Treesearch

    Steve Verrill; Richard A. Johnson

    2007-01-01

    For normally distributed populations, we obtain confidence bounds on a ratio of two coefficients of variation, provide a test for the equality of k coefficients of variation, and provide confidence bounds on a coefficient of variation shared by k populations.
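
    The exact bounds and tests are derived in the report; as a rough, distribution-light alternative for comparison, a nonparametric bootstrap can approximate a confidence interval for the ratio of two coefficients of variation (illustrative sketch on synthetic normal samples).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.normal(50.0, 5.0, size=40)   # synthetic sample 1
    y = rng.normal(80.0, 12.0, size=35)  # synthetic sample 2

    def cv(a):
        return a.std(ddof=1) / a.mean()

    boot = []
    for _ in range(5000):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        boot.append(cv(xb) / cv(yb))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"CV ratio = {cv(x)/cv(y):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
    ```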

  10. An Empirical Bayes Approach to Mantel-Haenszel DIF Analysis.

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Thayer, Dorothy T.; Lewis, Charles

    1999-01-01

    Developed an empirical Bayes enhancement to Mantel-Haenszel (MH) analysis of differential item functioning (DIF) in which it is assumed that the MH statistics are normally distributed and that the prior distribution of underlying DIF parameters is also normal. (Author/SLD)
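
    Under the stated normal/normal assumptions, the empirical Bayes step amounts to standard shrinkage of each item's MH DIF statistic toward the prior mean; the sketch below uses hypothetical statistics and standard errors and a simple method-of-moments prior, which may differ from the authors' estimation details.

    ```python
    import numpy as np

    # Hypothetical MH D-DIF statistics and their standard errors for 6 items.
    mh = np.array([-1.2, -0.4, 0.1, 0.3, 0.8, 1.5])
    se = np.array([0.50, 0.35, 0.40, 0.30, 0.45, 0.60])

    # Normal prior on the underlying DIF parameters, estimated from the observed
    # statistics (method of moments: subtract the average sampling variance).
    mu0 = mh.mean()
    tau2 = max(mh.var(ddof=1) - np.mean(se**2), 0.0)

    # Posterior mean for each item: precision-weighted compromise between the
    # observed statistic and the prior mean (shrinkage toward mu0).
    w = tau2 / (tau2 + se**2)
    posterior_mean = w * mh + (1 - w) * mu0
    print(posterior_mean)
    ```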

  11. Categorization of hyperspectral information (HSI) based on the distribution of spectra in hyperspace

    NASA Astrophysics Data System (ADS)

    Resmini, Ronald G.

    2003-09-01

    Hyperspectral information (HSI) data are commonly categorized by a description of the dominant physical geographic background captured in the image cube. In other words, HSI categorization is commonly based on a cursory, visual assessment of whether the data are of desert, forest, urban, littoral, jungle, alpine, etc., terrains. Additionally, often the design of HSI collection experiments is based on the acquisition of data of the various backgrounds or of objects of interest within the various terrain types. These data are for assessing and quantifying algorithm performance as well as for algorithm development activities. Here, results of an investigation into the validity of the backgrounds-driven mode of characterizing the diversity of hyperspectral data are presented. HSI data are described quantitatively, in the space where most algorithms operate: n-dimensional (n-D) hyperspace, where n is the number of bands in an HSI data cube. Nineteen metrics designed to probe hyperspace are applied to 14 HYDICE HSI data cubes that represent nine different backgrounds. Each of the 14 sets (one for each HYDICE cube) of 19 metric values was analyzed for clustering. With the present set of data and metrics, there is no clear, unambiguous break-out of metrics based on the nine different geographic backgrounds. The break-outs clump seemingly unrelated data types together; e.g., littoral and urban/residential. Most metrics are normally distributed and indicate no clustering; one metric is one outlier away from normal (i.e., two clusters); and five are comprised of two distributions (i.e., two clusters). Overall, there are three different break-outs that do not correspond to conventional background categories. Implications of these preliminary results are discussed as are recommendations for future work.

  12. SU-E-T-625: Robustness Evaluation and Robust Optimization of IMPT Plans Based on Per-Voxel Standard Deviation of Dose Distributions.

    PubMed

    Liu, W; Mohan, R

    2012-06-01

    Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed - the nominal one, and one each for ± setup uncertainties along x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness, but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD Anderson Cancer Center, and MD Anderson’s cancer center support grant CA016672. © 2012 American Association of Physicists in Medicine.
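
    A schematic sketch of the evaluation step (not the authors' code): given the nine dose distributions, compute the per-voxel SD within a structure and summarize it as an SD-volume histogram. Array shapes and dose values below are invented for illustration.

    ```python
    import numpy as np

    def sd_volume_histogram(dose_scenarios, structure_mask, sd_bins):
        """dose_scenarios: (9, nvox) doses for the nominal and 8 perturbed plans.
        Returns the fraction of structure volume whose per-voxel SD exceeds each bin edge."""
        sd = dose_scenarios[:, structure_mask].std(axis=0)      # per-voxel SD in the structure
        volume_fraction = [(sd >= b).mean() for b in sd_bins]   # SVH ordinates
        return np.array(volume_fraction)

    # Toy example: 9 scenarios x 1000 voxels, with a 300-voxel "CTV" mask
    rng = np.random.default_rng(3)
    doses = 60.0 + rng.normal(0.0, 1.5, size=(9, 1000))
    ctv = np.zeros(1000, dtype=bool)
    ctv[:300] = True
    print(sd_volume_histogram(doses, ctv, sd_bins=np.linspace(0.0, 3.0, 7)))
    ```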

  13. Damage identification in highway bridges using distribution factors

    NASA Astrophysics Data System (ADS)

    Gangone, Michael V.; Whelan, Matthew J.

    2017-04-01

    The U.S. infrastructure system is well behind the needs of the 21st century and in dire need of improvements. The American Society of Civil Engineers (ASCE) graded America's Infrastructure as a "D+" in its recent 2013 Report Card. Bridges are a major component of the infrastructure system and were awarded a "C+". Nearly 25 percent of the nation's bridges are categorized as deficient by the Federal Highway Administration (FHWA). Most bridges were designed with an expected service life of roughly 50 years, and today the average age of a bridge is 42 years. Finding alternative methods of condition assessment that capture the true performance of the bridge is of high importance. This paper discusses the monitoring of two multi-girder/stringer bridges at different ages of service life. Normal strain measurements were used to calculate the load distribution factor at the midspan of the bridge under controlled loading conditions. Controlled progressive damage was applied to one of the superstructures to determine if the damage could be detected using the distribution factor. An uncertainty analysis, based on the accuracy and precision of the normal strain measurement, was undertaken to determine how effectively the distribution factor can be used as a damage indicator. The analysis indicates that this load testing parameter may be an effective measure for detecting damage.
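
    For a multi-girder cross-section, the load distribution factor of girder i is commonly taken as its share of the total measured midspan strain; a minimal sketch with hypothetical strain readings (not the paper's data) is given below.

    ```python
    import numpy as np

    # Hypothetical midspan normal strains (microstrain) measured on five girders
    # under a controlled truck position, before and after induced damage.
    strain_baseline = np.array([112.0, 145.0, 160.0, 139.0, 104.0])
    strain_damaged  = np.array([118.0, 150.0, 138.0, 148.0, 112.0])

    def distribution_factors(strains):
        # Each girder's fraction of the total measured strain
        return strains / strains.sum()

    df0 = distribution_factors(strain_baseline)
    df1 = distribution_factors(strain_damaged)
    print("change in distribution factor per girder:", df1 - df0)
    ```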

  14. A Role for the X Chromosome in Sex Differences in Variability in General Intelligence?

    PubMed

    Johnson, Wendy; Carothers, Andrew; Deary, Ian J

    2009-11-01

    There is substantial evidence that males are more variable than females in general intelligence. In recent years, researchers have presented this as a reason that, although there is little, if any, mean sex difference in general intelligence, males tend to be overrepresented at both ends of its overall distribution. Part of the explanation could be the presence of genes on the X chromosome related both to syndromal disorders involving mental retardation and to normally occurring population variation in general intelligence. Genes on the X chromosome appear to be overrepresented among genes with known involvement in mental retardation, which is consistent with a model we developed of the population distribution of general intelligence as a mixture of two normal distributions. Using this model, we explored the expected ratios of males to females at various points in the distribution and estimated the proportion of variance in general intelligence potentially due to genes on the X chromosome. These estimates provide clues to the extent to which biologically based sex differences could be manifested in the environment as sex differences in displayed intellectual abilities. We discuss these observations in the context of sex differences in specific cognitive abilities and evolutionary theories of sexual selection. © 2009 Association for Psychological Science.

  15. Distribution pattern of urine albumin creatinine ratio and the prevalence of high-normal levels in untreated asymptomatic non-diabetic hypertensive patients.

    PubMed

    Ohmaru, Natsuki; Nakatsu, Takaaki; Izumi, Reishi; Mashima, Keiichi; Toki, Misako; Kobayashi, Asako; Ogawa, Hiroko; Hirohata, Satoshi; Ikeda, Satoru; Kusachi, Shozo

    2011-01-01

    Even high-normal albuminuria is reportedly associated with cardiovascular events. We determined the urine albumin creatinine ratio (UACR) in spot urine samples and analyzed the UACR distribution and the prevalence of high-normal levels. The UACR was determined using immunoturbidimetry in 332 untreated asymptomatic non-diabetic Japanese patients with hypertension and in 69 control subjects. The microalbuminuria and macroalbuminuria levels were defined as a UACR ≥30 and <300 µg/mg·creatinine and a UACR ≥300 µg/mg·creatinine, respectively. The distribution patterns showed a highly skewed distribution for the lower levels, and a common logarithmic transformation produced a close fit to a Gaussian distribution with median, 25th and 75th percentile values of 22.6, 13.5 and 48.2 µg/mg·creatinine, respectively. When a high-normal UACR was set at >20 to <30 µg/mg·creatinine, 19.9% (66/332) of the hypertensive patients exhibited a high-normal UACR. Microalbuminuria and macroalbuminuria were observed in 36.1% (120/332) and 2.1% (7/332) of the patients, respectively. UACR was significantly correlated with the systolic and diastolic blood pressures and the pulse pressure. A stepwise multivariate analysis revealed that these pressures as well as age were independent factors that increased UACR. The UACR distribution exhibited a highly skewed pattern, with approximately 60% of untreated, non-diabetic hypertensive patients exhibiting a high-normal or larger UACR. Both hypertension and age are independent risk factors that increase the UACR. The present study indicated that a considerable percentage of patients require anti-hypertensive drugs with antiproteinuric effects at the start of treatment.

  16. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    NASA Astrophysics Data System (ADS)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the two thousand leading or strongest publicly traded companies in the world (G-2000) based on four independent metrics: sales or revenues, profits, assets and market value. Every one of these wealth metrics yields particular information on the corporate size or wealth size of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part, and a Pareto power-law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, then the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.

  17. Evidence for the Gompertz curve in the income distribution of Brazil 1978-2005

    NASA Astrophysics Data System (ADS)

    Moura, N. J., Jr.; Ribeiro, M. B.

    2009-01-01

    This work presents an empirical study of the evolution of the personal income distribution in Brazil. Yearly samples available from 1978 to 2005 were studied and evidence was found that the complementary cumulative distribution of personal income for 99% of the economically less favorable population is well represented by a Gompertz curve of the form G(x) = exp [exp (A-Bx)], where x is the normalized individual income. The complementary cumulative distribution of the remaining 1% richest part of the population is well represented by a Pareto power law distribution P(x) = βx^(-α). This result means that, similarly to other countries, Brazil’s income distribution is characterized by a well-defined two-class system. The parameters A, B, α, β were determined by a mixture of boundary conditions, normalization and fitting methods for every year in the time span of this study. Since the Gompertz curve is characteristic of growth models, its presence here suggests that these patterns in income distribution could be a consequence of the growth dynamics of the underlying economic system. In addition, we found that the percentage share of both the Gompertzian and Paretian components relative to the total income shows an approximate cycling pattern with periods of about 4 years and whose maximum and minimum peaks in each component alternate at about every 2 years. This finding suggests that the growth dynamics of Brazil’s economic system might possibly follow Goodwin-type class-model dynamics based on the application of the Lotka-Volterra equation to economic growth and cycle.

  18. Mass and number size distributions of emitted particulates at five important operation units in a hazardous industrial waste incineration plant.

    PubMed

    Lin, Chi-Chi; Huang, Hsiao-Lin; Hsiao, Wen-Yuan

    2016-01-01

    Past studies indicated that particulates generated by waste incineration contain various hazardous compounds. The aerosol characteristics are very important for particulate hazard control and workers' protection. This study explores the detailed characteristics of emitted particulates from each important operation unit in a rotary kiln-based hazardous industrial waste incineration plant. A dust size analyzer (Grimm 1.109) and a scanning mobility particle sizer (SMPS) were used to measure the aerosol mass concentration, mass size distribution, and number size distribution at five operation units (S1-S5) during periods of normal operation, furnace shutdown, and annual maintenance. The location with the highest measured PM10 concentration was the area of fly ash discharge from the air pollution control equipment (S5) during the period of normal operation. Fine particles (PM2.5) constituted the majority of the emitted particles from the incineration plant. The mass size distributions made it clear that the aerosols responsible for the increase in particulate mass resulting from work activities were mostly greater than 1.5 μm, whereas the number size distributions showed that the particulates responsible for the increase in particulate number concentrations from work activities were mostly in the sub-micrometer range. The process of discharging fly ash from air pollution control equipment can significantly increase the emission of nanoparticles. The mass concentrations and size distributions of emitted particulates were different at each operation unit. This information is valuable for managers in adopting appropriate strategies to reduce particulate emissions and the associated worker exposure.

  19. Effect of non-normality on test statistics for one-way independent groups designs.

    PubMed

    Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R

    2012-02-01

    The data obtained from one-way independent groups designs are typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied to the usual least squares estimators of central tendency and variability, and the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.

  20. Nonpoint Source Solute Transport Normal to Aquifer Bedding in Heterogeneous, Markov Chain Random Fields

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Harter, T.; Sivakumar, B.

    2005-12-01

    Facies-based geostatistical models have become important tools for the stochastic analysis of flow and transport processes in heterogeneous aquifers. However, little is known about the dependency of these processes on the parameters of facies-based geostatistical models. This study examines the nonpoint source solute transport normal to the major bedding plane in the presence of interconnected high conductivity (coarse-textured) facies in the aquifer medium and the dependence of the transport behavior upon the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute travel time probability distribution functions (pdfs) for solute flux from the water table to the bottom boundary (production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models with horizontal to vertical facies mean length anisotropy ratios, ek, from 25:1 to 300:1, and with a wide range of facies volume proportions (e.g., from 5% to 95% coarse-textured facies). Predictions of travel time pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer, the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and - to a lesser degree - the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, travel time pdfs are not log-normally distributed as is often assumed. Also, macrodispersive behavior (variance of the travel time pdf) was found not to be a unique function of the conductivity variance. The skewness of the travel time pdf varied from negatively skewed to strongly positively skewed within the parameter range examined. We also show that the Markov chain approach may give significantly different travel time pdfs when compared to the more commonly used Gaussian random field approach even though the first and second order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport.

  1. MicroRaman Spectroscopy and Raman Imaging of Basal Cell Carcinoma

    NASA Astrophysics Data System (ADS)

    Short, M. A.; Zeng, H.; Lui, H.

    2005-03-01

    We have measured the Raman spectra of normal and cancerous skin tissues using a confocal microRaman spectrograph with a sub-micron spatial resolution. We found that the Raman spectrum of a cell nucleolus is different from the spectra measured outside the nucleolus and considerably different from those measured outside the nucleus. In addition, we found significant spectroscopic differences between normal and cancer-bearing sites in the dermis region. In order to utilize these differences for non-invasive skin cancer diagnosis, we have developed a Raman imaging system that clearly demonstrates the structure, location and distribution of cells in unstained skin biopsy samples. Our method is expected to be useful for the detection and characterization of skin cancer based on the known distinct cellular differences between normal and malignant skin.

  2. Correlation of tissue-plasma partition coefficients between normal tissues and subcutaneous xenografts of human tumor cell lines in mouse as a prediction tool of drug penetration in tumors.

    PubMed

    Poulin, Patrick; Hop, Cornelis Eca; Salphati, Laurent; Liederer, Bianca M

    2013-04-01

    Understanding drug distribution and accumulation in tumors would be informative in the assessment of efficacy in targeted therapy; however, existing methods for predicting tissue drug distribution focus on normal tissues and do not incorporate tumors. The main objective of this study was to describe the relationships between tissue-plasma concentration ratios (Kp) of normal tissues and those of subcutaneous xenograft tumors under nonsteady-state conditions, and establish regression equations that could potentially be used for the prediction of drug levels in several human tumor xenografts in mouse, based solely on a Kp value determined in a normal tissue (e.g., muscle). A dataset of 17 compounds was collected from the literature and from Genentech. Tissue and plasma concentration data in mouse were obtained following oral gavage or intraperitoneal administration. Linear regression analyses were performed between Kp values in several normal tissues (muscle, lung, liver, or brain) and those in human tumor xenografts (CL6, EBC-1, HT-29, PC3, U-87, MCF-7-neo-Her2, or BT474M1.1). The tissue-plasma ratios in normal tissues reasonably correlated with the tumor-plasma ratios in CL6, EBC-1, HT-29, U-87, BT474M1.1, and MCF-7-neo-Her2 xenografts (r² in the range 0.62-1) but not with the PC3 xenograft. In general, muscle and lung exhibited the strongest correlation with tumor xenografts, followed by liver. Regression coefficients from brain were low, except between brain and the glioblastoma U-87 xenograft (r² in the range 0.62-0.94). Furthermore, reasonably strong correlations were observed between muscle and lung and between muscle and liver (r² in the range 0.67-0.96). The slopes of the regressions differed depending on the class of drug (strong vs. weak base) and type of tissue (brain vs. other tissues and tumors). Overall, this study will contribute to our understanding of tissue-plasma partition coefficients for tumors and facilitate the use of physiologically based pharmacokinetics (PBPK) modeling for chemotherapy in oncology studies. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci 102:1355-1369, 2013.
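
    A sketch of the type of regression described, on synthetic Kp values rather than the study's dataset; regressing log-transformed Kp values is a modelling choice made here to keep compounds spanning orders of magnitude comparable, not necessarily the authors' procedure.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic muscle and tumour-xenograft tissue:plasma ratios (Kp) for a few
    # hypothetical compounds; these numbers are illustrative only.
    kp_muscle = np.array([0.4, 0.9, 1.6, 2.8, 5.1, 7.9, 12.0])
    kp_tumour = np.array([0.5, 1.1, 1.9, 3.5, 5.8, 9.2, 14.5])

    # Ordinary least squares on log10-transformed Kp values
    res = stats.linregress(np.log10(kp_muscle), np.log10(kp_tumour))
    print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, r^2={res.rvalue**2:.2f}")
    ```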

  3. Distribution of collagenous colitis: utility of flexible sigmoidoscopy.

    PubMed Central

    Tanaka, M; Mazzoleni, G; Riddell, R H

    1992-01-01

    We investigated the distribution of the collagen band in 33 patients with collagenous colitis to estimate the likelihood of the disease being diagnosed in biopsy specimens from the left side of the colon, such as those obtained using flexible sigmoidoscopy. To be included in this study, patients had a subepithelial collagen band greater than or equal to 10 microns, an increase in chronic inflammatory cells in the same specimen, and diarrhoea for which there was no other apparent cause. In 17 patients undergoing full colonoscopy with a thickened collagen band, collagenous colitis was frequently patchy, even though overall the thickened collagen band was almost equally distributed throughout the colon. Rectal biopsy specimens showed a normal collagen band in 73% of patients, while a thickened collagen band was found in 82% of patients in at least one specimen from the left side of the colon. Three patients had a thickened collagen band only in the caecum. In three of eight rectal biopsy specimens with a normal collagen band there was no mucosal inflammation to raise the possibility of proximal disease, although all but one specimen with a normal collagen band from the sigmoid and descending colon were inflamed. Rectal biopsy alone is therefore a relatively poor method of making the diagnosis. Flexible sigmoidoscopy with multiple biopsy specimens from several sites is a reasonable initial investigation but not sufficient to exclude collagenous colitis when based on the presence of a thickened collagen band alone. Should left-sided biopsy specimens show a normal collagen band but an inflamed mucosa, total colonoscopy with multiple specimens including the caecum may be required to establish the diagnosis. PMID:1740280

  4. Sketching Curves for Normal Distributions--Geometric Connections

    ERIC Educational Resources Information Center

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…

  5. On generalisations of the log-Normal distribution by means of a new product definition in the Kapteyn process

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, Sílvio M.

    2012-07-01

    We discuss the modification of the Kapteyn multiplicative process using the q-product of Borges [E.P. Borges, A possible deformed algebra and calculus inspired in nonextensive thermostatistics, Physica A 340 (2004) 95]. Depending on the value of the index q a generalisation of the log-Normal distribution is yielded. Namely, the distribution increases the tail for small (when q<1) or large (when q>1) values of the variable upon analysis. The usual log-Normal distribution is retrieved when q=1, which corresponds to the traditional Kapteyn multiplicative process. The main statistical features of this distribution as well as related random number generators and tables of quantiles of the Kolmogorov-Smirnov distance are presented. Finally, we illustrate the validity of this scenario by describing a set of variables of biological and financial origin.

  6. Tools for Basic Statistical Analysis

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
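
    The "Normal Distribution from two Data Points" calculation can be reproduced in a few lines: two (value, cumulative probability) pairs determine μ and σ through the inverse standard-normal CDF. This is a sketch of the idea, not the spreadsheet's code.

    ```python
    from scipy.stats import norm

    def normal_from_two_points(x1, p1, x2, p2):
        """Mean and SD of the normal distribution passing through two
        (value, cumulative probability) points."""
        z1, z2 = norm.ppf(p1), norm.ppf(p2)
        sigma = (x2 - x1) / (z2 - z1)
        mu = x1 - sigma * z1
        return mu, sigma

    # e.g. 10th percentile at 80 and 90th percentile at 120
    mu, sigma = normal_from_two_points(80.0, 0.10, 120.0, 0.90)
    print(mu, sigma)   # 100.0, ~15.6
    ```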

  7. How log-normal is your country? An analysis of the statistical distribution of the exported volumes of products

    NASA Astrophysics Data System (ADS)

    Annunziata, Mario Alberto; Petri, Alberto; Pontuale, Giorgio; Zaccaria, Andrea

    2016-10-01

    We have considered the statistical distributions of the volumes of 1131 products exported by 148 countries. We have found that the form of these distributions is not unique but heavily depends on the level of development of the nation, as expressed by macroeconomic indicators like GDP, GDP per capita, total export and a recently introduced measure for countries' economic complexity called fitness. We have identified three major classes: a) an incomplete log-normal shape, truncated on the left side, for the less developed countries, b) a complete log-normal, with a wider range of volumes, for nations characterized by intermediate economy, and c) a strongly asymmetric shape for countries with a high degree of development. Finally, the log-normality hypothesis has been checked for the distributions of all the 148 countries through different tests, Kolmogorov-Smirnov and Cramér-Von Mises, confirming that it cannot be rejected only for the countries of intermediate economy.
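
    The test battery described can be approximated with scipy: take logarithms of the export volumes and test the result against a fitted normal with Kolmogorov-Smirnov and Cramér-von Mises statistics. Note that estimating μ and σ from the same data makes the plain p-values approximate; the volumes below are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    volumes = rng.lognormal(mean=10.0, sigma=2.0, size=500)   # synthetic export volumes

    logv = np.log(volumes)
    mu, sigma = logv.mean(), logv.std(ddof=1)

    ks = stats.kstest(logv, "norm", args=(mu, sigma))
    cvm = stats.cramervonmises(logv, "norm", args=(mu, sigma))
    print(f"KS p={ks.pvalue:.3f}, Cramér-von Mises p={cvm.pvalue:.3f}")
    ```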

  8. Using monolingual neuropsychological test norms with bilingual Hispanic americans: application of an individual comparison standard.

    PubMed

    Gasquoine, Philip Gerard; Gonzalez, Cassandra Dayanira

    2012-05-01

    Conventional neuropsychological norms developed for monolinguals likely overestimate normal performance in bilinguals on language but not visual-perceptual format tests. This was studied by comparing neuropsychological false-positive rates using the 50th percentile of conventional norms and individual comparison standards (Picture Vocabulary or Matrix Reasoning scores) as estimates of preexisting neuropsychological skill level against the number expected from the normal distribution for a consecutive sample of 56 neurologically intact, bilingual, Hispanic Americans. Participants were tested in separate sessions in Spanish and English in the counterbalanced order on La Bateria Neuropsicologica and the original English language tests on which this battery was based. For language format measures, repeated-measures multivariate analysis of variance showed that individual estimates of preexisting skill level in English generated the mean number of false positives most approximate to that expected from the normal distribution, whereas the 50th percentile of conventional English language norms did the same for visual-perceptual format measures. When using conventional Spanish or English monolingual norms for language format neuropsychological measures with bilingual Hispanic Americans, individual estimates of preexisting skill level are recommended over the 50th percentile.

  9. Normal and tumoral melanocytes exhibit q-Gaussian random search patterns.

    PubMed

    da Silva, Priscila C A; Rosembach, Tiago V; Santos, Anésia A; Rocha, Márcio S; Martins, Marcelo L

    2014-01-01

    In multicellular organisms, cell motility is central in all morphogenetic processes, tissue maintenance, wound healing and immune surveillance. Hence, failures in its regulation potentiate numerous diseases. Here, cell migration assays on plastic 2D surfaces were performed using normal (Melan A) and tumoral (B16F10) murine melanocytes in random motility conditions. The trajectories of the centroids of the cell perimeters were tracked through time-lapse microscopy. The statistics of these trajectories were analyzed by building velocity and turn angle distributions, as well as velocity autocorrelations and the scaling of mean-squared displacements. We find that these cells exhibit a crossover from a normal to a super-diffusive motion without angular persistence at long time scales. Moreover, these melanocytes move with non-Gaussian velocity distributions. This major finding indicates that amongst those animal cells supposedly migrating through Lévy walks, some can instead perform q-Gaussian walks. Furthermore, our results reveal that B16F10 cells infected by mycoplasmas exhibit essentially the same diffusivity as their healthy counterparts. Finally, a q-Gaussian random walk model was proposed to account for these melanocytic migratory traits. Simulations based on this model correctly describe the crossover to super-diffusivity in the cell migration tracks.
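
    One convenient way to simulate q-Gaussian step statistics (a sketch, not the authors' model code) uses the fact that for 1 < q < 3 a q-Gaussian is a rescaled Student's t with ν = (3 − q)/(q − 1) degrees of freedom; the q value, step count and scale below are arbitrary.

    ```python
    import numpy as np

    def q_gaussian_steps(q, size, scale=1.0, rng=None):
        """Draw q-Gaussian-distributed increments (1 < q < 3) via Student's t."""
        rng = rng or np.random.default_rng()
        nu = (3.0 - q) / (q - 1.0)          # degrees of freedom of the equivalent t
        return scale * rng.standard_t(nu, size=size)

    rng = np.random.default_rng(5)
    steps = q_gaussian_steps(q=1.5, size=(10_000, 2), rng=rng)  # 2-D random walk
    track = np.cumsum(steps, axis=0)

    # Mean-squared displacement at a few lags, for inspecting the diffusive scaling
    lags = np.array([1, 10, 100, 1000])
    msd = [np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1)) for lag in lags]
    print(dict(zip(lags.tolist(), msd)))
    ```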

  10. ROBUST: an interactive FORTRAN-77 package for exploratory data analysis using parametric, ROBUST and nonparametric location and scale estimates, data transformations, normality tests, and outlier assessment

    NASA Astrophysics Data System (ADS)

    Rock, N. M. S.

    ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the third (√b1) and fourth (b2) moment statistics, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data distributions themselves.

  11. Economic values under inappropriate normal distribution assumptions.

    PubMed

    Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R

    2012-08-01

    The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. To evaluate the impacts of skewness and of positive and negative excess kurtosis, the standard skew-normal, Pearson, and raised cosine distributions were used, respectively. For the various levels of skewness and kurtosis evaluated, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or consider alternative methods that are less sensitive to non-normality.

  12. Gestational age estimates from singleton births conceived using assisted reproductive technology.

    PubMed

    Callaghan, William M; Schieve, Laura A; Dietz, Patricia M

    2007-09-01

    Information on gestational age for public health research and surveillance in the US is usually obtained from vital records and is primarily based on the first day of the woman's last menstrual period (LMP). However, using LMP as a marker of conception is subject to a variety of errors and results in misclassification of gestational age. Pregnancies conceived through assisted reproductive technology (ART) are unique in that the estimates of gestational age are not based on the LMP, but on the date when fertilisation actually occurred, and thus most gestational age errors are likely to be due to errors introduced in recording and data entry. The purpose of this paper was to examine the birthweight distribution by gestational age for ART singleton livebirths reported to a national ART surveillance system. Gestational age was categorised as 20-27, 28-31, 32-36 and 37-44 weeks; birthweight distributions were plotted for each category. The distributions of very-low-birthweight (VLBW; <1500 g), moderately low-birthweight (1500-2499 g) and normal-birthweight infants for each gestational week were examined. At both 20-27 and 28-31 weeks, there was an extended right tail to the distribution and a small second mode. At 32-36 weeks, there were long tails in either direction and at 37-44 weeks, an extended tail to the left. There was a high proportion of VLBW infants at low gestational ages and a decreasing proportion of VLBW infants with increasing gestational age. However, there was also a fairly constant proportion of normal-birthweight infants at every gestational age below 34 weeks, which suggested misclassification of gestational age. Approximately 12% of ART births classified as 28-31 weeks' gestation had a birthweight in the second mode of the birthweight distribution compared with approximately 29% in national vital statistics data. Even when the birthweight and dates of conception and birth are known, questions remain regarding the residual amount of misclassification and the true nature of the birthweight distributions.

  13. Priority of a Hesitant Fuzzy Linguistic Preference Relation with a Normal Distribution in Meteorological Disaster Risk Assessment.

    PubMed

    Wang, Lihong; Gong, Zaiwu

    2017-10-10

    As meteorological disaster systems are large complex systems, disaster reduction programs must be based on risk analysis. Consequently, judgment by an expert based on his or her experience (also known as qualitative evaluation) is an important link in meteorological disaster risk assessment. In some complex and non-procedural meteorological disaster risk assessments, a hesitant fuzzy linguistic preference relation (HFLPR) is often used to deal with a situation in which experts may be hesitant while providing preference information of a pairwise comparison of alternatives, that is, the degree of preference of one alternative over another. This study explores hesitation from the perspective of statistical distributions, and obtains an optimal ranking of an HFLPR based on chance-restricted programming, which provides a new approach for hesitant fuzzy optimisation of decision-making in meteorological disaster risk assessments.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagri, Akbar; Hanson, John P.; Lind, J. P.

    We use high-energy X-ray diffraction microscopy (HEDM) to characterize the microstructure of Ni-base alloy 725. HEDM is a non-destructive technique capable of providing three-dimensional reconstructions of grain shapes and orientations in polycrystals. The present analysis yields the grain size distribution in alloy 725 as well as the grain boundary character distribution (GBCD) as a function of lattice misorientation and boundary plane normal orientation. We find that the GBCD of Ni-base alloy 725 is similar to that previously determined in pure Ni and other fcc-base metals. We find an elevated density of Σ9 and Σ3 grain boundaries. We also observe a preponderance of grain boundaries along low-index planes, with those along (1 1 1) planes being the most common, even after Σ3 twins have been excluded from the analysis.

  15. Hierarchical distance-based fuzzy approach to evaluate urban water supply systems in a semi-arid region.

    PubMed

    Yekta, Tahereh Sadeghi; Khazaei, Mohammad; Nabizadeh, Ramin; Mahvi, Amir Hossein; Nasseri, Simin; Yari, Ahmad Reza

    2015-01-01

    Hierarchical distance-based fuzzy multi-criteria group decision making served as a tool to evaluate the drinking water supply systems of Qom, a semi-arid city located in the central part of Iran. A list of aspects consisting of 6 criteria and 35 sub-criteria was evaluated based on a linguistic term set by five decision-makers. Four water supply alternatives, "Public desalinated distribution system", "PET Bottled Drinking Water", "Private desalinated water suppliers" and "Household desalinated water units", were assessed based on the criteria and sub-criteria. Data were aggregated and normalized to obtain the Performance Ratings of Alternatives. The Performance Ratings of Alternatives were then aggregated again to obtain the Aggregate Performance Ratings. The weighted distances from the ideal and anti-ideal solutions were calculated after secondary normalization. The proximity of each alternative to the ideal solution was determined as the final step, and the alternatives were ranked based on the magnitude of these ideal-solution measures. Results showed that "Public desalinated distribution system" was the most appropriate alternative to supply the drinking water needs of the Qom population, and "PET Bottled Drinking Water" was the second most acceptable option. A novel classification of alternatives to satisfy drinking water requirements was proposed, which is applicable to other cities located in semi-arid regions of Iran. The health issues were considered as an independent criterion, distinct from the environmental issues. The constraints of high-tech alternatives were also considered with regard to the level of dependency on overseas sources.

  16. Box–Cox Transformation and Random Regression Models for Fecal egg Count Data

    PubMed Central

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P.; Sonstegard, Tad S.; Cobuci, Jaime Araujo; Gasbarre, Louis C.

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box–Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) fitted the FEC data best. Results indicated that transformation of the FEC data using the Box–Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability; measurements of FEC obtained between weeks 12 and 26 of the 26-week experimental challenge period were genetically correlated. PMID:22303406
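
    The transformation step itself is a one-liner with scipy; the synthetic right-skewed counts below merely illustrate how the Box-Cox family reduces skewness, with λ estimated by maximum likelihood (the study's extension of the transformation is not reproduced here).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    # Synthetic, right-skewed "FEC-like" counts; +1 because Box-Cox needs strictly positive data
    fec = rng.negative_binomial(n=1, p=0.05, size=500) + 1

    transformed, lam = stats.boxcox(fec)
    print(f"lambda = {lam:.2f}")
    print(f"skewness before = {stats.skew(fec):.2f}, after = {stats.skew(transformed):.2f}")
    ```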

  18. Three-dimensional modeling and quantitative analysis of gap junction distributions in cardiac tissue.

    PubMed

    Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W

    2011-11-01

    Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but could also be found in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in the characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
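
    The profile statistics described can be sketched as follows on a synthetic intensity profile; 'polarization' is computed here as the fraction of signal in the outer portions of the long axis, which is one plausible reading of the end-localization measure rather than necessarily the authors' exact definition.

    ```python
    import numpy as np

    # Synthetic gap-junction intensity profile along a myocyte's principal axis,
    # with signal concentrated near the two cell ends (intercalated discs).
    x = np.linspace(0.0, 1.0, 200)                       # normalized axial position
    profile = np.exp(-((x - 0.05) / 0.05) ** 2) + np.exp(-((x - 0.95) / 0.05) ** 2) + 0.05

    w = profile / profile.sum()                          # treat the profile as a weight distribution
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    skewness = np.sum(w * (x - mean) ** 3) / var ** 1.5
    kurtosis = np.sum(w * (x - mean) ** 4) / var ** 2

    end_fraction = 0.2                                   # outer 20% at each end counts as "cell end"
    polarization = w[(x < end_fraction) | (x > 1 - end_fraction)].sum()
    print(skewness, kurtosis, polarization)
    ```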

  19. Normal bone and soft tissue distribution of fluorine-18-sodium fluoride and artifacts on 18F-NaF PET/CT bone scan: a pictorial review.

    PubMed

    Sarikaya, Ismet; Elgazzar, Abdelhamid H; Sarikaya, Ali; Alfeeli, Mahmoud

    2017-10-01

    Fluorine-18-sodium fluoride (18F-NaF) PET/CT is a relatively new, high-resolution bone imaging modality. Since the use of 18F-NaF PET/CT has been increasing, it is important to accurately assess the images and be aware of the normal distribution and major artifacts. In this pictorial review article, we will describe the normal uptake patterns of 18F-NaF in the bone tissues, particularly in complex structures, as well as its physiologic soft tissue distribution and certain artifacts seen on 18F-NaF PET/CT images.

  20. Transferrin receptors in human tissues: their distribution and possible clinical relevance.

    PubMed

    Gatter, K C; Brown, G; Trowbridge, I S; Woolston, R E; Mason, D Y

    1983-05-01

    The distribution of transferrin receptors (TR) has been studied in a range of normal and malignant tissues using four monoclonal antibodies, BK19.9, B3/25, T56/14 and T58/1. In normal tissues TR was found in a limited number of sites, notably basal epidermis, the endocrine pancreas, hepatocytes, Kupffer cells, testis and pituitary. This restricted pattern of distribution may be relevant to the characteristic pattern of iron deposition in primary haemochromatosis. In contrast to this limited pattern of expression in normal tissue, the receptor was widely distributed in carcinomas, sarcomas and in samples from cases of Hodgkin's disease. This malignancy-associated expression of the receptor may play a role in the anaemia of advanced malignancy by competing with the bone marrow for serum iron.

  1. Transferrin receptors in human tissues: their distribution and possible clinical relevance.

    PubMed Central

    Gatter, K C; Brown, G; Trowbridge, I S; Woolston, R E; Mason, D Y

    1983-01-01

    The distribution of transferrin receptors (TR) has been studied in a range of normal and malignant tissues using four monoclonal antibodies, BK19.9, B3/25, T56/14 and T58/1. In normal tissues TR was found in a limited number of sites, notably basal epidermis, the endocrine pancreas, hepatocytes, Kupffer cells, testis and pituitary. This restricted pattern of distribution may be relevant to the characteristic pattern of iron deposition in primary haemochromatosis. In contrast to this limited pattern of expression in normal tissue, the receptor was widely distributed in carcinomas, sarcomas and in samples from cases of Hodgkin's disease. This malignancy-associated expression of the receptor may play a role in the anaemia of advanced malignancy by competing with the bone marrow for serum iron. PMID: 6302135

  2. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    ERIC Educational Resources Information Center

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  3. Differential distribution of blood and lymphatic vessels in the murine cornea.

    PubMed

    Ecoiffier, Tatiana; Yuen, Don; Chen, Lu

    2010-05-01

    Because of its unique characteristics, the cornea has been widely used for blood and lymphatic vessel research. However, whether limbal or corneal vessels are evenly distributed under normal or inflamed conditions has never been studied. The purpose of this study was to investigate this question and to examine whether and how the distribution patterns change during corneal inflammatory lymphangiogenesis (LG) and hemangiogenesis (HG). Corneal inflammatory LG and HG were induced in two most commonly used mouse strains, BALB/c and C57BL/6 (6-8 weeks of age), by a standardized two-suture placement model. Oriented flat-mount corneas together with the limbal tissues were used for immunofluorescence microscope studies. Blood and lymphatic vessels under normal and inflamed conditions were analyzed and quantified to compare their distributions. The data demonstrate, for the first time, greater distribution of both blood and lymphatic vessels in the nasal side in normal murine limbal areas. This nasal-dominant pattern was maintained during corneal inflammatory LG, whereas it was lost for HG. Blood and lymphatic vessels are not evenly distributed in normal limbal areas. Furthermore, corneal LG and HG respond differently to inflammatory stimuli. These new findings will shed some light on corneal physiology and pathogenesis and on the development of experimental models and therapeutic strategies for corneal diseases.

  4. Three-dimensional finite analysis of acetabular contact pressure and contact area during normal walking.

    PubMed

    Wang, Guangye; Huang, Wenjun; Song, Qi; Liang, Jinfeng

    2017-11-01

    This study aims to analyze the contact areas and pressure distributions between the femoral head and acetabulum during normal walking using a three-dimensional finite element model (3D-FEM). Computed tomography (CT) scanning technology and a computer image processing system were used to establish the 3D-FEM. The acetabular model was used to simulate the pressures during 32 consecutive normal walking phases, and the contact areas at different phases were calculated. The distribution of the pressure peak values during the 32 consecutive normal walking phases was bimodal, reaching the peak (4.2 MPa) at the initial phase, where the contact area was significantly higher than that at the stepping phase. The sites that always kept contact were concentrated on the acetabular top and leaned inwards, while the anterior and posterior acetabular horns had no pressure concentration. The pressure distributions of acetabular cartilage at different phases were significantly different: the zone of increased pressure at the support phase was distributed over the acetabular top area, while that at the stepping phase was distributed on the inside of the acetabular cartilage. The zones of increased contact pressure and the distributions of acetabular contact areas are of clinical significance and could indicate inductive factors of acetabular osteoarthritis. Copyright © 2016. Published by Elsevier Taiwan.

  5. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior. Jeffreys' prior is a non-informative prior, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to obtain the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys' prior. The estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals that are difficult to evaluate analytically. Therefore, random samples are generated according to the posterior distribution of each parameter using a Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
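
    A minimal sketch of the Gibbs sampler outlined here, alternating a matrix-normal draw of B (the regression coefficients) with an inverse-Wishart draw of Σ; the degrees-of-freedom convention (df = n in scipy's invwishart parameterization) and all variable names are assumptions, not taken from the paper.

      # Sketch: Gibbs sampling for multivariate multiple regression Y = X B + E
      # under the non-informative Jeffreys prior p(B, Sigma) ~ |Sigma|^(-(m+1)/2).
      import numpy as np
      from scipy.stats import invwishart, matrix_normal

      def gibbs_mv_regression(Y, X, n_iter=1000, seed=0):
          rng = np.random.default_rng(seed)
          n, m = Y.shape
          XtX_inv = np.linalg.inv(X.T @ X)
          B_hat = XtX_inv @ X.T @ Y                      # OLS estimate of B
          Sigma = np.cov(Y - X @ B_hat, rowvar=False)    # starting value
          draws_B, draws_S = [], []
          for _ in range(n_iter):
              # B | Sigma, Y ~ matrix normal(B_hat, (X'X)^-1, Sigma)
              B = matrix_normal.rvs(mean=B_hat, rowcov=XtX_inv, colcov=Sigma,
                                    random_state=rng)
              # Sigma | B, Y ~ inverse Wishart(df = n, scale = residual SSCP)
              resid = Y - X @ B
              Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
              draws_B.append(B)
              draws_S.append(Sigma)
          return np.array(draws_B), np.array(draws_S)

      # Toy data: n = 100 observations, 2 predictors, 3 responses.
      rng = np.random.default_rng(1)
      X = np.column_stack([np.ones(100), rng.normal(size=100)])
      B_true = np.array([[1.0, 0.5, -0.2], [2.0, -1.0, 0.3]])
      Y = X @ B_true + rng.normal(scale=0.5, size=(100, 3))
      B_draws, S_draws = gibbs_mv_regression(Y, X, n_iter=500)
      print("posterior mean of B:\n", B_draws[100:].mean(axis=0))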

  6. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

    Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage against univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
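
    A minimal sketch of the two ingredients described: a Box-Cox transformation toward normality and a BMA predictive density written as a weighted mixture of normal kernels centred on the ensemble members. The member means, weights and common spread are placeholders, not calibrated values from this study.

      # Sketch: Box-Cox transformation of runoff plus a BMA predictive density
      # expressed as a weighted mixture of univariate normal kernels.
      import numpy as np
      from scipy.stats import boxcox, norm

      # 1) Transform skewed runoff toward normality (lambda fit by scipy's MLE).
      runoff = np.random.default_rng(0).gamma(shape=2.0, scale=30.0, size=1000)
      transformed, lam = boxcox(runoff)

      def bma_predictive_pdf(y, member_means, weights, sigma):
          """BMA predictive density at y (in the transformed space): a weighted
          mixture of normal kernels centred on the ensemble member forecasts,
          with a common spread sigma."""
          y = np.atleast_1d(y)
          comps = [w * norm.pdf(y, loc=mu, scale=sigma)
                   for w, mu in zip(weights, member_means)]
          return np.sum(comps, axis=0)

      # Illustrative 3-member ensemble in the Box-Cox space.
      member_means = np.array([8.2, 8.6, 7.9])
      weights = np.array([0.5, 0.3, 0.2])      # BMA weights (sum to 1)
      print(bma_predictive_pdf([8.0, 8.5, 9.0], member_means, weights, sigma=0.4))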

  7. An Application of Extreme Value Theory to Learning Analytics: Predicting Collaboration Outcome from Eye-Tracking Data

    ERIC Educational Resources Information Center

    Sharma, Kshitij; Chavez-Demoulin, Valérie; Dillenbourg, Pierre

    2017-01-01

    The statistics used in education research are based on central trends such as the mean or standard deviation, discarding outliers. This paper adopts another viewpoint that has emerged in statistics, called extreme value theory (EVT). EVT claims that the bulk of normal distribution is comprised mainly of uninteresting variations while the most…

  8. "It's Not Like a Normal 9 to 5!": The Learning Journeys of Media Production Apprentices in Distributed Working Conditions

    ERIC Educational Resources Information Center

    Lahiff, Ann; Guile, David

    2016-01-01

    An apprenticeship in media production in England is at the centre of this case study exploration. The context is exemplified by the organisation of the process of production around project teams and the development of project-based working cultures. Given these developments, the working conditions and learning opportunities presented to…

  9. Improvement of distributed snowmelt energy balance modeling with MODIS-based NDSI-derived fractional snow-covered area data

    Treesearch

    Joel W. Homan; Charles H. Luce; James P. McNamara; Nancy F. Glenn

    2011-01-01

    Describing the spatial variability of heterogeneous snowpacks at a watershed or mountain-front scale is important for improvements in large-scale snowmelt modelling. Snowmelt depletion curves, which relate fractional decreases in snowcovered area (SCA) against normalized decreases in snow water equivalent (SWE), are a common approach to scale-up snowmelt models....

  10. Graphene-Based Polymer Nanocomposites

    DTIC Science & Technology

    2015-03-31

    (Abstract not recoverable; the record excerpt consists only of a nomenclature list from the report, covering Raman band intensities I(δ), I(r) and I(ω), the X-ray scattering intensity in the azimuthal scan, the Krenchel orientation factor, the angles between the incident and scattered X-ray and between the graphene surface normal and the sample, the laser/X-ray wavelength, the orientation distribution function parameters λ2/λ4, and the molecular dipole moment µ.)

  11. Detection of cancerous cervical cells using physical adhesion of fluorescent silica particles and centripetal force.

    PubMed

    Gaikwad, Ravi M; Dokukin, Maxim E; Iyer, K Swaminathan; Woodworth, Craig D; Volkov, Dmytro O; Sokolov, Igor

    2011-04-07

    Here we describe a non-traditional method to identify cancerous human cervical epithelial cells in a culture dish based on physical adhesion between silica beads and cells. It is a simple optical fluorescence-based technique which detects the relative difference in the amount of fluorescent silica beads physically adherent to surfaces of cancerous and normal cervical cells. The method utilizes the centripetal force gradient that occurs in a rotating culture dish. Due to the variation in the balance between adhesion and centripetal forces, cancerous and normal cells demonstrate clearly distinctive distributions of the fluorescent particles adherent to the cell surface over the culture dish. The method demonstrates higher adhesion of silica particles to normal cells compared to cancerous cells. The difference in adhesion was initially observed by atomic force microscopy (AFM). The AFM data were used to design the parameters of the rotational dish experiment. The optical method that we describe is much faster and technically simpler than AFM. This work provides proof of the concept that physical interactions can be used to accurately discriminate normal and cancer cells. © The Royal Society of Chemistry 2011

  12. Sampling errors for satellite-derived tropical rainfall - Monte Carlo study using a space-time stochastic model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.

    1990-01-01

    Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.

  13. A direct comparison of popular models of normal memory loss and Alzheimer's disease in samples of African Americans, Mexican Americans, and refugees and immigrants from the former Soviet Union.

    PubMed

    Schrauf, Robert W; Iris, Madelyn

    2011-04-01

    To understand how people differentiate normal memory loss from Alzheimer's disease (AD) by investigating cultural models of these conditions. Ethnographic interviews followed by a survey. Cultural consensus analysis was used to test for the presence of group models, derive the "culturally correct" set of beliefs, and compare models of normal memory loss and AD. Chicago, Illinois. One hundred eight individuals from local neighborhoods: African Americans, Mexican Americans, and refugees and immigrants from the former Soviet Union. Participants responded to yes-or-no questions about the nature and causes of normal memory loss and AD and provided information on ethnicity, age, sex, acculturation, and experience with AD. Groups held a common model of AD as a brain-based disease reflecting irreversible cognitive decline. Higher levels of acculturation predicted greater knowledge of AD. Russian speakers favored biological over psychological models of the disease. Groups also held a common model of normal memory loss, including the important belief that "normal" forgetting involves eventual recall of the forgotten material. Popular models of memory loss and AD confirm that patients and clinicians are speaking the same "language" in their discussions of memory loss and AD. Nevertheless, the presence of coherent models of memory loss and AD, and the unequal distribution of that knowledge across groups, suggests that clinicians should include wider circles of patients' families and friends in their consultations. These results frame knowledge as distributed across social groups rather than simply the possession of individual minds. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.

  14. Multiple imputation in the presence of non-normal data.

    PubMed

    Lee, Katherine J; Carlin, John B

    2017-02-20

    Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
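
    A bare-bones sketch of predictive mean matching for one incomplete continuous variable; it omits the Bayesian parameter draw that distinguishes type 1 from type 2 matching, so it illustrates the idea rather than the procedure used in the paper's simulations.

      # Sketch: single-variable predictive mean matching (PMM) imputation.
      # Fit a regression on complete cases, predict for all cases, and impute
      # each missing value with the observed value of a donor chosen at random
      # among the k cases whose predictions are closest. (Proper MI would also
      # draw the regression parameters from their posterior; omitted here.)
      import numpy as np

      def pmm_impute(y, X, k=5, seed=0):
          rng = np.random.default_rng(seed)
          miss = np.isnan(y)
          Xd = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(Xd[~miss], y[~miss], rcond=None)
          pred = Xd @ beta
          y_imp = y.copy()
          for i in np.where(miss)[0]:
              donors = np.argsort(np.abs(pred[~miss] - pred[i]))[:k]
              y_imp[i] = y[~miss][rng.choice(donors)]
          return y_imp

      rng = np.random.default_rng(1)
      x = rng.normal(size=500)
      y = np.exp(1 + 0.8 * x + rng.normal(scale=0.5, size=500))   # skewed outcome
      y[rng.random(500) < 0.5] = np.nan                           # ~50% MCAR
      print(np.nanmean(pmm_impute(y, x)))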

  15. Pulsatile flows and wall-shear stresses in models simulating normal and stenosed aortic arches

    NASA Astrophysics Data System (ADS)

    Huang, Rong Fung; Yang, Ten-Fang; Lan, Y.-K.

    2010-03-01

    Pulsatile aqueous glycerol solution flows in the models simulating normal and stenosed human aortic arches are measured by means of particle image velocimetry. Three transparent models were used: normal, 25% stenosed, and 50% stenosed aortic arches. The Womersley parameter, Dean number, and time-averaged Reynolds number are 17.31, 725, and 1,081, respectively. The Reynolds numbers based on the peak velocities of the normal, 25% stenosed, and 50% stenosed aortic arches are 2,484, 3,456, and 3,931, respectively. The study presents the temporal/spatial evolution processes of the flow pattern, velocity distribution, and wall-shear stress during the systolic and diastolic phases. It is found that the flow pattern evolving in the central plane of normal and stenosed aortic arches exhibits (1) a separation bubble around the inner arch, (2) a recirculation vortex around the outer arch wall upstream of the junction of the brachiocephalic artery, (3) an accelerated main stream around the outer arch wall near the junctions of the left carotid and the left subclavian arteries, and (4) the vortices around the entrances of the three main branches. The study identifies and discusses the reasons for the flow physics’ contribution to the formation of these features. The oscillating wall-shear stress distributions are closely related to the featured flow structures. On the outer wall of normal and slightly stenosed aortas, large wall-shear stresses appear in the regions upstream of the junction of the brachiocephalic artery as well as the corner near the junctions of the left carotid artery and the left subclavian artery. On the inner wall, the largest wall-shear stress appears in the region where the boundary layer separates.

  16. Time-invariant component-based normalization for a simultaneous PET-MR scanner.

    PubMed

    Belzunce, M A; Reader, A J

    2016-05-07

    Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which are needed to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out of date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies in the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, but the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors that was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations during a period of 230 d and we found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data. Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this scanner. Moreover, to extend the analysis to other scanners, we generated distributions of crystal efficiencies with greater fluctuations than those found in the Biograph mMR scanner and evaluated their impact in simulations with a wide variety of noise levels. An important finding of this work is that a regular normalization scan is not needed in scanners with photodetectors with relatively low dispersion in their efficiencies.
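
    A schematic sketch of the bookkeeping behind component-based normalization, with the per-line-of-response factor assembled as a product of time-invariant terms and, optionally, the intrinsic crystal efficiencies; array shapes, names and the toy efficiency dispersion are assumptions, not the Biograph mMR implementation.

      # Sketch: component-based normalization as a product of per-LOR factors.
      # Each line of response (LOR) joining crystals i and j gets a factor built
      # from time-invariant geometric/axial components and (optionally) the
      # time-variant intrinsic crystal efficiencies.
      import numpy as np

      def lor_normalization(crystal_eff, geom_factor, axial_factor,
                            use_crystal_eff=True):
          """crystal_eff: (n,) intrinsic efficiencies;
             geom_factor, axial_factor: (n, n) time-invariant components.
             Returns an (n, n) matrix of normalization factors."""
          norm = geom_factor * axial_factor
          if use_crystal_eff:
              norm = norm * np.outer(crystal_eff, crystal_eff)
          return norm

      n = 64                                    # toy detector ring
      rng = np.random.default_rng(0)
      eff = rng.normal(1.0, 0.03, size=n)       # low-dispersion efficiencies
      geom = np.ones((n, n))
      axial = np.ones((n, n))
      with_eff = lor_normalization(eff, geom, axial, use_crystal_eff=True)
      fixed_only = lor_normalization(eff, geom, axial, use_crystal_eff=False)
      # Relative error incurred by the "time-invariant only" approximation:
      print(np.max(np.abs(with_eff - fixed_only) / with_eff))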

  17. Time-invariant component-based normalization for a simultaneous PET-MR scanner

    NASA Astrophysics Data System (ADS)

    Belzunce, M. A.; Reader, A. J.

    2016-05-01

    Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which are needed to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out of date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies in the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, but the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors that was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations during a period of 230 d and we found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data. Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this scanner. Moreover, to extend the analysis to other scanners, we generated distributions of crystal efficiencies with greater fluctuations than those found in the Biograph mMR scanner and evaluated their impact in simulations with a wide variety of noise levels. An important finding of this work is that a regular normalization scan is not needed in scanners with photodetectors with relatively low dispersion in their efficiencies.

  18. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
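
    A small sketch of the normalize-and-fit step: rescale intertrade durations by their mean and fit a Weibull by maximum likelihood. The q-exponential comparison would need a custom density and is omitted, and the simulated durations stand in for the order-book data.

      # Sketch: scale intertrade durations by their mean and fit a Weibull by MLE.
      import numpy as np
      from scipy.stats import weibull_min

      durations = np.random.default_rng(0).weibull(0.7, size=20000) * 3.0  # toy data
      scaled = durations / durations.mean()                # normalized durations
      shape, loc, scale = weibull_min.fit(scaled, floc=0)  # MLE, location fixed at 0
      loglik = np.sum(weibull_min.logpdf(scaled, shape, loc=loc, scale=scale))
      print(f"Weibull shape={shape:.3f}, scale={scale:.3f}, logL={loglik:.1f}")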

  19. Statistical analysis of content of Cs-137 in soils in Bansko-Razlog region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobilarov, R. G., E-mail: rkobi@tu-sofia.bg

    Statistical analysis of the data set consisting of the activity concentrations of ¹³⁷Cs in soils in the Bansko–Razlog region is carried out in order to establish the dependence of the deposition and migration of ¹³⁷Cs on the soil type. The descriptive statistics and the test of normality show that the data set does not have a normal distribution. A positively skewed distribution and possible outlying values of the ¹³⁷Cs activity in soils were observed. After reduction of the effects of outliers, the data set is divided into two parts, depending on the soil type. Testing the normality of the two new data sets shows that they have a normal distribution. The ordinary kriging technique is used to characterize the spatial distribution of the ¹³⁷Cs activity over an area covering 40 km² (the whole Razlog valley). The result (a map of the spatial distribution of the ¹³⁷Cs activity concentration) can be used as a reference point for future studies on the assessment of radiological risk to the population and the erosion of soils in the study area.
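
    A minimal sketch of the normality screening described (Shapiro-Wilk plus a skewness test), with synthetic positively skewed activities standing in for the ¹³⁷Cs data; the kriging step would require a geostatistics library and is not shown.

      # Sketch: test a set of activity concentrations for normality and skewness.
      import numpy as np
      from scipy import stats

      activity = np.random.default_rng(0).lognormal(mean=3.0, sigma=0.6, size=120)
      W, p_sw = stats.shapiro(activity)             # Shapiro-Wilk test of normality
      skew_stat, p_skew = stats.skewtest(activity)  # D'Agostino skewness test
      print(f"Shapiro-Wilk p={p_sw:.3g}, skewness-test p={p_skew:.3g}")
      # A small p-value suggests departure from normality; log-transforming or
      # splitting the data by soil type (as in the study) can then be explored.
      print(stats.shapiro(np.log(activity)))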

  20. Box-Cox transformation of firm size data in statistical analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-03-01

    Firm size data usually do not show the normality that is often assumed in statistical analysis such as regression analysis. In this study we focus on two firm size data: the number of employees and sale. Those data deviate considerably from a normal distribution. To improve the normality of those data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm size data transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sale have the similar property as a firm size indicator. The Box-Cox parameters obtained for the firm size data are found to be very close to zero. In this case the Box-Cox transformations are approximately a log-transformation. This suggests that the firm size data we used are approximately log-normal distributions.
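
    A sketch of the selection rule described: choose the Box-Cox parameter so that the transformed data show the kurtosis of a normal distribution (excess kurtosis near zero). The grid search and the synthetic firm-size proxy are illustrative.

      # Sketch: pick the Box-Cox lambda that brings the transformed data's
      # kurtosis closest to the normal value (excess kurtosis = 0).
      import numpy as np
      from scipy import stats

      def boxcox_by_kurtosis(x, lambdas=np.linspace(-1.0, 1.0, 201)):
          best_lam, best_gap = None, np.inf
          for lam in lambdas:
              z = stats.boxcox(x, lmbda=lam)
              gap = abs(stats.kurtosis(z))      # excess kurtosis (0 for a normal)
              if gap < best_gap:
                  best_lam, best_gap = lam, gap
          return best_lam

      employees = np.random.default_rng(0).lognormal(mean=4.0, sigma=1.2, size=5000)
      lam = boxcox_by_kurtosis(employees)
      print(f"selected lambda = {lam:.2f}")  # near 0 => essentially a log-transform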

  1. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system as exercised to model a desired output information layer as a function of input layers of raster format collateral and image data base layers.
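
    A compact sketch of maximum likelihood (Gaussian) classification with class prior probabilities supplied from collateral data; the class statistics and priors here are illustrative, and the logit-classifier and GIS components of the study are not shown.

      # Sketch: Gaussian maximum likelihood classification with prior probabilities.
      import numpy as np
      from scipy.stats import multivariate_normal

      def ml_classify(pixels, means, covs, priors):
          """pixels: (n, d) image-band values; means/covs per class; priors from
          collateral data (e.g., expected class areas). Returns class indices."""
          log_post = np.column_stack([
              np.log(p) + multivariate_normal.logpdf(pixels, mean=m, cov=c)
              for m, c, p in zip(means, covs, priors)
          ])
          return np.argmax(log_post, axis=1)

      rng = np.random.default_rng(0)
      means = [np.array([30.0, 50.0]), np.array([60.0, 40.0])]
      covs = [np.eye(2) * 25.0, np.eye(2) * 36.0]
      pixels = np.vstack([rng.multivariate_normal(means[0], covs[0], 200),
                          rng.multivariate_normal(means[1], covs[1], 200)])
      labels_equal = ml_classify(pixels, means, covs, priors=[0.5, 0.5])
      labels_prior = ml_classify(pixels, means, covs, priors=[0.8, 0.2])
      print((labels_equal != labels_prior).sum(), "pixels change class with priors")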

  2. Non-linear learning in online tutorial to enhance students’ knowledge on normal distribution application topic

    NASA Astrophysics Data System (ADS)

    Kartono; Suryadi, D.; Herman, T.

    2018-01-01

    This study aimed to analyze how non-linear learning (NLL) in online tutorial (OT) content enhances students' knowledge of normal distribution application (KONDA). KONDA is a competence expected to be achieved after students have studied the topic of normal distribution application in the course Education Statistics. The analysis was performed with a quasi-experimental study design. The subjects were divided into an experimental class given OT content in the NLL model and a control class given OT content in a conventional learning (CL) model. The data were the results of online objective tests measuring students' statistical prior knowledge (SPK) and students' pre- and post-test KONDA scores. Statistical analysis of the KONDA gain scores showed that, among students with low and moderate SPK scores, those who learned the OT content with the NLL model performed better than those who learned it with the CL model. For students with high SPK scores, the gain scores of the NLL and CL groups were similar. Based on these findings, it can be concluded that the NLL model applied to OT content can enhance the KONDA of students at low and moderate SPK levels. An extra, more challenging didactical situation is needed for students at a high SPK level to achieve a significant gain score.

  3. [Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].

    PubMed

    Zhu, Chun; Zhang, Xu

    2010-10-01

    Vehicle emission is one of the main sources of fine/ultra-fine particles in many cities. This study first presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, obtained from four days of consecutive real-world measurements in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses are obtained by MLR methods; the particle distributions of diesel buses and CNG buses are observed as a single accumulation mode and a nuclei mode, respectively. The particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30-min interval mean scan; the goodness of fit between the combined fitting curves and the corresponding in-situ scans, over a total of 90 fitted scans, ranges from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified with box-whisker charts. For the log-normal particle size distribution of diesel buses, accumulation mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.3.
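
    A rough sketch of the decomposition step: fitting a measured number-size spectrum with the sum of two log-normal modes by least squares. The synthetic spectrum, mode parameters and starting values are placeholders, not the tunnel data.

      # Sketch: fit a particle number-size distribution dN/dlogDp with the sum
      # of two log-normal modes (nucleation + accumulation).
      import numpy as np
      from scipy.optimize import curve_fit

      def lognormal_mode(Dp, N, Dg, sigma_g):
          """dN/dlogDp of one log-normal mode with total number N, geometric
          mean diameter Dg (nm) and geometric standard deviation sigma_g."""
          return (N / (np.sqrt(2 * np.pi) * np.log10(sigma_g))
                  * np.exp(-(np.log10(Dp) - np.log10(Dg)) ** 2
                           / (2 * np.log10(sigma_g) ** 2)))

      def two_modes(Dp, N1, Dg1, s1, N2, Dg2, s2):
          return lognormal_mode(Dp, N1, Dg1, s1) + lognormal_mode(Dp, N2, Dg2, s2)

      Dp = np.logspace(1, 3, 60)                                # 10 nm - 1000 nm
      spectrum = two_modes(Dp, 8e3, 21.0, 1.3, 4e3, 80.0, 1.9)  # synthetic "scan"
      spectrum += np.random.default_rng(0).normal(0, 50, Dp.size)
      p0 = [5e3, 20.0, 1.3, 5e3, 75.0, 1.8]                     # starting guesses
      popt, _ = curve_fit(two_modes, Dp, spectrum, p0=p0, maxfev=20000)
      print("fitted (N, Dg, sigma_g) per mode:", np.round(popt, 2))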

  4. Full Waveform Inversion Using Student's t Distribution: a Numerical Study for Elastic Waveform Inversion and Simultaneous-Source Method

    NASA Astrophysics Data System (ADS)

    Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki

    2015-06-01

    Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function can suffer from its weakness and sensitivity to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l 1-norm-based objective functions. However, the l 1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI to give reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l 2-norm and l 1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l 2-norm is sensitive to noise, whereas the l 1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l 1- and l 2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l 1- and l 2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
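
    A small sketch contrasting the three residual penalties discussed (l2, l1 and a Student's-t log-likelihood) and their derivatives with respect to the data residual, which control how strongly outliers drive the inversion; the degrees of freedom nu and scale s of the Student's t penalty are illustrative choices.

      # Sketch: l2, l1 and Student's-t misfits of a data residual r, and their
      # derivatives (the weights each residual receives in the gradient).
      import numpy as np

      def l2_misfit(r):            return 0.5 * r ** 2
      def l2_grad(r):              return r                # linear: noise-sensitive

      def l1_misfit(r):            return np.abs(r)
      def l1_grad(r):              return np.sign(r)       # singular near r = 0

      def t_misfit(r, nu=4.0, s=1.0):
          return 0.5 * (nu + 1.0) * np.log(1.0 + r ** 2 / (nu * s ** 2))
      def t_grad(r, nu=4.0, s=1.0):
          return (nu + 1.0) * r / (nu * s ** 2 + r ** 2)   # bounded for large |r|

      r = np.linspace(-10, 10, 5)
      print("residual:", r)
      print("l2 grad :", l2_grad(r))
      print("l1 grad :", l1_grad(r))
      print("t  grad :", np.round(t_grad(r), 3))   # outliers are down-weighted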

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum likelihood estimates of the parameters of mixtures of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
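
    A compact sketch, in the spirit of the iterative procedure described, of maximum-likelihood (EM-style) estimation for a two-component mixture of univariate normals; it is a generic textbook scheme, not the report's algorithm.

      # Sketch: iterative maximum-likelihood (EM) estimation for a two-component
      # mixture of univariate normal distributions.
      import numpy as np
      from scipy.stats import norm

      def em_two_normals(x, n_iter=200):
          # crude initialization from the data
          w, mu1, mu2 = 0.5, np.percentile(x, 25), np.percentile(x, 75)
          s1 = s2 = x.std()
          for _ in range(n_iter):
              # E-step: responsibility of component 1 for each observation
              p1 = w * norm.pdf(x, mu1, s1)
              p2 = (1 - w) * norm.pdf(x, mu2, s2)
              r = p1 / (p1 + p2)
              # M-step: weighted parameter updates
              w = r.mean()
              mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
              s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
              s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))
          return w, (mu1, s1), (mu2, s2)

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(0, 1, 600), rng.normal(4, 1.5, 400)])
      print(em_two_normals(x))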

  6. The Theoretical Distribution of Evoked Brainstem Activity in Preterm, High-Risk, and Healthy Infants.

    ERIC Educational Resources Information Center

    Salamy, A.

    1981-01-01

    Determines the frequency distribution of Brainstem Auditory Evoked Potential variables (BAEP) for premature babies at different stages of development--normal newborns, infants, young children, and adults. The author concludes that the assumption of normality underlying most "standard" statistical analyses can be met for many BAEP…

  7. Multivariate stochastic simulation with subjective multivariate normal distributions

    Treesearch

    P. J. Ince; J. Buongiorno

    1991-01-01

    In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...

  8. Discrete Latent Markov Models for Normally Distributed Response Data

    ERIC Educational Resources Information Center

    Schmittmann, Verena D.; Dolan, Conor V.; van der Maas, Han L. J.; Neale, Michael C.

    2005-01-01

    Van de Pol and Langeheine (1990) presented a general framework for Markov modeling of repeatedly measured discrete data. We discuss analogous single indicator models for normally distributed responses. In contrast to discrete models, which have been studied extensively, analogous continuous response models have hardly been considered. These…

  9. Investigation into the Use of Normal and Half-Normal Plots for Interpreting Results from Screening Experiments.

    DTIC Science & Technology

    1987-03-25

    by Lloyd (1952) using generalized least squares instead of ordinary least squares, and by Wilk, Gnanadesikan, and Freeny (1963) using a maximum...plot. The half-normal distribution is a special case of the gamma distribution proposed by Wilk, Gnanadesikan, and Huyett (1962). VARIATIONS ON THE... Gnanadesikan, R. Probability plotting methods for the analysis of data. Biometrika, 1968, 55, 1-17. This paper describes and discusses graphical techniques

  10. Antagonism Between Luminal and Caffeine, Studied by the Use of Radioisotopes; RECHERCHES SUR LES BARBITURIQUES RADIOACTIFS: LA DISTRIBUTION DU LUMINAL DANS L'ORGANISME ANIMAL EN PRESENCE DE CAFEINE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aliprandi, B.; Masironi, R.

    1959-10-31

    The normal pattern of distribution of luminal in the animal organism was determined in mice using a tracer technique. The effect of an antagonistic drug, e.g., caffeine, on this normal distribution pattern was studied. The results confirmed the hypothesis of the in vivo breaking of the barbituric ring. (J.S.R.)

  11. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.

  12. Planar Laser Imaging of Sprays for Liquid Rocket Studies

    NASA Technical Reports Server (NTRS)

    Lee, W.; Pal, S.; Ryan, H. M.; Strakey, P. A.; Santoro, Robert J.

    1990-01-01

    A planar laser imaging technique which incorporates an optical polarization ratio technique for droplet size measurement was studied. A series of pressure atomized water sprays were studied with this technique and compared with measurements obtained using a Phase Doppler Particle Analyzer. In particular, the effects of assuming a logarithmic normal distribution function for the droplet size distribution within a spray were evaluated. Reasonable agreement between the instruments was obtained for the geometric mean diameter of the droplet distribution. However, comparisons based on the Sauter mean diameter show larger discrepancies, essentially because of uncertainties in the appropriate standard deviation to be applied for the polarization ratio technique. Comparisons were also made between single laser pulse (temporally resolved) measurements and multiple laser pulse visualizations of the spray.

  13. Information pricing based on trusted system

    NASA Astrophysics Data System (ADS)

    Liu, Zehua; Zhang, Nan; Han, Hongfeng

    2018-05-01

    Personal information has become a valuable commodity in today's society, so our goal is to develop realistic price points and a pricing system. First, we improve the existing BLP system to prevent cascading incidents and design a 7-layer model. From the cost of encryption at each layer, we develop personal information (PI) price points. We then use association rule mining, a data mining technique, to calculate the importance of information, in order to optimize the informational hierarchies of different attribute types within a multi-level trusted system. Finally, we use a normal distribution model to predict the distribution of encryption levels for users in different classes, and calculate information prices with a linear programming model based on that distribution.

  14. A comparative study of the spatial distribution of mast cells and microvessels in the foetal, adult human thymus and thymoma.

    PubMed

    Raica, Marius; Cimpean, Anca Maria; Nico, Beatrice; Guidolin, Diego; Ribatti, Domenico

    2010-02-01

    Mast cells (MCs) are widely distributed in human and animal tissues and have been shown to play an important role in angiogenesis in normal and pathological conditions. Few data are available about the relationship between MCs and blood vessels in the normal human thymus, and there are virtually no data about their distribution and significance in thymoma. The aim of this study was to analyse the spatial distribution of MCs and microvessels in the normal foetal and adult thymus and thymoma. Twenty biopsy specimens of human thymus, including foetal and adult normal thymus and thymoma were analysed. Double staining with CD34 and mast cell tryptase was used to count both mast cells and microvessels in the same fields. Computer-assisted image analysis was performed to characterize the spatial distribution of MCs and blood vessels in selected specimens. Results demonstrated that MCs were localized exclusively to the medulla. Their number was significantly higher in thymoma specimens as compared with adult and foetal normal specimens respectively. In contrast the microvessel area was unchanged. The analysis of the spatial distribution and relationship between MCs and microvessels revealed that only in the thymoma specimens was there a significant spatial association between MCs and microvessels. Overall, these data suggest that MCs do not contribute significantly to the development of the vascular network in foetal and adult thymus, whereas in thymoma they show a close relationship to blood vessels. This could be an expression of their involvement not only in endothelial cells but also in tumour cell proliferation.

  15. Multiplicity distributions of charged hadrons in νp and ν̄p charged current interactions

    NASA Astrophysics Data System (ADS)

    Jones, G. T.; Jones, R. W. L.; Kennedy, B. W.; Morrison, D. R. O.; Mobayyen, M. M.; Wainstein, S.; Aderholz, M.; Hantke, D.; Katz, U. F.; Kern, J.; Schmitz, N.; Wittek, W.; Borner, H. P.; Myatt, G.; Radojicic, D.; Burke, S.

    1992-03-01

    Using data on νp and ν̄p charged current interactions from a bubble chamber experiment with BEBC at CERN, the multiplicity distributions of charged hadrons are investigated. The analysis is based on ~20000 events with incident ν and ~10000 events with incident ν̄. The invariant mass W of the total hadronic system ranges from 3 GeV to ~14 GeV. The experimental multiplicity distributions are fitted by the binomial function (for different intervals of W and in different intervals of the rapidity y), by the Levy function and the lognormal function. All three parametrizations give acceptable values for χ². For fixed W, forward and backward multiplicities are found to be uncorrelated. The normalized moments of the charged multiplicity distributions are measured as a function of W. They show a violation of KNO scaling.
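
    A small sketch of the normalized moments C_q = <n^q>/<n>^q of a charged-multiplicity distribution, the quantities whose dependence on W is used to test KNO scaling; the toy distribution is purely illustrative.

      # Sketch: normalized moments C_q = <n^q> / <n>^q of a charged-multiplicity
      # distribution P(n).
      import numpy as np

      def normalized_moments(n_values, probabilities, q_max=5):
          n = np.asarray(n_values, float)
          P = np.asarray(probabilities, float) / np.sum(probabilities)
          mean_n = np.sum(n * P)
          return {q: np.sum(n ** q * P) / mean_n ** q for q in range(2, q_max + 1)}

      # Toy multiplicity distribution (e.g., from a histogram of events):
      n_vals = np.arange(0, 21)
      counts = np.exp(-0.5 * ((n_vals - 6) / 2.5) ** 2)    # illustrative shape
      print(normalized_moments(n_vals, counts))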

  16. Monte Carlo based electron treatment planning and cutout output factor calculations

    NASA Astrophysics Data System (ADS)

    Mitrou, Ellis

    Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors, in addition to 2D GAFCHROMIC EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron treatment planning will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could yield a clinical time saving of up to 1 hour per patient.

  17. Construction and validation of the midsagittal reference plane based on the skull base symmetry for three-dimensional cephalometric craniofacial analysis.

    PubMed

    Kim, Hak-Jin; Kim, Bong Chul; Kim, Jin-Geun; Zhengguo, Piao; Kang, Sang Hoon; Lee, Sang-Hwy

    2014-03-01

    The objective of this study was to determine the reliable midsagittal (MS) reference plane in practical ways for the three-dimensional craniofacial analysis on three-dimensional computed tomography images. Five normal human dry skulls and 20 normal subjects without any dysmorphoses or asymmetries were used. The accuracies and stability on repeated plane construction for almost every possible candidate MS plane based on the skull base structures were examined by comparing the discrepancies in distances and orientations from the reference points and planes of the skull base and facial bones on three-dimensional computed tomography images. The following reference points of these planes were stable, and their distribution was balanced: nasion and foramen cecum at the anterior part of the skull base, sella at the middle part, and basion and opisthion at the posterior part. The candidate reference planes constructed using the aforementioned reference points were thought to be reliable for use as an MS reference plane for the three-dimensional analysis of maxillofacial dysmorphosis.

  18. The impact of sample non-normality on ANOVA and alternative methods.

    PubMed

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
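
    A minimal sketch of the kind of comparison discussed, run on skewed synthetic samples; the Welch and Brown-Forsythe variants are omitted because they need additional code, and the scipy.stats function choices are mine, not the paper's simulation design.

      # Sketch: compare one-way ANOVA with the Kruskal-Wallis test on skewed samples.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      groups = [rng.lognormal(mean=m, sigma=0.8, size=40) for m in (1.0, 1.1, 1.3)]

      F, p_anova = stats.f_oneway(*groups)    # assumes (approximate) normality
      H, p_kw = stats.kruskal(*groups)        # rank-based, distribution-free
      print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")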

  19. The importance of organic matter distribution and extract soil:solution ratio on the desorption of heavy metals from soils.

    PubMed

    Yin, Yujun; Impellitteri, Christopher A; You, Sun-Jae; Allen, Herbert E

    2002-03-15

    The lability (mobility and bioavailability) of metals varies significantly with soil properties for similar total soil metal concentrations. We studied desorption of Cu, Ni and Zn from 15 diverse, unamended soils. These studies included evaluation of the effects of soil:solution extraction ratio and the roles of soil properties on metal desorption. Desorption was examined for each metal by computing distribution coefficients (Kd) for each metal in each soil, where Kd = [M]soil/[M]solution. Results from soil:solution ratio studies demonstrated that Kd values for the metals tended to increase with increasing soil:solution ratio. This result also held true for distribution of soil organic matter (SOM). Because the soil:solution ratio has a significant effect on measured metal distributions, we selected a high soil:solution ratio to more closely approach natural soil conditions. Copper showed strong affinity to operationally defined dissolved organic matter (DOM). In this study, DOM was operationally defined based on the total organic carbon (TOC) content in 0.45-microm or 0.22-microm filtrates of the extracts. The Kd of Cu correlated linearly (r² = 0.91) with the Kd of organic matter (Kd-om), where the Kd-om is equal to SOM as measured by Walkley-Black wet combustion and converted to total carbon (TC) by a factor of 0.59. These values representing solid phase TC were then divided by soluble organic carbon as measured by TOC analysis (DOM). The conversion factor of 0.59 was employed in order to construct Kd-om values based on solid phase carbon and solution phase carbon. SOM plays a significant role in the fate of Cu in soil systems. Soil-solution distribution of Ni and Zn, as well as the activity of free Cu2+, were closely related to SOM, but not to DOM. Kd values for Ni, Zn and free Cu2+ in a particular soil were divided by the SOM content in the same soil. This normalization of the Kd values for Ni, Zn, and free Cu2+ to the SOM content resulted in significant improvements in the linear relationships between non-normalized Kd values and soil pH. The semi-empirical normalized regression equations can be used to predict the solubility of Ni and Zn and the activity of free Cu2+ as a function of pH.
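
    A schematic sketch of the distribution-coefficient bookkeeping described: Kd = [M]soil/[M]solution, followed by normalization to soil organic matter and a simple regression against pH. All numbers are placeholders, not the study's measurements.

      # Sketch: distribution coefficients Kd = [M]soil / [M]solution, normalized
      # to soil organic matter (SOM), then regressed against soil pH.
      import numpy as np

      def kd(metal_soil_mg_per_kg, metal_solution_mg_per_L):
          return metal_soil_mg_per_kg / metal_solution_mg_per_L   # L/kg

      # Illustrative data for a handful of soils:
      m_soil = np.array([120.0, 95.0, 240.0, 60.0, 180.0])  # mg/kg
      m_soln = np.array([0.40, 0.90, 0.25, 1.50, 0.30])     # mg/L
      som    = np.array([2.1, 1.0, 4.5, 0.8, 3.2])          # % organic matter
      pH     = np.array([6.5, 5.2, 7.1, 4.8, 6.9])

      kd_vals = kd(m_soil, m_soln)
      kd_norm = kd_vals / som                                # SOM-normalized Kd
      slope, intercept = np.polyfit(pH, np.log10(kd_norm), 1)  # linear fit vs pH
      print(f"log10(Kd/SOM) = {slope:.2f} * pH + {intercept:.2f}")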

  20. Size distribution and sorption of polychlorinated biphenyls during haze episodes

    NASA Astrophysics Data System (ADS)

    Zhu, Qingqing; Liu, Guorui; Zheng, Minghui; Zhang, Xian; Gao, Lirong; Su, Guijin; Liang, Yong

    2018-01-01

    There is a lack of studies on the size distribution of polychlorinated biphenyls (PCBs) during haze days, and their sorption mechanisms on aerosol particles remain unclear. In this study, PCBs in particle-sized aerosols from urban atmospheres of Beijing, China were investigated during haze and normal days. The concentrations, gas/particle partitioning, size distribution, and associated human daily intake of PCBs via inhalation were compared during haze days and normal days. Compared with normal days, higher particle mass-associated PCB levels were measured during haze days. The concentrations of ∑PCBs in particulate fractions were 11.9-134 pg/m3 and 6.37-14.9 pg/m3 during haze days and normal days, respectively. PCBs increased with decreasing particle size (>10 μm, 10-2.5 μm, 2.5-1.0 μm, and ≤1.0 μm). During haze days, PCBs were overwhelmingly associated with a fine particle fraction of ≤1.0 μm (64.6%), while during normal days the contribution was 33.7%. Tetra-CBs were the largest contributors (51.8%-66.7%) both in the gas and particle fractions during normal days. The profiles in the gas fraction were conspicuously different than those in the PM fractions during haze days, with di-CBs predominating in the gas fraction and higher homologues (tetra-CBs, penta-CBs, and hexa-CBs) concurrently accounting for most of the PM fractions. The mean-normalized size distributions of particulate mass and PCBs exhibited unimodal patterns, and a similar trend was observed for PCBs during both days. They all tended to be in the PM fraction of 1.0-2.5 μm. Adsorption might be the predominating mechanism for the gas-particle partitioning of PCBs during haze days, whereas absorption might be dominative during normal days.

  1. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Das, Samiran

    2018-04-01

    The use of three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unidentified and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression equation form to show the dependence of shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
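
    A sketch of the general Monte Carlo recipe for approximating critical values of an EDF statistic when parameters are estimated from the sample. A gamma distribution fitted by maximum likelihood stands in for the GNO/L-moments case, which would need a dedicated L-moments library, so the numbers are purely illustrative.

      # Sketch: Monte Carlo approximation of critical values for an EDF statistic
      # (Kolmogorov-Smirnov here) when parameters are re-estimated from each sample.
      import numpy as np
      from scipy import stats

      def ks_critical_value(dist, true_params, n, alpha=0.05, n_sim=1000, seed=0):
          rng = np.random.default_rng(seed)
          stats_mc = np.empty(n_sim)
          for i in range(n_sim):
              x = dist.rvs(*true_params, size=n, random_state=rng)
              fitted = dist.fit(x)                    # parameters re-estimated
              stats_mc[i] = stats.kstest(x, dist.cdf, args=fitted).statistic
          return np.quantile(stats_mc, 1 - alpha)

      crit = ks_critical_value(stats.gamma, true_params=(2.0, 0.0, 1.0), n=50)
      print(f"simulated 5% critical value (n=50): {crit:.3f}")
      # This is smaller than the standard KS table value, which assumes fully
      # specified parameters - the reason tailored critical values are needed.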

  2. Nonlinear saturation of wave packets excited by low-energy electron horseshoe distributions.

    PubMed

    Krafft, C; Volokitin, A

    2013-05-01

    Horseshoe distributions are shell-like particle distributions that can arise in space and laboratory plasmas when particle beams propagate into increasing magnetic fields. The present paper studies the stability and the dynamics of wave packets interacting resonantly with electrons presenting low-energy horseshoe or shell-type velocity distributions in a magnetized plasma. The linear instability growth rates are determined as a function of the ratio of the plasma to the cyclotron frequencies, of the velocity and the opening angle of the horseshoe, and of the relative thickness of the shell. The nonlinear stage of the instability is investigated numerically using a symplectic code based on a three-dimensional Hamiltonian model. Simulation results show that the dynamics of the system is mainly governed by wave-particle interactions at Landau and normal cyclotron resonances and that the high-order normal cyclotron resonances play an essential role. Specific features of the dynamics of particles interacting simultaneously with two or more waves at resonances of different natures and orders are discussed, showing that such complex processes determine the main characteristics of the wave spectrum's evolution. Simulations with wave packets presenting quasicontinuous spectra provide a full picture of the relaxation of the horseshoe distribution, revealing two main phases of the evolution: an initial stage of wave energy growth, characterized by a fast filling of the shell, and a second phase of slow damping of the wave energy, accompanied by final adjustments of the electron distribution. The influence of the density inhomogeneity along the horseshoe on the wave-particle dynamics is also discussed.

  3. Grain coarsening in two-dimensional phase-field models with an orientation field

    NASA Astrophysics Data System (ADS)

    Korbuly, Bálint; Pusztai, Tamás; Henry, Hervé; Plapp, Mathis; Apel, Markus; Gránásy, László

    2017-05-01

    In the literature, contradictory results have been published regarding the form of the limiting (long-time) grain size distribution (LGSD) that characterizes the late stage grain coarsening in two-dimensional and quasi-two-dimensional polycrystalline systems. While experiments and the phase-field crystal (PFC) model (a simple dynamical density functional theory) indicate a log-normal distribution, other works including theoretical studies based on conventional phase-field simulations that rely on coarse grained fields, like the multi-phase-field (MPF) and orientation field (OF) models, yield significantly different distributions. In a recent work, we have shown that the coarse grained phase-field models (whether MPF or OF) yield very similar limiting size distributions that seem to differ from the theoretical predictions. Herein, we revisit this problem, and demonstrate in the case of OF models [R. Kobayashi, J. A. Warren, and W. C. Carter, Physica D 140, 141 (2000), 10.1016/S0167-2789(00)00023-3; H. Henry, J. Mellenthin, and M. Plapp, Phys. Rev. B 86, 054117 (2012), 10.1103/PhysRevB.86.054117] that an insufficient resolution of the small angle grain boundaries leads to a log-normal distribution close to those seen in the experiments and the molecular scale PFC simulations. Our paper indicates, furthermore, that the LGSD is critically sensitive to the details of the evaluation process, and raises the possibility that the differences among the LGSD results from different sources may originate from differences in the detection of small angle grain boundaries.

  4. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    NASA Astrophysics Data System (ADS)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
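
    The following sketch shows one way the two transformations described above can be automated with NumPy/SciPy: a Box-Cox transform for slope gradient and an arctangent transform for curvature whose scale is tuned so that the kurtosis approaches the Gaussian value of 3. The synthetic slope and curvature arrays, the small positive offset, and the tuning criterion are illustrative choices, not the authors' exact procedure or their ArcGIS script.

        import numpy as np
        from scipy import stats, optimize

        def transform_slope(slope):
            """Box-Cox transform of slope gradient (requires strictly positive values)."""
            slope = np.asarray(slope, dtype=float)
            shifted = slope + 1e-6                      # avoid zeros before Box-Cox
            transformed, lam = stats.boxcox(shifted)    # lambda chosen by maximum likelihood
            return transformed, lam

        def transform_curvature(curv):
            """Arctangent transform of curvature; the scale s is tuned so that the
            transformed distribution has kurtosis close to the Gaussian value of 3."""
            curv = np.asarray(curv, dtype=float)

            def excess_kurtosis(log_s):
                t = np.arctan(curv / np.exp(log_s))
                return abs(stats.kurtosis(t, fisher=True))  # 0 for a Gaussian

            res = optimize.minimize_scalar(excess_kurtosis, bounds=(-10, 10), method="bounded")
            s = np.exp(res.x)
            return np.arctan(curv / s), s

        rng = np.random.default_rng(1)
        slope = rng.lognormal(mean=1.0, sigma=0.8, size=10000)   # long-tailed, like slope
        curv = rng.standard_t(df=3, size=10000) * 0.01           # heavy-tailed, like curvature
        t_slope, lam = transform_slope(slope)
        t_curv, s = transform_curvature(curv)
        print(f"Box-Cox lambda: {lam:.3f}, skewness after: {stats.skew(t_slope):.3f}")
        print(f"arctan scale: {s:.4g}, kurtosis after: {stats.kurtosis(t_curv, fisher=False):.3f}")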

  5. Risk of portfolio with simulated returns based on copula model

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2015-02-01

    The commonly used tool for measuring the risk of a portfolio of equally weighted stocks is the variance-covariance method. Under extreme circumstances, this method leads to significant underestimation of actual risk due to its assumption of multivariate normality for the joint distribution of stocks. The purpose of this research is to compare the actual risk of a portfolio with the simulated risk of a portfolio in which the joint distribution of two return series is predetermined. The data used are daily stock prices from the ASEAN market for the period January 2000 to December 2012. The copula approach is applied to capture the time-varying dependence among the return series. The results show that the chosen copula families are not suitable to represent the dependence structures of each bivariate return series, with the exception of the Philippines-Thailand pair, for which the t copula appears to be the appropriate choice to depict its dependence. Assuming that the t copula is the joint distribution of each paired series, simulated returns are generated and value-at-risk (VaR) is then applied to evaluate the risk of each portfolio consisting of two simulated return series. The VaR estimates were found to be symmetrical due to the simulation of returns via the elliptical copula-GARCH approach. By comparison, it is found that the actual risks are underestimated for all pairs of portfolios except Philippines-Thailand. This study shows that disregarding the non-normal dependence structure of two series will result in underestimation of the actual risk of the portfolio.
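
    A minimal sketch of the simulation step is given below: uniform pairs are drawn from a Student-t copula, mapped through illustrative heavy-tailed marginals, and the 5 % value-at-risk of an equally weighted two-asset portfolio is read off empirically. The correlation, degrees of freedom, marginal choices, and sample size are assumptions, and the GARCH filtering used in the study is omitted.

        import numpy as np
        from scipy import stats

        def t_copula_sample(n, rho, df, rng):
            """Draw uniform pairs from a bivariate Student-t copula."""
            cov = np.array([[1.0, rho], [rho, 1.0]])
            z = rng.multivariate_normal(np.zeros(2), cov, size=n)
            w = rng.chisquare(df, size=n) / df
            t_samples = z / np.sqrt(w)[:, None]        # correlated bivariate t
            return stats.t.cdf(t_samples, df)          # probability integral transform

        def portfolio_var(n=100000, rho=0.6, df=5, alpha=0.05, seed=42):
            rng = np.random.default_rng(seed)
            u = t_copula_sample(n, rho, df, rng)
            # Map copula uniforms to illustrative heavy-tailed marginal returns.
            r1 = stats.t.ppf(u[:, 0], df=6) * 0.01
            r2 = stats.t.ppf(u[:, 1], df=6) * 0.01
            portfolio = 0.5 * r1 + 0.5 * r2            # equally weighted portfolio
            return -np.quantile(portfolio, alpha)      # VaR reported as a positive loss

        print(f"5% VaR of the simulated portfolio: {portfolio_var():.4f}")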

  6. Anharmonic Normal Mode Analysis of Elastic Network Model Improves the Modeling of Atomic Fluctuations in Protein Crystal Structures

    PubMed Central

    Zheng, Wenjun

    2010-01-01

    Abstract Protein conformational dynamics, despite its significant anharmonicity, has been widely explored by normal mode analysis (NMA) based on atomic or coarse-grained potential functions. To account for the anharmonic aspects of protein dynamics, this study proposes, and has performed, an anharmonic NMA (ANMA) based on the Cα-only elastic network models, which assume elastic interactions between pairs of residues whose Cα atoms or heavy atoms are within a cutoff distance. The key step of ANMA is to sample an anharmonic potential function along the directions of eigenvectors of the lowest normal modes to determine the mean-squared fluctuations along these directions. ANMA was evaluated based on the modeling of anisotropic displacement parameters (ADPs) from a list of 83 high-resolution protein crystal structures. Significant improvement was found in the modeling of ADPs by ANMA compared with standard NMA. Further improvement in the modeling of ADPs is attained if the interactions between a protein and its crystalline environment are taken into account. In addition, this study has determined the optimal cutoff distances for ADP modeling based on elastic network models, and these agree well with the peaks of the statistical distributions of distances between Cα atoms or heavy atoms derived from a large set of protein crystal structures. PMID:20550915
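
    The sketch below sets up the harmonic Calpha elastic-network baseline on which ANMA builds: a pairwise-contact Hessian within a cutoff distance, eigen-decomposition, and mean-squared fluctuations from the lowest non-trivial modes. The random-walk "Calpha trace", cutoff, and spring constant are placeholders, and the anharmonic resampling along each eigenvector, which is the paper's key step, is only indicated by a comment.

        import numpy as np

        def enm_hessian(coords, cutoff=13.0, gamma=1.0):
            """3N x 3N Hessian of a Calpha elastic network model with a distance cutoff."""
            n = len(coords)
            hess = np.zeros((3 * n, 3 * n))
            for i in range(n):
                for j in range(i + 1, n):
                    d = coords[j] - coords[i]
                    r2 = d @ d
                    if r2 > cutoff ** 2:
                        continue
                    block = -gamma * np.outer(d, d) / r2
                    hess[3*i:3*i+3, 3*j:3*j+3] = block
                    hess[3*j:3*j+3, 3*i:3*i+3] = block
                    hess[3*i:3*i+3, 3*i:3*i+3] -= block
                    hess[3*j:3*j+3, 3*j:3*j+3] -= block
            return hess

        def lowest_modes(coords, n_modes=10, cutoff=13.0):
            """Eigenvalues/eigenvectors of the lowest non-trivial normal modes."""
            w, v = np.linalg.eigh(enm_hessian(coords, cutoff))
            return w[6:6 + n_modes], v[:, 6:6 + n_modes]   # skip the six rigid-body modes

        rng = np.random.default_rng(0)
        coords = np.cumsum(rng.normal(scale=2.0, size=(100, 3)), axis=0)  # toy Calpha trace
        evals, evecs = lowest_modes(coords)
        # Harmonic mean-squared fluctuations per residue from the retained modes;
        # ANMA would instead resample an anharmonic potential along each eigenvector.
        msf = np.sum(evecs.reshape(len(coords), 3, -1) ** 2 / evals, axis=(1, 2))
        print(msf[:5])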

  7. Application of the Bootstrap Statistical Method in Deriving Vibroacoustic Specifications

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; Paez, Thomas L.

    2006-01-01

    This paper discusses the Bootstrap Method for specification of vibroacoustic test specifications. Vibroacoustic test specifications are necessary to properly accept or qualify a spacecraft and its components for the expected acoustic, random vibration and shock environments seen on an expendable launch vehicle. Traditionally, NASA and the U.S. Air Force have employed methods of Normal Tolerance Limits to derive these test levels based upon the amount of data available, and the probability and confidence levels desired. The Normal Tolerance Limit method contains inherent assumptions about the distribution of the data. The Bootstrap is a distribution-free statistical subsampling method which uses the measured data themselves to establish estimates of statistical measures of random sources. This is achieved through the computation of large numbers of Bootstrap replicates of a data measure of interest and the use of these replicates to derive test levels consistent with the probability and confidence desired. The comparison of the results of these two methods is illustrated via an example utilizing actual spacecraft vibroacoustic data.
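
    To make the contrast concrete, the sketch below computes a bootstrap percentile estimate of a 95th-percentile level alongside a simplified normal-tolerance-limit style estimate for a single one-third-octave band. The measured levels, the k-factor of 1.645, and the chosen probability/confidence are illustrative and are not taken from the paper.

        import numpy as np

        def bootstrap_p95(levels_db, n_boot=10000, confidence=0.50, seed=0):
            """Bootstrap estimate of the 95th-percentile level at the requested confidence."""
            rng = np.random.default_rng(seed)
            levels_db = np.asarray(levels_db, dtype=float)
            replicates = np.empty(n_boot)
            for b in range(n_boot):
                resample = rng.choice(levels_db, size=len(levels_db), replace=True)
                replicates[b] = np.percentile(resample, 95)
            return np.quantile(replicates, confidence)

        def normal_tolerance_p95(levels_db, k=1.645):
            """Normal-tolerance-limit style estimate: mean + k * std (illustrative k)."""
            return np.mean(levels_db) + k * np.std(levels_db, ddof=1)

        # Illustrative acceleration spectral levels in dB for one one-third-octave band.
        measured = np.array([121.3, 123.8, 119.5, 125.1, 122.6, 124.0, 120.9, 126.2])
        print("Bootstrap P95/50:", round(bootstrap_p95(measured), 2), "dB")
        print("Normal tolerance:", round(normal_tolerance_p95(measured), 2), "dB")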

  8. Characterization of TSP-bound n-alkanes and polycyclic aromatic hydrocarbons at rural and urban sites of Tianjin, China.

    PubMed

    Wu, Shui-Ping; Tao, Shu; Zhang, Zhi-Huan; Lan, Tian; Zuo, Qian

    2007-05-01

    Total suspended particle (TSP) samples were collected and analyzed at rural and urban sites in Tianjin, China during the domestic heating season (15 November to 15 March) of 2003/4 for n-alkanes and 16 polycyclic aromatic hydrocarbons (PAHs). The normalized distribution of n-alkanes, with a peak at C22, C23, C24 or C25, suggested that fossil fuel utilization was the major source of particulate n-alkanes at both sites. The normalized PAH distribution was similar for each sample, and higher-molecular-weight PAHs dominated the profile (around 90%), indicating a strong combustion source at both sites. Precipitation and wind were the most important meteorological factors influencing atmospheric TSP and PAH concentrations. In the urban area, the emission height had a significant influence on PAH levels at different heights under relatively stable atmospheric conditions. Based on diagnostic ratios, coal combustion was the major source of TSP-bound PAHs at both sites.

  9. Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio

    1993-02-01

    The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
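
    For readers unfamiliar with the statistic, the sketch below computes the Euler-Poincare characteristic of a thresholded two-dimensional field as connected regions minus holes, using a smoothed Gaussian random field as a stand-in for the projected galaxy counts; the connectivity convention and threshold levels are simplifying assumptions.

        import numpy as np
        from scipy import ndimage

        def euler_poincare(binary):
            """EPC of a 2D binary field: connected components minus holes.
            Foreground is 4-connected, background 8-connected (complementary rule)."""
            padded = np.pad(binary, 1, constant_values=False)
            four = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
            eight = np.ones((3, 3), dtype=int)
            _, n_fg = ndimage.label(padded, structure=four)
            _, n_bg = ndimage.label(~padded, structure=eight)
            holes = n_bg - 1                    # one background component is the exterior
            return n_fg - holes

        # Toy "projected density" field: smoothed Gaussian noise, thresholded at several levels.
        rng = np.random.default_rng(3)
        field = ndimage.gaussian_filter(rng.standard_normal((512, 512)), sigma=8)
        for nu in (-1.0, 0.0, 1.0):
            threshold = field.mean() + nu * field.std()
            print(f"nu = {nu:+.1f}: EPC = {euler_poincare(field > threshold)}")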

  10. Circularly-symmetric complex normal ratio distribution for scalar transmissibility functions. Part III: Application to statistical modal analysis

    NASA Astrophysics Data System (ADS)

    Yan, Wang-Ji; Ren, Wei-Xin

    2018-01-01

    This study applies the theoretical findings on the circularly-symmetric complex normal ratio distribution of Yan and Ren (2016) [1,2] to transmissibility-based modal analysis from a statistical viewpoint. A probabilistic model of the transmissibility function in the vicinity of the resonant frequency is formulated in the modal domain, and some insightful comments are offered. It theoretically reveals that the statistics of the transmissibility function around the resonant frequency depend solely on the 'noise-to-signal' ratio and the mode shapes. As a sequel to the development of the probabilistic model of the transmissibility function in the modal domain, this study poses the process of modal identification in the context of a Bayesian framework by borrowing a novel paradigm. Implementation issues unique to the proposed approach are resolved by a Lagrange multiplier approach. The study also explores the possibility of applying Bayesian analysis to distinguish harmonic components from structural ones. The approaches are verified using simulated data and experimental test data. The uncertainty behavior due to the variation of different factors is also discussed in detail.
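
    A minimal sketch of estimating a scalar transmissibility function from two measured responses is given below, using an H1-type ratio of cross- to auto-spectral densities; the synthetic band-limited signals, noise levels, and the factor of 2.5 relating the two channels are illustrative assumptions, not the data or estimator used in the paper.

        import numpy as np
        from scipy import signal

        def transmissibility(x_i, x_j, fs, nperseg=1024):
            """H1-type estimate of the scalar transmissibility T_ij(f) = X_i(f) / X_j(f)."""
            f, s_ij = signal.csd(x_i, x_j, fs=fs, nperseg=nperseg)
            _, s_jj = signal.welch(x_j, fs=fs, nperseg=nperseg)
            return f, s_ij / s_jj

        # Two noisy responses of a toy system driven by the same broadband input.
        fs, duration = 1024.0, 60.0
        t = np.arange(0, duration, 1.0 / fs)
        rng = np.random.default_rng(7)
        drive = rng.standard_normal(t.size)
        b, a = signal.butter(2, [0.05, 0.15], btype="bandpass")  # stand-in "resonant" band
        x_j = signal.lfilter(b, a, drive) + 0.01 * rng.standard_normal(t.size)
        x_i = 2.5 * x_j + 0.01 * rng.standard_normal(t.size)     # |T_ij| should be about 2.5
        f, T = transmissibility(x_i, x_j, fs)
        band = (f > 30) & (f < 70)
        print("Mean |T| in the passband:", np.abs(T[band]).mean())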

  11. Stereophotogrammetric Mass Distribution Parameter Determination Of The Lower Body Segments For Use In Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in the biomechanical analysis of motion is a requirement for the accurate calculation of external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because of limitations in the accuracy of inertial properties predicted by regression equations developed on one population and then applied to a different study population, a measurement technique that accurately defines the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from that considered "normal", or who may possess gross asymmetries in shape between their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  12. DENBRAN: A BASIC program for a significance test for multivariate normality of clusters from branching patterns in dendrograms

    NASA Astrophysics Data System (ADS)

    Sneath, P. H. A.

    A BASIC program is presented for significance tests to determine whether a dendrogram is derived from clustering of points that belong to a single multivariate normal distribution. The significance tests are based on statistics of the Kolmogorov-Smirnov type, obtained by comparing the observed cumulative graph of branch levels with a graph for the hypothesis of multivariate normality. The program also permits testing whether the dendrogram could be from a cluster of lower dimensionality due to character correlations. The program makes provision for three similarity coefficients: (1) Euclidean distances, (2) squared Euclidean distances, and (3) Simple Matching Coefficients; and for five cluster methods: (1) WPGMA, (2) UPGMA, (3) Single Linkage (or Minimum Spanning Trees), (4) Complete Linkage, and (5) Ward's Increase in Sums of Squares. The program is entitled DENBRAN.
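
    The sketch below reproduces the general idea in Python rather than BASIC: the distribution of dendrogram fusion levels for the observed data is compared with levels obtained by clustering simulated multivariate normal samples of the same size and dimensionality, with a two-sample Kolmogorov-Smirnov statistic standing in for the program's tabulated criterion. The linkage method, number of simulations, and toy data are assumptions.

        import numpy as np
        from scipy.cluster.hierarchy import linkage
        from scipy.stats import ks_2samp

        def branch_levels(points, method="average"):
            """Fusion levels (heights) of a UPGMA-style dendrogram on Euclidean distances."""
            return linkage(points, method=method, metric="euclidean")[:, 2]

        def mvn_branch_test(data, n_sim=200, seed=0):
            """Compare observed branch levels with those from multivariate normal simulations."""
            rng = np.random.default_rng(seed)
            observed = branch_levels(data)
            mean, cov = data.mean(axis=0), np.cov(data, rowvar=False)
            simulated = np.concatenate([
                branch_levels(rng.multivariate_normal(mean, cov, size=len(data)))
                for _ in range(n_sim)
            ])
            return ks_2samp(observed, simulated)

        rng = np.random.default_rng(1)
        single_cluster = rng.multivariate_normal(np.zeros(4), np.eye(4), size=60)
        two_clusters = np.vstack([single_cluster, single_cluster + 6.0])
        print("single cluster :", mvn_branch_test(single_cluster))
        print("two clusters   :", mvn_branch_test(two_clusters))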

  13. Wavelet entropy characterization of elevated intracranial pressure.

    PubMed

    Xu, Peng; Scalzo, Fabien; Bergsneider, Marvin; Vespa, Paul; Miller, Chad; Hu, Xiao

    2008-01-01

    Intracranial hypertension (ICH) often occurs in patients with traumatic brain injury (TBI), stroke, tumor, etc. The pathology of ICH is still controversial. In this work, we used wavelet entropy and relative wavelet entropy for the first time to study the difference between the normal and hypertension states of ICP. The wavelet entropy revealed findings similar to those of approximate entropy, namely that entropy during the ICH state is smaller than in the normal state. Moreover, wavelet entropy shows that the ICH state has more focused energy in the low wavelet frequency band (0-3.1 Hz) than the normal state. The relative wavelet entropy shows that the energy distributions across the wavelet bands of these two states are actually different. Based on these results, we suggest that ICH may be formed by the re-allocation of oscillation energy within the brain.
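
    A sketch of the two quantities is given below: relative wavelet energies are obtained from a discrete wavelet decomposition (using the PyWavelets package, assumed available), wavelet entropy is their Shannon entropy, and relative wavelet entropy is a Kullback-Leibler style divergence between two band-energy distributions. The surrogate "normal" and "hypertension-like" signals, the db4 wavelet, and the decomposition depth are illustrative only.

        import numpy as np
        import pywt

        def wavelet_energies(x, wavelet="db4", level=6):
            """Relative energy per wavelet band (approximation + details)."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()

        def wavelet_entropy(p):
            """Shannon entropy of the relative wavelet energies."""
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def relative_wavelet_entropy(p, q):
            """Kullback-Leibler style divergence between two band-energy distributions."""
            mask = (p > 0) & (q > 0)
            return np.sum(p[mask] * np.log(p[mask] / q[mask]))

        # Surrogate signals: a "normal" broadband waveform vs. a "hypertension-like"
        # signal with energy concentrated at low frequencies (illustrative only).
        fs = 100.0
        t = np.arange(0, 60, 1.0 / fs)
        rng = np.random.default_rng(5)
        normal_icp = (np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)
                      + 0.3 * rng.standard_normal(t.size))
        hyper_icp = 1.5 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * rng.standard_normal(t.size)
        p, q = wavelet_energies(normal_icp), wavelet_energies(hyper_icp)
        print("WE normal:", wavelet_entropy(p), " WE elevated:", wavelet_entropy(q))
        print("Relative wavelet entropy:", relative_wavelet_entropy(q, p))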

  14. Scattering properties of normal and cancerous tissues from human stomach based on phase-contrast microscope

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Li, Zhifang; Li, Hui

    2012-12-01

    In order to study the scattering properties of normal and cancerous tissues from the human stomach, we collected images of human gastric specimens using a phase-contrast microscope. The images were processed by means of mathematical morphology, from which the equivalent particle size distribution of the tissues can be obtained. Combining this with Mie scattering theory, the scattering properties of the tissues can be calculated. Assuming that the scattering of light in biological tissue can be treated as independent scattering events by different particles, the total scattering properties can be expressed as the sum of the scattering of particles with different diameters. The results suggest that the scattering coefficient of cancerous tissue is significantly higher than that of normal tissue, and that the scattering phase function differs, especially in the backscattering region. These findings offer significant clinical benefit for the diagnosis of cancerous tissue.
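
    The sketch below illustrates the summation step: an equivalent particle-size histogram is converted into a bulk scattering coefficient by weighting single-particle cross sections by number density. The van de Hulst anomalous-diffraction formula is used here as a convenient stand-in for full Mie theory, and the size histograms, refractive indices, and wavelength are illustrative assumptions.

        import numpy as np

        def q_sca_adt(diameter_um, wavelength_um=0.633, n_particle=1.43, n_medium=1.36):
            """Scattering efficiency from the van de Hulst anomalous-diffraction formula
            (a stand-in for full Mie theory, adequate when n_particle/n_medium is near 1)."""
            m = n_particle / n_medium
            x = np.pi * diameter_um * n_medium / wavelength_um     # size parameter
            rho = 2.0 * x * (m - 1.0)
            return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho ** 2) * (1.0 - np.cos(rho))

        def scattering_coefficient(diameters_um, counts, volume_mm3=1.0):
            """Bulk scattering coefficient (1/mm) as the number-weighted sum of cross sections."""
            d = np.asarray(diameters_um, dtype=float)
            geometric_area_um2 = np.pi * (d / 2.0) ** 2
            sigma_um2 = q_sca_adt(d) * geometric_area_um2          # per-particle cross section
            number_density = np.asarray(counts) / volume_mm3       # particles per mm^3
            return np.sum(number_density * sigma_um2) * 1e-6       # um^2/mm^3 -> 1/mm

        # Illustrative equivalent-size histograms (um) for "normal" vs "cancerous" tissue.
        bins = np.linspace(0.5, 8.0, 16)
        normal_counts = 4e5 * np.exp(-(bins - 1.5) ** 2 / 0.8)
        cancer_counts = 4e5 * np.exp(-(bins - 2.5) ** 2 / 2.0)     # larger, broader scatterers
        print("mu_s normal   :", scattering_coefficient(bins, normal_counts), "1/mm")
        print("mu_s cancerous:", scattering_coefficient(bins, cancer_counts), "1/mm")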

  15. Study on discrimination of oral cancer from normal using blood plasma based on fluorescence steady and excited state at excitation wavelength 280 nm

    NASA Astrophysics Data System (ADS)

    Rekha, Pachaiappan; Aruna, Prakasa Rao; Ganesan, Singaravelu

    2016-03-01

    Many research works based on fluorescence spectroscopy have proven its potential in the diagnosis of various diseases using the spectral signatures of native key fluorophores such as tryptophan, tyrosine, collagen, NADH, FAD and porphyrin. The distribution, concentration and conformation of these fluorophores may change depending upon the pathological and metabolic conditions of cells and tissues. In this study, we have made an attempt to characterize the blood plasma of normal subjects and oral cancer patients by native fluorescence spectroscopy at 280 nm excitation. Further, the fluorescence data were analyzed by employing a multivariate statistical method, linear discriminant analysis (LDA), with leave-one-out cross-validation. The results illustrate the potential of the fluorescence spectroscopy technique in the diagnosis of oral cancer using blood plasma.
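
    A sketch of the classification step, linear discriminant analysis with leave-one-out cross-validation, is given below using scikit-learn; the synthetic Gaussian emission bands (a stronger band near 340 nm for "normal" and a red-shifted, broader band for "cancer") are placeholders for the measured plasma spectra, and the class sizes are arbitrary.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        wavelengths = np.arange(300, 500, 2.0)            # emission range at 280 nm excitation

        def synthetic_spectrum(peak_nm, width_nm, amplitude):
            """Gaussian emission band plus measurement noise (placeholder for real spectra)."""
            band = amplitude * np.exp(-(wavelengths - peak_nm) ** 2 / (2 * width_nm ** 2))
            return band + 0.02 * rng.standard_normal(wavelengths.size)

        # Illustrative classes: "normal" plasma with a stronger ~340 nm band, "cancer"
        # with a red-shifted, broadened band (purely synthetic, for demonstration).
        normal = np.array([synthetic_spectrum(340, 25, 1.00) for _ in range(20)])
        cancer = np.array([synthetic_spectrum(352, 32, 0.85) for _ in range(20)])
        X = np.vstack([normal, cancer])
        y = np.array([0] * 20 + [1] * 20)

        scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
        print(f"Leave-one-out accuracy: {scores.mean():.2%}")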

  16. On a framework for generating PoD curves assisted by numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  17. On a framework for generating PoD curves assisted by numerical simulations

    NASA Astrophysics Data System (ADS)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar

    2015-03-01

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
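
    The sketch below shows the conventional signal-response ("a-hat versus a") route to a PoD curve that such frameworks build on: regress log signal amplitude on log defect size, assume normally distributed residuals, and evaluate the probability of exceeding a detection threshold. The synthetic data, the threshold of 1.5, and the a90 calculation are illustrative, and the Bayesian updating and finite-element simulations described above are not reproduced.

        import numpy as np
        from scipy import stats
        from scipy.optimize import brentq

        rng = np.random.default_rng(10)

        # Synthetic "a-hat versus a" data: signal amplitude grows with crack depth plus scatter.
        depth_mm = rng.uniform(0.5, 6.0, size=60)
        amplitude = np.exp(0.3 + 0.9 * np.log(depth_mm) + rng.normal(scale=0.35, size=60))
        threshold = 1.5                               # detection threshold (illustrative)

        # Linear regression of log(amplitude) on log(depth); residuals assumed normal.
        fit = stats.linregress(np.log(depth_mm), np.log(amplitude))
        residuals = np.log(amplitude) - (fit.intercept + fit.slope * np.log(depth_mm))
        sigma = np.std(residuals, ddof=2)

        def pod(a_mm):
            """PoD(a) = P[log signal > log threshold] under the fitted normal model."""
            mean_log_signal = fit.intercept + fit.slope * np.log(a_mm)
            return 1.0 - stats.norm.cdf((np.log(threshold) - mean_log_signal) / sigma)

        for a in (0.5, 1.0, 2.0, 4.0):
            print(f"a = {a:.1f} mm, PoD = {pod(a):.3f}")
        a90 = brentq(lambda a: pod(a) - 0.90, 0.1, 20.0)  # size detected with 90 % probability
        print(f"a90 = {a90:.2f} mm")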

  18. Demography-based adaptive network model reproduces the spatial organization of human linguistic groups

    NASA Astrophysics Data System (ADS)

    Capitán, José A.; Manrubia, Susanna

    2015-12-01

    The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.

  19. Demography-based adaptive network model reproduces the spatial organization of human linguistic groups.

    PubMed

    Capitán, José A; Manrubia, Susanna

    2015-12-01

    The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.
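
    The two empirical regularities cited above can be checked with a few lines of code, sketched below on synthetic data generated to have the stated properties: a log-normal fit to speakers per language (tested on the log scale) and a power-law, allometric relation between population and area estimated by log-log regression. The parameter values and sample size are arbitrary.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        # Synthetic linguistic groups: log-normal speaker numbers and an allometric
        # relation area ~ population**z with multiplicative noise (illustrative data).
        speakers = rng.lognormal(mean=9.0, sigma=2.0, size=2000)
        true_z = 0.8
        area = 0.05 * speakers ** true_z * rng.lognormal(sigma=0.4, size=2000)

        # Log-normal check: fit and KS test on the log-transformed speaker numbers.
        mu, sigma = np.mean(np.log(speakers)), np.std(np.log(speakers), ddof=1)
        ks = stats.kstest(np.log(speakers), "norm", args=(mu, sigma))
        print(f"log-speakers normality: D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

        # Allometric exponent from a log-log regression of area on population.
        fit = stats.linregress(np.log(speakers), np.log(area))
        print(f"estimated allometric exponent z = {fit.slope:.3f} (true {true_z})")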

  20. Effect of rapid thermal annealing temperature on the dispersion of Si nanocrystals in SiO2 matrix

    NASA Astrophysics Data System (ADS)

    Saxena, Nupur; Kumar, Pragati; Gupta, Vinay

    2015-05-01

    The effect of rapid thermal annealing temperature on the dispersion of silicon nanocrystals (Si NCs) embedded in a SiO2 matrix grown by the atom beam sputtering (ABS) method is reported. The dispersion of Si NCs in SiO2 is an important issue for fabricating high-efficiency devices based on Si NCs. Transmission electron microscopy studies reveal that the precipitation of excess silicon is almost uniform and that the particles grow to a nearly uniform size up to 850 °C. The size distribution of the particles broadens and becomes bimodal as the temperature is increased to 950 °C. This suggests that by controlling the annealing temperature, the dispersion of Si NCs can be controlled. The results are supported by selected area electron diffraction (SAED) studies and micro-photoluminescence (PL) spectroscopy. The effect of the particle size distribution on the PL spectrum is discussed on the basis of the tight-binding approximation (TBA) method using Gaussian and log-normal particle size distributions. The study suggests that the dispersion, and consequently the emission energy, varies as a function of the particle size distribution and can be controlled by the annealing parameters.
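
    The sketch below illustrates how a particle-size distribution maps onto an ensemble PL spectrum: each size contributes a Gaussian emission line at an energy given by a simple quantum-confinement relation E(d) = Eg + C/d^2, weighted by a unimodal or bimodal log-normal size distribution. The confinement constant, linewidth, and the two distributions (loosely mimicking the 850 °C and 950 °C cases) are illustrative stand-ins for the tight-binding calculation.

        import numpy as np

        E_G_SI = 1.12          # bulk Si band gap (eV)
        CONFINEMENT_C = 3.73   # confinement constant (eV nm^2), an illustrative value
        LINEWIDTH = 0.06       # single-dot emission linewidth (eV), illustrative

        def emission_energy(d_nm):
            """Simple confinement relation standing in for the tight-binding result."""
            return E_G_SI + CONFINEMENT_C / d_nm ** 2

        def ensemble_pl(energies_ev, sizes_nm, weights):
            """Ensemble PL: size-distribution-weighted sum of Gaussian emission lines."""
            spectrum = np.zeros_like(energies_ev)
            for d, w in zip(sizes_nm, weights):
                e0 = emission_energy(d)
                spectrum += w * np.exp(-(energies_ev - e0) ** 2 / (2 * LINEWIDTH ** 2))
            return spectrum / spectrum.max()

        def lognormal_pdf(d, mu, sigma):
            """Log-normal probability density used as the particle-size weight."""
            return np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (d * sigma * np.sqrt(2 * np.pi))

        sizes = np.linspace(1.5, 8.0, 200)
        energies = np.linspace(1.1, 2.6, 600)

        # Unimodal log-normal size distribution (850 C-like) vs a bimodal one (950 C-like).
        unimodal = lognormal_pdf(sizes, np.log(3.0), 0.15)
        bimodal = 0.6 * lognormal_pdf(sizes, np.log(3.0), 0.15) + 0.4 * lognormal_pdf(sizes, np.log(6.0), 0.12)
        pl_uni = ensemble_pl(energies, sizes, unimodal)
        pl_bi = ensemble_pl(energies, sizes, bimodal)
        print("Peak emission (unimodal):", energies[np.argmax(pl_uni)], "eV")
        print("Peak emission (bimodal) :", energies[np.argmax(pl_bi)], "eV")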
