Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find, and this approach can also raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM; the resulting method is here denoted as skew-normal IM. This flexible model, which includes the usual symmetric normal distribution as a special case, allows continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of the parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of skew-normal IM is assessed via stochastic simulation. The results indicate that skew-normal IM has higher power for QTL detection and better precision of QTL location than standard IM and nonparametric IM.
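As a hedged illustration of the modeling idea in this abstract (not the authors' EM-based interval-mapping code), the sketch below fits a skew-normal and an ordinary normal to a skewed phenotype by maximum likelihood with scipy; the shape parameter a recovers the normal model when a = 0. The phenotype values are simulated stand-ins.

```python
# Sketch (not the authors' EM interval-mapping code): fit a skew-normal and a
# normal to a skewed phenotype by maximum likelihood and compare their fits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pheno = rng.gamma(shape=2.0, scale=1.5, size=300)   # hypothetical skewed phenotype

a, loc, scale = stats.skewnorm.fit(pheno)           # shape a = 0 recovers the normal
ll_skew = stats.skewnorm.logpdf(pheno, a, loc, scale).sum()

mu, sigma = stats.norm.fit(pheno)
ll_norm = stats.norm.logpdf(pheno, mu, sigma).sum()

print(f"skew-normal: a={a:.2f}, loc={loc:.2f}, scale={scale:.2f}, logL={ll_skew:.1f}")
print(f"normal     : mu={mu:.2f}, sigma={sigma:.2f}, logL={ll_norm:.1f}")
```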
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying the robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), which makes no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from the ML and ADF methods. Both the ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic behaved better than the ML test statistic, and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to violation of the normality assumption when data had either symmetric platykurtic distributions or non-symmetric distributions with zero kurtosis.
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic is given by a standard normal cumulative distribution function evaluated at a quantity that depends on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic is given by a standard normal cumulative distribution function evaluated at the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
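The closed-form relationship described above can be sketched with standard binormal-AUC results (textbook formulas consistent with the abstract, not copied from the paper); the means, common SD and sample sizes below are illustrative, and the empirical c is computed from a Mann-Whitney U statistic.

```python
# Sketch of the binormal expressions: with equal variances the predicted c-statistic
# is Phi(sigma * beta / sqrt(2)), where beta is the log-odds ratio; equivalently it is
# Phi of the standardized difference of the explanatory variable. Illustrative values.
import numpy as np
from scipy import stats

mu0, mu1, sigma = 0.0, 0.8, 1.0          # hypothetical means and common SD
beta = (mu1 - mu0) / sigma**2            # log-odds ratio implied by a logistic model
predicted_c = stats.norm.cdf(sigma * beta / np.sqrt(2))
# equivalently Phi((mu1 - mu0) / sqrt(sigma**2 + sigma**2)), the standardized difference

rng = np.random.default_rng(0)
x0 = rng.normal(mu0, sigma, 20_000)      # explanatory variable, without condition
x1 = rng.normal(mu1, sigma, 20_000)      # explanatory variable, with condition
u = stats.mannwhitneyu(x1, x0).statistic
empirical_c = u / (len(x0) * len(x1))    # AUC from the Mann-Whitney U statistic

print(f"predicted c = {predicted_c:.3f}, empirical c = {empirical_c:.3f}")
```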
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
Realized Volatility Analysis in A Spin Model of Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
We calculate the realized volatility of returns in the spin model of financial markets and examine the returns standardized by the realized volatility. We find that moments of the standardized returns agree with the theoretical values for standard normal variables. This is the first evidence that the return distributions of the spin financial markets are consistent with a finite-variance mixture of normal distributions, which is also observed empirically in real financial markets.
Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175
Rochon, Justine; Kieser, Meinhard
2011-11-01
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
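A minimal simulation in the spirit of the study, with illustrative settings (n = 20, exponential data, alpha = 0.05), estimates the conditional Type I error of the one-sample t-test among samples that pass a Shapiro-Wilk pretest:

```python
# Sketch of the two-stage procedure discussed above: draw samples from an
# exponential distribution, keep only those that "pass" a Shapiro-Wilk pretest,
# and estimate the conditional Type I error of the one-sample t-test among the
# retained samples. All settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, alpha, n_rep = 20, 0.05, 20_000
true_mean = 1.0                      # H0 mean of an Exp(1) population
passed, rejected = 0, 0

for _ in range(n_rep):
    x = rng.exponential(scale=1.0, size=n)
    if stats.shapiro(x).pvalue > alpha:          # sample passed normality screening
        passed += 1
        if stats.ttest_1samp(x, true_mean).pvalue < alpha:
            rejected += 1

print(f"pass rate: {passed / n_rep:.3f}")
print(f"conditional Type I error: {rejected / max(passed, 1):.3f}")
```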
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-01-30
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients that underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients that underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Optimization of the Box-Cox transformation offers a solution for identifying normal SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
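A minimal sketch of the described lambda search, using simulated stand-in SUVs rather than the study's data:

```python
# Sketch of the lambda search described above: apply the Box-Cox transform over a
# grid of lambda values and keep the one whose transformed SUVs give the largest
# Shapiro-Wilk P-value. Data and grid are illustrative, not the study's SUVs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
suv_max = rng.lognormal(mean=1.0, sigma=0.6, size=57)   # hypothetical positive SUVs

lambdas = np.linspace(-2, 2, 401)
pvals = [stats.shapiro(stats.boxcox(suv_max, lmbda=lam)).pvalue for lam in lambdas]
best = lambdas[int(np.argmax(pvals))]

print(f"optimal Box-Cox lambda: {best:.2f} (Shapiro-Wilk P = {max(pvals):.3f})")
print(f"log transform (lambda = 0) P = {stats.shapiro(np.log(suv_max)).pvalue:.3f}")
```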
Sketching Curves for Normal Distributions--Geometric Connections
ERIC Educational Resources Information Center
Bosse, Michael J.
2006-01-01
Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…
Limpert, Eckhard; Stahel, Werner A.
2011-01-01
Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, ×/ ("times-divide"), and corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* ×/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325
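A short sketch of the recommended multiplicative summary on simulated log-normal data (the coverage interpretation of x̄* ×/ s* follows the authors' description; the data are illustrative):

```python
# Sketch of the multiplicative summary recommended above: the geometric mean x* and
# multiplicative standard deviation s*, with the interval x* times/divide s* covering
# roughly 68% of a log-normal sample (and x* times/divide s*^2 roughly 95%).
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(mean=2.0, sigma=0.5, size=1_000)   # illustrative skewed data

log_x = np.log(x)
gm = np.exp(log_x.mean())            # multiplicative (geometric) mean x*
gsd = np.exp(log_x.std(ddof=1))      # multiplicative standard deviation s*

lo68, hi68 = gm / gsd, gm * gsd
lo95, hi95 = gm / gsd**2, gm * gsd**2
print(f"x* = {gm:.2f}, s* = {gsd:.2f}")
print(f"68% range: {lo68:.2f} to {hi68:.2f}, covers {np.mean((x >= lo68) & (x <= hi68)):.2%}")
print(f"95% range: {lo95:.2f} to {hi95:.2f}, covers {np.mean((x >= lo95) & (x <= hi95)):.2%}")
```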
Detection of Person Misfit in Computerized Adaptive Tests with Polytomous Items.
ERIC Educational Resources Information Center
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.
2002-01-01
Compared the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items for paper-and-pencil (P&P) and computerized adaptive tests (CATs). Results show that the empirical distribution of the statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Also…
Bayesian alternative to the ISO-GUM's use of the Welch Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
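The contrast between the two coverage factors can be sketched as follows for a measurand y = x1 - x2; the data are illustrative, and the sketch uses the plain combined standard uncertainty rather than reproducing the paper's Bayesian standard uncertainty.

```python
# Sketch comparing a t-interval with Welch-Satterthwaite effective degrees of
# freedom against a simple normal (Gaussian) interval with the same combined
# standard uncertainty, for y = x1 - x2 from two independent normal samples.
import numpy as np
from scipy import stats

x1 = np.array([10.21, 10.25, 10.19, 10.30, 10.27])   # illustrative measurements
x2 = np.array([9.95, 10.02, 9.99, 10.05])

m1, m2 = x1.mean(), x2.mean()
u1 = x1.std(ddof=1) / np.sqrt(len(x1))
u2 = x2.std(ddof=1) / np.sqrt(len(x2))
u_c = np.hypot(u1, u2)                                # combined standard uncertainty

# Welch-Satterthwaite effective degrees of freedom
nu_eff = u_c**4 / (u1**4 / (len(x1) - 1) + u2**4 / (len(x2) - 1))

k_t = stats.t.ppf(0.975, nu_eff)      # coverage factor from the scaled t-distribution
k_n = stats.norm.ppf(0.975)           # coverage factor from a normal approximation

y = m1 - m2
print(f"y = {y:.3f}, u_c = {u_c:.3f}, nu_eff = {nu_eff:.1f}")
print(f"t-based 95% interval     : {y:.3f} +/- {k_t * u_c:.3f}")
print(f"normal-based 95% interval: {y:.3f} +/- {k_n * u_c:.3f}")
```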
Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas
2002-01-01
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
ERIC Educational Resources Information Center
Salamy, A.
1981-01-01
Determines the frequency distribution of Brainstem Auditory Evoked Potential variables (BAEP) for premature babies at different stages of development--normal newborns, infants, young children, and adults. The author concludes that the assumption of normality underlying most "standard" statistical analyses can be met for many BAEP…
Confidence Limits for the Indirect Effect: Distribution of the Product and Resampling Methods
ERIC Educational Resources Information Center
MacKinnon, David P.; Lockwood, Chondra M.; Williams, Jason
2004-01-01
The most commonly used method to test an indirect effect is to divide the estimate of the indirect effect by its standard error and compare the resulting z statistic with a critical value from the standard normal distribution. Confidence limits for the indirect effect are also typically based on critical values from the standard normal…
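The standard-normal procedure described above (often called the Sobel test) can be written in a few lines; the coefficients and standard errors below are illustrative. The study's point is that, because the product of two coefficients is not normally distributed, intervals built this way tend to be less accurate than distribution-of-the-product or resampling methods.

```python
# Sketch of the standard-normal approach to the indirect effect: a*b divided by its
# first-order (Sobel) standard error, compared with z critical values. Illustrative numbers.
import numpy as np
from scipy.stats import norm

a, se_a = 0.40, 0.10     # effect of X on the mediator M
b, se_b = 0.35, 0.12     # effect of M on Y, controlling for X

indirect = a * b
se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)   # first-order standard error

z = indirect / se_ab
ci = (indirect - norm.ppf(0.975) * se_ab, indirect + norm.ppf(0.975) * se_ab)
print(f"ab = {indirect:.3f}, z = {z:.2f}, two-sided p = {2 * norm.sf(abs(z)):.4f}")
print(f"symmetric 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```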
NASA Technical Reports Server (NTRS)
Usry, J. W.
1983-01-01
Wind shear statistics were calculated for a simulated set of wind profiles based on a proposed standard wind field data base. Wind shears were grouped in altitude bands of 100 ft between 100 and 1400 ft and in wind shear increments of 0.025 knot/ft. Frequency distributions, means, and standard deviations were derived for each altitude band and for the total sample of both data sets. It was found that frequency distributions in each altitude band for the simulated data set were more dispersed below 800 ft and less dispersed above 900 ft than those for the measured data set. Total sample frequency of occurrence for the two data sets was about equal for wind shear values between ±0.075 knot/ft, but the simulated data set had significantly larger values for all wind shears outside these boundaries. Neither data set was normally distributed; similar results are observed from the cumulative frequency distributions.
The retest distribution of the visual field summary index mean deviation is close to normal.
Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz
2016-09-01
When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test was used to detect any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
[Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].
Zhu, Chun; Zhang, Xu
2010-10-01
Vehicle emission is one of the main sources of fine/ultrafine particles in many cities. This study first presents daily mean particle size distributions of a mixed diesel/CNG bus traffic flow, based on four days of consecutive real-world measurements in an Australian road tunnel. Emission factors (EFs) for the particle size distributions of diesel buses and CNG buses are obtained by MLR methods; the particle distributions of diesel buses and CNG buses are observed as single accumulation-mode and nuclei-mode distributions, respectively. Particle size distributions of the mixed traffic flow are decomposed into two log-normal fitting curves for each 30 min interval mean scan; the goodness of fit between the combined fitting curves and the corresponding in-situ scans, for a total of 90 fitted scans, ranges from 0.972 to 0.998. Finally, the particle size distributions of diesel buses and CNG buses are quantified by statistical whisker-box charts. For the log-normal particle size distribution of diesel buses, accumulation-mode diameters are 74.5-86.5 nm and geometric standard deviations are 1.88-2.05. For the log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm and geometric standard deviations are 1.27-1.3.
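A hedged sketch of the two-mode log-normal decomposition described above, fitting a synthetic stand-in for a 30-min mean scan (function names and parameter values are assumptions for illustration):

```python
# Sketch: fit a measured number-size distribution (dN/dlogDp versus Dp) with the sum
# of two log-normal modes, one nucleation mode (CNG-like) and one accumulation mode
# (diesel-like). The synthetic "measurement" stands in for a 30-min mean scan.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(dp, n_tot, dp_g, sigma_g):
    """dN/dlogDp of one log-normal mode (dp in nm)."""
    return (n_tot / (np.log10(sigma_g) * np.sqrt(2 * np.pi))
            * np.exp(-(np.log10(dp) - np.log10(dp_g))**2 / (2 * np.log10(sigma_g)**2)))

def two_modes(dp, n1, dpg1, sg1, n2, dpg2, sg2):
    return lognormal_mode(dp, n1, dpg1, sg1) + lognormal_mode(dp, n2, dpg2, sg2)

dp = np.logspace(np.log10(10), np.log10(400), 60)             # 10-400 nm channels
true = two_modes(dp, 8e3, 21.0, 1.28, 5e3, 80.0, 1.95)
rng = np.random.default_rng(5)
measured = true * rng.normal(1.0, 0.05, dp.size)              # 5% noise

p0 = [5e3, 20.0, 1.3, 5e3, 70.0, 1.9]                         # initial guesses
popt, _ = curve_fit(two_modes, dp, measured, p0=p0)
labels = ["N1", "Dpg1", "sigma_g1", "N2", "Dpg2", "sigma_g2"]
print({k: round(float(v), 2) for k, v in zip(labels, popt)})
```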
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Time-independent models of asset returns revisited
NASA Astrophysics Data System (ADS)
Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.
2000-07-01
In this study we investigate various well-known time-independent models of asset returns, namely the simple normal distribution, Student t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion jump, and compound normal distribution. For this we use Standard and Poor's 500 index data of the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look visually more Lévy-type distributed than it is. This phenomenon is especially evident in the artificial data that has been generated by an inflated random walk process.
Distribution Development for STORM Ingestion Input Parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulton, John
The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report described the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Meanwhile, Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a standard deviation value of 3.33 (Bq_crop/kg)/(Bq_soil/kg).
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-01-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented. PMID:15238544
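A simplified illustration of the LOD construction (not the authors' proposed model): the LOD is log10 of the ratio of the maximized mixture likelihood to the single-normal likelihood. Here the two-component mixture is fitted as a free mixture by a small EM routine, which is enough to show why a mixture essentially never fits worse than a single normal; real interval mapping instead constrains the mixing weights by the genotype probabilities at the test locus.

```python
# Simplified sketch: LOD = log10 of the likelihood ratio of a free two-component
# normal mixture (common SD, fitted by EM) over a single fitted normal.
import numpy as np
from scipy.stats import norm

def em_two_normals(y, n_iter=200):
    w, mu1, mu2 = 0.5, y.mean() - y.std(), y.mean() + y.std()
    sd = y.std()
    for _ in range(n_iter):
        p1 = w * norm.pdf(y, mu1, sd)
        p2 = (1 - w) * norm.pdf(y, mu2, sd)
        r = p1 / (p1 + p2)                       # E-step: responsibilities
        w = r.mean()                             # M-step updates
        mu1 = np.sum(r * y) / np.sum(r)
        mu2 = np.sum((1 - r) * y) / np.sum(1 - r)
        sd = np.sqrt(np.sum(r * (y - mu1)**2 + (1 - r) * (y - mu2)**2) / len(y))
    return np.sum(np.log(w * norm.pdf(y, mu1, sd) + (1 - w) * norm.pdf(y, mu2, sd)))

rng = np.random.default_rng(11)
y = rng.standard_t(df=3, size=200)               # hypothetical non-normal phenotype

ll_single = norm.logpdf(y, y.mean(), y.std()).sum()
ll_mix = em_two_normals(y)
lod = (ll_mix - ll_single) / np.log(10)
print(f"LOD of free mixture vs single normal: {lod:.2f}")
```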
Plasma Electrolyte Distributions in Humans-Normal or Skewed?
Feldman, Mark; Dickson, Beverly
2017-11-01
It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na+), potassium (K+), chloride (Cl-) and bicarbonate (HCO3-) distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na+ distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl- distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. On the contrary, both the plasma K+ and HCO3- distributions were significantly skewed to the right (P < 0.01 versus zero skew). There was also a suggestion from examining frequency distribution curves that the K+ and HCO3- distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods used to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
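The zero-skew comparison described above can be sketched with scipy's skewness test; the electrolyte values below are synthetic stand-ins in physiologic ranges, not the study data:

```python
# Sketch of the skewness check: compare the sample skewness of each electrolyte to
# the zero skew of a normal distribution using scipy's skewtest, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)
sodium = rng.normal(140, 2.5, 237)                            # roughly symmetric
potassium = 3.5 + rng.gamma(shape=4.0, scale=0.15, size=237)  # right-skewed stand-in

for name, values in [("Na+", sodium), ("K+", potassium)]:
    res = stats.skewtest(values)
    print(f"{name}: skewness = {stats.skew(values):+.2f}, "
          f"P (vs zero skew) = {res.pvalue:.3f}")
```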
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of the response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if the distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
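A hedged sketch of the simulation idea: subject-level parameters of a generic one-compartment oral-absorption model (not the paper's two-stage models) are drawn from log-normal population distributions, profiles are built with multiplicative measurement error, and the shape of log(AUC) and log(Cmax) is then examined.

```python
# Sketch: simulate concentration-time profiles, compute AUC (trapezoid rule) and
# Cmax per subject, and inspect the skewness/kurtosis of log(AUC) and log(Cmax).
# All model choices and parameter values are generic illustrations.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(123)
n_subj, dose = 2_000, 100.0
t = np.linspace(0.25, 24, 30)

ka = rng.lognormal(np.log(1.5), 0.3, n_subj)     # absorption rate (1/h)
ke = rng.lognormal(np.log(0.2), 0.3, n_subj)     # elimination rate (1/h)
v = rng.lognormal(np.log(30.0), 0.25, n_subj)    # volume of distribution (L)

conc = (dose * ka[:, None] / (v[:, None] * (ka - ke)[:, None])
        * (np.exp(-ke[:, None] * t) - np.exp(-ka[:, None] * t)))
conc *= rng.lognormal(0.0, 0.1, conc.shape)      # multiplicative measurement error

auc = trapezoid(conc, t, axis=1)
cmax = conc.max(axis=1)

for name, x in [("log(AUC)", np.log(auc)), ("log(Cmax)", np.log(cmax))]:
    z = (x - x.mean()) / x.std(ddof=1)
    print(f"{name}: skewness = {stats.skew(z):+.2f}, excess kurtosis = {stats.kurtosis(z):+.2f}")
```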
ERIC Educational Resources Information Center
Kelava, Augustin; Nagengast, Benjamin
2012-01-01
Structural equation models with interaction and quadratic effects have become a standard tool for testing nonlinear hypotheses in the social sciences. Most of the current approaches assume normally distributed latent predictor variables. In this article, we present a Bayesian model for the estimation of latent nonlinear effects when the latent…
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution is subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. In order to evaluate the impacts of skewness, positive and negative excess kurtosis, standard skew normal, Pearson and the raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations, the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or consider alternative methods that are less sensitive to non-normality.
Multivariate Models for Normal and Binary Responses in Intervention Studies
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Chang, Wanchen
2016-01-01
Use of multivariate analysis (e.g., multivariate analysis of variance) is common when normally distributed outcomes are collected in intervention research. However, when mixed responses--a set of normal and binary outcomes--are collected, standard multivariate analyses are no longer suitable. While mixed responses are often obtained in…
Location tests for biomarker studies: a comparison using simulations for the two-sample case.
Scheinhardt, M O; Ziegler, A
2013-01-01
Gene, protein, or metabolite expression levels are often non-normally distributed, heavy tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy-tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy-tailed distributions.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
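One plausible reading of the recovered-variance construction, applied to a normal percentile mu + z_p*sigma, combines separate confidence limits for the mean and the SD in MOVER fashion; the sketch below follows that assumption and is not code from the paper.

```python
# Hedged sketch: separate CIs for the mean (t-based) and SD (chi-square based) are
# combined MOVER-style into a closed-form CI for the normal percentile mu + zp*sigma.
import numpy as np
from scipy import stats

def percentile_ci(x, p=0.95, conf=0.95):
    # assumes p > 0.5 so that zp > 0
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)
    zp = stats.norm.ppf(p)
    a = 1 - conf

    # separate confidence limits for the mean and the standard deviation
    tcrit = stats.t.ppf(1 - a / 2, n - 1)
    l1, u1 = xbar - tcrit * s / np.sqrt(n), xbar + tcrit * s / np.sqrt(n)
    l2 = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - a / 2, n - 1))
    u2 = s * np.sqrt((n - 1) / stats.chi2.ppf(a / 2, n - 1))

    est = xbar + zp * s
    lower = est - np.sqrt((xbar - l1) ** 2 + zp ** 2 * (s - l2) ** 2)
    upper = est + np.sqrt((u1 - xbar) ** 2 + zp ** 2 * (u2 - s) ** 2)
    return est, (lower, upper)

rng = np.random.default_rng(8)
x = rng.normal(50, 10, size=40)       # illustrative sample
print(percentile_ci(x))
```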
Estimating insect flight densities from attractive trap catches and flight height distributions
USDA-ARS?s Scientific Manuscript database
Insect species often exhibit a specific mean flight height and vertical flight distribution that approximates a normal distribution with a characteristic standard deviation (SD). Many studies in the literature report catches on passive (non-attractive) traps at several heights. These catches were us...
Yang, Huixia; Wei, Yumei; Su, Rina; Wang, Chen; Meng, Wenying; Wang, Yongqing; Shang, Lixin; Cai, Zhenyu; Ji, Liping; Wang, Yunfeng; Sun, Ying; Liu, Jiaxiu; Wei, Li; Sun, Yufeng; Zhang, Xueying; Luo, Tianxia; Chen, Haixia; Yu, Lijun
2016-01-01
Objective To use Z-scores to compare different charts of femur length (FL) applied to our population, with the aim of identifying the most appropriate chart. Methods A retrospective study was conducted in Beijing. Fifteen hospitals in Beijing were chosen as clusters using a systemic cluster sampling method, in which 15,194 pregnant women delivered from June 20th to November 30th, 2013. The measurements of FL in the second and third trimester were recorded, as well as the last measurement obtained before delivery. Based on the inclusion and exclusion criteria, we identified FL measurements from 19996 ultrasounds from 7194 patients between 11 and 42 weeks gestation. The FL data were then transformed into Z-scores that were calculated using three series of reference equations obtained from three reports: Leung TN, Pang MW et al (2008); Chitty LS, Altman DG et al (1994); and Papageorghiou AT et al (2014). Each Z-score distribution was summarized by its mean, standard deviation (SD), skewness and kurtosis, and was compared with the standard normal distribution using the Kolmogorov-Smirnov test. The histogram of each distribution was superimposed on the non-skewed standard normal curve (mean = 0, SD = 1) to provide a direct visual impression. Finally, the sensitivity and specificity of each reference chart for identifying fetuses <5th or >95th percentile (based on the observed distribution of Z-scores) were calculated. The Youden index was also listed. A scatter diagram with the 5th, 50th, and 95th percentile curves calculated from and superimposed on each reference chart was presented to provide a visual impression. Results The three Z-score distribution curves appeared to be normal, but none of them matched the expected standard normal distribution. In our study, the Papageorghiou reference curve provided the best results, with a sensitivity of 100% for identifying fetuses with measurements <5th and >95th percentile, and specificities of 99.9% and 81.5%, respectively. Conclusions It is important to choose an appropriate reference curve when defining what is normal. The Papageorghiou reference curve for FL seems to be the best fit for our population. Perhaps it is time to change our reference curve for femur length. PMID:27458922
Bidisperse and polydisperse suspension rheology at large solid fraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pednekar, Sidhant; Chun, Jaehun; Morris, Jeffrey F.
At the same solid volume fraction, bidisperse and polydisperse suspensions display lower viscosities, and weaker normal stress response, compared to monodisperse suspensions. The reduction of viscosity associated with size distribution can be explained by an increase of the maximum flowable, or jamming, solid fraction. In this work, concentrated or "dense" suspensions are simulated under strong shearing, where thermal motion and repulsive forces are negligible, but we allow for particle contact with a mild frictional interaction with an interparticle friction coefficient of 0.2. Aspects of bidisperse suspension rheology are first revisited to establish that the approach reproduces established trends; the study of bidisperse suspensions at size ratios of large to small particle radii of 2 to 4 shows that a minimum in the viscosity occurs for zeta slightly above 0.5, where zeta=phi_{large}/phi is the fraction of the total solid volume occupied by the large particles. The simple shear flows of polydisperse suspensions with truncated normal and log normal size distributions, and of bidisperse suspensions which are statistically equivalent with these polydisperse cases up to the third moment of the size distribution, are simulated and the rheologies are extracted. Prior work shows that such distributions with equivalent low-order moments have similar phi_{m}, and the rheological behaviors of the normal, log normal and bidisperse cases are shown to be in close agreement for a wide range of standard deviation in particle size, with standard correlations which are functionally dependent on phi/phi_{m} providing excellent agreement with the rheology found in simulation. The close agreement of both viscosity and normal stress response between bi- and polydisperse suspensions demonstrates the controlling influence of the maximum packing fraction in noncolloidal suspensions. Microstructural investigations and the stress distribution according to particle size are also presented.
Determining Normal-Distribution Tolerance Bounds Graphically
NASA Technical Reports Server (NTRS)
Mezzacappa, M. A.
1983-01-01
Graphical method requires calculations and table lookup. Distribution established from only three points: upper and lower confidence bounds of the mean and lower confidence bound of the standard deviation. Method requires only a few calculations with simple equations. Graphical procedure establishes best-fit line for measured data and bounds for selected confidence level and any distribution percentile.
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
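Assuming the log-normal model above, the mean and standard deviation of the underlying normal curve (in log10 units) follow from the standard log-normal moment relations Mn = exp(mu + sigma^2/2) and Mw/Mn = exp(sigma^2); the Mn and Mw values in the sketch are illustrative, and the result lands inside the ranges quoted in the abstract.

```python
# Hedged sketch: recover the log10-scale mean and SD of a log-normal molecular weight
# distribution from number- and weight-average molecular weights (Mn, Mw).
import numpy as np

def lognormal_params_from_Mn_Mw(Mn, Mw):
    sigma_ln = np.sqrt(np.log(Mw / Mn))          # SD of ln(M)
    mu_ln = np.log(Mn) - sigma_ln**2 / 2         # mean of ln(M)
    ln10 = np.log(10.0)
    return mu_ln / ln10, sigma_ln / ln10         # mean and SD of log10(M)

mean_log10, sd_log10 = lognormal_params_from_Mn_Mw(Mn=700.0, Mw=1250.0)
print(f"mean(log10 M) = {mean_log10:.2f}, SD(log10 M) = {sd_log10:.2f}")
# -> roughly 2.7 and 0.33, inside the ranges quoted above
```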
Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan
2016-01-01
The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
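A minimal ABC rejection sketch of the proposed approach, with illustrative priors, tolerance, sample size and "reported" summaries (min, quartiles, median, max):

```python
# Sketch of the ABC idea: propose (mu, sigma) from priors, simulate a sample of the
# study's size, compute the same reported summaries, and accept proposals whose
# summaries are close to the reported ones. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(99)
n = 50
reported = np.array([2.5, 8.9, 12.1, 15.0, 22.0])   # min, Q1, median, Q3, max

def summaries(x):
    return np.array([x.min(), *np.percentile(x, [25, 50, 75]), x.max()])

accepted = []
for _ in range(100_000):
    mu = rng.uniform(0, 30)                # flat priors (illustrative)
    sigma = rng.uniform(0.1, 20)
    sim = rng.normal(mu, sigma, n)
    dist = np.sqrt(np.mean((summaries(sim) - reported) ** 2))
    if dist < 1.5:                         # acceptance tolerance
        accepted.append((mu, sigma))

accepted = np.array(accepted)
print(f"accepted {len(accepted)} draws")
print(f"posterior mean of mu    = {accepted[:, 0].mean():.2f}")
print(f"posterior mean of sigma = {accepted[:, 1].mean():.2f}")
```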
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
Code of Federal Regulations, 2010 CFR
2010-01-01
... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Structure Flight Loads § 23.321 General. (a) Flight load factors represent the ratio of the aerodynamic force component (acting normal to... distribution of disposable load within the operating limitations specified in §§ 23.1583 through 23.1589. (c...
[Establishment of Assessment Method for Air Bacteria and Fungi Contamination].
Zhang, Hua-ling; Yao, Da-jun; Zhang, Yu; Fang, Zi-liang
2016-03-15
In this paper, in order to address existing problems in the assessment of air bacteria and fungi contamination, indoor and outdoor field concentrations of airborne bacteria and fungi measured by the impact method and the settlement method in existing documents were collected and analyzed. The chi-square goodness-of-fit test was then used to test whether these concentration data obeyed a normal distribution at the significance level of α = 0.05, and, combined with the 3σ principle of the normal distribution and the current assessment standards, suggested ranges of air microbial concentrations were determined. The research results could provide a reference for developing air bacteria and fungi contamination assessment standards in the future.
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) if characteristic information can be obtained from the distribution histogram of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were unimodal but somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histogram of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
Extreme Mean and Its Applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.
1979-01-01
Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate even for very large samples is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data are included for ready application. An example is included to demonstrate the usefulness of extreme mean application.
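One reading of the "extreme mean" is the mean of the upper p-th tail of a normal distribution; the sketch below computes that quantity with a truncated normal and checks it against the closed form for the standard normal. It illustrates the concept only and is not the paper's unbiased estimator.

```python
# Mean of the upper p-th tail of a normal distribution (a sketch of the
# "extreme mean" concept, not the paper's estimator).
from scipy import stats

mu, sigma, p = 0.0, 1.0, 0.05            # population parameters and tail probability
z_p = stats.norm.ppf(1 - p)              # truncation point leaving p in the upper tail

# truncnorm takes standardized bounds (a, b); b = inf gives the upper tail only
upper_tail = stats.truncnorm(a=z_p, b=float("inf"), loc=mu, scale=sigma)
print("extreme mean (upper 5% tail):", upper_tail.mean())

# Closed-form check for the standard normal: E[X | X > z_p] = phi(z_p) / p
print("closed form:", stats.norm.pdf(z_p) / p)
```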
NASA Technical Reports Server (NTRS)
Marble, Frank E.; Ritter, William K.; Miller, Mahlon A.
1946-01-01
For the normal range of engine power the impeller provided marked improvement over the standard spray-bar injection system. Mixture distribution at cruising was excellent, maximum cylinder temperatures were reduced about 30 degrees F, and general temperature distribution was improved. The uniform mixture distribution restored the normal response of cylinder temperature to mixture enrichment and it reduced the possibility of carburetor icing, while no serious loss in supercharger pressure rise resulted from injection of fuel near the impeller outlet. The injection impeller also furnished a convenient means of adding water to the charge mixture for internal cooling.
14 CFR 27.787 - Cargo and baggage compartments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Design and Construction Personnel and Cargo... for its placarded maximum weight of contents and for the critical load distributions at the... authorized weight of cargo and baggage at the critical loading distribution. (d) If cargo compartment lamps...
Krishnamoorthy, K; Oral, Evrim
2017-12-01
Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
The proposed method of total observation and forecast error variance correction is based on the assumption that the "observed-minus-forecast" residuals (O-F) are normally distributed, where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) that are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a₃ = μ₃/σ³ and the kurtosis a₄ = μ₄/σ⁴ − 3, where μᵢ is the i-th order central moment and σ is the standard deviation. It is well known that for a normal distribution a₃ = a₄ = 0.
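A small sketch of that normality check, computing a₃ and a₄ for a set of O-F residuals; the residuals here are simulated stand-ins.

```python
# Skewness a3 and excess kurtosis a4 of observed-minus-forecast residuals,
# as used above to gauge closeness to normality (synthetic residuals).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
o_minus_f = rng.normal(0.0, 1.5, 10_000)        # hypothetical O-F residuals

a3 = stats.skew(o_minus_f)                      # mu_3 / sigma^3
a4 = stats.kurtosis(o_minus_f, fisher=True)     # mu_4 / sigma^4 - 3
print(f"a3 (skewness) = {a3:.3f}, a4 (excess kurtosis) = {a4:.3f}")
# For a normal distribution both are approximately zero.
```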
14 CFR 27.1503 - Airspeed limitations: general.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Operating Limitations and Information... established. (b) When airspeed limitations are a function of weight, weight distribution, altitude, rotor...
Code of Federal Regulations, 2010 CFR
2010-01-01
... STANDARDS: NORMAL CATEGORY ROTORCRAFT Strength Requirements Flight Loads § 27.321 General. (a) The flight load factor must be assumed to act normal to the longitudinal axis of the rotorcraft, and to be equal... from the design minimum weight to the design maximum weight; and (2) With any practical distribution of...
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong
2016-03-01
Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests (p < 0.05). In a real dataset, SAN had the lowest SDM and Kolmogorov-Smirnov values for blood urea nitrogen, hematocrit, hemoglobin, and serum potassium, and the lowest SDM for serum creatinine (p < 0.05). Subgroup-adjusted normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
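Two of the spreadsheet calculations described above are easy to sketch outside Excel: the value corresponding to a cumulative probability given a mean and standard deviation, and a normal distribution recovered from two (value, probability) pairs. The numbers below are hypothetical inputs.

```python
# Sketch of two of the Toolset calculations (hypothetical inputs).
from scipy import stats

# (1) Normal Distribution Estimates: value at cumulative probability 0.95
mean, sd = 100.0, 15.0
print("95th percentile:", stats.norm.ppf(0.95, loc=mean, scale=sd))

# (2) Normal Distribution from two Data Points: solve mean and SD from two quantiles
x1, p1 = 90.0, 0.25            # data point and its cumulative probability
x2, p2 = 120.0, 0.90
z1, z2 = stats.norm.ppf(p1), stats.norm.ppf(p2)
sigma = (x2 - x1) / (z2 - z1)
mu = x1 - sigma * z1
print("recovered mean and SD:", mu, sigma)
```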
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
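The rank-based inverse normal transformation mentioned above can be sketched as follows; this is a generic Blom-offset version applied to a simulated skewed trait, not the GAW 19 pipeline.

```python
# Rank-based inverse normal transformation (Blom offsets) of a skewed trait.
import numpy as np
from scipy import stats

def rank_inverse_normal(x, c=3.0 / 8.0):
    """Map values to normal quantiles of their (offset) ranks."""
    ranks = stats.rankdata(x)                        # average ranks for ties
    return stats.norm.ppf((ranks - c) / (len(x) - 2 * c + 1))

rng = np.random.default_rng(3)
trait = rng.gamma(shape=2.0, scale=1.5, size=1000)   # skewed raw trait
transformed = rank_inverse_normal(trait)
print("skewness before:", stats.skew(trait))
print("skewness after: ", stats.skew(transformed))
```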
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distribution is usually skewed severely by the presence of hot spots in contaminated sites. This causes difficulties for accurate geostatistical data transformation. Three types of typical normal distribution transformation methods termed the normal score, Johnson, and Box-Cox transformations were applied to compare the effects of spatial interpolation with normal distribution transformation data of benzo(b)fluoranthene in a large-scale coking plant-contaminated site in north China. Three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging has a minimum root-mean-square error of 1.17 and a mean error of 0.19, which was more accurate than the other two models. The area with fewer sampling points and that with high levels of contamination showed the largest prediction standard errors based on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy for determination of remediation boundaries.
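Two of the normalizing transformations compared above can be sketched directly with scipy: Box-Cox with a maximum-likelihood lambda, and a normal-score (rank-to-quantile) transform. The concentration data below are synthetic; the Johnson transformation and the kriging step are omitted.

```python
# Box-Cox and normal-score transformations of skewed concentration data,
# with a Kolmogorov-Smirnov check against the standard normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
conc = rng.lognormal(mean=1.0, sigma=1.2, size=500)   # strictly positive, skewed

# Box-Cox: scipy estimates lambda by maximum likelihood
bc, lam = stats.boxcox(conc)
print(f"Box-Cox lambda = {lam:.3f}, skewness after = {stats.skew(bc):.3f}")

# Normal score: replace each value by the normal quantile of its rank
ranks = stats.rankdata(conc)
ns = stats.norm.ppf((ranks - 0.5) / len(conc))
print(f"normal-score skewness = {stats.skew(ns):.3f}")

# Kolmogorov-Smirnov test of the standardized Box-Cox data
print(stats.kstest((bc - bc.mean()) / bc.std(ddof=1), "norm"))
```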
14 CFR 23.1361 - Master switch arrangement.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Equipment Electrical... to allow ready disconnection of each electric power source from power distribution systems, except as...
Optimizing fish sampling for fish - mercury bioaccumulation factors
Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.
2015-01-01
Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
Pan-European comparison of candidate distributions for climatological drought indices, SPI and SPEI
NASA Astrophysics Data System (ADS)
Stagge, James; Tallaksen, Lena; Gudmundsson, Lukas; Van Loon, Anne; Stahl, Kerstin
2013-04-01
Drought indices are vital to objectively quantify and compare drought severity, duration, and extent across regions with varied climatic and hydrologic regimes. The Standardized Precipitation Index (SPI), a well-reviewed meteorological drought index recommended by the WMO, and its more recent water balance variant, the Standardized Precipitation-Evapotranspiration Index (SPEI) both rely on selection of univariate probability distributions to normalize the index, allowing for comparisons across climates. The SPI, considered a universal meteorological drought index, measures anomalies in precipitation, whereas the SPEI measures anomalies in climatic water balance (precipitation minus potential evapotranspiration), a more comprehensive measure of water availability that incorporates temperature. Many reviewers recommend use of the gamma (Pearson Type III) distribution for SPI normalization, while developers of the SPEI recommend use of the three parameter log-logistic distribution, based on point observation validation. Before the SPEI can be implemented at the pan-European scale, it is necessary to further validate the index using a range of candidate distributions to determine sensitivity to distribution selection, identify recommended distributions, and highlight those instances where a given distribution may not be valid. This study rigorously compares a suite of candidate probability distributions using WATCH Forcing Data, a global, historical (1958-2001) climate dataset based on ERA40 reanalysis with 0.5 x 0.5 degree resolution and bias-correction based on CRU-TS2.1 observations. Using maximum likelihood estimation, alternative candidate distributions are fit for the SPI and SPEI across the range of European climate zones. When evaluated at this scale, the gamma distribution for the SPI results in negatively skewed values, exaggerating the index severity of extreme dry conditions, while decreasing the index severity of extreme high precipitation. This bias is particularly notable for shorter aggregation periods (1-6 months) during the summer months in southern Europe (below 45° latitude), and can partially be attributed to distribution fitting difficulties in semi-arid regions where monthly precipitation totals cluster near zero. By contrast, the SPEI has potential for avoiding this fitting difficulty because it is not bounded by zero. However, the recommended log-logistic distribution produces index values with less variation than the standard normal distribution. Among the alternative candidate distributions, the best fit distribution and the distribution parameters vary in space and time, suggesting regional commonalities within hydroclimatic regimes, as discussed further in the presentation.
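A minimal SPI-style calculation is sketched below: fit a gamma distribution to an aggregated precipitation series and map its CDF through the standard normal quantile function. The precipitation series is synthetic, and the handling of zero-precipitation months and the alternative candidate distributions discussed above are omitted.

```python
# Minimal SPI-style calculation: gamma fit, then probability transform to
# the standard normal (synthetic monthly precipitation totals).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
precip = rng.gamma(shape=2.0, scale=30.0, size=480)    # hypothetical monthly totals

# Fit gamma by maximum likelihood with location fixed at zero
shape, loc, scale = stats.gamma.fit(precip, floc=0)

# precipitation -> gamma CDF -> standard normal quantile = SPI
spi = stats.norm.ppf(stats.gamma.cdf(precip, shape, loc=loc, scale=scale))
print("SPI mean and SD (should be near 0 and 1):", spi.mean(), spi.std(ddof=1))
print("driest month SPI:", spi.min())
```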
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
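The sketch below shows stabilized inverse probability weights for a continuous exposure using normal conditional densities, one of the approaches compared above. The single confounder, variable names, and data-generating model are illustrative assumptions, not the study's simulation design.

```python
# Stabilized inverse probability weights for a continuous exposure using
# normal densities (illustrative sketch, not the paper's simulation).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(6)
n = 5000
confounder = rng.normal(0, 1, n)
exposure = 0.5 * confounder + rng.normal(0, 1, n)      # homoscedastic exposure

# Denominator: density of exposure given confounders, from a linear model
X = sm.add_constant(confounder)
fit = sm.OLS(exposure, X).fit()
resid_sd = np.sqrt(fit.scale)
dens_cond = stats.norm.pdf(exposure, loc=fit.fittedvalues, scale=resid_sd)

# Numerator: marginal density of exposure (stabilizes the weights)
dens_marg = stats.norm.pdf(exposure, loc=exposure.mean(), scale=exposure.std(ddof=1))

weights = dens_marg / dens_cond
print("mean weight (should be near 1):", weights.mean())
print("weight range:", weights.min(), weights.max())
```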
14 CFR 27.1583 - Operating limitations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Operating Limitations and Information Rotorcraft Flight... the instruments required by §§ 27.1549 through 27.1553. (c) Weight and loading distribution. The...
Automated scoring system of standard uptake value for torso FDG-PET images
NASA Astrophysics Data System (ADS)
Hara, Takeshi; Kobayashi, Tatsunori; Kawai, Kazunao; Zhou, Xiangrong; Itoh, Satoshi; Katafuchi, Tetsuro; Fujita, Hiroshi
2008-03-01
The purpose of this work was to develop an automated method to calculate an SUV score for the torso region on FDG-PET scans. The three-dimensional distributions of the mean and standard deviation of SUV were stored in each volume to score the SUV at the corresponding pixel position within unknown scans. The modeling method is based on the SPM approach, using the correction technique of the Euler characteristic and the resel (resolution element). We employed 197 normal cases (male: 143, female: 54) to assemble the normal FDG metabolism distribution. The physiques were registered to each other within a rectangular parallelepiped using an affine transformation and a thin-plate-spline technique. The regions of the three organs were determined by a semi-automated procedure. Seventy-three abnormal spots were used to estimate the effectiveness of the scoring method. As a result, the score images correctly showed that the scores for normal cases fell between zero and plus/minus 2 SD. Most of the scores of abnormal spots associated with cancer were larger than the upper limit of the SUV interval of normal organs.
Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei
2010-01-01
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…
14 CFR 27.1389 - Position light distribution and intensities.
Code of Federal Regulations, 2012 CFR
2012-01-01
... provided by new equipment with light covers and color filters in place. Intensities must be determined with... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Position light distribution and intensities... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Equipment Lights § 27.1389 Position...
14 CFR 27.1389 - Position light distribution and intensities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... provided by new equipment with light covers and color filters in place. Intensities must be determined with... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Position light distribution and intensities... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Equipment Lights § 27.1389 Position...
14 CFR 27.1389 - Position light distribution and intensities.
Code of Federal Regulations, 2013 CFR
2013-01-01
... provided by new equipment with light covers and color filters in place. Intensities must be determined with... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Position light distribution and intensities... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Equipment Lights § 27.1389 Position...
14 CFR 27.1389 - Position light distribution and intensities.
Code of Federal Regulations, 2014 CFR
2014-01-01
... provided by new equipment with light covers and color filters in place. Intensities must be determined with... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Position light distribution and intensities... TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL CATEGORY ROTORCRAFT Equipment Lights § 27.1389 Position...
14 CFR 23.787 - Baggage and cargo compartments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and... critical load distributions at the appropriate maximum load factors corresponding to the flight and ground...
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
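A compact version of this kind of generator (written here in Python rather than FORTRAN, and not necessarily the report's exact algorithm) builds correlated pairs from two independent standard normals:

```python
# Generate correlated normal pairs from two independent standard normals
# (any means, SDs, and correlation coefficient).
import numpy as np

def bivariate_normal_pairs(n, mu1, mu2, sd1, sd2, rho, rng):
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x = mu1 + sd1 * z1
    y = mu2 + sd2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return x, y

rng = np.random.default_rng(7)
x, y = bivariate_normal_pairs(100_000, mu1=5, mu2=-2, sd1=2, sd2=0.5, rho=0.7, rng=rng)
print("sample correlation:", np.corrcoef(x, y)[0, 1])   # should be close to 0.7
```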
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
The probability integral of the multivariate normal distribution (ND) has received considerable attention since W. F. Sheppard's (1900) and K. Pearson's (1901) seminal work on the bivariate ND. This paper evaluates the formula that represents the "n x n" correlation matrix of the "chi(sub i)" and the standardized multivariate…
Using R to Simulate Permutation Distributions for Some Elementary Experimental Designs
ERIC Educational Resources Information Center
Eudey, T. Lynn; Kerr, Joshua D.; Trumbo, Bruce E.
2010-01-01
Null distributions of permutation tests for two-sample, paired, and block designs are simulated using the R statistical programming language. For each design and type of data, permutation tests are compared with standard normal-theory and nonparametric tests. These examples (often using real data) provide for classroom discussion use of metrics…
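A two-sample permutation null distribution of the kind simulated in those classroom examples can be sketched as follows (written here in Python rather than R, with made-up group data).

```python
# Simulate a permutation null distribution for a two-sample mean difference.
import numpy as np

rng = np.random.default_rng(8)
group_a = np.array([12.1, 9.8, 11.4, 13.0, 10.5, 12.7])
group_b = np.array([9.1, 8.7, 10.2, 9.9, 8.4, 10.8])
observed = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
perm_diffs = np.empty(10_000)
for i in range(perm_diffs.size):
    shuffled = rng.permutation(pooled)            # relabel observations at random
    perm_diffs[i] = shuffled[:n_a].mean() - shuffled[n_a:].mean()

p_value = np.mean(np.abs(perm_diffs) >= abs(observed))   # two-sided
print("observed difference:", observed, "permutation p-value:", p_value)
```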
Analysis of vector wind change with respect to time for Cape Kennedy, Florida
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1978-01-01
Multivariate analysis was used to examine the joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time, which is hypothesized to be quadrivariate normal; the fourteen statistics of this distribution, calculated from 15 years of twice-daily rawinsonde data, are presented by monthly reference periods for altitudes from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of the wind component changes is bivariate normal, and that the modulus of the vector wind change is Rayleigh are tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of the vector wind at a future time given the vector wind at an initial time are derived. Wind changes over time periods from 1 to 5 hours, calculated from Jimsphere data, are presented. Extension of the theoretical prediction (based on rawinsonde data) of the wind component change standard deviation to time periods of 1 to 5 hours falls (with a few exceptions) within the 95-percentile confidence band of the population estimate obtained from the Jimsphere sample data. The joint distributions of wind change components, conditional wind components, and 1 km vector wind shear change components are illustrated by probability ellipses at the 95-percentile level.
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
Differential distribution of blood and lymphatic vessels in the murine cornea.
Ecoiffier, Tatiana; Yuen, Don; Chen, Lu
2010-05-01
Because of its unique characteristics, the cornea has been widely used for blood and lymphatic vessel research. However, whether limbal or corneal vessels are evenly distributed under normal or inflamed conditions has never been studied. The purpose of this study was to investigate this question and to examine whether and how the distribution patterns change during corneal inflammatory lymphangiogenesis (LG) and hemangiogenesis (HG). Corneal inflammatory LG and HG were induced in two most commonly used mouse strains, BALB/c and C57BL/6 (6-8 weeks of age), by a standardized two-suture placement model. Oriented flat-mount corneas together with the limbal tissues were used for immunofluorescence microscope studies. Blood and lymphatic vessels under normal and inflamed conditions were analyzed and quantified to compare their distributions. The data demonstrate, for the first time, greater distribution of both blood and lymphatic vessels in the nasal side in normal murine limbal areas. This nasal-dominant pattern was maintained during corneal inflammatory LG, whereas it was lost for HG. Blood and lymphatic vessels are not evenly distributed in normal limbal areas. Furthermore, corneal LG and HG respond differently to inflammatory stimuli. These new findings will shed some light on corneal physiology and pathogenesis and on the development of experimental models and therapeutic strategies for corneal diseases.
He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian
2013-09-01
The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate its application to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, and was then validated and illustrated using the pharmacokinetics of three ingredients in Buyanghuanwu decoction (and three data-analytical methods for them) and by analysis of chromatographic fingerprints of extracts obtained with solvents of different solubility parameters dissolving the Buyanghuanwu-decoction extract. The established model consists of five main parameters: (1) the total quantum statistical moment similarity ST, the area of overlap between the two normal distribution probability density curves obtained by converting the two TQSM parameter sets; (2) the total variability DT, a confidence limit of the standard normal cumulative probability equal to the absolute difference between the two normal cumulative probabilities integrated to the intersection of their curves; (3) the total variable probability 1-Ss, the standard normal distribution probability within the interval DT; (4) the total variable probability (1-beta)alpha; and (5) the stable confidence probability beta(1-alpha), the correct probability for drawing positive and negative conclusions under confidence coefficient alpha. With the model, the TQSMSS similarities for the pharmacokinetics of the three ingredients in Buyanghuanwu decoction, and for the three data-analytical methods applied to them, ranged from 0.3852 to 0.9875, indicating different pharmacokinetic behaviors; the TQSMSS similarities (ST) of the chromatographic fingerprints of extracts obtained with solvents of different solubility parameters ranged from 0.6842 to 0.9992, showing different constituents in the various solvent extracts. The TQSMSS can characterize sample similarity, allowing the correct probability of positive and negative conclusions to be quantified with a power test regardless of whether the samples come from the same population under confidence coefficient alpha, and enabling analysis at both macroscopic and microscopic levels as an important similarity-analysis method for medical theoretical research.
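Since ST is described as the overlap area of two normal probability density curves, a hedged numerical sketch of such an overlap is shown below; it illustrates the geometric idea only and is not the TQSMSS implementation.

```python
# Overlap area of two normal probability density curves, computed numerically.
from scipy import stats
from scipy.integrate import quad

def normal_overlap(mu1, sd1, mu2, sd2):
    f = lambda x: min(stats.norm.pdf(x, mu1, sd1), stats.norm.pdf(x, mu2, sd2))
    lo = min(mu1 - 8 * sd1, mu2 - 8 * sd2)
    hi = max(mu1 + 8 * sd1, mu2 + 8 * sd2)
    area, _ = quad(f, lo, hi, limit=200)
    return area

print("identical curves:", normal_overlap(0, 1, 0, 1))        # equals 1
print("shifted curves:  ", normal_overlap(0, 1, 1.5, 1.2))    # similarity in (0, 1)
```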
Standard deviation and standard error of the mean.
Lee, Dong Kyu; In, Junyong; Lee, Sangseok
2015-06-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
1989-08-01
Random variables from the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s from the inverse of the conditional exponential distribution function. Random variables from the conditional Weibull distribution are generated in the same way, and a further case is handled using a standard normal transformation together with the inverse transform method. (Fragment from an appendix listing the distributions supported by the model.)
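The generic (unconditional) exponential case of inverse-transform sampling can be sketched as follows; this illustrates the method named in the fragment above, not the report's conditional distributions.

```python
# Inverse-transform sampling for the exponential distribution:
# U ~ U(0,1) maps to x = -scale * ln(1 - U).
import numpy as np

rng = np.random.default_rng(9)

def exponential_inverse_transform(n, scale, rng):
    u = rng.uniform(0.0, 1.0, n)        # step 1: uniform variates
    return -scale * np.log1p(-u)        # step 2: invert the exponential CDF

x = exponential_inverse_transform(100_000, scale=2.0, rng=rng)
print("sample mean (should be near 2.0):", x.mean())
```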
Kao, Johnny; Pettit, Jeffrey; Zahid, Soombal; Gold, Kenneth D; Palatt, Terry
2015-01-01
The optimal technique for performing lung IMRT remains poorly defined. We hypothesize that improved dose distributions associated with normal tissue-sparing IMRT can allow safe dose escalation resulting in decreased acute and late toxicity. We performed a retrospective analysis of 82 consecutive lung cancer patients treated with curative intent from 1/10 to 9/14. From 1/10 to 4/12, 44 patients were treated with the community standard of three-dimensional conformal radiotherapy or IMRT without specific esophagus or contralateral lung constraints (standard RT). From 5/12 to 9/14, 38 patients were treated with normal tissue-sparing IMRT with selective sparing of contralateral lung and esophagus. The study endpoints were dosimetry, toxicity, and overall survival. Despite higher mean prescribed radiation doses in the normal tissue-sparing IMRT cohort (64.5 vs. 60.8 Gy, p = 0.04), patients treated with normal tissue-sparing IMRT had significantly lower lung V20, V10, V5, mean lung, esophageal V60, and mean esophagus doses compared to patients treated with standard RT (p ≤ 0.001). Patients in the normal tissue-sparing IMRT group had reduced acute grade ≥3 esophagitis (0 vs. 11%, p < 0.001), acute grade ≥2 weight loss (2 vs. 16%, p = 0.04), and late grade ≥2 pneumonitis (7 vs. 21%, p = 0.02). The 2-year overall survival was 52% with normal tissue-sparing IMRT arm compared to 28% for standard RT (p = 0.015). These data provide proof of principle that suboptimal radiation dose distributions are associated with significant acute and late lung and esophageal toxicity that may result in hospitalization or even premature mortality. Strict attention to contralateral lung and esophageal dose-volume constraints are feasible in the community hospital setting without sacrificing disease control.
The missing impact craters on Venus
NASA Technical Reports Server (NTRS)
Speidel, D. H.
1993-01-01
The size-frequency pattern of the 842 impact craters on Venus measured to date can be well described (across four standard deviation units) as a single log normal distribution with a mean crater diameter of 14.5 km. This result was predicted in 1991 on examination of the initial Magellan analysis. If this observed distribution is close to the real distribution, the 'missing' 90 percent of the small craters and the 'anomalous' lack of surface splotches may thus be neither missing nor anomalous. I think that the missing craters and missing splotches can be satisfactorily explained by accepting that the observed distribution approximates the real one, that it is not craters that are missing but the impactors. What you see is what you got. The implication that Venus crossing impactors would have the same type of log normal distribution is consistent with recently described distribution for terrestrial craters and Earth crossing asteroids.
WE-H-207A-03: The Universality of the Lognormal Behavior of [F-18]FLT PET SUV Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpelli, M; Eickhoff, J; Perlman, S
Purpose: Log transforming [F-18]FDG PET standardized uptake values (SUVs) has been shown to lead to normal SUV distributions, which allows utilization of powerful parametric statistical models. This study identified the optimal transformation leading to normally distributed [F-18]FLT PET SUVs from solid tumors and offers an example of how normal distributions permit analysis of non-independent/correlated measurements. Methods: Forty patients with various metastatic diseases underwent up to six FLT PET/CT scans during treatment. Tumors were identified by a nuclear medicine physician and manually segmented. Average uptake was extracted for each patient giving a global SUVmean (gSUVmean) for each scan. The Shapiro-Wilk test was used to test distribution normality. One parameter Box-Cox transformations were applied to each of the six gSUVmean distributions and the optimal transformation was found by selecting the parameter that maximized the Shapiro-Wilk test statistic. The relationship between gSUVmean and a serum biomarker (VEGF) collected at imaging timepoints was determined using a linear mixed effects model (LMEM), which accounted for correlated/non-independent measurements from the same individual. Results: Untransformed gSUVmean distributions were found to be significantly non-normal (p<0.05). The optimal transformation parameter had a value of 0.3 (95%CI: −0.4 to 1.6). Given the optimal parameter was close to zero (which corresponds to log transformation), the data were subsequently log transformed. All log transformed gSUVmean distributions were normally distributed (p>0.10 for all timepoints). Log transformed data were incorporated into the LMEM. VEGF serum levels significantly correlated with gSUVmean (p<0.001), revealing a log-linear relationship between SUVs and underlying biology. Conclusion: Failure to account for correlated/non-independent measurements can lead to invalid conclusions and motivated transformation to normally distributed SUVs. The log transformation was found to be close to optimal and sufficient for obtaining normally distributed FLT PET SUVs. These transformations allow utilization of powerful LMEMs when analyzing quantitative imaging metrics.
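The transformation search described above can be sketched as a scan over Box-Cox lambda values that keeps the one maximizing the Shapiro-Wilk statistic; the SUV values below are simulated, not the study's measurements.

```python
# Scan Box-Cox lambda values and keep the one maximizing the Shapiro-Wilk
# statistic of the transformed data (synthetic SUV-like values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
suv = rng.lognormal(mean=0.8, sigma=0.5, size=40)        # hypothetical gSUVmean values

lambdas = np.linspace(-2, 2, 81)
w_stats = [stats.shapiro(stats.boxcox(suv, lmbda=lam)).statistic for lam in lambdas]
best = lambdas[int(np.argmax(w_stats))]
print(f"optimal lambda ~ {best:.2f} (values near 0 support a log transformation)")

log_suv = np.log(suv)
print("Shapiro-Wilk p-value for log-transformed data:", stats.shapiro(log_suv).pvalue)
```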
NASA Astrophysics Data System (ADS)
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations also can be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) also can be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data distributions themselves.
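A handful of the location and scale estimators listed above are available directly in scipy; the snippet below computes a small subset on data with deliberate outliers, and is not a reimplementation of ROBUST.

```python
# A few classical and robust location/scale estimators on data with outliers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = np.concatenate([rng.normal(10, 2, 95), [30, 35, 40, 45, 50]])   # outliers added

print("arithmetic mean :", x.mean())
print("geometric mean  :", stats.gmean(x))
print("harmonic mean   :", stats.hmean(x))
print("median          :", np.median(x))
print("10% trimmed mean:", stats.trim_mean(x, 0.10))
print("std deviation   :", x.std(ddof=1))
print("MAD (normal-consistent):", stats.median_abs_deviation(x, scale="normal"))
```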
14 CFR 23.511 - Ground load; unsymmetrical loads on multiple-wheel units.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER... distribution, to the dual wheels and tires in each dual wheel landing gear unit. (c) Deflated tire loads. For...
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arutyunyan, R.V.; Bol`shov, L.A.; Vasil`ev, S.K.
1994-06-01
The objective of this study was to clarify a number of issues related to the spatial distribution of contaminants from the Chernobyl accident. The effects of local statistics were addressed by collecting and analyzing (for Cesium 137) soil samples from a number of regions, and it was found that sample activity differed by a factor of 3-5. The effect of local non-uniformity was estimated by modeling the distribution of the average activity of a set of five samples for each of the regions, with the spread in the activities for a ±2 range being equal to 25%. The statistical characteristics of the distribution of contamination were then analyzed and found to be a log-normal distribution with the standard deviation being a function of test area. All data for the Bryanskaya Oblast area were analyzed statistically and were adequately described by a log-normal function.
Robust Mediation Analysis Based on Median Regression
Yuan, Ying; MacKinnon, David P.
2014-01-01
Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
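A minimal sketch of mediation estimation with median regression is shown below: the indirect effect is the product of the X→M and M→Y (given X) median-regression coefficients. The simulated data and the use of statsmodels' quantile regression are illustrative assumptions; the published method also covers inference and the multilevel extension.

```python
# Mediation via median (quantile) regression: indirect effect = a * b.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(12)
n = 2000
x = rng.normal(0, 1, n)
m = 0.5 * x + rng.standard_t(df=3, size=n)          # heavy-tailed errors
y = 0.4 * m + 0.2 * x + rng.standard_t(df=3, size=n)

a = QuantReg(m, sm.add_constant(x)).fit(q=0.5).params[1]                          # X -> M
b = QuantReg(y, sm.add_constant(np.column_stack([x, m]))).fit(q=0.5).params[2]    # M -> Y | X
print("indirect (mediated) effect a*b:", a * b)      # true value is 0.5 * 0.4 = 0.2
```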
Richard A. Johnson; James W. Evans; David W. Green
2003-01-01
Ratios of strength properties of lumber are commonly used to calculate property values for standards. Although originally proposed in terms of means, ratios are being applied without regard to position in the distribution. It is now known that lumber strength properties are generally not normally distributed. Therefore, nonparametric methods are often used to derive...
Kollins, Scott H; McClernon, F Joseph; Epstein, Jeff N
2009-02-01
Smoking abstinence differentially affects cognitive functioning in smokers with ADHD, compared to non-ADHD smokers. Alternative approaches for analyzing reaction time data from these tasks may further elucidate important group differences. Adults smoking ≥15 cigarettes with (n=12) or without (n=14) a diagnosis of ADHD completed a continuous performance task (CPT) during two sessions under two separate laboratory conditions--a 'Satiated' condition wherein participants smoked up to and during the session; and an 'Abstinent' condition, in which participants were abstinent overnight and during the session. Reaction time (RT) distributions from the CPT were modeled to fit an ex-Gaussian distribution. The indicator of central tendency for RT from the normal component of the RT distribution (mu) showed a main effect of Group (ADHD < Control) and a Group x Session interaction (ADHD group RTs decreased when abstinent). RT standard deviation for the normal component of the distribution (sigma) showed no effects. The ex-Gaussian parameter tau, which describes the mean and standard deviation of the non-normal component of the distribution, showed significant effects of session (Abstinent > Satiated), Group x Session interaction (ADHD increased significantly under Abstinent condition compared to Control), and a trend toward a main effect of Group (ADHD > Control). Alternative approaches to analyzing RT data provide a more detailed description of the effects of smoking abstinence in ADHD and non-ADHD smokers and results differ from analyses using more traditional approaches. These findings have implications for understanding the neuropsychopharmacology of nicotine and nicotine withdrawal.
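An ex-Gaussian fit of the kind used above can be sketched with scipy's exponentially modified normal distribution (exponnorm), whose parameters map to mu = loc, sigma = scale, tau = K * scale; the reaction times below are simulated stand-ins for the CPT data.

```python
# Fit an ex-Gaussian (exponentially modified normal) to simulated reaction times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
mu_true, sigma_true, tau_true = 400.0, 40.0, 120.0       # milliseconds
rt = rng.normal(mu_true, sigma_true, 1000) + rng.exponential(tau_true, 1000)

K, loc, scale = stats.exponnorm.fit(rt)
print("mu    ~", loc)          # normal-component mean
print("sigma ~", scale)        # normal-component SD
print("tau   ~", K * scale)    # exponential-component mean/SD
```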
Preliminary analysis of hot spot factors in an advanced reactor for space electric power systems
NASA Technical Reports Server (NTRS)
Lustig, P. H.; Holms, A. G.; Davison, H. W.
1973-01-01
The maximum fuel pin temperature for nominal operation in an advanced power reactor is 1370 K. Because of possible nitrogen embrittlement of the clad, the fuel temperature was limited to 1622 K. Assuming simultaneous occurrence of the most adverse conditions, a deterministic analysis gave a maximum fuel temperature of 1610 K. A statistical analysis, using a synthesized estimate of the standard deviation for the highest fuel pin temperature, showed a probability of 0.015 that this pin exceeds the temperature limit when using the distribution-free Chebyshev inequality, and a probability that is virtually nil when assuming a normal distribution. The latter assumption gives a 1463 K maximum temperature at 3 standard deviations, the usually assumed cutoff. Further, the distribution and standard deviation of the fuel-clad gap are the most significant contributions to the uncertainty in the fuel temperature.
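The contrast between the distribution-free and normal-theory probabilities can be illustrated generically at the 3-standard-deviation cutoff mentioned above; the numbers below are not the reactor analysis values.

```python
# Chebyshev bound versus normal-tail probability at k standard deviations.
from scipy import stats

k = 3.0                            # the 3-standard-deviation cutoff mentioned above
print("Chebyshev bound, P(|X - mu| >= k*sigma) <= 1/k**2 =", 1 / k**2)
print("normal two-sided tail, 2*Phi(-k) =", 2 * stats.norm.sf(k))
```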
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
Juang, K W; Lee, D Y; Ellsworth, T R
2001-01-01
The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart and the geostatistical interpolation is hindered. In this study, rank-order geostatistics with standardized rank transformation was used for the spatial interpolation of pollutants with a highly skewed distribution in contaminated soils when commonly used nonlinear methods, such as logarithmic and normal-scored transformations, are not suitable. A real data set of soil Cd concentrations with great variation and high skewness in a contaminated site of Taiwan was used for illustration. The spatial dependence of ranks transformed from Cd concentrations was identified and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of Cd concentration being higher than a given cutoff value also can be estimated by using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and the probabilities of Cd concentrations being higher than the cutoff value can be simultaneously used for delineation of hazardous areas of contaminated soils.
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
Grid Frequency Extreme Event Analysis and Modeling: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Florita, Anthony R; Clark, Kara; Gevorgian, Vahan
Sudden losses of generation or load can lead to instantaneous changes in electric grid frequency and voltage. Extreme frequency events pose a major threat to grid stability. As renewable energy sources supply power to grids in increasing proportions, it becomes increasingly important to examine when and why extreme events occur to prevent destabilization of the grid. To better understand frequency events, including extrema, historic data were analyzed to fit probability distribution functions to various frequency metrics. Results showed that a standard Cauchy distribution fit the difference between the frequency nadir and prefault frequency (f_(C-A)) metric well, a standard Cauchy distribution fit the settling frequency (f_B) metric well, and a standard normal distribution fit the difference between the settling frequency and frequency nadir (f_(B-C)) metric very well. Results were inconclusive for the frequency nadir (f_C) metric, meaning it likely has a more complex distribution than those tested. This probabilistic modeling should facilitate more realistic modeling of grid faults.
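A hedged illustration of the distribution-screening step: candidate Cauchy and normal models fitted to a synthetic frequency metric (the report's data and metric definitions are not reproduced) and compared by a simple goodness-of-fit statistic.

```python
import numpy as np
from scipy import stats

# Illustration only: fit candidate distributions to a synthetic frequency-event
# metric and compare fits by the Kolmogorov-Smirnov statistic, one simple way
# to screen candidates. The metric name and data below are hypothetical.
rng = np.random.default_rng(1)
f_b_minus_c = rng.normal(loc=0.03, scale=0.01, size=2000)   # hypothetical settling-minus-nadir metric, Hz

loc_c, scale_c = stats.cauchy.fit(f_b_minus_c)
loc_n, scale_n = stats.norm.fit(f_b_minus_c)

print("Cauchy KS:", stats.kstest(f_b_minus_c, "cauchy", args=(loc_c, scale_c)).statistic)
print("Normal KS:", stats.kstest(f_b_minus_c, "norm",   args=(loc_n, scale_n)).statistic)
```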
NASA Astrophysics Data System (ADS)
Musdalifah, N.; Handajani, S. S.; Zukhronah, E.
2017-06-01
Competition between homogeneous companies forces each company to maintain production quality. To address this problem, a company controls production with statistical quality control using control charts. The Shewhart control chart is used for normally distributed data; production data, however, often follow a non-normal distribution and exhibit small process shifts. The grand median control chart is a control chart for non-normally distributed data, while the cumulative sum (cusum) control chart is sensitive to small process shifts. The purpose of this research is to compare grand median and cusum control charts on the shuttlecock weight variable at CV Marjoko Kompas dan Domas by generating data that follow the actual distribution. The generated data are used to simulate the standard-deviation multiplier for the grand median and cusum control charts. The simulation is carried out to obtain an average run length (ARL) of 370. The grand median control chart detects ten out-of-control points, while the cusum control chart detects one. It can be concluded that the grand median control chart performs better than the cusum control chart.
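For readers unfamiliar with the cusum chart compared above, here is a minimal tabular-cusum sketch; the grand median chart, the shuttlecock data, and the ARL calibration of the study are not reproduced, and the target mean, sigma, and tuning constants are placeholder values.

```python
import numpy as np

# Minimal tabular CUSUM sketch (illustrative only). k is the allowance and h
# the decision interval, both in units of the process standard deviation.
def cusum(x, mu0, sigma, k=0.5, h=4.0):
    c_plus = c_minus = 0.0
    signals = []
    for i, xi in enumerate(x):
        z = (xi - mu0) / sigma
        c_plus = max(0.0, c_plus + z - k)
        c_minus = max(0.0, c_minus - z - k)
        if c_plus > h or c_minus > h:
            signals.append(i)        # out-of-control signal at observation i
            c_plus = c_minus = 0.0   # reset after a signal
    return signals

rng = np.random.default_rng(2)
# hypothetical shuttlecock weights (g) with a small shift after observation 80
weights = np.r_[rng.normal(5.2, 0.1, 80), rng.normal(5.25, 0.1, 40)]
print("signals at observations:", cusum(weights, mu0=5.2, sigma=0.1))
```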
Tarroun, Abdullah; Bonnefoy, Marc; Bouffard-Vercelli, Juliette; Gedeon, Claire; Vallee, Bernard; Cotton, François
2007-02-01
Although mild progressive specific structural brain changes are commonly associated with normal human aging, it is unclear whether automatic or manual measurements of these structures can differentiate normal brain aging in elderly persons from patients suffering from cognitive impairment. The objective of this study was primarily to define, with standard high-resolution MRI, the range of normal linear age-specific values for the hippocampal formation (HF), and secondarily to differentiate hippocampal atrophy in normal aging from that occurring in Alzheimer disease (AD). Two MRI-based linear measurements of the hippocampal formation at the level of the head and of the tail, standardized by the cranial dimensions, were obtained from coronal and sagittal T1-weighted MR images in 25 normal elderly subjects and 26 patients with AD. In this study, dimensions of the HF have been standardized and they revealed normal distributions for each side and each sex: the width of the hippocampal head at the level of the amygdala was 16.42 +/- 1.9 mm, and its height 7.93 +/- 1.4 mm; the width of the tail at the level of the cerebral aqueduct was 8.54 +/- 1.2 mm, and the height 5.74 +/- 0.4 mm. There were no significant differences in standardized dimensions of the HF between sides, sexes, or in comparison to head dimensions in the two groups. In addition, the median inter-observer agreement index was 93%. In contrast, the dimensions of the hippocampal formation decreased gradually with increasing age, owing to physiological atrophy, and this atrophy was more pronounced in the AD group.
Height and the normal distribution: evidence from Italian military data.
A'Hearn, Brian; Peracchi, Franco; Vecchi, Giovanni
2009-02-01
Researchers modeling historical heights have typically relied on the restrictive assumption of a normal distribution, only the mean of which is affected by age, income, nutrition, disease, and similar influences. To avoid these restrictive assumptions, we develop a new semiparametric approach in which covariates are allowed to affect the entire distribution without imposing any parametric shape. We apply our method to a new database of height distributions for Italian provinces, drawn from conscription records, of unprecedented length and geographical disaggregation. Our method allows us to standardize distributions to a single age and calculate moments of the distribution that are comparable through time. Our method also allows us to generate counterfactual distributions for a range of ages, from which we derive age-height profiles. These profiles reveal how the adolescent growth spurt (AGS) distorts the distribution of stature, and they document the earlier and earlier onset of the AGS as living conditions improved over the second half of the nineteenth century. Our new estimates of provincial mean height also reveal a previously unnoticed "regime switch" from regional convergence to divergence in this period.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and variance of one is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator or create a buffer.
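A sketch of the described pipeline on synthetic count data: spline detrending, standardization to a unit-variance variate, a normality check, and the probability-integral transform to uniforms; the smoothing factor and the synthetic series are assumptions, not the monitor data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import shapiro, norm

# Sketch of the described pipeline on a synthetic "neutron count" series: fit a
# smoothing spline, subtract it to extract the stochastic component, standardize
# to zero mean / unit variance, then map to uniforms via the standard normal CDF.
rng = np.random.default_rng(3)
t = np.arange(1440.0)                                   # one day of 1-min counts
raw = 100 + 5 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 2, t.size)

trend = UnivariateSpline(t, raw, s=len(t) * 4.0)(t)     # smoothing factor is a tuning choice
resid = raw - trend
z = (resid - resid.mean()) / resid.std()                # candidate standard normal variate

print("Shapiro-Wilk p-value:", shapiro(z).pvalue)       # normality check
u = norm.cdf(z)                                         # uniform(0,1) via the probability integral transform
print("uniform sample range:", u.min(), u.max())
```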
Rijal, Omar M; Abdullah, Norli A; Isa, Zakiah M; Noor, Norliza M; Tawfiq, Omar F
2013-01-01
The knowledge of teeth positions on the maxillary arch is useful in the rehabilitation of the edentulous patient. A combination of angular (θ), and linear (l) variables representing position of four teeth were initially proposed as the shape descriptor of the maxillary dental arch. Three categories of shape were established, each having a multivariate normal distribution. It may be argued that 4 selected teeth on the standardized digital images of the dental casts could be considered as insufficient with respect to representing shape. However, increasing the number of points would create problems with dimensions and proof of existence of the multivariate normal distribution is extremely difficult. This study investigates the ability of Fourier descriptors (FD) using all maxillary teeth to find alternative shape models. Eight FD terms were sufficient to represent 21 points on the arch. Using these 8 FD terms as an alternative shape descriptor, three categories of shape were verified, each category having the complex normal distribution.
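A hedged sketch of computing Fourier descriptors from 21 boundary points and truncating to 8 terms; the arch coordinates below are invented, and the study's landmark definitions and normalization conventions are not reproduced.

```python
import numpy as np

# Sketch: Fourier descriptors (FDs) of an arch-like curve sampled at 21 landmark
# points, keeping 8 low-order terms as the shape descriptor. Coordinates are
# hypothetical; reconstruction error shows how much shape the 8 terms retain.
theta = np.linspace(0, np.pi, 21)
points = np.c_[30 * np.cos(theta), 22 * np.sin(theta)]     # hypothetical arch (mm)

z = points[:, 0] + 1j * points[:, 1]                       # complex representation of the landmarks
coeffs = np.fft.fft(z) / z.size                            # full set of Fourier coefficients
fd8 = coeffs[:8]                                           # truncated descriptor: 8 terms

recon = np.fft.ifft(np.r_[fd8, np.zeros(z.size - 8)]) * z.size
print("max reconstruction error (mm):", np.abs(recon - z).max())
```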
Statistical distribution of mechanical properties for three graphite-epoxy material systems
NASA Technical Reports Server (NTRS)
Reese, C.; Sorem, J., Jr.
1981-01-01
Graphite-epoxy composites are playing an increasing role as viable alternative materials in structural applications, necessitating thorough investigation into the predictability and reproducibility of their material strength properties. This investigation was concerned with tension, compression, and short beam shear coupon testing of large samples from three different material suppliers to determine their statistical strength behavior. Statistical results indicate that a two-parameter Weibull distribution model provides better overall characterization of material behavior for the graphite-epoxy systems tested than does the standard Normal distribution model that is employed for most design work. While either a Weibull or Normal distribution model provides adequate predictions for average strength values, the Weibull model provides better characterization in the lower tail region where the predictions are of maximum design interest. The two sets of the same material were found to have essentially the same material properties, indicating that repeatability can be achieved.
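To make the tail comparison concrete, the following sketch fits two-parameter Weibull and normal models to synthetic strength data and contrasts a lower-tail percentile; the sample and percentile choice are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustration with synthetic "strength" data: fit a two-parameter Weibull
# (location fixed at zero) and a normal model, then compare a lower-tail
# percentile, where the abstract notes the characterization matters most.
rng = np.random.default_rng(4)
strength = stats.weibull_min.rvs(c=20, scale=1500, size=60, random_state=rng)  # MPa, hypothetical

shape, loc, scale = stats.weibull_min.fit(strength, floc=0)   # two-parameter fit
mu, sigma = stats.norm.fit(strength)

p = 0.01   # first-percentile strength, a design-relevant lower-tail quantity
print("Weibull 1st percentile:", stats.weibull_min.ppf(p, shape, loc, scale))
print("Normal  1st percentile:", stats.norm.ppf(p, mu, sigma))
```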
Probabilistic model of bridge vehicle loads in port area based on in-situ load testing
NASA Astrophysics Data System (ADS)
Deng, Ming; Wang, Lei; Zhang, Jianren; Wang, Rei; Yan, Yanhong
2017-11-01
Vehicle load is an important factor affecting the safety and usability of bridges. A statistical analysis is carried out in this paper to investigate the vehicle load data of the Tianjin Haibin highway in Tianjin port, China, which were collected by the Weigh-in-Motion (WIM) system. Following this, the effect of the vehicle load on a test bridge is calculated and then compared with the calculation result according to HL-93 (AASHTO LRFD). Results show that the overall vehicle load follows a distribution given by a weighted sum of four normal distributions. The maximum vehicle load during the design reference period follows a type I extreme value distribution. The vehicle load effect also follows a weighted sum of four normal distributions, and the standard value of the vehicle load is recommended as 1.8 times the calculated value according to HL-93.
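A minimal sketch of fitting a weighted sum of four normal components of the kind reported, using synthetic gross loads in place of the WIM data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch only: fit a weighted sum of four normal components, the distributional
# form reported for the overall vehicle load, to synthetic gross-load data.
rng = np.random.default_rng(5)
loads = np.concatenate([
    rng.normal(30, 5, 4000),    # hypothetical light-vehicle mode (kN)
    rng.normal(120, 20, 3000),
    rng.normal(300, 40, 2000),
    rng.normal(550, 60, 1000),
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=4, random_state=0).fit(loads)
print("weights:", np.round(gmm.weights_, 3))
print("means  :", np.round(gmm.means_.ravel(), 1))
```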
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
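For contrast with the in-house method described above (which fixes one shape parameter), the sketch below shows the widely used PERT-style convention in which the standard deviation is taken as one sixth of the range and the beta parameters follow by the method of moments; the three input values are arbitrary.

```python
# Illustrative only: the common PERT-style convention for getting beta
# parameters from a minimum a, most-likely m, and maximum b. The conventional
# method discussed in the text assumes the standard deviation is a fraction of
# the range (here (b - a)/6); the NASA in-house method is not reproduced.
def beta_from_three_points(a, m, b):
    mean = (a + 4.0 * m + b) / 6.0
    var = ((b - a) / 6.0) ** 2
    x = (mean - a) / (b - a)                     # mean rescaled to [0, 1]
    v = var / (b - a) ** 2                       # variance rescaled to [0, 1]
    common = x * (1.0 - x) / v - 1.0             # method-of-moments factor
    alpha, beta = x * common, (1.0 - x) * common
    return alpha, beta, mean, var ** 0.5

print(beta_from_three_points(a=10.0, m=14.0, b=22.0))
```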
Huang, Dan; Chen, Xuejuan; Gong, Qi; Yuan, Chaoqun; Ding, Hui; Bai, Jing; Zhu, Hui; Fu, Zhujun; Yu, Rongbin; Liu, Hu
2016-01-01
This survey was conducted to determine the testability, distribution and associations of ocular biometric parameters in Chinese preschool children. Ocular biometric examinations, including the axial length (AL) and corneal radius of curvature (CR), were conducted on 1,688 3-year-old subjects by using an IOLMaster in August 2015. Anthropometric parameters, including height and weight, were measured according to a standardized protocol, and body mass index (BMI) was calculated. The testability was 93.7% for the AL and 78.6% for the CR overall, and both measures improved with age. Girls performed slightly better in AL measurements (P = 0.08), and the difference in CR was statistically significant (P < 0.05). The AL distribution was normal in girls (P = 0.12), whereas it was not in boys (P < 0.05). For CR1, all subgroups presented normal distributions (P = 0.16 for boys; P = 0.20 for girls), but the distribution varied when the subgroups were combined (P < 0.05). CR2 presented a normal distribution (P = 0.11), whereas the AL/CR ratio was abnormal (P < 0.001). Boys exhibited a significantly longer AL, a greater CR and a greater AL/CR ratio than girls (all P < 0.001). PMID:27384307
Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael
2016-01-01
Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), standard error of TDI estimates (compared with their true simulated standard errors), and test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution on the difference is not normal, especially when it has a heavy tail.
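A hedged sketch of the two estimands being compared: a normality-based TDI obtained as the exact quantile of |D| under a fitted normal model, versus the empirical quantile of the absolute differences; this is not the quantile-regression estimator used in the paper, and the simulated differences are placeholders.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Sketch: a normality-based TDI as the exact p-quantile of |D| for D ~ N(mu, sd)
# (root-finding on the folded-normal CDF), compared with a simple nonparametric
# TDI (the empirical quantile of |d|).
def tdi_normal(mu, sd, p=0.9):
    cdf_abs = lambda q: norm.cdf((q - mu) / sd) - norm.cdf((-q - mu) / sd) - p
    return brentq(cdf_abs, 1e-12, abs(mu) + 10 * sd)

rng = np.random.default_rng(6)
d = rng.normal(0.2, 1.0, 500)          # hypothetical paired differences
print("parametric TDI(0.9)   :", tdi_normal(d.mean(), d.std(ddof=1), p=0.9))
print("nonparametric TDI(0.9):", np.quantile(np.abs(d), 0.9))
```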
Gasquoine, Philip Gerard; Gonzalez, Cassandra Dayanira
2012-05-01
Conventional neuropsychological norms developed for monolinguals likely overestimate normal performance in bilinguals on language but not visual-perceptual format tests. This was studied by comparing neuropsychological false-positive rates using the 50th percentile of conventional norms and individual comparison standards (Picture Vocabulary or Matrix Reasoning scores) as estimates of preexisting neuropsychological skill level against the number expected from the normal distribution for a consecutive sample of 56 neurologically intact, bilingual, Hispanic Americans. Participants were tested in separate sessions in Spanish and English in the counterbalanced order on La Bateria Neuropsicologica and the original English language tests on which this battery was based. For language format measures, repeated-measures multivariate analysis of variance showed that individual estimates of preexisting skill level in English generated the mean number of false positives most approximate to that expected from the normal distribution, whereas the 50th percentile of conventional English language norms did the same for visual-perceptual format measures. When using conventional Spanish or English monolingual norms for language format neuropsychological measures with bilingual Hispanic Americans, individual estimates of preexisting skill level are recommended over the 50th percentile.
Explorations in statistics: the log transformation.
Curran-Everett, Douglas
2018-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
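A small sketch in the spirit of this installment: bootstrap the sample mean on the raw and log scales of a skewed sample and compare the skewness of the two bootstrap distributions; the lognormal sample and replicate count are arbitrary choices.

```python
import numpy as np
from scipy.stats import skew

# Sketch: bootstrap the sample mean on the raw and log scales and compare the
# skewness of the two bootstrap distributions (closer to 0 suggests a more
# nearly normal sampling distribution). Synthetic lognormal data are used.
rng = np.random.default_rng(7)
y = rng.lognormal(mean=1.0, sigma=0.8, size=30)

boot_raw = np.array([rng.choice(y, y.size, replace=True).mean() for _ in range(5000)])
log_y = np.log(y)
boot_log = np.array([rng.choice(log_y, y.size, replace=True).mean() for _ in range(5000)])

print("skewness of bootstrapped means, raw scale:", skew(boot_raw))
print("skewness of bootstrapped means, log scale:", skew(boot_log))
```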
Seed, Mike; van Amerom, Joshua F P; Yoo, Shi-Joon; Al Nafisi, Bahiyah; Grosse-Wortmann, Lars; Jaeggi, Edgar; Jansz, Michael S; Macgowan, Christopher K
2012-11-26
We present the first phase contrast (PC) cardiovascular magnetic resonance (CMR) measurements of the distribution of blood flow in twelve late gestation human fetuses. These were obtained using a retrospective gating technique known as metric optimised gating (MOG). A validation experiment was performed in five adult volunteers where conventional cardiac gating was compared with MOG. Linear regression and Bland Altman plots were used to compare MOG with the gold standard of conventional gating. Measurements using MOG were then made in twelve normal fetuses at a median gestational age of 37 weeks (range 30-39 weeks). Flow was measured in the major fetal vessels and indexed to the fetal weight. There was good correlation between the conventional gated and MOG measurements in the adult validation experiment (R=0.96). Mean flows in ml/min/kg with standard deviations in the major fetal vessels were as follows: combined ventricular output (CVO) 540 ± 101, main pulmonary artery (MPA) 327 ± 68, ascending aorta (AAo) 198 ± 38, superior vena cava (SVC) 147 ± 46, ductus arteriosus (DA) 220 ± 39, pulmonary blood flow (PBF) 106 ± 59, descending aorta (DAo) 273 ± 85, umbilical vein (UV) 160 ± 62, foramen ovale (FO) 107 ± 54. Results expressed as mean percentages of the CVO with standard deviations were as follows: MPA 60 ± 4, AAo 37 ± 4, SVC 28 ± 7, DA 41 ± 8, PBF 19 ± 10, DAo 50 ± 12, UV 30 ± 9, FO 21 ± 12. This study demonstrates how PC CMR with MOG is a feasible technique for measuring the distribution of the normal human fetal circulation in late pregnancy. Our preliminary results are in keeping with findings from previous experimental work in fetal lambs.
Pulse height response of an optical particle counter to monodisperse aerosols
NASA Technical Reports Server (NTRS)
Wilmoth, R. G.; Grice, S. S.; Cuda, V.
1976-01-01
The pulse height response of a right angle scattering optical particle counter has been investigated using monodisperse aerosols of polystyrene latex spheres, di-octyl phthalate and methylene blue. The results confirm previous measurements for the variation of mean pulse height as a function of particle diameter and show good agreement with the relative response predicted by Mie scattering theory. Measured cumulative pulse height distributions were found to fit reasonably well to a log normal distribution with a minimum geometric standard deviation of about 1.4 for particle diameters greater than about 2 micrometers. The geometric standard deviation was found to increase significantly with decreasing particle diameter.
2012-01-01
Background The goals of our study are to determine the most appropriate model for alcohol consumption as an exposure for burden of disease, to analyze the effect of the chosen alcohol consumption distribution on the estimation of the alcohol Population-Attributable Fractions (PAFs), and to characterize the chosen alcohol consumption distribution by exploring if there is a global relationship within the distribution. Methods To identify the best model, the Log-Normal, Gamma, and Weibull prevalence distributions were examined using data from 41 surveys from Gender, Alcohol and Culture: An International Study (GENACIS) and from the European Comparative Alcohol Study. To assess the effect of these distributions on the estimated alcohol PAFs, we calculated the alcohol PAF for diabetes, breast cancer, and pancreatitis using the three above-named distributions and using the more traditional approach based on categories. The relationship between the mean and the standard deviation from the Gamma distribution was estimated using data from 851 datasets for 66 countries from GENACIS and from the STEPwise approach to Surveillance from the World Health Organization. Results The Log-Normal distribution provided a poor fit for the survey data, with Gamma and Weibull distributions providing better fits. Additionally, our analyses showed that there were no marked differences for the alcohol PAF estimates based on the Gamma or Weibull distributions compared to PAFs based on categorical alcohol consumption estimates. The standard deviation of the alcohol distribution was highly dependent on the mean, with a one-unit increase in mean consumption associated with an increase in the standard deviation of 1.258 (95% CI: 1.223 to 1.293) (R2 = 0.9207) for women and 1.171 (95% CI: 1.144 to 1.197) (R2 = 0.9474) for men. Conclusions Although the Gamma distribution and the Weibull distribution provided similar results, the Gamma distribution is recommended to model alcohol consumption from population surveys due to its fit, flexibility, and the ease with which it can be modified. The results showed that a large degree of variance of the standard deviation of the alcohol consumption Gamma distribution was explained by the mean alcohol consumption, allowing for alcohol consumption to be modeled through a Gamma distribution using only average consumption. PMID:22490226
42 CFR 403.734 - Condition of participation: Food services.
Code of Federal Regulations, 2010 CFR
2010-10-01
... food served or desire alternative choices. (3) Furnish meals at regular times comparable to normal..., stored, prepared, distributed, and served under sanitary conditions. (b) Standard: Meals. The RNHCI must serve meals that furnish each patient with adequate nourishment in accordance with the recommended...
Slant path L- and S-Band tree shadowing measurements
NASA Technical Reports Server (NTRS)
Vogel, Wolfhard J.; Torrence, Geoffrey W.
1994-01-01
This contribution presents selected results from simultaneous L- and S-Band slant-path fade measurements through a pecan, a cottonwood, and a pine tree employing a tower-mounted transmitter and dual-frequency receiver. A single, circularly-polarized antenna was used at each end of the link. The objective was to provide information for personal communications satellite design on the correlation of tree shadowing between frequencies near 1620 and 2500 MHz. Fades were measured along 10 m lateral distance with 5 cm spacing. Instantaneous fade differences between L- and S-Band exhibited normal distribution with means usually near 0 dB and standard deviations from 5.2 to 7.5 dB. The cottonwood tree was an exception, with 5.4 dB higher average fading at S- than at L-Band. The spatial autocorrelation reduced to near zero with lags of about 10 lambda. The fade slope in dB/MHz is normally distributed with zero mean and standard deviation increasing with fade level.
Algae Tile Data: 2004-2007, BPA-51; Preliminary Report, October 28, 2008.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holderman, Charles
Multiple files containing 2004 through 2007 Tile Chlorophyll data for the Kootenai River sites designated as KR1, KR2, KR3, KR4 (Downriver) and KR6, KR7, KR9, KR9.1, KR10, KR11, KR12, KR13, KR14 (Upriver) were received by SCS. For a complete description of the sites covered, please refer to http://ktoi.scsnetw.com. To maintain consistency with the previous SCS algae reports, all analyses were carried out separately for the Upriver and Downriver categories, as defined in the aforementioned paragraph. The Upriver designation, however, now includes three additional sites, KR11, KR12, and the nutrient addition site, KR9.1. Summary statistics and information on the four responses, chlorophyll a, chlorophyll a Accrual Rate, Total Chlorophyll, and Total Chlorophyll Accrual Rate, are presented in Print Out 2. Computations were carried out separately for each river position (Upriver and Downriver) and year. For example, the Downriver position in 2004 showed an average Chlorophyll a level of 25.5 mg with a standard deviation of 21.4 and minimum and maximum values of 3.1 and 196 mg, respectively. The Upriver data in 2004 showed a lower overall average chlorophyll a level at 2.23 mg with a lower standard deviation (3.6) and minimum and maximum values of 0.13 and 28.7, respectively. A more comprehensive summary of each variable and position is given in Print Out 3. This lists the information above as well as other summary information such as the variance, standard error, various percentiles and extreme values. Using the 2004 Downriver Chlorophyll a as an example again, the variance of this data was 459.3 and the standard error of the mean was 1.55. The median value or 50th percentile was 21.3, meaning 50% of the data fell above and below this value. It should be noted that this value is somewhat different from the mean of 25.5. This is an indication that the frequency distribution of the data is not symmetrical (skewed). The skewness statistic, listed as part of the first section of each analysis, quantifies this. In a symmetric distribution, such as a Normal distribution, the skewness value would be 0. The tile chlorophyll data, however, show larger values. Chlorophyll a, in the 2004 Downriver example, has a skewness statistic of 3.54, which is quite high. In the last section of the summary analysis, the stem and leaf plot graphically demonstrates the asymmetry, showing most of the data centered around 25 with a large value at 196. The final plot is referred to as a normal probability plot and graphically compares the data to a theoretical normal distribution. For chlorophyll a, the data (asterisks) deviate substantially from the theoretical normal distribution (diagonal reference line of pluses), indicating that the data are non-normal. Other response variables in both the Downriver and Upriver categories also indicated skewed distributions. Because the sample size and mean comparison procedures below require symmetrical, normally distributed data, each response in the data set was logarithmically transformed. The logarithmic transformation, in this case, can help mitigate skewness problems. The summary statistics for the four transformed responses (log-ChlorA, log-TotChlor, and log-accrual) are given in Print Out 4. For the 2004 Downriver Chlorophyll a data, the logarithmic transformation reduced the skewness value to -0.36 and produced a more bell-shaped symmetric frequency distribution. Similar improvements are shown for the remaining variables and river categories. Hence, all subsequent analyses given below are based on logarithmic transformations of the original responses.
Nandi, Arijit; Sweet, Elizabeth; Kawachi, Ichiro; Heymann, Jody; Galea, Sandro
2014-02-01
We examined associations between macrolevel economic factors hypothesized to drive changes in distributions of weight and body mass index (BMI) in a representative sample of 200,796 men and women from 40 low- and middle-income countries. We used meta-regressions to describe ecological associations between macrolevel factors and mean BMIs across countries. Multilevel regression was used to assess the relation between macrolevel economic characteristics and individual odds of underweight and overweight relative to normal weight. In multilevel analyses adjusting for individual-level characteristics, a 1-standard-deviation increase in trade liberalization was associated with 13% (95% confidence interval [CI] = 0.76, 0.99), 17% (95% CI = 0.71, 0.96), 13% (95% CI = 0.76, 1.00), and 14% (95% CI = 0.75, 0.99) lower odds of underweight relative to normal weight among rural men, rural women, urban men, and urban women, respectively. Economic development was consistently associated with higher odds of overweight relative to normal weight. Among rural men, a 1-standard-deviation increase in foreign direct investment was associated with 17% (95% CI = 1.02, 1.35) higher odds of overweight relative to normal weight. Macrolevel economic factors may be implicated in global shifts in epidemiological patterns of weight.
Sweet, Elizabeth; Kawachi, Ichiro; Heymann, Jody; Galea, Sandro
2014-01-01
Objectives. We examined associations between macrolevel economic factors hypothesized to drive changes in distributions of weight and body mass index (BMI) in a representative sample of 200 796 men and women from 40 low- and middle-income countries. Methods. We used meta-regressions to describe ecological associations between macrolevel factors and mean BMIs across countries. Multilevel regression was used to assess the relation between macrolevel economic characteristics and individual odds of underweight and overweight relative to normal weight. Results. In multilevel analyses adjusting for individual-level characteristics, a 1–standard-deviation increase in trade liberalization was associated with 13% (95% confidence interval [CI] = 0.76, 0.99), 17% (95% CI = 0.71, 0.96), 13% (95% CI = 0.76, 1.00), and 14% (95% CI = 0.75, 0.99) lower odds of underweight relative to normal weight among rural men, rural women, urban men, and urban women, respectively. Economic development was consistently associated with higher odds of overweight relative to normal weight. Among rural men, a 1–standard-deviation increase in foreign direct investment was associated with 17% (95% CI = 1.02, 1.35) higher odds of overweight relative to normal weight. Conclusions. Macrolevel economic factors may be implicated in global shifts in epidemiological patterns of weight. PMID:24228649
An asymptotic analysis of the logrank test.
Strawderman, R L
1997-01-01
Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
Architecture, Voltage, and Components for a Turboelectric Distributed Propulsion Electric Grid
NASA Technical Reports Server (NTRS)
Armstrong, Michael J.; Blackwelder, Mark; Bollman, Andrew; Ross, Christine; Campbell, Angela; Jones, Catherine; Norman, Patrick
2015-01-01
The development of a wholly superconducting turboelectric distributed propulsion system presents unique opportunities for the aerospace industry. However, this transition from normally conducting systems to superconducting systems significantly increases the equipment complexity necessary to manage the electrical power systems. Due to the low technology readiness level (TRL) nature of all components and systems, current Turboelectric Distributed Propulsion (TeDP) technology developments are driven by an ambiguous set of system-level electrical integration standards for an airborne microgrid system (Figure 1). While multiple decades' worth of advancements are still required for concept realization, current system-level studies are necessary to focus the technology development, target specific technological shortcomings, and enable accurate prediction of concept feasibility and viability. An understanding of the performance sensitivity to operating voltages and an early definition of advantageous voltage regulation standards for unconventional airborne microgrids will allow for more accurate targeting of technology development. Propulsive power-rated microgrid systems necessitate the introduction of new aircraft distribution system voltage standards. All protection, distribution, control, power conversion, generation, and cryocooling equipment are affected by voltage regulation standards. Information on the desired operating voltage and voltage regulation is required to determine nominal and maximum currents for sizing distribution and fault isolation equipment, developing machine topologies and machine controls, and the physical attributes of all component shielding and insulation. Voltage impacts many components and system performance.
Probabilistic Modeling and Simulation of Metal Fatigue Life Prediction
2002-09-01
Fragmentary search excerpt: the report asks whether a sampled distribution demonstrates the central limit theorem ("Obviously not!") and compares the situation to materials testing; sampling only NBA basketball stars would yield a pseudo-normal distribution with a very small standard deviation, and the investigators must understand how the extremes ("the midgets and the NBA stars") will affect the total solution.
Mean estimation in highly skewed samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, S P
The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
Individual vision and peak distribution in collective actions
NASA Astrophysics Data System (ADS)
Lu, Peng
2017-06-01
People decide whether to participate as participants or abstain as free riders in collective actions under heterogeneous visions. In addition to utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e. the revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The outcomes of simulations indicate that vision heterogeneity reduces the values of peaks, and the relative variance of peaks is stable. Under normal distributions of vision heterogeneity and other factors, the peaks of participants are normally distributed as well. Therefore, it is necessary to predict the distribution traits of peaks from the distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with parameters of both mean and standard deviation, which provides confidence intervals and robust predictions of peaks. In addition, we validate the peak model via the Yuyuan Incident, a real case in China (2014), and the model works well in explaining the dynamics and predicting the peak of the real case.
Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A
2006-10-15
Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
NASA Technical Reports Server (NTRS)
Sakuraba, K.; Tsuruda, Y.; Hanada, T.; Liou, J.-C.; Akahoshi, Y.
2007-01-01
This paper summarizes two new satellite impact tests conducted to investigate the outcome of low- and hyper-velocity impacts on two identical target satellites. The first experiment was performed at a low velocity of 1.5 km/s using a 40-gram aluminum alloy sphere, whereas the second experiment was performed at a hyper-velocity of 4.4 km/s using a 4-gram aluminum alloy sphere launched by a two-stage light gas gun at the Kyushu Institute of Technology. To date, approximately 1,500 fragments from each impact test have been collected for detailed analysis. Each piece was analyzed based on the method used in the NASA Standard Breakup Model 2000 revision. The detailed analysis leads to two conclusions: 1) the similarity in the mass distribution of fragments between low- and hyper-velocity impacts encourages the development of a general-purpose distribution model applicable over a wide impact velocity range, and 2) the difference in the area-to-mass ratio distribution between the impact experiments and the NASA standard breakup model suggests describing the area-to-mass ratio by a bi-normal distribution.
The inclusion of capillary distribution in the adiabatic tissue homogeneity model of blood flow
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zeman, V.; Darko, J.; Lee, T.-Y.; Milosevic, M. F.; Haider, M.; Warde, P.; Yeung, I. W. T.
2001-05-01
We have developed a non-invasive imaging tracer kinetic model for blood flow which takes into account the distribution of capillaries in tissue. Each individual capillary is assumed to follow the adiabatic tissue homogeneity model. The main strength of our new model is in its ability to quantify the functional distribution of capillaries by the standard deviation in the time taken by blood to pass through the tissue. We have applied our model to the human prostate and have tested two different types of distribution functions. Both distribution functions yielded very similar predictions for the various model parameters, and in particular for the standard deviation in transit time. Our motivation for developing this model is the fact that the capillary distribution in cancerous tissue is drastically different from in normal tissue. We believe that there is great potential for our model to be used as a prognostic tool in cancer treatment. For example, an accurate knowledge of the distribution in transit times might result in an accurate estimate of the degree of tumour hypoxia, which is crucial to the success of radiation therapy.
CHANGES IN FERRITIN H- AND L-CHAINS IN CANINE LENSES WITH AGE-RELATED NUCLEAR CATARACT
Goralska, Małgorzata; Nagar, Steven; Colitz, Carmen M.H.; Fleisher, Lloyd N.; McGahan, M. Christine
2014-01-01
PURPOSE To determine potential differences in the characteristics of the iron storage protein, ferritin and its heavy (H) and light (L) subunits in fiber cells from cataractous and normal lenses of older dogs. METHODS Lens fiber cell homogenates were analyzed by SDS-PAGE and ferritin chains were immunodetected with ferritin chain-specific antibodies. Ferritin concentration was measured by ELISA. Immunohistochemistry was used to localize ferritin chains in lens sections. RESULTS The concentration of assembled ferritin was comparable in normal and cataractous lenses of similarly aged dogs. The ferritin L-chain detected in both lens types was modified and was about 11 kDa larger (30 kDa) than standard L-chain (19 kDa) purified from canine liver. The H-chain identified in cataractous fiber cells (29 kDa) differed from 21 kDa standard canine H-chain and from 12 kDa modified H-chain present in fiber cells of normal lenses. Histologic analysis revealed that the H-chain was distributed differently throughout cataractous lenses when compared to normal lenses. There was also a difference in subunit makeup of assembled ferritin between the two lens types. Ferritin from cataractous lenses contained more H-chain and bound 11-fold more iron than ferritin from normal lenses. CONCLUSIONS There are significant differences in the characteristics of ferritin H-chain and its distribution in canine cataractous lenses as compared to normal lenses. The higher content of H-chain in assembled ferritin allows this molecule to sequester more iron. In addition the accumulation of H-chain in deeper fiber layers of the lens may be part of a defense mechanism by which the cataractous lens limits iron-catalyzed oxidative damage. PMID:18708625
Directional Dependence in Developmental Research
ERIC Educational Resources Information Center
von Eye, Alexander; DeShon, Richard P.
2012-01-01
In this article, we discuss and propose methods that may be of use to determine direction of dependence in non-normally distributed variables. First, it is shown that standard regression analysis is unable to distinguish between explanatory and response variables. Then, skewness and kurtosis are discussed as tools to assess deviation from…
Bayesian Estimation Supersedes the "t" Test
ERIC Educational Resources Information Center
Kruschke, John K.
2013-01-01
Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional "t" tests) when certainty in the estimate is…
Abuasbi, Falastine; Lahham, Adnan; Abdel-Raziq, Issam Rashid
2018-04-01
This study focused on the measurement of residential exposure to power-frequency (50-Hz) electric and magnetic fields in the city of Ramallah, Palestine. A group of 32 semi-randomly selected residences distributed across the city was investigated for field variations. Measurements were performed with the Spectrum Analyzer NF-5035 and were carried out at one meter above ground level in the residence's bedroom or living room under both zero- and normal-power conditions. Field variations were recorded over 6-min intervals and sometimes over a few hours. Electric fields under normal power use were relatively low; ~59% of residences experienced mean electric fields <10 V/m. The highest mean electric field of 66.9 V/m was found at residence R27. Electric field values were log-normally distributed with a geometric mean of 9.6 V/m and a geometric standard deviation of 3.5. Background electric fields measured under zero-power use were very low; ~80% of residences experienced background electric fields <1 V/m. Under normal power use, the highest mean magnetic field (0.45 μT) was found at residence R26, where an indoor power substation exists. However, ~81% of residences experienced mean magnetic fields <0.1 μT. Magnetic fields measured inside the 32 residences also showed a log-normal distribution, with a geometric mean of 0.04 μT and a geometric standard deviation of 3.14. Under zero-power conditions, ~7% of residences experienced an average background magnetic field >0.1 μT. Fields from appliances showed a maximum mean electric field of 67.4 V/m from a hair dryer and a maximum mean magnetic field of 13.7 μT from a microwave oven. No single result surpassed the ICNIRP limits for general public exposure to ELF fields, but the 0.3-0.4 μT interval discussed in connection with possible non-thermal health impacts of ELF magnetic field exposure was reached in 13% of the residences.
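For reference, geometric summary statistics of the kind quoted above can be computed as below; the field values are hypothetical stand-ins for the survey measurements.

```python
import numpy as np

# Sketch: geometric mean (GM) and geometric standard deviation (GSD) of
# log-normally distributed field measurements, computed from hypothetical
# mean magnetic-field values (uT).
b_fields = np.array([0.02, 0.03, 0.05, 0.04, 0.45, 0.01, 0.08, 0.06, 0.02, 0.10])

log_b = np.log(b_fields)
gm = np.exp(log_b.mean())
gsd = np.exp(log_b.std(ddof=1))
print(f"GM = {gm:.3f} uT, GSD = {gsd:.2f}")
```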
Modeling extreme hurricane damage in the United States using generalized Pareto distribution
NASA Astrophysics Data System (ADS)
Dey, Asim Kumer
Extreme value distributions are used to understand and model natural calamities, man-made catastrophes and financial collapses. Extreme value theory has been developed to study the frequency of such events and to construct a predictive model so that one can attempt to forecast the frequency of a disaster and the amount of damage from such a disaster. In this study, hurricane damages in the United States from 1900-2012 have been studied. The aim of the paper is three-fold. First, normalizing hurricane damage and fitting an appropriate model for the normalized damage data. Secondly, predicting the maximum economic damage from a hurricane in the future by using the concept of return period. Finally, quantifying the uncertainty in the inference of extreme return levels of hurricane losses by using a simulated hurricane series generated by bootstrap sampling. Normalized hurricane damage data are found to follow a generalized Pareto distribution. It is demonstrated that the standard deviation and coefficient of variation increase with the return period, which indicates an increase in uncertainty with model extrapolation.
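A sketch of the peaks-over-threshold version of this analysis: a generalized Pareto fit to synthetic normalized damages and a return-level estimate from the standard POT formula; the damage series, threshold, and return period are assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import genpareto

# Sketch of the peaks-over-threshold approach: fit a generalized Pareto
# distribution to damages above a threshold and estimate a return level with
# the standard POT formula. Damages here are synthetic.
rng = np.random.default_rng(8)
damage = rng.pareto(1.5, 500) * 1e9                 # hypothetical normalized damages (USD)

u = np.quantile(damage, 0.90)                        # threshold choice is a judgment call
exc = damage[damage > u] - u
xi, loc, sigma = genpareto.fit(exc, floc=0)          # shape xi, scale sigma
zeta = exc.size / damage.size                        # exceedance rate

m = 1000                                             # return period, in number of observations
return_level = u + (sigma / xi) * ((m * zeta) ** xi - 1.0)
print(f"estimated {m}-observation return level: {return_level:.3e} USD")
```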
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
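As a simplified, single-level stand-in for the Huber-type approach described (the two-level estimator and its standard errors are not reproduced), a moderation model with an interaction term can be fit by M-estimation as sketched below; the data are simulated with heavy-tailed errors.

```python
import numpy as np
import statsmodels.api as sm

# Simplified single-level sketch: a moderation (interaction) model
# y = b0 + b1*x + b2*m + b3*x*m + e, estimated by M-estimation with Huber-type
# weights and compared with ordinary least squares under heavy-tailed errors.
rng = np.random.default_rng(9)
n = 300
x = rng.normal(size=n)
m = rng.normal(size=n)
e = rng.standard_t(df=3, size=n)                 # heavy-tailed errors
y = 0.5 + 0.4 * x + 0.3 * m + 0.25 * x * m + e

X = sm.add_constant(np.column_stack([x, m, x * m]))
robust_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
ols_fit = sm.OLS(y, X).fit()

print("Huber M-estimates:", np.round(robust_fit.params, 3))
print("OLS estimates    :", np.round(ols_fit.params, 3))
```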
NASA Technical Reports Server (NTRS)
Howell, W. E.
1974-01-01
The mechanical properties of a symmetrical, eight-step, titanium-boron-epoxy joint are discussed. A study of the effect of adhesive and matrix stiffnesses on the axial, normal, and shear stress distributions was made using the finite element method. The NASA Structural Analysis Program (NASTRAN) was used for the analysis. The elastic modulus of the adhesive was varied from 345 MPa to 3100 MPa with the nominal value of 1030 MPa as a standard. The nominal values were used to analyze the stability of the joint. The elastic moduli were varied to determine their effect on the stresses in the joint.
Analysis of Realized Volatility for Nikkei Stock Average on the Tokyo Stock Exchange
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya; Watanabe, Toshiaki
2016-04-01
We calculate the realized volatility of the Nikkei Stock Average (Nikkei 225) Index on the Tokyo Stock Exchange and investigate the return dynamics. To avoid bias in the realized volatility arising from non-trading hours, we calculate realized volatility separately in the two trading sessions, i.e. morning and afternoon, of the Tokyo Stock Exchange, and find that microstructure noise decreases the realized volatility at small sampling frequencies. Using realized volatility as a proxy for the integrated volatility, we standardize returns in the morning and afternoon sessions and investigate the normality of the standardized returns by calculating the variance, kurtosis and 6th moment. We find that the variance, kurtosis and 6th moment are consistent with those of the standard normal distribution, which indicates that the return dynamics of the Nikkei Stock Average are well described by a Gaussian random process with time-varying volatility.
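A minimal sketch of the standardization step: realized variance from squared intraday returns, session returns divided by realized volatility, and the moments checked against standard-normal values; the synthetic intraday returns are stand-ins for the Nikkei 225 data.

```python
import numpy as np
from scipy.stats import kurtosis

# Sketch: sum squared intraday returns within a session to get realized
# variance, standardize the session return by the realized volatility, and
# inspect its moments. Synthetic intraday returns with time-varying volatility.
rng = np.random.default_rng(10)
n_days, n_intraday = 1000, 60
vol = 0.01 * np.exp(0.3 * rng.normal(size=n_days))            # daily volatility level
intraday = rng.normal(size=(n_days, n_intraday)) * (vol[:, None] / np.sqrt(n_intraday))

session_return = intraday.sum(axis=1)
realized_vol = np.sqrt((intraday ** 2).sum(axis=1))
z = session_return / realized_vol

print("variance  :", z.var(), " (standard normal: 1)")
print("kurtosis  :", kurtosis(z, fisher=False), " (standard normal: 3)")
print("6th moment:", np.mean(z ** 6), " (standard normal: 15)")
```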
Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model.
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time it shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a-priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
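To illustrate the practical point that the Beta model needs only the first two concentration moments, the sketch below recovers Beta parameters by the method of moments and evaluates an exceedance probability; the scaling by a reference concentration and all numbers are assumptions, and the SDE-based derivation is not reproduced.

```python
from scipy.stats import beta

# Illustration of the practical use highlighted in the abstract: the Beta model
# is fully characterized by the first two concentration moments. Concentrations
# are scaled by a reference maximum C0 so they live on [0, 1]; values are
# hypothetical.
C0 = 1.0                     # reference (maximum) concentration used for scaling
mean_c, var_c = 0.15, 0.02   # first two moments of the scaled concentration

common = mean_c * (1.0 - mean_c) / var_c - 1.0     # method-of-moments factor
a, b = mean_c * common, (1.0 - mean_c) * common

threshold = 0.4 * C0
p_exceed = beta.sf(threshold / C0, a, b)           # probability of exceeding the threshold
print(f"Beta(a={a:.2f}, b={b:.2f}); P(C > {threshold}) = {p_exceed:.3f}")
```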
Konz, Ioana; Fernández, Beatriz; Fernández, M Luisa; Pereiro, Rosario; González, Héctor; Alvarez, Lydia; Coca-Prados, Miguel; Sanz-Medel, Alfredo
2013-04-01
Laser ablation coupled to inductively coupled plasma mass spectrometry has been developed for the elemental imaging of Mg, Fe and Cu distribution in histological tissue sections of fixed eyes, embedded in paraffin, from human donors (cadavers). This work presents the development of a novel internal standard correction methodology based on the deposition of a homogeneous thin gold film on the tissue surface and the use of the ¹⁹⁷Au⁺ signal as internal standard. Sample preparation (tissue section thickness) and laser conditions were carefully optimized, and internal normalisation using ¹⁹⁷Au⁺ was compared with ¹³C⁺ correction for imaging applications. ²⁴Mg⁺, ⁵⁶Fe⁺ and ⁶³Cu⁺ distributions were investigated in histological sections of the anterior segment of the eye (including the iris, ciliary body, cornea and trabecular meshwork) and were shown to be heterogeneously distributed along those tissue structures. Reproducibility was assessed by imaging different human eye sections from the same donor and from ten different eyes from adult normal donors, which showed that similar spatial maps were obtained and therefore demonstrated the analytical potential of using ¹⁹⁷Au⁺ as internal standard. The proposed analytical approach could offer a robust tool with great practical interest for clinical studies, e.g. to investigate trace element distribution of metals and their alterations in ocular diseases.
Heterotrophic plate count and consumer's health under special consideration of water softeners.
Hambsch, Beate; Sacré, Clara; Wagner, Ivo
2004-05-01
The phenomenon of bacterial growth in water softeners has been well known for years. To upgrade the hygienic safety of water softeners, the German DIN Standard 19636 was developed to assure that the distribution system could not be contaminated by these devices and that the drinking water to be used in the household still meets the microbiological standards according to the German drinking water guidelines, i.e. among others a heterotrophic plate count (HPC) below 100 CFU/ml. Moreover, the standard for water softeners includes a test for contamination with Pseudomonas aeruginosa, which has to be eliminated by disinfection during the regeneration phase. This is possible by sanitizing the resin bed during regeneration by producing chlorine. The results of the last 10 years of tests of water softeners according to DIN 19636 showed that it is possible to produce water softeners that comply with that standard. Approximately 60% of the tested models were accepted. P. aeruginosa is used as an indicator for potentially pathogenic bacteria being able to grow also in low-nutrient conditions, which normally prevail in drinking water. Like other heterotrophs, the numbers of P. aeruginosa increase rapidly as stagnation occurs. Normally P. aeruginosa is not present in the distributed drinking water. However, under certain conditions, P. aeruginosa can be introduced into the drinking water distribution system, for instance, during construction work. The occurrence of P. aeruginosa is shown in different cases in treatment plants, public drinking water systems and in-house installations. The compliance with DIN 19636 provides assurance that a water softener will not be a constant source of contamination, even if it is once inoculated with a potentially pathogenic bacterium like P. aeruginosa. Copyright 2003 Elsevier B.V.
Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric
2007-01-01
A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacteria Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by considering one contaminated sample in prevalence studies in which samples are in fact negative. This deliberate overestimation is necessary to complete calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated with concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. There was a significantly lower estimation of contamination in the papers and reports of 2000 to 2005 than in those of 1988 to 1999 and a lower estimation of contamination of leafy salads than that of sprouts and other vegetables. The interest of the mixture model for the estimation of microbial contamination is discussed. PMID:17098926
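For orientation, the exceedance calculation under the single-normal model can be sketched as below, using the reported mean and standard deviation of log counts; note that the percentages quoted in the abstract come from the mixture model with prevalence information, so this sketch will not reproduce them exactly.

```python
from scipy import stats

# Fitted normal for log10 viable L. monocytogenes per gram (single-normal model above)
mu, sigma = -2.63, 1.48

# Probability that contamination exceeds 1, 2 and 3 log CFU/g under that normal
for threshold in (1.0, 2.0, 3.0):
    p = stats.norm.sf(threshold, loc=mu, scale=sigma)
    print(f"P(log10 count > {threshold}) = {100 * p:.2f}%")
```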
Generalized t-statistic for two-group classification.
Komori, Osamu; Eguchi, Shinto; Copas, John B
2015-06-01
In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
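A minimal sketch of the equal-covariance special case (U equal to the identity), in which the direction maximizing the standardized mean difference is the classic Fisher linear discriminant; this is not the generalized, filtered estimator of the paper.

```python
import numpy as np

def lda_direction(x_cases, x_controls):
    """Linear function w maximizing the standardized mean difference
    (two-sample t-statistic) between groups; with equal covariance
    matrices this is the classic Fisher/linear discriminant direction.
    This is the U = identity special case, not the generalized method."""
    mean_diff = x_cases.mean(axis=0) - x_controls.mean(axis=0)
    n1, n2 = len(x_cases), len(x_controls)
    pooled = ((n1 - 1) * np.cov(x_cases, rowvar=False)
              + (n2 - 1) * np.cov(x_controls, rowvar=False)) / (n1 + n2 - 2)
    w = np.linalg.solve(pooled, mean_diff)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
controls = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=200)
cases = rng.multivariate_normal([1, 0.5], [[1, 0.3], [0.3, 1]], size=200)
print(lda_direction(cases, controls))
```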
Three New Methods for Analysis of Answer Changes
ERIC Educational Resources Information Center
Sinharay, Sandip; Johnson, Matthew S.
2017-01-01
In a pioneering research article, Wollack and colleagues suggested the "erasure detection index" (EDI) to detect test tampering. The EDI can be used with or without a continuity correction and is assumed to follow the standard normal distribution under the null hypothesis of no test tampering. When used without a continuity correction,…
NASA Astrophysics Data System (ADS)
Fukami, Christine S.; Sullivan, Amy P.; Ryan Fulgham, S.; Murschell, Trey; Borch, Thomas; Smith, James N.; Farmer, Delphine K.
2016-07-01
Particle-into-Liquid Samplers (PILS) have become a standard aerosol collection technique, and are widely used in both ground and aircraft measurements in conjunction with off-line ion chromatography (IC) measurements. Accurate and precise background samples are essential to account for gas-phase components not efficiently removed and any interference in the instrument lines, collection vials or off-line analysis procedures. For aircraft sampling with PILS, backgrounds are typically taken with in-line filters to remove particles prior to sample collection once or twice per flight, with more numerous backgrounds taken on the ground. Here, we use data collected during the Front Range Air Pollution and Photochemistry Éxperiment (FRAPPÉ) to demonstrate not only that multiple background filter samples are essential to attain a representative background, but also that the chemical background signals do not follow the Gaussian statistics typically assumed. Instead, the background signals for all chemical components analyzed from 137 background samples (taken from ∼78 total sampling hours over 18 flights) follow a log-normal distribution, meaning that the typical approaches of averaging background samples and/or assuming a Gaussian distribution cause an overestimation of the background - and thus an underestimation of sample concentrations. Our approach of deriving backgrounds from the peak of the log-normal distribution results in detection limits of 0.25, 0.32, 3.9, 0.17, 0.75 and 0.57 μg m⁻³ for sub-micron aerosol nitrate (NO₃⁻), nitrite (NO₂⁻), ammonium (NH₄⁺), sulfate (SO₄²⁻), potassium (K⁺) and calcium (Ca²⁺), respectively. The difference in backgrounds calculated from assuming a Gaussian distribution versus a log-normal distribution was most extreme for NH₄⁺, resulting in a background that was 1.58× that determined from fitting a log-normal distribution.
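A hedged sketch of the background strategy described above: fit a log-normal to the filter-blank signals and take the peak (mode) of the fitted density as the background, rather than the blank average. The scipy-based helper below is illustrative, not the authors' processing code.

```python
import numpy as np
from scipy import stats

def lognormal_mode_background(blank_signals):
    """Fit a log-normal to background (filter blank) signals and return
    its mode, i.e. the peak of the fitted density, as the background
    estimate instead of the arithmetic mean of the blanks."""
    shape, loc, scale = stats.lognorm.fit(blank_signals, floc=0)
    # For lognorm with loc = 0: mode = scale * exp(-shape**2)
    return scale * np.exp(-shape ** 2)

rng = np.random.default_rng(2)
blanks = rng.lognormal(mean=-1.0, sigma=0.8, size=137)   # synthetic blank signals
print("mean-based background:", blanks.mean())
print("mode-based background:", lognormal_mode_background(blanks))
```

On skewed blank data the mean-based value sits above the mode-based one, which is the overestimation effect described in the abstract.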
Schlain, Brian; Amaravadi, Lakshmi; Donley, Jean; Wickramasekera, Ananda; Bennett, Donald; Subramanyam, Meena
2010-01-31
In recent years there has been growing recognition of the impact of anti-drug or anti-therapeutic antibodies (ADAs, ATAs) on the pharmacokinetic and pharmacodynamic behavior of the drug, which ultimately affects drug exposure and activity. These anti-drug antibodies can also impact safety of the therapeutic by inducing a range of reactions from hypersensitivity to neutralization of the activity of an endogenous protein. Assessments of immunogenicity, therefore, are critically dependent on the bioanalytical method used to test samples, in which a positive versus negative reactivity is determined by a statistically derived cut point based on the distribution of drug naïve samples. For non-normally distributed data, a novel gamma-fitting method for obtaining assay cut points is presented. Non-normal immunogenicity data distributions, which tend to be unimodal and positively skewed, can often be modeled by 3-parameter gamma fits. Under a gamma regime, gamma-based cut points were found to be more accurate (closer to their targeted false positive rates) compared to normal or log-normal methods and more precise (smaller standard errors of cut point estimators) compared with the nonparametric percentile method. Under a gamma regime, normal theory based methods for estimating cut points targeting a 5% false positive rate were found in computer simulation experiments to have, on average, false positive rates ranging from 6.2 to 8.3% (or positive biases between +1.2 and +3.3%), with bias decreasing with the magnitude of the gamma shape parameter. The log-normal fits tended, on average, to underestimate false positive rates, with negative biases as large as -2.3%, with absolute bias decreasing with the shape parameter. These results were consistent with the well-known fact that gamma distributions become less skewed and closer to a normal distribution as their shape parameters increase. Inflated false positive rates, especially in a screening assay, shift the emphasis to confirming test results in a subsequent test (confirmatory assay). On the other hand, deflated false positive rates in the case of screening immunogenicity assays will not meet the minimum 5% false positive target as proposed in the immunogenicity assay guidance white papers. Copyright 2009 Elsevier B.V. All rights reserved.
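A minimal sketch of the gamma-based cut point idea, assuming a three-parameter (shape, location, scale) gamma fit to drug-naïve responses and a 5% targeted false-positive rate; data and names are illustrative.

```python
import numpy as np
from scipy import stats

def gamma_cut_point(naive_responses, false_positive_rate=0.05):
    """Screening cut point from a 3-parameter gamma fit (shape, loc, scale)
    to drug-naive responses: the (1 - FPR) quantile of the fitted gamma."""
    shape, loc, scale = stats.gamma.fit(naive_responses)
    return stats.gamma.ppf(1.0 - false_positive_rate, shape, loc=loc, scale=scale)

rng = np.random.default_rng(3)
naive = rng.gamma(shape=3.0, scale=0.2, size=200) + 0.5   # skewed, shifted signals
print("gamma-based cut point:  ", gamma_cut_point(naive))
print("normal-theory cut point:", naive.mean() + 1.645 * naive.std(ddof=1))
```

On skewed data of this kind the normal-theory cut point tends to miss the targeted false-positive rate, which is the bias the abstract quantifies.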
The Central Limit Theorem for Supercritical Oriented Percolation in Two Dimensions
NASA Astrophysics Data System (ADS)
Tzioufas, Achillefs
2018-04-01
We consider the cardinality of supercritical oriented bond percolation in two dimensions. We show that, whenever the origin is conditioned to percolate, the process appropriately normalized converges asymptotically in distribution to the standard normal law. This resolves a longstanding open problem pointed out in several instances in the literature. The result applies also to the continuous-time analog of the process, viz. the basic one-dimensional contact process. We also derive general random-indices central limit theorems for associated random variables as byproducts of our proof.
Altered peripheral profile of blood cells in Alzheimer disease
Chen, Si-Han; Bu, Xian-Le; Jin, Wang-Sheng; Shen, Lin-Lin; Wang, Jun; Zhuang, Zheng-Qian; Zhang, Tao; Zeng, Fan; Yao, Xiu-Qing; Zhou, Hua-Dong; Wang, Yan-Jiang
2017-01-01
Alzheimer disease (AD) has been made a global priority for its multifactorial pathogenesis and lack of disease-modifying therapies. We sought to investigate changes in the routine blood profile in AD and their correlation with disease severity. In all, 92 AD patients and 84 age- and sex-matched normal controls were enrolled and their routine blood profiles were evaluated. Alzheimer disease patients had increased levels of mean corpuscular hemoglobin, mean corpuscular volume, red cell distribution width-standard deviation, and mean platelet volume, and decreased levels of platelet distribution width, red blood cells, hematocrit, hemoglobin, lymphocytes, and basophils compared with normal controls. Alterations in the quantity and quality of blood cells may be involved in the pathogenesis of AD and contribute to disease progression. PMID:28538375
Influence of weight and body fat distribution on bone density in postmenopausal women.
Murillo-Uribe, A; Carranza-Lira, S; Martínez-Trejo, N; Santos-González, J
2000-01-01
To determine whether obesity or body fat distribution induces a greater modification of bone remodeling biochemistry (BRB) and bone density in postmenopausal women. One hundred and thirteen postmenopausal patients were studied. They were initially divided according to body mass index (BMI), and afterwards by waist-hip ratio (WHR), as well as combinations of the two factors. Hormone measurements and assessments of BRB were also done. Dual-energy X-ray absorptiometry of the lumbar column and hip was performed with Lunar DPXL equipment, and the standard deviation in relation to young adult (T) and age-matched subjects (Z) was calculated. Statistical analysis was done by the Mann-Whitney U test. The relation of BMI and WHR with the variables was calculated by simple regression analysis. When divided according to BMI, there was greater bone density in the femoral neck in those with normal weight. After dividing according to WHR, the Z scores had a trend to a lesser decrease in those with upper-level body fat distribution. Divided according to BMI and WHR, obese patients with upper-level body fat distribution had greater bone density in the lumbar column than those with normal weight and lower-level body fat distribution. With the same WHR, those with normal weight had greater bone density than those who were obese. A beneficial effect of upper-level body fat distribution on bone density was found. It is greater than that from obesity alone, and obesity and upper-level body fat distribution have an additive effect on bone density.
Jefferson, Angela L; Holland, Christopher M; Tate, David F; Csapo, Istvan; Poppas, Athena; Cohen, Ronald A; Guttmann, Charles R G
2011-01-01
Reduced cardiac output is associated with increased white matter hyperintensities (WMH) and executive dysfunction in older adults, which may be secondary to relations between systemic and cerebral perfusion. This study preliminarily describes the regional distribution of cerebral WMH in the context of a normal cerebral perfusion atlas and aims to determine if these variables are associated with reduced cardiac output. Thirty-two participants (72 ± 8 years old, 38% female) with cardiovascular risk factors or disease underwent structural MRI acquisition at 1.5T using a standard imaging protocol that included FLAIR sequences. WMH distribution was examined in common anatomical space using voxel-based morphometry and as a function of normal cerebral perfusion patterns by overlaying a single photon emission computed tomography (SPECT) atlas. Doppler echocardiogram data was used to dichotomize the participants on the basis of low (n=9) and normal (n=23) cardiac output. Global WMH count and volume did not differ between the low and normal cardiac output groups; however, atlas-derived SPECT perfusion values in regions of hyperintensities were reduced in the low versus normal cardiac output group (p<0.001). Our preliminary data suggest that participants with low cardiac output have WMH in regions of relatively reduced perfusion, while normal cardiac output participants have WMH in regions with relatively higher regional perfusion. This spatial perfusion distribution difference for areas of WMH may occur in the context of reduced systemic perfusion, which subsequently impacts cerebral perfusion and contributes to subclinical or clinical microvascular damage. Copyright © 2009 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ebrahimi, R.; Zohren, S.
2018-03-01
In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite-N corrections of the above distributions, which is important for practical applications, given the slow convergence. Another interesting aspect of this work is the fact that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large-N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
Exploring conservative islands using correlated and uncorrelated noise
NASA Astrophysics Data System (ADS)
da Silva, Rafael M.; Manchein, Cesar; Beims, Marcus W.
2018-02-01
In this work, noise is used to analyze the penetration of regular islands in conservative dynamical systems. For this purpose we use the standard map choosing nonlinearity parameters for which a mixed phase space is present. The random variable which simulates noise assumes three distributions, namely equally distributed, normal or Gaussian, and power law (obtained from the same standard map but for other parameters). To investigate the penetration process and explore distinct dynamical behaviors which may occur, we use recurrence time statistics (RTS), Lyapunov exponents and the occupation rate of the phase space. Our main findings are as follows: (i) the standard deviations of the distributions are the most relevant quantity to induce the penetration; (ii) the penetration of islands induce power-law decays in the RTS as a consequence of enhanced trapping; (iii) for the power-law correlated noise an algebraic decay of the RTS is observed, even though sticky motion is absent; and (iv) although strong noise intensities induce an ergodic-like behavior with exponential decays of RTS, the largest Lyapunov exponent is reminiscent of the regular islands.
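For concreteness, a minimal sketch of the standard (Chirikov) map with additive noise, one of the noise choices named above (here Gaussian); the nonlinearity K and the noise level are illustrative, not the parameter values of the study.

```python
import numpy as np

def noisy_standard_map(k, noise_sigma, n_steps, x0=0.5, p0=0.0, seed=None):
    """Iterate the Chirikov standard map with additive Gaussian noise on the
    momentum (K and noise_sigma are illustrative, not the paper's values):
        p_{n+1} = p_n + K sin(x_n) + xi_n   (mod 2*pi)
        x_{n+1} = x_n + p_{n+1}             (mod 2*pi)"""
    rng = np.random.default_rng(seed)
    x, p = np.empty(n_steps), np.empty(n_steps)
    x[0], p[0] = x0, p0
    for n in range(n_steps - 1):
        p[n + 1] = (p[n] + k * np.sin(x[n])
                    + noise_sigma * rng.standard_normal()) % (2 * np.pi)
        x[n + 1] = (x[n] + p[n + 1]) % (2 * np.pi)
    return x, p

# Mixed phase space with weak noise; orbits can now leak into regular islands
x, p = noisy_standard_map(k=1.5, noise_sigma=1e-3, n_steps=10000, seed=4)
```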
ERIC Educational Resources Information Center
Nevitt, Johnathan; Hancock, Gregory R.
Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
Role of particle radiotherapy in the management of head and neck cancer.
Laramore, George E
2009-05-01
Modern imaging techniques and powerful computers allow a radiation oncologist to design treatments delivering higher doses of radiation than previously possible. Dose distributions imposed by the physics of 'standard' photon and electron beams limit further dose escalation. Hadron radiotherapy offers advantages in either dose distribution and/or improved radiobiology that may significantly improve the treatment of certain head and neck malignancies. Clinical studies support the effectiveness of fast-neutron radiotherapy in the treatment of major and minor salivary gland tumors. Data show highly favorable outcomes with proton radiotherapy for skull-base malignancies and tumors near highly critical normal tissues compared with that expected with standard radiotherapy. Heavy-ion radiotherapy clinical studies are mainly being conducted with fully stripped carbon ions, and limited data seem to indicate a possible improvement over proton radiotherapy for the same subset of radioresistant tumors where neutrons show a benefit over photons. Fast-neutron radiotherapy has different radiobiological properties compared with standard radiotherapy but similar depth dose distributions. Its role in the treatment of head and neck cancer is currently limited to salivary gland malignancies and certain radioresistant tumors such as sarcomas. Protons have the same radiobiological properties as standard radiotherapy beams but more optimal depth dose distributions, making it particularly advantageous when treating tumors adjacent to highly critical structures. Heavy ions combine the radiobiological properties of fast neutrons with the physical dose distributions of protons, and preliminary data indicate their utility for radioresistant tumors adjacent to highly critical structures.
Boltzmann-conserving classical dynamics in quantum time-correlation functions: "Matsubara dynamics".
Hele, Timothy J H; Willatt, Michael J; Muolo, Andrea; Althorpe, Stuart C
2015-04-07
We show that a single change in the derivation of the linearized semiclassical-initial value representation (LSC-IVR or "classical Wigner approximation") results in a classical dynamics which conserves the quantum Boltzmann distribution. We rederive the (standard) LSC-IVR approach by writing the (exact) quantum time-correlation function in terms of the normal modes of a free ring-polymer (i.e., a discrete imaginary-time Feynman path), taking the limit that the number of polymer beads N → ∞, such that the lowest normal-mode frequencies take their "Matsubara" values. The change we propose is to truncate the quantum Liouvillian, not explicitly in powers of ħ² at ħ⁰ (which gives back the standard LSC-IVR approximation), but in the normal-mode derivatives corresponding to the lowest Matsubara frequencies. The resulting "Matsubara" dynamics is inherently classical (since all terms O(ħ²) disappear from the Matsubara Liouvillian in the limit N → ∞) and conserves the quantum Boltzmann distribution because the Matsubara Hamiltonian is symmetric with respect to imaginary-time translation. Numerical tests show that the Matsubara approximation to the quantum time-correlation function converges with respect to the number of modes and gives better agreement than LSC-IVR with the exact quantum result. Matsubara dynamics is too computationally expensive to be applied to complex systems, but its further approximation may lead to practical methods.
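For reference, the limit in which the lowest free ring-polymer normal-mode frequencies take their Matsubara values can be written as follows (standard ring-polymer conventions assumed here, not quoted from the paper):

```latex
% Free ring-polymer normal-mode frequencies (with \beta_N = \beta/N) and their
% N -> infinity limit, which gives the Matsubara frequencies referred to above.
\omega_k = \frac{2}{\beta_N \hbar}\,\sin\!\left(\frac{k\pi}{N}\right),
\qquad
\lim_{N\to\infty}\omega_k = \frac{2\pi k}{\beta\hbar}
\quad (k \text{ fixed},\ |k| \ll N).
```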
The Effect of General Statistical Fiber Misalignment on Predicted Damage Initiation in Composites
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2014-01-01
A micromechanical method is employed for the prediction of unidirectional composites in which the fiber orientation can possess various statistical misalignment distributions. The method relies on the probability-weighted averaging of the appropriate concentration tensor, which is established by the micromechanical procedure. This approach provides access to the local field quantities throughout the constituents, from which initiation of damage in the composite can be predicted. In contrast, a typical macromechanical procedure can determine the effective composite elastic properties in the presence of statistical fiber misalignment, but cannot provide the local fields. Fully random fiber distribution is presented as a special case using the proposed micromechanical method. Results are given that illustrate the effects of various amounts of fiber misalignment in terms of the standard deviations of in-plane and out-of-plane misalignment angles, where normal distributions have been employed. Damage initiation envelopes, local fields, effective moduli, and strengths are predicted for polymer and ceramic matrix composites with given normal distributions of misalignment angles, as well as fully random fiber orientation.
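A minimal sketch of the probability-weighted averaging step, assuming a zero-mean normal misalignment distribution and Gauss-Hermite quadrature; the scalar "property" function below is a toy stand-in for the concentration-tensor-derived quantities, not the micromechanical model itself.

```python
import numpy as np

def misalignment_average(property_of_angle, sigma_deg, n_nodes=21):
    """Probability-weighted average of an orientation-dependent property over a
    zero-mean normal misalignment distribution (std dev sigma_deg, in degrees),
    using probabilists' Gauss-Hermite quadrature. `property_of_angle` stands in
    for whatever quantity the concentration tensors deliver."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    angles = np.deg2rad(sigma_deg) * nodes
    values = np.array([property_of_angle(a) for a in angles])
    return (weights * values).sum() / np.sqrt(2.0 * np.pi)

# Toy axial-stiffness-like property that degrades with off-axis angle
toy_property = lambda theta: 150.0 * np.cos(theta) ** 4 + 10.0 * np.sin(theta) ** 4
print(misalignment_average(toy_property, sigma_deg=5.0))
```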
Guattery, Jason M; Dardas, Agnes Z; Kelly, Michael; Chamberlain, Aaron; McAndrew, Christopher; Calfee, Ryan P
2018-04-01
The Patient Reported Outcomes Measurement Information System (PROMIS) was developed to provide valid, reliable, and standardized measures to gather patient-reported outcomes for many health domains, including depression, independent of patient condition. Most studies confirming the performance of these measures were conducted with a consented, volunteer study population for testing. Such a consented study population may differ from a routine clinical population because consented participants are educated specifically as to the purpose of the questions and will not have their answers recorded in their permanent health record. (1) When given as part of routine practice to an orthopaedic population, do PROMIS Physical Function and Depression item banks produce score distributions different from those produced by the populations used to calibrate and validate the item banks? (2) Does the presence of a nonnormal distribution in the PROMIS Depression scores in a clinical population reflect a deliberately hasty answering of questions by patients? (3) Are patients who report minimal depressive symptoms by scoring the minimum score on the PROMIS Depression Computer Adaptive Testing (CAT) distinct from other patients according to demographic data or their scores on other PROMIS assessments? Univariate descriptive statistics and graphic histograms were used to describe the frequency distribution of scores for the Physical Function and Depression item banks for all orthopaedic patients 18 years or older who had an outpatient visit between June 2015 and December 2016. The study population was then broken into two groups based on whether patients indicated a lack of depressive symptoms and scored the minimum score (34.2) on the Depression CAT assessment (Floor Group) or not (Standard Group). The distribution of Physical Function CAT scores was compared between the two groups. Finally, a time-per-question value was calculated for both the Physical Function and Depression CATs and was compared between assessments within each group as well as between the two groups. Bivariate statistics compared the demographic data between the two groups. Physical Function CAT scores in musculoskeletal patients were normally distributed, like those of the calibration population; however, the score distribution of the Depression CAT in musculoskeletal patients was nonnormal, with a spike at the floor score. After excluding the floor spike, the distribution of the Depression CAT scores was not different from that of the control population. Patients who scored the floor score on the Depression CAT took slightly less time per question on the Physical Function CAT when compared with other musculoskeletal patients (floor patients: 11 ± 9 seconds; normally distributed patients: 12 ± 10 seconds; mean difference: 1 second [0.8-1.1]; p < 0.001 but not clinically relevant). They spent a substantially shorter amount of time per question on the Depression CAT (Floor Group: 4 ± 3 seconds; Standard Group: 7 ± 7 seconds; mean difference: 3 [2.9-3.2]; p < 0.001). Patients who scored the minimum score on the PROMIS Depression CAT were younger than other patients (Floor Group: 50 ± 18 SD; Standard Group: 55 ± 16 SD; mean difference: 4.5 [4.2-4.7]; p < 0.001), with a larger percentage of men (Floor Group: 48.8%; Standard Group 40.0%; odds ratio 0.6 [0.6-0.7]; p < 0.001) and minor differences in racial breakdown (Floor Group: white 85.2%, black 11.9%, other 0.03%; Standard Group: white 83.9%, black 13.7%, other 0.02%).
In an orthopaedic surgery population given the PROMIS CAT as part of routine practice, the Physical Function item bank performed normally, but a group of patients hastily completed the Depression questions, producing a strong floor effect and calling into question the validity of those floor scores that indicate minimal depression. Level II, diagnostic study.
NASA Astrophysics Data System (ADS)
Guillon, Hervé; Mugnier, Jean-Louis; Buoncristiani, Jean-François
2016-04-01
Bedload transport is a stochastic process during which each particle hops for a random length then rests for a random duration. In recent years, this probabilistic approach was investigated by theoretical models, numerical simulations and laboratory experiments. These experiments are generally carried out on short time scales with sand, but underline the diffusive behaviour of the bedload. Conversely, marked pebbles in natural streams have mainly been used to infer transport processes and transport times of the bedload. In this study, the stochastic characteristics of bedload transport are inferred from the radio-frequency identification (RFID) of pebbles. In particular, we provide insights for answering the following question: is the bedload transport sub-diffusive, normally diffusive or super-diffusive at the long time scale (i.e. global range)? Experiments designed to investigate the phenomenology of bedload transport have been carried out in the proglacial area of Bossons glacier. This 350 m long alluvial plain exhibits daily floods from the glacial system and is still redistributing material from catastrophic events pre-dating our investigations. From 2011 to 2014, the positions of the ∼1000 RFID tracers have been measured by a mobile antenna and a differential GPS during 44 surveys, providing ∼2500 tracer positions. Additionally, in 2014, 650 new tracers were seeded upstream from a static RFID antenna located at the outlet of the study area. For the 1 to 32 cm fraction surveyed, both mobile and static antenna results show no evidence for a significant export outside of the surveyed zone. Initial data have been maximized by using each possible campaign pair, leading to ∼700 campaign pairs and more than 18,000 displacement vectors. To our knowledge, this is one of the most extensive datasets of tracer positions measured in a natural stream using the RFID methodology. Using 152 campaign pairs with at least 20 retrieved tracers, standard probability distributions were tested against the observed travel distances. Regardless of the time scale, heavy- and light-tailed distributions provide a convincing statistical description of measured data. No single distribution is significantly better than the others. Conversely, the distribution of tracer positions in the system and its time evolution is best described by the normal distribution. Its standard deviation scales with time as σ ∝ t^(0.45±0.12), which suggests a nearly normal diffusive behaviour. The measured virtual velocities and a simple probabilistic model using the time evolution of the mean (i.e. drift) and standard deviation (i.e. diffusion) show that the mean bedload transfer time is greater than 5 years. RFID tracers appear as a promising tool to investigate stochastic characteristics of bedload transport.
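A hedged sketch of how the diffusion exponent in σ ∝ t^b can be estimated from tracer data by a log-log fit; the synthetic numbers below are illustrative, not the Bossons measurements.

```python
import numpy as np

def diffusion_exponent(times, sigmas):
    """Least-squares estimate of b in sigma ~ t**b from a log-log fit;
    b ~ 0.5 indicates normal diffusion, b < 0.5 sub- and b > 0.5 super-diffusion."""
    slope, intercept = np.polyfit(np.log(times), np.log(sigmas), 1)
    return slope

# Synthetic example: nearly normal diffusion with multiplicative scatter
rng = np.random.default_rng(5)
t = np.linspace(30, 1200, 40)                      # days after seeding (illustrative)
sigma = 2.0 * t ** 0.45 * np.exp(0.05 * rng.standard_normal(t.size))
print("estimated exponent:", diffusion_exponent(t, sigma))
```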
Chang, Jenghwa
2017-06-01
To develop a statistical model that incorporates the treatment uncertainty from the rotational error of the single isocenter for multiple targets technique, and calculates the extra PTV (planning target volume) margin required to compensate for this error. The random vector for modeling the setup (S) error in the three-dimensional (3D) patient coordinate system was assumed to follow a 3D normal distribution with a zero mean and standard deviations of σ_x, σ_y, σ_z. It was further assumed that the rotation of the clinical target volume (CTV) about the isocenter happens randomly and follows a three-dimensional (3D) independent normal distribution with a zero mean and a uniform standard deviation of σ_δ. This rotation leads to a rotational random error (R), which also has a 3D independent normal distribution with a zero mean and a uniform standard deviation σ_R equal to the product of σ_δ·(π/180) and d_I⇔T, the distance between the isocenter and the CTV. Both (S and R) random vectors were summed, normalized, and transformed to spherical coordinates to derive the Chi distribution with three degrees of freedom for the radial coordinate of S+R. The PTV margin was determined using the critical value of this distribution for a 0.05 significance level so that 95% of the time the treatment target would be covered by the prescription dose. The additional PTV margin required to compensate for the rotational error was calculated as a function of σ_R and d_I⇔T. The effect of the rotational error is more pronounced for treatments that require high accuracy/precision like stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2-mm PTV margin (or σ_x = σ_y = σ_z = 0.715 mm), a σ_R = 0.328 mm will decrease the CTV coverage probability from 95.0% to 90.9%, or an additional 0.2-mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σ_R > 0.328 mm will lead to an extra PTV margin that cannot be ignored, and the maximal σ_δ that can be ignored is 0.45° (or 0.0079 rad) for d_I⇔T = 50 mm or 0.23° (or 0.004 rad) for d_I⇔T = 100 mm. The rotational error cannot be ignored for high-accuracy/-precision treatments like SRS/SBRT, particularly when the distance between the isocenter and target is large. © 2017 American Association of Physicists in Medicine.
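The margin arithmetic described above can be sketched as follows, combining the per-axis setup and rotation-induced standard deviations and using the 95th percentile of a chi distribution with three degrees of freedom; it reproduces the 2 mm versus roughly 2.2 mm example quoted in the abstract.

```python
import numpy as np
from scipy import stats

def ptv_margin(sigma_setup_mm, sigma_delta_deg, dist_iso_to_ctv_mm, coverage=0.95):
    """Isotropic PTV margin giving the requested CTV coverage probability,
    combining setup and rotation-induced errors as independent normals per axis;
    the radial error then follows a chi distribution with 3 degrees of freedom."""
    sigma_rot = np.deg2rad(sigma_delta_deg) * dist_iso_to_ctv_mm   # sigma_R
    sigma_total = np.hypot(sigma_setup_mm, sigma_rot)              # per-axis SD
    return stats.chi.ppf(coverage, df=3) * sigma_total

# Reproduces the example quoted above: ~2.0 mm without rotation, ~2.2 mm with it
print(ptv_margin(0.715, 0.0, 50.0))
print(ptv_margin(0.715, np.rad2deg(0.328 / 50.0), 50.0))   # sigma_R = 0.328 mm
```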
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically-principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods are based on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) Diagnostic CT, where post-log data are well modeled by a normal distribution. (2) Low-dose CT, where the normal distribution remains a reasonable approximation and statistically-principled (post-log) methods that assume a normal distribution have an advantage. (3) An ULD regime that is photon-starved and where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
Distribution of Different Sized Ocular Surface Vessels in Diabetics and Normal Individuals.
Banaee, Touka; Pourreza, Hamidreza; Doosti, Hassan; Abrishami, Mojtaba; Ehsaei, Asieh; Basiry, Mohsen; Pourreza, Reza
2017-01-01
To compare the distribution of different sized vessels using digital photographs of the ocular surface of diabetic and normal individuals. In this cross-sectional study, red-free conjunctival photographs of diabetic and normal individuals, aged 30-60 years, were taken under defined conditions and analyzed using a Radon transform-based algorithm for vascular segmentation. The image areas occupied by vessels (AOV) of different diameters were calculated. The main outcome measure was the distribution curve of mean AOV of different sized vessels. Secondary outcome measures included total AOV and standard deviation (SD) of AOV of different sized vessels. Two hundred and sixty-eight diabetic patients and 297 normal (control) individuals were included, differing in age (45.50 ± 5.19 vs. 40.38 ± 6.19 years, P < 0.001), systolic (126.37 ± 20.25 vs. 119.21 ± 15.81 mmHg, P < 0.001) and diastolic (78.14 ± 14.21 vs. 67.54 ± 11.46 mmHg, P < 0.001) blood pressures. The distribution curves of mean AOV differed between patients and controls (smaller AOV for larger vessels in patients; P < 0.001) as well as between patients without retinopathy and those with non-proliferative diabetic retinopathy (NPDR); with larger AOV for smaller vessels in NPDR ( P < 0.001). Controlling for the effect of confounders, patients had a smaller total AOV, larger total SD of AOV, and a more skewed distribution curve of vessels compared to controls. Presence of diabetes mellitus is associated with contraction of larger vessels in the conjunctiva. Smaller vessels dilate with diabetic retinopathy. These findings may be useful in the photographic screening of diabetes mellitus and retinopathy.
Range and Energy Straggling in Ion Beam Transport
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tai, Hsiang
2000-01-01
A first-order approximation to the range and energy straggling of ion beams is given as a normal distribution for which the standard deviation is estimated from the fluctuations in energy loss events. The standard deviation is calculated by assuming scattering from free electrons with a long range cutoff parameter that depends on the mean excitation energy of the medium. The present formalism is derived by extrapolating Payne's formalism to low energy by systematic energy scaling and to greater depths of penetration by a second-order perturbation. Limited comparisons are made with experimental data.
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimations of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimation of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimations, and hence, unlike the estimation of the standard deviation, are protected from the presence of outliers in the sample. Results of comparison of estimations of the scale parameter are given for a Gaussian model with contamination. An adaptive variant of the modified estimation of the average Gini differences is considered.
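A small illustration of the robustness point (the MAD part only; the modified Gini-difference estimator is not reproduced here): on a contaminated Gaussian sample the normal-consistent MAD stays close to the true scale while the sample standard deviation is inflated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
clean = rng.normal(0.0, 1.0, size=1000)
contaminated = np.concatenate([clean, rng.normal(0.0, 10.0, size=50)])  # ~5% outliers

# Classical estimate vs. a B-robust alternative (MAD rescaled by ~1.4826 so that
# it is consistent for the standard deviation under the normal model)
print("sample SD:       ", contaminated.std(ddof=1))
print("MAD-based scale: ", stats.median_abs_deviation(contaminated, scale="normal"))
```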
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
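A minimal simulation sketch of an elephant-type walk seeded with N fixed initial steps, assuming the usual memory rule (repeat a uniformly chosen earlier step with probability p, otherwise reverse it); details of the paper's variant may differ.

```python
import numpy as np

def elephant_walk(p, n_fixed, n_steps, seed=None):
    """Elephant-type random walk: the first n_fixed steps are all +1 (the fixed
    initial state); each subsequent step copies a uniformly chosen earlier step
    with probability p and reverses it otherwise. Returns the trajectory."""
    rng = np.random.default_rng(seed)
    steps = np.empty(n_steps, dtype=int)
    steps[:n_fixed] = 1
    for t in range(n_fixed, n_steps):
        recalled = steps[rng.integers(t)]          # uniformly chosen past step
        steps[t] = recalled if rng.random() < p else -recalled
    return np.cumsum(steps)

# p > 3/4 is the classic super-diffusive (anomalous) regime of the elephant walk
trajectory = elephant_walk(p=0.85, n_fixed=10, n_steps=5000, seed=7)
```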
The integration of FPGA TDC inside White Rabbit node
NASA Astrophysics Data System (ADS)
Li, H.; Xue, T.; Gong, G.; Li, J.
2017-04-01
White Rabbit technology is capable of delivering sub-nanosecond accuracy and picosecond precision of synchronization, together with normal data packets, over a fiber network. The carry chain structure in FPGAs is a popular way to build a TDC, and RMS resolutions of tens of picoseconds have been achieved. The integration of WR technology with an FPGA TDC can enhance and simplify the TDC in many aspects, including providing a low-jitter clock for the TDC, a synchronized absolute UTC/TAI timestamp for the coarse counter, a convenient way to calibrate the carry chain DNL, and an easy-to-use Ethernet link for data and control information transmission. This paper presents an FPGA TDC implemented inside a normal White Rabbit node with sub-nanosecond measurement precision. The measured standard deviation reaches 50 ps between two distributed TDCs. Possible applications of this distributed TDC are also discussed.
Application of survival analysis methodology to the quantitative analysis of LC-MS proteomics data.
Tekwe, Carmen D; Carroll, Raymond J; Dabney, Alan R
2012-08-01
Protein abundance in quantitative proteomics is often based on observed spectral features derived from liquid chromatography mass spectrometry (LC-MS) or LC-MS/MS experiments. Peak intensities are largely non-normal in distribution. Furthermore, LC-MS-based proteomics data frequently have large proportions of missing peak intensities due to censoring mechanisms on low-abundance spectral features. Recognizing that the observed peak intensities detected with the LC-MS method are all positive, skewed and often left-censored, we propose using survival methodology to carry out differential expression analysis of proteins. Various standard statistical techniques, including non-parametric tests such as the Kolmogorov-Smirnov and Wilcoxon-Mann-Whitney rank sum tests, and the parametric survival model and accelerated failure time (AFT) model with log-normal, log-logistic and Weibull distributions, were used to detect any differentially expressed proteins. The statistical operating characteristics of each method are explored using both real and simulated datasets. Survival methods generally have greater statistical power than standard differential expression methods when the proportion of missing protein level data is 5% or more. In particular, the AFT models we consider consistently achieve greater statistical power than standard testing procedures, with the discrepancy widening with increasing proportions of missing values. The testing procedures discussed in this article can all be performed using readily available software such as R. The R codes are provided as supplemental materials. ctekwe@stat.tamu.edu.
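Not the authors' R code, but a minimal sketch of the underlying survival-style idea: treat peak intensities as left-censored log-normal observations, with censored values contributing the distribution function at the detection limit to the likelihood.

```python
import numpy as np
from scipy import stats, optimize

def fit_left_censored_lognormal(intensities, detection_limit):
    """MLE of (mu, sigma) for log-normal peak intensities in which values below
    the detection limit are left-censored; NaN entries are treated as censored."""
    observed = intensities[~np.isnan(intensities)]
    n_censored = np.isnan(intensities).sum()

    def neg_log_lik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        # Observed points: log-normal density; censored points: P(X < limit)
        ll = stats.norm.logpdf(np.log(observed), mu, sigma).sum() - np.log(observed).sum()
        ll += n_censored * stats.norm.logcdf(np.log(detection_limit), mu, sigma)
        return -ll

    res = optimize.minimize(neg_log_lik, x0=[np.log(observed).mean(), 0.0])
    return res.x[0], np.exp(res.x[1])

# Synthetic data: ~20% of intensities fall below the limit and are recorded as NaN
rng = np.random.default_rng(8)
x = rng.lognormal(mean=2.0, sigma=1.0, size=300)
limit = np.quantile(x, 0.2)
x[x < limit] = np.nan
print(fit_left_censored_lognormal(x, limit))
```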
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
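A small numeric illustration of converting a group coefficient from a cumulative-link model into the ordinal superiority measure, under the link-specific formulas quoted above (the coefficient value is illustrative).

```python
import numpy as np
from scipy import stats

beta = 0.8   # illustrative group coefficient from a cumulative-link model

# Ordinal superiority: probability a random response from group 1 exceeds
# an independent response from group 0, adjusted for covariates.
gamma_probit = stats.norm.cdf(beta / np.sqrt(2.0))
gamma_loglog = np.exp(beta) / (1.0 + np.exp(beta))                          # log-log link
gamma_logit = np.exp(beta / np.sqrt(2.0)) / (1.0 + np.exp(beta / np.sqrt(2.0)))  # approx., logit
print(gamma_probit, gamma_loglog, gamma_logit)
```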
About normal distribution on SO(3) group in texture analysis
NASA Astrophysics Data System (ADS)
Savyolova, T. I.; Filatov, S. V.
2017-12-01
This article studies and compares different normal distributions (NDs) on the SO(3) group, which are used in texture analysis. Those NDs are: Fisher normal distribution (FND), Bunge normal distribution (BND), central normal distribution (CND) and wrapped normal distribution (WND). All of the previously mentioned NDs are central functions on the SO(3) group. CND is a subcase of normal CLT-motivated distributions on SO(3) (CLT here is Parthasarathy’s central limit theorem). WND is motivated by the CLT in R³ and mapped to the SO(3) group. A Monte Carlo method for modeling normally distributed values was studied for both CND and WND. All of the NDs mentioned above are used for modeling different components of the crystallite orientation distribution function in texture analysis.
Drought forecasting in Luanhe River basin involving climatic indices
NASA Astrophysics Data System (ADS)
Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.
2017-11-01
Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model, which forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of the multivariate normal distribution so as to involve two large-scale climatic indices at the same time, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI are based on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select appropriate SPI time scales and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation, in general, was more likely to be considered normally distributed, and that forecasting models should be applied to each gauge, respectively, rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. Then, the controlling climatic indices of every gauge are selected by the Pearson correlation test, and the multivariate normality of the SPI, the corresponding climatic indices for the current month, and the SPI 1, 2, and 3 months later is demonstrated using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models which forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the two traditional models and that involving large-scale climatic indices can improve the forecasting accuracy.
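A hedged sketch of the conditioning step: with (future SPI, current SPI, climatic indices) treated as jointly normal, the transition probability to an SPI class is the conditional normal probability of that class interval. The covariance values below are illustrative only.

```python
import numpy as np
from scipy import stats

def transition_probability(joint_mean, joint_cov, observed, class_bounds):
    """P(future SPI falls in [lo, hi] | current SPI and climatic indices),
    assuming (future SPI, conditioning variables) are jointly normal.
    The first coordinate is the future SPI; `observed` are the rest."""
    mu1, mu2 = joint_mean[0], joint_mean[1:]
    s11, s12, s22 = joint_cov[0, 0], joint_cov[0, 1:], joint_cov[1:, 1:]
    cond_mean = mu1 + s12 @ np.linalg.solve(s22, observed - mu2)
    cond_sd = np.sqrt(s11 - s12 @ np.linalg.solve(s22, s12))
    lo, hi = class_bounds
    return stats.norm.cdf(hi, cond_mean, cond_sd) - stats.norm.cdf(lo, cond_mean, cond_sd)

# Illustrative numbers only: (SPI in 3 months, current SPI, one climatic index)
mean = np.zeros(3)
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.2],
                [0.3, 0.2, 1.0]])
print(transition_probability(mean, cov, observed=np.array([-1.2, 0.4]),
                             class_bounds=(-1.5, -1.0)))  # "moderate drought" class
```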
Income distribution dependence of poverty measure: A theoretical analysis
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Mallick, Sushanta K.
2007-04-01
Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the ‘global’ mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function, there is a critical value of the variance below which poverty decreases with increasing variance while beyond this value, poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219] whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.
Maintaining Consistency in Distributed Systems
1991-11-01
type of concurrency is readily controlled using synchronization tools such as monitors or semaphores, which are a standard part of most threads... suggested that these issues are often best solved using traditional synchronization constructs, such as monitors and semaphores, and that... data structures would normally arise within individual programs, and be controlled using mutual exclusion constructs, such as semaphores and monitors
ERIC Educational Resources Information Center
Sharma, Kshitij; Chavez-Demoulin, Valérie; Dillenbourg, Pierre
2017-01-01
The statistics used in education research are based on central trends such as the mean or standard deviation, discarding outliers. This paper adopts another viewpoint that has emerged in statistics, called extreme value theory (EVT). EVT claims that the bulk of normal distribution is comprised mainly of uninteresting variations while the most…
ERIC Educational Resources Information Center
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
Performance of HESCO Bastion Units Under Combined Normal and Cyclic Lateral Loading
2017-02-01
technology was not designed for residential applications, engineering standards would be needed to guide the designers of soldier contingency housing.
Evaluation of measurement uncertainty of glucose in clinical chemistry.
Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y
2007-04-01
The definition of the uncertainty of measurement used in the International Vocabulary of Basic and General Terms in Metrology (VIM) is a parameter, associated with the result of a measurement, which characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every parameter, a measurement uncertainty value should be given by all institutions that have been accredited; this value shows the reliability of the measurement. The GUM, published by NIST, contains directions for evaluating uncertainty. Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in the year 2000. Both of them offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty in measurement. Type A is the evaluation of uncertainty through statistical analysis and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) the rectangular distribution, used when a certificate gives limits without specifying a level of confidence (u(x) = a/√3); (2) the triangular distribution, used when values near the central point are more likely than those near the bounds (u(x) = a/√6); (3) the normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
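A small worked example of turning certificate limits and replicate data into standard uncertainties under the distribution assumptions listed above (values are illustrative; a is taken here as the half-width of the stated limits).

```python
import math

a = 0.05          # half-width of certificate limits, mmol/L (illustrative)
s, n = 0.08, 10   # repeatability SD and number of replicates (illustrative)

u_rect = a / math.sqrt(3)    # rectangular: limits only, no confidence level given
u_tri = a / math.sqrt(6)     # triangular: values near the midpoint more likely
u_rep = s / math.sqrt(n)     # type A: standard deviation of the mean

# Combined standard uncertainty (independent components add in quadrature)
u_c = math.sqrt(u_rect ** 2 + u_rep ** 2)
print(u_rect, u_tri, u_rep, u_c)
```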
Kollins, Scott H.; McClernon, F. Joseph; Epstein, Jeff N.
2009-01-01
Smoking abstinence differentially affects cognitive functioning in smokers with ADHD, compared to non-ADHD smokers. Alternative approaches for analyzing reaction time data from these tasks may further elucidate important group differences. Adults smoking ≥15 cigarettes with (n = 12) or without (n = 14) a diagnosis of ADHD completed a continuous performance task (CPT) during two sessions under two separate laboratory conditions—a ‘Satiated’ condition wherein participants smoked up to and during the session; and an ‘Abstinent’ condition, in which participants were abstinent overnight and during the session. Reaction time (RT) distributions from the CPT were modeled to fit an ex-Gaussian distribution. The indicator of central tendency for RT from the normal component of the RT distribution (mu) showed a main effect of Group (ADHD
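A hedged sketch of ex-Gaussian fitting using scipy's exponentially modified normal (exponnorm), whose shape parameter is K = tau/sigma; this is an illustration with synthetic reaction times, not the analysis pipeline of the study.

```python
import numpy as np
from scipy import stats

# Synthetic reaction times (ms): normal component plus exponential tail
rng = np.random.default_rng(9)
rt = rng.normal(400.0, 40.0, size=500) + rng.exponential(120.0, size=500)

# scipy's exponnorm is the ex-Gaussian; recover (mu, sigma, tau) from (K, loc, scale)
K, loc, scale = stats.exponnorm.fit(rt)
mu, sigma, tau = loc, scale, K * scale
print(f"mu ~ {mu:.1f} ms, sigma ~ {sigma:.1f} ms, tau ~ {tau:.1f} ms")
```

Here mu indexes the central tendency of the normal component (as in the abstract) while tau captures the slow tail of the RT distribution.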
A new method of detecting changes in corneal health in response to toxic insults.
Khan, Mohammad Faisal Jamal; Nag, Tapas C; Igathinathane, C; Osuagwu, Uchechukwu L; Rubini, Michele
2015-11-01
The size and arrangement of stromal collagen fibrils (CFs) influence the optical properties of the cornea and hence its function. How the spatial arrangement of the collagen relates to fibril diameter remains an open question. In the present study, we introduce a new parameter, the edge-fibrillar distance (EFD), which measures how two collagen fibrils are spaced with respect to their closest edges, and we characterize their spatial distribution through the normalized standard deviation of EFD (NSDEFD); these measures were assessed after application of two commercially available multipurpose solutions (MPS): ReNu and Hippia. The corneal buttons were soaked separately in ReNu and Hippia MPS for five hours, fixed overnight in 2.5% glutaraldehyde containing cuprolinic blue and processed for transmission electron microscopy. The electron micrographs were processed using a user-coded ImageJ plugin. Statistical analysis was performed to compare the image-processed equivalent diameter (ED), inter-fibrillar distance (IFD), and EFD of the CFs of treated versus normal corneas. The ReNu-soaked cornea showed a partly degenerated epithelium with loose hemidesmosomes and Bowman's collagen. In contrast, the epithelium of the cornea soaked in Hippia was degenerated or lost but showed closely packed Bowman's collagen. Soaking the corneas in either MPS caused a statistically significant decrease in the anterior collagen fibril ED and significant changes in IFD and EFD relative to the untreated corneas (p<0.05 for all comparisons). The EFD measurement directly conveyed the gap between the peripheries of the collagen bundles and their spatial distribution; in combination with ED, it showed how the corneal collagen bundles are spaced in relation to their diameters. The spatial distribution parameter NSDEFD indicated that the fibrils of the ReNu-treated cornea were the most uniformly distributed spatially, followed by the normal and Hippia-treated corneas. The EFD measurement, with its relatively low standard deviation, and NSDEFD, a characteristic of uniform CF distribution, can be additional parameters for evaluating collagen organization and assessing the effects of various treatments on corneal health and transparency. Copyright © 2015 Elsevier Ltd. All rights reserved.
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Evaluation and validity of a LORETA normative EEG database.
Thatcher, R W; North, D; Biver, C
2005-04-01
To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes, eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution with 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, a right sensory-motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved in LORETA by using a log10 or Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
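The Z-score construction described above (transform, per-pixel and per-frequency mean and standard deviation, then standardization of a new subject) can be sketched in a few lines. The array shapes, the synthetic log-normal data, and the variable names below are illustrative assumptions, not the Key Institute's actual *.lor file format.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_pixels, n_freqs = 106, 2394, 30
# hypothetical LORETA current-density values (strictly positive), standing in for *.lor data
norm_db = rng.lognormal(mean=0.0, sigma=0.5, size=(n_subjects, n_pixels, n_freqs))

log_db = np.log10(norm_db)              # log10 transform to approximate Gaussianity
mu = log_db.mean(axis=0)                # normative mean per pixel and frequency
sd = log_db.std(axis=0, ddof=1)         # normative SD per pixel and frequency

def z_scores(subject_lor):
    """Z-score a single subject's (pixels x freqs) LORETA values against the norms."""
    return (np.log10(subject_lor) - mu) / sd

z = z_scores(norm_db[0])
# Under Gaussianity, roughly 4.6% of |Z| should exceed 2 and about 0.3% should exceed 3
print((np.abs(z) > 2).mean(), (np.abs(z) > 3).mean())
```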
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
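The Monte Carlo idea in this abstract is straightforward to sketch: draw the three Cartesian components from zero-mean normals with unequal standard deviations and summarize the magnitude. The sigma values below are invented for illustration and the sample size is arbitrary; this is the underlying simulation only, not the author's original algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmas = np.array([1.0, 0.6, 0.3])                 # m/s, hypothetical per-axis TCM uncertainties
samples = rng.normal(0.0, sigmas, size=(200_000, 3))
dv_mag = np.linalg.norm(samples, axis=1)           # magnitude of the Delta v vector

print(f"mean |dv| = {dv_mag.mean():.3f} m/s")
print(f"std  |dv| = {dv_mag.std(ddof=1):.3f} m/s")
# Points of the cumulative distribution (useful for sizing propellant at a given percentile)
for p in (0.50, 0.90, 0.99):
    print(f"{int(p * 100)}th percentile: {np.quantile(dv_mag, p):.3f} m/s")
```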
Time Series Forecasting of the Number of Malaysia Airlines and AirAsia Passengers
NASA Astrophysics Data System (ADS)
Asrah, N. M.; Nor, M. E.; Rahim, S. N. A.; Leng, W. K.
2018-04-01
The standard practice in forecasting is to fit a model and then carry out further analysis on the residuals. If we know the distributional behaviour of the time series data, it can help us to directly address model identification, parameter estimation, and model checking. In this paper, we compare the distributional behaviour of the number of Malaysia Airlines (MAS) and AirAsia passengers. Previous research showed that the AirAsia passenger numbers are governed by geometric Brownian motion (GBM): the data were normally distributed, stationary and independent. GBM was therefore used to forecast the number of AirAsia passengers. The same methods were applied to the MAS data and the results were then compared. Unfortunately, the MAS data were not governed by GBM, so the standard approach to time series forecasting was applied to the MAS data instead. From this comparison, we can conclude that the number of AirAsia passengers is always in peak season, in contrast to the MAS passengers.
McDonald, Gene D; Storrie-Lombardi, Michael C
2006-02-01
The relative abundance of the protein amino acids has been previously investigated as a potential marker for biogenicity in meteoritic samples. However, these investigations were executed without a quantitative metric to evaluate distribution variations, and they did not account for the possibility of interdisciplinary systematic error arising from inter-laboratory differences in extraction and detection techniques. Principal component analysis (PCA), hierarchical cluster analysis (HCA), and stochastic probabilistic artificial neural networks (ANNs) were used to compare the distributions for nine protein amino acids previously reported for the Murchison carbonaceous chondrite, Mars meteorites (ALH84001, Nakhla, and EETA79001), prebiotic synthesis experiments, and terrestrial biota and sediments. These techniques allowed us (1) to identify a shift in terrestrial amino acid distributions secondary to diagenesis; (2) to detect differences in terrestrial distributions that may be systematic differences between extraction and analysis techniques in biological and geological laboratories; and (3) to determine that distributions in meteoritic samples appear more similar to prebiotic chemistry samples than they do to the terrestrial unaltered or diagenetic samples. Both diagenesis and putative interdisciplinary differences in analysis complicate interpretation of meteoritic amino acid distributions. We propose that the analysis of future samples from such diverse sources as meteoritic influx, sample return missions, and in situ exploration of Mars would be less ambiguous with adoption of standardized assay techniques, systematic inclusion of assay standards, and the use of a quantitative, probabilistic metric. We present here one such metric determined by sequential feature extraction and normalization (PCA), information-driven automated exploration of classification possibilities (HCA), and prediction of classification accuracy (ANNs).
Time series behaviour of the number of Air Asia passengers: A distributional approach
NASA Astrophysics Data System (ADS)
Asrah, Norhaidah Mohd; Djauhari, Maman Abdurachman
2013-09-01
The common practice in time series analysis is to fit a model and then conduct further analysis on the residuals. However, if we know the distributional behaviour of the time series, the analyses involved in model identification, parameter estimation, and model checking become more straightforward. In this paper, we show that the number of Air Asia passengers can be represented as a geometric Brownian motion process. Therefore, instead of using the standard approach to model fitting, we use an appropriate transformation to arrive at a stationary, normally distributed and even independent time series. An example of forecasting the number of Air Asia passengers is given to illustrate the advantages of the method.
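A minimal sketch of the GBM treatment described above, using a synthetic passenger series: estimate the drift and volatility from log-returns (which GBM assumes to be i.i.d. normal) and produce a median forecast with a 95% interval. The series, horizon, and parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic monthly passenger counts standing in for the Air Asia series
t = np.arange(120)
passengers = 1e6 * np.exp(0.01 * t + 0.05 * rng.standard_normal(120).cumsum())

log_ret = np.diff(np.log(passengers))      # GBM implies i.i.d. normal log-returns
mu_hat = log_ret.mean()                    # estimated drift per step
sigma_hat = log_ret.std(ddof=1)            # estimated volatility per step

horizon = 12
steps = np.arange(1, horizon + 1)
last = passengers[-1]
# median forecast and a 95% interval from the lognormal predictive distribution
median = last * np.exp(mu_hat * steps)
lower = last * np.exp(mu_hat * steps - 1.96 * sigma_hat * np.sqrt(steps))
upper = last * np.exp(mu_hat * steps + 1.96 * sigma_hat * np.sqrt(steps))
print(median[-1], (lower[-1], upper[-1]))
```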
Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.
2001-01-01
Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, the probability distribution P(bar)(log E) of the wave field E is observed to follow a power law, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially averaged distribution P(bar)(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the log-normal statistics predicted by SGT at each location.
Plume particle collection and sizing from static firing of solid rocket motors
NASA Technical Reports Server (NTRS)
Sambamurthi, Jay K.
1995-01-01
A unique dart system has been designed and built at the NASA Marshall Space Flight Center to collect aluminum oxide plume particles from the plumes of large scale solid rocket motors, such as the space shuttle RSRM. The capability of this system to collect clean samples from both the vertically fired MNASA (18.3% scaled version of the RSRM) motors and the horizontally fired RSRM motor has been demonstrated. The particle mass averaged diameters, d43, measured from the samples for the different motors, ranged from 8 to 11 μm and were independent of the dart collection surface and the motor burn time. The measured results agreed well with those calculated using the industry standard Hermsen's correlation within the standard deviation of the correlation. For each of the samples analyzed from both MNASA and RSRM motors, the distribution of the cumulative mass fraction of the plume oxide particles as a function of the particle diameter was best described by a monomodal log-normal distribution with a standard deviation of 0.13 - 0.15. This distribution agreed well with the theoretical prediction by Salita using the OD3P code for the RSRM motor at the nozzle exit plane.
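The quoted monomodal log-normal description can be made concrete with a short numerical sketch. The d43 value and the interpretation of the 0.13-0.15 figure as the standard deviation of log10(diameter) are assumptions chosen to match the ranges quoted above; the relation between d43 and the mass-median diameter holds for a log-normal mass distribution and is used here as a modelling assumption.

```python
import numpy as np
from scipy import stats

d43 = 9.0            # mass-averaged diameter in micrometres, within the 8-11 range quoted above
sigma_log10 = 0.14   # assumed: the quoted 0.13-0.15 taken as the SD of log10(diameter)
sigma_ln = sigma_log10 * np.log(10)

# For a log-normal mass distribution, d43 = Dmm * exp(sigma_ln**2 / 2), where Dmm is the
# mass-median diameter (Hatch-Choate-type relation, stated as an assumption).
Dmm = d43 / np.exp(0.5 * sigma_ln**2)

diam = np.linspace(2.0, 30.0, 200)                                    # micrometres
cum_mass_fraction = stats.norm.cdf((np.log(diam) - np.log(Dmm)) / sigma_ln)
print(f"mass-median diameter ~ {Dmm:.2f} um")
print("cumulative mass fraction at 5, 10, 20 um:",
      np.interp([5.0, 10.0, 20.0], diam, cum_mass_fraction).round(3))
```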
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles; Beland, Sebastien
2012-01-01
This paper focuses on two likelihood-based indices of person fit, the index l_z and Snijders's modified index l_z*. The first one is commonly used in practical assessment of person fit, although its asymptotic standard normal distribution is not valid when true abilities are replaced by sample…
Non-specific filtering of beta-distributed data.
Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D
2014-06-19
Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with different characteristics, each performed well for identifying clusters of cancer and non-cancer tissue, and identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
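The contrast between a plain standard-deviation filter and a variance-stabilized filter can be sketched as follows. The arcsine-square-root transform is used here as a generic variance-stabilizing transform for proportions; it stands in for, and may differ from, the specific statistic developed by the authors, and the Beta-distributed data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_probes, n_samples = 5000, 60
beta_vals = rng.beta(a=0.5, b=0.5, size=(n_probes, n_samples))   # synthetic methylation proportions

sd_raw = beta_vals.std(axis=1, ddof=1)                           # biased toward probes with mean near 0.5
sd_vst = np.arcsin(np.sqrt(beta_vals)).std(axis=1, ddof=1)       # SD after variance stabilization

def top_k(scores, k=1000):
    """Indices of the k most variable probes under a given filter statistic."""
    return np.argsort(scores)[-k:]

overlap = len(set(top_k(sd_raw)) & set(top_k(sd_vst)))
print(f"overlap of top-1000 probes between the two filters: {overlap}")
```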
Wang, Honglei; Zhu, Bin; Shen, Lijuan; Kang, Hanqing
2012-01-01
To investigate the impact on urban air pollution by crop residual burning outside Nanjing, aerosol concentration, pollution gas concentration, mass concentration, and water-soluble ion size distribution were observed during one event of November 4-9, 2010. Results show that the size distribution of aerosol concentration is bimodal on pollution days and normal days, with peak values at 60-70 and 200-300 nm, respectively. Aerosol concentration is 10^4 cm^-3 nm^-1 on pollution days. The peak value of the spectrum distribution of aerosol concentration on pollution days is 1.5-3.3 times higher than that on a normal day. Crop residual burning has a great impact on the concentration of fine particles. Diurnal variation of aerosol concentration is trimodal on pollution days and normal days, with peak values at 03:00, 09:00 and 19:00 local standard time. The first peak is impacted by meteorological elements, while the second and third peaks are due to human activities, such as rush hour traffic. Crop residual burning has the greatest impact on SO2 concentration, followed by NO2; O3 is hardly affected. The impact of crop residual burning on fine particles (<2.1 μm) is larger than on coarse particles (>2.1 μm), thus ion concentration in fine particles is higher than that in coarse particles. Crop residual burning leads to similar increases in all ion components, thus it has a small impact on the order of the water-soluble ions. Crop residual burning has a strong impact on the size distributions of K+, Cl-, Na+, and F- and a weak impact on the size distributions of NH4+, Ca2+, NO3- and SO4^2-.
Normal and Extreme Wind Conditions for Power at Coastal Locations in China
Gao, Meng; Ning, Jicai; Wu, Xiaoqing
2015-01-01
In this paper, the normal and extreme wind conditions for power at 12 coastal locations along China’s coastline were investigated. For this purpose, the daily meteorological data measured at the standard 10-m height above ground for periods of 40–62 years are statistically analyzed. The East Asian Monsoon that affects almost China’s entire coastal region is considered as the leading factor determining wind energy resources. For most stations, the mean wind speed is higher in winter and lower in summer. Meanwhile, the wind direction analysis indicates that the prevalent winds in summer are southerly, while those in winter are northerly. The air densities at different coastal locations differ significantly, resulting in the difference in wind power density. The Weibull and lognormal distributions are applied to fit the yearly wind speeds. The lognormal distribution performs better than the Weibull distribution at 8 coastal stations according to two judgement criteria, the Kolmogorov–Smirnov test and absolute error (AE). Regarding the annual maximum extreme wind speed, the generalized extreme value (GEV) distribution performs better than the commonly-used Gumbel distribution. At these southeastern coastal locations, strong winds usually occur in typhoon season. These 4 coastal provinces, that is, Guangdong, Fujian, Hainan, and Zhejiang, which have abundant wind resources, are also prone to typhoon disasters. PMID:26313256
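A compact sketch of the distribution-comparison workflow described above, applied to synthetic wind speeds rather than the station records: fit Weibull and lognormal models to the speed series and compare Kolmogorov-Smirnov distances, then fit GEV and Gumbel models to annual maxima. Sample sizes and the data-generating parameters are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
wind = 6.0 * rng.weibull(2.0, size=2000)           # synthetic daily mean wind speeds (m/s)

weib = stats.weibull_min.fit(wind, floc=0)
logn = stats.lognorm.fit(wind, floc=0)
print("Weibull   KS distance:", stats.kstest(wind, 'weibull_min', args=weib).statistic)
print("Lognormal KS distance:", stats.kstest(wind, 'lognorm', args=logn).statistic)

annual_max = wind.reshape(40, 50).max(axis=1)      # pretend 40 "years" of annual maxima
gev = stats.genextreme.fit(annual_max)
gum = stats.gumbel_r.fit(annual_max)
print("GEV    KS distance:", stats.kstest(annual_max, 'genextreme', args=gev).statistic)
print("Gumbel KS distance:", stats.kstest(annual_max, 'gumbel_r', args=gum).statistic)
```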
Frison, Severine; Checchi, Francesco; Kerac, Marko; Nicholas, Jennifer
2016-01-01
Wasting is a major public health issue throughout the developing world. Out of the 6.9 million estimated deaths among children under five annually, over 800,000 deaths (11.6 %) are attributed to wasting. Wasting is quantified as low Weight-For-Height (WFH) and/or low Mid-Upper Arm Circumference (MUAC) (since 2005). Many statistical procedures are based on the assumption that the data used are normally distributed. Analyses have been conducted on the distribution of WFH but there are no equivalent studies on the distribution of MUAC. This secondary data analysis assesses the normality of the MUAC distributions of 852 nutrition cross-sectional survey datasets of children from 6 to 59 months old and examines different approaches to normalise "non-normal" distributions. The distribution of MUAC showed no departure from a normal distribution in 319 (37.7 %) distributions using the Shapiro-Wilk test. Out of the 533 surveys showing departure from a normal distribution, 183 (34.3 %) were skewed (D'Agostino test) and 196 (36.8 %) had a kurtosis different to the one observed in the normal distribution (Anscombe-Glynn test). Testing for normality can be sensitive to data quality, design effect and sample size. Out of the 533 surveys showing departure from a normal distribution, 294 (55.2 %) showed high digit preference, 164 (30.8 %) had a large design effect, and 204 (38.3 %) a large sample size. Spline and LOESS smoothing techniques were explored and both techniques work well. After Spline smoothing, 56.7 % of the MUAC distributions showing departure from normality were "normalised" and 59.7 % after LOESS. Box-Cox power transformation had similar results on distributions showing departure from normality with 57 % of distributions approximating "normal" after transformation. Applying Box-Cox transformation after Spline or Loess smoothing techniques increased that proportion to 82.4 and 82.7 % respectively. This suggests that statistical approaches relying on the normal distribution assumption can be successfully applied to MUAC. In light of this promising finding, further research is ongoing to evaluate the performance of a normal distribution based approach to estimating the prevalence of wasting using MUAC.
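The normality checks and the Box-Cox step mentioned above can be reproduced in outline on a synthetic MUAC sample; the survey-specific steps (digit-preference checks, design effects, Spline/LOESS smoothing) are omitted, and the gamma-distributed data below are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
muac = rng.gamma(shape=80, scale=1.8, size=900)    # synthetic, mildly skewed MUAC values (mm)

print("Shapiro-Wilk p:", stats.shapiro(muac).pvalue)
print("Skewness test p:", stats.skewtest(muac).pvalue)       # D'Agostino-type skewness test
print("Kurtosis test p:", stats.kurtosistest(muac).pvalue)   # Anscombe-Glynn-type kurtosis test

transformed, lam = stats.boxcox(muac)                         # Box-Cox power transformation
print("Box-Cox lambda:", round(lam, 2),
      "| Shapiro-Wilk p after transform:", stats.shapiro(transformed).pvalue)
```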
Statistics of baryon correlation functions in lattice QCD
NASA Astrophysics Data System (ADS)
Wagman, Michael L.; Savage, Martin J.; Nplqcd Collaboration
2017-12-01
A systematic analysis of the structure of single-baryon correlation functions calculated with lattice QCD is performed, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in these correlation functions is shown, as long suspected, to result from a sign problem. The log-magnitude and complex phase are found to be approximately described by normal and wrapped normal distributions respectively. Properties of circular statistics are used to understand the emergence of a large time noise region where standard energy measurements are unreliable. Power-law tails in the distribution of baryon correlation functions, associated with stable distributions and "Lévy flights," are found to play a central role in their time evolution. A new method of analyzing correlation functions is considered for which the signal-to-noise ratio of energy measurements is constant, rather than exponentially degrading, with increasing source-sink separation time. This new method includes an additional systematic uncertainty that can be removed by performing an extrapolation, and the signal-to-noise problem reemerges in the statistics of this extrapolation. It is demonstrated that this new method allows accurate results for the nucleon mass to be extracted from the large-time noise region inaccessible to standard methods. The observations presented here are expected to apply to quantum Monte Carlo calculations more generally. Similar methods to those introduced here may lead to practical improvements in analysis of noisier systems.
Regnault, Antoine; Hamel, Jean-François; Patrick, Donald L
2015-02-01
Cultural differences and/or poor linguistic validation of patient-reported outcome (PRO) instruments may result in differences in the assessment of the targeted concept across languages. In the context of multinational clinical trials, these measurement differences may add noise and potentially measurement bias to treatment effect estimation. Our objective was to explore the potential effect on treatment effect estimation of the "contamination" of a cultural subgroup by a flawed PRO measurement. We ran a simulation exercise in which the distribution of the score in the overall sample was considered a mixture of two normal distributions: a standard normal distribution was assumed in a "main" subgroup and a normal distribution which differed either in mean (bias) or in variance (noise) in a "contaminated" subgroup (the subgroup with potential flaws in the PRO measurement). The observed power was compared to the expected power (i.e., the power that would have been observed if the subgroup had not been contaminated). Even if differences between the expected and observed power were small, some substantial differences were obtained (up to a 0.375 point drop in power). No situation was systematically protected against loss of power. The impact of poor PRO measurement in a cultural subgroup may induce a notable drop in the study power and consequently reduce the chance of showing an actual treatment effect. These results illustrate the importance of the efforts to optimize conceptual and linguistic equivalence of PRO measures when pooling data in international clinical trials.
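A hedged re-creation of the simulation design sketched above: the pooled score is drawn from a mixture of a standard normal "main" subgroup and a "contaminated" subgroup whose mean (bias) or variance (noise) differs, and power for detecting a treatment effect is estimated with a two-sample t-test. The effect size, sample size, mixture weight, and test choice are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def power(contam_mean=0.0, contam_sd=1.0, weight=0.3, delta=0.3, n=100, reps=2000):
    """Estimated power of a two-arm t-test when a fraction `weight` of each arm is contaminated."""
    hits = 0
    for _ in range(reps):
        def sample(shift):
            main = rng.normal(shift, 1.0, size=int(n * (1 - weight)))
            contam = rng.normal(shift + contam_mean, contam_sd, size=int(n * weight))
            return np.concatenate([main, contam])
        x, y = sample(0.0), sample(delta)          # control vs treatment with true effect delta
        hits += stats.ttest_ind(x, y).pvalue < 0.05
    return hits / reps

print("expected power (no contamination):", power())
print("biased subgroup:                  ", power(contam_mean=0.5))
print("noisy subgroup:                   ", power(contam_sd=2.0))
```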
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
Optimization of pressure gauge locations for water distribution systems using entropy theory.
Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon
2012-12-01
It is essential to select the optimal pressure gauge location for effective management and maintenance of water distribution systems. This study proposes an objective, quantified standard for selecting the optimal pressure gauge location by defining, using entropy theory, the pressure change at other nodes resulting from a demand change at a specific node. Two cases of demand change are considered: one in which demand at all nodes shows peak load, obtained by applying a peak factor, and one in which demand changes follow a normal distribution whose mean is the base demand. The actual pressure change pattern is determined by using the emitter function of EPANET to reflect the pressure that changes in practice at each node. The optimal pressure gauge location is determined by prioritizing the node that exchanges the largest amount of information with the whole system, both given to it (giving entropy) and received from it (receiving entropy), according to the entropy standard. The suggested model is applied to one virtual and one real pipe network, and the optimal combination of pressure gauge locations is calculated by implementing a sensitivity analysis based on the study results. These analysis results support the following two conclusions. Firstly, the installation priority of pressure gauges in water distribution networks can be determined with a more objective standard through entropy theory. Secondly, the model can be used as an efficient decision-making guide for gauge installation in water distribution systems.
NASA Technical Reports Server (NTRS)
Smith, O. E.
1976-01-01
Techniques are presented for deriving several statistical wind models from the properties of the multivariate normal probability distribution function. Assuming that the winds can be considered bivariate normally distributed, then (1) the wind components and conditional wind components are univariate normally distributed, (2) the wind speed is Rayleigh distributed, (3) the conditional distribution of wind speed given a wind direction is Rayleigh distributed, and (4) the frequency of wind direction can be derived. All of these distributions are derived from the five sample parameters of the bivariate normal wind distribution. By further assuming that the winds at two altitudes are quadravariate normally distributed, the vector wind shear is bivariate normally distributed and the modulus of the vector wind shear is Rayleigh distributed. The conditional probability of wind component shears given a wind component is normally distributed. Examples of these and other properties of the multivariate normal probability distribution function, as applied to wind data samples from Cape Kennedy, Florida, and Vandenberg AFB, California, are given. A technique to develop a synthetic vector wind profile model of interest for aerospace vehicle applications is presented.
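The second property listed above (bivariate normal components giving a Rayleigh-distributed wind speed) is easy to verify numerically in the textbook case of zero-mean, equal-variance, uncorrelated components; the sigma value below is an arbitrary assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sigma = 5.0                                      # m/s, assumed component standard deviation
u, v = rng.normal(0, sigma, (2, 100_000))        # zonal and meridional wind components
speed = np.hypot(u, v)                           # wind speed magnitude

ks = stats.kstest(speed, 'rayleigh', args=(0, sigma))
print("KS distance vs Rayleigh(scale=sigma):", round(ks.statistic, 4))
print("sample mean speed:", round(speed.mean(), 3),
      "| Rayleigh theory:", round(sigma * np.sqrt(np.pi / 2), 3))
```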
Analysis of quantitative data obtained from toxicity studies showing non-normal distribution.
Kobayashi, Katsumi
2005-05-01
The data obtained from toxicity studies are examined for homogeneity of variance but usually are not examined for normality of distribution. In this study I examined the measured items of a carcinogenicity/chronic toxicity study in rats for both homogeneity of variance and normal distribution. Many hematology and biochemistry items showed a non-normal distribution. To test the normality of data obtained from toxicity studies, the data of the concurrent control group may be examined, and for data that show a non-normal distribution, robust non-parametric tests may be applied.
On the efficacy of procedures to normalize Ex-Gaussian distributions.
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2014-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter λ = -1 leads to the best results.
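The kind of comparison reported above can be sketched by drawing ex-Gaussian "reaction times", applying candidate transformations, and scoring normality; the μ, σ, τ values are arbitrary but RT-like, and the Shapiro-Wilk statistic is used here simply as a convenient normality score, not necessarily the criterion used by the authors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
mu, sigma, tau = 0.40, 0.05, 0.20                  # normal and exponential components, seconds
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)   # ex-Gaussian RT sample

def normality_score(x):
    """Shapiro-Wilk W statistic: closer to 1 means closer to a normal distribution."""
    return stats.shapiro(x).statistic

print("raw RTs:            ", round(normality_score(rt), 4))
print("reciprocal (λ = -1):", round(normality_score(1.0 / rt), 4))
print("log transform:      ", round(normality_score(np.log(rt)), 4))
```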
A survey of blood pressure in Lebanese children and adolescence
Merhi, Bassem Abou; Al-Hajj, Fatima; Al-Tannir, Mohamad; Ziade, Fouad; El-Rajab, Mariam
2011-01-01
Background: Blood pressure varies between populations due to ethnic and environmental factors. Therefore, normal blood pressure values should be determined for different populations. Aims: The aim of this survey was to produce blood pressure nomograms for Lebanese children in order to establish distribution curves of blood pressure by age and sex. Subjects and Methods: We conducted a survey of blood pressure in 5710 Lebanese schoolchildren aged 5 to 15 years (2918 boys and 2792 girls), and studied the distribution of systolic and diastolic blood pressure in these children and adolescents. Blood pressure was measured with a mercury sphygmomanometer using a standardized technique. Results: Both systolic and diastolic blood pressure had a positive correlation with weight, height, age, and body mass index (r= 0.648, 0.643, 0.582, and 0.44, respectively) (P < .001). There was no significant difference in the systolic and diastolic blood pressure in boys compared to girls of corresponding ages. However, the average annual increase in systolic blood pressure was 2.86 mm Hg in boys and 2.63 mm Hg in girls, whereas the annual increase in diastolic blood pressure was 1.72 mm Hg in boys and 1.48 mm Hg in girls. The prevalence of high and high-normal blood pressure at the upper limit of normal (between the 90th and 95th percentile, at risk of future hypertension if not managed adequately), was 10.5% in boys and 6.9% in girls, with similar distributions among the two sexes. Conclusions: We present the first age-specific reference values for blood pressure of Lebanese children aged 5 to 15 years based on a good representative sample. The use of these reference values should help pediatricians identify children with normal, high-normal and high blood pressure. PMID:22540059
Query Health: standards-based, cross-platform population health surveillance
Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N
2014-01-01
Objective Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussions This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371
Albin, Thomas J; Vink, Peter
2015-01-01
Anthropometric data are assumed to have a Gaussian (Normal) distribution, but if non-Gaussian, accommodation estimates are affected. When data are limited, users may choose to combine anthropometric elements by Combining Percentiles (CP) (adding or subtracting), despite known adverse effects. This study examined whether global anthropometric data are Gaussian distributed. It compared the Median Correlation Method (MCM) of combining anthropometric elements with unknown correlations to CP to determine if MCM provides better estimates of percentile values and accommodation. Percentile values of 604 male and female anthropometric data drawn from seven countries worldwide were expressed as standard scores. The standard scores were tested to determine if they were consistent with a Gaussian distribution. Empirical multipliers for determining percentile values were developed. In a test case, five anthropometric elements descriptive of seating were combined in addition and subtraction models. Percentile values were estimated for each model by CP, MCM with Gaussian distributed data, or MCM with empirically distributed data. The 5th and 95th percentile values of a dataset of global anthropometric data are shown to be asymmetrically distributed. MCM with empirical multipliers gave more accurate estimates of 5th and 95th percentile values. Anthropometric data are not Gaussian distributed. The MCM method is more accurate than adding or subtracting percentiles.
Results of module electrical measurement of the DOE 46-kilowatt procurement
NASA Technical Reports Server (NTRS)
Curtis, H. B.
1978-01-01
Current-voltage measurements have been made on terrestrial solar cell modules of the DOE/JPL Low Cost Silicon Solar Array procurement. Data on short circuit current, open circuit voltage, and maximum power for the four types of modules are presented in normalized form, showing distribution of the measured values. Standard deviations from the mean values are also given. Tests of the statistical significance of the data are discussed.
Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.
Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J
2017-06-01
Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violation of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Marginalized two-part models have recently been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is simulated from a Weibull or a similar distribution such as the gamma or truncated Gaussian.
NASA Astrophysics Data System (ADS)
Kado, B.; Mohammad, S.; Lee, Y. H.; Shek, P. N.; Kadir, M. A. A.
2018-04-01
A standard fire test was carried out on 3 hollow steel tube columns and 6 foamed concrete filled steel tube columns, and the temperature distribution in the columns was investigated. Foamed concrete densities of 1500 kg/m3 and 1800 kg/m3 at 15%, 20% and 25% load levels are the parameters considered. The columns investigated were 2400 mm long, with 139.7 mm outer diameter and 6 mm steel tube thickness. The results show that the foamed concrete filled steel tube columns have the highest fire resistance of 43 minutes at the 15% load level and a low critical temperature of 671 °C at the 25% load level using the 1500 kg/m3 foamed concrete density. The fire resistance of a foamed concrete filled column increases with lower foamed concrete strength. Foamed concrete can be used to provide additional fire resistance to a hollow steel column or to replace normal weight concrete in concrete filled columns, since filling the hollow steel tube with foamed concrete produces a column with higher fire resistance than an unfilled hollow steel column. Normal weight concrete can therefore be substituted with foamed concrete in concrete filled columns, which reduces the self-weight of the structure because of its light weight while providing the desired fire resistance.
Montoro Bustos, Antonio R; Petersen, Elijah J; Possolo, Antonio; Winchester, Michael R
2015-09-01
Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013 but significantly lower and with a greater variability for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with several single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.
Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco
2014-01-01
Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components for the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs agrees with the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without a familial history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate for previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by fitting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4-genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
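Fitting the ex-Gaussian to a single participant's RTs, so that μ, σ, and τ become per-subject variables for the group comparisons described above, can be done with scipy's exponentially modified normal distribution. The synthetic RTs and parameter values below are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
rt_ms = rng.normal(420, 60, 400) + rng.exponential(150, 400)   # synthetic Go-trial RTs (ms)

# scipy parameterizes the ex-Gaussian (exponnorm) by K = tau/sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rt_ms)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
print(f"mu = {mu_hat:.1f} ms, sigma = {sigma_hat:.1f} ms, tau = {tau_hat:.1f} ms")
```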
Planar Laser Imaging of Sprays for Liquid Rocket Studies
NASA Technical Reports Server (NTRS)
Lee, W.; Pal, S.; Ryan, H. M.; Strakey, P. A.; Santoro, Robert J.
1990-01-01
A planar laser imaging technique which incorporates an optical polarization ratio technique for droplet size measurement was studied. A series of pressure-atomized water sprays were studied with this technique and compared with measurements obtained using a Phase Doppler Particle Analyzer. In particular, the effects of assuming a logarithmic normal distribution function for the droplet size distribution within a spray were evaluated. Reasonable agreement between the instruments was obtained for the geometric mean diameter of the droplet distribution. However, comparisons based on the Sauter mean diameter show larger discrepancies, essentially because of uncertainties in the appropriate standard deviation to be applied for the polarization ratio technique. Comparisons were also made between single laser pulse (temporally resolved) measurements and multiple laser pulse visualizations of the spray.
Chaos-assisted tunneling in the presence of Anderson localization.
Doggen, Elmer V H; Georgeot, Bertrand; Lemarié, Gabriel
2017-10-01
Tunneling between two classically disconnected regular regions can be strongly affected by the presence of a chaotic sea in between. This phenomenon, known as chaos-assisted tunneling, gives rise to large fluctuations of the tunneling rate. Here we study chaos-assisted tunneling in the presence of Anderson localization effects in the chaotic sea. Our results show that the standard tunneling rate distribution is strongly modified by localization, going from the Cauchy distribution in the ergodic regime to a log-normal distribution in the strongly localized case, for both a deterministic and a disordered model. We develop a single-parameter scaling description which accurately describes the numerical data. Several possible experimental implementations using cold atoms, photonic lattices, or microwave billiards are discussed.
NASA Astrophysics Data System (ADS)
Van doninck, Jasper; Tuomisto, Hanna
2017-06-01
Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflection distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consist of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observation. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.
The variance of length of stay and the optimal DRG outlier payments.
Felder, Stefan
2009-09-01
Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related group (DRGs), are complemented by outlier payments for long stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates for normally distributed truncated LOS that the optimal outlier threshold indeed decreases with an increase in the standard deviation.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
Normal mixture distributions models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit the model to the real data. Second, we present its application in risk analysis, where we apply the normal mixture distributions model to evaluate the value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
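A minimal sketch of the estimation idea: fit a two-component normal mixture to returns and read VaR and CVaR off the fitted mixture (here via simulation from the fitted model). The synthetic returns, the 5% level, and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the authors' estimation procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
returns = np.concatenate([rng.normal(0.01, 0.03, 900),      # "calm" regime
                          rng.normal(-0.02, 0.08, 100)])    # "turbulent" regime

gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))
sim = gm.sample(200_000)[0].ravel()                         # draws from the fitted mixture

alpha = 0.05
var_5 = -np.quantile(sim, alpha)                            # 5% value at risk (loss, positive)
cvar_5 = -sim[sim <= np.quantile(sim, alpha)].mean()        # expected shortfall beyond the VaR
print(f"VaR(5%)  = {var_5:.3f}")
print(f"CVaR(5%) = {cvar_5:.3f}")
```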
Ito, Y; Hasegawa, S; Yamaguchi, H; Yoshioka, J; Uehara, T; Nishimura, T
2000-01-01
Clinical studies have shown discrepancies in the distribution of thallium-201 and iodine 123-beta-methyl-iodophenylpentadecanoic acid (BMIPP) in patients with hypertrophic cardiomyopathy (HCM). Myocardial uptake of fluorine 18 deoxyglucose (FDG) is increased in the hypertrophic area in HCM. We examined whether the distribution of a Tl-201/BMIPP subtraction polar map correlates with that of an FDG polar map. We normalized each Tl-201 and BMIPP bull's-eye polar map of 6 volunteers to its maximum count and obtained a standard Tl-201/BMIPP subtraction polar map by subtracting the normalized BMIPP bull's-eye polar map from the normalized Tl-201 bull's-eye polar map. The Tl-201/BMIPP subtraction polar map was then applied to 8 patients with HCM (mean age 65+/-12 years) to evaluate the discrepancy between Tl-201 and BMIPP distribution. We compared the Tl-201/BMIPP subtraction polar map with an FDG polar map. In patients with HCM, the Tl-201/BMIPP subtraction polar map showed a focal uptake pattern in the hypertrophic area similar to that of the FDG polar map. By quantitative analysis, the severity score of the Tl-201/BMIPP subtraction polar map was significantly correlated with the percent dose uptake of the FDG polar map. These results suggest that this new quantitative method may be an alternative to FDG positron emission tomography for the routine evaluation of HCM.
Moraleja, Irene; Esteban-Fernández, Diego; Lázaro, Alberto; Humanes, Blanca; Neumann, Boris; Tejedor, Alberto; Luz Mena, M; Jakubowski, Norbert; Gómez-Gómez, M Milagros
2016-03-01
The study of the distribution of the cytostatic drugs cisplatin, carboplatin, and oxaliplatin along the kidney may help to understand their different nephrotoxic behavior. Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) allows the acquisition of trace element images in biological tissues. However, the results obtained are affected by several variations concerning the sample matrix and instrumental drifts. In this work, an internal standardization method based on printing an Ir-spiked ink onto the surface of the sample has been developed to evaluate the different distributions and accumulation levels of the aforementioned drugs along the kidney of a rat model. A conventional ink-jet printer was used to print the ink onto fresh sagittal kidney tissue slices of 4 μm. A reproducible and homogeneous deposition of the ink along the tissue was observed. The ink was partially absorbed on top of the tissue. Thus, this approach provides a pseudo-internal standardization, because the ablation of the internal standard and of the tissue takes place sequentially rather than simultaneously. A satisfactory normalization of LA-ICP-MS bioimages, and therefore a reliable comparison of kidneys treated with different Pt-based drugs, was achieved even for tissues analyzed on different days. Because the sample is completely ablated, the transport of the ablated internal standard and tissue to the inductively coupled plasma mass spectrometer (ICP-MS) takes place practically at the same time. Pt accumulation in the kidney was observed in accordance with the dosages administered for each drug. Although the accumulation rate of cisplatin and oxaliplatin is high in both cases, their Pt distributions differ. The strong nephrotoxicity observed for cisplatin and the absence of such a side effect in the case of oxaliplatin could explain these distribution differences. The homogeneous distribution of oxaliplatin in the cortical and medullar areas could be related to its higher affinity for cellular transporters such as MATE2-k.
On the efficacy of procedures to normalize Ex-Gaussian distributions
Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío
2015-01-01
Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods in normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are in normalizing such data. Specifically, transformation with parameter lambda = -1 leads to the best results. PMID:25709588
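The lambda = -1 power transform mentioned above is essentially a sign-preserving reciprocal. A minimal sketch, with illustrative ex-Gaussian parameters rather than any values from the simulations reported here:

```python
# Hedged sketch: applying the lambda = -1 power transform to ex-Gaussian samples.
# Parameter values are illustrative; they are not taken from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, tau = 400.0, 40.0, 200.0                 # illustrative RT parameters (ms)
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

rt_transformed = -1.0 / rt                          # lambda = -1 transform (negated to stay monotone increasing)

print(f"skewness before: {stats.skew(rt):.2f}")
print(f"skewness after : {stats.skew(rt_transformed):.2f}")
```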
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Yamazaki, Yoshihiro; Kuninaka, Hiroto
2013-08-01
In this study, we examine the validity of the transition of the human height distribution from the log-normal distribution to the normal distribution during puberty, as suggested in an earlier study [Kuninaka et al.: J. Phys. Soc. Jpn. 78 (2009) 125001]. Our data analysis reveals that, in late puberty, the variation in height decreases as children grow. Thus, the classification of a height dataset by age at this stage leads us to analyze a mixture of distributions with larger means and smaller variations. This mixture distribution has a negative skewness and is consequently closer to the normal distribution than to the log-normal distribution. The opposite case occurs in early puberty and the mixture distribution is positively skewed, which resembles the log-normal distribution rather than the normal distribution. Thus, this scenario mimics the transition during puberty. Additionally, our scenario is realized through a numerical simulation based on a statistical model. The present study does not support the transition suggested by the earlier study.
Xiong, Xi-Xi; Ding, Gao-Zhong; Zhao, Wen-E; Li, Xue; Ling, Yu-Ting; Sun, Li; Gong, Qing-Li; Lu, Yan
2017-07-01
Skin color is determined by the number of melanin granules produced by melanocytes that are transferred to keratinocytes. Melanin synthesis and the distribution of melanosomes to keratinocytes within the epidermal melanin unit (EMU) in the skin of vitiligo patients have been poorly studied. The ultrastructure and distribution of melanosomes in melanocytes and surrounding keratinocytes in perilesional vitiligo and normal skin were investigated using transmission electron microscopy (TEM). Furthermore, we performed a quantitative analysis of melanosome distribution within the EMUs with scatter plots. The melanosome count within keratinocytes increased significantly compared with melanocytes in perilesional stable vitiligo (P < 0.001), perilesional halo nevi (P < 0.01) and the controls (P < 0.01), but not in perilesional active vitiligo. Furthermore, melanosome counts within melanocytes and their surrounding keratinocytes in perilesional active vitiligo skin decreased significantly compared with the other groups. In addition, taking the mean minus the standard error of the melanosome count within melanocytes and keratinocytes in healthy controls as the lower limit of normal, EMUs were graded into 3 stages (I-III). Perilesional active vitiligo presented a significantly different constitution of stages compared to the other groups (P < 0.001). The distribution and constitution of melanosomes were normal in halo nevi. Impaired melanin synthesis and melanosome transfer are involved in the pathogenesis of vitiligo. Active vitiligo varies in stage; in stage II, EMUs are only slightly impaired and can be resuscitated, providing a golden opportunity to achieve the desired repigmentation with an appropriate therapeutic choice. An adverse milieu may also contribute to the low melanosome count in keratinocytes.
WASTE HANDLING BUILDING ELECTRICAL SYSTEM DESCRIPTION DOCUMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.C. Khamamkar
2000-06-23
The Waste Handling Building Electrical System performs the function of receiving, distributing, transforming, monitoring, and controlling AC and DC power to all waste handling building electrical loads. The system distributes normal electrical power to support all loads that are within the Waste Handling Building (WHB). The system also generates and distributes emergency power to support designated emergency loads within the WHB within specified time limits. The system provides the capability to transfer between normal and emergency power. The system provides emergency power via independent and physically separated distribution feeds from the normal supply. The designated emergency electrical equipment will be designed to operate during and after design basis events (DBEs). The system also provides lighting, grounding, and lightning protection for the Waste Handling Building. The system is located in the Waste Handling Building System. The system consists of a diesel generator, power distribution cables, transformers, switch gear, motor controllers, power panel boards, lighting panel boards, lighting equipment, lightning protection equipment, control cabling, and grounding system. Emergency power is generated with a diesel generator located in a QL-2 structure and connected to the QL-2 bus. The Waste Handling Building Electrical System distributes and controls primary power to acceptable industry standards, and with a dependability compatible with waste handling building reliability objectives for non-safety electrical loads. It also generates and distributes emergency power to the designated emergency loads. The Waste Handling Building Electrical System receives power from the Site Electrical Power System. The primary material handling power interfaces include the Carrier/Cask Handling System, Canister Transfer System, Assembly Transfer System, Waste Package Remediation System, and Disposal Container Handling Systems. The system interfaces with the MGR Operations Monitoring and Control System for supervisory monitoring and control signals. The system interfaces with all facility support loads such as heating, ventilation, and air conditioning, office, fire protection, monitoring and control, safeguards and security, and communications subsystems.
24-channel dual microcontroller-based voltage controller for ion optics remote control
NASA Astrophysics Data System (ADS)
Bengtsson, L.
2018-05-01
The design of a 24-channel voltage control instrument for Wenzel Elektronik N1130 NIM modules is described. This instrument is remote controlled from a LabVIEW GUI on a host Windows computer and is intended for ion optics control in electron affinity measurements on negative ions at the CERN-ISOLDE facility. Each channel has a resolution of 12 bits and has a normally distributed noise with a standard deviation of <1 mV. The instrument is designed as a standard 2-unit NIM module where the electronic hardware consists of a printed circuit board with two asynchronously operating microcontrollers.
Analytical probabilistic proton dose calculation and range uncertainties
NASA Astrophysics Data System (ADS)
Bangert, M.; Hennig, P.; Oelfke, U.
2014-03-01
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μ_k, widths δ_k, and weights ω_k of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
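A one-dimensional, hedged sketch of the closed-form idea: when the depth dose is a weighted sum of Gaussians and the radiological depth carries a Normal uncertainty, the expected dose is again a weighted sum of Gaussians with inflated widths. All numbers below are illustrative, not the fitted parameters from the paper.

```python
# Hedged 1D sketch of analytical probabilistic modeling (APM): the expectation of a
# Gaussian-parameterized depth-dose curve under Normal range uncertainty is closed form.
# Weights, means, and widths are illustrative assumptions.
import numpy as np

def gauss(z, m, s):
    # Normalized Gaussian density N(z; m, s)
    return np.exp(-0.5 * ((z - m) / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

w     = np.array([0.2, 0.5, 0.3])       # component weights omega_k
mu_k  = np.array([50.0, 80.0, 100.0])   # component means (mm)
delta = np.array([15.0, 8.0, 3.0])      # component widths (mm)

def expected_dose(z, sigma_range):
    # E[d(z)] when the radiological depth is N(z, sigma_range^2): each Gaussian
    # component convolves to one with width sqrt(delta_k^2 + sigma_range^2).
    s = np.sqrt(delta ** 2 + sigma_range ** 2)
    return np.sum(w * gauss(np.asarray(z)[:, None], mu_k, s), axis=1)

depths = np.linspace(0.0, 120.0, 7)
print(np.round(expected_dose(depths, sigma_range=3.0), 5))
```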
NASA Astrophysics Data System (ADS)
Butler, Samuel D.; Marciniak, Michael A.
2014-09-01
Since the development of the Torrance-Sparrow bidirectional reflectance distribution function (BRDF) model in 1967, several BRDF models have been created. Previous attempts to categorize BRDF models have relied upon somewhat vague descriptors, such as empirical, semi-empirical, and experimental. Our approach is to instead categorize BRDF models based on functional form: microfacet normal distribution, geometric attenuation, directional-volumetric and Fresnel terms, and cross section conversion factor. Several popular microfacet models are compared to a standardized notation for a microfacet BRDF model. A library of microfacet model components is developed, allowing for creation of unique microfacet models driven by experimentally measured BRDFs.
Understanding a Normal Distribution of Data.
Maltenfort, Mitchell G
2015-12-01
Assuming data follow a normal distribution is essential for many common statistical tests. However, what are normal data and when can we assume that a data set follows this distribution? What can be done to analyze non-normal data?
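As a small, hedged illustration of the kind of check implied by these questions, the sketch below applies a Shapiro-Wilk test and summary shape statistics to a deliberately non-normal sample; the data and thresholds are purely illustrative.

```python
# Hedged sketch: two quick checks before assuming data follow a normal distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=200)   # deliberately non-normal sample

stat, p = stats.shapiro(data)
print(f"Shapiro-Wilk p = {p:.4f} (a small p suggests non-normality)")
print(f"skewness = {stats.skew(data):.2f}, excess kurtosis = {stats.kurtosis(data):.2f}")
```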
Cho, Keunhee; Cho, Jeong-Rae; Kim, Sung Tae; Park, Sung Yong; Kim, Young-Jin; Park, Young-Hwan
2016-01-01
The recently developed smart strand can be used to measure the prestress force in a prestressed concrete (PSC) structure from the construction stage to the in-service stage. The higher cost of the smart strand compared to the conventional strand makes it unaffordable to replace all the strands with smart strands, and results in the application of only a limited number of smart strands in the PSC structure. However, the prestress forces developed in the strands of the multi-strand system frequently adopted in PSC structures differ from each other, which means that the prestress force in the multi-strand system cannot be obtained by simple proportional scaling using the measurement of the smart strand. Therefore, this study examines the prestress force distribution in the multi-strand system to find the correlation between the prestress force measured by the smart strand and the prestress force distribution in the multi-strand system. To that end, the prestress force distribution was measured using electromagnetic sensors for various factors of the multi-strand system adopted on site in the fabrication of actual PSC girders. The results verified that a normal distribution can be assumed for the prestress force distribution per anchor head, and a method for computing the mean and standard deviation defining this normal distribution is proposed. This paper presents a meaningful finding by proposing an estimation method of the prestress force based upon field-measured data of the prestress force distribution in the multi-strand system of actual PSC structures. PMID:27548172
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
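A minimal sketch of the two statistics advocated, computed from the empirical distribution of unsigned errors; the error sample, threshold, and confidence level below are illustrative assumptions.

```python
# Hedged sketch: (1) probability that a new absolute error falls below a threshold and
# (2) the amplitude of error not exceeded at a chosen confidence level, both taken
# from the empirical CDF of unsigned errors. The error sample here is simulated.
import numpy as np

rng = np.random.default_rng(2)
errors = rng.standard_t(df=3, size=500) * 0.8 + 0.3   # illustrative: non-normal, not zero-centered

abs_err = np.abs(errors)
threshold, confidence = 1.0, 0.95

p_below = np.mean(abs_err < threshold)      # statistic (1)
q_conf = np.quantile(abs_err, confidence)   # statistic (2)

print(f"P(|error| < {threshold}) = {p_below:.2f}")
print(f"{confidence:.0%} of absolute errors are below {q_conf:.2f}")
```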
NASA Technical Reports Server (NTRS)
Lienert, Barry R.
1991-01-01
Monte Carlo perturbations of synthetic tensors are used to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to being normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
Haeckel, Rainer; Wosniok, Werner
2010-10-01
The distributions of many quantities in laboratory medicine are considered to be Gaussian if they are symmetric, although, theoretically, a Gaussian distribution is not plausible for quantities that can attain only non-negative values. If a distribution is skewed, further specification of the type is required, which may be difficult to provide. Skewed (non-Gaussian) distributions found in clinical chemistry usually show only moderately large positive skewness (e.g., the log-normal and χ² distributions). The degree of skewness depends on the magnitude of the empirical biological variation (CV(e)), as demonstrated using the log-normal distribution. A Gaussian distribution with a small CV(e) (e.g., for plasma sodium) is very similar to a log-normal distribution with the same CV(e). In contrast, a relatively large CV(e) (e.g., plasma aspartate aminotransferase) leads to distinct differences between a Gaussian and a log-normal distribution. If the type of an empirical distribution is unknown, it is proposed that a log-normal distribution be assumed in such cases. This avoids distributional assumptions that are not plausible and does not contradict the observation that distributions with small biological variation look very similar to a Gaussian distribution.
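A hedged numerical sketch of the point about biological variation: a log-normal matched to a Gaussian's mean and CV is nearly symmetric when the CV is small and clearly skewed when it is large. The means and CVs below are illustrative stand-ins for the sodium- and enzyme-like cases.

```python
# Hedged sketch: skewness of a log-normal matched to a given mean and coefficient of
# variation (CV). Small CV -> nearly Gaussian shape; large CV -> distinct skewness.
import numpy as np
from scipy import stats

def lognormal_skewness(mean, cv):
    s2 = np.log(1.0 + cv ** 2)                       # log-scale variance matched to the CV
    ln = stats.lognorm(s=np.sqrt(s2), scale=mean / np.sqrt(1.0 + cv ** 2))
    return float(ln.stats(moments="s"))

print(f"CV = 1%:  log-normal skewness = {lognormal_skewness(140.0, 0.01):.3f} (Gaussian: 0)")
print(f"CV = 40%: log-normal skewness = {lognormal_skewness(25.0, 0.40):.3f} (Gaussian: 0)")
```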
The semantic Stroop effect: An ex-Gaussian analysis.
White, Darcy; Risko, Evan F; Besner, Derek
2016-10-01
Previous analyses of the standard Stroop effect (which typically uses color words that form part of the response set) have documented effects on mean reaction times in hundreds of experiments in the literature. Less well known is the fact that ex-Gaussian analyses reveal that such effects are seen in (a) the mean of the normal distribution (mu), as well as in (b) the standard deviation of the normal distribution (sigma) and (c) the tail (tau). No ex-Gaussian analysis exists in the literature with respect to the semantically based Stroop effect (which contrasts incongruent color-associated words with, e.g., neutral controls). In the present experiments, we investigated whether the semantically based Stroop effect is also seen in the three ex-Gaussian parameters. Replicating previous reports, color naming was slower when the color was carried by an irrelevant (but incongruent) color-associated word (e.g., sky, tomato) than when the control items consisted of neutral words (e.g., keg, palace) in each of four experiments. An ex-Gaussian analysis revealed that this semantically based Stroop effect was restricted to the arithmetic mean and mu; no semantic Stroop effect was observed in tau. These data are consistent with the views (1) that there is a clear difference in the source of the semantic Stroop effect, as compared to the standard Stroop effect (evidenced by the presence vs. absence of an effect on tau), and (2) that interference associated with response competition on incongruent trials in tau is absent in the semantic Stroop effect.
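A small, hedged sketch of extracting mu, sigma, and tau from RT-like data with SciPy's exponentially modified normal distribution (exponnorm, whose shape parameter is K = tau/sigma); the simulated parameters are illustrative and unrelated to the reported experiments.

```python
# Hedged sketch: recovering ex-Gaussian parameters (mu, sigma, tau) from simulated RTs
# via scipy.stats.exponnorm, whose shape parameter is K = tau / sigma.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, tau = 600.0, 60.0, 150.0                            # illustrative values (ms)
rt = rng.normal(mu, sigma, 4000) + rng.exponential(tau, 4000)

K, loc, scale = stats.exponnorm.fit(rt)
print(f"mu ~ {loc:.1f}, sigma ~ {scale:.1f}, tau ~ {K * scale:.1f}")
```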
de Faria, Clara Maria Gonçalves; Inada, Natalia Mayumi; Vollet-Filho, José Dirceu; Bagnato, Vanderlei Salvador
2018-05-01
Photodynamic therapy (PDT) is a technique with well-established principles that often demands repeated applications for the sequential elimination of tumor cells. An important question concerns the way surviving cells from one treatment behave in the subsequent one. Threshold dose is a core concept in PDT dosimetry, as the minimum amount of energy to be delivered for cell destruction via PDT. Concepts of threshold distribution have been shown to be an important tool for the analysis of PDT results in vitro. In this study, we used some of these concepts to demonstrate that subsequent treatments with partial elimination of cells modify the distribution, which represents an increased resistance of the cells to the photodynamic action. HepG2 and HepaRG were used as models of tumor and normal liver cells, and a protocol to induce resistance, consisting of repeated PDT sessions using Photogem® as a photosensitizer, was applied to the tumor cells. The response of these cells to PDT was assessed using a standard viability assay, and the dose-response curves were used for deriving the threshold distributions. The changes in the distribution revealed that the resistance protocol effectively eliminated the most sensitive cells. Nevertheless, the HepaRG cell line was the most resistant one among the cells analyzed, which indicates a specificity in clinical applications that enables the use of high doses and drug concentrations with minimal damage to the surrounding normal tissue. Copyright © 2018 Elsevier B.V. All rights reserved.
Ryu, Shoraku; Hayashi, Mitsuhiro; Aikawa, Hiroaki; Okamoto, Isamu; Fujiwara, Yasuhiro; Hamada, Akinobu
2018-01-01
The penetration of the anaplastic lymphoma kinase (ALK) inhibitor alectinib in neuroblastomas and the relationship between alectinib and ALK expression are unknown. The aim of this study was to perform a quantitative investigation of the inter- and intra-tumoural distribution of alectinib in different neuroblastoma xenograft models using matrix-assisted laser desorption ionization MS imaging (MALDI-MSI). The distribution of alectinib in NB1 (ALK amplification) and SK-N-FI (ALK wild-type) xenograft tissues was analysed using MALDI-MSI. The abundance of alectinib in tumours and intra-tumoural areas was quantified using ion signal intensities from MALDI-MSI after normalization by correlation with LC-MS/MS. The distribution of alectinib was heterogeneous in neuroblastomas. The penetration of alectinib was not significantly different between ALK-amplified and ALK wild-type tissues using both LC-MS/MS concentrations and MSI intensities. Normalization with an internal standard increased the quantitative property of MSI by adjusting for the ion suppression effect. The distribution of alectinib in different intra-tumoural areas can alternatively be quantified from MS images by correlation with LC-MS/MS. The penetration of alectinib into tumour tissues may not be homogeneous or influenced by ALK expression in the early period after single-dose administration. MALDI-MSI may prove to be a valuable pharmaceutical method for elucidating the mechanism of action of drugs by clarifying their microscopic distribution in heterogeneous tissues. © 2017 The British Pharmacological Society.
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; McDonald, G.
1981-01-01
An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
The problem of natural funnel asymmetries: a simulation analysis of meta-analysis in macroeconomics.
Callot, Laurent; Paldam, Martin
2011-06-01
Effect sizes in macroeconomics are estimated by regressions on data published by statistical agencies. Funnel plots are a representation of the distribution of the resulting regression coefficients. They are normally much wider than predicted by the t-ratio of the coefficients and often asymmetric. The standard method of meta-analysts in economics assumes that the asymmetries are due to publication bias causing censoring, and adjusts the average accordingly. The paper shows that some funnel asymmetries may be 'natural', so that they occur without censoring. We investigate such asymmetries by simulating funnels from pairs of data generating processes (DGPs) and estimating models (EMs), in which the EM has the problem that it disregards a property of the DGP. The problems are data dependency, structural breaks, non-normal residuals, non-linearity, and omitted variables. We show that some of these problems generate funnel asymmetries. When they do, the standard method often fails. Copyright © 2011 John Wiley & Sons, Ltd.
Analysis of Spin Financial Market by GARCH Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2013-08-01
A spin model is used for simulations of financial markets. To determine return volatility in the spin financial market we use the GARCH model often used for volatility estimation in empirical finance. We apply the Bayesian inference performed by the Markov Chain Monte Carlo method to the parameter estimation of the GARCH model. It is found that volatility determined by the GARCH model exhibits "volatility clustering" also observed in the real financial markets. Using volatility determined by the GARCH model we examine the mixture-of-distribution hypothesis (MDH) suggested for the asset return dynamics. We find that the returns standardized by volatility are approximately standard normal random variables. Moreover we find that the absolute standardized returns show no significant autocorrelation. These findings are consistent with the view of the MDH for the return dynamics.
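A hedged sketch of the MDH check described: simulate a GARCH(1,1) return series, standardize the returns by their conditional volatility, and inspect normality and the autocorrelation of absolute standardized returns. The GARCH parameters are illustrative, and the true simulated volatility is used in place of a fitted one.

```python
# Hedged sketch: returns standardized by GARCH(1,1) volatility should be roughly
# standard normal, with little autocorrelation left in |standardized returns|.
# Parameters are illustrative; a real analysis would fit the GARCH model to data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
omega, alpha, beta = 1e-5, 0.08, 0.90
n = 5000
r = np.empty(n)
sig2 = np.empty(n)
sig2[0] = omega / (1.0 - alpha - beta)          # start at the unconditional variance

for t in range(n):
    if t > 0:
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

z = r / np.sqrt(sig2)                           # standardized returns
lag1_abs = np.corrcoef(np.abs(z[:-1]), np.abs(z[1:]))[0, 1]

print(f"standardized returns: mean {z.mean():.3f}, sd {z.std():.3f}, excess kurtosis {stats.kurtosis(z):.3f}")
print(f"lag-1 autocorrelation of |z|: {lag1_abs:.3f}")
```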
X-ray clusters from a high-resolution hydrodynamic PPM simulation of the cold dark matter universe
NASA Technical Reports Server (NTRS)
Bryan, Greg L.; Cen, Renyue; Norman, Michael L.; Ostriker, Jeremiah P.; Stone, James M.
1994-01-01
A new three-dimensional hydrodynamic code based on the piecewise parabolic method (PPM) is utilized to compute the distribution of hot gas in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) universe. Utilizing periodic boundary conditions, a box with size 85 h^-1 Mpc, having cell size 0.31 h^-1 Mpc, is followed in a simulation with 270^3 ≈ 10^7.3 cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, σ_8 = 1.05, Ω_b = 0.06, we find the X-ray-emitting clusters, compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. The results, which are compared with those obtained in the preceding paper (Kang et al. 1994a), may be used in conjunction with ROSAT and other observational data sets. Overall, the results of the two computations are qualitatively very similar with regard to the trends of cluster properties, i.e., how the number density, radius, and temperature depend on luminosity and redshift. The total luminosity from clusters is approximately a factor of 2 higher using the PPM code (as compared to the 'total variation diminishing' (TVD) code used in the previous paper), with the number of bright clusters higher by a similar factor. The primary conclusions of the prior paper, with regard to the power spectrum of the primeval density perturbations, are strengthened: the standard CDM model, normalized to the COBE microwave detection, predicts too many bright X-ray-emitting clusters, by a factor probably in excess of 5. The comparison between observations and theoretical predictions for the evolution of cluster properties, luminosity functions, and size and temperature distributions should provide an important discriminator among competing scenarios for the development of structure in the universe.
Inferring climate variability from skewed proxy records
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Tingley, M.
2013-12-01
Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted and compared to other proxy records. (2) a multiproxy reconstruction of temperature over the Common Era (Mann et al., 2009), where we find that about one third of the records display significant departures from normality. Accordingly, accounting for skewness in proxy predictors has a notable influence on both the reconstructed global mean and spatial patterns of temperature change. Inferring climate variability from skewed proxy records thus requires care, but can be done with relatively simple tools. References - Mann, M. E., Z. Zhang, S. Rutherford, R. S. Bradley, M. K. Hughes, D. Shindell, C. Ammann, G. Faluvegi, and F. Ni (2009), Global signatures and dynamical origins of the little ice age and medieval climate anomaly, Science, 326(5957), 1256-1260, doi:10.1126/science.1177303. - Moy, C., G. Seltzer, D. Rodbell, and D. Anderson (2002), Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch, Nature, 420(6912), 162-165.
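A minimal, hedged sketch of approach (ii): applying a Box-Cox power transform to a positively skewed, runoff-like proxy before using standard methods. The gamma-distributed "proxy" below is a stand-in, not the Laguna Pallcacocha record.

```python
# Hedged sketch: power-transforming a skewed proxy prior to standard analysis.
# The synthetic gamma "proxy" is illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
proxy = rng.gamma(shape=1.5, scale=2.0, size=1000)    # positive, right-skewed proxy values

transformed, lam = stats.boxcox(proxy)
print(f"selected lambda = {lam:.2f}")
print(f"skewness before = {stats.skew(proxy):.2f}, after = {stats.skew(transformed):.2f}")
```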
Head Circumference and Height in Autism
Lainhart, Janet E.; Bigler, Erin D.; Bocian, Maureen; Coon, Hilary; Dinh, Elena; Dawson, Geraldine; Deutsch, Curtis K.; Dunn, Michelle; Estes, Annette; Tager-Flusberg, Helen; Folstein, Susan; Hepburn, Susan; Hyman, Susan; McMahon, William; Minshew, Nancy; Munson, Jeff; Osann, Kathy; Ozonoff, Sally; Rodier, Patricia; Rogers, Sally; Sigman, Marian; Spence, M. Anne; Stodgell, Christopher J.; Volkmar, Fred
2016-01-01
Data from 10 sites of the NICHD/NIDCD Collaborative Programs of Excellence in Autism were combined to study the distribution of head circumference and relationship to demographic and clinical variables. Three hundred thirty-eight probands with autism-spectrum disorder (ASD) including 208 probands with autism were studied along with 147 parents, 149 siblings, and typically developing controls. ASDs were diagnosed, and head circumference and clinical variables measured in a standardized manner across all sites. All subjects with autism met ADI-R, ADOS-G, DSM-IV, and ICD-10 criteria. The results show the distribution of standardized head circumference in autism is normal in shape, and the mean, variance, and rate of macrocephaly but not microcephaly are increased. Head circumference tends to be large relative to height in autism. No site, gender, age, SES, verbal, or non-verbal IQ effects were present in the autism sample. In addition to autism itself, standardized height and average parental head circumference were the most important factors predicting head circumference in individuals with autism. Mean standardized head circumference and rates of macrocephaly were similar in probands with autism and their parents. Increased head circumference was associated with a higher (more severe) ADI-R social algorithm score. Macrocephaly is associated with delayed onset of language. Although mean head circumference and rates of macrocephaly are increased in autism, a high degree of variability is present, underscoring the complex clinical heterogeneity of the disorder. The wide distribution of head circumference in autism has major implications for genetic, neuroimaging, and other neurobiological research. PMID:17022081
Normalized stiffness ratios for mechanical characterization of isotropic acoustic foams.
Sahraoui, Sohbi; Brouard, Bruno; Benyahia, Lazhar; Parmentier, Damien; Geslain, Alan
2013-12-01
This paper presents a method for the mechanical characterization of isotropic foams at low frequency. The objective of this study is to determine the Young's modulus, the Poisson's ratio, and the loss factor of commercially available foam plates. The method is applied to porous samples having square and circular sections. The main idea of this work is to perform quasi-static compression tests on a single foam sample followed by two juxtaposed samples having the same dimensions. The load and displacement measurements lead to a direct extraction of the elastic constants by means of the normalized stiffness and the normalized stiffness ratio, which depend on the Poisson's ratio and the shape factor. The normalized stiffness is calculated by the finite element method for different Poisson ratios. The no-slip boundary conditions imposed by the rigid loading plates create interfaces with a complex strain distribution. Beforehand, compression tests were performed by means of a standard tensile machine in order to determine the appropriate pre-compression rate for the quasi-static tests.
NASA Astrophysics Data System (ADS)
Zhou, Yan; Liu, Cheng-hui; Pu, Yang; Cheng, Gangge; Zhou, Lixin; Chen, Jun; Zhu, Ke; Alfano, Robert R.
2016-03-01
Raman spectroscopy has become widely used for diagnostic purposes in breast, lung and brain cancers. This report introduces a new approach based on spatial frequency spectra analysis of the underlying tissue structure at different stages of brain tumor. A combined spatial frequency spectroscopy (SFS) and resonance Raman (RR) spectroscopic method is used to discriminate human brain metastases of lung cancer from normal tissues for the first time. A total of thirty-one label-free micrographic images of normal and metastatic brain cancer tissues, obtained from a confocal micro-Raman spectroscopic system synchronously with the RR spectra of the corresponding samples, were collected from identical tissue sites. The difference in the randomness of tissue structure between the micrograph images of metastatic brain tumor tissues and normal tissues can be recognized by analyzing the spatial frequency. By fitting the distribution of the spatial frequency spectra of human brain tissues with a Gaussian function, the standard deviation, σ, can be obtained, which was used to generate a criterion to differentiate human brain cancerous tissues from normal ones using a Support Vector Machine (SVM) classifier. This SFS-SVM analysis of micrograph images presents good results, with 85% sensitivity and 75% specificity in comparison with gold-standard reports of pathology and immunology. The dual-modal advantages of the SFS method combined with RR spectroscopy may open a new way for neuropathology applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanheusden, K.; Warren, W.L.; Devine, R.A.B.
It is shown how mobile H+ ions can be generated thermally inside the oxide layer of Si/SiO2/Si structures. The technique involves only standard silicon processing steps: the nonvolatile field effect transistor (NVFET) is based on a standard MOSFET with thermally grown SiO2 capped with a poly-silicon layer. The capped thermal oxide receives an anneal at ~1100 °C that enables the incorporation of the mobile protons into the gate oxide. The introduction of the protons is achieved by a subsequent 500-800 °C anneal in a hydrogen-containing ambient, such as forming gas (N2:H2 95:5). The mobile protons are stable and entrapped inside the oxide layer, and unlike alkali ions, their space-charge distribution can be controlled and rapidly rearranged at room temperature by an applied electric field. Using this principle, a standard MOS transistor can be converted into a nonvolatile memory transistor that can be switched between normally on and normally off. Switching speed, retention, endurance, and radiation tolerance data are presented showing that this non-volatile memory technology can be competitive with existing Si-based non-volatile memory technologies such as the floating gate technologies (e.g. Flash memory).
Kinetics of humoral responsiveness and antigenic distribution in operated rats.
Kinnaert, P; Mahieu, A; van Geertruyden, N
1979-01-01
Wistar R/A rats were injected intravenously with 10^9 sheep red blood cells (SRBC) prior to, during or after a standard laparotomy. Stimulation of anti-SRBC antibody synthesis was already observed when the antigen was given 4 h prior to surgery and was maximal if SRBC were administered at the time of operation. The enhancing effect on the immune response lasted for 2 days after surgery. From the third post-operative day on, the injection of SRBC induced a normal humoral response. No subsequent depression was detected. Inter-organ distribution studies of 51Cr-labelled SRBC injected at various times prior to, during or after the surgical procedure showed a maximum decrease of liver uptake during operation; the depression was still present 2 h later but on the first post-operative day, no significant difference from the controls could be demonstrated. When the labelled antigen was given before surgery, organ distribution was normal. Consequently, there is no time relationship between the stimulation of antibody production and the alteration of total phagocytosis induced by surgery. Therefore, the enhanced humoral response cannot be explained only by spillover of the antigen from the liver into lymphoid organs. PMID:511217
Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.
Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar
2010-09-01
A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDF of the compound states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
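A hedged sketch of the kind of higher-moment summary described, for a two-state current modeled as a mixture of two normals; the open probability, current levels, and noise values are illustrative.

```python
# Hedged sketch: second, third, and fourth moment statistics of a simulated
# two-state (closed/open) ion-current record. All parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 20000
is_open = rng.random(n) < 0.3                      # open with probability 0.3
current = np.where(is_open,
                   rng.normal(5.0, 0.8, n),        # open-state current (pA)
                   rng.normal(0.0, 0.3, n))        # closed-state current (pA)

print(f"variance        = {current.var():.3f}")
print(f"skewness        = {stats.skew(current):.3f}")
print(f"excess kurtosis = {stats.kurtosis(current):.3f}")
```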
Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models
NASA Astrophysics Data System (ADS)
Thon, Ingo
One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data is plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values; an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
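A hedged sketch of fitting a log-normal to threshold-like responses and reading a probability of detection from its CDF; the simulated thresholds and the probed stimulus level are illustrative, not MRT data.

```python
# Hedged sketch: log-normal fit to (simulated) detection thresholds and the implied
# probability of detection at a given stimulus level. Values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
thresholds = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=300)

shape, loc, scale = stats.lognorm.fit(thresholds, floc=0.0)   # fix loc at 0 (positive support)
stimulus = 1.5
p_detect = stats.lognorm.cdf(stimulus, shape, loc=loc, scale=scale)
print(f"P(detection) at stimulus level {stimulus}: {p_detect:.2f}")
```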
Ogle, K.M.; Lee, R.W.
1994-01-01
Radon-222 activity was measured for 27 water samples from streams, an alluvial aquifer, bedrock aquifers, and a geothermal system, in and near the 510-square mile area of Owl Creek Basin, north-central Wyoming. Summary statistics of the radon-222 activities are compiled. For 16 stream-water samples, the arithmetic mean radon-222 activity was 20 pCi/L (picocuries per liter), the geometric mean activity was 7 pCi/L, the harmonic mean activity was 2 pCi/L and the median activity was 8 pCi/L. The standard deviation of the arithmetic mean is 29 pCi/L. The activities in the stream-water samples ranged from 0.4 to 97 pCi/L. The histogram of stream-water samples is right-skewed (positively skewed) compared to a normal distribution. For 11 ground-water samples, the arithmetic mean radon-222 activity was 486 pCi/L, the geometric mean activity was 280 pCi/L, the harmonic mean activity was 130 pCi/L and the median activity was 373 pCi/L. The standard deviation of the arithmetic mean is 500 pCi/L. The activity in the ground-water samples ranged from 25 to 1,704 pCi/L. The histogram of ground-water samples is right-skewed (positively skewed) compared to a normal distribution. (USGS)
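For reference, the summary statistics reported here (arithmetic, geometric, and harmonic means and the median) can be computed as in the hedged sketch below; the sample values are illustrative, not the Owl Creek Basin measurements.

```python
# Hedged sketch: the mean statistics used above, computed for an illustrative
# positive-valued activity sample (not the actual Owl Creek Basin data).
import numpy as np
from scipy import stats

activities = np.array([0.4, 2.0, 5.0, 8.0, 8.5, 12.0, 20.0, 35.0, 60.0, 97.0])  # pCi/L

print(f"arithmetic mean = {activities.mean():.1f} pCi/L")
print(f"geometric mean  = {stats.gmean(activities):.1f} pCi/L")
print(f"harmonic mean   = {stats.hmean(activities):.1f} pCi/L")
print(f"median          = {np.median(activities):.1f} pCi/L")
```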
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
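A hedged sketch of KDE under two of the rule-of-thumb bandwidths via SciPy's gaussian_kde; note that the plug-in and cross-validation selectors compared in the study are not built into SciPy, so only Scott's and Silverman's rules are shown, on an illustrative bimodal sample.

```python
# Hedged sketch: kernel density estimates of the same sample under two
# rule-of-thumb bandwidth selectors available in scipy.stats.gaussian_kde.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
sample = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(2.0, 1.0, 150)])  # bimodal

grid = np.linspace(-4.0, 5.0, 5)
for rule in ("scott", "silverman"):
    kde = stats.gaussian_kde(sample, bw_method=rule)
    print(f"{rule:9s} bandwidth factor = {kde.factor:.3f}, density on grid = {np.round(kde(grid), 3)}")
```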
A Novel Generalized Normal Distribution for Human Longevity and other Negatively Skewed Data
Robertson, Henry T.; Allison, David B.
2012-01-01
Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by 1. An intuitively straightforward genesis; 2. Closed forms for the pdf, cdf, mode, quantile, and hazard functions; and 3. Accessibility to non-statisticians, based on its close relationship to the normal distribution. PMID:22623974
ERIC Educational Resources Information Center
Zimmerman, Donald W.
2011-01-01
This study investigated how population parameters representing heterogeneity of variance, skewness, kurtosis, bimodality, and outlier-proneness, drawn from normal and eleven non-normal distributions, also characterized the ranks corresponding to independent samples of scores. When the parameters of population distributions from which samples were…
Daily Magnesium Intake and Serum Magnesium Concentration among Japanese People
Akizawa, Yoriko; Koizumi, Sadayuki; Itokawa, Yoshinori; Ojima, Toshiyuki; Nakamura, Yosikazu; Tamura, Tarou; Kusaka, Yukinori
2008-01-01
Background The vitamins and minerals that are deficient in the daily diet of a normal adult remain unknown. To answer this question, we conducted a population survey focusing on the relationship between dietary magnesium intake and serum magnesium level. Methods The subjects were 62 individuals from Fukui Prefecture who participated in the 1998 National Nutrition Survey. The survey investigated the physical status, nutritional status, and dietary data of the subjects. Holidays and special occasions were avoided, and a day when people are most likely to be on an ordinary diet was selected as the survey date. Results The mean (±standard deviation) daily magnesium intake was 322 (±132), 323 (±163), and 322 (±147) mg/day for men, women, and the entire group, respectively. The mean (±standard deviation) serum magnesium concentration was 20.69 (±2.83), 20.69 (±2.88), and 20.69 (±2.83) ppm for men, women, and the entire group, respectively. The distribution of serum magnesium concentration was normal. Dietary magnesium intake showed a log-normal distribution, which was then transformed by logarithmic conversion for examining the regression coefficients. The slope of the regression line between the serum magnesium concentration (Y ppm) and daily magnesium intake (X mg) was determined using the formula Y = 4.93 (log10X) + 8.49. The coefficient of correlation (r) was 0.29. A regression line (Y = 14.65X + 19.31) was observed between the daily intake of magnesium (Y mg) and serum magnesium concentration (X ppm). The coefficient of correlation was 0.28. Conclusion The daily magnesium intake correlated with serum magnesium concentration, and a linear regression model between them was proposed. PMID:18635902
Liu, W; Mohan, R
2012-06-01
Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed - the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung cancer. In both cases, we found that imposing the SV constraint improved plan robustness, but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD Anderson Cancer Center, and MD Anderson's cancer center support grant CA016672. © 2012 American Association of Physicists in Medicine.
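A hedged sketch of the SVH computation: per-voxel SD across the nine perturbed dose distributions, summarized as the fraction of a structure's voxels whose SD exceeds each level. Doses and array dimensions are illustrative.

```python
# Hedged sketch: per-voxel SD over nine perturbed dose distributions and a simple
# SD-volume histogram (SVH) for one structure. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(9)
n_scenarios, n_voxels = 9, 10000
doses = rng.normal(60.0, 2.0, size=(n_scenarios, n_voxels))   # illustrative doses (Gy)

sd = doses.std(axis=0)                            # per-voxel SD across scenarios
levels = np.linspace(0.0, sd.max(), 6)
svh = [(sd >= level).mean() for level in levels]  # volume fraction with SD >= level

for level, frac in zip(levels, svh):
    print(f"SD >= {level:.2f} Gy in {frac:.0%} of voxels")
```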
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
Gradually truncated log-normal in USA publicly traded firm size distribution
NASA Astrophysics Data System (ADS)
Gupta, Hari M.; Campanha, José R.; de Aguiar, Daniela R.; Queiroz, Gabriel A.; Raheja, Charu G.
2007-03-01
We study the statistical distribution of firm size for USA and Brazilian publicly traded firms through the Zipf plot technique. Sale size is used to measure firm size. The Brazilian firm size distribution is given by a log-normal distribution without any adjustable parameter. However, we also need to consider different parameters of the log-normal distribution for the largest firms in the distribution, which are mostly foreign firms. The log-normal distribution has to be gradually truncated after a certain critical value for USA firms. Therefore, the original hypothesis of proportional effect proposed by Gibrat is valid with some modification for very large firms. We also consider the possible mechanisms behind this distribution.
Bandwagon effects and error bars in particle physics
NASA Astrophysics Data System (ADS)
Jeng, Monwhea
2007-02-01
We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.
A better norm-referenced grading using the standard deviation criterion.
Chan, Wing-shing
2014-01-01
The commonly used norm-referenced grading assigns grades to rank-ordered students in fixed percentiles. It has the disadvantage of ignoring the actual distance of scores among students. A simple norm-referenced grading via standard deviation is suggested for routine educational grading. The number of standard deviations of a student's score from the class mean was used as the common yardstick to measure achievement level. The cumulative probability of a normal distribution was referenced to help decide the number of students included within a grade. Results from the foremost 12 students in a medical examination were used to illustrate this grading method. Grading by standard deviation seemed to produce better cutoffs in allocating an appropriate grade to students according to their differential achievements, and had less chance of creating arbitrary cutoffs between two similarly scored students, than grading by fixed percentile. Grading by standard deviation has more advantages and is more flexible than grading by fixed percentile for norm-referenced grading.
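A minimal sketch of this kind of SD-based grading, assuming hypothetical exam scores and illustrative z-score cutoffs (not the cutoffs used in the paper):

```python
import numpy as np
from scipy.stats import norm

def grade_by_sd(scores, cutoffs_in_sd=(1.0, 0.0, -1.0), labels=("A", "B", "C", "D")):
    """Assign letter grades from the number of SDs each score lies from the class mean."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)
    grades = []
    for zi in z:
        for cut, label in zip(cutoffs_in_sd, labels):
            if zi >= cut:                  # first cutoff met decides the grade
                grades.append(label)
                break
        else:
            grades.append(labels[-1])      # below all cutoffs
    return z, grades

scores = [92, 88, 85, 84, 83, 80, 78, 77, 75, 70, 65, 60]   # hypothetical exam scores
z, grades = grade_by_sd(scores)
for s, zi, g in zip(scores, z, grades):
    # norm.cdf(zi) gives the expected proportion of a normal class scoring below this student
    print(f"score {s:3d}  z = {zi:+.2f}  percentile ~ {norm.cdf(zi):.0%}  grade {g}")
```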
Creating a Bimodal Drop-Size Distribution in the NASA Glenn Icing Research Tunnel
NASA Technical Reports Server (NTRS)
King-Steen, Laura E.; Ide, Robert F.
2017-01-01
The Icing Research Tunnel at NASA Glenn has demonstrated that it can create a drop-size distribution that matches the FAA Part 25 Appendix O FZDZ, MVD <40 microns, normalized cumulative volume within 10%. This is done by simultaneously spraying the Standard and Mod1 nozzles at the same nozzle air pressure and different nozzle water pressures. It was also found through these tests that the distributions measured when the two nozzle sets are sprayed simultaneously closely matched what was found by combining the two individual distributions analytically. Additionally, distributions measured when spraying all spraybars were compared with those measured when spraying only every other spraybar, and were found to match within 4%. The cloud liquid water content uniformity for this condition has been found to be excellent. It should be noted, however, that the liquid water content for this condition in the IRT is much higher than the requirement specified in Part 25 Appendix O.
Contact angle distribution of particles at fluid interfaces.
Snoeyink, Craig; Barman, Sourav; Christopher, Gordon F
2015-01-27
Recent measurements have implied a distribution of interfacially adsorbed particles' contact angles; however, it has been impossible to measure statistically significant numbers for these contact angles noninvasively in situ. Using a new microscopy method that allows nanometer-scale resolution of particles' 3D positions on an interface, we have measured the contact angles for thousands of latex particles at an oil/water interface. Furthermore, these measurements are dynamic, allowing the observation of the particle contact angle with high temporal resolution, resulting in hundreds of thousands of individual contact angle measurements. The contact angle has been found to fit a normal distribution with a standard deviation of 19.3°, which is much larger than previously recorded. Furthermore, the technique used allows the effect of measurement error, constrained interfacial diffusion, and particle property variation on the contact angle distribution to be individually evaluated. Because of the ability to measure the contact angle noninvasively, the results provide previously unobtainable, unique data on the dynamics and distribution of the adsorbed particles' contact angle.
Multiple imputation in the presence of non-normal data.
Lee, Katherine J; Carlin, John B
2017-02-20
Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
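A minimal sketch of PMM-style imputation for a single skewed variable with one fully observed covariate; it omits the proper parameter draws and repeated imputations of a full MI procedure and is not the authors' implementation.

```python
import numpy as np

def pmm_impute(x, y, n_donors=5, rng=None):
    """Impute missing y by predictive mean matching on a linear regression of y on x."""
    rng = np.random.default_rng(rng)
    obs = ~np.isnan(y)
    b1, b0 = np.polyfit(x[obs], y[obs], 1)     # fit y = b0 + b1*x on observed cases
    yhat = b0 + b1 * x
    y_imp = y.copy()
    for i in np.where(~obs)[0]:
        # observed cases whose predicted means are closest to this case's prediction
        donors = np.argsort(np.abs(yhat[obs] - yhat[i]))[:n_donors]
        y_imp[i] = rng.choice(y[obs][donors])  # borrow an observed value from a donor
    return y_imp

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.5, size=n))   # skewed (log-normal) outcome
y[rng.random(n) < 0.5] = np.nan                             # 50% missing completely at random
y_completed = pmm_impute(x, y)
print("mean of completed y:", round(y_completed.mean(), 3))
```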
Sone, Teruki; Yoshikawa, Kunihiko; Mimura, Hiroaki; Hayashida, Akihiro; Wada, Nozomi; Obase, Kikuko; Imai, Koichiro; Saito, Ken; Maehama, Tomoko; Fukunaga, Masao; Yoshida, Kiyoshi
2010-01-01
Purpose: In cardiac 2-[F-18]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) examination, interpretation of myocardial viability in the low uptake region (LUR) has been difficult without additional perfusion imaging. We evaluated distribution patterns of FDG at the border zone of the LUR in cardiac FDG-PET and established a novel parameter for diagnosing myocardial viability and for discriminating the LUR of normal variants. Materials and Methods: Cardiac FDG-PET was performed in patients with a myocardial ischemic event (n = 22) and in healthy volunteers (n = 22). Whether the myocardium was not a viable myocardium (n-VM) or an ischemic but viable myocardium (isch-VM) was defined by an echocardiogram under a low dose of dobutamine infusion as the gold standard. FDG images were displayed as gray-scaled bull's eye mappings. FDG-plot profiles for the LUR (= true ischemic region in the patients or normal variant region in healthy subjects) were calculated. Maximal values of FDG change at the LUR border zone (a steepness index; Smax scale/pixel) were compared among n-VM, isch-VM, and normal myocardium. Results: Smax was significantly higher for n-VM than for isch-VM or normal myocardium (ANOVA). A cut-off value of 0.30 in Smax demonstrated 100% sensitivity and 83% specificity for diagnosing n-VM and isch-VM. Smax less than 0.23 discriminated the LUR in normal myocardium from the LUR in patients with both n-VM and isch-VM with a 94% sensitivity and a 93% specificity. Conclusion: Smax of the LUR in cardiac FDG-PET is a simple and useful parameter to diagnose n-VM and isch-VM, as well as to discriminate the LUR of normal variants. PMID:20191007
NASA Astrophysics Data System (ADS)
Cox, M.; Shirono, K.
2017-10-01
A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM’s Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.
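The paper's closed-form multiplier is not reproduced here; the sketch below shows, under assumed priors (flat on the mean, uniform on the standard deviation between the stated bounds), how such a factor could be computed numerically. The observations and sigma bounds are hypothetical.

```python
import numpy as np

def type_a_multiplier(x, sigma_low, sigma_high, n_grid=400):
    """Numerically estimate the factor k such that u = k * s/sqrt(n), using a normal
    likelihood, a flat prior on the mean and a uniform prior on sigma in [sigma_low, sigma_high].
    Illustrative numerical sketch, not the closed-form GUM expression from the paper."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = x.std(ddof=1)
    mu_grid = np.linspace(x.mean() - 6 * s, x.mean() + 6 * s, n_grid)
    sig_grid = np.linspace(sigma_low, sigma_high, n_grid)
    M, S = np.meshgrid(mu_grid, sig_grid, indexing="ij")
    # Log joint posterior (up to a constant)
    log_post = -n * np.log(S) - ((x[:, None, None] - M) ** 2).sum(axis=0) / (2 * S ** 2)
    post = np.exp(log_post - log_post.max())
    post_mu = np.trapz(post, sig_grid, axis=1)           # marginal posterior of mu (unnormalized)
    post_mu /= np.trapz(post_mu, mu_grid)
    mean_mu = np.trapz(mu_grid * post_mu, mu_grid)
    var_mu = np.trapz((mu_grid - mean_mu) ** 2 * post_mu, mu_grid)
    u = np.sqrt(var_mu)                                   # standard uncertainty of the measurand
    return u / (s / np.sqrt(n))

obs = [10.12, 10.15, 10.11]                               # hypothetical repeated indications
print("multiplier k =", round(type_a_multiplier(obs, sigma_low=0.005, sigma_high=0.05), 3))
```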
March, Rod S.
2003-01-01
The 1996 measured winter snow, maximum winter snow, net, and annual balances in the Gulkana Glacier Basin were evaluated on the basis of meteorological, hydrological, and glaciological data. Averaged over the glacier, the measured winter snow balance was 0.87 meter on April 18, 1996, 1.1 standard deviation below the long-term average; the maximum winter snow balance, 1.06 meters, was reached on May 28, 1996; and the net balance (from August 30, 1995, to August 24, 1996) was -0.53 meter, 0.53 standard deviation below the long-term average. The annual balance (October 1, 1995, to September 30, 1996) was -0.37 meter. Area-averaged balances were reported using both the 1967 and 1993 area altitude distributions (the numbers previously given in this abstract use the 1993 area altitude distribution). Net balance was about 25 percent less negative using the 1993 area altitude distribution than the 1967 distribution. Annual average air temperature was 0.9 degree Celsius warmer than that recorded with the analog sensor used since 1966. Total precipitation catch for the year was 0.78 meter, 0.8 standard deviations below normal. The annual average wind speed was 3.5 meters per second in the first year of measuring wind speed. Annual runoff averaged 1.50 meters over the basin, 1.0 standard deviation below the long-term average. Glacier-surface altitude and ice-motion changes measured at three index sites document seasonal ice-speed and glacier-thickness changes. Both showed a continuation of a slowing and thinning trend present in the 1990s. The glacier terminus and lower ablation area were defined for 1996 with a handheld Global Positioning System survey of 126 locations spread out over about 4 kilometers on the lower glacier margin. From 1949 to 1996, the terminus retreated about 1,650 meters for an average retreat rate of 35 meters per year.
NASA Astrophysics Data System (ADS)
Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.
2005-03-01
Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows us to obtain posterior distributions of other derived parameters like occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
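As a simplified, non-Bayesian analogue of the Poisson-GPD hazard model described above (not the BGPE analysis itself), the following sketch fits a GPD to excesses over a hypothetical threshold by maximum likelihood and converts the Poisson exceedance rate into return periods; the data are synthetic.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
years = 30.0
wave_heights = 3.0 + rng.pareto(3.5, size=600)         # hypothetical storm wave heights (m)
threshold = 4.0                                         # empirically chosen threshold (assumed)

excesses = wave_heights[wave_heights > threshold] - threshold
rate = excesses.size / years                            # Poisson rate of exceedances per year
shape, _, scale = genpareto.fit(excesses, floc=0.0)     # ML fit of the GPD to the excesses

def return_period(h):
    """Mean return period (years) of wave height h under the Poisson-GPD model."""
    p_exceed = genpareto.sf(h - threshold, shape, loc=0.0, scale=scale)
    return 1.0 / (rate * p_exceed)

for h in (5.0, 7.0, 10.0):
    print(f"wave height {h:.1f} m  ->  return period ~ {return_period(h):.1f} years")
```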
Marrero, Julieta; Rebagliati, Raúl Jiménez; Gómez, Darío; Smichowski, Patricia
2005-12-15
A study was conducted to evaluate the homogeneity of the distribution of metals and metalloids deposited on glass fiber filters collected using a high-volume sampler equipped with a PM-10 sampling head. The airborne particulate matter (APM)-loaded glass fiber filters (with an active surface of about 500 cm2) were weighed and then each filter was cut into five small discs of 6.5 cm diameter. Each disc was mineralized by acid-assisted microwave (MW) digestion using a mixture of nitric, perchloric and hydrofluoric acids. Analysis was performed by axial view inductively coupled plasma optical emission spectrometry (ICP OES) and the elements considered were: Al, As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Sb, Ti and V. The validation of the procedure was performed by the analysis of the standard reference material NIST 1648, urban particulate matter. As a way of comparing the possible variability in trace elements distribution in a particular filter, the mean concentration for each element over the five positions (discs) was calculated and each element concentration was normalized to this mean value. Scatter plots of the normalized concentrations were examined for all elements and all sub-samples. We considered that an element was homogeneously distributed if its normalized concentrations in the 45 sub-samples were within +/-15% of the mean value, i.e., ranging between 0.85 and 1.15. The study demonstrated that the 12 elements tested showed different distribution patterns. Aluminium, Cu and V showed the most homogeneous pattern while Cd and Ni exhibited the largest departures from the mean value in 13 out of the 45 discs analyzed. No preferential deposition was noticed in any sub-sample.
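A minimal sketch of the normalization and ±15% homogeneity check described above, using hypothetical concentrations in place of the measured ICP OES data:

```python
import numpy as np

def homogeneity_check(conc, tolerance=0.15):
    """conc: array (n_filters, n_discs) of element concentrations.
    Normalizes each disc to the mean of its filter and flags discs outside ±15%."""
    conc = np.asarray(conc, dtype=float)
    normalized = conc / conc.mean(axis=1, keepdims=True)
    outside = np.abs(normalized - 1.0) > tolerance
    return normalized, outside

# Hypothetical concentrations (ng per disc) for one element on 9 filters x 5 discs
rng = np.random.default_rng(3)
cu = rng.normal(loc=120.0, scale=8.0, size=(9, 5))
norm_cu, flags = homogeneity_check(cu)
print("discs outside 0.85-1.15 of the filter mean:", int(flags.sum()), "of", flags.size)
```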
NASA Astrophysics Data System (ADS)
Ye, L.; Xu, X.; Luan, D.; Jiang, W.; Kang, Z.
2017-07-01
Crater-detection approaches can be divided into four categories: manual recognition, shape-profile fitting algorithms, machine-learning methods and geological information-based analysis using terrain and spectral data. The mainstream approach is shape-profile fitting. Many scholars throughout the world use illumination gradient information to fit standard circles by the least-squares method. Although this method has achieved good results, it is difficult to identify craters with poor "visibility" or complex structure and composition. Moreover, the accuracy of recognition is difficult to improve because of multiple solutions and noise interference. To address this problem, we propose a method for the automatic extraction of impact craters based on spectral characteristics of moon rocks and minerals: 1) Under the condition of sunlight, impact craters are extracted from MI by condition matching and the positions as well as diameters of the craters are obtained. 2) Regolith is ejected when the lunar surface is impacted, and one of the elements of lunar regolith is iron. Therefore, incorrectly extracted impact craters can be removed by judging whether the crater contains a "non-iron" element. 3) Craters that are extracted correctly are divided into two types, simple and complex, according to their diameters. 4) The titanium information is obtained and the titanium distribution of the complex craters is matched with a normal distribution curve; the goodness of fit is then calculated and a threshold is set. The complex craters can thus be divided into two types: those whose titanium distribution follows a normal curve and those whose distribution does not. We validated our proposed method with MI acquired by SELENE. Experimental results demonstrate that the proposed method has good performance in the test area.
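Step 4 can be illustrated with a small sketch that fits a normal curve to a titanium histogram and computes an R^2 goodness of fit against a threshold; the data and the threshold value are hypothetical, not those used in the paper.

```python
import numpy as np
from scipy.stats import norm

def normal_fit_goodness(values, n_bins=20):
    """R^2 between the empirical histogram of 'values' and the best-fit normal density."""
    values = np.asarray(values, dtype=float)
    density, edges = np.histogram(values, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    fitted = norm.pdf(centers, loc=values.mean(), scale=values.std(ddof=1))
    ss_res = np.sum((density - fitted) ** 2)
    ss_tot = np.sum((density - density.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(4)
ti_crater_a = rng.normal(4.2, 0.6, size=800)     # hypothetical titanium abundance values
ti_crater_b = rng.gamma(2.0, 1.5, size=800)

threshold = 0.8                                   # illustrative goodness-of-fit cut-off
for name, ti in [("crater A", ti_crater_a), ("crater B", ti_crater_b)]:
    r2 = normal_fit_goodness(ti)
    label = "normal-curve type" if r2 > threshold else "non-normal-curve type"
    print(f"{name}: R^2 = {r2:.2f} -> {label}")
```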
NASA Astrophysics Data System (ADS)
Lee, Juhun; Nishikawa, Robert M.; Rohde, Gustavo K.
2018-02-01
We propose using novel imaging biomarkers for detecting mammographically-occult (MO) cancer in women with dense breast tissue. MO cancer indicates visually occluded, or very subtle, cancer that radiologists fail to recognize as a sign of cancer. We used the Radon Cumulative Distribution Transform (RCDT) as a novel image transformation to project the difference between left and right mammograms into a space, increasing the detectability of occult cancer. We used a dataset of 617 screening full-field digital mammograms (FFDMs) of 238 women with dense breast tissue. Among the 238 women, 173 were normal with 2-4 consecutive screening mammograms, 552 normal mammograms in total, and the remaining 65 women had an MO cancer with a negative screening mammogram. We used Principal Component Analysis (PCA) to find representative patterns in normal mammograms in the RCDT space. We projected all mammograms to the space constructed by the first 30 eigenvectors of the RCDT of normal cases. Under 10-fold cross-validation, we conducted quantitative feature analysis to classify normal mammograms and mammograms with MO cancer. We used receiver operating characteristic (ROC) analysis to evaluate the classifier's output using the area under the ROC curve (AUC) as the figure of merit. Four eigenvectors were selected via a feature selection method. The mean and standard deviation of the AUC of the trained classifier on the test set were 0.74 and 0.08, respectively. In conclusion, we utilized imaging biomarkers to highlight differences between left and right mammograms to detect MO cancer using a novel image transformation.
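A schematic version of the classification pipeline (eigenspace built from normal cases, projection of all cases, cross-validated AUC), with random arrays standing in for the RCDT-transformed difference images; the RCDT itself is not implemented here, and the classifier choice is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

# Stand-in data: rows are flattened left-right difference images in the RCDT domain.
rng = np.random.default_rng(5)
X_normal = rng.normal(0.0, 1.0, size=(552, 256))
X_cancer = rng.normal(0.3, 1.0, size=(65, 256))          # hypothetical subtle shift
X = np.vstack([X_normal, X_cancer])
y = np.concatenate([np.zeros(552), np.ones(65)])

# Build the eigenspace from normal cases only, then project all cases onto it
pca = PCA(n_components=30).fit(X_normal)
scores = pca.transform(X)

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
probs = cross_val_predict(clf, scores, y, cv=cv, method="predict_proba")[:, 1]
print("10-fold cross-validated AUC:", round(roc_auc_score(y, probs), 2))
```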
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
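One standard instance of such a transformation is whitening via the Cholesky factor of the covariance matrix; the short sketch below is an illustration of the idea, not the article's derivation.

```python
import numpy as np

# If X ~ N(mu, Sigma) and Sigma = L L^T (Cholesky), then Z = L^{-1}(X - mu) has
# independent N(0, 1) components; moments of X can then be computed from moments of Z.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
L = np.linalg.cholesky(Sigma)

rng = np.random.default_rng(6)
X = rng.multivariate_normal(mu, Sigma, size=100_000)
Z = np.linalg.solve(L, (X - mu).T).T                     # whitened sample

print("sample covariance of Z (should be close to the identity):")
print(np.round(np.cov(Z, rowvar=False), 2))
```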
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
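A small simulation illustrating the claim: sum many positive, skewed summands and compare how well a fitted log-normal and a fitted normal describe the sums (synthetic data, not the paper's analytical example).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_summands, n_samples = 50, 20_000

# Sum of positive, skewed summands (log-normal summands used as an example)
summands = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_summands))
sums = summands.sum(axis=1)

# Compare goodness of fit of a log-normal and a normal model via the KS statistic
shape, loc, scale = stats.lognorm.fit(sums, floc=0.0)
ks_lognorm = stats.kstest(sums, "lognorm", args=(shape, loc, scale)).statistic
ks_normal = stats.kstest(sums, "norm", args=(sums.mean(), sums.std())).statistic
print(f"KS distance: log-normal fit {ks_lognorm:.3f}  vs  normal fit {ks_normal:.3f}")
```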
Liu, Xiaohang; Zhou, Liangping; Peng, Weijun; Wang, He; Zhang, Yong
2015-10-01
To compare stretched-exponential and monoexponential model diffusion-weighted imaging (DWI) in prostate cancer and normal tissues. Twenty-seven patients with prostate cancer underwent a DWI exam using b-values of 0, 500, 1000, and 2000 s/mm^2. The distributed diffusion coefficients (DDC) and α values of prostate cancer and normal tissues were obtained with the stretched-exponential model, and apparent diffusion coefficient (ADC) values with the monoexponential model. The ADC, DDC (both in 10^-3 mm^2/s), and α values (range, 0-1) were compared among different prostate tissues. The ADC and DDC were also compared and correlated in each tissue, and the standardized differences between DDC and ADC were compared among different tissues. Data were obtained for 31 cancers, 36 normal peripheral zone (PZ) and 26 normal central gland (CG) tissues. The ADC (0.71 ± 0.12), DDC (0.60 ± 0.18), and α value (0.64 ± 0.05) of tumor were all significantly lower than those of the normal PZ (1.41 ± 0.22, 1.47 ± 0.20, and 0.85 ± 0.09) and CG (1.25 ± 0.14, 1.32 ± 0.13, and 0.82 ± 0.06) (all P < 0.05). ADC was significantly higher than DDC in cancer, but lower than DDC in the PZ and CG (all P < 0.05). The ADC and DDC were strongly correlated (R^2 = 0.99, 0.98, 0.99, respectively, all P < 0.05) in all tissues, and the standardized difference between ADC and DDC in cancer was slightly but significantly higher than that in normal tissue. The stretched-exponential model DWI provides more parameters for distinguishing prostate cancer and normal tissue and reveals slight differences between DDC and ADC values. © 2015 Wiley Periodicals, Inc.
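A minimal sketch of fitting the monoexponential and stretched-exponential models to a signal-versus-b curve; the signal values are simulated, not patient data, and the starting values and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0.0, 500.0, 1000.0, 2000.0])               # s/mm^2, as in the protocol above

def mono(b, s0, adc):
    return s0 * np.exp(-b * adc)

def stretched(b, s0, ddc, alpha):
    return s0 * np.exp(-(b * ddc) ** alpha)

# Hypothetical tumour-like signal with DDC ~ 0.6e-3 mm^2/s and alpha ~ 0.64
rng = np.random.default_rng(8)
signal = stretched(b, 1000.0, 0.6e-3, 0.64) * (1 + rng.normal(0, 0.01, b.size))

(s0_m, adc), _ = curve_fit(mono, b, signal, p0=(1000.0, 1e-3))
(s0_s, ddc, alpha), _ = curve_fit(stretched, b, signal, p0=(1000.0, 1e-3, 0.8),
                                  bounds=([0, 1e-5, 0.1], [np.inf, 1e-2, 1.0]))
print(f"ADC = {adc * 1e3:.2f}, DDC = {ddc * 1e3:.2f} (10^-3 mm^2/s), alpha = {alpha:.2f}")
```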
NASA Astrophysics Data System (ADS)
Jafari, Mehrnoosh; Minaei, Saeid; Safaie, Naser; Torkamani-Azar, Farah
2016-05-01
Spatial and temporal changes in surface temperature of infected and non-infected rose plant (Rosa hybrida cv. 'Angelina') leaves were visualized using digital infrared thermography. Infected areas exhibited a presymptomatic decrease in leaf temperature up to 2.3 °C. In this study, two experiments were conducted: one in the greenhouse (semi-controlled ambient conditions) and the other, in a growth chamber (controlled ambient conditions). The effect of drought stress and darkness on the thermal images was also studied in this research. It was found that thermal histograms of the infected leaves closely follow a standard normal distribution. They have a skewness near zero, kurtosis under 3, standard deviation larger than 0.6, and a Maximum Temperature Difference (MTD) more than 4. For each thermal histogram, central tendency, variability, and parameters of the best fitted Standard Normal and Laplace distributions were estimated. To classify healthy and infected leaves, feature selection was conducted and the best extracted thermal features with the largest linguistic hedge values were chosen. Among those features independent of absolute temperature measurement, MTD, SD, skewness, R2l, kurtosis and bn were selected. Then, a neuro-fuzzy classifier was trained to recognize the healthy leaves from the infected ones. The k-means clustering method was utilized to obtain the initial parameters and the fuzzy "if-then" rules. Best estimation rates of 92.55% and 92.3% were achieved in training and testing the classifier with 8 clusters. Results showed that drought stress had an adverse effect on the classification of healthy leaves. More healthy leaves under the drought stress condition were classified as infected, causing PPV and specificity index values to decrease accordingly. Image acquisition in the dark had no significant effect on the classification performance.
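The histogram features mentioned above (MTD, SD, skewness, kurtosis) can be computed as in the following sketch, which uses simulated pixel temperatures rather than real thermograms; the classifier itself is not reproduced.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def thermal_features(leaf_temperatures):
    """Histogram-based features of a leaf's pixel temperatures (degrees C)."""
    t = np.asarray(leaf_temperatures, dtype=float).ravel()
    return {
        "MTD": t.max() - t.min(),                  # maximum temperature difference
        "SD": t.std(ddof=1),
        "skewness": skew(t),
        "kurtosis": kurtosis(t, fisher=False),     # non-excess (Pearson) kurtosis
    }

rng = np.random.default_rng(9)
healthy = rng.normal(24.0, 0.4, size=(60, 60))            # toy thermograms
infected = np.concatenate([rng.normal(24.0, 0.7, size=2400),
                           rng.normal(21.8, 0.7, size=1200)]).reshape(60, 60)

for name, leaf in [("healthy", healthy), ("infected", infected)]:
    f = thermal_features(leaf)
    print(name, {k: round(v, 2) for k, v in f.items()})
```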
Numerical Aspects of Eigenvalue and Eigenfunction Computations for Chaotic Quantum Systems
NASA Astrophysics Data System (ADS)
Bäcker, A.
Summary: We give an introduction to some of the numerical aspects in quantum chaos. The classical dynamics of two-dimensional area-preserving maps on the torus is illustrated using the standard map and a perturbed cat map. The quantization of area-preserving maps given by their generating function is discussed and for the computation of the eigenvalues a computer program in Python is presented. We illustrate the eigenvalue distribution for two types of perturbed cat maps, one leading to COE and the other to CUE statistics. For the eigenfunctions of quantum maps we study the distribution of the eigenvectors and compare them with the corresponding random matrix distributions. The Husimi representation allows for a direct comparison of the localization of the eigenstates in phase space with the corresponding classical structures. Examples for a perturbed cat map and the standard map with different parameters are shown. Billiard systems and the corresponding quantum billiards are another important class of systems (which are also relevant to applications, for example in mesoscopic physics). We provide a detailed exposition of the boundary integral method, which is one important method to determine the eigenvalues and eigenfunctions of the Helmholtz equation. We discuss several methods to determine the eigenvalues from the Fredholm equation and illustrate them for the stadium billiard. The occurrence of spurious solutions is discussed in detail and illustrated for the circular billiard, the stadium billiard, and the annular sector billiard. We emphasize the role of the normal derivative function to compute the normalization of eigenfunctions, momentum representations or autocorrelation functions in a very efficient and direct way. Some examples for these quantities are given and discussed.
On the Use of Rank Tests and Estimates in the Linear Model.
1982-06-01
Under assumption A5, McKean and Hettmansperger (1976) show that τ̂ ≈ (W(N-c) - W(c+1)) / (2 z_{α/2})  (14), where 2 z_{α/2} is the (1-α) interpercentile range of the standard ... (r(.75n) - r(.25n))  (13). The window width h incorporates a resistant estimate of scale, the interquartile range of the residuals, and a normalizing ... An alternative estimate of τ is available with the additional assumption of symmetry of the error distribution. ASSUMPTION A5: Suppose the underlying error ...
On modeling pressure diffusion in non-homogeneous shear flows
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Rogers, M. M.; Durbin, P.; Lele, S. K.
1996-01-01
New models are proposed for the 'slow' and 'rapid' parts of the pressure diffusive transport based on the examination of DNS databases for plane mixing layers and wakes. The model for the 'slow' part is non-local, but requires the distribution of the triple-velocity correlation as a local source. The latter can be computed accurately for the normal component from standard gradient diffusion models, but such models are inadequate for the cross component. More work is required to remedy this situation.
NASA Technical Reports Server (NTRS)
1992-01-01
The NASA Equipment Management Manual (NHB 4200.1) is issued pursuant to Section 203(c)(1) of the National Aeronautics and Space Act of 1958, as amended (42 USC 2473), and sets forth policy, uniform performance standards, and procedural guidance to NASA personnel for the acquisition, management, and use of NASA-owned equipment. This revision is effective upon receipt. This is a controlled manual, issued in loose-leaf form, and revised through page changes. Additional copies for internal use may be obtained through normal distribution.
Becker, J Sabine; Matusch, Andreas; Palm, Christoph; Salber, Dagmar; Morton, Kathryn A; Becker, J Susanne
2010-02-01
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been developed and established as an emerging technique in the generation of quantitative images of metal distributions in thin tissue sections of brain samples (such as human, rat and mouse brain), with applications in research related to neurodegenerative disorders. A new analytical protocol is described which includes sample preparation by cryo-cutting of thin tissue sections and matrix-matched laboratory standards, mass spectrometric measurements, data acquisition, and quantitative analysis. Specific examples of the bioimaging of metal distributions in normal rodent brains are provided. Differences from normal were assessed in a Parkinson's disease model and a stroke brain model. Furthermore, changes during normal aging were studied. Powerful analytical techniques are also required for the determination and characterization of metal-containing proteins within a large pool of proteins, e.g., after denaturing or non-denaturing electrophoretic separation of proteins in one-dimensional and two-dimensional gels. LA-ICP-MS can be employed to detect metalloproteins in protein bands or spots separated after gel electrophoresis. MALDI-MS can then be used to identify specific metal-containing proteins in these bands or spots. The combination of these techniques is described in the second section.
Escalante, Agustín; Haas, Roy W; del Rincón, Inmaculada
2004-01-01
Outcome assessment in patients with rheumatoid arthritis (RA) includes measurement of physical function. We derived a scale to quantify global physical function in RA, using three performance-based rheumatology function tests (RFTs). We measured grip strength, walking velocity, and shirt button speed in consecutive RA patients attending scheduled appointments at six rheumatology clinics, repeating these measurements after a median interval of 1 year. We extracted the underlying latent variable using principal component factor analysis. We used the Bayesian information criterion to assess the global physical function scale's cross-sectional fit to criterion standards. The criteria were joint tenderness, swelling, and deformity, pain, physical disability, current work status, and vital status at 6 years after study enrolment. We computed Guyatt's responsiveness statistic for improvement according to the American College of Rheumatology (ACR) definition. Baseline functional performance data were available for 777 patients, and follow-up data were available for 681. Mean ± standard deviation for each RFT at baseline were: grip strength, 14 ± 10 kg; walking velocity, 194 ± 82 ft/min; and shirt button speed, 7.1 ± 3.8 buttons/min. Grip strength and walking velocity departed significantly from normality. The three RFTs loaded strongly on a single factor that explained ≥70% of their combined variance. We rescaled the factor to vary from 0 to 100. Its mean ± standard deviation was 41 ± 20, with a normal distribution. The new global scale had a stronger fit than the primary RFT to most of the criterion standards. It correlated more strongly with physical disability at follow-up and was more responsive to improvement defined according to the ACR20 and ACR50 definitions. We conclude that a performance-based physical function scale extracted from three RFTs has acceptable distributional and measurement properties and is responsive to clinically meaningful change. It provides a parsimonious scale to measure global physical function in RA. PMID:15225367
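A schematic version of the factor extraction (first principal component of the three standardized RFTs, rescaled to 0-100), using simulated measurements rather than the study data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)
n = 777
latent = rng.normal(size=n)                                # unobserved 'global function'
grip = 14 + 10 * (0.85 * latent + 0.5 * rng.normal(size=n))       # kg
walk = 194 + 82 * (0.85 * latent + 0.5 * rng.normal(size=n))      # ft/min
button = 7.1 + 3.8 * (0.85 * latent + 0.5 * rng.normal(size=n))   # buttons/min

X = np.column_stack([grip, walk, button])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)                  # standardize each RFT
pca = PCA(n_components=1)
factor = pca.fit_transform(Xz).ravel()
print("variance explained by the first factor:", round(pca.explained_variance_ratio_[0], 2))

# Rescale the factor to a 0-100 global physical function scale
scale = 100 * (factor - factor.min()) / (factor.max() - factor.min())
print("scale mean +/- SD:", round(scale.mean(), 1), "+/-", round(scale.std(ddof=1), 1))
```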
Lainhart, Janet E; Bigler, Erin D; Bocian, Maureen; Coon, Hilary; Dinh, Elena; Dawson, Geraldine; Deutsch, Curtis K; Dunn, Michelle; Estes, Annette; Tager-Flusberg, Helen; Folstein, Susan; Hepburn, Susan; Hyman, Susan; McMahon, William; Minshew, Nancy; Munson, Jeff; Osann, Kathy; Ozonoff, Sally; Rodier, Patricia; Rogers, Sally; Sigman, Marian; Spence, M Anne; Stodgell, Christopher J; Volkmar, Fred
2006-11-01
Data from 10 sites of the NICHD/NIDCD Collaborative Programs of Excellence in Autism were combined to study the distribution of head circumference and relationship to demographic and clinical variables. Three hundred thirty-eight probands with autism-spectrum disorder (ASD) including 208 probands with autism were studied along with 147 parents, 149 siblings, and typically developing controls. ASDs were diagnosed, and head circumference and clinical variables measured in a standardized manner across all sites. All subjects with autism met ADI-R, ADOS-G, DSM-IV, and ICD-10 criteria. The results show the distribution of standardized head circumference in autism is normal in shape, and the mean, variance, and rate of macrocephaly but not microcephaly are increased. Head circumference tends to be large relative to height in autism. No site, gender, age, SES, verbal, or non-verbal IQ effects were present in the autism sample. In addition to autism itself, standardized height and average parental head circumference were the most important factors predicting head circumference in individuals with autism. Mean standardized head circumference and rates of macrocephaly were similar in probands with autism and their parents. Increased head circumference was associated with a higher (more severe) ADI-R social algorithm score. Macrocephaly is associated with delayed onset of language. Although mean head circumference and rates of macrocephaly are increased in autism, a high degree of variability is present, underscoring the complex clinical heterogeneity of the disorder. The wide distribution of head circumference in autism has major implications for genetic, neuroimaging, and other neurobiological research.
Evaluation of Kurtosis into the product of two normally distributed variables
NASA Astrophysics Data System (ADS)
Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio
2016-06-01
Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis for the product of two normally distributed variables. The product of two normal variables is a very common problem in several areas of study, such as physics, economics, and psychology. Normal variables have a constant value of kurtosis (κ = 3), independently of the values of the two parameters: mean and variance. In fact, the excess kurtosis is defined as κ - 3, and the excess kurtosis of the normal distribution is zero. The kurtosis of the product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them, and the range for kurtosis is [0, 6] for independent variables and [0, 12] when correlation between them is allowed.
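A quick simulation of these limits for zero-mean, unit-variance variables (the stated ranges also depend on the means, which are not varied here):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(11)
n = 2_000_000
x = rng.normal(size=n)
z = rng.normal(size=n)

for rho in (0.0, 0.5, 1.0):
    y = rho * x + np.sqrt(1.0 - rho ** 2) * z       # standard normal correlated with x
    prod = x * y
    # scipy's default is excess kurtosis (0 for a normal distribution)
    print(f"rho = {rho:.1f}: excess kurtosis of x*y ~ {kurtosis(prod):.2f}")
```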
Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows
NASA Technical Reports Server (NTRS)
McKenzie, D.; Savage, S.
2011-01-01
The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with either a power-law distribution or a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
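The kind of distributional comparison described above can be run as in the sketch below, which fits several candidate distributions by maximum likelihood and compares KS distances; the 120 values are synthetic stand-ins, not the actual SAD measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
areas = rng.lognormal(mean=1.0, sigma=0.8, size=120)      # stand-in for SAD cross-sectional areas

candidates = {
    "log-normal": stats.lognorm,
    "exponential": stats.expon,
    "normal": stats.norm,
    "power-law (Pareto)": stats.pareto,
}
for name, dist in candidates.items():
    params = dist.fit(areas)                               # maximum likelihood fit
    ks = stats.kstest(areas, dist.name, args=params)
    print(f"{name:20s} KS = {ks.statistic:.3f}  p = {ks.pvalue:.3f}")
```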
ERIC Educational Resources Information Center
Sass, D. A.; Schmitt, T. A.; Walker, C. M.
2008-01-01
Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…
NASA Astrophysics Data System (ADS)
Wang, W.
2017-12-01
Theory results: The Wang Wanli left-skew L distribution density function is given by the formula below; its support runs from -∞ to +1. Here x indicates the center pressure of a hurricane, xA represents its long-term mean, and [(x-xA)/x] is the standard random variable, with boundary conditions f(+1) = 0 and f(-∞) = 0. The standard variable is negative when x is less than xA, positive when x is greater than xA, and equal to zero when x equals xA; thus the standard variable is -∞ if x is zero and +1 if x is +∞, so the standard random variable falls in the interval (-∞, +1]. Application: in the table, a "-" sign indicates that an individual hurricane's center pressure is less than the long-term average, and a "+" sign indicates that it is greater; the mean xA may also be replaced by another standard or expected value.
Table: multi-level index of hurricane strength (intensity).
Index [(X-XA)/X]%   XA/X         Category  Description               X/XA         Probability   Formula
-∞                  +∞           ...       ...                       → 0          → 0           ...
< -900              > 10.0       < -15     extreme (Ⅵ)               < 0.10
-800, -900          9.0, 10.0    -15       extreme (Ⅵ)               0.11, 0.10
-700, -800          8.0, 9.0     -14       extreme (Ⅴ)               0.13, 0.11
-600, -700          7.0, 8.0     -13       extreme (Ⅳ)               0.14, 0.13
-500, -600          6.0, 7.0     -12       extreme (Ⅲ)               0.17, 0.14   0.05287 %     L(-5.0)-L(-6.0)
-400, -500          5.0, 6.0     -11       extreme (Ⅱ)               0.20, 0.17   0.003 %       L(-4.0)-L(-5.0)
-300, -400          4.0, 5.0     -10       extreme (Ⅰ)               0.25, 0.20   0.132 %       L(-3.0)-L(-4.0)
-267, -300          3.67, 4.00   -9        strongest (Ⅲ)-superior    0.27, 0.25   0.24 %        L(-2.67)-L(-3.00)
-233, -267          3.33, 3.67   -8        strongest (Ⅱ)-medium      0.30, 0.27   0.61 %        L(-2.33)-L(-2.67)
-200, -233          3.00, 3.33   -7        strongest (Ⅰ)-inferior    0.33, 0.30   1.28 %        L(-2.00)-L(-2.33)
-167, -200          2.67, 3.00   -6        strong (Ⅲ)-superior       0.37, 0.33   2.47 %        L(-1.67)-L(-2.00)
-133, -167          2.33, 2.67   -5        strong (Ⅱ)-medium         0.43, 0.37   4.43 %        L(-1.33)-L(-1.67)
-100, -133          2.00, 2.33   -4        strong (Ⅰ)-inferior       0.50, 0.43   6.69 %        L(-1.00)-L(-1.33)
-67, -100           1.67, 2.00   -3        normal (Ⅲ)-superior       0.60, 0.50   9.27 %        L(-0.67)-L(-1.00)
-33, -67            1.33, 1.67   -2        normal (Ⅱ)-medium         0.75, 0.60   11.93 %       L(-0.33)-L(-0.67)
0, -33              1.00, 1.33   -1        normal (Ⅰ)-inferior       1.00, 0.75   12.93 %       L(0.00)-L(-0.33)
33, 0               0.67, 1.00   +1        normal                    1.49, 1.00   34.79 %       L(0.33)-L(0.00)
67, 33              0.33, 0.67   +2        weak                      3.03, 1.49   12.12 %       L(0.67)-L(0.33)
100, 67             0.00, 0.33   +3        weaker                    ∞, 3.03      3.08 %        L(1.00)-L(0.67)
Quantiles for Finite Mixtures of Normal Distributions
ERIC Educational Resources Information Center
Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.
2006-01-01
Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
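Because the mixture CDF is the weighted sum of the component CDFs (whereas the quantile is not a weighted sum of component quantiles), quantiles can be obtained by numerically inverting the mixture CDF, as in this sketch:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def mixture_quantile(p, weights, means, sds):
    """Quantile of a finite mixture of normals, by numerically inverting its CDF."""
    weights = np.asarray(weights, dtype=float)
    cdf = lambda x: np.sum(weights * norm.cdf(x, loc=means, scale=sds))
    lo = min(m - 10 * s for m, s in zip(means, sds))
    hi = max(m + 10 * s for m, s in zip(means, sds))
    return brentq(lambda x: cdf(x) - p, lo, hi)

# Two-component example: the median is not the weighted average of the component medians
w, mu, sd = [0.6, 0.4], [0.0, 3.0], [1.0, 0.5]
print("median of the mixture:", round(mixture_quantile(0.5, w, mu, sd), 3))
print("weighted average of component medians:", 0.6 * 0.0 + 0.4 * 3.0)
```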
NASA Astrophysics Data System (ADS)
Grova, C.; Jannin, P.; Biraben, A.; Buvat, I.; Benali, H.; Bernard, A. M.; Scarabin, J. M.; Gibaud, B.
2003-12-01
Quantitative evaluation of brain MRI/SPECT fusion methods for normal and in particular pathological datasets is difficult, due to the frequent lack of relevant ground truth. We propose a methodology to generate MRI and SPECT datasets dedicated to the evaluation of MRI/SPECT fusion methods and illustrate the method when dealing with ictal SPECT. The method consists in generating normal or pathological SPECT data perfectly aligned with a high-resolution 3D T1-weighted MRI using realistic Monte Carlo simulations that closely reproduce the response of a SPECT imaging system. Anatomical input data for the SPECT simulations are obtained from this 3D T1-weighted MRI, while functional input data result from an inter-individual analysis of anatomically standardized SPECT data. The method makes it possible to control the 'brain perfusion' function by proposing a theoretical model of brain perfusion from measurements performed on real SPECT images. Our method provides an absolute gold standard for assessing MRI/SPECT registration method accuracy since, by construction, the SPECT data are perfectly registered with the MRI data. The proposed methodology has been applied to create a theoretical model of normal brain perfusion and ictal brain perfusion characteristic of mesial temporal lobe epilepsy. To approach realistic and unbiased perfusion models, real SPECT data were corrected for uniform attenuation, scatter and partial volume effect. An anatomic standardization was used to account for anatomic variability between subjects. Realistic simulations of normal and ictal SPECT deduced from these perfusion models are presented. The comparison of real and simulated SPECT images showed relative differences in regional activity concentration of less than 20% in most anatomical structures, for both normal and ictal data, suggesting realistic models of perfusion distributions for evaluation purposes. Inter-hemispheric asymmetry coefficients measured on simulated data were found within the range of asymmetry coefficients measured on corresponding real data. The features of the proposed approach are compared with those of other methods previously described to obtain datasets appropriate for the assessment of fusion methods.
Statistical properties of the normalized ice particle size distribution
NASA Astrophysics Data System (ADS)
Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.
2005-05-01
Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists in scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation implies that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N*0 and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N*0 and Dm have then been evaluated in order to reduce again the number of unknowns. It has been shown that a parameterization of N*0 and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean maximum dimension diameter -T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000) parameterization. These new parameterizations are believed to better represent particle size at global scale, owing to a better representativity of the in situ microphysical database used to derive it. We then evaluated the potential of a direct N*0-Dm relationship. While the model parameterized by temperature produces strong errors on the cloud parameters, the N*0-Dm model parameterized by radar reflectivity produces accurate cloud parameters (less than 3% bias and 16% standard deviation). This result implies that the cloud parameters can be estimated from the estimate of only one parameter of the normalized PSD (N*0 or Dm) and a radar reflectivity measurement.
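A minimal sketch of the normalization, assuming the commonly used moment definitions associated with this formalism (Dm = M4/M3 and N0* = (4^4/6) M3^5/M4^4); the PSD here is a hypothetical exponential spectrum, for which N0* should reduce to the intercept parameter.

```python
import numpy as np

def psd_moment(diameters, concentrations, order):
    """k-th moment of a binned particle size distribution N(D), D in metres."""
    return np.trapz(concentrations * diameters ** order, diameters)

def normalize_psd(diameters, concentrations):
    """Scale a PSD by N0* and Dm (moment definitions assumed as above)."""
    m3 = psd_moment(diameters, concentrations, 3)
    m4 = psd_moment(diameters, concentrations, 4)
    dm = m4 / m3
    n0_star = (4.0 ** 4 / 6.0) * m3 ** 5 / m4 ** 4
    return diameters / dm, concentrations / n0_star, dm, n0_star

# Hypothetical exponential PSD: N(D) = N0 * exp(-lambda * D)
d = np.linspace(1e-5, 1e-2, 500)                           # 10 um to 1 cm
n = 1e7 * np.exp(-2000.0 * d)
x, f, dm, n0s = normalize_psd(d, n)
print(f"Dm = {dm * 1e3:.2f} mm, N0* = {n0s:.3e} m^-4")
```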
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and the variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock returns data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043
Neti, Prasad V.S.V.; Howell, Roger W.
2008-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson – log normal (P – LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P – LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
Statistical Data Editing in Scientific Articles.
Habibzadeh, Farrokh
2017-07-01
Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on statistical analyses of the data. Numbers should be reported with appropriate precision. Standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used for description of normally and non-normally distributed data, respectively. If possible, it is better to report the 95% confidence interval (CI) for statistics, at least for main outcome variables. P values should be presented, and interpreted with caution, if there is a hypothesis. To advance the knowledge and skills of their members, associations of journal editors would do well to develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
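A small helper in the spirit of these recommendations (the normality screen, cutoff and formatting choices are illustrative, not prescriptions from the article):

```python
import numpy as np
from scipy import stats

def describe_for_reporting(x, alpha=0.05):
    """Suggest a descriptive summary in line with common reporting recommendations."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    sem = sd / np.sqrt(n)
    ci = stats.t.interval(1 - alpha, n - 1, loc=mean, scale=sem)   # 95% CI for the mean
    p_normal = stats.shapiro(x).pvalue
    if p_normal > 0.05:                                   # roughly normal: mean (SD)
        summary = f"mean (SD) = {mean:.2f} ({sd:.2f})"
    else:                                                 # clearly non-normal: median (IQR)
        q1, med, q3 = np.percentile(x, [25, 50, 75])
        summary = f"median (IQR) = {med:.2f} ({q1:.2f}-{q3:.2f})"
    return summary, ci

rng = np.random.default_rng(13)
skewed = rng.lognormal(0.0, 0.8, size=80)
summary, ci = describe_for_reporting(skewed)
print(summary, f"; 95% CI for the mean: {ci[0]:.2f} to {ci[1]:.2f}")
```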
A Maximum Likelihood Ensemble Data Assimilation Method Tailored to the Inner Radiation Belt
NASA Astrophysics Data System (ADS)
Guild, T. B.; O'Brien, T. P., III; Mazur, J. E.
2014-12-01
The Earth's radiation belts are composed of energetic protons and electrons whose fluxes span many orders of magnitude, whose distributions are log-normal, and where data-model differences can be large and also log-normal. This physical system thus challenges standard data assimilation methods relying on underlying assumptions of Gaussian distributions of measurements and data-model differences, where innovations to the model are small. We have therefore developed a data assimilation method tailored to these properties of the inner radiation belt, analogous to the ensemble Kalman filter but for the unique cases of non-Gaussian model and measurement errors, and non-linear model and measurement distributions. We apply this method to the inner radiation belt proton populations, using the SIZM inner belt model [Selesnick et al., 2007] and SAMPEX/PET and HEO proton observations to select the most likely ensemble members contributing to the state of the inner belt. We will describe the algorithm, the method of generating ensemble members, our choice of minimizing the difference between instrument counts not phase space densities, and demonstrate the method with our reanalysis of the inner radiation belt throughout solar cycle 23. We will report on progress to continue our assimilation into solar cycle 24 using the Van Allen Probes/RPS observations.
Abuasbi, Falastine; Lahham, Adnan; Abdel-Raziq, Issam Rashid
2018-05-01
In this study, levels of extremely low-frequency electric and magnetic fields originated from overhead power lines were investigated in the outdoor environment in Ramallah city, Palestine. Spot measurements were applied to record fields intensities over 6-min period. The Spectrum Analyzer NF-5035 was used to perform measurements at 1 m above ground level and directly underneath 40 randomly selected power lines distributed fairly within the city. Levels of electric fields varied depending on the line's category (power line, transformer or distributor), a minimum mean electric field of 3.9 V/m was found under a distributor line, and a maximum of 769.4 V/m under a high-voltage power line (66 kV). However, results of electric fields showed a log-normal distribution with the geometric mean and the geometric standard deviation of 35.9 and 2.8 V/m, respectively. Magnetic fields measured at power lines, in contrast, were not log-normally distributed; the minimum and maximum mean magnetic fields under power lines were 0.89 and 3.5 μT, respectively. As a result, none of the measured fields exceeded the ICNIRP's guidelines recommended for general public exposures to extremely low-frequency fields.
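The geometric mean and geometric standard deviation, and a check of log-normality, can be computed as below; the field values are simulated stand-ins for the measured data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(14)
e_field = rng.lognormal(mean=np.log(35.9), sigma=np.log(2.8), size=40)   # stand-in V/m data

log_e = np.log(e_field)
gm = np.exp(log_e.mean())                       # geometric mean
gsd = np.exp(log_e.std(ddof=1))                 # geometric standard deviation

# A log-normal variable has normally distributed logs, so test normality of log(E)
p = stats.shapiro(log_e).pvalue
print(f"GM = {gm:.1f} V/m, GSD = {gsd:.1f}; Shapiro-Wilk on log(E): p = {p:.2f}")
```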
Studies of the 3D surface roughness height
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avisane, Anita; Rudzitis, Janis; Kumermanis, Maris
2013-12-16
Nowadays nano-coatings occupy an increasingly significant place in technology. Innovative, functional coatings acquire new aspects from the point of view of modern technologies, considering the aggregate of physical properties that can be achieved by manipulating, in the production process, the properties of the coatings' surfaces at the micro- and nano-level. Nano-coatings are applied on machine parts, friction surfaces, contacting parts, corrosion surfaces, transparent conducting films (TCF), etc. The equipment available at present for the production of transparent conducting oxide (TCO) coatings of the highest quality is based on expensive indium tin oxide (ITO) material; therefore cheaper alternatives are being sought. One such alternative is zinc oxide (ZnO) nano-coatings. In evaluating the TCF physical and mechanical properties, and in view of the new ISO standard (EN ISO 25178) introducing surface texture (3D surface roughness) into engineering calculations, it is necessary to examine the height of the 3D surface roughness, which is one of the most significant roughness parameters. This paper studies the average values of the 3D surface roughness height under the most often applied distribution laws: the normal distribution and the Rayleigh distribution. The 3D surface is simulated by a normal random field.
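A minimal sketch of simulating a normal random field and computing 3D height parameters in the spirit of EN ISO 25178 (Sa, Sq, Sz); the correlation length, scaling and units are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(15)
n = 512
white = rng.normal(size=(n, n))
surface = gaussian_filter(white, sigma=4.0)               # correlated normal random field
surface *= 0.05 / surface.std()                           # scale to Sq = 0.05 um (hypothetical)

z = surface - surface.mean()
sa = np.abs(z).mean()                                      # arithmetical mean height Sa
sq = np.sqrt((z ** 2).mean())                              # root-mean-square height Sq
sz = z.max() - z.min()                                     # maximum height Sz
print(f"Sa = {sa:.3f} um, Sq = {sq:.3f} um, Sz = {sz:.3f} um")
```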
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
[Do we always correctly interpret the results of statistical nonparametric tests].
Moczko, Jerzy A
2014-01-01
Mann-Whitney, Wilcoxon, Kruskal-Wallis and Friedman tests form a group of commonly used tests for analyzing the results of clinical and laboratory data. These tests are considered to be extremely flexible and their asymptotic relative efficiency exceeds 95 percent. Compared with the corresponding parametric tests, they do not require checking the fulfillment of conditions such as normality of the data distribution, homogeneity of variance, the lack of correlation between means and standard deviations, etc. They can be used with both interval and ordinal scales. The article presents an example based on the Mann-Whitney test showing that treating these four nonparametric tests as a kind of gold standard does not in every case lead to correct inference.
A reference tristimulus colorimeter
NASA Astrophysics Data System (ADS)
Eppeldauer, George P.
2002-06-01
A reference tristimulus colorimeter has been developed at NIST with a transmission-type silicon trap detector (1) and four temperature-controlled filter packages to realize the Commission Internationale de l'Eclairage (CIE) x(λ), y(λ) and z(λ) color matching functions (2). Instead of lamp standards, high accuracy detector standards are used for the colorimeter calibration. A detector-based calibration procedure is being suggested for tristimulus colorimeters where the absolute spectral responsivity of the tristimulus channels is determined. Then, color (spectral) correction and peak (amplitude) normalization are applied to minimize uncertainties caused by the imperfect realizations of the CIE functions. As a result of the corrections, the chromaticity coordinates of stable light sources with different spectral power distributions can be measured with uncertainties less than 0.0005 (k=1).
NASA Astrophysics Data System (ADS)
Ivanova, T. M.; Serebryany, V. N.
2017-12-01
The component fit method in quantitative texture analysis assumes that the texture of a polycrystalline sample can be represented by a superposition of weighted standard distributions that are characterized by their position in orientation space and by the shape and sharpness of the scattering. Components of peak and axial shape are usually used. It is known that an axial texture develops in materials subjected to direct pressing. In this paper we consider the possibility of modelling the texture of a magnesium sample subjected to equal-channel angular pressing (ECAP) with axial components only. The results obtained make it possible to conclude that ECAP is also a process leading to the appearance of an axial texture in magnesium alloys.
Measurement of top quark polarization in t t ¯ lepton + jets final states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abazov, V. M.; Abbott, B.; Acharya, B. S.
We present a study of top quark polarization in $t \overline{t}$ events produced in $p \overline{p}$ collisions at $\sqrt{s}=1.96$ TeV. Data correspond to 9.7 fb$^{-1}$ collected with the D0 detector at the Tevatron. We use final states containing a lepton and at least three jets. The polarization is measured using the distribution of leptons along the beam and helicity axes, and the axis normal to the production plane. This is the first measurement of top quark polarization at the Tevatron in $\ell$+jets final states, and the first measurement of transverse polarization in $t \overline{t}$ production. The observed distributions are consistent with the standard model.
Body mass index, immune status, and virological control in HIV-infected men who have sex with men.
Blashill, Aaron J; Mayer, Kenneth H; Crane, Heidi M; Grasso, Chris; Safren, Steven A
2013-01-01
Prior cross-sectional studies have found inconsistent relationships between body mass index (BMI) and disease progression in HIV-infected individuals. Cross-sectional and longitudinal analyses were conducted on data from a sample of 864 HIV-infected men who have sex with men (MSM) obtained from a large, nationally distributed HIV clinical cohort. Of the 864 HIV-infected MSM, 394 (46%) were of normal weight, 363 (42%) were overweight, and 107 (12%) were obese at baseline. The baseline CD4 count was 493 (standard error [SE] = 9), with viral load (log10) = 2.4 (SE = .04), and 561 (65%) were virologically suppressed. Over time, controlling for viral load, highly active antiretroviral therapy (HAART) adherence, age, and race/ethnicity, overweight and obese HIV-infected men possessed higher CD4 counts than those of normal weight HIV-infected men. Further, overweight and obese men possessed lower viral loads than those of normal weight HIV-infected men. For HIV-infected MSM, in this longitudinal cohort study, possessing a heavier than normal BMI was longitudinally associated with improved immunological health.
NASA Astrophysics Data System (ADS)
Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.
2009-05-01
Studying the probability density function and distribution function of electricity prices helps power suppliers and purchasers assess their own operations accurately, and helps the regulator monitor periods deviating from the normal distribution. Based on the assumption of a normally distributed load and the non-linear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices approximately follow a normal distribution only when the supply-demand relationship is loose, whereas otherwise the prices deviate from the normal distribution and exhibit strong right-skewness. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
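As a rough illustration of the mechanism described above (and not the paper's actual Zhejiang market model), the sketch below pushes a normally distributed load through an assumed convex aggregate supply curve and checks the skewness of the resulting prices; the exponential form of the curve and all parameter values are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# Assumed normally distributed load (arbitrary units) and an assumed convex
# aggregate supply curve mapping load to price; both are illustrative only.
load = rng.normal(loc=30.0, scale=4.0, size=100_000)
price = 20.0 + 0.02 * np.exp(0.2 * load)      # hypothetical nonlinear supply curve

print("price mean:", round(price.mean(), 2))
print("price skew:", round(skew(price), 2))   # clearly positive -> right-skewed
# With a near-linear curve the skewness stays close to zero, mirroring the
# "loose supply-demand" regime described in the abstract.
```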
Multilevel Sequential Monte Carlo Samplers for Normalizing Constants
Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...
2017-08-24
This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
MRI-guided fluorescence tomography of the breast: a phantom study
NASA Astrophysics Data System (ADS)
Davis, Scott C.; Pogue, Brian W.; Dehghani, Hamid; Paulsen, Keith D.
2009-02-01
Tissue phantoms simulating the human breast were used to demonstrate the imaging capabilities of an MRI-coupled fluorescence molecular tomography (FMT) imaging system. Specifically, phantoms with low tumor-to-normal drug contrast and complex internal structure were imaged with the MR-coupled FMT system. Images of indocyanine green (ICG) fluorescence yield were recovered using a diffusion model-based approach capable of estimating the distribution of fluorescence activity in a tissue volume from tissue-boundary measurements of transmitted light. Tissue structural information, which can be determined from standard T1 and T2 MR images, was used to guide the recovery of fluorescence activity. The study revealed that this spatial guidance is critical for recovering images of fluorescence yield in tissue with low tumor-to-normal drug contrast.
Development of evaluation technique of GMAW welding quality based on statistical analysis
NASA Astrophysics Data System (ADS)
Feng, Shengqiang; Terasaki, Hidenri; Komizo, Yuichi; Hu, Shengsun; Chen, Donggao; Ma, Zhihua
2014-11-01
Nondestructive techniques for appraising gas metal arc welding (GMAW) faults play a very important role in on-line quality controllability and prediction of the GMAW process. Existing approaches to on-line welding quality controllability and prediction have several disadvantages, such as high cost, low efficiency, complexity, and strong sensitivity to the environment. An enhanced, efficient technique for evaluating welding faults based on the Mahalanobis distance (MD) and the normal distribution is presented. In addition, a new piece of equipment, designated the weld quality tester (WQT), is developed based on the proposed evaluation technique. MD is superior to other multidimensional distances such as the Euclidean distance because the covariance matrix used for calculating MD takes into account correlations in the data and scaling. The values of MD obtained from welding current and arc voltage are assumed to follow a normal distribution, which has two parameters: the mean µ and standard deviation σ of the data. In the proposed evaluation technique used by the WQT, values of MD located in the range from zero to µ + 3σ are regarded as "good". Two experiments, which involve changing the flow of shielding gas and smearing paint on the surface of the substrate, are conducted in order to verify the sensitivity of the proposed evaluation technique and the feasibility of using the WQT. The experimental results demonstrate the usefulness of the WQT for evaluating welding quality. The proposed technique can be applied to implement on-line welding quality controllability and prediction, which is of great importance for designing novel equipment for weld quality detection.
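The decision rule described above can be sketched as follows: Mahalanobis distances of (current, voltage) samples are computed against reference data, a normal distribution is fitted to the reference distances, and samples beyond µ + 3σ are flagged. The data, thresholds, and variable names are illustrative assumptions, not the WQT implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference (known-good) welding records: columns = current (A), voltage (V).
ref = np.column_stack([rng.normal(220, 5, 500), rng.normal(24, 0.5, 500)])
mean = ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

def mahalanobis(x):
    d = x - mean
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Fit a normal distribution to the reference distances and set the acceptance band.
md_ref = mahalanobis(ref)
mu, sigma = md_ref.mean(), md_ref.std(ddof=1)
upper = mu + 3 * sigma

new = np.array([[221.0, 24.1], [205.0, 27.0]])   # second row: simulated fault
print(mahalanobis(new) <= upper)                  # [ True False ] -> good / suspect
```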
van Albada, S J; Robinson, P A
2007-04-15
Many variables in the social, physical, and biosciences, including neuroscience, are non-normally distributed. To improve the statistical properties of such data, or to allow parametric testing, logarithmic or logit transformations are often used. Box-Cox transformations or ad hoc methods are sometimes used for parameters for which no transformation is known to approximate normality. However, these methods do not always give good agreement with the Gaussian. A transformation is discussed that maps probability distributions as closely as possible to the normal distribution, with exact agreement for continuous distributions. To illustrate, the transformation is applied to a theoretical distribution, and to quantitative electroencephalographic (qEEG) measures from repeat recordings of 32 subjects which are highly non-normal. Agreement with the Gaussian was better than using logarithmic, logit, or Box-Cox transformations. Since normal data have previously been shown to have better test-retest reliability than non-normal data under fairly general circumstances, the implications of our transformation for the test-retest reliability of parameters were investigated. Reliability was shown to improve with the transformation, where the improvement was comparable to that using Box-Cox. An advantage of the general transformation is that it does not require laborious optimization over a range of parameters or a case-specific choice of form.
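One common way to realize a mapping of this kind in practice (not necessarily the exact transformation of the authors) is a rank-based inverse normal transform: each value is replaced by its empirical cumulative probability, which is then passed through the standard normal quantile function. A minimal sketch:

```python
import numpy as np
from scipy.stats import norm, rankdata, skew

def rank_inverse_normal(x):
    """Map a sample to approximate normality via its empirical CDF."""
    ranks = rankdata(x)                 # 1..n, ties averaged
    p = (ranks - 0.5) / len(x)          # keep probabilities strictly inside (0, 1)
    return norm.ppf(p)

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=1.0, size=5000)   # strongly right-skewed data
z = rank_inverse_normal(x)
print("skewness before:", round(skew(x), 2), " after:", round(skew(z), 2))
```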
Mean and Fluctuating Force Distribution in a Random Array of Spheres
NASA Astrophysics Data System (ADS)
Akiki, Georges; Jackson, Thomas; Balachandar, Sivaramakrishnan
2015-11-01
This work presents a numerical study of the force distribution within a cluster of mono-disperse spherical particles. A direct forcing immersed boundary method is used to calculate the forces on individual particles for a volume fraction range of [0.1, 0.4] and a Reynolds number range of [10, 625]. The overall drag is compared to several drag laws found in the literature. The fluctuation of the hydrodynamic streamwise force among individual particles is shown to have a normal distribution with a standard deviation that varies with the volume fraction only. The standard deviation remains approximately 25% of the mean streamwise force on a single sphere. The force distribution shows a good correlation between the location of the two to three nearest upstream and downstream neighbors and the magnitude of the forces. A detailed analysis of the pressure and shear force contributions calculated on a ghost sphere in the vicinity of a single particle in a uniform flow reveals a mapping of those contributions. The combination of the mapping and the number of nearest neighbors leads to a first-order correction of the force distribution within a cluster, which can be used in Lagrangian-Eulerian techniques. We also explore the possibility of a binary force model that systematically accounts for the effect of the nearest neighbors. This work was supported by the National Science Foundation (NSF OISE-0968313) under Partnership for International Research and Education (PIRE) in Multiphase Flows at the University of Florida.
An approach for the semantic interoperability of ISO EN 13606 and OpenEHR archetypes.
Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
2010-10-01
The communication between health information systems of hospitals and primary care organizations is currently an important challenge to improve the quality of clinical practice and patient safety. However, clinical information is usually distributed among several independent systems that may be syntactically or semantically incompatible. This fact prevents healthcare professionals from accessing clinical information of patients in an understandable and normalized way. In this work, we address the semantic interoperability of two EHR standards: OpenEHR and ISO EN 13606. Both standards follow the dual model approach which distinguishes information and knowledge, this being represented through archetypes. The solution presented here is capable of transforming OpenEHR archetypes into ISO EN 13606 and vice versa by combining Semantic Web and Model-driven Engineering technologies. The resulting software implementation has been tested using publicly available collections of archetypes for both standards.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A⁻¹B, where A and B are random matrices with standard complex normal entries. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
Size distribution of radon daughter particles in uranium mine atmospheres.
George, A C; Hinchliffe, L; Sladowski, R
1975-06-01
The size distribution of radon daughters was measured in several uranium mines using four compact diffusion batteries and a round jet cascade impactor. Simultaneously, measurements were made of uncombined fractions of radon daughters, radon concentration, working level and particle concentration. The size distributions found for radon daughters were log normal. The activity median diameters ranged from 0.09 μm to 0.3 μm with a mean value of 0.17 μm. Geometric standard deviations were in the range from 1.3 to 4 with a mean value of 2.7. Uncombined fractions expressed in accordance with the ICRP definition ranged from 0.004 to 0.16 with a mean value of 0.04. The radon daughter sizes in these mines are greater than the sizes assumed by various authors in calculating respiratory tract dose. The disparity may reflect the widening use of diesel-powered equipment in large uranium mines.
Method development estimating ambient mercury concentration from monitored mercury wet deposition
NASA Astrophysics Data System (ADS)
Chen, S. M.; Qiu, X.; Zhang, L.; Yang, F.; Blanchard, P.
2013-05-01
Speciated atmospheric mercury data have recently been monitored at multiple locations in North America, but the spatial coverage is far less than that of the long-established mercury wet deposition network. The present study describes a first attempt at linking ambient concentration with wet deposition using Beta distribution fitting of a ratio estimate. The mean, median, mode, standard deviation, and skewness of the fitted Beta distribution were generated using data collected in 2009 at 11 monitoring stations. Comparing the normalized histogram and the fitted density function, the empirical and fitted Beta distributions of the ratio show a close fit. The estimated ambient mercury concentration was further partitioned into reactive gaseous mercury and particulate-bound mercury using a linear regression model developed by Amos et al. (2012). The method presented here can be used to roughly estimate ambient mercury concentration at locations and/or times where such measurements are not available but where wet deposition is monitored.
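A minimal sketch of the fitting step described above, assuming the concentration-to-deposition ratio has already been rescaled to lie in (0, 1); scipy's beta.fit stands in for whatever estimation routine the authors used, and the summary statistics printed are those listed in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ratio = rng.beta(2.5, 6.0, size=500)          # placeholder for observed, rescaled ratios

# Fit a Beta distribution with its support fixed to (0, 1).
a, b, loc, scale = stats.beta.fit(ratio, floc=0, fscale=1)

mean, var, skewness = (float(v) for v in stats.beta.stats(a, b, moments="mvs"))
mode = (a - 1) / (a + b - 2) if a > 1 and b > 1 else float("nan")
print(f"a={a:.2f} b={b:.2f}")
print(f"mean={mean:.3f} median={stats.beta.median(a, b):.3f} mode={mode:.3f}")
print(f"sd={np.sqrt(var):.3f} skewness={skewness:.3f}")
```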
Understanding poisson regression.
Hayat, Matthew J; Higgins, Melinda
2014-04-01
Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
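A minimal sketch of the workflow the article describes, using statsmodels on simulated count data; the covariate, the overdispersion check, and the switch to a negative binomial model are illustrative and not taken from the ENSPIRE analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
age = rng.normal(50, 10, n)
X = sm.add_constant(age)

# Simulate overdispersed counts (negative binomial) so a plain Poisson fit is misspecified.
mu = np.exp(0.5 + 0.02 * age)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print("Pearson chi2 / df:", poisson_fit.pearson_chi2 / poisson_fit.df_resid)  # >> 1 signals overdispersion

# Negative binomial GLM as the alternative for overdispersed counts.
nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb_fit.params)
```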
Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.
Kodell, R L; Gaylor, D W
1999-01-01
Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
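On the log scale the calculation described reduces to summing normal means and variances and taking an upper percentile of the resulting normal distribution. A minimal sketch with made-up log-normal parameters for four uncertainty factors (placeholders, not the values used by the authors):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical (mu, sigma) of ln(uncertainty factor) for four sources of uncertainty.
log_params = [(np.log(3.0), 0.6),   # interspecies sensitivity
              (np.log(3.0), 0.6),   # intraspecies (human) sensitivity
              (np.log(2.0), 0.5),   # duration extrapolation
              (np.log(2.0), 0.5)]   # dose extrapolation

mu_total = sum(m for m, _ in log_params)
sd_total = np.sqrt(sum(s**2 for _, s in log_params))

for pct in (0.95, 0.99):
    combined = np.exp(norm.ppf(pct, loc=mu_total, scale=sd_total))
    print(f"{pct:.0%} combined uncertainty factor ≈ {combined:.0f}")

print("product of default 10s:", 10**4)   # the conventional product, for comparison
```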
Nicolás, R O
1987-09-15
Different optical analyses of cylindrical-parabolic concentrators were made by utilizing four models of the intensity distribution of the solar disk, i.e., square, uniform, real, and Gaussian. In this paper, the validity conditions for using such distributions are determined by calculating, for each model, the intensity distribution on the receiver plane of perfect and nonperfect cylindrical-parabolic concentrators. We call nonperfect concentrators those in which the normal to each differential element of the specular surface departs from its correct position by an angle whose possible values follow a Gaussian distribution of mean value ε and standard deviation σ(ε). In particular, the results obtained with the models considered for a concentrator with an aperture half-angle of 45 degrees are shown and compared. An important conclusion is that for σ(ε) ≳ 4 mrad, and in some cases for σ(ε) ≳ 2 mrad, the results obtained are practically independent of the model used.
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent and geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
Association of auricular pressing and heart rate variability in pre-exam anxiety students.
Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong
2013-03-25
A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale who had been exhibiting an anxious state > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation and in the second half of stimulation. The results revealed that the standard deviation of all normal to normal intervals and the root mean square of standard deviation of normal to normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of standard deviation of normal to normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of standard deviation of normal to normal intervals in students with pre-exam anxiety.
Simplified Approach Charts Improve Data Retrieval Performance
Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.
2016-01-01
The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009
NASA Technical Reports Server (NTRS)
Press, Harry; Mazelsky, Bernard
1954-01-01
The applicability of some results from the theory of generalized harmonic analysis (or power-spectral analysis) to the analysis of gust loads on airplanes in continuous rough air is examined. The general relations for linear systems between power spectrums of a random input disturbance and an output response are used to relate the spectrum of airplane load in rough air to the spectrum of atmospheric gust velocity. The power spectrum of loads is shown to provide a measure of the load intensity in terms of the standard deviation (root mean square) of the load distribution for an airplane in flight through continuous rough air. For the case of a load output having a normal distribution, which appears from experimental evidence to apply to homogeneous rough air, the standard deviation is shown to describe the probability distribution of loads or the proportion of total time that the load has given values. Thus, for an airplane in flight through homogeneous rough air, the probability distribution of loads may be determined from a power-spectral analysis. In order to illustrate the application of power-spectral analysis to gust-load analysis and to obtain an insight into the relations between loads and airplane gust-response characteristics, two selected series of calculations are presented. The results indicate that both methods of analysis yield results that are consistent to a first approximation.
Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu
2009-02-01
The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate the LDSF can significantly improve the sensitivity of the result validation procedure.
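A minimal sketch of the filtration step in the LDSF idea: estimate the mass-error distribution from high-confidence identifications, then retain only matches whose precursor error falls inside a tight window. The confidence threshold, window width, and simulated data are illustrative assumptions, not FTDR's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated precursor mass errors (ppm) and search-engine scores after a large-MET search.
errors = np.concatenate([rng.normal(1.2, 0.8, 900),     # true matches with a systematic shift
                         rng.uniform(-20, 20, 300)])     # random (false) matches
scores = np.concatenate([rng.normal(60, 10, 900), rng.normal(20, 10, 300)])

high_conf = scores > 50                       # placeholder for a strict confidence filter
mu, sd = errors[high_conf].mean(), errors[high_conf].std(ddof=1)

small_met = 3 * sd                            # statistical MET estimated from confident hits
keep = np.abs(errors - mu) <= small_met
print(f"estimated window: {mu:.2f} ± {small_met:.2f} ppm, retained {keep.sum()} of {len(errors)}")
```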
NASA Astrophysics Data System (ADS)
Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha
2017-11-01
Control charts are established as among the most powerful tools in Statistical Process Control (SPC) and are widely used in industry. Conventional control charts rely on the normality assumption, which is not always satisfied by industrial data. This paper proposes a new S control chart for monitoring process dispersion using a skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with that of various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S), skewness correction R chart (SC-R), weighted variance R chart (WV-R), weighted variance S chart (WV-S), and standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detections is also carried out. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good performance in terms of in-control probabilities (Type I error) at almost all skewness levels and sample sizes n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than that of all the existing control charts for monitoring process dispersion with respect to both Type I error and the probability of detecting a shift.
Briolant, Sébastien; Baragatti, Meili; Parola, Philippe; Simon, Fabrice; Tall, Adama; Sokhna, Cheikh; Hovette, Philippe; Mamfoumbi, Modeste Mabika; Koeck, Jean-Louis; Delmont, Jean; Spiegel, André; Castello, Jacky; Gardair, Jean Pierre; Trape, Jean Francois; Kombila, Maryvonne; Minodier, Philippe; Fusai, Thierry; Rogier, Christophe; Pradines, Bruno
2009-01-01
The distribution and range of 50% inhibitory concentrations (IC50s) of doxycycline were determined for 747 isolates obtained between 1997 and 2006 from patients living in Senegal, Republic of the Congo, and Gabon and patients hospitalized in France for imported malaria. The statistical analysis was designed to answer the specific question of whether Plasmodium falciparum has different phenotypes of susceptibility to doxycycline. A triple normal distribution was fitted to the data using a Bayesian mixture modeling approach. The IC50 geometric mean ranged from 6.2 μM to 11.1 μM according to the geographical origin, with a mean of 9.3 μM for all 747 parasites. The values for all 747 isolates were classified into three components: component A, with an IC50 mean of 4.9 μM (±2.1 μM [standard deviation]); component B, with an IC50 mean of 7.7 μM (±1.2 μM); and component C, with an IC50 mean of 17.9 μM (±1.4 μM). According to the origin of the P. falciparum isolates, the triple normal distribution was found in each subgroup. However, the proportion of isolates predicted to belong to component B was most important in isolates from Gabon and Congo and in isolates imported from Africa (from 46 to 56%). In Senegal, 55% of the P. falciparum isolates were predicted to be classified as component C. The cutoff of reduced susceptibility to doxycycline in vitro was estimated to be 35 μM. PMID:19047651
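A minimal sketch of fitting a three-component normal mixture to IC50 values; scikit-learn's EM-based GaussianMixture is used here in place of the authors' Bayesian mixture model, and the simulated data only mimic the three components reported.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
ic50 = np.concatenate([rng.normal(4.9, 2.1, 200),
                       rng.normal(7.7, 1.2, 350),
                       rng.normal(17.9, 1.4, 200)])        # illustrative values, in µM

gmm = GaussianMixture(n_components=3, random_state=0).fit(ic50.reshape(-1, 1))
for k in np.argsort(gmm.means_.ravel()):
    print(f"component mean {gmm.means_[k, 0]:5.1f} µM, "
          f"sd {np.sqrt(gmm.covariances_[k, 0, 0]):4.1f}, weight {gmm.weights_[k]:.2f}")
```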
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2013 CFR
2013-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Smoke Standards for Non-Normalized...
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Smoke Standards for Non-Normalized...
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2012 CFR
2012-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Smoke Standards for Non-Normalized...
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2014 CFR
2014-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Smoke Standards for Non-Normalized...
40 CFR Appendix III to Part 92 - Smoke Standards for Non-Normalized Measurements
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AIR PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Pt. 92, App. III Appendix III to Part 92—Smoke Standards for Non-Normalized Measurements Table III-1—Equivalent... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Smoke Standards for Non-Normalized...
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
Chen, T M; Chen, Q P; Liu, R C; Szot, A; Chen, S L; Zhao, J; Zhou, S S
2017-02-01
Hundreds of small-scale influenza outbreaks in schools are reported in mainland China every year, leading to a heavy disease burden which seriously impacts the operation of affected schools. Knowing the transmissibility of each outbreak in the early stage has become a major concern for public health policy-makers and primary healthcare providers. In this study, we collected all the small-scale outbreaks in Changsha (a large city in south central China with ~7·04 million population) from January 2005 to December 2013. Four simple and popularly used models were employed to calculate the reproduction number (R) of these outbreaks. Given that the duration of a generation interval Tc = 2·7 and the standard deviation (s.d.) σ = 1·1, the mean R estimated by an epidemic model, normal distribution and delta distribution were 2·51 (s.d. = 0·73), 4·11 (s.d. = 2·20) and 5·88 (s.d. = 5·00), respectively. When Tc = 2·9 and σ = 1·4, the mean R estimated by the three models were 2·62 (s.d. = 0·78), 4·72 (s.d. = 2·82) and 6·86 (s.d. = 6·34), respectively. The mean R estimated by gamma distribution was 4·32 (s.d. = 2·47). We found that the values of R in small-scale outbreaks in schools were higher than in large-scale outbreaks in a neighbourhood, city or province. Normal distribution, delta distribution, and gamma distribution models seem to more easily overestimate the R of influenza outbreaks compared to the epidemic model.
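One standard way to turn an early exponential growth rate r and a generation-interval distribution into R is the Lotka-Euler relation R = 1/M(-r), where M is the moment generating function of the generation interval (Wallinga and Lipsitch). The sketch below evaluates it for delta, normal, and gamma generation intervals with Tc = 2.7 d and σ = 1.1 d; the growth rate is a made-up value, and the epidemic-model estimator used in the study is not reproduced here.

```python
import numpy as np

r = 0.35                 # assumed per-day exponential growth rate from early case counts
Tc, sigma = 2.7, 1.1     # generation-interval mean and standard deviation (days)

R_delta  = np.exp(r * Tc)                          # all generations exactly Tc apart
R_normal = np.exp(r * Tc - 0.5 * r**2 * sigma**2)  # normally distributed generation interval
k, theta = (Tc / sigma) ** 2, sigma**2 / Tc        # gamma parameterised by mean and sd
R_gamma  = (1 + r * theta) ** k

print(f"R (delta)  = {R_delta:.2f}")
print(f"R (normal) = {R_normal:.2f}")
print(f"R (gamma)  = {R_gamma:.2f}")
```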
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
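The bootstrap step can be sketched as follows for a generic efficiency-gain estimate formed from per-history scores of the two estimators; the simulated data, the definition of gain as a variance ratio at equal cost, and the percentile interval are simplifying assumptions rather than the paper's exact procedure (which uses the shortest interval).

```python
import numpy as np

rng = np.random.default_rng(7)

# Per-history contributions from a conventional and a correlated-sampling run (simulated).
conv = rng.normal(1.0, 1.0, 5000)
corr = rng.normal(1.0, 0.3, 5000) + rng.standard_t(3, 5000) * 0.02   # heavy-tailed noise

gain = conv.var(ddof=1) / corr.var(ddof=1)          # efficiency gain at equal cost per history

boot = np.empty(2000)
for b in range(boot.size):
    c1 = rng.choice(conv, conv.size, replace=True)
    c2 = rng.choice(corr, corr.size, replace=True)
    boot[b] = c1.var(ddof=1) / c2.var(ddof=1)

lo, hi = np.percentile(boot, [2.5, 97.5])           # simple percentile interval
print(f"gain ≈ {gain:.1f}, 95% bootstrap CI ({lo:.1f}, {hi:.1f})")
```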
A histopathological study of bulbar conjunctival flaps occurring in 2 contact lens wearers.
Markoulli, Maria; Francis, Ian C; Yong, Jim; Jalbert, Isabelle; Carnt, Nicole; Cole, Nerida; Papas, Eric
2011-09-01
To study the histopathology of paralimbal bulbar conjunctival flaps occurring secondary to soft contact lens wear. Slit-lamp biomicroscopy using sodium fluorescein, cobalt blue light, and a Wratten filter was used to observe the presence, location, and dimensions of bulbar conjunctival flaps presenting in a cohort of contact lens wearers. Two subjects who exhibited such flaps agreed to undergo conjunctival biopsy. Tissue samples, obtained from the region of the flap, and an adjacent unaffected area were processed by standard histopathological methods. In the first subject, analysis of the flap tissue showed even collagen distribution and overall normal histology. The flap of the second subject displayed a mild focal increase in collagen and mild degeneration of collagen, but no increase in elastic tissue. Conjunctival epithelium was normal in both cases. In these 2 subjects, conjunctival flap tissue either was normal or showed only minimal abnormality. There is insufficient evidence for significant pathological change on the time scale of this study.
Dichotomisation using a distributional approach when the outcome is skewed.
Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L
2015-04-24
Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviation from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normality, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
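A minimal sketch of the distributional idea for a single group: the proportion below a clinical cutpoint is computed from a fitted continuous distribution rather than by counting, using either a normal or a skew-normal fit. The data, cutpoint, and parameter values are illustrative, and the precision formulas of the method are not reproduced.

```python
import numpy as np
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(8)
birthweight = skewnorm.rvs(a=-4, loc=3.9, scale=0.8, size=1000)   # simulated, left-skewed (kg)
cutpoint = 2.5                                                     # low-birthweight threshold

# Normal distributional estimate of the proportion below the cutpoint.
p_normal = norm.cdf(cutpoint, loc=birthweight.mean(), scale=birthweight.std(ddof=1))

# Skew-normal distributional estimate.
a, loc, scale = skewnorm.fit(birthweight)
p_skew = skewnorm.cdf(cutpoint, a, loc=loc, scale=scale)

print(f"empirical  : {(birthweight < cutpoint).mean():.3f}")
print(f"normal     : {p_normal:.3f}")
print(f"skew-normal: {p_skew:.3f}")
```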
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
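A minimal sketch of the kind of simulation described: a discrete-time integrate-and-fire unit driven by noise drawn from a normal, gamma, or uniform distribution (matched in mean and variance), with the resulting inter-impulse intervals collected for comparison. Parameter values are arbitrary choices, not those of the original model.

```python
import numpy as np

rng = np.random.default_rng(9)

def isi(noise_draw, n_steps=100_000, drift=0.05, threshold=1.0):
    """Return inter-impulse intervals of a simple discrete integrate-and-fire unit."""
    v, last_spike, intervals = 0.0, 0, []
    for t in range(n_steps):
        v += drift + noise_draw()
        if v >= threshold:
            intervals.append(t - last_spike)
            last_spike, v = t, 0.0
    return np.array(intervals)

sd = 0.05
noises = {
    "normal":  lambda: rng.normal(0.0, sd),
    "gamma":   lambda: rng.gamma(1.0, sd) - sd,                    # first-order gamma, mean-centred
    "uniform": lambda: rng.uniform(-sd * np.sqrt(3), sd * np.sqrt(3)),
}
for name, draw in noises.items():
    iv = isi(draw)
    print(f"{name:7s}: mean ISI {iv.mean():5.1f}, CV {iv.std() / iv.mean():.2f}")
```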
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
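A minimal sketch of the least power estimation (LPE) idea behind the p-normal-based methods: the location estimate minimizes the sum of |x - m|^p, which reduces to the arithmetic mean for p = 2 and approaches the median as p tends to 1. The choice of p and the data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lpe_location(x, p):
    """Least power estimate: argmin_m sum(|x - m|**p)."""
    objective = lambda m: np.sum(np.abs(x - m) ** p)
    return minimize_scalar(objective, bounds=(x.min(), x.max()), method="bounded").x

rng = np.random.default_rng(10)
soil_moisture = np.concatenate([rng.normal(0.25, 0.03, 40), [0.60, 0.65]])  # two outliers

for p in (1.2, 2.0):
    print(f"p = {p}: LPE location = {lpe_location(soil_moisture, p):.3f}")
print(f"arithmetic mean = {soil_moisture.mean():.3f}, median = {np.median(soil_moisture):.3f}")
```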
40 CFR 190.10 - Standards for normal operations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Standards for normal operations. 190.10 Section 190.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards...
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2017-01-01
Dynamical systems in nature exhibit self-similar fractal space-time fluctuations on all scales indicating long-range correlations and, therefore, the statistical normal distribution with implicit assumption of independence, fixed mean and standard deviation cannot be used for description and quantification of fractal data sets. The author has developed a general systems theory based on classical statistical physics for fractal fluctuations which predicts the following. (1) The fractal fluctuations signify an underlying eddy continuum, the larger eddies being the integrated mean of enclosed smaller-scale fluctuations. (2) The probability distribution of eddy amplitudes and the variance (square of eddy amplitude) spectrum of fractal fluctuations follow the universal Boltzmann inverse power law expressed as a function of the golden mean. (3) Fractal fluctuations are signatures of quantum-like chaos since the additive amplitudes of eddies when squared represent probability densities analogous to the sub-atomic dynamics of quantum systems such as the photon or electron. (4) The model predicted distribution is very close to statistical normal distribution for moderate events within two standard deviations from the mean but exhibits a fat long tail that are associated with hazardous extreme events. Continuous periodogram power spectral analyses of available GHCN annual total rainfall time series for the period 1900-2008 for Indian and USA stations show that the power spectra and the corresponding probability distributions follow model predicted universal inverse power law form signifying an eddy continuum structure underlying the observed inter-annual variability of rainfall. On a global scale, man-made greenhouse gas related atmospheric warming would result in intensification of natural climate variability, seen immediately in high frequency fluctuations such as QBO and ENSO and even shorter timescales. Model concepts and results of analyses are discussed with reference to possible prediction of climate change. Model concepts, if correct, rule out unambiguously, linear trends in climate. Climate change will only be manifested as increase or decrease in the natural variability. However, more stringent tests of model concepts and predictions are required before applications to such an important issue as climate change. Observations and simulations with climate models show that precipitation extremes intensify in response to a warming climate (O'Gorman in Curr Clim Change Rep 1:49-59, 2015).
Engineering Design Handbook. Maintainability Engineering Theory and Practice
1976-01-01
[Table-of-contents excerpts: 5-8.4.1.1 Human Body Measurement (Anthropometry); 5-8.4.1.2 Man's Sensory Capability and Psychological Makeup; Availability of System With Maintenance Time Ratio 1:4; Average and Pointwise Availability; Hypothetical ...] The probability density function (pdf) of the normal distribution (Ref. 22, Chapter 10, and Ref. 23, Chapter 1) has the equation f(x) = (1/(σ√(2π))) exp(-(x - μ)²/(2σ²)), where σ is the standard deviation of the distribution.
Multiwavelength Studies of Rotating Radio Transients
NASA Astrophysics Data System (ADS)
Miller, Joshua J.
Seven years ago, a new class of pulsars called the Rotating Radio Transients (RRATs) was discovered with the Parkes radio telescope in Australia (McLaughlin et al., 2006). These neutron stars are characterized by strong radio bursts at repeatable dispersion measures, but not detectable using standard periodicity-search algorithms. We now know of roughly 100 of these objects, discovered in new surveys and re-analysis of archival survey data. They generally have longer periods than those of the normal pulsar population, and several have high magnetic fields, similar to those other neutron star populations like the X-ray bright magnetars. However, some of the RRATs have spin-down properties very similar to those of normal pulsars, making it difficult to determine the cause of their unusual emission and possible evolutionary relationships between them and other classes of neutron stars. We have calculated single-pulse flux densities for eight RRAT sources observed using the Parkes radio telescope. Like normal pulsars, the pulse amplitude distributions are well described by log-normal probability distribution functions, though two show evidence for an additional power-law tail. Spectral indices are calculated for the seven RRATs which were detected at multiple frequencies. These RRATs have a mean spectral index of
A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.
2011-11-02
Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha-emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimates are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is also discussed.
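One way to implement the Bayesian idea sketched above (the paper's actual prior and decision rule may differ) is to form a gamma posterior for the Poisson background rate from historical counts on clean detectors and then ask how surprising a new detector's count would be under the posterior predictive (negative binomial) distribution. All prior parameters below are assumptions.

```python
import numpy as np
from scipy.stats import nbinom

# Historical background counts from new, uncontaminated detectors (equal count times).
historical = np.array([0, 1, 0, 2, 0, 1, 0, 0, 1, 0])

# Gamma(a0, b0) prior on the Poisson background rate (weakly informative; assumed values).
a0, b0 = 0.5, 0.1
a_post = a0 + historical.sum()
b_post = b0 + historical.size

# Posterior predictive for the next count is negative binomial.
r, p = a_post, b_post / (b_post + 1.0)
for k in range(7):
    print(f"P(count >= {k}) = {nbinom.sf(k - 1, r, p):.4f}")
# A detector whose observed count has a very small predictive tail probability
# would be flagged as possibly contaminated.
```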
Is Coefficient Alpha Robust to Non-Normal Data?
Sheng, Yanyan; Sheng, Zhaohui
2011-01-01
Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
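For reference, a minimal function computing sample coefficient alpha from a respondents-by-items score matrix, which is the quantity whose behaviour under non-normal true and error scores the study examines; the simulated data are only a placeholder.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(11)
true_score = rng.normal(size=(300, 1))
items = true_score + rng.normal(scale=1.0, size=(300, 8))   # 8 roughly parallel items
print(f"alpha = {cronbach_alpha(items):.3f}")
```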
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from it. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
On Nonequivalence of Several Procedures of Structural Equation Modeling
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2005-01-01
The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…
Jogenfors, Jonathan; Elhassan, Ashraf Mohamed; Ahrens, Johan; Bourennane, Mohamed; Larsson, Jan-Åke
2015-12-01
Photonic systems based on energy-time entanglement have been proposed to test local realism using the Bell inequality. A violation of this inequality normally also certifies security of device-independent quantum key distribution (QKD) so that an attacker cannot eavesdrop or control the system. We show how this security test can be circumvented in energy-time entangled systems when using standard avalanche photodetectors, allowing an attacker to compromise the system without leaving a trace. We reach Bell values up to 3.63 at 97.6% faked detector efficiency using tailored pulses of classical light, which exceeds even the quantum prediction. This is the first demonstration of a violation-faking source that gives both tunable violation and high faked detector efficiency. The implications are severe: the standard Clauser-Horne-Shimony-Holt inequality cannot be used to show device-independent security for energy-time entanglement setups based on Franson's configuration. However, device-independent security can be reestablished, and we conclude by listing a number of improved tests and experimental setups that would protect against all current and future attacks of this type.
Robustness of location estimators under t-distributions: a literature review
NASA Astrophysics Data System (ADS)
Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.
2017-03-01
The assumption of normality is commonly used in the estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since t-distributions have longer tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration we use the onion yield data, which include outliers, as a case study and show that the t model produces a better fit than the normal model.
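A sketch of the kind of comparison described, fitting a normal and a t model by maximum likelihood and comparing them by AIC; the data below are simulated stand-ins for the onion yield example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative yield data with a few outliers (stand-in for the onion yield example).
y = np.concatenate([rng.normal(50, 5, 95), [95, 100, 110, 15, 120]])

# Fit a normal model and a t model by maximum likelihood and compare via AIC.
mu, sigma = stats.norm.fit(y)
df, loc, scale = stats.t.fit(y)

aic_norm = 2 * 2 - 2 * stats.norm.logpdf(y, mu, sigma).sum()
aic_t = 2 * 3 - 2 * stats.t.logpdf(y, df, loc, scale).sum()
print(f"normal: loc={mu:.2f}, AIC={aic_norm:.1f}")
print(f"t     : loc={loc:.2f}, df={df:.1f}, AIC={aic_t:.1f}")
```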
Lee, Kathy E.; Barber, Larry B.; Furlong, Edward T.; Cahill, Jeffery D.; Kolpin, Dana W.; Meyer, Michael T.; Zaugg, Steven D.
2004-01-01
Results of this study indicate ubiquitous distribution of measured OWCs in the environment that originate from numerous sources and pathways. During this reconnaissance of OWCs in Minnesota it was not possible to determine the specific sources of OWCs to surface, ground, or drinking waters. The data indicate WWTP effluent is a major pathway of OWCs to surface waters and that landfill leachate at selected facilities is a potential source of OWCs to WWTPs. Aquatic organism or human exposure to some OWCs is likely based on OWC distribution. Few aquatic or human health standards or criteria exist for the OWCs analyzed, and the risks to humans or aquatic wildlife are not known. Some OWCs detected in this study are endocrine disrupters and have been found to disrupt or influence endocrine function in fish. Thirteen endocrine disrupters, 3-tert-butyl-4-hydroxyanisole (BHA), 4-cumylphenol, 4-normal-octylphenol, 4-tert-octylphenol, acetyl-hexamethyl-tetrahydro-naphthalene (AHTN), benzo[a]pyrene, beta-sitosterol, bisphenol-A, diazinon, nonylphenol diethoxylate (NP2EO), octylphenol diethoxylate (OP2EO), octylphenol monoethoxylate (OP1EO), and total para-nonylphenol (NP), were detected. Results of reconnaissance studies may help regulators who set water-quality standards begin to prioritize which OWCs to focus upon for given categories of water use.
Pore Size Distributions Inferred from Modified Inversion Percolation Modeling of Drainage Curves
NASA Astrophysics Data System (ADS)
Dralus, D. E.; Wang, H. F.; Strand, T. E.; Glass, R. J.; Detwiler, R. L.
2005-12-01
Drainage experiments have been conducted in sand packs. At equilibrium, the interface between the fluids forms a saturation transition fringe where the saturation decreases monotonically with height. This behavior was observed in a 1-inch thick pack of 20-30 sand contained front and back within two thin, 12-inch-by-24-inch glass plates. The translucent chamber was illuminated from behind by a bank of fluorescent bulbs. Acquired data were in the form of images captured by a CCD camera with resolution on the grain scale. The measured intensity of the transmitted light was used to calculate the average saturation at each point in the chamber. This study used a modified invasion percolation (MIP) model to simulate the drainage experiments to evaluate the relationship between the saturation-versus-height curve at equilibrium and the pore size distribution associated with the granular medium. The simplest interpretation of a drainage curve is in terms of a distribution of capillary tubes whose radii reproduce the observed distribution of rise heights. However, this apparent radius distribution obtained from direct inversion of the saturation profile did not yield the assumed radius distribution. Further investigation demonstrated that the equilibrium height distribution is controlled primarily by the Bond number (the ratio of gravity to capillary forces) with some influence from the width of the pore radius distribution. The width of the equilibrium fringe is quantified in terms of the ratio of the Bond number to the standard deviation of the pore throat distribution. The normalized saturation-vs-height curves exhibit a power-law scaling behavior consistent with both Brooks-Corey and Van Genuchten type curves. Fundamental tenets of percolation theory were used to quantify the relationship between the apparent and actual radius distributions as a function of the mean coordination number and of the ratio of Bond number to standard deviation, which was supported by both MIP simulations and corresponding drainage experiments.
A short note on the maximal point-biserial correlation under non-normality.
Cheng, Ying; Liu, Haiyan
2016-11-01
The aim of this paper is to derive the maximal point-biserial correlation under non-normality. Several widely used non-normal distributions are considered, namely the uniform distribution, t-distribution, exponential distribution, and a mixture of two normal distributions. Results show that the maximal point-biserial correlation, depending on the non-normal continuous variable underlying the binary manifest variable, may not be a function of p (the probability that the dichotomous variable takes the value 1), can be symmetric or non-symmetric around p = .5, and may still lie in the range from -1.0 to 1.0. Therefore researchers should exercise caution when they interpret their sample point-biserial correlation coefficients based on popular beliefs that the maximal point-biserial correlation is always smaller than 1, and that the size of the correlation is always further restricted as p deviates from .5. © 2016 The British Psychological Society.
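A simulation sketch of the idea: for a given p, the maximal point-biserial correlation is attained when the binary variable is an optimal threshold indicator of the underlying continuous variable. The distributions and sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def max_point_biserial(x, p):
    """Point-biserial correlation when the binary variable optimally splits x at its (1 - p) quantile."""
    d = (x > np.quantile(x, 1 - p)).astype(float)
    return np.corrcoef(x, d)[0, 1]

for name, x in {"normal": rng.normal(size=n),
                "uniform": rng.uniform(size=n),
                "exponential": rng.exponential(size=n)}.items():
    # Note the asymmetry around p = .5 for the skewed (exponential) latent variable.
    print(name, [round(max_point_biserial(x, p), 3) for p in (0.1, 0.3, 0.5, 0.7, 0.9)])
```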
NASA Astrophysics Data System (ADS)
Selvadurai, Paul A.; Glaser, Steven D.; Parker, Jessica M.
2017-03-01
Spatial variations in frictional properties on natural faults are believed to be a factor influencing the presence of slow slip events (SSEs). This effect was tested on a laboratory frictional interface between two polymethyl methacrylate (PMMA) bodies. We studied the evolution of slip and slip rates that varied systematically based on the application of both high and low normal stress (σ0=0.8 or 0.4 MPa) and the far-field loading rate (VLP). A spontaneous, frictional rupture expanded from the central, weaker, and more compliant section of the fault that had fewer asperities. Slow rupture propagated at speeds Vslow˜0.8 to 26 mm s-1 with slip rates from 0.01 to 0.2 μm s-1, resulting in stress drops around 100 kPa. During certain nucleation sequences, the fault experienced a partial stress drop, referred to as precursor detachment fronts in tribology. Only at the higher level of normal stress did these fronts exist, and the slip and slip rates mimicked the moment and moment release rates during the 2013-2014 Boso SSE in Japan. The laboratory detachment fronts showed rupture propagation speeds Vslow/VR∈ (5 to 172) × 10-7 and stress drops ˜ 100 kPa, which both scaled to the aforementioned SSE. Distributions of asperities, measured using a pressure sensitive film, increased in complexity with additional normal stress—an increase in normal stress caused added complexity by increasing both the mean size and standard deviation of asperity distributions, and this appeared to control the presence of the detachment front.
Chwiej, Joanna; Skoczen, Agnieszka; Janeczko, Krzysztof; Kutorasinska, Justyna; Matusiak, Katarzyna; Figiel, Henryk; Dumas, Paul; Sandt, Christophe; Setkowicz, Zuzanna
2015-04-07
In this study, ketogenic diet-induced biochemical changes occurring in normal and epileptic hippocampal formations were compared. Four groups of rats were analyzed, namely seizure experiencing animals and normal rats previously fed with ketogenic (KSE and K groups respectively) or standard laboratory diet (NSE and N groups respectively). Synchrotron radiation based Fourier-transform infrared microspectroscopy was used for the analysis of distributions of the main organic components (proteins, lipids, compounds containing phosphate group(s)) and their structural modifications as well as anomalies in creatine accumulation with micrometer spatial resolution. Infrared spectra recorded in the molecular layers of the dentate gyrus (DG) areas of normal rats on a ketogenic diet (K) presented increased intensity of the 1740 cm(-1) absorption band. This originates from the stretching vibrations of carbonyl groups and probably reflects increased accumulation of ketone bodies occurring in animals on a high fat diet compared to those fed with a standard laboratory diet (N). The comparison of K and N groups showed, moreover, elevated ratios of absorbance at 1634 and 1658 cm(-1) for DG internal layers and increased accumulation of creatine deposits in sector 3 of the Ammon's horn (CA3) hippocampal area of ketogenic diet fed rats. In multiform and internal layers of CA3, seizure experiencing animals on ketogenic diet (KSE) presented a lower ratio of absorbance at 1634 and 1658 cm(-1) compared to rats on standard laboratory diet (NSE). Moreover, in some of the examined cellular layers, the increased intensity of the 2924 cm(-1) lipid band as well as the massifs of 2800-3000 cm(-1) and 1360-1480 cm(-1), was found in KSE compared to NSE animals. The intensity of the 1740 cm(-1) band was diminished in DG molecular layers of KSE rats. The ketogenic diet did not modify the seizure induced anomalies in the unsaturation level of lipids or the number of creatine deposits.
Empirical analysis on the runners' velocity distribution in city marathons
NASA Astrophysics Data System (ADS)
Lin, Zhenquan; Meng, Fan
2018-01-01
In recent decades, much research has been performed on human temporal activity and mobility patterns, while few investigations have examined the features of the velocity distributions of human mobility patterns. In this paper, we investigated empirically the velocity distributions of finishers in the New York City, Chicago, Berlin and London marathons. By statistical analyses of the datasets of finish time records, we captured some statistical features of human behaviors in marathons: (1) The velocity distributions of all finishers and of partial finishers in the fastest age group both follow a log-normal distribution; (2) In the New York City marathon, the velocity distribution of all male runners in eight 5-kilometer internal timing courses undergoes two transitions: from a log-normal distribution at the initial stage (several initial courses) to a Gaussian distribution at the middle stage (several middle courses), and back to a log-normal distribution at the last stage (several last courses); (3) The intensity of the competition, which is described by the root-mean-square value of the rank changes of all runners, weakens from the initial stage to the middle stage, corresponding to the transition of the velocity distribution from log-normal to Gaussian, and when the competition gets stronger in the last course of the middle stage, there is a transition from the Gaussian distribution back to a log-normal one at the last stage. This study may enrich research on human mobility patterns and draw attention to the velocity features of human mobility.
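A minimal sketch of the distribution comparison used above, fitting log-normal and Gaussian models to a set of velocities and comparing Kolmogorov-Smirnov statistics; the velocities are simulated stand-ins for marathon split data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Illustrative finisher velocities (m/s); real marathon split data would be used in practice.
v = rng.lognormal(mean=np.log(3.0), sigma=0.15, size=5000)

# Compare log-normal and Gaussian fits with the Kolmogorov-Smirnov statistic.
shape, loc, scale = stats.lognorm.fit(v, floc=0)
mu, sd = stats.norm.fit(v)
ks_lognormal = stats.kstest(v, "lognorm", args=(shape, loc, scale)).statistic
ks_normal = stats.kstest(v, "norm", args=(mu, sd)).statistic
print(f"KS log-normal: {ks_lognormal:.4f}, KS normal: {ks_normal:.4f}")
```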
Anorexia Nervosa: Analysis of Trabecular Texture with CT
Tabari, Azadeh; Torriani, Martin; Miller, Karen K.; Klibanski, Anne; Kalra, Mannudeep K.
2017-01-01
Purpose To determine indexes of skeletal integrity by using computed tomographic (CT) trabecular texture analysis of the lumbar spine in patients with anorexia nervosa and normal-weight control subjects and to determine body composition predictors of trabecular texture. Materials and Methods This cross-sectional study was approved by the institutional review board and compliant with HIPAA. Written informed consent was obtained. The study included 30 women with anorexia nervosa (mean age ± standard deviation, 26 years ± 6) and 30 normal-weight age-matched women (control group). All participants underwent low-dose single-section quantitative CT of the L4 vertebral body with use of a calibration phantom. Trabecular texture analysis was performed by using software. Skewness (asymmetry of gray-level pixel distribution), kurtosis (pointiness of pixel distribution), entropy (inhomogeneity of pixel distribution), and mean value of positive pixels (MPP) were assessed. Bone mineral density and abdominal fat and paraspinal muscle areas were quantified with quantitative CT. Women with anorexia nervosa and normal-weight control subjects were compared by using the Student t test. Linear regression analyses were performed to determine associations between trabecular texture and body composition. Results Women with anorexia nervosa had higher skewness and kurtosis, lower MPP (P < .001), and a trend toward lower entropy (P = .07) compared with control subjects. Bone mineral density, abdominal fat area, and paraspinal muscle area were inversely associated with skewness and kurtosis and positively associated with MPP and entropy. Texture parameters, but not bone mineral density, were associated with lowest lifetime weight and duration of amenorrhea in anorexia nervosa. Conclusion Patients with anorexia nervosa had increased skewness and kurtosis and decreased entropy and MPP compared with normal-weight control subjects. These parameters were associated with lowest lifetime weight and duration of amenorrhea, but there were no such associations with bone mineral density. These findings suggest that trabecular texture analysis might contribute information about bone health in anorexia nervosa that is independent of that provided with bone mineral density. © RSNA, 2016 PMID:27797678
On the Seasonality of Sudden Stratospheric Warmings
NASA Astrophysics Data System (ADS)
Reichler, T.; Horan, M.
2017-12-01
The downward influence of sudden stratospheric warmings (SSWs) creates significant tropospheric circulation anomalies that last for weeks. It is therefore of theoretical and practical interest to understand the time when SSWs are most likely to occur and the controlling factors for the temporal distribution of SSWs. Conceivably, the distribution between mid-winter and late-winter is controlled by the interplay between decreasing eddy convergence in the region of the polar vortex and the weakening strength of the polar vortex. General circulation models (GCMs) tend to produce SSW maxima later in winter than observations, which has been considered as a model deficiency. However, the observed record is short, suggesting that under-sampling of SSWs may contribute to this discrepancy. Here, we study the climatological frequency distribution of SSWs and related events in a long control simulation with a stratosphere resolving GCM. We also create a simple statistical model to determine the primary factors controlling the SSW distribution. The statistical model is based on the daily climatological mean, standard deviation, and autocorrelation of stratospheric winds, and assumes that the winds follow a normal distribution. We find that the null hypothesis, that model and observations stem from the same distribution, cannot be rejected, suggesting that the mid-winter SSW maximum seen in the observations is due to sampling uncertainty. We also find that the statistical model faithfully reproduces the seasonal distribution of SSWs, and that the decreasing climatological strength of the polar vortex is the primary factor for it. We conclude that the late-winter SSW maximum seen in most models is realistic and that late events will be more prominent in future observations. We further conclude that SSWs simply form the tail of normally distributed stratospheric winds, suggesting that there is a continuum of weak polar vortex states and that statistically there is nothing special about the zero-threshold used to define SSWs.
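The statistical model described above can be sketched as an AR(1) process with a seasonally varying climatological mean and standard deviation, with an SSW registered whenever the wind crosses the zero threshold. The climatology, autocorrelation, and winter length below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_winters, n_days = 5000, 150             # winters simulated, days per extended winter

# Assumed climatology of the 10 hPa zonal-mean zonal wind (m/s): a strong vortex in
# mid-winter that weakens toward spring; the numbers are illustrative only.
day = np.arange(n_days)
clim_mean = 30.0 - 25.0 * (day / n_days) ** 2
clim_std = 10.0 + 5.0 * day / n_days
phi = 0.97                                 # assumed day-to-day autocorrelation

ssw_count = np.zeros(n_days)
for _ in range(n_winters):
    z = np.empty(n_days)
    z[0] = rng.normal()
    for t in range(1, n_days):             # AR(1) anomalies, normal by construction
        z[t] = phi * z[t - 1] + np.sqrt(1.0 - phi**2) * rng.normal()
    u = clim_mean + clim_std * z
    below = u < 0.0                        # SSW criterion: wind reversal (zero threshold)
    onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    ssw_count[onsets] += 1

print("day of maximum SSW frequency:", int(ssw_count.argmax()), "of", n_days)
```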
Evaluation of portfolio credit risk based on survival analysis for progressive censored data
NASA Astrophysics Data System (ADS)
Jaber, Jamil J.; Ismail, Noriszura; Ramli, Siti Norafidah Mohd
2017-04-01
In credit risk management, the Basel committee provides a choice of three approaches to financial institutions for calculating the required capital: the standardized approach, the Internal Ratings-Based (IRB) approach, and the Advanced IRB approach. The IRB approach is usually preferred over the standardized approach due to its higher accuracy and lower capital charges. This paper uses several parametric models (exponential, log-normal, gamma, Weibull, log-logistic, Gompertz) to evaluate the credit risk of the corporate portfolio in Jordanian banks based on a monthly sample collected from January 2010 to December 2015. The best model is selected using several goodness-of-fit criteria (MSE, AIC, BIC). The results indicate that the Gompertz distribution is the best parametric model for the data.
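A sketch of the model-selection step using maximum-likelihood fits and AIC/BIC; the times-to-default are simulated and censoring is ignored here, both of which are simplifying assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Illustrative times-to-default in months (uncensored, for simplicity).
t = rng.gamma(shape=2.0, scale=12.0, size=300)

candidates = {
    "exponential": stats.expon,
    "log-normal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "log-logistic": stats.fisk,
    "gompertz": stats.gompertz,
}
for name, dist in candidates.items():
    params = dist.fit(t, floc=0)                 # fix the location parameter at zero
    ll = dist.logpdf(t, *params).sum()
    k = len(params) - 1                          # loc is fixed, so it is not an estimated parameter
    aic, bic = 2 * k - 2 * ll, k * np.log(t.size) - 2 * ll
    print(f"{name:12s} AIC={aic:8.1f}  BIC={bic:8.1f}")
```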
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augsten, Kamil
We present a measurement of top quark polarization in $t\bar{t}$ pair production in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV using data corresponding to 9.7 fb$^{-1}$ of integrated luminosity recorded with the D0 detector at the Fermilab Tevatron Collider. We consider final states containing a lepton and at least three jets. The polarization is measured through the distribution of lepton angles along three axes: the beam axis, the helicity axis, and the transverse axis normal to the $t\bar{t}$ production plane. This is the first measurement of top quark polarization at the Tevatron using lepton+jets final states and the first measurement of the transverse polarization in $t\bar{t}$ production. The observed distributions are consistent with standard model predictions of nearly no polarization.
Martens, Jürgen
2005-01-01
The hygienic performance of biowaste composting plants to ensure the quality of compost is of high importance. Existing compost quality assurance systems reflect this importance through intensive testing of hygienic parameters. In many countries, compost quality assurance systems are being developed and it is necessary to check and to optimize the methods used to assess the hygienic performance of composting plants. A set of indicator methods to evaluate the hygienic performance of normally operating biowaste composting plants was developed. The indicator methods were developed by investigating temperature measurements from indirect process tests at 23 composting plants belonging to 11 design types of the Hygiene Design Type Testing System of the German Compost Quality Association (BGK e.V.). The presented indicator methods are the grade of hygienization, the basic curve shape, and the hygienic risk area. The temperature courses of single plants are not normally distributed, but they were grouped by cluster analysis into normally distributed subgroups. That was a precondition for developing the indicator methods mentioned above. For each plant the grade of hygienization was calculated through transformation into the standard normal distribution. It gives the percentage of the entire data set that meets the legal temperature requirements. The hygienization grade differs widely within the design types and falls below 50% for about one fourth of the plants. The subgroups are divided visually into basic curve shapes which stand for different process courses. For each plant, the composition of the entire data set out of the various basic curve shapes can be used as an indicator of the basic process conditions. Some basic curve shapes indicate abnormal process courses which can be remedied through process optimization. A hygienic risk area concept using the 90% range of variation of the normal temperature courses was introduced. Comparing the design-type range of variation with the legal temperature defaults revealed hygienic risk areas over the temperature courses which could be minimized through process optimization. The hygienic risk areas of four design types indicate a suboptimal hygienic performance.
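The grade-of-hygienization calculation described above can be sketched as a simple exceedance probability under a fitted normal model; the temperature statistics and legal threshold below are illustrative assumptions.

```python
from scipy import stats

# Illustrative values for one normally distributed temperature subgroup of a plant.
mu, sigma = 58.0, 4.0     # mean and standard deviation of process temperatures (deg C), assumed
t_legal = 55.0            # assumed legal minimum temperature requirement

# Transform into the standard normal distribution and take the exceedance probability.
z = (t_legal - mu) / sigma
grade_of_hygienization = 100.0 * (1.0 - stats.norm.cdf(z))
print(f"grade of hygienization: {grade_of_hygienization:.1f} %")
```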
Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles
NASA Astrophysics Data System (ADS)
Dennis, S.
2016-02-01
Most significant ocean acoustic propagation occurs over tens of kilometers, at scales that are small compared with ocean basins and with most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example transmission loss (TL) probability density functions (PDFs) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating sound speed distributions over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points to determine only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF. This work was sponsored by the Office of Naval Research.
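The Karhunen-Loève step can be sketched with an SVD of a centred ensemble of sound-speed fields; the ensemble below is synthetic and the 95% variance cut-off is an assumed choice, with the retained coefficients standing in for the inputs to a Hermite PC expansion.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_grid = 32, 500                   # ensemble members x space-time sound-speed samples
# Synthetic sound-speed fields (m/s) with smooth spatial structure; purely illustrative.
ensemble = 1500.0 + rng.normal(0, 2, (n_ens, n_grid)).cumsum(axis=1) * 0.05

# Karhunen-Loeve expansion of the sound-speed ensemble via SVD of the centred anomaly matrix.
mean_field = ensemble.mean(axis=0)
anom = ensemble - mean_field
u, s, vt = np.linalg.svd(anom, full_matrices=False)

# Retain the leading modes explaining ~95% of the variance; these KL coefficients would
# then drive a Hermite polynomial-chaos expansion of transmission loss.
var = s**2 / (n_ens - 1)
k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95) + 1)
kl_modes = vt[:k]                          # spatial modes
kl_coeffs = u[:, :k] * s[:k]               # per-member coefficients (approximately normal)
print(f"retained {k} KL modes out of {n_ens}")
```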
R/S analysis of reaction time in Neuron Type Test for human activity in civil aviation
NASA Astrophysics Data System (ADS)
Zhang, Hong-Yan; Kang, Ming-Cui; Li, Jing-Qiang; Liu, Hai-Tao
2017-03-01
Human factors have become the most serious problem leading to accidents in civil aviation, which stimulates the design and analysis of the Neuron Type Test (NTT) system to explore the intrinsic properties and patterns behind the behaviors of professionals and students in civil aviation. In the experiment, normal practitioners' reaction time sequences, collected from the NTT, approximately exhibit a log-normal distribution. We apply the χ² test to assess goodness-of-fit after transforming the time sequences with the Box-Cox transformation, in order to cluster practitioners. The long-term correlation of each individual practitioner's time sequence is represented by the Hurst exponent via Rescaled Range analysis, also known as Range/Standard deviation (R/S) analysis. Differences in the Hurst exponent suggest the existence of different collective behaviors and different intrinsic patterns of human factors in civil aviation.
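Below is a small self-contained sketch of the rescaled range (R/S) estimate of the Hurst exponent applied to a simulated reaction-time sequence; the window scheme and the log-normal test data are illustrative assumptions, not the NTT data.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = x.size
    windows = [w for w in (2 ** np.arange(3, int(np.log2(n)))) if w >= min_window]
    log_w, log_rs = [], []
    for w in windows:
        rs = []
        for start in range(0, n - w + 1, w):          # non-overlapping blocks of length w
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()                 # range of the cumulative deviations
            s = seg.std(ddof=1)                       # standard deviation of the block
            if s > 0:
                rs.append(r / s)
        if rs:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_w, log_rs, 1)[0]            # slope of log(R/S) vs log(w) = Hurst exponent

rng = np.random.default_rng(5)
reaction_times = rng.lognormal(mean=-0.5, sigma=0.2, size=2048)  # illustrative NTT-like sequence
print(f"Hurst exponent: {hurst_rs(reaction_times):.2f}")          # ~0.5 for uncorrelated data
```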
Differences in word associations to pictures and words.
Saffran, Eleanor M; Coslett, H Branch; Keener, Matthew T
2003-01-01
Normal subjects were asked to produce the "first word that comes to mind" in response to pictures or words that differed with respect to manipulability and animacy. In separate analyses across subjects and items, normal subjects produced a significantly higher proportion of action words (that is, verbs) to pictures as compared to words, to manipulable as compared to non-manipulable stimuli and to inanimate as compared to animate stimuli. The largest proportion of action words was elicited by pictures of non-living, manipulable objects. Furthermore, associates to words matched standard word associates significantly more often than those elicited by pictures. These data suggest that pictures and words initially contact different forms of conceptual information and are consistent with an account of semantic organization that assumes that information is distributed across different domains reflecting the mode of acquisition of that knowledge.
Notes on power of normality tests of error terms in regression models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Střelec, Luboš
2015-03-10
Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to assess non-normality of the error terms may lead to incorrect results of usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
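For illustration, a sketch of classical normality checks applied to estimated regression error terms (not the paper's RT class of robust tests, which is not specified here); the regression and the skewed disturbances are simulated.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(11)
x = rng.uniform(0, 10, 200)
# Regression with skewed (non-normal) disturbances, to illustrate the tests.
y = 2.0 + 0.5 * x + rng.gamma(shape=2.0, scale=1.0, size=200) - 2.0

resid = sm.OLS(y, sm.add_constant(x)).fit().resid

# Classical normality tests applied to the estimated error terms.
print("Shapiro-Wilk  p =", round(stats.shapiro(resid).pvalue, 4))
print("Jarque-Bera   p =", round(stats.jarque_bera(resid).pvalue, 4))
print("Anderson-Darling statistic =", round(stats.anderson(resid, dist="norm").statistic, 3))
```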
[Rare earth elements contents and distribution characteristics in nasopharyngeal carcinoma tissue].
Zhang, Xiangmin; Lan, Xiaolin; Zhang, Lingzhen; Xiao, Fufu; Zhong, Zhaoming; Ye, Guilin; Li, Zong; Li, Shaojin
2016-03-01
To investigate the rare earth element (REE) contents and distribution characteristics in nasopharyngeal carcinoma (NPC) tissue in the Gannan region. Thirty patients with NPC from the Gannan region were included in this study. The REE contents were measured by inductively coupled plasma tandem mass spectrometry (ICP-MS/MS) in the 30 patients, and the REE contents and distribution were analyzed. For most elements, the average standard deviation of REE values in cancerous and normal tissues was the smallest. Light REE contents were higher than medium REE contents, and medium REE contents were higher than heavy REE contents. Changes in REE contents in nasopharyngeal carcinoma were clearly variable: the absolute values of Nd, Ce, Pr, Gd and other light rare earth elements varied widely, the degrees of change of Yb, Tb, Ho and other heavy rare earth elements also varied widely, and negative Eu and Ce anomalies were present (δEu = 0.3855, δCe = 0.5234). The distribution characteristics of REE contents in NPC patients are consistent with the odd-even (parity) distribution. With increasing atomic number, the content declines in a wave-like pattern. The distribution patterns show a depletion of heavy REEs and an enrichment of light REEs, with negative Eu and Ce anomalies.
Modeling Error Distributions of Growth Curve Models through Bayesian Methods
ERIC Educational Resources Information Center
Zhang, Zhiyong
2016-01-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…
ASYMPTOTIC DISTRIBUTION OF ΔAUC, NRIs, AND IDI BASED ON THEORY OF U-STATISTICS
Demler, Olga V.; Pencina, Michael J.; Cook, Nancy R.; D’Agostino, Ralph B.
2017-01-01
The change in AUC (ΔAUC), the IDI, and NRI are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues we unite the ΔAUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ΔAUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ΔAUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ΔAUC, NRIs, or IDI. In the former case SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ΔAUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ΔAUC. PMID:28627112
Gough, Albert H; Chen, Ning; Shun, Tong Ying; Lezon, Timothy R; Boltz, Robert C; Reese, Celeste E; Wagner, Jacob; Vernetti, Lawrence A; Grandis, Jennifer R; Lee, Adrian V; Stern, Andrew M; Schurdak, Mark E; Taylor, D Lansing
2014-01-01
One of the greatest challenges in biomedical research, drug discovery and diagnostics is understanding how seemingly identical cells can respond differently to perturbagens including drugs for disease treatment. Although heterogeneity has become an accepted characteristic of a population of cells, in drug discovery it is not routinely evaluated or reported. The standard practice for cell-based, high content assays has been to assume a normal distribution and to report a well-to-well average value with a standard deviation. To address this important issue we sought to define a method that could be readily implemented to identify, quantify and characterize heterogeneity in cellular and small organism assays to guide decisions during drug discovery and experimental cell/tissue profiling. Our study revealed that heterogeneity can be effectively identified and quantified with three indices that indicate diversity, non-normality and percent outliers. The indices were evaluated using the induction and inhibition of STAT3 activation in five cell lines where the systems response including sample preparation and instrument performance were well characterized and controlled. These heterogeneity indices provide a standardized method that can easily be integrated into small and large scale screening or profiling projects to guide interpretation of the biology, as well as the development of therapeutics and diagnostics. Understanding the heterogeneity in the response to perturbagens will become a critical factor in designing strategies for the development of therapeutics including targeted polypharmacology.
Statistical Considerations of Data Processing in Giovanni Online Tool
NASA Technical Reports Server (NTRS)
Suhung, Shen; Leptoukh, G.; Acker, J.; Berrick, S.
2005-01-01
The GES DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni) is a web-based interface for the rapid visualization and analysis of gridded data from a number of remote sensing instruments. The GES DISC currently employs several Giovanni instances to analyze various products, such as Ocean-Giovanni for ocean products from SeaWiFS and MODIS-Aqua, TOMS & OMI Giovanni for atmospheric chemical trace gases from TOMS and OMI, and MOVAS for aerosols from MODIS, etc. (http://giovanni.gsfc.nasa.gov). Foremost among the Giovanni statistical functions is data averaging. Two aspects of this function are addressed here. The first deals with the accuracy of averaging gridded mapped products vs. averaging from the ungridded Level 2 data. Some mapped products contain mean values only; others contain additional statistics, such as the number of pixels (NP) for each grid cell, standard deviation, etc. Since NP varies spatially and temporally, averaging with or without weighting by NP will give different results. In this paper, we address differences among various weighting algorithms for some datasets utilized in Giovanni. The second aspect is related to different averaging methods affecting data quality and interpretation for data with non-normal distributions. The present study demonstrates results of different spatial averaging methods using gridded SeaWiFS Level 3 mapped monthly chlorophyll a data. Spatial averages were calculated using three different methods: arithmetic mean (AVG), geometric mean (GEO), and maximum likelihood estimator (MLE). Biogeochemical data, such as chlorophyll a, are usually considered to have a log-normal distribution. The study determined that differences between methods tend to increase with increasing size of a selected coastal area, with no significant differences in most open oceans. The GEO method consistently produces values lower than AVG and MLE. The AVG method produces values larger than MLE in some cases, but smaller in other cases. Further studies indicated that significant differences between the AVG and MLE methods occurred in coastal areas where data have large spatial variations and a log-bimodal distribution instead of a log-normal distribution.
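A minimal sketch of the three spatial averaging methods on log-normally distributed values; interpreting MLE as the log-normal maximum likelihood estimator of the mean is an assumption here, and the chlorophyll values are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative gridded chlorophyll-a values (mg m^-3), roughly log-normal as in coastal waters.
chl = rng.lognormal(mean=-1.0, sigma=1.2, size=10_000)

log_chl = np.log(chl)
avg = chl.mean()                                          # arithmetic mean (AVG)
geo = np.exp(log_chl.mean())                              # geometric mean (GEO)
mle = np.exp(log_chl.mean() + 0.5 * log_chl.var(ddof=1))  # log-normal MLE of the mean

print(f"AVG={avg:.3f}  GEO={geo:.3f}  MLE={mle:.3f}")     # GEO < MLE ~ AVG for log-normal data
```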
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets
Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.
2016-01-01
Summary Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601
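The attenuation problem motivating PCAN can be reproduced with a short simulation (not the PCAN model itself): correlated natural parameters generate low-count Poisson data, and the sample Pearson correlation on the counts is pulled toward zero. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n, rho = 500, 0.8

# Correlated natural parameters (log-rates) for two sequencing features with low expected counts.
cov = [[1.0, rho], [rho, 1.0]]
log_rate = rng.multivariate_normal(mean=[-2.0, -2.0], cov=cov, size=n)
counts = rng.poisson(np.exp(log_rate))

pearson_latent = np.corrcoef(log_rate[:, 0], log_rate[:, 1])[0, 1]
pearson_counts = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
print(f"correlation of latent log-rates: {pearson_latent:.2f}")
print(f"sample Pearson on raw counts   : {pearson_counts:.2f}")   # strongly attenuated toward 0
```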
Stawarczyk, Bogna; Ozcan, Mutlu; Hämmerle, Christoph H F; Roos, Malgorzata
2012-05-01
The aim of this study was to compare the fracture load of veneered anterior zirconia crowns using normal and Weibull distributions of complete and censored data. Standardized zirconia frameworks for maxillary canines were milled using a CAD/CAM system and randomly divided into 3 groups (N=90, n=30 per group). They were veneered with three veneering ceramics, namely GC Initial ZR, Vita VM9, and IPS e.max Ceram, using the layering technique. The crowns were cemented with glass ionomer cement on metal abutments. The specimens were then loaded to fracture (1 mm/min) in a universal testing machine. The data were analyzed using the classical method (normal data distribution (μ, σ); Levene test and one-way ANOVA) and according to Weibull statistics (s, m). In addition, fracture load results were analyzed depending on complete and censored failure types (only chipping vs. total fracture together with chipping). When computed with complete data, significantly higher mean fracture loads (N) were observed for GC Initial ZR (μ=978, σ=157; s=1043, m=7.2) and VITA VM9 (μ=1074, σ=179; s=1139, m=7.8) than for IPS e.max Ceram (μ=798, σ=174; s=859, m=5.8) (p<0.05) by classical and Weibull statistics, respectively. When the data were censored for only total fracture, IPS e.max Ceram presented the lowest fracture load for chipping with both the classical distribution (μ=790, σ=160) and Weibull statistics (s=836, m=6.5). When total fracture with chipping (classical distribution) was considered as failure, IPS e.max Ceram did not show a significantly different fracture load for total fracture (μ=1054, σ=110) compared to the other groups (GC Initial ZR: μ=1039, σ=152; VITA VM9: μ=1170, σ=166). According to the Weibull-distributed data, VITA VM9 showed a significantly higher fracture load (s=1228, m=9.4) than the other groups. Both the classical distribution and Weibull statistics for complete data yielded similar outcomes. Censored data analysis of all ceramic systems based on failure types is essential and brings additional information regarding the susceptibility to chipping or total fracture. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
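For complete (uncensored) data, the two descriptions used above can be sketched as follows; the fracture loads are simulated and censoring is deliberately ignored in this illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Illustrative fracture loads (N) for one veneering ceramic group (complete, uncensored data).
loads = rng.weibull(7.0, 30) * 1000.0

# Classical description: normal mean and standard deviation.
mu, sigma = loads.mean(), loads.std(ddof=1)

# Weibull description: shape m (Weibull modulus) and characteristic strength s.
m, _, s = stats.weibull_min.fit(loads, floc=0)
print(f"normal:  mu={mu:.0f} N, sigma={sigma:.0f} N")
print(f"Weibull: m={m:.1f}, s={s:.0f} N")
```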
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2 second intervals sampled 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics were compared using a "leave-one-out" cross validation method in which individual normal subjects were withdrawn and then statistically classified as being either normal or abnormal based on the remaining subjects. Log10 transforms approximated Gaussian distribution in the range of 95% to 99% accuracy. Parametric Z score tests at P < .05 cross-validation demonstrated an average misclassification rate of approximately 4.25%, and range over the 2,394 gray matter pixels was 27.66% to 0.11%. At P < .01 parametric Z score cross-validation false positives were 0.26% and ranged from 6.65% to 0% false positives. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification error rate of 7.64% and ranged from 43.37% to 0.04% false positives. The nonparametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives of the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, adequate approximation to Gaussian distribution and high cross-validation can be achieved by the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
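A schematic version of the leave-one-out procedure with a log10 transform and parametric Z scores; the "current source" values are simulated, so the expected false-positive rate is simply the nominal alpha level.

```python
import numpy as np

rng = np.random.default_rng(6)
n_subj, n_pixels = 43, 2394
current = rng.lognormal(mean=0.0, sigma=0.5, size=(n_subj, n_pixels))  # illustrative LORETA currents

log_current = np.log10(current)           # log10 transform to approximate Gaussianity
alpha_z = 1.96                            # two-tailed P < .05 threshold

false_positives = 0.0
for i in range(n_subj):                   # leave-one-out cross-validation
    rest = np.delete(log_current, i, axis=0)
    z = (log_current[i] - rest.mean(axis=0)) / rest.std(axis=0, ddof=1)
    false_positives += np.mean(np.abs(z) > alpha_z)

print(f"average per-pixel false-positive rate: {100 * false_positives / n_subj:.2f} %")
```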
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product–moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (R_p) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_s and r_p. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research. PsycINFO Database Record (c) 2016 APA, all rights reserved
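A small simulation in the spirit of the comparison above, estimating the sampling standard deviations of r_p and r_s for bivariate normal data; the sample size, correlation, and replication count are assumed values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n_rep, n, rho = 2000, 50, 0.6
cov = [[1.0, rho], [rho, 1.0]]

rp, rs = np.empty(n_rep), np.empty(n_rep)
for i in range(n_rep):
    xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # normally distributed variables
    rp[i] = stats.pearsonr(xy[:, 0], xy[:, 1])[0]
    rs[i] = stats.spearmanr(xy[:, 0], xy[:, 1])[0]

print(f"SD of r_p: {rp.std(ddof=1):.4f}")
print(f"SD of r_s: {rs.std(ddof=1):.4f}")                    # larger for normal data, as reported
```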
Bono, Roser; Blanca, María J.; Arnau, Jaume; Gómez-Benito, Juana
2017-01-01
Statistical analysis is crucial for research and the choice of analytical technique should take into account the specific distribution of data. Although the data obtained from health, educational, and social sciences research are often not normally distributed, there are very few studies detailing which distributions are most likely to represent data in these disciplines. The aim of this systematic review was to determine the frequency of appearance of the most common non-normal distributions in the health, educational, and social sciences. The search was carried out in the Web of Science database, from which we retrieved the abstracts of papers published between 2010 and 2015. The selection was made on the basis of the title and the abstract, and was performed independently by two reviewers. The inter-rater reliability for article selection was high (Cohen’s kappa = 0.84), and agreement regarding the type of distribution reached 96.5%. A total of 262 abstracts were included in the final review. The distribution of the response variable was reported in 231 of these abstracts, while in the remaining 31 it was merely stated that the distribution was non-normal. In terms of their frequency of appearance, the most-common non-normal distributions can be ranked in descending order as follows: gamma, negative binomial, multinomial, binomial, lognormal, and exponential. In addition to identifying the distributions most commonly used in empirical studies these results will help researchers to decide which distributions should be included in simulation studies examining statistical procedures. PMID:28959227
Log-Normal Distribution of Cosmic Voids in Simulations and Mocks
NASA Astrophysics Data System (ADS)
Russell, E.; Pycke, J.-R.
2017-01-01
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
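A sketch of fitting a three-parameter log-normal (shape, location, scale) to void radii with scipy; the radii here are synthetic stand-ins for catalog data, and the parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
# Illustrative void effective radii (Mpc/h); a real analysis would use the Cosmic Void Catalog.
radii = stats.lognorm.rvs(s=0.5, loc=3.0, scale=8.0, size=2000, random_state=rng)

# Three-parameter log-normal fit: shape (controls skewness), location (threshold), scale.
shape, loc, scale = stats.lognorm.fit(radii)
ks = stats.kstest(radii, "lognorm", args=(shape, loc, scale)).statistic
skew = stats.lognorm.stats(shape, loc=loc, scale=scale, moments="s")
print(f"shape={shape:.2f}, loc={loc:.2f}, scale={scale:.2f}, KS={ks:.3f}, skewness={float(skew):.2f}")
```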
Non-operative management (NOM) of blunt hepatic trauma: 80 cases.
Özoğul, Bünyami; Kısaoğlu, Abdullah; Aydınlı, Bülent; Öztürk, Gürkan; Bayramoğlu, Atıf; Sarıtemur, Murat; Aköz, Ayhan; Bulut, Özgür Hakan; Atamanalp, Sabri Selçuk
2014-03-01
The liver is the most frequently injured organ in abdominal trauma. We present a group of patients with blunt hepatic trauma who were managed without any invasive diagnostic tools and/or surgical intervention. A total of 80 patients with blunt liver injury who were hospitalized in the general surgery clinic or in other clinics due to concomitant injuries were followed non-operatively. Normally distributed numeric variables were evaluated by Student's t-test or one-way analysis of variance, while non-normally distributed variables were analyzed by the Mann-Whitney U-test or Kruskal-Wallis variance analysis. The chi-square test was also employed for the comparison of categorical variables. Statistical significance was assumed for p<0.05. There was no significant relationship between patients' Hgb level and liver injury grade, outcome, or mechanism of injury. Likewise, there was no statistical relationship between liver injury grade, outcome, or mechanism of injury and ALT or AST levels. There was no mortality in any of the patients. During the last quarter century, changes in the diagnosis and treatment of liver injury have been associated with increased survival. NOM of liver injury in patients with stable hemodynamics and hepatic trauma seems to be the gold standard.
A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction
NASA Technical Reports Server (NTRS)
Smith, Samantha A.; DelGenio, Anthony D.
1998-01-01
A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but changed as cloud fraction decreased to exponential distributions which peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It only requires basic meteorological parameters, the depth of the saturated layer and the standard deviation of cloud depth as input.
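A minimal sketch of the depth-driven inhomogeneity idea, under the added assumption that the in-cloud ice mixing ratio is proportional to cloud depth and that the clear fraction corresponds to the Gaussian depth falling at or below zero (the paper's exact mapping is not reproduced here); the depth statistics and proportionality constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
mean_depth, sd_depth = 1200.0, 700.0      # saturated-layer depth statistics (m), assumed
q_per_m = 1.0e-5                           # assumed ice mixing ratio per metre of cloud depth

depth = rng.normal(mean_depth, sd_depth, 100_000)   # Gaussian cloud-depth variability
cloudy = depth > 0.0
cloud_fraction = cloudy.mean()                       # clear where the Gaussian depth is <= 0
q_ice = q_per_m * depth[cloudy]                      # in-cloud ice water mixing ratio

print(f"cloud fraction: {cloud_fraction:.2f}")
hist, edges = np.histogram(q_ice, bins=20)
print("modal ice mixing ratio bin:", edges[hist.argmax()], "-", edges[hist.argmax() + 1])
```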
Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability
NASA Astrophysics Data System (ADS)
Ishibashi, Keisuke; Kawahara, Ryoichi; Mori, Tatsuya; Kondoh, Tsuyoshi; Asano, Shoichiro
We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. These parameters also affect the monitoring burden, so network operators face a trade-off between the monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution, which is parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observation of backbone traffic, which exhibits spatially uncorrelated and temporally long-range dependence. We then derive the equations for detectability. With these equations, we can answer practical questions that arise in actual network operations: what sampling rate to set in order to detect a given volume of anomaly, or, if the required sampling rate is too high for actual operation, what monitoring granularity is optimal for detecting the anomaly given a lower limit on the sampling rate.
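The equations themselves are not reproduced in the abstract. The following minimal Python sketch illustrates the kind of calculation described, under simplifying assumptions that are not from the paper: per-interval traffic volume is Gaussian, an anomaly is a pure mean shift, sampling is independent binomial thinning, and long-range dependence is ignored. All numeric values are hypothetical.

```python
from scipy.stats import norm

def detection_rates(mu, sigma, anomaly, threshold):
    """False-positive and false-negative ratios for a mean-shift anomaly when
    normal per-interval traffic volume is modelled as N(mu, sigma^2) and an
    interval is flagged once its volume exceeds `threshold`.
    (A simplified stand-in for the paper's equations.)"""
    fpr = norm.sf(threshold, loc=mu, scale=sigma)             # flag normal traffic
    fnr = norm.cdf(threshold, loc=mu + anomaly, scale=sigma)  # miss anomalous traffic
    return fpr, fnr

def sampled_params(mu, sigma, p):
    """Rough effect of independent packet sampling at rate p on the per-interval
    packet count: binomial thinning gives mean p*mu and variance roughly
    p^2*sigma^2 + p*(1-p)*mu (temporal correlation is ignored here)."""
    return p * mu, (p**2 * sigma**2 + p * (1 - p) * mu) ** 0.5

# Hypothetical numbers: 1M packets/interval on average, 5% sampling,
# an anomaly adding 200k packets, flagging at mean + 3 sigma of sampled traffic.
p = 0.05
mu_s, sigma_s = sampled_params(mu=1e6, sigma=5e4, p=p)
fpr, fnr = detection_rates(mu_s, sigma_s, anomaly=p * 2e5, threshold=mu_s + 3 * sigma_s)
print(f"FPR = {fpr:.4f}, FNR = {fnr:.4f}")
```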
Rolland, Y; Bézy-Wendling, J; Duvauferrier, R; Coatrieux, J L
1999-03-01
To demonstrate the usefulness of a model of the parenchymous vascularization to evaluate texture analysis methods. Slices with thickness varying from 1 to 4 mm were reformatted from a 3D vascular model corresponding to either normal tissue perfusion or local hypervascularization. Parameters of statistical methods were measured on 16 regions of interest of 128x128 pixels, and mean values and standard deviations were calculated. For each parameter, the performances (discrimination power and stability) were evaluated. Among 11 calculated statistical parameters, three (homogeneity, entropy, mean of gradients) were found to have good discriminating power to differentiate normal perfusion from hypervascularization, but only the gradient mean was found to have good stability with respect to slice thickness. Five parameters (run percentage, run length distribution, long run emphasis, contrast, and gray level distribution) gave intermediate results. Of the remaining three, kurtosis and correlation were found to have little discriminating power, and skewness none. This 3D vascular model, which allows the generation of various examples of vascular textures, is a powerful tool for assessing the performance of texture analysis methods. It improves our knowledge of the methods and should contribute to their a priori choice when designing clinical studies.
Normal theory procedures for calculating upper confidence limits (UCL) on the risk function for continuous responses work well when the data come from a normal distribution. However, if the data come from an alternative distribution, the application of the normal theory procedure...
Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya
2002-04-01
In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of any given element present; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time over the elimination time-course. Sampling is of a stochastic nature, so the set of time points in the elimination phase at which samples were obtained is expected to follow a normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
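A minimal numerical sketch of that argument, with entirely hypothetical parameter values: if the concentration follows first-order elimination c(t) = c0·exp(-k·t) and the sampling times t are normally distributed, then log c is a linear function of a normal variable, so the concentrations come out log-normal.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)

# Hypothetical one-compartment, first-order elimination: c(t) = c0 * exp(-k*t)
c0, k = 10.0, 0.2                             # illustrative initial level and rate constant
t = rng.normal(loc=8.0, scale=2.0, size=500)  # sampling times assumed normally distributed
c = c0 * np.exp(-k * t)                       # element concentrations at those times

# log(c) = log(c0) - k*t is a linear function of a normal variable, hence normal,
# so c itself is log-normally distributed.
stat_raw, p_raw = shapiro(c)
stat_log, p_log = shapiro(np.log(c))
print(f"Shapiro-Wilk p-value, raw concentrations: {p_raw:.2e}")
print(f"Shapiro-Wilk p-value, log concentrations: {p_log:.2f}")
```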
Ho, Andrew D; Yu, Carol C
2015-06-01
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter A. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
Marko, Nicholas F.; Weil, Robert J.
2012-01-01
Introduction: Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods: We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results: Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions: Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
Estimating sales and sales market share from sales rank data for consumer appliances
NASA Astrophysics Data System (ADS)
Touzani, Samir; Van Buskirk, Robert
2016-06-01
Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution and specifically the truncated version are well suited for this purpose. We demonstrate that using sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices, and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
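The calibration details are not given in the abstract. One plausible way to turn a sales rank into a sales proxy under a fitted truncated log-normal is to assign rank r (out of N products) the matching upper-tail quantile of the truncated distribution; the sketch below assumes truncation from above and uses invented shape, scale, and truncation values purely for illustration.

```python
import numpy as np
from scipy.stats import lognorm

def sales_proxy_from_rank(rank, n_products, s, scale, upper):
    """Map a sales rank (1 = best seller) to a sales-volume proxy by taking the
    matching quantile of a log-normal truncated above at `upper`.
    The shape `s`, `scale`, and `upper` are assumed to come from a prior calibration."""
    dist = lognorm(s=s, scale=scale)
    cap = dist.cdf(upper)                      # probability mass below the truncation point
    # rank 1 -> highest surviving quantile, rank N -> lowest
    q = cap * (1.0 - (rank - 0.5) / n_products)
    return dist.ppf(q)

ranks = np.array([1, 10, 100, 1000])
print(sales_proxy_from_rank(ranks, n_products=5000, s=1.2, scale=300.0, upper=20000.0))
```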
Bioelectrical impedance vector distribution in the first year of life.
Savino, Francesco; Grasso, Giulia; Cresi, Francesco; Oggero, Roberto; Silvestro, Leandra
2003-06-01
We assessed the bioelectrical impedance vector distribution in a sample of healthy infants in the first year of life, which is not available in the literature. The study was conducted as a cross-sectional study in 153 healthy Caucasian infants (90 male and 63 female) younger than 1 y, born at full term, adequate for gestational age, free from chronic diseases or growth problems, and not feverish. Z scores for weight, length, cranial circumference, and body mass index for the study population were within the range of +/-1.5 standard deviations according to the Euro-Growth Study references. Concurrent anthropometrics (weight, length, and cranial circumference), body mass index, and bioelectrical impedance (resistance and reactance) measurements were made by the same operator. Whole-body (hand to foot) tetrapolar measurements were performed with a single-frequency (50 kHz), phase-sensitive impedance analyzer. The study population was subdivided into three classes of age for statistical analysis: 0 to 3.99 mo, 4 to 7.99 mo, and 8 to 11.99 mo. Using the bivariate normal distribution of resistance and reactance components standardized by the infant's length, the bivariate 95% confidence limits for the mean impedance vector separated by sex and age groups were calculated and plotted. Further, the bivariate 95%, 75%, and 50% tolerance intervals for individual vector measurements in the first year of life were plotted. Resistance and reactance values often fluctuated during the first year of life, particularly as raw measurements (without normalization by subject's length). However, 95% confidence ellipses of mean vectors from the three age groups overlapped each other, as did confidence ellipses by sex for each age class, indicating no significant vector migration during the first year of life. We obtained an estimate of mean impedance vector in a sample of healthy infants in the first year of life and calculated the bivariate values for an individual vector (95%, 75%, and 50% tolerance ellipses).
Copy number variability of expression plasmids determined by cell sorting and Droplet Digital PCR.
Jahn, Michael; Vorpahl, Carsten; Hübschmann, Thomas; Harms, Hauke; Müller, Susann
2016-12-19
Plasmids are widely used for molecular cloning or production of proteins in laboratory and industrial settings. Constant modification has brought forth countless plasmid vectors whose characteristics in terms of average plasmid copy number (PCN) and stability are rarely known. The crucial factor determining the PCN is the replication system; most replication systems in use today belong to a small number of different classes and are available through repositories like the Standard European Vector Architecture (SEVA). In this study, the PCN was determined in a set of seven SEVA-based expression plasmids only differing in the replication system. The average PCN for all constructs was determined by Droplet Digital PCR and ranged between 2 and 40 per chromosome in the host organism Escherichia coli. Furthermore, a plasmid-encoded EGFP reporter protein served as a means to assess variability in reporter gene expression on the single cell level. Only cells with one type of plasmid (RSF1010 replication system) showed a high degree of heterogeneity with a clear bimodal distribution of EGFP intensity while the others showed a normal distribution. The heterogeneous RSF1010-carrying cell population and one normally distributed population (ColE1 replication system) were further analyzed by sorting cells of sub-populations selected according to EGFP intensity. For both plasmids, low and highly fluorescent sub-populations showed a remarkable difference in PCN, ranging from 9.2 to 123.4 for ColE1 and from 0.5 to 11.8 for RSF1010, respectively. The average PCN determined here for a set of standardized plasmids was generally at the lower end of previously reported ranges and not related to the degree of heterogeneity. Further characterization of a heterogeneous and a homogeneous population demonstrated considerable differences in the PCN of sub-populations. We therefore present direct molecular evidence that the average PCN does not represent the true number of plasmid molecules in individual cells.
Franco, Marcia Rodrigues; Pinto, Rafael Zambelli; Delbaere, Kim; Eto, Bianca Yumie; Faria, Maíra Sgobbi; Aoyagi, Giovana Ayumi; Steffens, Daniel; Pastre, Carlos Marcelo
2018-02-14
The Iconographical Falls Efficacy Scale (Icon-FES) is an innovative tool to assess concern about falling that uses pictures as visual cues to provide more complete environmental contexts. Advantages of the Icon-FES over previous scales include the addition of more demanding balance-related activities, the ability to assess concern about falling in highly functioning older people, and its normal distribution. To perform a cross-cultural adaptation and to assess the measurement properties of the 30-item and 10-item Icon-FES in a community-dwelling Brazilian older population. The cross-cultural adaptation followed the recommendations of international guidelines. We evaluated the measurement properties (i.e. internal consistency, test-retest reproducibility, standard error of measurement, minimal detectable change, construct validity, ceiling/floor effect, data distribution and discriminative validity) in 100 community-dwelling people aged ≥60 years. The 30-item and 10-item Icon-FES-Brazil showed good internal consistency (alpha and omega >0.70) and excellent intra-rater reproducibility (ICC(2,1) = 0.96 and 0.93, respectively). According to the standard error of measurement and minimal detectable change, the magnitudes of change needed to exceed measurement error and variability were 7.2 and 3.4 points for the 30-item and 10-item Icon-FES, respectively. We observed an excellent correlation between both versions of the Icon-FES and the Falls Efficacy Scale - International (rho=0.83, p<0.001 [30-item version]; 0.76, p<0.001 [10-item version]). Both Icon-FES versions showed a normal distribution, no floor/ceiling effects, and were able to discriminate between groups relating to fall risk factors. The Icon-FES-Brazil is a semantically and linguistically appropriate tool with acceptable measurement properties to evaluate concern about falling among the community-dwelling older population. Copyright © 2018 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Publicado por Elsevier Editora Ltda. All rights reserved.
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1977-01-01
Wind vector change with respect to time at Cape Kennedy, Florida, is examined according to the theory of multivariate normality. The joint distribution of the four variables represented by the components of the wind vector at an initial time and after a specified elapsed time is hypothesized to be quadravariate normal; the fourteen statistics of this distribution, calculated from fifteen years of twice-daily rawinsonde data, are presented by monthly reference periods for each month from 0 to 27 km. The hypotheses that the wind component change with respect to time is univariate normal, that the joint distribution of wind component changes is bivariate normal, and that the modulus of the vector wind change is Rayleigh distributed have been tested by comparison with observed distributions. Statistics of the conditional bivariate normal distributions of the vector wind at a future time given the vector wind at an initial time are derived. Wind changes over time periods from one to five hours, calculated from Jimsphere data, are also presented.
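The conditional distribution mentioned in the last step follows from the standard partitioned-Gaussian formulas. A small sketch, with invented monthly statistics standing in for the rawinsonde-derived values:

```python
import numpy as np

def conditional_wind(mu, cov, w1):
    """Conditional distribution of the future wind vector (u2, v2) given the
    initial vector w1 = (u1, v1), when (u1, v1, u2, v2) is quadravariate normal
    with mean `mu` (length 4) and covariance `cov` (4x4).
    Returns the conditional mean and 2x2 conditional covariance."""
    mu1, mu2 = mu[:2], mu[2:]
    s11, s12 = cov[:2, :2], cov[:2, 2:]
    s21, s22 = cov[2:, :2], cov[2:, 2:]
    gain = s21 @ np.linalg.inv(s11)
    cond_mean = mu2 + gain @ (np.asarray(w1) - mu1)
    cond_cov = s22 - gain @ s12
    return cond_mean, cond_cov

# Hypothetical monthly statistics (m/s) -- not taken from the report.
mu = np.array([5.0, -2.0, 5.5, -1.5])
cov = np.array([[25.0,  3.0, 18.0,  2.0],
                [ 3.0, 16.0,  2.0, 12.0],
                [18.0,  2.0, 25.0,  3.0],
                [ 2.0, 12.0,  3.0, 16.0]])
print(conditional_wind(mu, cov, w1=(12.0, 4.0)))
```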
Jian, Yutao; He, Zi-Hua; Dao, Li; Swain, Michael V; Zhang, Xin-Ping; Zhao, Ke
2017-04-01
To investigate and characterize the distribution of fabrication defects in bilayered lithium disilicate glass-ceramic (LDG) crowns using micro-CT and 3D reconstruction. Ten standardized molar crowns (IPS e.max Press; Ivoclar Vivadent) were fabricated by heat-pressing a core and subsequent manual veneering. All crowns were scanned by micro-CT and 3D reconstructed. The volume, position and sphericity of each defect were measured in every crown. Each crown was divided into four regions: central fossa (CF), occlusal fossa (OF), cusp (C) and axial wall (AW). Porosity and defect number density of each region were calculated. Statistical analyses were performed using the Welch two-sample t-test, Friedman one-way rank sum test and Nemenyi post-hoc test. The defect volume distribution type was determined based on the Akaike information criterion (AIC). The core ceramic contained fewer defects (p<0.001) than the veneer layer. The size of smaller defects, which were 95% of the total, obeyed a logarithmic normal distribution. Region CF showed higher porosity (p<0.001) than the other regions. The defect number density of region CF was higher than that of region C (p<0.001) and region AW (p=0.029), but no difference was found between regions CF and OF (p>0.05). Four of ten specimens contained the largest pores in region CF, while for the remaining six specimens the largest pore was in region OF. LDG core ceramic contained fewer defects than the veneer ceramic. LDG strength estimated from pore size was comparable to literature values. Large defects were more likely to appear at the core-veneer interface of the occlusal fossa, while small defects were distributed in every region of the crowns but tended to aggregate in the central fossa region. The size distribution of small defects in the veneer obeyed a logarithmic normal distribution. Copyright © 2017. Published by Elsevier Ltd.
A New Bond Albedo for Performing Orbital Debris Brightness to Size Transformations
NASA Technical Reports Server (NTRS)
Mulrooney, Mark K.; Matney, Mark J.
2008-01-01
We have developed a technique for estimating the intrinsic size distribution of orbital debris objects via optical measurements alone. The process is predicated on the empirically observed power-law size distribution of debris (as indicated by radar RCS measurements) and the log-normal probability distribution of optical albedos as ascertained from phase (Lambertian) and range-corrected telescopic brightness measurements. Since the observed distribution of optical brightness is the product integral of the size distribution of the parent [debris] population with the albedo probability distribution, it is a straightforward matter to transform a given distribution of optical brightness back to a size distribution by the appropriate choice of a single albedo value. This is true because the integration of a power-law with a log-normal distribution (Fredholm Integral of the First Kind) yields a Gaussian-blurred power-law distribution with identical power-law exponent. Application of a single albedo to this distribution recovers a simple power-law [in size] which is linearly offset from the original distribution by a constant whose value depends on the choice of the albedo. Significantly, there exists a unique Bond albedo which, when applied to an observed brightness distribution, yields zero offset and therefore recovers the original size distribution. For physically realistic power-laws of negative slope, the proper choice of albedo recovers the parent size distribution by compensating for the observational bias caused by the large number of small objects that appear anomalously large (bright) - and thereby skew the small population upward by rising above the detection threshold - and the lower number of large objects that appear anomalously small (dim). Based on this comprehensive analysis, a value of 0.13 should be applied to all orbital debris albedo-based brightness-to-size transformations regardless of data source. Its prima facie genesis, derived and constructed from the current RCS to size conversion methodology (SiBAM Size-Based Estimation Model) and optical data reduction standards, assures consistency in application with the prior canonical value of 0.1. Herein we present the empirical and mathematical arguments for this approach and by example apply it to a comprehensive set of photometric data acquired via NASA's Liquid Mirror Telescopes during the 2000-2001 observing season.
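A Monte Carlo sketch of the offset argument, with invented population parameters (power-law exponent, albedo median and spread): sizes are drawn from a power law, albedos from a log-normal, brightness is taken proportional to albedo times cross-section, and sizes are then re-inferred with a single assumed albedo. The count offset above a reference size depends on that choice, and one particular value cancels it; the 0.13 recommended above comes from the authors' own calibration, not from this toy setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy debris population: power-law sizes, N(>d) ~ d^-q, and log-normal albedos.
q = 2.5                                                    # illustrative power-law exponent
n = 200_000
d_true = 0.01 * (rng.pareto(q, n) + 1.0)                   # sizes (m), power law above 1 cm
albedo = rng.lognormal(mean=np.log(0.13), sigma=0.7, size=n)   # illustrative albedo spread

flux = albedo * d_true**2                                  # brightness ~ albedo * cross-section

def counts_above(d, ref=0.10):
    """Cumulative count of objects larger than `ref` metres."""
    return int(np.sum(d > ref))

print("true population  >10 cm:", counts_above(d_true))
for a0 in (0.05, 0.13, 0.30):
    d_est = np.sqrt(flux / a0)                             # sizes recovered with one fixed albedo
    print(f"assumed albedo {a0:.2f} -> inferred >10 cm:", counts_above(d_est))

# For this toy log-normal the count offset cancels at a0 = median * exp(q*sigma^2/4);
# that value is specific to these invented parameters.
print("zero-offset albedo for the toy parameters:", round(0.13 * np.exp(q * 0.7**2 / 4), 3))
```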
NASA Technical Reports Server (NTRS)
Divinskiy, M. L.; Kolchinskiy, I. G.
1974-01-01
The distribution of deviations from mean star trail directions was studied on the basis of 105 star trails. It was found that about 93% of the trails yield a distribution in agreement with the normal law. About 4% of the star trails agree with the Charlier distribution.
40 CFR 190.10 - Standards for normal operations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Standards for normal operations. 190.10 Section 190.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) RADIATION PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental...
Linn, Kristin A; Gaonkar, Bilwaj; Satterthwaite, Theodore D; Doshi, Jimit; Davatzikos, Christos; Shinohara, Russell T
2016-05-15
Normalization of feature vector values is a common practice in machine learning. Generally, each feature is either scaled to the unit hypercube or standardized to zero mean and unit variance. Classification decisions based on support vector machines (SVMs) or by other methods are sensitive to the specific normalization used on the features. In the context of multivariate pattern analysis using neuroimaging data, standardization effectively up- and down-weights features based on their individual variability. Since the standard approach uses the entire data set to guide the normalization, it utilizes the total variability of these features. This total variation is inevitably dependent on the amount of marginal separation between groups. Thus, such a normalization may attenuate the separability of the data in high dimensional space. In this work we propose an alternate approach that uses an estimate of the control-group standard deviation to normalize features before training. We study our proposed approach in the context of group classification using structural MRI data. We show that control-based normalization leads to better reproducibility of estimated multivariate disease patterns and improves the classifier performance in many cases. Copyright © 2016 Elsevier Inc. All rights reserved.
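A minimal sketch of the idea of control-based normalization (the paper's exact pipeline is not reproduced): each feature is standardized with the mean and standard deviation of the control group only, so between-group separation does not inflate the scale used for standardization.

```python
import numpy as np

def control_based_scale(X, is_control):
    """Standardize each feature (column) of X using the mean and standard
    deviation estimated from control subjects only, rather than the whole
    sample. `is_control` is a boolean mask over rows."""
    ctrl = X[is_control]
    mu = ctrl.mean(axis=0)
    sd = ctrl.std(axis=0, ddof=1)
    sd[sd == 0] = 1.0                        # guard against constant features
    return (X - mu) / sd

# Toy example: 40 controls and 40 patients, 5 features with a group mean shift.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
X[40:] += 1.5                                # patients shifted on every feature
z = control_based_scale(X, is_control=np.arange(80) < 40)
print(z[:40].mean(axis=0).round(2), z[40:].mean(axis=0).round(2))
```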
Loce, R P; Jodoin, R E
1990-09-10
Using the tools of Fourier analysis, a sampling requirement is derived that assures that sufficient information is contained within the samples of a distribution to calculate accurately geometric moments of that distribution. The derivation follows the standard textbook derivation of the Whittaker-Shannon sampling theorem, which is used for reconstruction, but further insight leads to a coarser minimum sampling interval for moment determination. The need for fewer samples to determine moments agrees with intuition since less information should be required to determine a characteristic of a distribution compared with that required to construct the distribution. A formula for calculation of the moments from these samples is also derived. A numerical analysis is performed to quantify the accuracy of the calculated first moment for practical nonideal sampling conditions. The theory is applied to a high speed laser beam position detector, which uses the normalized first moment to measure raster line positional accuracy in a laser printer. The effects of the laser irradiance profile, sampling aperture, number of samples acquired, quantization, and noise are taken into account.
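As a small illustration of the moment calculation itself (not of the sampling-requirement derivation), the normalized first moment of a sampled one-dimensional profile can be computed directly from the samples; the beam shape, spacing, and centre below are invented.

```python
import numpy as np

def first_moment(samples, dx, x0=0.0):
    """Normalized first moment (centroid) of a sampled 1-D irradiance profile.
    `samples` are detector readings taken at spacing `dx`, starting at x0.
    Aperture and noise corrections discussed in the paper are omitted."""
    x = x0 + dx * np.arange(len(samples))
    return np.sum(x * samples) / np.sum(samples)

# Toy Gaussian beam centred at 12.3 um, sampled every 5 um.
x_true, dx = 12.3, 5.0
x = dx * np.arange(-10, 11)
profile = np.exp(-0.5 * ((x - x_true) / 8.0) ** 2)
print(first_moment(profile, dx, x0=x[0]))   # close to 12.3 despite the coarse sampling
```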
Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring
Gharavi, Hamid; Hu, Bin
2018-01-01
With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide ranging sensory data, including high rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation we use our hardware in the loop grid communication network testbed to assess the performance of the network. PMID:29503505
Profeta, Gerson S.; Pereira, Jessica A. S.; Costa, Samara G.; Azambuja, Patricia; Garcia, Eloi S.; Moraes, Caroline da Silva; Genta, Fernando A.
2017-01-01
Glycoside Hydrolases (GHs) are enzymes able to recognize and cleave glycosidic bonds. Insect GHs play decisive roles in digestion, in plant-herbivore interactions, and in host-pathogen interactions. GH activity is normally measured by detecting products released from the substrate, such as sugar units or colored or fluorescent groups. In most cases, the conditions for product release and detection differ, resulting in discontinuous assays. Current protocols require large amounts of reaction mixture to obtain the time points of each experimental replicate. These procedures restrict the analysis of biological materials with limited amounts of protein and, in studies of small insects, imply pooling samples from several individuals. In this respect, most studies do not assess the variability of GH activities across the population of individuals from the same species. The aim of this work is to address this technical problem and gain a deeper understanding of the variation of GH activities in insect populations, using as models the disease vectors Rhodnius prolixus (Hemiptera: Triatominae) and Lutzomyia longipalpis (Diptera: Phlebotominae). Here we standardized continuous assays using 4-methylumbelliferyl-derived substrates for the detection of α-Glucosidase, β-Glucosidase, α-Mannosidase, N-acetyl-hexosaminidase, β-Galactosidase, and α-Fucosidase in the midgut of R. prolixus and L. longipalpis, with results similar to the traditional discontinuous protocol. The continuous assays allowed us to measure GH activities using minimal sample amounts with a higher number of measurements, resulting in more reliable data and lower time and reagent consumption. The continuous assay also allows high-throughput screening of GH activities in small insect samples, which would not be feasible with the previous discontinuous protocol. We applied continuous GH measurements to 90 individual samples of R. prolixus anterior midgut homogenates using a high-throughput protocol. α-Glucosidase and α-Mannosidase activities showed a normal distribution in the population, whereas β-Glucosidase, β-Galactosidase, N-acetyl-hexosaminidase, and α-Fucosidase activities showed non-normal distributions. These results indicate that GH fluorescence-based high-throughput assays are applicable to insect samples and that the frequency distribution of digestive activities should be considered in data analysis, especially if a small number of samples is used. PMID:28553236
NASA Astrophysics Data System (ADS)
Liu, Yu; Qin, Shengwei; Hao, Qingguo; Chen, Nailu; Zuo, Xunwei; Rong, Yonghua
2017-03-01
The study of internal stress in quenched AISI 4140 medium-carbon steel is of importance in engineering. In this work, finite element simulation (FES) was employed to predict the distribution of internal stress in quenched AISI 4140 cylinders of two diameters, based on an exponent-modified (Ex-Modified) normalized function. The results indicate that FES based on the proposed Ex-Modified normalized function is more consistent with X-ray diffraction measurements of the stress distribution than FES based on the normalized functions proposed by Abrassart, Desalos, and Leblond, which is attributed to the Ex-Modified normalized function better describing transformation plasticity. The effect of the temperature distribution on phase formation, the origin of the residual stress distribution, and the effect of the transformation plasticity function on the residual stress distribution are further discussed.
A new paradigm of oral cancer detection using digital infrared thermal imaging
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Mukhopadhyay, S.; Dasgupta, A.; Banerjee, S.; Mukhopadhyay, S.; Patsa, S.; Ray, J. G.; Chaudhuri, K.
2016-03-01
Histopathology is considered the gold standard for oral cancer detection, but a major fraction of the patient population is incapable of accessing such healthcare facilities due to poverty. Moreover, such analysis may report false negatives when the test tissue is not collected from the exact cancerous location. The proposed work introduces a pioneering computer-aided paradigm of fast, non-invasive and non-ionizing oral cancer detection using Digital Infrared Thermal Imaging (DITI). Due to aberrant metabolic activities in carcinogenic facial regions, heat signatures of patients differ from those of normal subjects. The proposed work utilizes asymmetry of the temperature distribution of facial regions as the principal cue for cancer detection. Three views of a subject, viz. front, left and right, are acquired using a long-infrared (7.5-13 μm) camera for analysing the distribution of temperature. We study the asymmetry of facial temperature distribution between: a) left and right profile faces and b) the left and right halves of the frontal face. Comparison of temperature distributions suggests that patients manifest greater asymmetry compared to normal subjects. For classification, we initially use k-means and fuzzy k-means for unsupervised clustering, followed by cluster class prototype assignment based on majority voting. Average classification accuracies of 91.5% and 92.8% are achieved by the k-means and fuzzy k-means frameworks for the frontal face. The corresponding metrics for the profile face are 93.4% and 95%. Combining features of frontal and profile faces, average accuracies increase to 96.2% and 97.6%, respectively, for the k-means and fuzzy k-means frameworks.
Smooth quantile normalization.
Hicks, Stephanie C; Okrah, Kwame; Paulson, Joseph N; Quackenbush, John; Irizarry, Rafael A; Bravo, Héctor Corrada
2018-04-01
Between-sample normalization is a critical step in genomic data analysis to remove systematic bias and unwanted technical variation in high-throughput data. Global normalization methods are based on the assumption that observed variability in global properties is due to technical reasons and is unrelated to the biology of interest. For example, some methods correct for differences in sequencing read counts by scaling features to have similar median values across samples, but these fail to reduce other forms of unwanted technical variation. Methods such as quantile normalization transform the statistical distributions across samples to be the same and assume global differences in the distribution are induced by only technical variation. However, it remains unclear how to proceed with normalization if these assumptions are violated, for example, if there are global differences in the statistical distributions between biological conditions or groups, and external information, such as negative or control features, is not available. Here, we introduce a generalization of quantile normalization, referred to as smooth quantile normalization (qsmooth), which is based on the assumption that the statistical distribution of each sample should be the same (or have the same distributional shape) within biological groups or conditions, while allowing them to differ between groups. We illustrate the advantages of our method on several high-throughput datasets with global differences in distributions corresponding to different biological conditions. We also perform a Monte Carlo simulation study to illustrate the bias-variance tradeoff and root mean squared error of qsmooth compared to other global normalization methods. A software implementation is available from https://github.com/stephaniehicks/qsmooth.
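For orientation, the baseline that qsmooth generalizes is ordinary quantile normalization, sketched below in Python with toy data; the actual qsmooth weighting between group-level and global reference quantiles is implemented in the package at the URL above and is not reproduced here.

```python
import numpy as np

def quantile_normalize(X):
    """Standard quantile normalization: force every sample (column) of X to
    share the same empirical distribution, namely the mean of the sorted
    columns. Ties are handled only crudely in this sketch."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # rank of each value within its column
    reference = np.sort(X, axis=0).mean(axis=1)         # average quantile across samples
    return reference[ranks]

# Toy data: 6 features x 4 samples measured on very different scales.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(6, 4)) * np.array([1, 2, 5, 10])
Xn = quantile_normalize(X)
print(np.sort(Xn, axis=0))   # identical distribution in every column afterwards
```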
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... Conservation Standards for Distribution Transformers: Public Meeting and Availability of the Preliminary... the amendment of energy conservation standards for distribution transformers; the analytical framework..._standards/commercial/distribution_transformers.html .
NASA Astrophysics Data System (ADS)
Gacal, G. F. B.; Lagrosas, N.
2016-12-01
Nowadays, cameras are commonly used by students. In this study, we use this instrument to look at moon signals and relate these signals to Gaussian functions. To implement this as a classroom activity, students need computers, computer software to visualize signals, and moon images. A normalized Gaussian function is often used to represent the probability density function of a normal distribution. It is described by its mean m and standard deviation s; a smaller standard deviation implies less spread about the mean. For the 2-dimensional Gaussian function, the mean can be described by coordinates (x0, y0), while the standard deviations can be described by sx and sy. In modelling moon signals obtained from sky cameras, the position of the mean (x0, y0) is found by locating the coordinates of the maximum signal of the moon. The two standard deviations are the weighted root-mean-square deviations based on the sums of the total pixel values of all rows/columns. If visualized in three dimensions, the 2D Gaussian function appears as a 3D bell surface (Fig. 1a). This shape is similar to the pixel value distribution of moon signals as captured by a sky camera. An example of this is illustrated in Fig. 1b, taken around 22:20 (local time) on January 31, 2015. The local time is 8 hours ahead of coordinated universal time (UTC). This image was produced by a commercial camera (Canon Powershot A2300) with 1 s exposure time, f-stop of f/2.8, and 5 mm focal length. One has to choose a camera with high sensitivity when operating at nighttime to effectively detect these signals. Fig. 1b is obtained by converting the red-green-blue (RGB) photo to grayscale values. The grayscale values are then converted to a double data type matrix so that the Gaussian model and the raw pixel distribution share the same scale. Subtraction of the Gaussian model from the raw data produces a moonless image, as shown in Fig. 1c. This moonless image can be used for quantifying cloud cover as captured by ordinary cameras (Gacal et al., 2016). Cloud cover can be defined as the ratio of the number of pixels whose values exceed 0.07 to the total number of pixels. In this particular image, the cloud cover value is 0.67.
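A condensed sketch of that workflow, using a synthetic frame instead of a real sky-camera image; the half-maximum cut used to isolate the moon before computing the weighted deviations is a simplification introduced here, not part of the original procedure.

```python
import numpy as np

def gaussian2d(shape, x0, y0, sx, sy, amp):
    """2-D Gaussian surface used to model the moon's contribution to the frame."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return amp * np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))

def moon_model(img, cut=0.5):
    """Estimate (x0, y0, sx, sy, amplitude): the peak pixel gives the centre, and
    the standard deviations are intensity-weighted RMS deviations computed over
    the bright moon region (pixels above cut * max) -- a simplification of the
    row/column-sum weighting described in the abstract."""
    y0, x0 = np.unravel_index(np.argmax(img), img.shape)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = np.where(img > cut * img.max(), img, 0.0)
    w /= w.sum()
    sx = np.sqrt(np.sum(w * (x - x0) ** 2))
    sy = np.sqrt(np.sum(w * (y - y0) ** 2))
    return gaussian2d(img.shape, x0, y0, sx, sy, img.max())

# Toy grayscale frame (double precision, values in [0, 1]): faint noise,
# a synthetic cloud patch, and a bright moon.
rng = np.random.default_rng(0)
sky = 0.04 * rng.random((480, 640))
sky[100:200, 100:300] += 0.15                        # a "cloud"
sky += gaussian2d(sky.shape, 320, 240, 12, 12, 0.9)  # the "moon"

moonless = np.clip(sky - moon_model(sky), 0.0, None)
cloud_cover = np.mean(moonless > 0.07)               # fraction of pixels above 0.07
print(round(cloud_cover, 3))   # ~0.07: mostly the cloud patch, plus a faint
                               # residual ring left by the approximate moon fit
```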
Rolling Bearing Life Prediction-Past, Present, and Future
NASA Technical Reports Server (NTRS)
Zaretsky, E V; Poplawski, J. V.; Miller, C. R.
2000-01-01
Comparisons were made between the life prediction formulas of Lundberg and Palmgren, Ioannides and Harris, and Zaretsky and full-scale ball and roller bearing life data. The effect of Weibull slope on bearing life prediction was determined. Life factors are proposed to adjust the respective life formulas to the normalized statistical life distribution of each bearing type. The Lundberg-Palmgren method resulted in the most conservative life predictions, compared to the Ioannides-Harris and Zaretsky methods, which produced statistically similar results. Roller profile can have significant effects on bearing life prediction results. Roller edge loading can reduce life by as much as 98 percent. The resultant predicted life depends not only on the life equation used but also on the Weibull slope assumed, the least variation occurring with the Zaretsky equation. The load-life exponent p of 10/3 used in the American National Standards Institute (ANSI)/American Bearing Manufacturers Association (ABMA)/International Organization for Standardization (ISO) standards is inconsistent with the majority of roller bearings designed and used today.
Superdiffusive Dispersals Impart the Geometry of Underlying Random Walks
NASA Astrophysics Data System (ADS)
Zaburdaev, V.; Fouxon, I.; Denisov, S.; Barkai, E.
2016-12-01
It is recognized now that a variety of real-life phenomena ranging from diffusion of cold atoms to the motion of humans exhibit dispersal faster than normal diffusion. Lévy walks are a model that has excelled in describing such superdiffusive behaviors, albeit in one dimension. Here we show that, in contrast to standard random walks, the microscopic geometry of planar superdiffusive Lévy walks is imprinted in the asymptotic distribution of the walkers. The geometry of the underlying walk can be inferred from trajectories of the walkers by calculating the analogue of the Pearson coefficient.
Symposium on Information Processing in Organizations.
1982-04-01
Viscoelastic analysis of adhesively bonded joints
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1980-01-01
An adhesively bonded lap joint is analyzed by assuming that the adherends are elastic and the adhesive is linearly viscoelastic. After formulating the general problem, a specific example of two identical adherends bonded through a three-parameter viscoelastic solid adhesive is considered. The standard Laplace transform technique is used to solve the problem. The stress distribution in the adhesive layer is calculated for three different external loads, namely membrane loading, bending, and transverse shear loading. The results indicate that the peak value of the normal stress in the adhesive is not only consistently higher than the corresponding shear stress but also decays more slowly.
Central Limit Theorems for Linear Statistics of Heavy Tailed Random Matrices
NASA Astrophysics Data System (ADS)
Benaych-Georges, Florent; Guionnet, Alice; Male, Camille
2014-07-01
We show central limit theorems (CLT) for the linear statistics of symmetric matrices with independent heavy tailed entries, including entries in the domain of attraction of α-stable laws and entries with moments exploding with the dimension, as in the adjacency matrices of Erdös-Rényi graphs. For the second model, we also prove a central limit theorem of the moments of its empirical eigenvalues distribution. The limit laws are Gaussian, but unlike the case of standard Wigner matrices, the normalization is the one of the classical CLT for independent random variables.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
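A minimal sketch of the decision-cost part of this construction under the binormal setting (the sampling-uncertainty term described above is omitted); all parameter values are invented.

```python
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def expected_cost(t, mu0, sd0, mu1, sd1, prevalence, c_fp, c_fn):
    """Expected decision cost of threshold t when the marker is normal in
    non-diseased (mu0, sd0) and diseased (mu1, sd1) subjects and values above
    t are called diseased. Decision costs only; sampling uncertainty omitted."""
    fp = (1 - prevalence) * norm.sf(t, mu0, sd0) * c_fp    # false-positive cost
    fn = prevalence * norm.cdf(t, mu1, sd1) * c_fn         # false-negative cost
    return fp + fn

# Hypothetical marker: non-diseased N(2, 1), diseased N(5, 1.5^2), 20% prevalence,
# missing a case five times as costly as a false alarm.
res = minimize_scalar(expected_cost, bounds=(0, 10), method="bounded",
                      args=(2.0, 1.0, 5.0, 1.5, 0.2, 1.0, 5.0))
print("optimum threshold:", round(res.x, 3))
```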
LOG-NORMAL DISTRIBUTION OF COSMIC VOIDS IN SIMULATIONS AND MOCKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, E.; Pycke, J.-R., E-mail: er111@nyu.edu, E-mail: jrp15@nyu.edu
2017-01-20
Following up on previous studies, we complete here a full analysis of the void size distributions of the Cosmic Void Catalog based on three different simulation and mock catalogs: dark matter (DM), haloes, and galaxies. Based on this analysis, we attempt to answer two questions: Is a three-parameter log-normal distribution a good candidate to satisfy the void size distributions obtained from different types of environments? Is there a direct relation between the shape parameters of the void size distribution and the environmental effects? In an attempt to answer these questions, we find here that all void size distributions of these data samples satisfy the three-parameter log-normal distribution whether the environment is dominated by DM, haloes, or galaxies. In addition, the shape parameters of the three-parameter log-normal void size distribution seem highly affected by environment, particularly existing substructures. Therefore, we show two quantitative relations given by linear equations between the skewness and the maximum tree depth, and between the variance of the void size distribution and the maximum tree depth, directly from the simulated data. In addition to this, we find that the percentage of voids with nonzero central density in the data sets has a critical importance. If the number of voids with nonzero central density reaches ≥3.84% in a simulation/mock sample, then a second population is observed in the void size distributions. This second population emerges as a second peak in the log-normal void size distribution at larger radius.
Expected outcomes from topical haemoglobin spray in non-healing and worsening venous leg ulcers.
Arenberger, P; Elg, F; Petyt, J; Cutting, K
2015-05-01
To evaluate the effect of topical haemoglobin spray on treatment response and wound-closure rates in patients with chronic venous leg ulcers. A linear regression model was used to forecast healing outcomes over a 12-month period. Simulated data were taken from normal distributions based on post-hoc analysis of a 72-patient study in non-healing and worsening wounds (36 patients receiving standard care and 36 receiving standard care plus topical haemoglobin spray). Using a simulated 25,000 'patients' from each group, the proportion of wound closure over time was projected. Simulation results predicted a 55% wound closure rate at six months in the haemoglobin group, compared with 4% in the standard care group. Over a 12-month simulation period, a 43% overall reduction in wound burden was predicted. With the haemoglobin spray, 85% of wounds were expected to heal in 12 months, compared with 13% in the standard care group. Topical haemoglobin spray promises a more effective treatment for chronic venous leg ulcers than standard care alone in wounds that are non-healing or worsening. Further research is required to validate these predictions and to identify achievable outcomes in other chronic wound types.
The Impact of Heterogeneous Thresholds on Social Contagion with Multiple Initiators
Karampourniotis, Panagiotis D.; Sreenivasan, Sameet; Szymanski, Boleslaw K.; Korniss, Gyorgy
2015-01-01
The threshold model is a simple but classic model of contagion spreading in complex social systems. To capture the complex nature of social influencing we investigate numerically and analytically the transition in the behavior of threshold-limited cascades in the presence of multiple initiators as the distribution of thresholds is varied between the two extreme cases of identical thresholds and a uniform distribution. We accomplish this by employing a truncated normal distribution of the nodes’ thresholds and observe a non-monotonic change in the cascade size as we vary the standard deviation. Further, for a sufficiently large spread in the threshold distribution, the tipping-point behavior of the social influencing process disappears and is replaced by a smooth crossover governed by the size of the initiator set. We demonstrate that for a given size of the initiator set, there is a specific variance of the threshold distribution for which an opinion spreads optimally. Furthermore, in the case of synthetic graphs we show that the spread asymptotically becomes independent of the system size, and that global cascades can arise just by the addition of a single node to the initiator set. PMID:26571486
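A small simulation sketch of the effect described above, with arbitrary parameter values and a crude random graph: thresholds are drawn from a normal distribution truncated to [0, 1], a fixed set of initiators is activated, and the cascade is iterated to a fixed point for several threshold standard deviations.

```python
import numpy as np

def cascade_size(n, k, mean_th, sd_th, n_seeds, rng):
    """Threshold cascade on a crude random graph (each node draws k random
    neighbours, edges treated as undirected). Thresholds come from a normal
    distribution truncated to [0, 1]. Returns the final active fraction."""
    th = rng.normal(mean_th, sd_th, size=3 * n)
    th = th[(th >= 0) & (th <= 1)][:n]                 # rejection-sampled truncation
    neighbours = [set() for _ in range(n)]
    for i in range(n):
        for j in rng.integers(0, n, size=k):
            if j != i:
                neighbours[i].add(j); neighbours[j].add(i)
    active = np.zeros(n, dtype=bool)
    active[rng.choice(n, size=n_seeds, replace=False)] = True
    changed = True
    while changed:                                     # iterate to a fixed point
        changed = False
        for i in range(n):
            if not active[i] and neighbours[i]:
                frac = np.mean([active[j] for j in neighbours[i]])
                if frac >= th[i]:
                    active[i] = True; changed = True
    return active.mean()

rng = np.random.default_rng(0)
for sd in (0.0, 0.1, 0.3):   # identical thresholds vs. increasingly heterogeneous ones
    print(f"threshold sd={sd}: cascade fraction = {cascade_size(2000, 6, 0.35, sd, 20, rng):.2f}")
```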
Adaptive linear rank tests for eQTL studies
Szymczak, Silke; Scheinhardt, Markus O.; Zeller, Tanja; Wild, Philipp S.; Blankenberg, Stefan; Ziegler, Andreas
2013-01-01
Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal–Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. PMID:22933317
New reference materials for nitrogen-isotope-ratio measurements
Böhlke, John Karl; Gwinn, C. J.; Coplen, T. B.
1993-01-01
Three new reference materials were manufactured for calibration of relative stable nitrogen-isotope-ratio measurements: USGS25 (ammonium sulfate) δ15N′ = -30 per mil; USGS26 (ammonium sulfate) δ15N′ = +54 per mil; USGS32 (potassium nitrate) δ15N′ = +180 per mil, where δ15N′, relative to atmospheric nitrogen, is an approximate value subject to change following interlaboratory comparisons. These materials are isotopically homogeneous in aliquots at least as small as 10 µmol N2 (or about 1-2 mg of salt). The new reference materials greatly extend the range of δ15N values of internationally distributed standards, and they allow normalization of δ15N measurements over almost the full range of known natural isotope variation on Earth. The methods used to produce these materials may be adapted to produce homogeneous local laboratory standards for routine use.
Measurement of top quark polarization in tt̄ lepton+jets final states
NASA Astrophysics Data System (ADS)
Abazov, V. M.; Abbott, B.; Acharya, B. S.; Adams, M.; Adams, T.; Agnew, J. P.; Alexeev, G. D.; Alkhazov, G.; Alton, A.; Askew, A.; Atkins, S.; Augsten, K.; Aushev, V.; Aushev, Y.; Avila, C.; Badaud, F.; Bagby, L.; Baldin, B.; Bandurin, D. V.; Banerjee, S.; Barberis, E.; Baringer, P.; Bartlett, J. F.; Bassler, U.; Bazterra, V.; Bean, A.; Begalli, M.; Bellantoni, L.; Beri, S. B.; Bernardi, G.; Bernhard, R.; Bertram, I.; Besançon, M.; Beuselinck, R.; Bhat, P. C.; Bhatia, S.; Bhatnagar, V.; Blazey, G.; Blessing, S.; Bloom, K.; Boehnlein, A.; Boline, D.; Boos, E. E.; Borissov, G.; Borysova, M.; Brandt, A.; Brandt, O.; Brochmann, M.; Brock, R.; Bross, A.; Brown, D.; Bu, X. B.; Buehler, M.; Buescher, V.; Bunichev, V.; Burdin, S.; Buszello, C. P.; Camacho-Pérez, E.; Casey, B. C. K.; Castilla-Valdez, H.; Caughron, S.; Chakrabarti, S.; Chan, K. M.; Chandra, A.; Chapon, E.; Chen, G.; Cho, S. W.; Choi, S.; Choudhary, B.; Cihangir, S.; Claes, D.; Clutter, J.; Cooke, M.; Cooper, W. E.; Corcoran, M.; Couderc, F.; Cousinou, M.-C.; Cuth, J.; Cutts, D.; Das, A.; Davies, G.; de Jong, S. J.; De La Cruz-Burelo, E.; Déliot, F.; Demina, R.; Denisov, D.; Denisov, S. P.; Desai, S.; Deterre, C.; DeVaughan, K.; Diehl, H. T.; Diesburg, M.; Ding, P. F.; Dominguez, A.; Dubey, A.; Dudko, L. V.; Duperrin, A.; Dutt, S.; Eads, M.; Edmunds, D.; Ellison, J.; Elvira, V. D.; Enari, Y.; Evans, H.; Evdokimov, A.; Evdokimov, V. N.; Fauré, A.; Feng, L.; Ferbel, T.; Fiedler, F.; Filthaut, F.; Fisher, W.; Fisk, H. E.; Fortner, M.; Fox, H.; Franc, J.; Fuess, S.; Garbincius, P. H.; Garcia-Bellido, A.; García-González, J. A.; Gavrilov, V.; Geng, W.; Gerber, C. E.; Gershtein, Y.; Ginther, G.; Gogota, O.; Golovanov, G.; Grannis, P. D.; Greder, S.; Greenlee, H.; Grenier, G.; Gris, Ph.; Grivaz, J.-F.; Grohsjean, A.; Grünendahl, S.; Grünewald, M. W.; Guillemin, T.; Gutierrez, G.; Gutierrez, P.; Haley, J.; Han, L.; Harder, K.; Harel, A.; Hauptman, J. M.; Hays, J.; Head, T.; Hebbeker, T.; Hedin, D.; Hegab, H.; Heinson, A. P.; Heintz, U.; Hensel, C.; Heredia-De La Cruz, I.; Herner, K.; Hesketh, G.; Hildreth, M. D.; Hirosky, R.; Hoang, T.; Hobbs, J. D.; Hoeneisen, B.; Hogan, J.; Hohlfeld, M.; Holzbauer, J. L.; Howley, I.; Hubacek, Z.; Hynek, V.; Iashvili, I.; Ilchenko, Y.; Illingworth, R.; Ito, A. S.; Jabeen, S.; Jaffré, M.; Jayasinghe, A.; Jeong, M. S.; Jesik, R.; Jiang, P.; Johns, K.; Johnson, E.; Johnson, M.; Jonckheere, A.; Jonsson, P.; Joshi, J.; Jung, A. W.; Juste, A.; Kajfasz, E.; Karmanov, D.; Katsanos, I.; Kaur, M.; Kehoe, R.; Kermiche, S.; Khalatyan, N.; Khanov, A.; Kharchilava, A.; Kharzheev, Y. N.; Kiselevich, I.; Kohli, J. M.; Kozelov, A. V.; Kraus, J.; Kumar, A.; Kupco, A.; Kurča, T.; Kuzmin, V. A.; Lammers, S.; Lebrun, P.; Lee, H. S.; Lee, S. W.; Lee, W. M.; Lei, X.; Lellouch, J.; Li, D.; Li, H.; Li, L.; Li, Q. Z.; Lim, J. K.; Lincoln, D.; Linnemann, J.; Lipaev, V. V.; Lipton, R.; Liu, H.; Liu, Y.; Lobodenko, A.; Lokajicek, M.; Lopes de Sa, R.; Luna-Garcia, R.; Lyon, A. L.; Maciel, A. K. A.; Madar, R.; Magaña-Villalba, R.; Malik, S.; Malyshev, V. L.; Mansour, J.; Martínez-Ortega, J.; McCarthy, R.; McGivern, C. L.; Meijer, M. M.; Melnitchouk, A.; Menezes, D.; Mercadante, P. G.; Merkin, M.; Meyer, A.; Meyer, J.; Miconi, F.; Mondal, N. K.; Mulhearn, M.; Nagy, E.; Narain, M.; Nayyar, R.; Neal, H. A.; Negret, J. P.; Neustroev, P.; Nguyen, H. T.; Nunnemann, T.; Orduna, J.; Osman, N.; Pal, A.; Parashar, N.; Parihar, V.; Park, S. 
K.; Partridge, R.; Parua, N.; Patwa, A.; Penning, B.; Perfilov, M.; Peters, Y.; Petridis, K.; Petrillo, G.; Pétroff, P.; Pleier, M.-A.; Podstavkov, V. M.; Popov, A. V.; Prewitt, M.; Price, D.; Prokopenko, N.; Qian, J.; Quadt, A.; Quinn, B.; Ratoff, P. N.; Razumov, I.; Ripp-Baudot, I.; Rizatdinova, F.; Rominsky, M.; Ross, A.; Royon, C.; Rubinov, P.; Ruchti, R.; Sajot, G.; Sánchez-Hernández, A.; Sanders, M. P.; Santos, A. S.; Savage, G.; Savitskyi, M.; Sawyer, L.; Scanlon, T.; Schamberger, R. D.; Scheglov, Y.; Schellman, H.; Schott, M.; Schwanenberger, C.; Schwienhorst, R.; Sekaric, J.; Severini, H.; Shabalina, E.; Shary, V.; Shaw, S.; Shchukin, A. A.; Shkola, O.; Simak, V.; Skubic, P.; Slattery, P.; Snow, G. R.; Snow, J.; Snyder, S.; Söldner-Rembold, S.; Sonnenschein, L.; Soustruznik, K.; Stark, J.; Stefaniuk, N.; Stoyanova, D. A.; Strauss, M.; Suter, L.; Svoisky, P.; Titov, M.; Tokmenin, V. V.; Tsai, Y.-T.; Tsybychev, D.; Tuchming, B.; Tully, C.; Uvarov, L.; Uvarov, S.; Uzunyan, S.; Van Kooten, R.; van Leeuwen, W. M.; Varelas, N.; Varnes, E. W.; Vasilyev, I. A.; Verkheev, A. Y.; Vertogradov, L. S.; Verzocchi, M.; Vesterinen, M.; Vilanova, D.; Vokac, P.; Wahl, H. D.; Wang, M. H. L. S.; Warchol, J.; Watts, G.; Wayne, M.; Weichert, J.; Welty-Rieger, L.; Williams, M. R. J.; Wilson, G. W.; Wobisch, M.; Wood, D. R.; Wyatt, T. R.; Xie, Y.; Yamada, R.; Yang, S.; Yasuda, T.; Yatsunenko, Y. A.; Ye, W.; Ye, Z.; Yin, H.; Yip, K.; Youn, S. W.; Yu, J. M.; Zennamo, J.; Zhao, T. G.; Zhou, B.; Zhu, J.; Zielinski, M.; Zieminska, D.; Zivkovic, L.; D0 Collaboration
2017-01-01
We present a measurement of top quark polarization in tt̄ pair production in pp̄ collisions at √s = 1.96 TeV using data corresponding to 9.7 fb⁻¹ of integrated luminosity recorded with the D0 detector at the Fermilab Tevatron Collider. We consider final states containing a lepton and at least three jets. The polarization is measured through the distribution of lepton angles along three axes: the beam axis, the helicity axis, and the transverse axis normal to the tt̄ production plane. This is the first measurement of top quark polarization at the Tevatron using lepton+jets final states and the first measurement of the transverse polarization in tt̄ production. The observed distributions are consistent with standard model predictions of nearly no polarization.
A non-Gaussian approach to risk measures
NASA Astrophysics Data System (ADS)
Bormetti, Giacomo; Cisana, Enrica; Montagna, Guido; Nicrosini, Oreste
2007-03-01
Reliable calculations of financial risk require that the fat-tailed nature of price changes be included in risk measures. To this end, a non-Gaussian approach to financial risk management is presented, modelling the power-law tails of the returns distribution in terms of a Student-t distribution. Non-Gaussian closed-form solutions for value-at-risk and expected shortfall are obtained, and the standard formulae known in the literature under the normality assumption are recovered as a special case. The implications of the approach for risk management are demonstrated through an empirical analysis of financial time series from the Italian stock market and through comparison with the results of the most widely used procedures of quantitative finance. Particular attention is paid to quantifying the size of the errors affecting the market risk measures obtained according to different methodologies, by employing a bootstrap technique.
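As an illustration of the general idea, a minimal sketch follows; it is not the authors' code. It fits a Student-t distribution to a synthetic return series and evaluates expected shortfall numerically rather than with the paper's closed-form expressions; the parameter values and confidence level are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic fat-tailed daily returns standing in for a real price series.
returns = stats.t.rvs(df=4, loc=0.0005, scale=0.01, size=2500, random_state=rng)

# Fit a Student-t distribution to the returns.
df, loc, scale = stats.t.fit(returns)

p = 0.01                                            # 1% tail probability (99% level)
q = stats.t.ppf(p, df, loc, scale)                  # return quantile in the left tail
var_t = -q                                          # value-at-risk, reported as a positive loss
es_t = -stats.t.expect(lambda x: x, args=(df,), loc=loc, scale=scale,
                       ub=q, conditional=True)      # expected shortfall (numerical, not closed form)

# The Gaussian formulae are recovered by fitting a normal distribution instead.
mu, sigma = stats.norm.fit(returns)
var_n = -stats.norm.ppf(p, mu, sigma)
print(f"VaR(99%): t = {var_t:.4f}, normal = {var_n:.4f};  ES(99%): t = {es_t:.4f}")
```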
Blood platelets: computerized morphometry applied on optical images
NASA Astrophysics Data System (ADS)
Korobova, Farida V.; Ivanova, Tatyana V.; Gusev, Alexander A.; Shmarov, Dmitry A.; Kozinets, Gennady I.
2000-11-01
A new technology for computerized morphometric image analysis of platelets on blood smears was developed. The device is based on the analysis of cytophotometric and morphometric parameters of platelets. Geometric and optical parameters of platelets were investigated in 35 donors, in platelet concentrates and in 15 patients with haemorrhagic thrombocythaemia, and average values of platelet area, diameter, their logarithms and optical density in the normal state were obtained. The distributions of platelet areas, diameters and optical densities in patients with haemorrhagic thrombocythaemia differed from those in healthy people. After a course of treatment these values approached normal. The key characteristics of platelets in platelet concentrates after three days of storage were within normal limits but differed from those of whole-blood platelets. The data obtained make it possible to introduce quantitative standards into the investigation of platelets in healthy people and in various disorders of thrombocytopoiesis.
Prediction of normalized biodiesel properties by simulation of multiple feedstock blends.
García, Manuel; Gonzalo, Alberto; Sánchez, José Luis; Arauzo, Jesús; Peña, José Angel
2010-06-01
A continuous process for biodiesel production has been simulated using Aspen HYSYS V7.0 software. Feedstocks with a mild acid content have been used as fresh feed. The process flowsheet follows a traditional alkaline transesterification scheme comprising esterification, transesterification and purification stages. Kinetic models taking into account the concentrations of the different species have been employed to simulate the behavior of the CSTR reactors and the product distribution within the process. The comparison between experimental data found in the literature and the predicted normalized properties is discussed. Additionally, a comparison between different thermodynamic packages has been performed, and the NRTL activity model has been selected as the most reliable of them. The combination of these models allows the prediction of 13 out of 25 parameters included in standard EN-14214:2003, and confers great value on simulators as predictive as well as optimization tools. (c) 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sturrock, P. A.
2008-01-01
Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
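The sketch below is not the article's analytic extension; it shows the simpler Monte Carlo route mentioned in the abstract: compute the usual chi-square statistic for a constant-flux hypothesis, then calibrate its significance against simulations drawn from a non-normal (here, skew-normal) error pdf. The flux level, error model and sample size are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Standardized skew-normal errors (zero mean, unit variance) so that sigma is the true SD.
a = 4.0
delta = a / np.sqrt(1.0 + a**2)
mu_sn = delta * np.sqrt(2.0 / np.pi)
sd_sn = np.sqrt(1.0 - 2.0 * delta**2 / np.pi)

def skewed_errors(size):
    return (stats.skewnorm.rvs(a, size=size, random_state=rng) - mu_sn) / sd_sn

n, sigma, true_flux = 30, 5.0, 70.0
obs = true_flux + sigma * skewed_errors(n)     # hypothetical flux measurements

def chi2_const(x, s):
    """Chi-square statistic of x against its mean (constant-flux model, equal errors)."""
    return np.sum(((x - x.mean()) / s) ** 2)

stat_obs = chi2_const(obs, sigma)

# Calibrate the statistic against its distribution under the actual (non-normal) error pdf.
sims = np.array([chi2_const(true_flux + sigma * skewed_errors(n), sigma) for _ in range(5000)])
p_mc = np.mean(sims >= stat_obs)              # Monte Carlo p-value for variability
p_gauss = stats.chi2.sf(stat_obs, df=n - 1)   # p-value assuming normal errors
print(f"chi2 = {stat_obs:.1f}, Monte Carlo p = {p_mc:.3f}, Gaussian p = {p_gauss:.3f}")
```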
Viscoelastic analysis of a dental metal-ceramic system
NASA Astrophysics Data System (ADS)
Özüpek, Şebnem; Ünlü, Utku Cemal
2012-11-01
Porcelain-fused-to-metal (PFM) restorations used in prosthetic dentistry contain thermal stresses which develop during the cooling phase after firing. These thermal stresses coupled with the stresses produced by mechanical loads may be the dominant reasons for failures in clinical situations. For an accurate calculation of these stresses, viscoelastic behavior of ceramics at high temperatures should not be ignored. In this study, the finite element technique is used to evaluate the effect of viscoelasticity on stress distributions of a three-point flexure test specimen, which is the current international standard, ISO 9693, to characterize the interfacial bond strength of metal-ceramic restorative systems. Results indicate that the probability of interfacial debonding due to normal tensile stress is higher than that due to shear stress. This conclusion suggests modification of ISO 9693 bond strength definition from one in terms of the shear stress only to that accounting for both normal and shear stresses.
Fasting does not induce gastric emptying in rats.
Brito, Marcus Vinicius Henriques; Yasojima, Edson Yuzur; Teixeira, Renan Kleber Costa; Houat, Abdallah de Paula; Yamaki, Vitor Nagai; Costa, Felipe Lobato da Silva
2015-03-01
To evaluate the effect of fasting on gastric emptying in mice. Twenty-eight mice were distributed into three study groups: a normal group (N=4), normal standard animals; a total fasting group (N=12), subjected to food and water deprivation; and a partial fasting group (N=12), subjected to food deprivation only. The fasting groups were subdivided into three subgroups of four animals each, according to the time of euthanasia: 24, 48 and 72 hours. The gastric volume, the degree of gastric wall distension and the presence of food debris in the gastrointestinal tract were analyzed. The mean gastric volume was 1601 mm3 in the normal group, 847 mm3 in the total fasting group and 997 mm3 in the partial fasting group. There was a difference between the fasting groups at every analyzed time point (p<0.05). Regarding the presence of food debris in the gastrointestinal tract and the degree of distension of the stomach, there was no difference between the groups that underwent total or partial fasting (p>0.05). Total fasting or solids-only deprivation does not induce gastric emptying in mice.
Quantitative features in the computed tomography of healthy lungs.
Fromson, B H; Denison, D M
1988-01-01
This study set out to determine whether quantitative features of lung computed tomography scans could be identified that would lead to a tightly defined normal range for use in assessing patients. Fourteen normal subjects with apparently healthy lungs were studied. A technique was developed for rapid and automatic extraction of lung field data from the computed tomography scans. The Hounsfield unit histograms were constructed and, when normalised for predicted lung volumes, shown to be consistent in shape for all the subjects. A three dimensional presentation of the data in the form of a "net plot" was devised, and from this a logarithmic relationship between the area of each lung slice and its mean density was derived (r = 0.9, n = 545, p < 0.0001). The residual density, calculated as the difference between measured density and density predicted from the relationship with area, was shown to be normally distributed with a mean of 0 and a standard deviation of 25 Hounsfield units (χ² test: p < 0.05). A presentation combining this residual density with the net plot is described. PMID:3353883
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
Lo, Kenneth; Gottardo, Raphael
2012-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
A general approach to double-moment normalization of drop size distributions
NASA Astrophysics Data System (ADS)
Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.
2003-04-01
Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of the scaling normalization that uses one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization that uses two moments as parameters. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. This provides a unified view of DSD normalization and a good model representation of DSDs. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.
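A minimal sketch of the Testud-style special case (two moments, M3 and M4) is given below; the intercept and characteristic-diameter formulas are the commonly quoted forms and are assumptions here, not taken from the paper, and the gamma-shaped spectrum is synthetic.

```python
import numpy as np
from scipy.integrate import trapezoid

# Synthetic gamma-shaped DSD: N(D) in m^-3 mm^-1 on a diameter grid D in mm.
D = np.linspace(0.1, 6.0, 120)
N0, mu, lam = 8000.0, 2.0, 2.2
N = N0 * D**mu * np.exp(-lam * D)

def moment(k):
    return trapezoid(N * D**k, D)

# Two-moment normalization using M3 and M4 (Testud et al. 2001 style special case).
M3, M4 = moment(3), moment(4)
Dm = M4 / M3                                   # mass-weighted mean diameter (mm)
N0_star = (4.0**4 / 6.0) * M3**5 / M4**4       # normalized intercept (assumed standard form)

x = D / Dm                                     # scaled diameter
h = N / N0_star                                # normalized DSD; h(x) should collapse across spectra
print(f"Dm = {Dm:.2f} mm, N0* = {N0_star:.0f} m^-3 mm^-1")
```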
A Bayesian Nonparametric Meta-Analysis Model
ERIC Educational Resources Information Center
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.
2015-01-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…
Cluster Stability Estimation Based on a Minimal Spanning Trees Approach
NASA Astrophysics Data System (ADS)
Volkovich, Zeev (Vladimir); Barzily, Zeev; Weber, Gerhard-Wilhelm; Toledano-Kitai, Dvora
2009-08-01
Among the areas of data and text mining employed today in science, economy and technology, clustering theory serves as a preprocessing step in data analysis. However, many open questions still await theoretical and practical treatment; for example, the problem of determining the true number of clusters has not been satisfactorily solved. In the current paper, this problem is addressed by the cluster stability approach. For several possible numbers of clusters we estimate the stability of partitions obtained from clustering of samples. Partitions are considered consistent if their clusters are stable. Cluster validity is measured as the total number of edges, in the clusters' minimal spanning trees, connecting points from different samples; in effect, we use the Friedman and Rafsky two-sample test statistic. The homogeneity hypothesis of well-mingled samples within the clusters leads to an asymptotically normal distribution of this statistic. Resting upon this fact, the standard score of the edge count is computed, and the partition quality is represented by the worst cluster, i.e., the one with the minimal standard score. It is natural to expect that the true number of clusters can be characterized by the empirical distribution having the shortest left tail. The proposed methodology sequentially creates the described value distribution and estimates its left-asymmetry. Numerical experiments presented in the paper demonstrate the ability of the approach to detect the true number of clusters.
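A minimal sketch of the cross-sample edge count within one cluster follows. It uses a permutation reference distribution to form the standard score instead of the asymptotic normal result used in the paper, and the two samples are synthetic.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)

# Two samples drawn from one hypothetical cluster; well-mingled samples give many cross edges.
sample_a = rng.normal(0.0, 1.0, size=(60, 2))
sample_b = rng.normal(0.0, 1.0, size=(60, 2))
pts = np.vstack([sample_a, sample_b])
lab = np.repeat([0, 1], 60)

# Minimal spanning tree of the pooled points (computed once; it does not depend on labels).
mst = minimum_spanning_tree(squareform(pdist(pts))).tocoo()
edges = np.column_stack([mst.row, mst.col])

def cross_edges(labels):
    """Friedman-Rafsky count: MST edges connecting points from different samples."""
    return int(np.sum(labels[edges[:, 0]] != labels[edges[:, 1]]))

obs = cross_edges(lab)

# Permutation reference distribution used here in place of the asymptotic normal result.
perm = np.array([cross_edges(rng.permutation(lab)) for _ in range(2000)])
z = (obs - perm.mean()) / perm.std(ddof=1)          # standard score of the edge count
print(f"cross-sample MST edges = {obs}, standard score = {z:.2f}")
```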
40 CFR 190.10 - Standards for normal operations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Standards for the Uranium Fuel Cycle § 190.10 Standards for normal operations. Operations covered by this... radioactive materials, radon and its daughters excepted, to the general environment from uranium fuel cycle... the general environment from the entire uranium fuel cycle, per gigawatt-year of electrical energy...
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
Application of a truncated normal failure distribution in reliability testing
NASA Technical Reports Server (NTRS)
Groves, C., Jr.
1968-01-01
Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
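A minimal sketch of the underlying model: a normal time-to-failure distribution truncated at zero, from which age-dependent reliability quantities can be read off. The parameter values are illustrative, not from the report.

```python
import numpy as np
from scipy import stats

# Hypothetical time-to-failure model: normal(mu, sigma) truncated to non-negative lifetimes.
mu, sigma, lower = 1200.0, 400.0, 0.0              # hours (illustrative values)
a, b = (lower - mu) / sigma, np.inf                # standardized truncation bounds
ttf = stats.truncnorm(a, b, loc=mu, scale=sigma)

# Age-dependent reliability quantities used in test planning.
t = np.array([500.0, 1000.0, 1500.0])
reliability = ttf.sf(t)                            # P(failure time > t)
hazard = ttf.pdf(t) / ttf.sf(t)                    # instantaneous failure rate
print(np.round(reliability, 3), np.round(hazard, 5))
```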
A Note on the Estimator of the Alpha Coefficient for Standardized Variables Under Normality
ERIC Educational Resources Information Center
Hayashi, Kentaro; Kamata, Akihito
2005-01-01
The asymptotic standard deviation (SD) of the alpha coefficient with standardized variables is derived under normality. The research shows that the SD of the standardized alpha coefficient becomes smaller as the number of examinees and/or items increase. Furthermore, this research shows that the degree of the dependence of the SD on the number of…
Are the Stress Drops of Small Earthquakes Good Predictors of the Stress Drops of Larger Earthquakes?
NASA Astrophysics Data System (ADS)
Hardebeck, J.
2017-12-01
Uncertainty in PSHA could be reduced through better estimates of stress drop for possible future large earthquakes. Studies of small earthquakes find spatial variability in stress drop; if large earthquakes have similar spatial patterns, their stress drops may be better predicted using the stress drops of small local events. This regionalization implies the variance with respect to the local mean stress drop may be smaller than the variance with respect to the global mean. I test this idea using the Shearer et al. (2006) stress drop catalog for M1.5-3.1 events in southern California. I apply quality control (Hauksson, 2015) and remove near-field aftershocks (Wooddell & Abrahamson, 2014). The standard deviation of the distribution of the log10 stress drop is reduced from 0.45 (factor of 3) to 0.31 (factor of 2) by normalizing each event's stress drop by the local mean. I explore whether a similar variance reduction is possible when using the Shearer catalog to predict stress drops of larger southern California events. For catalogs of moderate-sized events (e.g. Kanamori, 1993; Mayeda & Walter, 1996; Boyd, 2017), normalizing by the Shearer catalog's local mean stress drop does not reduce the standard deviation compared to the unmodified stress drops. I compile stress drops of larger events from the literature, and identify 15 M5.5-7.5 earthquakes with at least three estimates. Because of the wide range of stress drop estimates for each event, and the different techniques and assumptions, it is difficult to assign a single stress drop value to each event. Instead, I compare the distributions of stress drop estimates for pairs of events, and test whether the means of the distributions are statistically significantly different. The events divide into 3 categories: low, medium, and high stress drop, with significant differences in mean stress drop between events in the low and the high stress drop categories. I test whether the spatial patterns of the Shearer catalog stress drops can predict the categories of the 15 events. I find that they cannot, rather the large event stress drops are uncorrelated with the local mean stress drop from the Shearer catalog. These results imply that the regionalization of stress drops of small events does not extend to the larger events, at least with current standard techniques of stress drop estimation.
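A minimal sketch of the variance-reduction idea is given below on a synthetic catalog (not the Shearer et al. data): the standard deviation of log10 stress drop about a global mean is compared with that about a k-nearest-neighbour local mean, which stands in for whatever spatial smoothing the study actually used.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Synthetic catalog: epicentres (km) with a smooth regional trend in log10 stress drop
# plus event-to-event scatter (illustrative values only).
xy = rng.uniform(0, 300, size=(5000, 2))
regional = 0.6 + 0.004 * xy[:, 0] - 0.002 * xy[:, 1]      # log10(stress drop in MPa)
log_sd = regional + rng.normal(0, 0.3, size=len(xy))

# Local mean from the 50 nearest neighbours of each event (excluding the event itself).
tree = cKDTree(xy)
_, idx = tree.query(xy, k=51)
local_mean = log_sd[idx[:, 1:]].mean(axis=1)

print(f"std about global mean: {log_sd.std(ddof=1):.2f}")
print(f"std about local mean:  {(log_sd - local_mean).std(ddof=1):.2f}")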
Pandit, Jaideep J; Dexter, Franklin
2009-06-01
At multiple facilities including some in the United Kingdom's National Health Service, the following are features of many surgical-anesthetic teams: i) there is sufficient workload for each operating room (OR) list to almost always be fully scheduled; ii) the workdays are organized such that a single surgeon is assigned to each block of time (usually 8 h); iii) one team is assigned per block; and iv) hardly ever would a team "split" to do cases in more than one OR simultaneously. We used Monte-Carlo simulation using normal and Weibull distributions to estimate the times to complete lists of cases scheduled into such 8 h sessions. For each combination of mean and standard deviation, inefficiencies of use of OR time were determined for 10 h versus 8 h of staffing. When the mean actual hours of OR time used averages < or = 8 h 25 min, 8 h of staffing has higher OR efficiency than 10 h for all combinations of standard deviation and relative cost of over-run to under-run. When mean > or = 8 h 50 min, 10 h staffing has higher OR efficiency. For 8 h 25 min < mean < 8 h 50 min, the economic break-even point depends on conditions. For example, break-even is: (a) 8 h 27 min for Weibull, standard deviation of 60 min and relative cost of over-run to under-run of 2.0 versus (b) 8 h 48 min for normal, standard deviation of 0 min and relative cost ratio of 1.50. Although the simplest decision rule would be to staff for 8 h if the mean workload is < or = 8 h 40 min and to staff for 10 h otherwise, performance was poor. For example, for the Weibull distribution with mean 8 h 40 min, standard deviation 60 min, and relative cost ratio of 2.00, the inefficiency of use of OR time would be 34% larger if staffing were planned for 8 h instead of 10 h. For surgical teams with 8 h sessions, use the following decision rule for anesthesiology and OR nurse staffing. If actual hours of OR time used averages < or = 8 h 25 min, plan 8 h staffing. If average > or = 8 h 50 min, plan 10 h staffing. For averages in between, perform the full analysis of McIntosh et al. (Anesth Analg 2006;103:1499-516).
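A minimal sketch of the simulation follows, assuming the McIntosh-style inefficiency measure described in the abstract (under-utilized hours plus the relative cost ratio times over-utilized hours); the Weibull is moment-matched to the stated mean and standard deviation, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma as G

def weibull_params(mean, sd):
    """Shape and scale of a Weibull matching a target mean and SD (moment matching)."""
    cv2 = (sd / mean) ** 2
    k = brentq(lambda k: G(1 + 2 / k) / G(1 + 1 / k) ** 2 - 1 - cv2, 0.1, 50.0)
    return k, mean / G(1 + 1 / k)

def inefficiency(durations, staffed, cost_ratio):
    """Under-utilized hours plus cost_ratio times over-utilized hours (assumed measure)."""
    under = np.clip(staffed - durations, 0, None)
    over = np.clip(durations - staffed, 0, None)
    return (under + cost_ratio * over).mean()

rng = np.random.default_rng(4)
mean_h, sd_h, cost_ratio, n = 8 + 40 / 60, 1.0, 2.0, 100_000   # 8 h 40 min mean, 60 min SD

k, lam = weibull_params(mean_h, sd_h)
weib = lam * rng.weibull(k, n)
norm = rng.normal(mean_h, sd_h, n)

for name, draws in [("Weibull", weib), ("Normal", norm)]:
    print(name, {h: round(inefficiency(draws, h, cost_ratio), 3) for h in (8, 10)})
```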
The transmembrane gradient of the dielectric constant influences the DPH lifetime distribution.
Konopásek, I; Kvasnicka, P; Amler, E; Kotyk, A; Curatola, G
1995-11-06
The fluorescence lifetime distribution of 1,6-diphenyl-1,3,5-hexatriene (DPH) and 1-[4-(trimethylamino)phenyl]-6-phenyl-1,3,5-hexatriene (TMA-DPH) in egg-phosphatidylcholine liposomes was measured in normal and heavy water. The lower dielectric constant (by approximately 12%) of heavy water compared with normal water was employed to provide direct evidence that the drop of the dielectric constant along the membrane normal shifts the centers of the distribution of both DPH and TMA-DPH to higher values and sharpens the widths of the distribution. The profile of the dielectric constant along the membrane normal was not found to be a linear gradient (in contrast to [1]) but a more complex function. Presence of cholesterol in liposomes further shifted the center of the distributions to higher value and sharpened them. In addition, it resulted in a more gradient-like profile of the dielectric constant (i.e. linearization) along the normal of the membrane. The effect of the change of dielectric constant on the membrane proteins is discussed.
2016-10-01
[Fragment of a report table/figure: the discriminability of benign and malignant nodules was analyzed using a t-test and the normal distribution of each metric value; a "Surround Distribution" feature describes the distribution of 7 parenchymal exemplars (normal, honeycomb, reticular, ground glass, mild low attenuation area) surrounding the nodule.]
29 CFR 4044.73 - Lump sums and other alternative forms of distribution in lieu of annuities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... distribution is the present value of the normal form of benefit provided by the plan payable at normal... 29 Labor 9 2010-07-01 2010-07-01 false Lump sums and other alternative forms of distribution in... Benefits and Assets Non-Trusteed Plans § 4044.73 Lump sums and other alternative forms of distribution in...
Detection and Parameter Estimation of Chirped Radar Signals.
2000-01-10
[Fragment of report front matter and figure captions: the Wigner-Ville distribution (WVD), a member of Cohen's class of energy distributions, and the pseudo Wigner-Ville distribution (PWVD), which introduces a time window into the WVD definition to reduce interference terms; figure captions refer to frequency normalized to the sampling frequency and to the WVD with time normalized to the pulse length.]
Descriptive Quantitative Analysis of Rearfoot Alignment Radiographic Parameters.
Meyr, Andrew J; Wagoner, Matthew R
2015-01-01
Although the radiographic parameters of the transverse talocalcaneal angle (tTCA), calcaneocuboid angle (CCA), talar head uncovering (THU), calcaneal inclination angle (CIA), talar declination angle (TDA), lateral talar-first metatarsal angle (lTFA), and lateral talocalcaneal angle (lTCA) form the basis of the preoperative evaluation and procedure selection for pes planovalgus deformity, the so-called normal values of these measurements are not well established. The objectives of the present study were, first, to retrospectively evaluate the descriptive statistics of these radiographic parameters (tTCA, CCA, THU, CIA, TDA, lTFA, and lTCA) in a large population and, second, to determine an objective basis for defining "normal" versus "abnormal" measurements. As a secondary outcome, the relationship of these variables to the body mass index was assessed. Anteroposterior and lateral foot radiographs from 250 consecutive patients without a history of previous foot and ankle surgery and/or trauma were evaluated. The results revealed mean measurements of 24.12°, 13.20°, 74.32%, 16.41°, 26.64°, 8.37°, and 43.41° for the tTCA, CCA, THU, CIA, TDA, lTFA, and lTCA, respectively. These were generally in line with the reported historical normal values. Descriptive statistical analysis demonstrated that the tTCA, THU, and TDA met the standards to be considered normally distributed, but the CCA, CIA, lTFA, and lTCA demonstrated data characteristics of both parametric and nonparametric distributions. Furthermore, only the CIA (R = -0.2428) and lTCA (R = -0.2449) demonstrated substantial correlation with the body mass index. No distinct transitions in deformity progression were observed when the radiographic parameters were plotted against each other, so no quantitative basis for defining "normal" versus "abnormal" measurements emerged. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Seol, Hyunsoo
2016-06-01
The purpose of this study was to apply the bootstrap procedure to evaluate how the bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample sizes and test lengths in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch measurement and then a total of 1,000 replications were conducted to compute the bootstrapped CIs under each of 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, they also do not share the same critical range for the item and person misfit. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.
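A minimal sketch of the bootstrap-CI idea follows. The mean square of standardized residuals is used here as a stand-in for the Rasch infit/outfit statistics (which require a fitted Rasch model and are not reproduced), and the residuals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in "fit statistic": mean square of standardized residuals (an infit/outfit analogue).
def fit_statistic(z):
    return np.mean(z ** 2)

z = rng.normal(0, 1, size=200)          # hypothetical standardized residuals for one item
observed = fit_statistic(z)

# Nonparametric bootstrap: resample residuals with replacement and recompute the statistic.
boot = np.array([fit_statistic(rng.choice(z, size=z.size, replace=True)) for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile CI replacing a rule-of-thumb cutoff

print(f"fit = {observed:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```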
Development of accumulated heat stress index based on time-weighted function
NASA Astrophysics Data System (ADS)
Lee, Ji-Sun; Byun, Hi-Ryong; Kim, Do-Woo
2016-05-01
Heat stress accumulates in the human body when a person is exposed to a thermal condition for a long time. Considering this fact, we have defined the accumulated heat stress (AH) and have developed the accumulated heat stress index (AHI) to quantify the strength of heat stress. AH represents the heat stress accumulated over a 72-h period calculated by means of a time-weighted function, and the AHI is a standardized index developed by means of an equiprobability transformation (from a fitted Weibull distribution to the standard normal distribution). To verify the advantage offered by the AHI, it was compared with four thermal indices used by national governments: the humidex, the heat index, the wet-bulb globe temperature, and the perceived temperature. AH and the AHI were found to provide better detection of thermal danger and were more useful than the other indices. In particular, AH and the AHI detect deaths that were caused not only by extremely hot and humid weather, but also by the persistence of moderately hot and humid weather (for example, consecutive daily maximum temperatures of 28-32 °C), which the other indices fail to detect.
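A minimal sketch of the standardization step (Weibull fit followed by an equiprobability transformation to the standard normal) is shown below; the AH values are synthetic placeholders for the 72-h time-weighted sums.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical accumulated heat stress (AH) values; the real AH is a 72-h time-weighted sum.
ah = stats.weibull_min.rvs(c=1.8, scale=12.0, size=3000, random_state=rng)

# Fit a Weibull distribution and map each AH value to the standard normal via its probability.
c, loc, scale = stats.weibull_min.fit(ah, floc=0)
ahi = stats.norm.ppf(stats.weibull_min.cdf(ah, c, loc, scale))   # equiprobability transformation

print(f"AHI mean = {ahi.mean():.2f}, std = {ahi.std(ddof=1):.2f}")  # ~0 and ~1 by construction
```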
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.
2005-01-01
The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States) and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either a normal or a lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until the sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histograms, stem-and-leaf displays, and probability plots are recommended for rough judgement of the probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
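The sample-size effect on normality tests can be illustrated with the short sketch below; the "concentration" data are a synthetic lognormal mixture standing in for a declustered element, not the NGS data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Stand-in for a declustered element concentration: a lognormal bulk population plus a
# small high-grade subpopulation, mimicking the mixtures described in the abstract.
conc = np.concatenate([rng.lognormal(3.0, 0.5, 16000), rng.lognormal(5.0, 0.3, 500)])

for n in (16000, 4000, 1000, 300, 100):
    sub = rng.choice(conc, size=n, replace=False)
    p_norm = stats.normaltest(sub).pvalue               # D'Agostino-Pearson on raw values
    p_lognorm = stats.normaltest(np.log(sub)).pvalue    # same test after log transform
    print(f"n={n:>5}: p(normal)={p_norm:.3g}  p(lognormal)={p_lognorm:.3g}")
```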
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better model of the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validity of the normal-gamma model. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures that represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling paves the way for future investigations, in particular to examine the characteristics of pre-processing strategies.
Logistic Approximation to the Normal: The KL Rationale
ERIC Educational Resources Information Center
Savalei, Victoria
2006-01-01
A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…
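A minimal numerical sketch of this kind of criterion is shown below: it finds the logistic scale that minimizes the KL divergence from the standard normal to the logistic. The exact divergence direction and the resulting constant in the paper may differ; this is only an illustration of the optimization.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def kl_normal_to_logistic(s):
    """KL divergence from N(0,1) to a logistic distribution with scale s."""
    integrand = lambda x: stats.norm.pdf(x) * (stats.norm.logpdf(x) - stats.logistic.logpdf(x, scale=s))
    return quad(integrand, -np.inf, np.inf)[0]

res = minimize_scalar(kl_normal_to_logistic, bounds=(0.3, 1.2), method="bounded")
s_opt = res.x
# 1/s_opt plays the role of the familiar scaling constant multiplying the logistic argument.
print(f"KL-optimal logistic scale ≈ {s_opt:.4f}, scaling constant 1/s ≈ {1 / s_opt:.3f}")
```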
Bivariate normal, conditional and rectangular probabilities: A computer program with applications
NASA Technical Reports Server (NTRS)
Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.
1980-01-01
Some results for the bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given: routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
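A minimal sketch of the rectangular and conditional probability computations is given below with illustrative mean, covariance and limits; it is not the original Fortran routines.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
bvn = multivariate_normal(mean, cov)

def rect_prob(a, b):
    """P(a1 < X < b1, a2 < Y < b2) by inclusion-exclusion on the joint CDF."""
    (a1, a2), (b1, b2) = a, b
    return (bvn.cdf([b1, b2]) - bvn.cdf([a1, b2])
            - bvn.cdf([b1, a2]) + bvn.cdf([a1, a2]))

# Rectangle probability and a conditional normal probability P(Y < 1 | X = 0.5).
print(f"rectangle prob = {rect_prob((-1.0, -0.5), (1.0, 1.5)):.4f}")
cond_mean = mean[1] + cov[0, 1] / cov[0, 0] * (0.5 - mean[0])
cond_sd = np.sqrt(cov[1, 1] - cov[0, 1] ** 2 / cov[0, 0])
print(f"P(Y < 1 | X = 0.5) = {norm.cdf(1.0, cond_mean, cond_sd):.4f}")
```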
ERIC Educational Resources Information Center
Ho, Andrew D.; Yu, Carol C.
2015-01-01
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micerri similarly showed that the normality assumption is met rarely in educational and psychological…
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
Hendry, Gordon J.; Rafferty, Danny; Barn, Ruth; Gardner-Medwin, Janet; Turner, Debbie E.; Woodburn, James
2013-01-01
Purpose The objective of this study was to compare disease activity, impairments, disability, foot function and gait characteristics between a well described cohort of juvenile idiopathic arthritis (JIA) patients and normal healthy controls using a 7-segment foot model and three-dimensional gait analysis. Methods Fourteen patients with JIA (mean (standard deviation) age of 12.4 years (3.2)) and a history of foot disease and 10 healthy children (mean (standard deviation) age of 12.5 years (3.4)) underwent three-dimensional gait analysis and plantar pressure analysis to measure biomechanical foot function. Localised disease impact and foot-specific disease activity were determined using the juvenile arthritis foot disability index, rear- and forefoot deformity scores, and clinical and musculoskeletal ultrasound examinations respectively. Mean differences between groups with associated 95% confidence intervals were calculated using the t distribution. Results Mild-to-moderate foot impairments and disability but low levels of disease activity were detected in the JIA group. In comparison with healthy subjects, minor trends towards increased midfoot dorsiflexion and reduced lateral forefoot abduction within a 3–5° range were observed in patients with JIA. The magnitude and timing of remaining kinematic, kinetic and plantar pressure distribution variables during the stance phase were similar for both groups. Conclusion In children and adolescents with JIA, foot function as determined by a multi-segment foot model did not differ from that of normal age- and gender-matched subjects despite moderate foot impairments and disability scores. These findings may indicate that tight control of active foot disease may prevent joint destruction and associated structural and functional impairments. PMID:23142184
Tu, Shu-Ju; Wang, Shun-Ping; Cheng, Fu-Chou; Weng, Chia-En; Huang, Wei-Tzu; Chang, Wei-Jeng; Chen, Ying-Ju
2017-01-01
The literature shows that bone mineral density (BMD) and the geometric architecture of trabecular bone in the femur may be affected by inadequate dietary intake of Mg. In this study, we used microcomputed tomography (micro-CT) to characterize and quantify the impact of a low-Mg diet on femoral trabecular bones in mice. Four-week-old C57BL/6J male mice were randomly assigned to 2 groups and supplied either a normal or low-Mg diet for 8 weeks. Samples of plasma and urine were collected for biochemical analysis, and femur tissues were removed for micro-CT imaging. In addition to considering standard parameters, we regarded trabecular bone as a cylindrical rod and used computational algorithms for a technical assessment of the morphological characteristics of the bones. BMD (mg-HA/cm3) was obtained using a standard phantom. We observed a decline in the total tissue volume, bone volume, percent bone volume, fractal dimension, number of trabecular segments, number of connecting nodes, bone mineral content (mg-HA), and BMD, as well as an increase in the structural model index and surface-area-to-volume ratio in low-Mg mice. Subsequently, we examined the distributions of the trabecular segment length and radius, and a series of specific local maximums were identified. The biochemical analysis revealed a 43% (96%) decrease in Mg and a 40% (71%) decrease in Ca in plasma (urine excretion). This technical assessment performed using micro-CT revealed a lower population of femoral trabecular bones and a decrease in BMD at the distal metaphysis in the low-Mg mice. Examining the distributions of the length and radius of trabecular segments showed that the average length and radius of the trabecular segments in low-Mg mice are similar to those in normal mice.
Gough, Albert H.; Chen, Ning; Shun, Tong Ying; Lezon, Timothy R.; Boltz, Robert C.; Reese, Celeste E.; Wagner, Jacob; Vernetti, Lawrence A.; Grandis, Jennifer R.; Lee, Adrian V.; Stern, Andrew M.; Schurdak, Mark E.; Taylor, D. Lansing
2014-01-01
One of the greatest challenges in biomedical research, drug discovery and diagnostics is understanding how seemingly identical cells can respond differently to perturbagens including drugs for disease treatment. Although heterogeneity has become an accepted characteristic of a population of cells, in drug discovery it is not routinely evaluated or reported. The standard practice for cell-based, high content assays has been to assume a normal distribution and to report a well-to-well average value with a standard deviation. To address this important issue we sought to define a method that could be readily implemented to identify, quantify and characterize heterogeneity in cellular and small organism assays to guide decisions during drug discovery and experimental cell/tissue profiling. Our study revealed that heterogeneity can be effectively identified and quantified with three indices that indicate diversity, non-normality and percent outliers. The indices were evaluated using the induction and inhibition of STAT3 activation in five cell lines where the systems response including sample preparation and instrument performance were well characterized and controlled. These heterogeneity indices provide a standardized method that can easily be integrated into small and large scale screening or profiling projects to guide interpretation of the biology, as well as the development of therapeutics and diagnostics. Understanding the heterogeneity in the response to perturbagens will become a critical factor in designing strategies for the development of therapeutics including targeted polypharmacology. PMID:25036749
On a framework for generating PoD curves assisted by numerical simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subair, S. Mohamed, E-mail: prajagopal@iitm.ac.in; Agrawal, Shweta, E-mail: prajagopal@iitm.ac.in; Balasubramaniam, Krishnan, E-mail: prajagopal@iitm.ac.in
2015-03-31
The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
On a framework for generating PoD curves assisted by numerical simulations
NASA Astrophysics Data System (ADS)
Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar
2015-03-01
The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
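For orientation, a minimal sketch of the standard signal-response (â versus a) PoD formulation is shown below; it is not the Bayesian, simulation-assisted procedure of these papers. The data, decision threshold and parameter values are synthetic assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Synthetic signal-response data: log amplitude â versus log crack depth a.
a = rng.uniform(0.5, 5.0, 200)                      # crack depth, mm (illustrative)
log_ahat = 0.2 + 1.1 * np.log(a) + rng.normal(0, 0.35, a.size)

# Ordinary least squares fit of log(â) on log(a).
beta1, beta0, *_ = stats.linregress(np.log(a), log_ahat)
resid_sd = np.std(log_ahat - (beta0 + beta1 * np.log(a)), ddof=2)

# PoD(a) = P(â above the decision threshold) under the fitted linear-Gaussian model.
log_thresh = np.log(1.5)                            # log decision threshold (assumed)
a_grid = np.linspace(0.5, 5.0, 10)
pod = stats.norm.cdf((beta0 + beta1 * np.log(a_grid) - log_thresh) / resid_sd)

# a90: smallest depth on this grid with at least 90% detection probability.
print(np.round(pod, 3), "a90 ≈", a_grid[np.argmax(pod >= 0.9)])
```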
Characterizing pulmonary blood flow distribution measured using arterial spin labeling.
Henderson, A Cortney; Prisk, G Kim; Levin, David L; Hopkins, Susan R; Buxton, Richard B
2009-12-01
The arterial spin labeling (ASL) method provides images in which, ideally, the signal intensity of each image voxel is proportional to the local perfusion. For studies of pulmonary perfusion, the relative dispersion (RD, standard deviation/mean) of the ASL signal across a lung section is used as a reliable measure of flow heterogeneity. However, the RD of the ASL signals within the lung may systematically differ from the true RD of perfusion because the ASL image also includes signals from larger vessels, which can reflect the blood volume rather than blood flow if the vessels are filled with tagged blood during the imaging time. Theoretical studies suggest that the pulmonary vasculature exhibits a lognormal distribution for blood flow and thus an appropriate measure of heterogeneity is the geometric standard deviation (GSD). To test whether the ASL signal exhibits a lognormal distribution for pulmonary blood flow, determine whether larger vessels play an important role in the distribution, and extract physiologically relevant measures of heterogeneity from the ASL signal, we quantified the ASL signal before and after an intervention (head-down tilt) in six subjects. The distribution of ASL signal was better characterized by a lognormal distribution than a normal distribution, reducing the mean squared error by 72% (p < 0.005). Head-down tilt significantly reduced the lognormal scale parameter (p = 0.01) but not the shape parameter or GSD. The RD increased post-tilt and remained significantly elevated (by 17%, p < 0.05). Test case results and mathematical simulations suggest that RD is more sensitive than the GSD to ASL signal from tagged blood in larger vessels, a probable explanation of the change in RD without a statistically significant change in GSD. This suggests that the GSD is a useful measure of pulmonary blood flow heterogeneity with the advantage of being less affected by the ASL signal from tagged blood in larger vessels.
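A minimal sketch of the two heterogeneity measures is given below on hypothetical voxel values; it illustrates why the relative dispersion reacts to a few bright large-vessel voxels more strongly than the geometric standard deviation does, and compares normal and lognormal density fits by mean squared error as in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Hypothetical per-voxel ASL signal: lognormal flow plus a few bright large-vessel voxels.
flow = rng.lognormal(mean=1.0, sigma=0.45, size=5000)
vessels = rng.lognormal(mean=2.5, sigma=0.2, size=50)
asl = np.concatenate([flow, vessels])

def rd(x):   # relative dispersion: standard deviation / mean
    return x.std(ddof=1) / x.mean()

def gsd(x):  # geometric standard deviation
    return np.exp(np.log(x).std(ddof=1))

for name, x in [("flow only", flow), ("flow + vessels", asl)]:
    print(f"{name:15s} RD = {rd(x):.3f}  GSD = {gsd(x):.3f}")

# Compare normal and lognormal fits to the combined signal via density mean squared error.
hist, edges = np.histogram(asl, bins=60, density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
mse_norm = np.mean((hist - stats.norm.pdf(mid, *stats.norm.fit(asl))) ** 2)
shape, loc, scale = stats.lognorm.fit(asl, floc=0)
mse_logn = np.mean((hist - stats.lognorm.pdf(mid, shape, loc, scale)) ** 2)
print(f"density MSE: normal = {mse_norm:.4f}, lognormal = {mse_logn:.4f}")
```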
NASA Astrophysics Data System (ADS)
Lee, Jaeha; Tsutsui, Izumi
2017-05-01
We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.
Probability density functions for use when calculating standardised drought indices
NASA Astrophysics Data System (ADS)
Svensson, Cecilia; Prosdocimi, Ilaria; Hannaford, Jamie
2015-04-01
Time series of drought indices like the standardised precipitation index (SPI) and standardised flow index (SFI) require a statistical probability density function to be fitted to the observed (generally monthly) precipitation and river flow data. Once fitted, the quantiles are transformed to a Normal distribution with mean = 0 and standard deviation = 1. These transformed data are the SPI/SFI, which are widely used in drought studies, including for drought monitoring and early warning applications. Different distributions were fitted to rainfall and river flow data accumulated over 1, 3, 6 and 12 months for 121 catchments in the United Kingdom. These catchments represent a range of catchment characteristics in a mid-latitude climate. Both rainfall and river flow data have a lower bound at 0, as rains and flows cannot be negative. Their empirical distributions also tend to have positive skewness, and therefore the Gamma distribution has often been a natural and suitable choice for describing the data statistically. However, after transformation of the data to Normal distributions to obtain the SPIs and SFIs for the 121 catchments, the distributions are rejected in 11% and 19% of cases, respectively, by the Shapiro-Wilk test. Three-parameter distributions traditionally used in hydrological applications, such as the Pearson type 3 for rainfall and the Generalised Logistic and Generalised Extreme Value distributions for river flow, tend to make the transformed data fit better, with rejection rates of 5% or less. However, none of these three-parameter distributions have a lower bound at zero. This means that the lower tail of the fitted distribution may potentially go below zero, which would result in a lower limit to the calculated SPI and SFI values (as observations can never reach into this lower tail of the theoretical distribution). The Tweedie distribution can overcome the problems found when using either the Gamma or the above three-parameter distributions. The Tweedie is a three-parameter distribution which includes the Gamma distribution as a special case. It is bounded below at zero and has enough flexibility to fit most behaviours observed in the data. It does not always outperform the three-parameter distributions, but the rejection rates are similar. In addition, for certain parameter values the Tweedie distribution has a positive mass at zero, which means that ephemeral streams and months with zero rainfall can be modelled. It holds potential for wider application in drought studies in other climates and types of catchment.
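A minimal sketch of the SPI construction follows, using the traditional Gamma fit (the Tweedie alternative discussed above is not implemented here, as it has no standard scipy distribution); the accumulation values are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Hypothetical 3-month accumulated precipitation totals (mm) for one calendar month.
precip = stats.gamma.rvs(a=4.0, scale=45.0, size=60, random_state=rng)

# SPI: fit a distribution to the accumulations, then map quantiles to N(0, 1).
a, loc, scale = stats.gamma.fit(precip, floc=0)          # Gamma, the traditional choice
spi = stats.norm.ppf(stats.gamma.cdf(precip, a, loc, scale))

# Check whether the transformed values are acceptably normal (cf. the Shapiro-Wilk screening).
w, p = stats.shapiro(spi)
print(f"SPI mean = {spi.mean():.2f}, sd = {spi.std(ddof=1):.2f}, Shapiro-Wilk p = {p:.3f}")
```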
ERIC Educational Resources Information Center
Haberman, Shelby J.; von Davier, Matthias; Lee, Yi-Hsuan
2008-01-01
Multidimensional item response models can be based on multivariate normal ability distributions or on multivariate polytomous ability distributions. For the case of simple structure in which each item corresponds to a unique dimension of the ability vector, some applications of the two-parameter logistic model to empirical data are employed to…
Ding, Changfeng; Ma, Yibing; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang
2018-04-01
Cadmium (Cd) is an environmental toxicant with high rates of soil-plant transfer. It is essential to establish an accurate soil threshold for the implementation of soil management practices. This study takes root vegetable as an example to derive soil thresholds for Cd based on the food quality standard as well as health risk assessment using species sensitivity distribution (SSD). A soil type-specific bioconcentration factor (BCF, ratio of Cd concentration in plant to that in soil) generated from soil with a proper Cd concentration gradient was calculated and applied in the derivation of soil thresholds instead of a generic BCF value to minimize the uncertainty. The sensitivity variations of twelve root vegetable cultivars for accumulating soil Cd and the empirical soil-plant transfer model were investigated and developed in greenhouse experiments. After normalization, the hazardous concentrations from the fifth percentile of the distribution based on added Cd (HC5 add ) were calculated from the SSD curves fitted by Burr Type III distribution. The derived soil thresholds were presented as continuous or scenario criteria depending on the combination of soil pH and organic carbon content. The soil thresholds based on food quality standard were on average 0.7-fold of those based on health risk assessment, and were further validated to be reliable using independent data from field survey and published articles. The results suggested that deriving soil thresholds for Cd using SSD method is robust and also applicable to other crops as well as other trace elements that have the potential to cause health risk issues. Copyright © 2017 Elsevier B.V. All rights reserved.
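A minimal sketch of the SSD step (fit a distribution to normalized cultivar sensitivities and read off the 5th percentile as HC5) is shown below; the sensitivity values are invented, scipy's `burr` distribution is used as its Burr Type III implementation, and the log-normal fit is only an added cross-check, not part of the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical normalized cultivar sensitivities: added soil Cd (mg/kg) at which produce
# reaches the food quality limit, one value per cultivar (illustrative values only).
sensitivities = np.array([0.42, 0.55, 0.61, 0.68, 0.74, 0.81,
                          0.90, 0.98, 1.10, 1.25, 1.40, 1.62])

# Fit a Burr Type III SSD and take the 5th percentile as HC5.
c, d, loc, scale = stats.burr.fit(sensitivities, floc=0)    # scipy's 'burr' is Burr Type III
hc5_burr = stats.burr.ppf(0.05, c, d, loc, scale)

# A log-normal SSD as a cross-check when the Burr fit is unstable for small samples.
s, loc_ln, scale_ln = stats.lognorm.fit(sensitivities, floc=0)
hc5_ln = stats.lognorm.ppf(0.05, s, loc_ln, scale_ln)

print(f"HC5 (Burr III) = {hc5_burr:.2f} mg/kg, HC5 (log-normal) = {hc5_ln:.2f} mg/kg")
```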
Gay, Hiram A.; Barthold, H. Joseph; O’Meara, Elizabeth; Bosch, Walter R.; El Naqa, Issam; Al-Lozi, Rawan; Rosenthal, Seth A.; Lawton, Colleen; Lee, W. Robert; Sandler, Howard; Zietman, Anthony; Myerson, Robert; Dawson, Laura A.; Willett, Christopher; Kachnic, Lisa A.; Jhingran, Anuja; Portelance, Lorraine; Ryu, Janice; Small, William; Gaffney, David; Viswanathan, Akila N.; Michalski, Jeff M.
2012-01-01
Purpose To define a male and female pelvic normal tissue contouring atlas for Radiation Therapy Oncology Group (RTOG) trials. Methods and Materials One male pelvis computed tomography (CT) data set and one female pelvis CT data set were shared via the Image-Guided Therapy QA Center. A total of 16 radiation oncologists participated. The following organs at risk were contoured in both CT sets: anus, anorectum, rectum (gastrointestinal and genitourinary definitions), bowel NOS (not otherwise specified), small bowel, large bowel, and proximal femurs. The following were contoured in the male set only: bladder, prostate, seminal vesicles, and penile bulb. The following were contoured in the female set only: uterus, cervix, and ovaries. A computer program used the binomial distribution to generate 95% group consensus contours. These contours and definitions were then reviewed by the group and modified. Results The panel achieved consensus definitions for pelvic normal tissue contouring in RTOG trials with these standardized names: Rectum, AnoRectum, SmallBowel, Colon, BowelBag, Bladder, UteroCervix, Adnexa_R, Adnexa_L, Prostate, SeminalVesc, PenileBulb, Femur_R, and Femur_L. Two additional normal structures whose purpose is to serve as targets in anal and rectal cancer were defined: AnoRectumSig and Mesorectum. Detailed target volume contouring guidelines and images are discussed. Conclusions Consensus guidelines for pelvic normal tissue contouring were reached and are available as a CT image atlas on the RTOG Web site. This will allow uniformity in defining normal tissues for clinical trials delivering pelvic radiation and will facilitate future normal tissue complication research. PMID:22483697
Statistical analysis of the 70 meter antenna surface distortions
NASA Technical Reports Server (NTRS)
Kiedron, K.; Chian, C. T.; Chuang, K. L.
1987-01-01
Statistical analysis of surface distortions of the 70 meter NASA/JPL antenna, located at Goldstone, was performed. The purpose of this analysis is to verify whether deviations due to gravity loading can be treated as quasi-random variables with normal distribution. Histograms of the RF pathlength error distribution for several antenna elevation positions were generated. The results indicate that the deviations from the ideal antenna surface are not normally distributed. The observed density distribution for all antenna elevation angles is taller and narrower than the normal density, which results in large positive values of kurtosis and a significant amount of skewness. The skewness of the distribution changes from positive to negative as the antenna elevation changes from zenith to horizon.
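A check of the kind described above (skewness, excess kurtosis, and a formal normality test) can be sketched in a few lines; the pathlength errors here are simulated stand-ins, not the antenna measurements.
```python
# Minimal sketch of the normality check described above, applied to simulated
# RF pathlength errors (illustrative data, not the actual antenna survey).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# A taller-and-narrower-than-normal (leptokurtic) sample as a stand-in:
pathlength_error = rng.laplace(loc=0.0, scale=0.4, size=2000)

print("skewness        :", stats.skew(pathlength_error))
print("excess kurtosis :", stats.kurtosis(pathlength_error))  # 0 for a normal distribution
stat, p = stats.normaltest(pathlength_error)                  # D'Agostino-Pearson test
print(f"normality test p-value: {p:.3g}")
```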
Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo
2018-01-01
Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66–96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges’ Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard. PMID:29513690
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
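The core of the approach can be illustrated with the deterministic equivalent of a single chance constraint whose right-hand side is log-normal; the parameter values below are illustrative and not taken from the Erhai Lake case study.
```python
# Minimal sketch of the chance-constraint idea: if a constraint's random right-hand
# side (e.g., an allowable load driven by runoff) is log-normal rather than normal,
# the deterministic equivalent uses the log-normal quantile.
import numpy as np
from scipy import stats

mu, sigma = np.log(50.0), 0.6        # log-scale parameters of the random capacity b
alpha = 0.90                         # required constraint-satisfaction level

# P(a.x <= b) >= alpha  is equivalent to  a.x <= F_b^{-1}(1 - alpha)
b_quantile = stats.lognorm.ppf(1 - alpha, s=sigma, scale=np.exp(mu))
print(f"deterministic right-hand side at alpha={alpha}: {b_quantile:.2f}")

# For comparison, a normal assumption with the same mean and variance gives a
# different (possibly even negative) limit, illustrating the deviation avoided here.
mean_b = np.exp(mu + sigma**2 / 2)
var_b = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
print(f"normal-assumption limit: {stats.norm.ppf(1 - alpha, mean_b, np.sqrt(var_b)):.2f}")
```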
Scoring in genetically modified organism proficiency tests based on log-transformed results.
Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P
2006-01-01
The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
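A minimal sketch of the scoring step is shown below: results are log-transformed and z-scores are formed from robust estimates of the central value and dispersion. The example data and the use of the median and normalized IQR are assumptions; an actual scheme would use its own assigned value and target standard deviation.
```python
# Minimal sketch of z-scoring after log-transformation, as described above.
import numpy as np

results = np.array([0.8, 1.1, 0.9, 1.6, 1.3, 0.7, 2.4, 1.0, 1.2, 0.95])  # % GM content, illustrative
log_results = np.log10(results)

assigned = np.median(log_results)                                       # robust central value on log scale
sigma_p = 0.7413 * np.subtract(*np.percentile(log_results, [75, 25]))   # normalized IQR

z_scores = (log_results - assigned) / sigma_p
print(np.round(z_scores, 2))   # |z| <= 2 satisfactory, |z| >= 3 unsatisfactory (usual convention)
```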
Differential models of twin correlations in skew for body-mass index (BMI).
Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric
2018-01-01
Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.
Bancone, Germana; Gornsawun, Gornpan; Chu, Cindy S; Porn, Pen; Pal, Sampa; Bansil, Pooja; Domingo, Gonzalo J; Nosten, Francois
2018-01-01
Glucose-6-phosphate dehydrogenase (G6PD) deficiency is the most common enzymopathy in the human population, affecting an estimated 8% of the world population, especially those living in areas of past and present malaria endemicity. Decreased G6PD enzymatic activity is associated with drug-induced hemolysis and an increased risk of severe neonatal hyperbilirubinemia leading to brain damage. The G6PD gene is on the X chromosome; therefore, mutations cause enzymatic deficiency in hemizygous males and homozygous females, while the majority of heterozygous females have an intermediate activity (between 30% and 80% of normal) with a large distribution into the ranges of deficiency and normality. Current qualitative G6PD tests are unable to diagnose intermediate G6PD activities, which could hinder wide use of 8-aminoquinolines for Plasmodium vivax elimination. The aim of the study was to assess the diagnostic performance of the new CareStart G6PD quantitative biosensor. A total of 150 venous blood samples with G6PD deficient, intermediate and normal phenotypes were collected among healthy volunteers living along the north-western Thailand-Myanmar border. Samples were analyzed by complete blood count, by gold standard spectrophotometric assay using Trinity kits, and by the latest model of the CareStart G6PD biosensor, which analyzes both G6PD and hemoglobin. Bland-Altman comparison of the CareStart normalized G6PD values to those of the gold standard assay showed a strong bias in values, resulting in poor area-under-the-curve values for both the 30% and 80% thresholds. A receiver operating characteristic analysis identified threshold values for the CareStart product equivalent to the 30% and 80% gold standard values, with good sensitivity and specificity: 100% and 92% (for 30% G6PD activity) and 92% and 94% (for 80% activity), respectively. The CareStart G6PD biosensor represents a significant improvement for quantitative diagnosis of G6PD deficiency over previous versions. Further improvements and validation studies are required to assess its utility for informing radical cure decisions in malaria endemic settings.
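The threshold-setting step can be sketched with a standard ROC analysis, as below; the data are simulated, and the Youden-index rule for picking the operating point is an assumption, not necessarily the criterion used in the study.
```python
# Minimal sketch: find the biosensor reading that best separates samples below vs.
# above 30% of normal G6PD activity as defined by the reference assay. Simulated data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
ref_activity = rng.uniform(0, 150, size=300)             # % of normal, reference assay
biosensor = 0.9 * ref_activity + rng.normal(0, 12, 300)  # biased, noisier device reading

deficient = (ref_activity < 30).astype(int)              # positive class: <30% activity
# Lower biosensor readings indicate deficiency, so score with the negated reading.
fpr, tpr, thresholds = roc_curve(deficient, -biosensor)

youden = tpr - fpr                                       # Youden's J to pick the threshold
best = np.argmax(youden)
print(f"biosensor threshold ≈ {-thresholds[best]:.1f}, "
      f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```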
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
An Evaluation of Normal versus Lognormal Distribution in Data Description and Empirical Analysis
ERIC Educational Resources Information Center
Diwakar, Rekha
2017-01-01
Many existing methods of statistical inference and analysis rely heavily on the assumption that the data are normally distributed. However, the normality assumption is not fulfilled when dealing with data which does not contain negative values or are otherwise skewed--a common occurrence in diverse disciplines such as finance, economics, political…
Computer program determines exact two-sided tolerance limits for normal distributions
NASA Technical Reports Server (NTRS)
Friedman, H. A.; Webb, S. R.
1968-01-01
Computer program determines by numerical integration the exact statistical two-sided tolerance limits, when the proportion between the limits is at least a specified number. The program is limited to situations in which the underlying probability distribution for the population sampled is the normal distribution with unknown mean and variance.
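The program above obtains exact limits by numerical integration; as a rough stand-in, the widely used closed-form approximation attributed to Howe gives limits of the same form, sketched below on a simulated sample.
```python
# Sketch of two-sided normal tolerance limits via Howe's closed-form approximation
# (an approximation, not a reproduction of the exact numerical-integration algorithm).
import numpy as np
from scipy import stats

def two_sided_tolerance_limits(x, proportion=0.95, confidence=0.95):
    """Interval mean +/- k*sd intended to contain `proportion` of the normal
    population with the stated confidence (Howe's approximation)."""
    n = len(x)
    nu = n - 1
    z = stats.norm.ppf((1 + proportion) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, nu)   # lower-tail chi-square quantile
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2)
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

sample = stats.norm.rvs(loc=10, scale=2, size=30, random_state=0)
print(two_sided_tolerance_limits(sample))
```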
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.
Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica
2014-05-01
The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases. And as skewness in absolute value and kurtosis increases, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
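The contrast the article draws can be reproduced in a few lines: a first-order (Sobel-type) normal-theory interval for the indirect effect versus an interval from the asymmetric distribution of the product, here approximated by Monte Carlo. The coefficients and standard errors are illustrative.
```python
# Minimal sketch: normal-theory vs. distribution-of-the-product confidence intervals
# for an indirect effect a*b. Coefficients and standard errors are illustrative.
import numpy as np
from scipy import stats

a, se_a = 0.40, 0.15     # path X -> M
b, se_b = 0.35, 0.12     # path M -> Y (adjusted for X)
indirect = a * b

# Normal-theory interval using the first-order (Sobel) standard error.
se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
normal_ci = indirect + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_ab

# Distribution-of-the-product interval via simulation of a*b.
rng = np.random.default_rng(0)
draws = rng.normal(a, se_a, 100_000) * rng.normal(b, se_b, 100_000)
product_ci = np.percentile(draws, [2.5, 97.5])

print("normal theory CI       :", np.round(normal_ci, 3))
print("distribution of product:", np.round(product_ci, 3))  # typically asymmetric about a*b
```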
Stick-slip behavior in a continuum-granular experiment.
Geller, Drew A; Ecke, Robert E; Dahmen, Karin A; Backhaus, Scott
2015-12-01
We report moment distribution results from a laboratory experiment, similar in character to an isolated strike-slip earthquake fault, consisting of sheared elastic plates separated by a narrow gap filled with a two-dimensional granular medium. Local measurement of strain displacements of the plates at 203 spatial points located adjacent to the gap allows direct determination of the event moments and their spatial and temporal distributions. We show that events consist of spatially coherent, larger motions and spatially extended (noncoherent), smaller events. The noncoherent events have a probability distribution of event moment consistent with an M^(-3/2) power-law scaling with Poisson-distributed recurrence times. Coherent events have a log-normal moment distribution and mean temporal recurrence. As the applied normal pressure increases, there are more coherent events and their log-normal distribution broadens and shifts to larger average moment.
Real-time modeling and simulation of distribution feeder and distributed resources
NASA Astrophysics Data System (ADS)
Singh, Pawan
The analysis of the electrical system dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis to improve the electrical system. Real-time computer simulations can answer many what-if scenarios in the existing or the proposed power system. In this thesis, the standard IEEE 13-node distribution feeder is developed and validated on the real-time platform OPAL-RT. The concept and the challenges of real-time simulation are studied and addressed. Distributed energy resources, including commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on the real-time platform. A microgrid encompasses a portion of an electric power distribution system located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. The microgrid can operate in grid-connected and islanded modes; both operating modes are studied in the last chapter. Finally, a simple microgrid controller for energy management and protection of the microgrid is developed, modeled, and simulated on the real-time platform.
Wéra, A-C; Barazzuol, L; Jeynes, J C G; Merchant, M J; Suzuki, M; Kirkby, K J
2014-08-07
It is well known that broad beam irradiation with heavy ions leads to variation in the number of hits received by each cell, as the distribution of particles follows Poisson statistics. Although the nucleus area determines the number of hits received for a given dose, variation amongst the irradiated cell population is generally not considered. In this work, we investigate the effect of the nucleus area distribution on the survival fraction. More specifically, this work aims to explain the deviation, or tail, which might be observed in the survival fraction at high irradiation doses. For this purpose, the nucleus area distribution was added to the beam Poisson statistics and the Linear-Quadratic model in order to fit the experimental data. As shown in this study, nucleus size variation, and the associated Poisson statistics, can lead to an upward survival trend after broad beam irradiation. The influence of the distribution parameters (mean area and standard deviation) was studied using a normal distribution, along with the Linear-Quadratic model parameters (α and β). Finally, the model proposed here was successfully tested against the survival fraction of LN18 cells irradiated with an 85 keV µm-1 carbon-ion broad beam for which the distribution of nucleus areas had been determined.
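A sketch of the modelling idea, under assumed parameter values, is shown below: the Linear-Quadratic survival is averaged over a Poisson number of traversals whose mean depends on nucleus area, with the area normally distributed. The LQ parameters and area distribution are illustrative assumptions, and the dose-per-traversal conversion assumes unit density; none of this reproduces the paper's fitted model.
```python
# Minimal sketch: LQ survival averaged over Poisson traversal counts and a normal
# distribution of nucleus areas. All parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

alpha, beta = 0.2, 0.05          # LQ parameters (Gy^-1, Gy^-2), illustrative
let = 85.0                       # keV/um, as in the carbon-ion example
mean_area, sd_area = 100.0, 25.0 # nucleus area distribution (um^2), illustrative

def survival(dose, area):
    d1 = 0.1602 * let / area                 # dose per traversal (Gy), unit density
    lam = dose / d1                          # mean number of traversals for this area
    n = np.arange(0, int(lam + 10 * np.sqrt(lam) + 20))
    p_n = stats.poisson.pmf(n, lam)
    return np.sum(p_n * np.exp(-alpha * n * d1 - beta * (n * d1) ** 2))

def population_survival(dose, n_grid=200):
    areas = np.linspace(mean_area - 4 * sd_area, mean_area + 4 * sd_area, n_grid)
    areas = areas[areas > 0]
    weights = stats.norm.pdf(areas, mean_area, sd_area)
    surv = np.array([survival(dose, a) for a in areas])
    return np.sum(weights * surv) / np.sum(weights)

for d in (1, 2, 4, 6, 8):
    print(d, "Gy ->", round(population_survival(d), 4))
```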
Hanson, James V M; Sromicki, Julian; Mangold, Mario; Golling, Matthias; Gerth-Kahlert, Christina
2016-04-01
Laser pointer devices have become increasingly available in recent years, and their misuse has caused a number of ocular injuries. Online distribution channels permit trade in devices which may not conform to international standards in terms of their output power and spectral content. We present a case study of ocular injury caused by one such device. The patient was examined approximately 9 months following laser exposure using full-field and multifocal electroretinography (ERG and MF-ERG), electrooculography (EOG), and optical coherence tomography (OCT), in addition to a full ophthalmological examination. MF-ERG, OCT, and the ophthalmological examination were repeated 7 months after the first examination. The output of the laser pointer was measured. Despite severe focal damage to the central retina visible fundoscopically and with OCT, all electrophysiological examinations were quantitatively normal; however, qualitatively the central responses of the MF-ERG appeared slightly reduced. When the MF-ERG was repeated 7 months later, all findings were normal. The laser pointer was found to emit both visible and infrared radiation in dangerous amounts. Loss of retinal function following laser pointer injury may not always be detectable using standard electrophysiological tests. Exposure to non-visible radiation should be considered as a possible aggravating factor when assessing cases of alleged laser pointer injury.
Quantile regression via vector generalized additive models.
Yee, Thomas W
2004-07-30
One of the most popular methods for quantile regression is the LMS method of Cole and Green. The method naturally falls within a penalized likelihood framework, and consequently allows for considerable flexibility because all three parameters may be modelled by cubic smoothing splines. The model is also very understandable: for a given value of the covariate, the LMS method applies a Box-Cox transformation to the response in order to transform it to standard normality; to obtain the quantiles, an inverse Box-Cox transformation is applied to the quantiles of the standard normal distribution. The purposes of this article are three-fold. Firstly, LMS quantile regression is presented within the framework of the class of vector generalized additive models. This confers a number of advantages, such as a unifying theory and estimation process. Secondly, a new LMS method based on the Yeo-Johnson transformation is proposed, which has the advantage that the response is not restricted to be positive. Lastly, this paper describes a software implementation of three LMS quantile regression methods in the S language. This includes the LMS-Yeo-Johnson method, which is estimated efficiently by a new numerical integration scheme. The LMS-Yeo-Johnson method is illustrated by way of a large cross-sectional data set from a New Zealand working population. Copyright 2004 John Wiley & Sons, Ltd.
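The quantile step of the LMS method reduces to a one-line inverse Box-Cox transform of standard normal quantiles, sketched below with illustrative (not fitted) L, M and S values.
```python
# Minimal sketch: map LMS parameters (Box-Cox power L, median M, coefficient of
# variation S) at one covariate value to quantiles via the inverse Box-Cox transform.
import numpy as np
from scipy import stats

def lms_quantile(p, L, M, S):
    z = stats.norm.ppf(p)
    if np.isclose(L, 0.0):
        return M * np.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

L, M, S = -0.15, 22.5, 0.12   # illustrative LMS values at one covariate level
for p in (0.05, 0.50, 0.95):
    print(f"{p:.2f} quantile: {lms_quantile(p, L, M, S):.2f}")
```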
Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images
Osadebey, Michael; Pedersen, Marius; Arnold, Douglas; Wendel-Mitoraj, Katrina
2017-01-01
We describe a postacquisition, attribute-based quality assessment method for brain magnetic resonance imaging (MRI) images. It is based on the application of Bayes theory to the relationship between entropy and image quality attributes. The entropy feature image of a slice is segmented into low- and high-entropy regions. For each entropy region, there are three separate observations of the contrast, standard deviation, and sharpness quality attributes. The quality index for a quality attribute is the posterior probability of an entropy region given the corresponding region in a feature image where that quality attribute is observed. Prior belief in each entropy region is determined from the normalized total clique potential (TCP) energy of the slice. For TCP below a predefined threshold, the prior probability for a region is determined by the deviation of its percentage composition in the slice from a standard normal distribution built from 250 MRI volumes provided by the Alzheimer's Disease Neuroimaging Initiative. For TCP above the threshold, the prior is computed using a mathematical model that describes the TCP-noise level relationship in brain MRI images. Our proposed method assesses the image quality of each entropy region and of the global image. Experimental results demonstrate good correlation with the subjective opinions of radiologists for different types and levels of quality distortions. PMID:28630885
Elderly quality of life impacted by traditional chinese medicine techniques
Figueira, Helena A; Figueira, Olivia A; Figueira, Alan A; Figueira, Joana A; Giani, Tania S; Dantas, Estélio HM
2010-01-01
Background: The shift in age structure is having a profound impact, suggesting that the aged should be consulted as reporters on the quality of their own lives. Objectives: The aim of this research was to establish the possible impact of traditional Chinese medicine (TCM) techniques on the quality of life (QOL) of the elderly. Sample: Two non-selected, volunteer groups of Rio de Janeiro municipality inhabitants: a control group (36 individuals), not using TCM, and an experimental group (28 individuals), using TCM at the ABACO/Sohaku-in Institute, Brazil. Methods: A questionnaire on elderly QOL devised by the World Health Organization, the WHOQOL-Old, was adopted, and descriptive statistical techniques were used: mean and standard deviation. The Shapiro–Wilk test checked the normality of the distribution. Based on the normality of the distributions, the intergroup comparison used the Student t test for facets 2, 4, 5, 6, and the total score, and the Mann–Whitney U rank test for facets 1 and 3, both tests analyzing the P value between the experimental and control groups. The significance level adopted was 95% (P < 0.05). Results: The experimental group reported the highest QOL for every facet and the total score. Conclusions: The results suggest that TCM raises the level of QOL. PMID:21103400
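The test-selection logic described above can be sketched as follows; the facet scores are simulated placeholders, and the rule of falling back to the Mann-Whitney U test when either group fails the Shapiro-Wilk test is an assumption about the workflow.
```python
# Minimal sketch: check normality with Shapiro-Wilk, then compare groups with
# Student's t test (normal) or the Mann-Whitney U test (non-normal). Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(60, 12, 36)        # WHOQOL-Old facet scores, illustrative
experimental = rng.normal(67, 12, 28)

def compare(a, b, alpha=0.05):
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        test, res = "Student t", stats.ttest_ind(a, b)
    else:
        test, res = "Mann-Whitney U", stats.mannwhitneyu(a, b)
    return test, res.pvalue

print(compare(control, experimental))
```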
Stevanović, Vladica; Gulan, Ljiljana; Milenković, Biljana; Valjarević, Aleksandar; Zeremski, Tijana; Penjišević, Ivana
2018-03-13
Activity levels of natural and artificial radionuclides and the content of ten heavy metals (As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Zn and Hg) were investigated in 41 soil samples collected from the Toplica region in the southern part of Serbia. Radioactivity was determined by gamma spectrometry using an HPGe detector. The obtained mean activity concentrations ± standard deviations of the radionuclides 226Ra, 232Th, 40K and 137Cs were 29.9 ± 9.4, 36.6 ± 11.5, 492 ± 181 and 13.4 ± 18.7 Bq kg-1, respectively. According to the Shapiro-Wilk normality test, the activity concentrations of 226Ra and 232Th were consistent with a normal distribution. External exposure from radioactivity was estimated through dose and radiation risk assessments. Concentrations of heavy metals were measured using ICP-OES, and their health risks were then determined. Enrichment by heavy metals and the pollution level in soils were evaluated using the enrichment factor, the geoaccumulation index (Igeo), the pollution index and the pollution load index. Based on a GIS approach, spatial distribution maps of radionuclide and heavy metal contents were made. The Spearman correlation coefficient was used for correlation analysis between radionuclide activity concentrations and heavy metal contents.
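Two of the pollution indices named above can be computed directly from their commonly cited definitions, as sketched below; the sample and background concentrations, and the choice of Fe as the reference element for the enrichment factor, are illustrative assumptions.
```python
# Minimal sketch of two pollution indices using their commonly cited definitions:
# geoaccumulation index (with the 1.5 background factor) and enrichment factor.
import numpy as np

def igeo(c_sample, c_background):
    """Geoaccumulation index: log2 of the sample/background ratio damped by 1.5."""
    return np.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, c_background, ref_sample, ref_background):
    """(Metal/reference) in the sample relative to the same ratio in background soil."""
    return (c_sample / ref_sample) / (c_background / ref_background)

cd_sample, cd_background = 0.9, 0.3          # mg/kg, illustrative
fe_sample, fe_background = 28000.0, 30000.0  # reference element (e.g. Fe), illustrative

print("Igeo(Cd):", round(igeo(cd_sample, cd_background), 2))
print("EF(Cd)  :", round(enrichment_factor(cd_sample, cd_background, fe_sample, fe_background), 2))
```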
Rescaled earthquake recurrence time statistics: application to microrepeaters
NASA Astrophysics Data System (ADS)
Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru
2009-01-01
Slip on major faults primarily occurs during `characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining applicable statistics is the short sequences of characteristic earthquakes that are available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions, in both cases we rescale the data using means and standard deviations. We test our approach utilizing sequences of microrepeaters, micro-earthquakes which recur in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results for the analysis of recurrence times for several microrepeater sequences from Parkfield, CA as well as NE Japan. We find that, once the respective sequence can be considered to be of sufficient stationarity, the statistics can be well fitted by either a Weibull or a log-normal distribution. We clearly demonstrate this fact by our technique of rescaled combination. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.
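The pooling idea can be sketched as follows: each short sequence is rescaled onto a common dimensionless scale, the rescaled recurrence times are superimposed, and Weibull and log-normal fits are compared. The sequences below are simulated, and normalizing by the sequence mean is a simplification of the paper's mean-and-standard-deviation rescaling.
```python
# Minimal sketch: rescale simulated microrepeater sequences, pool them, and compare
# Weibull and log-normal maximum-likelihood fits to the pooled recurrence times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sequences = [rng.weibull(1.8, size=n) * s                  # synthetic recurrence times
             for n, s in [(40, 1.0), (25, 2.5), (60, 0.7)]]

pooled = np.concatenate([t / t.mean() for t in sequences]) # dimensionless recurrence times

for name, dist in [("Weibull", stats.weibull_min), ("log-normal", stats.lognorm)]:
    params = dist.fit(pooled, floc=0)
    ll = np.sum(dist.logpdf(pooled, *params))
    print(f"{name:10s} log-likelihood: {ll:.1f}")
```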
Outlier Detection in Urban Air Quality Sensor Networks.
van Zoest, V M; Stein, A; Hoek, G
2018-01-01
Low-cost urban air quality sensor networks are increasingly used to study the spatio-temporal variability in air pollutant concentrations. Recently installed low-cost urban sensors, however, are more prone to result in erroneous data than conventional monitors, e.g., leading to outliers. Commonly applied outlier detection methods are unsuitable for air pollutant measurements that have large spatial and temporal variations as occur in urban areas. We present a novel outlier detection method based upon a spatio-temporal classification, focusing on hourly NO 2 concentrations. We divide a full year's observations into 16 spatio-temporal classes, reflecting urban background vs. urban traffic stations, weekdays vs. weekends, and four periods per day. For each spatio-temporal class, we detect outliers using the mean and standard deviation of the normal distribution underlying the truncated normal distribution of the NO 2 observations. Applying this method to a low-cost air quality sensor network in the city of Eindhoven, the Netherlands, we found 0.1-0.5% of outliers. Outliers could reflect measurement errors or unusual high air pollution events. Additional evaluation using expert knowledge is needed to decide on treatment of the identified outliers. We conclude that our method is able to detect outliers while maintaining the spatio-temporal variability of air pollutant concentrations in urban areas.
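For a single spatio-temporal class, the outlier rule can be sketched as below: the mean and standard deviation of the normal distribution underlying zero-truncated observations are estimated by maximum likelihood, and values far outside that distribution are flagged. The data and the three-standard-deviation cut-off are illustrative assumptions.
```python
# Minimal sketch: fit the normal distribution underlying zero-truncated NO2 values
# by maximum likelihood, then flag observations far outside it. Simulated data.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
no2 = rng.normal(25, 12, 2000)
no2 = no2[no2 > 0]                           # observations truncated at zero
no2 = np.append(no2, [180.0, 210.0])         # two injected outliers

def neg_loglik(params, x, lower=0.0):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    a = (lower - mu) / sigma                 # standardized truncation point
    return -np.sum(stats.truncnorm.logpdf(x, a, np.inf, loc=mu, scale=sigma))

res = optimize.minimize(neg_loglik, x0=[np.mean(no2), np.std(no2)], args=(no2,),
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x
outliers = no2[np.abs(no2 - mu_hat) > 3 * sigma_hat]
print(f"mu={mu_hat:.1f}, sigma={sigma_hat:.1f}, flagged {outliers.size} of {no2.size}")
```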
Ha, Eun-Ho
2018-04-23
Standardized patients (SPs) boost self-confidence, improve problem solving, enhance critical thinking, and advance clinical judgment of nursing students. The aim of this study was to examine nursing students' experience with SPs in simulation-based learning. Q-methodology was used. Department of nursing in Seoul, South Korea. Fourth-year undergraduate nursing students (n = 47). A total of 47 fourth-year undergraduate nursing students ranked 42 Q statements about experiences with SPs into a normal distribution grid. The following three viewpoints were obtained: 1) SPs are helpful for patient care (patient-centered view), 2) SP roles are important for nursing students' learning (SP role-centered view), and 3) SPs can promote the competency of nursing students (student-centered view). These results indicate that SPs may improve nursing students' confidence and nursing competency. Professors should reflect these three viewpoints in simulation-based learning to engage SPs effectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Goldsack, Stephen J.; Holzbach-Valero, A. A.; Waldrop, Raymond S.; Volz, Richard A.
1991-01-01
This paper describes how the main features of the proposed Ada language extensions intended to support distribution, offered as possible solutions for Ada9X, can be implemented by transformation into standard Ada83. We start by summarizing the features proposed in a paper (Gargaro et al., 1990) which constitutes the definition of the extensions. For convenience we have called the language in its modified form AdaPT, which might be interpreted as Ada with partitions. These features were carefully chosen to provide support for the construction of executable modules for execution in nodes of a network of loosely coupled computers, but flexibly configurable for different network architectures and for recovery following failure, or adapting to mode changes. The intention in their design was to provide extensions which would not impact adversely on the normal use of Ada, and would fit well in style and feel with the existing standard. We begin by summarizing the features introduced in AdaPT.
Response time accuracy in Apple Macintosh computers.
Neath, Ian; Earle, Avery; Hallett, Darcy; Surprenant, Aimée M
2011-06-01
The accuracy and variability of response times (RTs) collected on stock Apple Macintosh computers using USB keyboards was assessed. A photodiode detected a change in the screen's luminosity and triggered a solenoid that pressed a key on the keyboard. The RTs collected in this way were reliable, but could be as much as 100 ms too long. The standard deviation of the measured RTs varied between 2.5 and 10 ms, and the distributions approximated a normal distribution. Surprisingly, two recent Apple-branded USB keyboards differed in their accuracy by as much as 20 ms. The most accurate RTs were collected when an external CRT was used to display the stimuli and Psychtoolbox was able to synchronize presentation with the screen refresh. We conclude that RTs collected on stock iMacs can detect a difference as small as 5-10 ms under realistic conditions, and this dictates which types of research should or should not use these systems.
Measurement of top quark polarization in tt̄ lepton + jets final states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abazov, V. M.; Abbott, B.; Acharya, B. S.
We present a measurement of top quark polarization in tt̄ pair production in pp̄ collisions at √s = 1.96 TeV using data corresponding to 9.7 fb⁻¹ of integrated luminosity recorded with the D0 detector at the Fermilab Tevatron Collider. We consider final states containing a lepton and at least three jets. The polarization is measured through the distribution of lepton angles along three axes: the beam axis, the helicity axis, and the transverse axis normal to the tt̄ production plane. This is the first measurement of top quark polarization at the Tevatron using lepton + jets final states and the first measurement of the transverse polarization in tt̄ production. The observed distributions are consistent with standard model predictions of nearly no polarization.
Exposing strangeness: Projections for kaon electromagnetic form factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Fei; Chang, Lei; Liu, Yu-Xin
A continuum approach to the kaon and pion bound-state problems is used to reveal their electromagnetic structure. For both systems, when used with parton distribution amplitudes appropriate to the scale of the experiment, Standard Model hard-scattering formulas are accurate to within 25% at momentum transfers Q² ≈ 8 GeV². There are measurable differences between the distribution of strange and normal matter within the kaons, e.g., the ratio of their separate contributions reaches a peak value of 1.5 at Q² ≈ 6 GeV². Its subsequent Q² evolution is accurately described by the hard-scattering formulas. Projections for the ratio of kaon and pion form factors at timelike momenta beyond the resonance region are also presented. In conclusion, these results and projections should prove useful in planning next-generation experiments.
Regression away from the mean: Theory and examples.
Schwarz, Wolf; Reike, Dennis
2018-02-01
Using a standard repeated measures model with arbitrary true score distribution and normal error variables, we present some fundamental closed-form results which explicitly indicate the conditions under which regression effects towards (RTM) and away from the mean are expected. Specifically, we show that for skewed and bimodal distributions many or even most cases will show a regression effect that is in expectation away from the mean, or that is not just towards but actually beyond the mean. We illustrate our results in quantitative detail with typical examples from experimental and biometric applications, which exhibit a clear regression away from the mean ('egression from the mean') signature. We aim not to repeal cautionary advice against potential RTM effects, but to present a balanced view of regression effects, based on a clear identification of the conditions governing the form that regression effects take in repeated measures designs. © 2017 The British Psychological Society.
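The effect is easy to reproduce by simulation: with a right-skewed true-score distribution and normal measurement error, cases observed between the mode and the mean tend, on the second measurement, to move further from the grand mean. All parameter choices below are illustrative.
```python
# Minimal sketch: with skewed true scores plus normal error, a band of first-test
# scores between the mode and the mean can shift away from the grand mean at retest.
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
true_score = rng.lognormal(mean=0.0, sigma=0.8, size=n)   # skewed true scores
x1 = true_score + rng.normal(0, 0.3, n)                   # two measurements with
x2 = true_score + rng.normal(0, 0.3, n)                   # independent normal errors

grand_mean = x1.mean()
# Cases observed between the mode (~0.53) and the mean (~1.38) of the true scores:
band = (x1 > 0.8) & (x1 < 1.2)

print("grand mean             :", round(grand_mean, 3))
print("band mean at time 1    :", round(x1[band].mean(), 3))
print("band mean at time 2    :", round(x2[band].mean(), 3))  # shifts away from the grand mean
```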
Cohn, T.A.; England, J.F.; Berenbrock, C.E.; Mason, R.R.; Stedinger, J.R.; Lamontagne, J.R.
2013-01-01
The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as “less-than” values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.
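A sketch of low-outlier screening in this spirit is shown below, applied to log-transformed peak flows. The critical-value formula is the commonly quoted 10%-significance approximation from U.S. flood-frequency guidance and is an assumption here; the generalized multiple-outlier procedure in the paper differs in detail.
```python
# Minimal sketch of Grubbs-Beck-style low-outlier screening on log10 peak flows.
# The K_N approximation and the peak-flow values are illustrative assumptions.
import numpy as np

def low_outlier_threshold(peaks):
    """Return the flow below which peaks are flagged as potential low outliers."""
    logq = np.log10(peaks)
    n = len(logq)
    k_n = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    return 10 ** (logq.mean() - k_n * logq.std(ddof=1))

peaks = np.array([12, 340, 410, 385, 520, 610, 295, 450, 700, 380,
                  560, 230, 640, 480, 415, 505, 370, 590, 330, 445])  # cfs, illustrative
threshold = low_outlier_threshold(peaks)
print("threshold:", round(threshold, 1))
print("flagged  :", peaks[peaks < threshold])
```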
Use of multivariate measures of disability in health surveys.
Charlton, J R; Patrick, D L; Peach, H
1983-01-01
It has been claimed that the aggregation of information from several areas of life into a small set of global measures has certain advantages for describing disability. Global measures of disability were constructed from a modified version of an existing health survey instrument and the sickness impact profile (SIP) and their properties were tested. The disability items grouped satisfactorily into five global measures (physical, psychosocial, eating, communication, and work). All disability measures (global and original category scores) were poor predictors of service use by individuals but were related as expected to age and number of medical conditions. The global measures generally had lower standard errors and better repeatability. All scores exhibit J-shaped distributions for cross sectional data but the change in global measures over time was consistent with the normal distribution. Preferably, both global and category measures should be used for comparing changes over time between groups of individuals. PMID:6655420
Improved Root Normal Size Distributions for Liquid Atomization
2015-11-01
Security Standards and Best Practice Considerations for Quantum Key Distribution (QKD)
2012-03-01
The normalization of deviance in healthcare delivery
Banja, John
2009-01-01
Many serious medical errors result from violations of recognized standards of practice. Over time, even egregious violations of standards of practice may become “normalized” in healthcare delivery systems. This article describes what leads to this normalization and explains why flagrant practice deviations can persist for years, despite the importance of the standards at issue. This article also provides recommendations to aid healthcare organizations in identifying and managing unsafe practice deviations before they become normalized and pose genuine risks to patient safety, quality care, and employee morale. PMID:20161685
Viscoelastic analysis of adhesively bonded joints
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.
1981-01-01
In this paper an adhesively bonded lap joint is analyzed by assuming that the adherends are elastic and the adhesive is linearly viscoelastic. After formulating the general problem a specific example for two identical adherends bonded through a three parameter viscoelastic solid adhesive is considered. The standard Laplace transform technique is used to solve the problem. The stress distribution in the adhesive layer is calculated for three different external loads namely, membrane loading, bending, and transverse shear loading. The results indicate that the peak value of the normal stress in the adhesive is not only consistently higher than the corresponding shear stress but also decays slower.
Statistical models of power-combining circuits for O-type traveling-wave tube amplifiers
NASA Astrophysics Data System (ADS)
Kats, A. M.; Klinaev, Iu. V.; Gleizer, V. V.
1982-11-01
The design outlined here allows for imbalances in the power of the devices being combined and for differences in phase. It is shown that the coefficient of combination is described by a beta distribution of the first type when a small number of devices are being combined and that the coefficient is asymptotically normal in relation to both the number of devices and the phase variance of the tube's output signals. Relations are derived that make it possible to calculate the efficiency of a power-combining circuit and the reproducibility of the design parameters when standard devices are used.
An Output Approach to Incentive Reimbursement for Hospitals
Ro, Kong-kyun; Auster, Richard
1969-01-01
A method of incentive reimbursement for health care institutions is described that is designed to stimulate the providers' efficiency. The two main features are: (1) reimbursement based on a weighted average of actual cost and mean cost plus or minus an appropriate number of standard deviations; (2) output defined as episodes of illness given adequate treatment instead of days of hospitalization. It is suggested that despite the operational difficulties involved in a method of payment based on an output approach, the flexibility incorporated into the determination of reimbursement by use of the properties of a normal frequency distribution would make the system workable. PMID:5349002
Serebrianyĭ, A M; Akleev, A V; Aleshchenko, A V; Antoshchina, M M; Kudriashova, O V; Riabchenko, N I; Semenova, L P; Pelevina, I I
2011-01-01
By the micronucleus (MN) assay with cytochalasin B cytokinesis block, the mean frequency of blood lymphocytes with MN was determined in 76 Moscow inhabitants, 35 people from Obninsk, and 122 from the Chelyabinsk region. In contrast to the distribution of individuals by spontaneous frequency of cells with aberrations, which was shown to be binomial (Kusnetzov et al., 1980), the distribution of individuals by spontaneous frequency of cells with MN in all three cohorts can be regarded as log-normal (chi2 test). The distributions in the pooled cohort (Moscow and Obninsk inhabitants) and in the combined cohort of all subjects must, with high reliability, also be regarded as log-normal (0.70 and 0.86, respectively), but cannot be regarded as Poisson, binomial or normal. Taking into account that a log-normal distribution of children by spontaneous frequency of lymphocytes with MN was also observed in a survey of 473 children from different kindergartens in Moscow, we conclude that log-normality is a regularity inherent in this type of lymphocyte genome damage. In contrast, the distribution of individuals by the frequency of lymphocytes with MN induced by in vitro irradiation must in most cases be regarded as normal. This pattern suggests that the appearance of damage (genomic instability) in a single lymphocyte of an individual increases the probability of damage appearing in other lymphocytes. We propose that damaged lymphocyte progenitor stem cells exchange information with undamaged cells, a bystander-effect-type process. It can also be supposed that damage is transmitted to daughter cells at the time of stem cell division.